Telikepalli Kavitha, Kazuhisa Makino, Ildikó Schlotter, Yu Yokoi
2023-10-30T11:29:02Z
http://arxiv.org/abs/2310.19455v1
# Arborescences, Colorful Forests, and Popularity

###### Abstract

Our input is a directed, rooted graph \(G=(V\cup\{r\},E)\) where each vertex in \(V\) has a partial order preference over its incoming edges. The preferences of a vertex extend naturally to preferences over arborescences rooted at \(r\). We seek a _popular_ arborescence in \(G\), i.e., one for which there is no "more popular" arborescence. Popular arborescences have applications in liquid democracy or collective decision making; however, they need not exist in every input instance. The popular arborescence problem is to decide if a given input instance admits a popular arborescence or not. We show a polynomial-time algorithm for this problem, whose computational complexity was not known previously. Our algorithm is combinatorial, and can be regarded as a primal-dual algorithm. It searches for an arborescence along with its dual certificate, a chain of subsets of \(E\), witnessing its popularity. In fact, our algorithm solves the more general popular common base problem in the intersection of two matroids, where one matroid is the partition matroid defined by any partition \(E=\biguplus_{v\in V}\delta(v)\) and the other is an arbitrary matroid \(M=(E,\mathcal{I})\) of rank \(|V|\), with each \(v\in V\) having a partial order over elements in \(\delta(v)\). We extend our algorithm to the case with forced or forbidden edges. We also study the related popular **colorful forest** (or more generally, the **popular common independent set**) problem where edges are partitioned into color classes, and the task is to find a colorful forest that is popular within the set of all colorful forests. For the case with weak rankings, we formulate the popular colorful forest polytope, and thus show that a minimum-cost popular colorful forest can be computed efficiently. By contrast, we prove that it is NP-hard to compute a minimum-cost popular arborescence, even when rankings are strict.

## 1 Introduction

Let \(G=(V\cup\{r\},E)\) be a directed graph where the vertex \(r\) (called the root) has no incoming edge. Every vertex \(v\in V\) has a partial ordering \(\succ_{v}\) (i.e., a preference relation that is irreflexive, antisymmetric and transitive) over its incoming edges, as in this example from [21] where preference orders are strict rankings. Here \(V=\{a,b,c,d\}\), and each of these four vertices has a strict ranking over its incoming edges. We are interested in computing an _optimal arborescence_ rooted at \(r\), where an arborescence is an acyclic subgraph of \(G\) in which each vertex \(v\in V\) has a unique incoming edge. Our notion of optimality is a function of the preferences \((\succ_{v})_{v\in V}\) of vertices for their incoming edges. Given any pair of arborescences \(A\) and \(A^{\prime}\) in \(G\), we say that \(v\in V\) prefers \(A\) to \(A^{\prime}\) if \(v\) prefers its incoming edge in \(A\) to its incoming edge in \(A^{\prime}\), i.e., \(v\) prefers \(A\) to \(A^{\prime}\) if \(A(v)\succ_{v}A^{\prime}(v)\) where \(A(v)\) (resp., \(A^{\prime}(v)\)) is \(v\)'s incoming edge in \(A\) (resp., \(A^{\prime}\)). Let \(\phi(A,A^{\prime})\) be the number of vertices that prefer \(A\) to \(A^{\prime}\). We say that \(A\) is _more popular than_ \(A^{\prime}\) if \(\phi(A,A^{\prime})>\phi(A^{\prime},A)\).
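For concreteness, the comparison \(\phi\) and the "more popular than" relation can be phrased as a few lines of Python. This is only an illustrative sketch: the encoding of preferences (a set of strictly-preferred ordered pairs of incoming edges per vertex), the encoding of arborescences (a map from each vertex to its incoming edge), and the brute-force enumeration are assumptions made here for illustration and are feasible only on tiny instances.

```python
from itertools import product

# Illustrative encodings: an edge is a pair (u, v); an arborescence maps each
# vertex v in V to its incoming edge A(v); succ[v] is the set of ordered pairs
# (e, f) of incoming edges of v with e strictly preferred to f.

def prefers(v, e, f, succ):
    """True if vertex v strictly prefers incoming edge e to incoming edge f."""
    return (e, f) in succ.get(v, set())

def phi(A, B, succ):
    """phi(A, B): number of vertices that prefer arborescence A to arborescence B."""
    return sum(1 for v in A if prefers(v, A[v], B[v], succ))

def is_arborescence(choice, root):
    """choice maps each vertex to one incoming edge; it is an arborescence
    iff following parent pointers from every vertex reaches the root."""
    for v in choice:
        seen, cur = set(), v
        while cur != root:
            if cur in seen:
                return False          # ran into a cycle
            seen.add(cur)
            cur = choice[cur][0]      # move to the parent of cur
    return True

def is_popular(A, delta, succ, root):
    """Brute force: A is popular iff no arborescence B has phi(B,A) > phi(A,B)."""
    vertices = list(delta)            # delta[v] lists the incoming edges of v
    for combo in product(*(delta[v] for v in vertices)):
        B = dict(zip(vertices, combo))
        if is_arborescence(B, root) and phi(B, A, succ) > phi(A, B, succ):
            return False
    return True
```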
**Definition 1.1**.: _An arborescence \(A\) is popular if \(\phi(A,A^{\prime})\geq\phi(A^{\prime},A)\) for all arborescences \(A^{\prime}\)._ Our notion of optimality is popularity, in other words, we seek a popular arborescence \(A\) in \(G\). So there is _no_ arborescence more popular than \(A\), thus \(A\) is maximal under the "more popular than" relation. The "more popular than" relation is not transitive and popular arborescences need not always exist. Consider the example from [21] illustrated above. The arborescence \(A=\{(r,a),(a,b),(a,c),(c,d)\}\) is not popular, since the arborescence \(A^{\prime}=\{(r,d),(d,c),(c,a),(a,b)\}\) is more popular. This is because \(a\) and \(c\) prefer \(A^{\prime}\) to \(A\), while \(d\) prefers \(A\) to \(A^{\prime}\), and \(b\) is indifferent between \(A\) and \(A^{\prime}\). We can similarly obtain an arborescence \(A^{\prime\prime}=\{(r,b),(b,a),(b,d),(d,c)\}\) more popular than \(A^{\prime}\). It is easy to check that for any arborescence here, there is a more popular arborescence. Therefore this instance has no popular arborescence. Consider the above instance without the edge \((r,d)\). Vertex preferences are the same as in the earlier instance, except that vertex \(d\) has no third-choice edge. It can be shown that this instance has two popular arborescences: \(A=\{(r,a),(a,b),(a,c),(c,d)\}\) and \(A^{\prime\prime\prime}=\{(r,b),(b,a),(a,c),(c,d)\}\) (Appendix A has more details). **The popular arborescence problem.** Given a directed graph \(G\) as described above, the popular arborescence problem is to determine if \(G\) admits a popular arborescence or not, and to find one, if so. The computational complexity of the popular arborescence problem was posed as an open problem at the Emlektabla workshop [24] in 2019 and the problem has remained open till now. Thus it is an intriguing open problem--aside from its mathematical interest and curiosity, it has applications in _liquid democracy_, which is a voting scheme that allows a voter to delegate its vote to another voter.1 Footnote 1: A vertex \(v\) delegating its vote to \(u\) should be represented as the edge \((v,u)\); however as said in [21], it will be more convenient to denote this delegation by \((u,v)\) so as to be consistent with downward edges in an arborescence. **Popular branchings.** A special case of the popular arborescence problem is the popular branching problem. A branching is a directed forest in a digraph \(G=(V,E)\) where each vertex has at most one incoming edge. Any branching in \(G\) can be viewed as an arborescence in an auxiliary graph obtained by augmenting \(G\) with a new vertex \(r\) as the root and adding the edge \((r,v)\) for each \(v\in V\) as the least-preferred incoming edge of \(v\). So the problem of deciding whether the given instance \(G\) admits a popular branching or not reduces to the problem of deciding whether this auxiliary instance admits a popular arborescence or not. An efficient algorithm for this special case of the popular arborescence problem (where the root \(r\) is an in-neighbor of every \(v\in V\)) was given in [21]. The applications of popular branchings in liquid democracy were discussed in [21]--as mentioned above, each voter can delegate its vote to another voter; however delegation cycles are forbidden. A popular branching \(B\) represents a cycle-free delegation process that is stable, and every root in \(B\) casts a weighted vote on behalf of all its descendants. 
As mentioned in [21], liquid democracy has been used for internal decision making at Google [18] and political parties such as the German _Pirate Party_ or the Swedish party _Demoex_. We refer to [30] for more details. However, in many real-world applications, not all agents would be willing to be representatives, i.e., to be roots in a branching. Thus it cannot be assumed that _every_ vertex is an out-neighbor of \(r\); only those agents who are willing to be representatives are out-neighbors of \(r\) in our instance. Thus the popular arborescence problem has to be solved in a general digraph \(G=(V\cup\{r\},E)\) rather than in one where every vertex is an out-neighbor of \(r\). As mentioned earlier, the computational complexity of the popular arborescence problem was open till now. We show the following result.

**Theorem 1.1**.: _Let \(G=(V\cup\{r\},E)\) be a directed graph where each \(v\in V\) has a partial order over its incoming edges. There is a polynomial-time algorithm to solve the popular arborescence problem in \(G\)._

**Popular matchings and assignments.** The notion of popularity has been extensively studied in the domain of bipartite matchings where vertices on one side of the graph have weak rankings (i.e., linear preference order with possible ties) over their neighbors. The popular matching problem is to decide if such a bipartite graph admits a _popular matching_, i.e., a matching \(M\) such that there is no matching more popular than \(M\). An efficient algorithm for the popular matching problem was given almost 20 years ago [1]. Very recently (in 2022), the popular assignment problem was considered [20]. What is sought in this problem is a perfect matching that is popular within the set of perfect matchings--so the cardinality of the matching is more important than popularity here. It is easy to see that the popular assignment problem is a generalization of the popular matching problem (a simple reduction from the popular matching problem to the popular assignment problem can be shown by adding some dummy vertices). An efficient algorithm for the popular assignment problem was given in [20].

**Popular common base problem.** Observe that the popular arborescence and popular assignment problems are special cases of the popular common base problem in the intersection of two matroids, where one matroid is the partition matroid defined by any partition \(E=\bigcup_{v\in V}\delta(v)\) and the other is an arbitrary matroid \(M=(E,\mathcal{I})\) of rank \(|V|\), and each \(v\in V\) has a partial order \(\succ_{v}\) over elements in \(\delta(v)\).

* For any pair of common bases (i.e., common maximal independent sets) \(I\) and \(I^{\prime}\) in the matroid intersection, we say that \(v\in V\) prefers \(I\) to \(I^{\prime}\) if \(v\) prefers the element in \(I\cap\delta(v)\) to the element in \(I^{\prime}\cap\delta(v)\), i.e., \(e\succ_{v}f\) where \(I\cap\delta(v)=\{e\}\) and \(I^{\prime}\cap\delta(v)=\{f\}\). Let \(\phi(I,I^{\prime})\) be the number of vertices in \(V\) that prefer \(I\) to \(I^{\prime}\). The set \(I\) is popular within the set of common bases if \(\phi(I,I^{\prime})\geq\phi(I^{\prime},I)\) for all common bases \(I^{\prime}\).
Arborescences are the common bases in the intersection of a partition matroid with a graphic matroid (for any edge set \(I\subseteq E\), \(I\in\mathcal{I}\) if and only if \(I\) has no cycle in the underlying undirected graph) while assignments are common bases in the intersection of two partition matroids. In fact, our algorithm and the proof of correctness for Theorem 1.1 work for the general popular common base problem.

**Theorem 1.2**.: _A popular common base in the intersection of a partition matroid on \(E=\bigcup_{v\in V}\delta(v)\) with any matroid \(M=(E,\mathcal{I})\) of rank \(|V|\) can be computed in polynomial time._

Interestingly, the popular common independent set problem which asks for a common independent set that is popular in the set of all common independent sets (of all sizes) in the matroid intersection can be reduced to the popular common base problem (see Section 4). Therefore, the following fact is obtained as a corollary to Theorem 1.2.

**Corollary 1.1**.: _A popular common independent set in the intersection of a partition matroid on \(E=\bigcup_{v\in V}\delta(v)\) with any matroid \(M=(E,\mathcal{I})\) can be computed in polynomial time._

All of the following problems fall in the framework of a popular common base (or common independent set) in the intersection of a partition matroid with another matroid:

1. Popular matchings [1].
2. Popular assignments [20].
3. Popular branchings [21].
4. Popular matchings with matroid constraints2 [19].

Footnote 2: This problem asks for a popular many-to-one matching in a bipartite graph \(G=(A\cup B,E)\) where vertices in \(A\) have weak rankings and the vertices that get matched to each \(b\in B\) must form an independent set in a matroid \(M_{b}\).

Since Corollary 1.1 holds for partial order preferences, it generalizes the tractability result in [19] which assumes that preferences are weak rankings (note that the results in [19] are based on the paper [1], which in turn strongly relies on weak rankings). There are other problems that fall in this framework, e.g., the popular colorful forest problem and the popular colorful spanning tree problem--these are natural generalizations of the popular branching problem and popular arborescence problem, respectively. The popular colorful forest problem and the popular colorful spanning tree problem are new problems introduced in this paper.

**Popular colorful forests and popular colorful spanning trees.** The input here is an undirected graph \(G\) where each edge has a color in \(\{1,\ldots,n\}\). A forest \(F\) is _colorful_ if each edge in \(F\) has a distinct color. Colorful forests are the common independent sets of the partition matroid defined by color classes and the graphic matroid of \(G\). For each \(i\in\{1,\ldots,n\}\), we assume there is an agent \(i\) with a partial order \(\succ_{i}\) over color \(i\) edges. Agent \(i\) prefers forest \(F\) to forest \(F^{\prime}\) if either (i) \(F\) contains an edge colored \(i\) while \(F^{\prime}\) has no edge colored \(i\) or (ii) both \(F\) and \(F^{\prime}\) contain color \(i\) edges and \(i\) prefers the color \(i\) edge in \(F\) to the color \(i\) edge in \(F^{\prime}\). A colorful forest \(F\) is popular if \(\phi(F,F^{\prime})\geq\phi(F^{\prime},F)\) for all colorful forests \(F^{\prime}\), where \(\phi(F,F^{\prime})\) is the number of agents that prefer \(F\) to \(F^{\prime}\). The popular colorful forest problem is to decide if a given graph \(G\) admits a popular colorful forest or not, and to find one, if so.
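Colorful forests are exactly the common independent sets of the two matroids just described, and membership is easy to test. The following Python sketch uses an illustrative edge encoding \((u,w,\text{color})\) that is not fixed by the text: it checks the partition-matroid condition with a set of used colors, the graphic-matroid condition with union-find, and also spells out the agent comparison \(\phi(F,F^{\prime})\) for colorful forests.

```python
# Illustrative encoding: an edge is a triple (u, w, color); succ[i] is the set of
# ordered pairs (e, f) of color-i edges with e strictly preferred to f by agent i.

def is_colorful_forest(edges):
    """Common independent set test: every color used at most once (partition
    matroid) and no cycle in the underlying undirected graph (graphic matroid)."""
    colors = [c for (_, _, c) in edges]
    if len(colors) != len(set(colors)):
        return False                        # some color class is used twice
    parent = {}                             # union-find over the vertices
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, w, _ in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False                    # this edge would close a cycle
        parent[ru] = rw
    return True

def phi_forest(F, Fprime, succ, colors):
    """Number of agents (color classes) that prefer colorful forest F to Fprime."""
    def edge_of(forest, i):
        return next((e for e in forest if e[2] == i), None)
    count = 0
    for i in colors:
        e, f = edge_of(F, i), edge_of(Fprime, i)
        if e is not None and (f is None or (e, f) in succ.get(i, set())):
            count += 1                      # case (i) or case (ii) above
    return count
```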
The motivation here is to find an optimal _independent_ network (cycles are forbidden) with diversity, i.e., there is at most one edge from each color class--as before, our definition of optimality is popularity. The popular branching problem is a special case of the popular colorful forest problem where all edges entering vertex \(i\) are colored \(i\). A colorful spanning tree is a colorful forest with exactly one component. In the popular colorful spanning tree problem, _connectivity_ is more important than popularity, and we seek popularity within the set of colorful spanning trees rather than popularity within the set of all colorful forests. **Implications of Theorem 1.2**.: Along with the popular arborescence problem, our algorithm also solves the problems considered in [19, 20, 21, 1]; furthermore, it also solves the popular colorful forest and popular colorful spanning tree problems. The algorithms given in [19, 20, 21] for solving their respective problems are quite different from each other. Thus our algorithm provides a unified framework for all these problems and shows that there is one polynomial-time algorithm that solves all of them. In general, the matroid intersection need not admit common bases, and in such a case, an alternative is a largest common independent set that is popular among all largest common independent sets. This problem can be easily reduced to the popular common base problem (see Appendix C). Furthermore, along with some simple reductions, we can use our popular common base algorithm to find a popular solution under certain constraints. For example, we can find a common independent set that is popular subject to a size constraint (if a solution exists). We can further solve the problem under a category-wise size constraint: consider a setting where the set \(V\) of voters is partitioned into categories, and for each category, there are lower and upper bounds on the number of voters who (roughly speaking) have an element in the chosen independent set belonging to them (see Appendix C). In the liquid democracy application mentioned earlier, this translates to setting lower and upper bounds on the number of representatives taken from each category so as to ensure that there is diversity among representatives. **Popular common independent set polytope.** If preferences are weak rankings, then we also give a formulation of an extension of the _popular common independent set polytope_, i.e., the convex hull of incidence vectors of popular common independent sets in our matroid intersection. **Theorem 1.3**.: _If preferences are weak rankings, the popular common independent set polytope is a projection of a face of the matroid intersection polytope._ There are an exponential number of constraints in this formulation, however it admits an efficient separation oracle. As a consequence, when there is a function \(\mathsf{cost}:E\to\mathbb{R}\), a min-cost popular common independent set can be computed in polynomial time by optimizing over this polytope, assuming that preferences are weak rankings. Unfortunately, such a result does not hold for the min-cost popular arborescence problem. 
**Theorem 1.4**.: _Given an instance \(G=(V\cup\{r\},E)\) of the popular arborescence problem where each vertex has a strict ranking over its incoming edges along with a function \(\mathsf{cost}:E\to\{0,1,\infty\}\), it is \(\mathsf{NP}\)-hard to compute a min-cost popular arborescence in \(G\)._ Nevertheless, finding a popular arborescence with forced/forbidden edges in an input instance with partial order preferences is polynomial-time solvable. This result allows us to recognize in polynomial time all those edges that are present in every popular arborescence and all those edges that are present in _no_ popular arborescence. **Theorem 1.5**.: _For any instance \(G=(V\cup\{r\},E)\) of the popular arborescence problem with a set \(E^{+}\subseteq E\) of forced edges and a set \(E^{-}\subseteq E\) of forbidden edges, there is a polynomial-time algorithm to decide if there is a popular arborescence \(A\) with \(E^{+}\subseteq A\) and \(E^{-}\cap A=\emptyset\) and to find one, if so._ In instances where a popular arborescence does not exist, we could relax popularity to _near-popularity_ or "low unpopularity". A standard measure of unpopularity is the _unpopularity margin_[26], defined for any arborescence \(A\) as \(\mu(A)=\max_{A^{\prime}}\phi(A^{\prime},A)-\phi(A,A^{\prime})\) where the maximum is taken over all arborescences \(A^{\prime}\). An arborescence \(A\) is popular if and only if \(\mu(A)=0\). Unfortunately, finding an arborescence with minimum unpopularity margin is \(\mathsf{NP}\)-hard. **Theorem 1.6**.: _Given an instance \(G=(V\cup\{r\},E)\) of the popular arborescence problem where each vertex has a strict ranking over its incoming edges, together with an integer \(k\), it is \(\mathsf{NP}\)-complete to decide whether \(G\) contains an arborescence with unpopularity margin at most \(k\)._ ### Background The notion of popularity was introduced by Gardenfors [15] in 1975 in bipartite graphs with two-sided strict preferences. In this model every stable matching [14] is popular, thus popular matchings always exist in this setting. When preferences are _one-sided_, popular matchings need not always exist. This is not very surprising given that popular solutions correspond to (weak) Condorcet winners [6, 27] and it is well-known in social choice theory that such a winner need not exist. For the case when preferences are weak rankings, a combinatorial characterization of popular matchings was given in [1] and this yielded an efficient algorithm to solve the popular matching problem in this case. Note that the characterization in [1] does not generalize to partial order preferences, as argued in [22]. Several extensions of the popular matching problem have been considered such as random popular matchings [25], weighted voters [28], capacitated objects [32], popular mixed matchings [23], and popularity with matroid constraints [19]. We refer to [7] for a survey on results in popular matchings. Popular spanning trees were studied in [8, 9, 10] where the incentive was to find a "socially best" spanning tree. However, in contrast to the popular colorful spanning tree problem, edges have no colors in their model and voters have rankings over the entire edge set. Many different ways to compare a pair of trees were studied here, and most of these led to hardness results. 
Popular branchings, i.e., popular directed forests, in a directed graph (where each vertex has preferences as a partial order over its incoming edges) were studied in [21] where a polynomial-time algorithm was given for the popular branching problem. When preferences are weak rankings, polynomial-time algorithms for the min-cost popular branching problem and the \(k\)-unpopularity margin branching problem were shown in [21]; however these problems were shown to be NP-hard for partial order preferences. The popular branching problem where each vertex (i.e., voter) has a weight was considered in [29]. The popular assignment algorithm from [20] solves the popular maximum matching problem in a bipartite graph, and works for partial order preferences. It was also shown in [20] that the min-cost popular assignment problem is NP-hard, even for strict rankings. Many combinatorial optimization problems can be expressed as (largest) common independent sets in the intersection of two matroids. Interestingly, constraining one of the two matroids in the matroid intersection to be a partition matroid is not really a restriction, because any matroid intersection can be reduced to the case where one matroid is a partition matroid (see [12, claims 104-106]). We refer to [16, 31] for notes on matroid intersection and for the formulation of the matroid intersection polytope. ### An overview of our algorithm For an arborescence \(A\), we can naturally define a weight function \(\mathsf{wt}_{A}:E\to\{-1,0,1\}\) such that for any arborescence \(A^{\prime}\) we have \(\mathsf{wt}_{A}(A^{\prime})=\phi(A^{\prime},A)-\phi(A,A^{\prime})\). Then a popular arborescence \(A\) is a max-weight arborescence in \(G=(V\cup\{r\},E)\) with this function \(\mathsf{wt}_{A}\). Therefore, the popular arborescence problem is the problem of finding \(A\in\mathcal{A}_{G}\) such that \(\max_{A^{\prime}\in\mathcal{A}_{G}}\mathsf{wt}_{A}(A^{\prime})=\mathsf{wt}_{ A}(A)=0\) where \(\mathcal{A}_{G}\) is the set of all arborescences in \(G\). Thus a popular arborescence \(A\) is an optimal solution to the max-weight arborescence LP with edge weights given by \(\mathsf{wt}_{A}\). #### Dual certificates We show that every popular arborescence \(A\) has a dual certificate with a special structure; this corresponds to a _chain_\(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) of subsets of \(E\) with \(\emptyset\subsetneq C_{1}\subsetneq\cdots\subsetneq C_{p}=E\) and \(\operatorname{span}(A\cap C_{i})=C_{i}\) for all \(i\).3 Our algorithm to compute a popular arborescence is a search for such a chain \(\mathcal{C}\) and arborescence \(A\). At a high level, this method is similar to the approach used in [20] for popular assignment, however our dual certificates are more complex than those in [20], and hence the steps in our algorithm (and its proof of correctness) become much more challenging. Footnote 3: In the arborescence case, the set \(\operatorname{span}(A\cap C_{i})\) is defined as \((A\cap C_{i})\cup\{e\in E:(A\cap C_{i})+e\text{ contains a cycle}\,\}\). Given a chain \(\mathcal{C}\) of subsets of \(E\), there is a polynomial-time algorithm to check if \(\mathcal{C}\) corresponds to a dual certificate for some popular arborescence. It follows from dual feasibility and complementary slackness that \(\mathcal{C}\) is a dual certificate if and only if a certain subgraph \(G_{\mathcal{C}}=(V\cup\{r\},E(\mathcal{C}))\) admits an arborescence \(A\) such that \(\operatorname{span}(A\cap C_{i})=C_{i}\) for all \(C_{i}\in\mathcal{C}\). 
If such an arborescence \(A\) exists in \(G_{\mathcal{C}}\), then it is easy to show that \(A\) is a popular arborescence in \(G\) with \(\mathcal{C}\) as its dual certificate. If \(G_{\mathcal{C}}\) does not admit such an arborescence, then we need to update \(\mathcal{C}\). Since updating \(\mathcal{C}\) changes \(E(\mathcal{C})\), we now seek an arborescence \(A\) in the new graph \(G_{\mathcal{C}}\) such that \(\operatorname{span}(A\cap C_{i})=C_{i}\) for all \(i\). If such an \(A\) does not exist, then \(\mathcal{C}\) is updated again. Note that updating \(\mathcal{C}\) may increase \(|\mathcal{C}|\). When \(|\mathcal{C}|\) becomes larger than \(|V|\), we claim that \(G\) has _no_ popular arborescence. Among other ideas, our technical novelty lies in the proof of this claim that is based on the strong exchange property of matroids. **Matroid Intersection.** Our algorithm holds in the generality of matroid intersection (where one of the matroids is a partition matroid); dual certificates for popular common bases are exactly the same, i.e., chains that are described above. We also show that a popular common independent set has a dual certificate \(\mathcal{C}=\{C,E\}\) of length at most \(2\). This leads to the polyhedral result given in Theorem 1.3. Our algorithm is quite different from the popular branching algorithm [21] that (loosely speaking) first finds a maximum branching on _best_ edges and then augments this branching with _second best_ edges entering certain vertices. Indeed, as seen in Theorem 1.3, popular branchings or popular common independent sets have a significantly simpler structure than popular common bases--the latter seem far tougher to characterize and analyze. Pleasingly, as we show here, there is a clean and compact algorithm to solve the popular common base problem (see Algorithm 1). For the sake of readability, we will describe our results for the popular common base problem in terms of the popular arborescence problem and our results for the popular common independent set problem in terms of the popular colorful forest problem. ### Organization of the paper. The rest of the paper is organized as follows. Section 2 describes dual certificates for popular arborescences. Section 3 presents the popular arborescence algorithm and its proof of correctness. In Section 4, we discuss popular colorful forests and their polytope. Section 6 provides the algorithm for the popular arborescence problem with forced/forbidden edges. Our hardness results are proved in Sections 5 and 7. Appendices A, B, and C respectively present examples of executions of our algorithm, omitted proofs, and extensions and related results. ## 2 Dual Certificates In this section we show that every popular arborescence has a special dual certificate--this will be crucial in designing our algorithm in Section 3. Our input is a directed graph \(G=(V\cup\{r\},E)\) where the root vertex \(r\) has no incoming edge, and every vertex \(v\in V\) has a partial order \(\succ_{v}\) over its set of incoming edges, denoted by \(\delta(v)\). For edges \(e,f\in\delta(v)\), we write \(e\sim_{v}f\) to denote that \(v\) is indifferent between \(e\) and \(f\), i.e., \(e\not\succ_{v}f\) and \(f\not\succ_{v}e\). Given an arborescence \(A\), there is a simple method (as shown in [21]) to check if \(A\) is popular or not. We need to check that \(\phi(A,A^{\prime})\geq\phi(A^{\prime},A)\) for all arborescences \(A^{\prime}\) in \(G\). For this, we will use the following function \(\mathsf{wt}_{A}:E\to\{-1,0,1\}\). 
For any \(v\in V\), let \(A(v)\) be the unique edge in \(A\cap\delta(v)\). For any \(v\in V\) and \(e\in\delta(v)\), let \[\mathsf{wt}_{A}(e)=\begin{cases}\quad 1&\text{if $e\succ_{v}A(v)$}\quad(v \text{ prefers $e$ to $A(v)$});\\ \quad 0&\text{if $e\sim_{v}A(v)$}\quad(v\text{ is indifferent between $e$ and $A(v)$});\\ -1&\text{if $e\prec_{v}A(v)$}\quad(v\text{ prefers $A(v)$ to $e$}).\end{cases}\] It immediately follows from the definition of \(\mathsf{wt}_{A}\) that we have \(\mathsf{wt}_{A}(A^{\prime})=\phi(A^{\prime},A)-\phi(A,A^{\prime})\) for any arborescence \(A^{\prime}\) in \(G\). Thus \(A\) is popular if and only if every arborescence in \(G\) has weight at most \(0\), where edge weights are given by \(\mathsf{wt}_{A}\). Consider the linear program LP1 below, together with its dual LP2. The constraints of LP1 describe the face of the matroid intersection polytope corresponding to common bases. Recall that this is the intersection of the partition matroid on \(E=\biguplus_{v\in V}\delta(v)\) with the graphic matroid \(M=(E,\mathcal{I})\) of \(G\), whose rank is \(|V|\). Here, \(\operatorname{rank}:2^{E}\to\mathbb{Z}_{+}\) is the rank function of \((E,\mathcal{I})\), i.e., for any \(S\subseteq E\), the value of \(\operatorname{rank}(S)\) is the maximum size of an acyclic subset of \(S\) in the graph \(G\).

\[\text{(LP1)}\qquad\max\ \sum_{e\in E}\mathsf{wt}_{A}(e)\cdot x_{e}\]
\[\text{s.t.}\quad\sum_{e\in\delta(v)}x_{e}\,=\,1\quad\forall\,v\in V,\qquad\sum_{e\in S}x_{e}\,\leq\,\operatorname{rank}(S)\quad\forall\,S\subseteq E,\qquad x_{e}\,\geq\,0\quad\forall\,e\in E.\]

\[\text{(LP2)}\qquad\min\ \sum_{S\subseteq E}\operatorname{rank}(S)\cdot y_{S}+\sum_{v\in V}\alpha_{v}\]
\[\text{s.t.}\quad\sum_{S:\,e\in S}y_{S}+\alpha_{v}\,\geq\,\mathsf{wt}_{A}(e)\quad\forall\,v\in V,\ \forall\,e\in\delta(v),\qquad y_{S}\,\geq\,0\quad\forall\,S\subseteq E.\]

The feasible region of LP1 is the arborescence polytope of \(G\). Hence LP1 is the max-weight arborescence LP in \(G\) with edge weights given by \(\mathsf{wt}_{A}\). The linear program LP2 is the dual LP in variables \(y_{S}\) and \(\alpha_{v}\) where \(S\subseteq E\) and \(v\in V\). The arborescence \(A\) is popular if and only if the optimal value of LP1 is at most 0, more precisely, if the optimal value is exactly 0, since \(\mathsf{wt}_{A}(A)=0\). Equivalently, \(A\) is popular if and only if the optimal value of LP2 is 0. We will now show that LP2 has an optimal solution with some special properties. For a popular arborescence \(A\), a dual optimal solution that satisfies all these special properties (see Lemma 2.1) will be called a _dual certificate_ for \(A\). The function \(\operatorname{span}:2^{E}\to 2^{E}\) of a matroid \((E,\mathcal{I})\) is defined as follows: \[\operatorname{span}(S)=\{\,e\in E:\,\operatorname{rank}(S+e)=\operatorname{ rank}(S)\,\}\quad\text{ where }S\subseteq E.\] In particular, if \(S\in\mathcal{I}\), then \(\operatorname{span}(S)=S\cup\{\,e\in E:S+e\not\in\mathcal{I}\,\}\). A _chain_ \(\mathcal{C}\) of length \(p\) is a collection of \(p\) distinct subsets of \(E\) such that for each two distinct sets \(C,C^{\prime}\in\mathcal{C}\), we have either \(C\subsetneq C^{\prime}\) or \(C^{\prime}\subsetneq C\). That is, a chain has the form \(\mathcal{C}=\{C_{1},C_{2},\ldots,C_{p}\}\) where \(C_{1}\subsetneq C_{2}\subsetneq\cdots\subsetneq C_{p}\). Lemma 2.1 shows that LP2 always admits an optimal solution in the following special form. The proof is based on basic facts on matroid intersection and linear programming, and we postpone it to the end of Section 3.
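The weight function \(\mathsf{wt}_{A}\) and the graphic-matroid \(\operatorname{rank}\) and \(\operatorname{span}\) defined in this section are straightforward to compute; the Python sketch below is included only to fix ideas. It reuses the illustrative `prefers` encoding from the sketch in Section 1 (edges as pairs \((u,v)\)) and is not an optimized implementation.

```python
def wt(e, v, A, succ):
    """wt_A(e) for an edge e in delta(v), relative to arborescence A."""
    if prefers(v, e, A[v], succ):
        return 1         # v prefers e to A(v)
    if prefers(v, A[v], e, succ):
        return -1        # v prefers A(v) to e
    return 0             # v is indifferent between e and A(v)

def wt_of(Aprime, A, succ):
    """Weight of arborescence A' under wt_A; equals phi(A', A) - phi(A, A')."""
    return sum(wt(Aprime[v], v, A, succ) for v in Aprime)

def graphic_rank(S):
    """rank(S) for the graphic matroid: size of a largest acyclic subset of S.
    Edges are pairs (u, v); a spanning forest is grown greedily via union-find."""
    parent, r = {}, 0
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in S:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            r += 1
    return r

def graphic_span(S, E):
    """span(S) = { e in E : rank(S + e) = rank(S) }, directly from the definition."""
    base = graphic_rank(S)
    return {e for e in E if graphic_rank(set(S) | {e}) == base}
```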
Lemma 2.1.: _An arborescence \(A\) is popular if and only if there exists a feasible solution \((\vec{y},\vec{\alpha})\) to LP2 such that \(\sum_{S\subseteq E}\operatorname{rank}(S)\cdot y_{S}+\sum_{v\in V}\alpha_{v}=0\) and properties 1-4 are satisfied:_ 1. \(\vec{y}\) _is integral and its support_ \(\mathcal{C}:=\{\,S\subseteq E:y_{S}>0\,\}\) _is a chain._ 2. _Each_ \(C\in\mathcal{C}\) _satisfies_ \(\operatorname{span}(A\cap C)=C\)_._ 3. _Every element in_ \(\mathcal{C}\) _is nonempty, and the maximal element in_ \(\mathcal{C}\) _is_ \(E\)_._ 4. _For each_ \(C\in\mathcal{C}\)_, we have_ \(y_{C}=1\)_. For each_ \(v\in V\)_, we have_ \(\alpha_{v}=-|\,\{\,C\in\mathcal{C}:A(v)\in C\,\}\,|\)_._ For any chain \(\mathcal{C}\), we will now define a subset \(E(\mathcal{C})\) of \(E\) that will be used in our algorithm. The construction of \(E(\mathcal{C})\) is inspired by the construction of an analogous edge subset in the popular assignment algorithm [20]. For a chain \(\mathcal{C}=\{C_{1},C_{2},\cdots,C_{p}\}\) with \(\emptyset\subsetneq C_{1}\subsetneq\cdots\subsetneq C_{p}=E\), define \[\mathsf{lev}_{\mathcal{C}}(e) =\text{the index }i\text{ such that }e\in C_{i}\setminus C_{i-1} \text{for any }e\in E,\] \[\mathsf{lev}_{\mathcal{C}}^{*}(v) =\max\,\{\,\mathsf{lev}_{\mathcal{C}}(e):e\in\delta(v)\,\} \text{for any }v\in V,\] where we let \(C_{0}=\emptyset\). Thus every element in \(E\) has a _level_ in \(\{1,\ldots,p\}\) associated with it, which is the minimum subscript \(i\) such that \(e\in C_{i}\) (where \(C_{i}\in\mathcal{C}\)). Furthermore, each \(v\in V\) has a \(\mathsf{lev}_{\mathcal{C}}^{*}\)-value which is the highest level of any element in \(\delta(v)\). Define \(E(\mathcal{C})\subseteq E\) as follows. For each \(v\in V\), an element \(e\in\delta(v)\) belongs to \(E(\mathcal{C})\) if one of the following two conditions holds: * \(\mathsf{lev}_{\mathcal{C}}(e)=\mathsf{lev}_{\mathcal{C}}^{*}(v)\) and there is no element \(e^{\prime}\in\delta(v)\) such that \(\mathsf{lev}_{\mathcal{C}}(e^{\prime})=\mathsf{lev}_{\mathcal{C}}^{*}(v)\) and \(e^{\prime}\succ_{v}e\); * \(\mathsf{lev}_{\mathcal{C}}(e)=\mathsf{lev}_{\mathcal{C}}^{*}(v)-1\) and there is no element \(e^{\prime}\in\delta(v)\) such that \(\mathsf{lev}_{\mathcal{C}}(e^{\prime})=\mathsf{lev}_{\mathcal{C}}^{*}(v)-1\) and \(e^{\prime}\succ_{v}e\), and moreover, \(e\succ_{v}f\) for every \(f\in\delta(v)\) with \(\mathsf{lev}_{\mathcal{C}}(f)=\mathsf{lev}_{\mathcal{C}}^{*}(v)\). In other words, \(e\in\delta(v)\) belongs to \(E(\mathcal{C})\) if either (i) \(e\) is a maximal element in \(\delta(v)\) with respect to \(\succ_{v}\) among those in \(\mathsf{lev}_{\mathcal{C}}^{*}(v)\) or (ii) \(e\) is a maximal element in \(\delta(v)\) among those in \(\mathsf{lev}_{\mathcal{C}}^{*}(v)-1\) and \(v\) strictly prefers \(e\) to all elements in level \(\mathsf{lev}_{\mathcal{C}}^{*}(v)\). From Lemma 2.1, we obtain the following useful characterization of popular arborescences. Lemma 2.2.: _An arborescence \(A\) is popular if and only if there exists a chain \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) such that \(\emptyset\subsetneq C_{1}\subsetneq\cdots\subsetneq C_{p}=E\), \(A\subseteq E(\mathcal{C})\), and \(\operatorname{span}(A\cap C_{i})=C_{i}\) for all \(C_{i}\in\mathcal{C}\)._ The proof is given below. Recall that for a popular arborescence \(A\), we defined its _dual certificate_ as a dual optimal solution \((\vec{y},\vec{\alpha})\) to LP2 that satisfies properties 1-4 in Lemma 2.1. 
As shown in the proof of Lemma 2.2, we can obtain such a solution \((\vec{y},\vec{\alpha})\) from a chain satisfying the properties in Lemma 2.2. We therefore will also use the term _dual certificate_ to refer to a chain as described in Lemma 2.2. Proof of Lemma 2.2.: We first show the existence of a desired chain \(\mathcal{C}\) for a popular arborescence \(A\). Since \(A\) is popular, we know from Lemma 2.1 that there exists an optimal solution \((\vec{y},\vec{\alpha})\) to LP2 such that properties 1-4 hold, where \(\mathcal{C}\) is the support of \(y\). Since the properties \(\emptyset\subsetneq C_{1}\subsetneq\cdots\subsetneq C_{p}=E\) and \(\operatorname{span}(A\cap C_{i})=C_{i}\) (\(\forall C_{i}\in\mathcal{C}\)) directly follow from properties 3 and 2, respectively, it remains to show that \(A\subseteq E(\mathcal{C})\). Since \((\vec{y},\vec{\alpha})\) is a feasible solution of LP2, we have \(\sum_{S:e\in S}y_{S}+\alpha_{v}\geq\mathsf{wt}_{A}(e)\) for every \(e\in\delta(v)\) with \(v\in V\). By property 4, the left hand side can be expressed as \[|\,\{\,C_{i}\in\mathcal{C}:e\in C_{i}\,\}\,|-|\,\{\,C_{i}\in\mathcal{C}:A(v) \in C_{i}\,\}\,|=(p-\mathsf{lev}_{\mathcal{C}}(e)+1)-(p-\mathsf{lev}_{ \mathcal{C}}(A(v))+1)=\mathsf{lev}_{\mathcal{C}}(A(v))-\mathsf{lev}_{\mathcal{ C}}(e).\] Thus it is equivalent to the condition that for every \(e\in\delta(v)\): \[\mathsf{lev}_{\mathcal{C}}(A(v))-\mathsf{lev}_{\mathcal{C}}(e)\geq\mathsf{wt} _{A}(e)=\begin{cases}1&\text{if }e\succ_{v}A(v);\\ 0&\text{if }e\sim_{v}A(v);\\ -1&\text{if }e\prec_{v}A(v).\end{cases} \tag{2.1}\] In particular, this holds for an edge \(e^{\prime}\) with \(\mathsf{lev}_{\mathcal{C}}(e^{\prime})=\mathsf{lev}_{\mathcal{C}}^{*}(v)\), and hence we have \(\mathsf{lev}_{\mathcal{C}}(A(v))\geq\mathsf{lev}_{\mathcal{C}}^{*}(v)-1\). Since \(\mathsf{lev}_{\mathcal{C}}(A(v))\leq\mathsf{lev}_{\mathcal{C}}^{*}(v)\) by \(A(v)\in\delta(v)\), \(\mathsf{lev}_{\mathcal{C}}(A(v))\) is either \(\mathsf{lev}_{\mathcal{C}}^{*}(v)\) or \(\mathsf{lev}_{\mathcal{C}}^{*}(v)-1\). * If \(\mathsf{lev}_{\mathcal{C}}(A(v))=\mathsf{lev}_{\mathcal{C}}^{*}(v)\), then for any \(e\in\delta(v)\) with \(\mathsf{lev}_{\mathcal{C}}(e)=\mathsf{lev}_{\mathcal{C}}^{*}(v)\), the left hand side of (2.1) is 0, and hence it must be the case that either \(A(v)\succ_{v}e\) or \(A(v)\sim_{v}e\). Hence \(A(v)\) is a maximal element in \(\{\,e\in\delta(v):\mathsf{lev}_{\mathcal{C}}(e)=\mathsf{lev}_{\mathcal{C}}^{*} (v)\,\}\) with respect to \(\succ_{v}\). * If \(\mathsf{lev}_{\mathcal{C}}(A(v))=\mathsf{lev}_{\mathcal{C}}^{*}(v)-1\), then we can similarly show that \(A(v)\) is a maximal element in the set \(\{\,e\in\delta(v):\mathsf{lev}_{\mathcal{C}}(e)=\mathsf{lev}_{\mathcal{C}}^{* }(v)-1\,\}\) with respect to \(\succ_{v}\). Furthermore, in this case, for any \(e\in\delta(v)\) with \(\mathsf{lev}_{\mathcal{C}}(e)=\mathsf{lev}_{\mathcal{C}}^{*}(v)\), the left hand side of (2.1) is \(-1\), and hence \(A(v)\succ_{v}e\) must hold. Therefore, in either case, we have \(A(v)\in E(\mathcal{C})\), which implies that \(A\subseteq E(\mathcal{C})\). For the converse, suppose that \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) is a chain such that \(\emptyset\subsetneq C_{1}\subsetneq\cdots\subsetneq C_{p}=E\), \(A\subseteq E(\mathcal{C})\), and \(\operatorname{span}(A\cap C_{i})=C_{i}\) for all \(C_{i}\in\mathcal{C}\). Define \(\vec{y}\) by \(y_{C_{i}}=1\) for every \(C_{i}\in\mathcal{C}\) and \(y_{S}=0\) for all \(S\in 2^{S}\setminus\mathcal{C}\). 
We also define \(\vec{\alpha}\) by \(\alpha_{v}=-|\,\{\,C\in\mathcal{C}:A(v)\in C\,\}\,|\) for any \(v\in V\). Then \((\vec{y},\vec{\alpha})\) satisfies properties 1-4 given in Lemma 2.1, which also implies that the objective value is 0. Thus it is enough to show that \((\vec{y},\vec{\alpha})\) is a feasible solution to LP2, because it implies that \(A\) is a popular arborescence by Lemma 2.1. Observe that constraint (2.1) is satisfied for every \(v\in V\) and \(e\in\delta(v)\), which follows from \(A\subseteq E(\mathcal{C})\). Since it is equivalent to the constraint in LP2 for \(v\in V\) and \(e\in\delta(v)\), the proof is completed. \(\Box\)

## 3 Our Algorithm

We now present our main result. The popular arborescence algorithm seeks to construct an arborescence \(A\) along with its dual certificate \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\), which is a chain satisfying (i) \(\emptyset\subsetneq C_{1}\subsetneq\cdots\subsetneq C_{p}=E\), (ii) \(A\subseteq E(\mathcal{C})\), and (iii) \(\operatorname{span}(A\cap C_{i})=C_{i}\) for all \(C_{i}\in\mathcal{C}\).

* The existence of such a chain \(\mathcal{C}\) means that \(A\) is popular by Lemma 2.2.
* Since a popular arborescence need not always exist, the algorithm also needs to detect when a solution does not exist.

The algorithm starts with the chain \(\mathcal{C}=\{E\}\) and repeatedly updates it. It always maintains \(\mathcal{C}\) as a multichain, where a collection \(\mathcal{C}=\{C_{1},\cdots,C_{p}\}\) of indexed subsets of \(E\) is called a _multichain_ if \(C_{1}\subseteq\cdots\subseteq C_{p}\). Note that it is a chain if all the inclusions are strict. We will use the notations \(\mathsf{lev}_{\mathcal{C}}\), \(\mathsf{lev}_{\mathcal{C}}^{*}\), and \(E(\mathcal{C})\) also for multichains, which are defined in the same manner as for chains. During the algorithm, \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) is always a multichain with \(C_{p}=E\) and \(\operatorname{span}(C_{i})=C_{i}\) for all \(C_{i}\in\mathcal{C}\). Note that when \(\operatorname{span}(C_{i})=C_{i}\) holds, the condition (iii) for some \(A\) above is equivalent to \(|A\cap C_{i}|=\operatorname{rank}(C_{i})\). Furthermore, as explained later, any multichain can be modified to a chain that satisfies (i) while preserving the remaining conditions (ii) and (iii). Therefore, we can obtain a desired chain if \(|A\cap C_{i}|=\operatorname{rank}(C_{i})\) is attained for all \(C_{i}\in\mathcal{C}\) for some arborescence \(A\subseteq E(\mathcal{C})\) in the algorithm.

**Lex-maximal branching.** In order to determine the existence of an arborescence \(A\subseteq E(\mathcal{C})\) that satisfies \(|A\cap C_{i}|=\operatorname{rank}(C_{i})\) for all \(C_{i}\in\mathcal{C}\), the algorithm computes a _lex-maximal_ branching \(I\) in \(E(\mathcal{C})\). That is, \(I\) is a branching whose \(p\)-tuple \((|I\cap C_{1}|,\ldots,|I\cap C_{p}|)\) is lexicographically maximum among all branchings in \(E(\mathcal{C})\). If \((|I\cap C_{1}|,\ldots,|I\cap C_{p}|)=(\operatorname{rank}(C_{1}),\ldots, \operatorname{rank}(C_{p}))\), then we can show that \(I\) is a popular arborescence4; otherwise the multichain \(\mathcal{C}\) is updated. We describe the algorithm as Algorithm 1; recall that \(\operatorname{rank}(E)=|V|=n\).

Footnote 4: Observe that the branching \(I\) will be an _arborescence_ since \(|I\cap E|=|I\cap C_{p}|=\operatorname{rank}(C_{p})=\operatorname{rank}(E)=|V|\).

```
1: Initialize \(p=1\) and \(C_{1}=E\).   \(\triangleright\) Initially we set \(\mathcal{C}=\{E\}\).
2: while \(p\leq n\) do
3:     Compute the edge set \(E(\mathcal{C})\) from the current multichain \(\mathcal{C}\).
4:     Find a branching \(I\subseteq E(\mathcal{C})\) that lexicographically maximizes \((|I\cap C_{1}|,\ldots,|I\cap C_{p}|)\).
5:     if \(|I\cap C_{i}|=\operatorname{rank}(C_{i})\) for every \(i=1,\ldots,p\) then return \(I\).
6:     Let \(k\) be the minimum index such that \(|I\cap C_{k}|<\operatorname{rank}(C_{k})\).
7:     Update \(C_{k}\leftarrow\operatorname{span}(I\cap C_{k})\).
8:     if \(k=p\) then \(p\gets p+1\), \(C_{p}\gets E\), and \(\mathcal{C}\leftarrow\mathcal{C}\cup\{C_{p}\}\).
9: Return "\(G\) has no popular arborescence".
```

**Algorithm 1** The popular arborescence algorithm

We include some examples in Appendix A to illustrate the working of Algorithm 1 on different input instances. The following observation is important.

**Observation 3.1**: _During Algorithm 1, \(\mathcal{C}\) is always a multichain and \(\operatorname{span}(C_{i})=C_{i}\) for all \(C_{i}\in\mathcal{C}\)._

Proof.: When \(C_{k}\) is updated, it becomes smaller but the inclusion \(C_{k-1}\subseteq C_{k}\) is preserved. Indeed, since \(|I\cap C_{k-1}|=\operatorname{rank}(C_{k-1})\) by the choice of \(k\), we have \(C_{k-1}\subseteq\operatorname{span}(I\cap C_{k-1})\subseteq\operatorname{span}(I\cap C_{k})\), for the set \(C_{k}\) before the update. Hence the updated value for \(C_{k}\), i.e., \(\operatorname{span}(I\cap C_{k})\), is still a superset of \(C_{k-1}\), and thus \(\mathcal{C}\) remains a multichain. Since any \(C_{i}\in\mathcal{C}\) is defined in the form \(\operatorname{span}(X)\) for some \(X\subseteq E\) (note that \(E=\operatorname{span}(E)\)) and \(\operatorname{span}(\operatorname{span}(X))=\operatorname{span}(X)\) holds in general, we have \(\operatorname{span}(C_{i})=C_{i}\).

Line 4 can be implemented in polynomial time by a max-weight branching algorithm [11, 2, 5] and, in the more general case of the intersection of two matroids, by the weighted matroid intersection algorithm [13]. Hence Algorithm 1 can be implemented in polynomial time.

**Correctness of the algorithm.** Suppose that a branching \(I\) is returned by the algorithm. Then \(I\) is an arborescence (see Footnote 4) with \(I\subseteq E(\mathcal{C})\), where \(\mathcal{C}\) is the current multichain. This implies that \(I\subseteq E(\mathcal{C})\) and \(|I\cap C_{i}|=\operatorname{rank}(C_{i})\) for all \(C_{i}\in\mathcal{C}\), and the latter implies \(\operatorname{span}(I\cap C_{i})=C_{i}\) for all \(C_{i}\in\mathcal{C}\) by Observation 3.1. In order to prove that \(I\) is a popular arborescence, let us first prune the multichain \(\mathcal{C}\) to a chain \(\mathcal{C}^{\prime}\), i.e., \(\mathcal{C}^{\prime}\) contains a single occurrence of each \(C_{i}\in\mathcal{C}\); we will also remove any occurrence of \(\emptyset\) from \(\mathcal{C}^{\prime}\). Observe that \(E(\mathcal{C})\subseteq E(\mathcal{C}^{\prime})\): indeed, if \(C_{i}=C_{i+1}\) in \(\mathcal{C}\), then no element \(e\in E\) can have \(\operatorname{lev}_{\mathcal{C}}(e)=i+1\), and hence no element gets deleted from \(E(\mathcal{C})\) by pruning \(C_{i+1}\) from \(\mathcal{C}\). Thus \(I\subseteq E(\mathcal{C})\subseteq E(\mathcal{C}^{\prime})\).
This implies that \(\mathcal{C}^{\prime}=\{C^{\prime}_{1},\ldots,C^{\prime}_{p^{\prime}}\}\) satisfies \(\emptyset\subsetneq C^{\prime}_{1}\subsetneq\cdots\subsetneq C^{\prime}_{p^{ \prime}}=E\), \(I\subseteq E(\mathcal{C}^{\prime})\), and \(\operatorname{span}(I\cap C^{\prime}_{i})=C^{\prime}_{i}\) for all \(C^{\prime}_{i}\in\mathcal{C}^{\prime}\).5 Hence \(I\) is a popular arborescence by Lemma 2.2. Footnote 5: In fact, it will turn out that \(\mathcal{C}=\mathcal{C}^{\prime}\), i.e., the final \(\mathcal{C}\) obtained by the algorithm itself is a dual certificate of \(I\) if the algorithm returns an arborescence \(I\). This fact follows from Lemma 3.1 (with \(\mathcal{C}^{\prime}\) substituted for \(\mathcal{D}\)). We will now show that the algorithm always returns a popular arborescence, if \(G\) admits one. Let \(A\) be any popular arborescence in \(G\) and let \(\mathcal{D}=\{D_{1},\ldots,D_{q}\}\) be a dual certificate for \(A\). **Claim 3.1**: _We have \(q\leq n\) where \(|\mathcal{D}|=q\)._ Proof.: From the definition of dual certificate, we have \(\emptyset\subsetneq D_{1}\subsetneq\cdots\subsetneq D_{q}=E\) and \(\operatorname{span}(D_{i})=D_{i}\) for each \(D_{i}\). This implies \(0<\operatorname{rank}(D_{1})<\cdots<\operatorname{rank}(D_{q})\). Since \(\operatorname{rank}(D_{q})=\operatorname{rank}(E)=|V|\), we obtain \(q\leq|V|=n\). The following crucial lemma shows an invariant of the algorithm that holds for the multichain \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) constructed in the algorithm and a dual certificate \(\mathcal{D}=\{D_{1},\ldots,D_{q}\}\) of any popular arborescence \(A\). The proof will be given in this section. **Lemma 3.1**: _At any moment of Algorithm 1, \(p\leq q\) and \(D_{i}\subseteq C_{i}\) holds for \(i=1,\ldots,p\)._ If \(p=n+1\) occurs in Algorithm 1, then Lemma 3.1 implies \(q\geq n+1\). This contradicts Claim 3.1. Hence it has to be the case that \(G\) has no popular arborescence when \(p=n+1\). Thus assuming Lemma 3.1, the correctness of Algorithm 1 follows. Before we prove Lemma 3.1, we need the following claim on \(E(\mathcal{C})\) and \(E(\mathcal{D})\). **Claim 3.2**: _Assume \(p\leq q\) and \(D_{i}\subseteq C_{i}\) for \(i=1,\ldots,p\). For each \(e\in E\), if \(\mathsf{lev}_{\mathcal{C}}(e)=\mathsf{lev}_{\mathcal{D}}(e)\) and \(e\in E(\mathcal{D})\), then \(e\in E(\mathcal{C})\)._ Suppose for the sake of contradiction that \(e\) fulfills the conditions of the claim, but \(e\not\in E(\mathcal{C})\). Let \(e\in\delta(v)\). It follows from the definition of \(E(\mathcal{C})\) that there exists an element \(e^{\prime}\in\delta(v)\) such that one of the following three conditions holds: (a) \(\mathsf{lev}_{\mathcal{C}}(e^{\prime})\geq\mathsf{lev}_{\mathcal{C}}(e)+2\), (b) \(\mathsf{lev}_{\mathcal{C}}(e^{\prime})=\mathsf{lev}_{\mathcal{C}}(e)+1\) and \(e\not\succ_{v}e^{\prime}\), or (c) \(\mathsf{lev}_{\mathcal{C}}(e^{\prime})=\mathsf{lev}_{\mathcal{C}}(e)\) and \(e^{\prime}\succ_{v}e\). Because \(D_{i}\subseteq C_{i}\) for each \(i\in\{1,\ldots,p\}\), we have \(\mathsf{lev}_{\mathcal{D}}(e^{\prime})\geq\mathsf{lev}_{\mathcal{C}}(e^{\prime})\). Since \(\mathsf{lev}_{\mathcal{D}}(e)=\mathsf{lev}_{\mathcal{C}}(e)\), the existence of such an \(e^{\prime}\in\delta(v)\) implies \(e\not\in E(\mathcal{D})\), a contradiction. Thus we have \(e\in E(\mathcal{C})\). 
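Before turning to the proof of Lemma 3.1, the following Python sketch makes the control flow of Algorithm 1 and the rule defining \(E(\mathcal{C})\) from Section 2 concrete. It reuses the illustrative `prefers`, `graphic_rank`, and `graphic_span` helpers from the earlier sketches; the lexicographic maximization in line 4 of Algorithm 1 is left as an assumed oracle `lex_max` (implementable by a max-weight branching or weighted matroid intersection algorithm, as noted above), so this is a structural sketch rather than a full implementation.

```python
def lev(e, chain):
    """Level of e w.r.t. the (multi)chain: the smallest i (1-based) with e in C_i."""
    return next(i for i, C in enumerate(chain, start=1) if e in C)

def E_of_chain(delta, chain, succ):
    """The edge set E(C) defined in Section 2, computed vertex by vertex."""
    admissible = set()
    for v, edges in delta.items():
        top = max(lev(e, chain) for e in edges)                  # lev*_C(v)
        at_top = [e for e in edges if lev(e, chain) == top]
        below = [e for e in edges if lev(e, chain) == top - 1]
        for e in at_top:    # condition (i): maximal at level lev*_C(v)
            if not any(prefers(v, f, e, succ) for f in at_top):
                admissible.add(e)
        for e in below:     # condition (ii): maximal at level lev*_C(v)-1 and
            if (not any(prefers(v, f, e, succ) for f in below)        # strictly preferred
                    and all(prefers(v, e, f, succ) for f in at_top)): # to all top edges
                admissible.add(e)
    return admissible

def popular_arborescence(delta, succ, E, rank, span, lex_max):
    """Structural sketch of Algorithm 1; rank(S), span(S, ground) and
    lex_max(F, chain) are assumed oracles (e.g., graphic_rank / graphic_span
    above, plus a lex-maximal branching routine)."""
    n, ground = len(delta), set(E)
    chain = [set(E)]                                            # C = {E}, i.e., p = 1
    while len(chain) <= n:                                      # line 2
        I = lex_max(E_of_chain(delta, chain, succ), chain)      # lines 3-4
        if all(len(I & C) == rank(C) for C in chain):           # line 5
            return I
        k = min(i for i, C in enumerate(chain) if len(I & C) < rank(C))  # line 6
        chain[k] = span(I & chain[k], ground)                   # line 7
        if k == len(chain) - 1:                                 # line 8
            chain.append(set(E))
    return None                                                 # line 9
```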
The proof of Lemma 3.1 will use the following fact, known as the strong exchange property, that is satisfied by any matroid.6

Footnote 6: The original statement in [4] claims this property only for pairs of bases (maximal independent sets), but it is equivalent to Fact 3.1. Indeed, if we consider the \(\mathrm{rank}(E)\)-truncation of the direct sum of \((E,\mathcal{I})\) and a free matroid whose rank is \(\mathrm{rank}(E)\), then the axiom in [4] applied to this new matroid implies Fact 3.1 for \((E,\mathcal{I})\).

**Fact 3.1** (Brualdi [4]): _For any \(X,Y\in\mathcal{I}\) and \(e\in X\setminus Y\), if \(Y+e\not\in\mathcal{I}\), then there exists an element \(f\in Y\setminus X\) such that \(X-e+f\) and \(Y+e-f\) are in \(\mathcal{I}\)._

Now we provide the proof of Lemma 3.1. As mentioned above, this completes the proof of the correctness of our algorithm, and hence we can conclude Theorem 1.1. Furthermore, we can conclude Theorem 1.2 since Algorithm 1 and its correctness proof hold in the generality of a common base in the intersection of the partition matroid on the set \(E=\biguplus_{v\in V}\delta(v)\) with any matroid \(M=(E,\mathcal{I})\) of rank \(|V|\).

**Proof of Lemma 3.1**: _Algorithm 1 starts with \(\mathcal{C}=\{E\}\). Then the conditions in Lemma 3.1 hold at the beginning. We show by induction that they are preserved through the algorithm._ _It is easy to see that the condition \(p\leq q\) is preserved. Indeed, whenever Algorithm 1 is going to increase \(p\) (in line 8), it is the case that \(p+1\leq q\) because \(D_{p}\subseteq C_{p}\subsetneq E=D_{q}\) by the induction hypothesis. Thus \(p\leq q\) is maintained in the algorithm._ _We now show that \(D_{i}\subseteq C_{i}\) \((i=1,\ldots,p)\) is maintained. Note that \(\mathcal{C}\) is updated in lines 7 or 8. The update in line 8 (adding \(C_{p}=E\)) clearly preserves the condition. We complete the proof by showing that the update in line 7 also preserves the condition, i.e., we show the following statement._

* _Let_ \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) _be a multichain with_ \(C_{p}=E\) _such that_ \(p\leq q\) _and_ \(D_{i}\subseteq C_{i}\) _for_ \(i=1,\ldots,p\)_. Suppose the following two conditions hold._ 1. \(I\) _is a lex-maximal common independent set subject to_ \(I\subseteq E(\mathcal{C})\)_._ 2. \(\mathrm{span}(I\cap C_{i})=C_{i}\) _for_ \(i=1,\ldots,k-1\)_, and_ \(\mathrm{span}(I\cap C_{k})\subsetneq C_{k}\)_._ _Then \(D_{k}\subseteq\mathrm{span}(I\cap C_{k})\)._

_To show this statement, assume for contradiction that \(D_{k}\not\subseteq\mathrm{span}(I\cap C_{k})\)._ _We will first show the existence of distinct elements \(e_{1}\) and \(f_{1}\) such that \(e_{1},f_{1}\in\delta(v_{1})\) for some \(v_{1}\in V\) and \(f_{1}\in A\setminus I\) while \(e_{1}\in I\setminus A\). Then we will use the pair \(e_{1},f_{1}\) to show the existence of another pair \(e_{2},f_{2}\) such that \(e_{2},f_{2}\in\delta(v_{2})\) where \(f_{2}\neq f_{1}\) and \(f_{2}\in A\setminus I\) while \(e_{2}\in I\setminus A\). In this manner, for any \(t\in\mathbb{Z}_{+}\) we will be able to show distinct elements \(f_{1},f_{2},\ldots,f_{t}\) that belong to \(A\). However \(A\) has only \(n\) elements, a contradiction. Then we can conclude that our assumption \(D_{k}\not\subseteq\mathrm{span}(I\cap C_{k})\) is wrong. The following is our starting claim._ Claim 3.3.: _There exists \(v_{1}\in V\) such that there are \(e_{1},f_{1}\in\delta(v_{1})\) satisfying the following properties:_ 1.
\(f_{1}\in A\setminus I\)_,_ \(I_{1}:=(I\cap C_{k})+f_{1}\in\mathcal{I}\)_,_ \(I_{1}\subseteq E(\mathcal{C})\)_, and_ \(\mathsf{lev}_{\mathcal{C}}(f_{1})=k\)_,_ 2. \(e_{1}\in I_{1}\setminus A\) _and_ \(\mathsf{lev}_{\mathcal{C}}(e_{1})=\mathsf{lev}_{\mathcal{D}}(e_{1})\leq k\)_._ Proof.: Since \(\mathcal{D}\) is a dual certificate of \(A\), we have \(\operatorname{span}(A\cap D_{k})=D_{k}\). So \(D_{k}\not\subseteq\operatorname{span}(I\cap C_{k})\) implies that \(\operatorname{span}(A\cap D_{k})\not\subseteq\operatorname{span}(I\cap C_{k})\). Hence \(A\cap D_{k}\not\subseteq\operatorname{span}(I\cap C_{k})\). So there exists \(f_{1}\) such that \(f_{1}\in A\cap D_{k}\) and \(f_{1}\not\in\operatorname{span}(I\cap C_{k})\). Since \(D_{k}\subseteq C_{k}\), we have \(f_{1}\in D_{k}\subseteq C_{k}\). We also have \(D_{k-1}\subseteq C_{k-1}=\operatorname{span}(I\cap C_{k-1})\subseteq \operatorname{span}(I\cap C_{k})\not\ni f_{1}\). Hence \(f_{1}\in C_{k}\setminus C_{k-1}\) and \(f_{1}\in D_{k}\setminus D_{k-1}\), i.e., \(\mathsf{lev}_{\mathcal{C}}(f_{1})=\mathsf{lev}_{\mathcal{D}}(f_{1})=k\). Since \(f_{1}\in A\subseteq E(\mathcal{D})\) and \(\mathsf{lev}_{\mathcal{C}}(f_{1})=\mathsf{lev}_{\mathcal{D}}(f_{1})\), we have \(f_{1}\in E(\mathcal{C})\) by Claim 3.2. As \(I\subseteq E(\mathcal{C})\), we then have \(I_{1}:=(I\cap C_{k})+f_{1}\subseteq E(\mathcal{C})\). Also, \(I_{1}\in\mathcal{I}\) by \(f_{1}\not\in\operatorname{span}(I\cap C_{k})\). Since \(\mathsf{lev}_{\mathcal{C}}(f_{1})=k\), the set \(I_{1}=(I\cap C_{k})+f_{1}\) is lexicographically better than \(I\). Then, the lex-maximality of \(I\) implies that \(I_{1}\) must violate the partition matroid constraint, i.e., there exists \(e_{1}\in I_{1}\) such that \(e_{1}\neq f_{1}\) and \(e_{1},f_{1}\in\delta(v_{1})\) for some \(v_{1}\in V\). We have \(\mathsf{lev}_{\mathcal{C}}(e_{1})\leq k\) as \(e_{1}\in I_{1}\setminus\{f_{1}\}=I\cap C_{k}\). Since \(f_{1}\in\delta(v_{1})\cap A\) and \(|\delta(v_{1})\cap A|\leq 1\), we have \(e_{1}\not\in A\). Note that \(f_{1}\in E(\mathcal{D})\) implies \(\mathsf{lev}_{\mathcal{D}}(f_{1})\geq\mathsf{lev}_{\mathcal{D}}(e_{1})-1\) and \(e_{1}\in E(\mathcal{C})\) implies \(\mathsf{lev}_{\mathcal{C}}(e_{1})\geq\mathsf{lev}_{\mathcal{C}}(f_{1})-1\). Note also that, for any element \(e\in E\), we have \(\mathsf{lev}_{\mathcal{D}}(e)\geq\mathsf{lev}_{\mathcal{C}}(e)\) because \(D_{i}\subseteq C_{i}\) for all \(i\). * If \(f_{1}\succ_{v_{1}}e_{1}\), then \(\mathsf{lev}_{\mathcal{C}}(e_{1})>\mathsf{lev}_{\mathcal{C}}(f_{1})\) by \(e_{1}\in E(\mathcal{C})\),7and hence \(\mathsf{lev}_{\mathcal{D}}(f_{1})\geq\mathsf{lev}_{\mathcal{D}}(e_{1})-1\geq \mathsf{lev}_{\mathcal{C}}(e_{1})-1\geq\mathsf{lev}_{\mathcal{C}}(f_{1})\). As we have \(\mathsf{lev}_{\mathcal{D}}(f_{1})=\mathsf{lev}_{\mathcal{C}}(f_{1})\), all the equalities hold. Footnote 7: Actually, the case \(f_{1}\succ_{v_{1}}e_{1}\) is impossible because \(\mathsf{lev}_{\mathcal{C}}(e_{1})>\mathsf{lev}_{\mathcal{C}}(f_{1})\) contradicts \(\mathsf{lev}_{\mathcal{C}}(e_{1})\leq k=\mathsf{lev}_{\mathcal{C}}(f_{1})\). We write the proof in this form because the proofs of Claims 3.4 and 3.5 refer to the argument here to apply it to \(e_{j},f_{j}\), where \(\mathsf{lev}_{\mathcal{C}}(f_{j})=k\) is not assumed. 
* If \(e_{1}\succ_{v_{1}}f_{1}\), then \(\mathsf{lev}_{\mathcal{D}}(f_{1})>\mathsf{lev}_{\mathcal{D}}(e_{1})\) by \(f_{1}\in E(\mathcal{D})\), and hence \(\mathsf{lev}_{\mathcal{D}}(f_{1})\geq\mathsf{lev}_{\mathcal{D}}(e_{1})+1\geq \mathsf{lev}_{\mathcal{C}}(e_{1})+1\geq\mathsf{lev}_{\mathcal{C}}(f_{1})\). As we have \(\mathsf{lev}_{\mathcal{D}}(f_{1})=\mathsf{lev}_{\mathcal{C}}(f_{1})\), all the equalities hold.
* If \(f_{1}\sim_{v_{1}}e_{1}\), then \(\mathsf{lev}_{\mathcal{C}}(e_{1})\geq\mathsf{lev}_{\mathcal{C}}(f_{1})\) by \(e_{1}\in E(\mathcal{C})\); also \(\mathsf{lev}_{\mathcal{D}}(f_{1})\geq\mathsf{lev}_{\mathcal{D}}(e_{1})\) by \(f_{1}\in E(\mathcal{D})\). Hence, we have \(\mathsf{lev}_{\mathcal{D}}(f_{1})\geq\mathsf{lev}_{\mathcal{D}}(e_{1})\geq \mathsf{lev}_{\mathcal{C}}(e_{1})\geq\mathsf{lev}_{\mathcal{C}}(f_{1})\). Since \(\mathsf{lev}_{\mathcal{D}}(f_{1})=\mathsf{lev}_{\mathcal{C}}(f_{1})\), all the equalities hold.

Thus in all the cases, we have \(\mathsf{lev}_{\mathcal{C}}(e_{1})=\mathsf{lev}_{\mathcal{D}}(e_{1})\leq k\) and \(e_{1}\in I_{1}\setminus A\). Our next claim is the following. Recall that \(I_{1}:=(I\cap C_{k})+f_{1}\in\mathcal{I}\).

Claim 3.4.: _There exists \(v_{2}\in V\) such that there are \(e_{2},f_{2}\in\delta(v_{2})\) satisfying the following properties:_ 1. \(f_{2}\in A\setminus I_{1}\)_,_ \(I_{2}:=I_{1}-e_{1}+f_{2}\in\mathcal{I}\)_,_ \(I_{2}\subseteq E(\mathcal{C})\)_, and_ \(\mathsf{lev}_{\mathcal{C}}(e_{1})=\mathsf{lev}_{\mathcal{C}}(f_{2})\)_,_ 2. \(e_{2}\in I_{2}\setminus A\) _and_ \(\mathsf{lev}_{\mathcal{C}}(e_{2})=\mathsf{lev}_{\mathcal{D}}(e_{2})\leq k\)_._

Proof.: We know from Claim 3.3 that \(I_{1}=(I\cap C_{k})+f_{1}\in\mathcal{I}\). The set \(I_{1}\) satisfies \(\operatorname{span}(I_{1}\cap C_{i})=\operatorname{span}(I\cap C_{i})=C_{i}\) for each \(1\leq i\leq k-1\); this is because \(I_{1}\cap C_{i}=I\cap C_{i}\) for each \(i\leq k-1\). Let us apply the exchange axiom in Fact 3.1 to \(I_{1},A\in\mathcal{I}\) and \(e_{1}\in I_{1}\setminus A\). Since \(A\) is maximal in \(\mathcal{I}\), we have \(A+e_{1}\not\in\mathcal{I}\), and hence there exists \(f_{2}\in A\setminus I_{1}\) such that \(I_{1}-e_{1}+f_{2}\) and \(A+e_{1}-f_{2}\) are in \(\mathcal{I}\). Using that \(\operatorname{span}(A\cap D_{i})=D_{i}\) for \(1\leq i\leq q\), from \(e_{1}\notin\operatorname{span}(A-f_{2})\) we obtain \(\mathsf{lev}_{\mathcal{D}}(f_{2})\leq\mathsf{lev}_{\mathcal{D}}(e_{1})\): indeed, assuming \(\mathsf{lev}_{\mathcal{D}}(f_{2})=\ell\geq 2\) we get \(D_{\ell-1}=\operatorname{span}(A\cap D_{\ell-1})\subseteq\operatorname{span}(A-f_{2})\), which implies \(e_{1}\notin D_{\ell-1}\) and hence also \(\mathsf{lev}_{\mathcal{D}}(e_{1})\geq\ell=\mathsf{lev}_{\mathcal{D}}(f_{2})\). Similarly, from \(f_{2}\notin\operatorname{span}(I_{1}-e_{1})\), using \(\operatorname{span}(I_{1}\cap C_{i})=C_{i}\) for \(1\leq i\leq k-1\) and \(\mathsf{lev}_{\mathcal{C}}(e_{1})\leq k\), we obtain \(\mathsf{lev}_{\mathcal{C}}(e_{1})\leq\mathsf{lev}_{\mathcal{C}}(f_{2})\). Then \(\mathsf{lev}_{\mathcal{C}}(e_{1})\leq\mathsf{lev}_{\mathcal{C}}(f_{2})\leq\mathsf{lev}_{\mathcal{D}}(f_{2})\leq\mathsf{lev}_{\mathcal{D}}(e_{1})=\mathsf{lev}_{\mathcal{C}}(e_{1})\), and hence all the equalities hold; in particular, \(\mathsf{lev}_{\mathcal{C}}(e_{1})=\mathsf{lev}_{\mathcal{C}}(f_{2})\) and \(\mathsf{lev}_{\mathcal{C}}(f_{2})=\mathsf{lev}_{\mathcal{D}}(f_{2})\). Since \(f_{2}\in A\subseteq E(\mathcal{D})\), Claim 3.2 implies \(f_{2}\in E(\mathcal{C})\), and hence \(I_{2}:=I_{1}-e_{1}+f_{2}\subseteq E(\mathcal{C})\); recall that \(I_{2}\in\mathcal{I}\). Since \(\mathsf{lev}_{\mathcal{C}}(e_{1})=\mathsf{lev}_{\mathcal{C}}(f_{2})\) and \(\mathsf{lev}_{\mathcal{C}}(f_{1})=k\), the set \(I_{2}\) is lexicographically better than \(I\), so \(I_{2}\) must violate the partition matroid constraint. By the same argument as used in Claim 3.3, there exists \(e_{2}\) such that \(e_{2},f_{2}\in\delta(v_{2})\) for some \(v_{2}\in V\), with \(e_{2}\in I_{2}\setminus A\) and \(\mathsf{lev}_{\mathcal{C}}(e_{2})=\mathsf{lev}_{\mathcal{D}}(e_{2})\leq k\).

Note that \(f_{2}\neq f_{1}\) since \(f_{1}\in I_{1}\) and \(f_{2}\in A\setminus I_{1}\). Let \(t\in\mathbb{Z}_{+}\). As shown in Claim 3.4 for \(t=3\), suppose we have constructed for \(2\leq j\leq t-1\): 1. \(f_{j}\in A\setminus I_{j-1}\), \(\ I_{j}:=I_{j-1}-e_{j-1}+f_{j}\in\mathcal{I}\), \(\ I_{j}\subseteq E(\mathcal{C})\), and \(\mathsf{lev}_{\mathcal{C}}(e_{j-1})=\mathsf{lev}_{\mathcal{C}}(f_{j})\), 2. \(e_{j}\in I_{j}\setminus A\) and \(\mathsf{lev}_{\mathcal{C}}(e_{j})=\mathsf{lev}_{\mathcal{D}}(e_{j})\leq k\). For each \(j\) with \(2\leq j\leq t-1\), note that \(I_{j}\) satisfies \(\operatorname{span}(I_{j}\cap C_{i})=\operatorname{span}(I\cap C_{i})=C_{i}\) for each \(1\leq i\leq k-1\).
Indeed, since \(\mathsf{lev}_{\mathcal{C}}(e_{j-1})=\mathsf{lev}_{\mathcal{C}}(f_{j})\), we have \(|I_{j}\cap C_{i}|=|I\cap C_{i}|=\operatorname{rank}(C_{i})\) for each \(i\leq k-1\). This implies \(\operatorname{span}(I_{j}\cap C_{i})=C_{i}\). Claim 3.5 generalizes Claim 3.4 for any \(t\geq 3\). **Claim 3.5**: _There exists \(v_{t}\in V\) such that there are \(e_{t},f_{t}\in\delta(v_{t})\) satisfying the following properties:_ 1. \(f_{t}\in A\setminus I_{t-1}\)_,_ \(\ I_{t}:=I_{t-1}-e_{t-1}+f_{t}\in\mathcal{I}\)_,_ \(I_{t}\subseteq E(\mathcal{C})\)_, and_ \(\mathsf{lev}_{\mathcal{C}}(e_{t-1})=\mathsf{lev}_{\mathcal{C}}(f_{t})\)_,_ 2. \(e_{t}\in I_{t}\setminus A\) _and_ \(\mathsf{lev}_{\mathcal{C}}(e_{t})=\mathsf{lev}_{\mathcal{D}}(e_{t})\leq k\)_._ Proof.: Let us apply the exchange axiom in Fact 3.1 to \(I_{t-1},A\in\mathcal{I}\) and \(e_{t-1}\in I_{t-1}\setminus A\). Since \(A+e_{t-1}\not\in\mathcal{I}\), there exists \(f_{t}\in A\setminus I_{t-1}\) such that \(I_{t-1}-e_{t-1}+f_{t}\) and \(A+e_{t-1}-f_{t}\) are in \(\mathcal{I}\). By the conditions \(\operatorname{span}(A\cap D_{i})=D_{i}\) for \(1\leq i\leq q\) we have \(\mathsf{lev}_{\mathcal{D}}(f_{t})\leq\mathsf{lev}_{\mathcal{D}}(e_{t-1})\), and by \(\operatorname{span}(I_{t-1}\cap C_{i})=C_{i}\) for \(1\leq i\leq k-1\) and \(\mathsf{lev}_{\mathcal{C}}(e_{t-1})\leq k\) we have \(\mathsf{lev}_{\mathcal{C}}(e_{t-1})\leq\mathsf{lev}_{\mathcal{C}}(f_{t})\). Then \(\mathsf{lev}_{\mathcal{C}}(e_{t-1})\leq\mathsf{lev}_{\mathcal{C}}(f_{t})\leq \mathsf{lev}_{\mathcal{D}}(f_{t})\leq\mathsf{lev}_{\mathcal{D}}(e_{t-1})= \mathsf{lev}_{\mathcal{C}}(e_{t-1})\), and hence all the equalities hold. So we have \(f_{t}\in A\setminus I_{t-1},\mathsf{lev}_{\mathcal{C}}(f_{t})=\mathsf{lev}_{ \mathcal{D}}(f_{t})\), and \(\mathsf{lev}_{\mathcal{C}}(e_{t-1})=\mathsf{lev}_{\mathcal{C}}(f_{t})\). As \(f_{t}\in A\subseteq E(\mathcal{D})\), Claim 3.2 implies \(f_{t}\in E(\mathcal{C})\). Observe that \(I_{t}:=I_{t-1}-e_{t-1}+f_{t}=(I\cap C_{k})+f_{1}-e_{1}+\ldots+f_{t-1}-e_{t-1}+f _{t}\subseteq E(\mathcal{C})\), and recall \(I_{t}\in\mathcal{I}\). Since \(\mathsf{lev}_{\mathcal{C}}(e_{j-1})=\mathsf{lev}_{\mathcal{C}}(f_{j})\) for \(2\leq j\leq t\) and \(\mathsf{lev}_{\mathcal{C}}(f_{1})=k\), the set \(I_{t}\) is lexicographically better than \(I\). This implies that \(I_{t}\) must violate the partition matroid constraint. By the same argument as used in Claim 3.3 to show \(\mathsf{lev}_{\mathcal{C}}(e_{1})=\mathsf{lev}_{\mathcal{D}}(e_{1})\), we see that there exists \(e_{t}\) such that \(e_{t},f_{t}\in\delta(v_{t})\) for some \(v_{t}\), satisfying also \(e_{t}\in I_{t}\setminus A\) and \(\mathsf{lev}_{\mathcal{C}}(e_{t})=\mathsf{lev}_{\mathcal{D}}(e_{t})\leq k\). This completes the proof of this claim. Observe that \(f_{t}\) is distinct from \(f_{1},\ldots,f_{t-1}\) since \(\{f_{1},\ldots,f_{t-1}\}\subseteq I_{t-1}\) while \(f_{t}\in A\setminus I_{t-1}\). Thus, for each \(t\in\mathbb{Z}_{+}\), we have shown distinct elements \(f_{1},\ldots,f_{t}\) in \(A\), contradicting that \(|A|\leq n\). Therefore, it has to be the case that \(D_{k}\subseteq\operatorname{span}(I\cap C_{k})\). This completes the proof of Lemma 3.1. We conclude this section with the proof of Lemma 2.1, which was postponed in Section 2. Proof of Lemma 2.1.: The optimal value of LP1 is at least \(0\) since \(\mathsf{wt}_{A}(A)=0\). 
Thus if there exists a feasible solution \((\vec{y},\vec{\alpha})\) to LP2 whose objective value is \(0\), then \((\vec{y},\vec{\alpha})\) is an optimal solution to LP2. Since the optimal value of LP2 is \(0\), \(A\) is a popular arborescence in \(G\). If \(A\) is a popular arborescence, then the optimal value of LP2 is \(0\). We will now show there always exists an optimal solution \((\vec{y},\vec{\alpha})\) to LP2 that satisfies properties 1-4. 1. It is a well-known fact on matroid intersection (see [31, Theorem 41.12] or [16, Lecture 12, Claim 2]) that there exists an integral optimal solution to LP2 such that the support of the dual variables corresponding to the matroid \(M\) is a chain. Thus property 1 follows. 2. Among all the optimal solutions to LP2 that satisfy property 1, let \((\vec{y},\vec{\alpha})\) be the one that minimizes \(\sum_{C\in\mathcal{C}}|\operatorname{span}(C)\setminus C|\), where \(\mathcal{C}\) is the support of \(\vec{y}\). We claim that \(\operatorname{span}(A\cap C)=C\) holds for all \(C\in\mathcal{C}\). Observe that each \(C\in\mathcal{C}\) satisfies \(y_{C}>0\), and hence complementary slackness implies that the characteristic vector \(x\) of \(A\) satisfies \(\sum_{e\in C}x_{e}=\operatorname{rank}(C)\), i.e., \(|A\cap C|=\operatorname{rank}(C)\). Therefore, to obtain \(\operatorname{span}(A\cap C)=C\) for all \(C\in\mathcal{C}\), it suffices to show \(\operatorname{span}(C)=C\) for all \(C\in\mathcal{C}\). Suppose contrary that it does not hold. Then there exists at least one \(C\in\mathcal{C}\) with \(\operatorname{span}(C)\neq C\). Among all such \(C\), let \(C^{*}\in\mathcal{C}\) be the maximal one. Define \(\vec{z}\) as follows: (i) \(z_{\operatorname{span}(C^{*})}=y_{\operatorname{span}(C^{*})}+y_{C^{*}}\), (ii) \(z_{C^{*}}=0\), and (iii) \(z_{S}=y_{S}\) for all other \(S\subseteq E\). Then \(\mathcal{C}^{\prime}=(\mathcal{C}\setminus\{C^{*}\})\cup\{\operatorname{span}(C^{*})\}\) is the support of \(\vec{z}\). Note that \(\mathcal{C}^{\prime}\) is again a chain because any \(C\in\mathcal{C}\) with \(C^{*}\subsetneq C\) satisfies \(\operatorname{span}(C)=C\) by the choice of \(C^{*}\), hence \(\operatorname{span}(C^{*})\subseteq\operatorname{span}(C)=C\). Observe that \((\vec{z},\vec{\alpha})\) is a feasible solution to LP2. Moreover, since \(\operatorname{rank}(C^{*})=\operatorname{rank}(\operatorname{span}(C^{*}))\), it does not change the objective value. Thus \((\vec{z},\vec{\alpha})\) is an optimal solution to LP2 that satisfies property 1 and \(\sum_{C\in\mathcal{C}^{\prime}}|\operatorname{span}(C)\setminus C|<\sum_{C\in \mathcal{C}}|\operatorname{span}(C)\setminus C|\). This contradicts the choice of \((\vec{y},\vec{\alpha})\). 3. Suppose \((\vec{y},\vec{\alpha})\) satisfies properties 1-2 but not property 3. If \(\emptyset\in{\cal C}\), then remove \(\emptyset\) from \({\cal C}\) and modify \(\vec{y}\) by setting \(y_{\emptyset}=0\). This does not change the objective value and does not violate feasibility constraints. If \(E\not\in{\cal C}\), then add \(E\) to \({\cal C}\) and modify \((\vec{y},\vec{\alpha})\) by (i) setting \(y_{E}=1\) and (ii) decreasing every \(\alpha_{v}\) value by 1. Since \(\mbox{rank}(E)=|V|\), the objective value does not change. Also, all constraints in LP2 are preserved. Hence the new solution satisfies properties 1-3. 4. 
Among all the optimal solutions to LP2 that satisfy properties 1-3, let \((\vec{y},\vec{\alpha})\) be the one that minimizes \(\sum_{S\subseteq E}y_{S}\) and let \({\cal C}\) be the support of \(y\). Note that \(\alpha_{v}=-\sum_{C\in{\cal C}:A(v)\in C}y_{C}\) holds for any \(v\in V\) by complementary slackness (observe that \(x_{A(v)}>0\) for \(A\)'s characteristic vector \(x\)). Suppose \(y_{C^{*}}\geq 2\) for some \(C^{*}\in{\cal C}\). Define \((\vec{z},\vec{\beta})\) as follows: \(z_{C*}=y_{C^{*}}-1\) and \(z_{S}=y_{S}\) for every other \(S\subseteq E\). For any \(v\in V\), let \(\beta_{v}=-\sum_{C\in{\cal C}:A(v)\in C}z_{C}\). We will show below that \((\vec{z},\vec{\beta})\) is a feasible solution to LP2. Let us first see what is the objective value attained by \((\vec{z},\vec{\beta})\). This value is \(\sum_{C\in{\cal C}}\mbox{rank}(C)\cdot z_{C}+\sum_{v\in V}\beta_{v}\). When compared to \(\sum_{C\in{\cal C}}\mbox{rank}(C)\cdot y_{C}+\sum_{v\in V}\alpha_{v}\), the first term has decreased by \(\mbox{rank}(C^{*})\) and the second term has increased by \(|\,\{\,v\in V:A(v)\in C^{*}\,\}\,|=|A\cap C^{*}|\leq\mbox{rank}(C^{*})\). Thus the objective value does not increase. We will now show that \((\vec{z},\vec{\beta})\) is a feasible solution to LP2, that is, \(\sum_{C\in{\cal C}:e\in C}z_{C}+\beta_{v}\geq\mbox{wt}_{A}(e)\) for each \(e\in\delta(v)\), \(v\in V\). Since \((\vec{y},\vec{\alpha})\) is feasible and the first term \(\sum_{C\in{\cal C}:e\in C}z_{C}\) decreases by at most 1 and the second term \(\beta_{v}=-\sum_{C\in{\cal C}:A(v)\in C}z_{C}\) never decreases, the only case we need to worry about is when the first term decreases and the second term does not increase. This implies that \(e\in C^{*}\) and \(A(v)\not\in C^{*}\); hence \(\sum_{C\in{\cal C}:e\in C}z_{C}+\beta_{v}=\sum_{C\in{\cal C}:e\in C}z_{C}- \sum_{C\in{\cal C}:A(v)\in C}z_{C}\geq z_{C*}\geq 1\geq\mbox{wt}_{A}(e)\). Thus \((\vec{z},\vec{\beta})\) is a feasible solution to LP2; furthermore, it is an optimal solution to LP2. Since \(\sum_{S\subseteq E}z_{S}<\sum_{S\subseteq E}y_{S}\), this contradicts the choice of \((\vec{y},\vec{\alpha})\). Thus, we have shown that \((\vec{y},\vec{\alpha})\) satisfies properties 1-3 and \(y_{C}=1\) for all \(C\in{\cal C}\). Since we have \(\alpha_{v}=-\sum_{C\in{\cal C}:A(v)\in C}y_{C}\), it follows that \(\alpha_{v}=-|\,\{\,C\in{\cal C}:A(v)\in C\,\}\,|\) for each \(v\in V\). \(\Box\) ## 4 Popular Colorful Forests This section proves Corollary 1.1 and Theorem 1.3 (in terms of the popular colorful forest problem). Let \(H=(U_{H},E_{H})\) be an undirected graph where \(E_{H}=E_{1}\cup\cdots\cup E_{n}\), i.e., \(E_{H}\) is partitioned into \(n\) color classes. Equivalently, there are \(n\) agents \(1,\ldots,n\) where agent \(i\) owns the elements in \(E_{i}\). For each \(i\), there is a partial order \(\succ_{i}\) over elements in \(E_{i}\). Recall that \(S\subseteq E_{H}\) is a _colorful forest_ if (i) \(S\) is a forest in \(H\) and (ii) \(|S\cap E_{i}|\leq 1\) for every \(i\in\{1,\ldots,n\}\). We refer to Section 1 on how every agent compares any pair of colorful forests; for any pair of colorful forests \(F\) and \(F^{\prime}\), let \(\phi(F,F^{\prime})\) be the number of agents that prefer \(F\) to \(F^{\prime}\). **Definition 4.1**: _A colorful forest \(F\) is popular if \(\phi(F,F^{\prime})\geq\phi(F^{\prime},F)\) for any colorful forest \(F^{\prime}\)._ The popular colorful forest problem is to decide if a given instance \(H\) admits a popular colorful forest or not. 
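The comparison underlying Definition 4.1 is easy to make concrete. The following is a small illustrative sketch (our own aid for toy instances, not the algorithm of this paper): it encodes each agent's preferences as numeric ranks, computes \(\phi(F,F^{\prime})\), and tests popularity by exhaustive enumeration. The helper names (`is_forest`, `colorful_forests`, `phi`, `is_popular`) are ours; we assume weak rankings encoded by rank numbers (smaller is better) and adopt the convention that an agent prefers owning any edge of its color class to owning none.

```
from itertools import product

def is_forest(edges):
    # union-find cycle test for an undirected edge set
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def colorful_forests(color_classes):
    # all S with |S ∩ E_i| <= 1 for every color class E_i and S a forest
    choices = [[None] + list(E_i) for E_i in color_classes]
    for pick in product(*choices):
        S = frozenset(e for e in pick if e is not None)
        if is_forest(S):
            yield S

def phi(F, F_prime, color_classes, rank):
    # number of agents preferring F to F'; rank[i][e] is agent i's rank of
    # edge e (smaller = better); owning no edge is treated as worst here
    better = 0
    for i, E_i in enumerate(color_classes):
        r1 = min((rank[i][e] for e in F if e in E_i), default=float("inf"))
        r2 = min((rank[i][e] for e in F_prime if e in E_i), default=float("inf"))
        if r1 < r2:
            better += 1
    return better

def is_popular(F, color_classes, rank):
    return all(phi(F, G, color_classes, rank) >= phi(G, F, color_classes, rank)
               for G in colorful_forests(color_classes))
```

Such an exhaustive check is of course exponential; it is meant only for validating outputs on instances of the size used in the examples of Appendix A.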
We will now show that Algorithm 1 solves the popular colorful forest problem. Observe that a popular colorful forest is a popular common independent set in the intersection of the partition matroid defined by \(E_{H}=E_{1}\cup\cdots\cup E_{n}\) and the graphic matroid of \(H\). In order to use the popular common _base_ algorithm to solve this problem, we will augment the ground set \(E_{H}\).

**An auxiliary instance \(G\).** For each \(i\in\{1,\ldots,n\}\), add a dummy edge \(e_{i}=(u_{i},v_{i})\) with endpoints \(u_{i},v_{i}\), where \(u_{i}\) and \(v_{i}\) are new vertices that we introduce; call the resulting graph \(G\). The vertex and edge sets of \(G=(U,E)\) are given by \(U=U_{H}\cup\bigcup_{i=1}^{n}\{u_{i},v_{i}\}\) and \(E=E_{H}\cup\bigcup_{i=1}^{n}\{e_{i}\}\). Furthermore, for each \(i\), the edge \(e_{i}\) will be the _worst_ element in \(i\)'s preference order \(\succ_{i}\), i.e., every \(f\in E_{i}\) satisfies \(f\succ_{i}e_{i}\). In the setting of general matroids, \(n\) dummy elements \(e_{1},\ldots,e_{n}\) are being introduced into the ground set \(E\) as _free_ elements, i.e., for any \(i\), no set \(S\subseteq E\) such that \(e_{i}\notin S\) can span \(e_{i}\). The partitions in the constructed matroid are \(E_{i}\cup\{e_{i}\}\) for all \(i\in\{1,\ldots,n\}\). Observe that there exists a one-to-one correspondence between colorful forests in \(H\) and colorful forests of size \(n\) in \(G\). Suppose \(F_{H}\) is a colorful forest in \(H\) and let \(C\subseteq\{1,\ldots,n\}\) be the set of colors missing in \(F_{H}\), i.e., \(F_{H}\cap E_{i}=\emptyset\) exactly if \(i\in C\). Let \(F_{G}=F_{H}\cup\bigcup_{i\in C}\{e_{i}\}\). Then \(F_{G}\) is a colorful forest of size \(n\) in \(G\). Conversely, given a colorful forest \(F_{G}\) of size \(n\) in \(G\), we can obtain a colorful forest \(F_{H}\) in \(H\) by deleting the dummy elements.

**Colorful forests in \(G\).** Let \(F_{H}\) and \(F^{\prime}_{H}\) be colorful forests in \(H\) and let \(F_{G}\) and \(F^{\prime}_{G}\) be the corresponding forests (of size \(n\)) in \(G\). Observe that \(\phi(F_{H},F^{\prime}_{H})=\phi(F_{G},F^{\prime}_{G})\). Thus popular colorful forests in \(H\) correspond to popular colorful forests of size \(n\) in \(G\) and vice-versa. We want popular colorful forests of size \(n\) to be popular common _bases_ in the intersection of the partition matroid and the graphic matroid of \(G\). Hence we will consider the \(n\)-truncation of the graphic matroid of \(G\), i.e., all sets of size larger than \(n\) will be deleted from the graphic matroid of \(G\). The function \(\operatorname{rank}(\cdot)\) now denotes the rank function of the truncation and we have \(\operatorname{rank}(E)=n\). Thus solving the popular common base problem in the intersection of the partition matroid defined by the color classes on \(E\) and the truncated graphic matroid of \(G\) solves the popular colorful forest problem in \(H\). Observe that such a reduction holds for the popular common independent set problem; hence Corollary 1.1 follows.

**The popular colorful forest polytope.** We will henceforth refer to a colorful forest of size \(n\) in the auxiliary instance \(G\) as a _colorful base_ in \(G\). Every popular colorful base \(F\) in \(G\) has a dual certificate as given in Lemma 2.1 and Lemma 2.2 (see footnote 8). We will now show these dual certificates are even more special than what is given in Lemma 2.2--along with the properties described there, the following property is also satisfied.
Footnote 8: In LP1 and LP2 defined with respect to \(F\), the set \(\delta(v)\) for \(v\in V\) will be replaced by \(E_{i}\cup\{e_{i}\}\) for \(i\in\{1,\ldots,n\}\), and in the definition of \(\operatorname{wt}_{F}\), the edge \(A(v)\) will be replaced by the unique element in \(F\cap(E_{i}\cup\{e_{i}\})\), denoted by \(F(i)\).

Lemma 4.1.: _Let \(F\) be a popular colorful base in the auxiliary instance \(G\) and let \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) be a dual certificate for \(F\). Then \(p\leq 2\)._

Proof.: Suppose not, i.e., \(p\geq 3\). From the definition of a dual certificate \(\mathcal{C}\), we have \(\emptyset\subsetneq C_{1}\subsetneq C_{2}\subsetneq\cdots\subsetneq C_{p}=E\).

Lemma 4.2.: _Let \(F\) be a popular colorful base in \(G\), let \(\mathcal{C}\) be a dual certificate for \(F\), and let \((\vec{y},\vec{\alpha})\) be the corresponding optimal solution to LP2. Then every popular colorful base \(F^{\prime}\) in \(G\) satisfies \(F^{\prime}\subseteq E(\mathcal{C})\) and \(|F^{\prime}\cap C|=\operatorname{rank}(C)\) for each \(C\in\mathcal{C}\)._

Proof.: Since \(F\) and \(F^{\prime}\) are both popular, the characteristic vector of \(F^{\prime}\) is an optimal solution to LP1 (defined with respect to \(F\)); as \(y_{C}>0\) for each \(C\in\mathcal{C}\), we get \(|F^{\prime}\cap C|=\operatorname{rank}(C)\) by complementary slackness. Then, what is left is to show \(F^{\prime}\subseteq E(\mathcal{C})\). We consider the cases where the length of \(\mathcal{C}\) is one and two.

1. Suppose \(\mathcal{C}=\{E\}\). Let \(\mathcal{D}\) be a dual certificate of \(F^{\prime}\) as described in Lemma 2.2. Then \(F^{\prime}\subseteq E(\mathcal{D})\). Assume that \(\mathcal{D}=\{D,E\}\) (otherwise \(\mathcal{D}=\{E\}=\mathcal{C}\)). Take any \(i\in\{1,\ldots,n\}\). We now show \(F^{\prime}(i)\in E(\mathcal{C})\). If \(F^{\prime}(i)\in D\) then \(\mathsf{lev}_{\mathcal{D}}(F^{\prime}(i))=1=\mathsf{lev}_{\mathcal{C}}(F^{\prime}(i))\); along with \(F^{\prime}(i)\in E(\mathcal{D})\), this implies \(F^{\prime}(i)\in E(\mathcal{C})\) by Claim 3.2. We thus assume that \(F^{\prime}(i)\notin D\). Since the characteristic vector \(x\) of \(F\) and \(x^{\prime}\) of \(F^{\prime}\) are optimal solutions to LP1 (defined with respect to \(F\)) and \((\vec{y},\vec{\alpha})\) is an optimal solution to LP2 (its dual LP), we will use complementary slackness. Because \(x_{F(i)}=1\), we have \(\sum_{\hat{C}\in\mathcal{C}:F(i)\in\hat{C}}y_{\hat{C}}+\alpha_{i}=\mathsf{wt}_{F}(F(i))\,(=0)\). Similarly, because \(x_{F^{\prime}(i)}^{\prime}=1\), we have \(\sum_{\hat{C}\in\mathcal{C}:F^{\prime}(i)\in\hat{C}}y_{\hat{C}}+\alpha_{i}=\mathsf{wt}_{F}(F^{\prime}(i))\). By subtracting the former from the latter, we obtain: (4.2) \[\sum_{\hat{C}\in\mathcal{C}:F^{\prime}(i)\in\hat{C}}y_{\hat{C}}\ \ -\ \sum_{\hat{C}\in\mathcal{C}:F(i)\in\hat{C}}y_{\hat{C}}\ =\ \mathsf{wt}_{F}(F^{\prime}(i)).\] Since \(\mathcal{C}=\{E\}\), the left hand side is \(1-1=0\). Hence \(\mathsf{wt}_{F}(F^{\prime}(i))=0\), which implies \(F(i)\sim_{i}F^{\prime}(i)\). The fact \(F(i)\in E(\mathcal{C})\) implies that \(F(i)\) is maximal with respect to \(\succ_{i}\) in \(E_{i}\cup\{e_{i}\}\). Because \(\succ_{i}\) is a weak ranking, \(F(i)\sim_{i}F^{\prime}(i)\) means that \(F^{\prime}(i)\) is also maximal, and hence \(F^{\prime}(i)\in E(\mathcal{C})\) follows.

2. Suppose \(\mathcal{C}=\{C,E\}\). Let \(\mathcal{D}\) be a dual certificate of \(F^{\prime}\). Then we have \(\mathcal{D}=\{D,E\}\) and \(D\subseteq C\) (by Lemma 3.1). Take any \(i\in\{1,\ldots,n\}\). We now show \(F^{\prime}(i)\in E(\mathcal{C})\). If \(F^{\prime}(i)\not\in C\) (resp., if \(F^{\prime}(i)\in D\)), then \(F^{\prime}(i)\not\in D\) (resp., \(F^{\prime}(i)\in C\)); hence \(\mathsf{lev}_{\mathcal{C}}(F^{\prime}(i))=\mathsf{lev}_{\mathcal{D}}(F^{\prime}(i))\). This fact along with \(F^{\prime}(i)\in E(\mathcal{D})\) implies that \(F^{\prime}(i)\in E(\mathcal{C})\), by Claim 3.2. Therefore, let us assume that \(F^{\prime}(i)\in C\setminus D\). By the same analysis as given in Case 1, Equation (4.2) holds. Let us also consider LP1 and LP2 defined with respect to \(F^{\prime}\) (instead of \(F\)). Let \((\vec{z},\vec{\beta})\) be the optimal solution of LP2 corresponding to \(\mathcal{D}\). As before, the characteristic vectors of \(F\) and \(F^{\prime}\) are optimal solutions to LP1.
By the same argument (with \(F^{\prime}\), \(F\) and \(\mathcal{D}\) taking the places of \(F\), \(F^{\prime}\), and \(\mathcal{C}\), resp.), we have: (4.3) \[\sum_{\hat{D}\in\mathcal{D}:F(i)\in\hat{D}}z_{\hat{D}}\ \ -\ \sum_{\hat{D}\in\mathcal{D}:F^{\prime}(i)\in\hat{D}}z_{\hat{D}}\ =\ \mathsf{wt}_{F^{\prime}}(F(i)).\] Since \(F^{\prime}(i)\in C\), the left hand side of (4.2) is \(1\) or \(0\), and so is \(\mathsf{wt}_{F}(F^{\prime}(i))\), which implies that we have \(F^{\prime}(i)\succ_{i}F(i)\) or \(F^{\prime}(i)\sim_{i}F(i)\). Furthermore, since \(F^{\prime}(i)\notin D\), the left hand side of (4.3) is \(1\) or \(0\), and so is \(\mathsf{wt}_{F^{\prime}}(F(i))\), which implies that \(F(i)\succ_{i}F^{\prime}(i)\) or \(F(i)\sim_{i}F^{\prime}(i)\). Therefore we must have \(F^{\prime}(i)\sim_{i}F(i)\). Hence \(F(i)\in C\) follows from (4.2). We have shown that \(F^{\prime}(i)\sim_{i}F(i)\) and \(F(i)\in C\). We also have \(F^{\prime}(i)\in C\). Since \(F(i)\in E(\mathcal{C})\), we see that \(F(i)\) is maximal in \(C\cap(E_{i}\cup\{e_{i}\})\) and dominates all elements in \((E_{i}\cup\{e_{i}\})\setminus C\) with respect to \(\succ_{i}\). Since \(\succ_{i}\) is a weak ranking and \(F^{\prime}(i)\sim_{i}F(i)\), the element \(F^{\prime}(i)\in C\) also satisfies these conditions, and hence \(F^{\prime}(i)\in E(\mathcal{C})\). Thus we have \(F^{\prime}(i)\in E(\mathcal{C})\) for every \(i\in\{1,\ldots,n\}\). Hence \(F^{\prime}\subseteq E(\mathcal{C})\). By Lemma 4.2, any popular colorful base \(F^{\prime}\) in \(G\) satisfies \(F^{\prime}\subseteq E(\mathcal{C})\) and \(|F^{\prime}\cap C|=\operatorname{rank}(C)\) if \(\mathcal{C}=\{C,E\}\). Conversely, any popular colorful base \(F^{\prime}\) in \(G\) that satisfies these conditions is popular by Lemma 2.2. Therefore the set of all popular colorful bases in \(G\) can be described as a face of the matroid intersection polytope. Since a popular colorful forest in the given instance \(H\) is obtained by deleting the dummy elements from popular colorful bases in \(G\), Theorem 1.3 follows. We also state this result explicitly in Theorem 4.1 in the setting of popular colorful forests. Let \(\mathcal{C}=\{C,E\}\) be a dual certificate for the popular colorful base \(F\) in \(G\) computed by Algorithm 1. **Theorem 4.1**.: _If preferences are weak rankings, an extension of the popular colorful forest polytope of the given instance \(H\) is defined by the constraints \(\sum_{e\in C}x_{e}=\operatorname{rank}(C)\) and \(x_{e}=0\) for all \(e\in E\setminus E(\mathcal{C})\) along with all the constraints of LP1._ Min-Cost Popular Arborescence We prove Theorem 1.4 in this section. We present a reduction from the Vertex Cover problem, whose input is an undirected graph \(H\) and an integer \(k\), and asks whether \(H\) admits a set of \(k\) vertices that is a vertex cover, that is, contains an endpoint from each edge in \(H\). Our reduction is strongly based on the reduction used in [21, Theorem 6.3] which showed the NP-hardness of the min-cost popular branching problem when vertices have partial order preferences. Recall that the min-cost popular branching problem is polynomial-time solvable when vertices have weak rankings [21] (also implied by Theorem 1.3). Note also that neither the hardness of min-cost popular branching for partial order preferences [21], nor the hardness of min-cost popular assignment for strict preferences [20] implies Theorem 1.4, since the min-cost popular arborescence problem does not contain either of these problems. 
To show the NP-hardness of the min-cost popular arborescence problem when vertices have strict rankings, we construct a directed graph \(G=(V\cup\{r\},E=E_{1}\cup E_{2}\cup E_{3})\) as follows; see Figure 1 for an illustration. We set \[V =\{w\}\cup\{v_{0},v_{1}:v\in V(H)\}\cup\{e_{u},e_{v}:e=uv\in E(H)\},\] \[E_{1} =\{(e_{u},e_{v}),(e_{v},e_{u}),(e_{u},w),(e_{v},w):e=uv\in E(H)\}\] \[\qquad\cup\{(v_{0},v_{1}),(v_{1},v_{0}):v\in V(H)\},\] \[E_{2} =\{(r,w)\}\cup\{(w,x):x\in V(G)\setminus\{r,w\}\},\] \[E_{3} =\{(r,v_{1}):v\in V(H)\}\cup\{(u_{0},e_{u}),(v_{0},e_{v}):e=uv\in E (H)\}.\] To define the preferences of each vertex in \(G\), we let all vertices prefer edges of \(E_{1}\) to edges of \(E_{2}\), which in turn are preferred to edges of \(E_{3}\). Whenever some vertex has more than one incoming edge in some \(E_{i}\), \(i\in\{1,2,3\}\), then it orders them in some arbitrarily fixed strict order. We set the cost of each edge in \(E_{3}\), as well as the cost of all edges entering \(w\) except for \((r,w)\) as \(\infty\). We set the cost of \((w,v_{1})\) as \(1\) for each \(v\in V(H)\), and we set the cost of all remaining edges as \(0\). We define our budget to be \(k\), finishing the construction of our instance of min-cost popular arborescence. We are going to show that \(H\) admits a vertex cover of size at most \(k\) if and only if \(G\) has a popular arborescence of cost at most \(k\). Suppose first that \(A\) is a popular arborescence in \(G\) with cost at most \(k\). We prove that the set \(S=\{v\in V(H):(w,v_{1})\in A\}\) is a vertex cover in \(H\). Since each edge \((w,v_{1})\) has cost \(1\), our budget implies \(|S|\leq k\). For a vertex \(v\in V(H)\) and an edge \(e=uv\in E(H)\), let \(A_{v}=A\cap(\delta(v_{0})\cup\delta(v_{1}))\) and \(A_{e}=A\cap(\delta(e_{u})\cup\delta(e_{v}))\), respectively. We note that any \(v\in V(H)\) satisfies that \(A_{v}\) is either \(\{(w,v_{0}),(v_{0},v_{1})\}\) or \(\{(w,v_{1}),(v_{1},v_{0})\}\). Indeed, if it is not the case, we have \(A_{v}=\{(w,v_{0}),(w,v_{1})\}\), since \(A\) is an arborescence with finite cost. However, this contradicts the popularity of \(A\), since \(A\setminus\{(w,v_{1})\}\cup\{(v_{0},v_{1})\}\) is more popular than \(A\). We can similarly show that each \(e=uv\in E(H)\) satisfies that \(A_{e}\) is either \(\{(w,e_{u}),(e_{u},e_{v})\}\) or \(\{(w,e_{v}),(e_{v},e_{u})\}\). Note also that \((r,w)\in A\), as all other edges entering \(w\) have infinite cost. Assume for the sake of contradiction that \(S\) is not a vertex cover of \(H\), i.e., there exists an edge \(e=uv\in E(H)\) such that neither \((w,u_{1})\) nor \((w,v_{1})\) is contained in \(A\). Then we have \(A_{u}=\{(w,u_{0}),(u_{0},u_{1})\}\) and \(A_{v}=(w,v_{0}),(v_{0},v_{1})\}.\) By symmetry, we assume without loss of generality that \(A_{e}=\{(w,e_{u}),(e_{u},e_{v})\}\). Define an edge set \(A^{\prime}\) by \[A^{\prime}=(A\setminus(A_{e}\cup A_{v}\cup\{(r,w)\}))\cup\{(r,v_{1}),(v_{1}, v_{0}),(v_{0},e_{v}),(e_{v},e_{u}),(e_{u},w)\}.\] We can see that \(A^{\prime}\) is an arborescence and is more popular than \(A\), since three vertices, \(v_{0}\), \(e_{u}\), and \(w\), prefer \(A^{\prime}\) to \(A\), while two vertices, \(v_{1}\) and \(e_{v}\), prefer \(A\) to \(A^{\prime}\), and all others are indifferent between them. This proves that \(S\) is a vertex cover of \(H\). For the other direction, assume that \(S\) is a vertex cover in \(H\). We construct a popular arborescence \(A\) of cost \(|S|\) in \(G\). 
For each \(e\in E(H)\) we fix an endpoint \(\sigma(e)\) of \(e\) that is contained in \(S\), and we denote by \(\bar{\sigma}(e)\) the other endpoint of \(e\) (which may or may not be in \(S\)). Let \[A=\{(r,w)\} \cup\{(w,v_{1}),(v_{1},v_{0}):v\in S\}\] \[\cup\{(w,v_{0}),(v_{0},v_{1}):v\in V(H)\setminus S\}\] \[\cup\{(w,e_{\bar{\sigma}(e)}),(e_{\bar{\sigma}(e)},e_{\sigma(e)}):e \in E(H)\}.\] It is straightforward to verify that \(A\) is an arborescence and its cost is exactly \(|S|\). Hence it remains to prove its popularity, which is done by showing a dual certificate \(\mathcal{C}\) for \(A\). To define \(\mathcal{C}\), let us first define a set \(X=\{w\}\cup\{e_{u},e_{v}:e=uv\in E(H)\}\cup\{v_{0},v_{1}:v\in S\}\) of vertices in \(G\). Then we set \(\mathcal{C}=\{C_{1},C_{2},C_{3}\}\) where \[C_{1}= \{(e_{u},e_{v}),(e_{v},e_{u}):e=uv\in E(H)\}\cup\{(v_{0},v_{1}),(v _{1},v_{0}):v\in S\},\] \[C_{2}= \{f\in E(H):f\text{ has two endpoints in }X\}\cup\{(v_{0},v_{1}),(v _{1},v_{0}):v\in V(H)\setminus S\},\] \[C_{3}= E.\] Let us first check that \(\operatorname{rank}(C_{i})=|A\cap C_{i}|\) for each \(C_{i}\in\mathcal{C}\). Clearly, \(C_{1}\) consists of mutually vertex-disjoint 2-cycles, and \(A\) contains an edge from each of them. Thus \(\operatorname{rank}(C_{1})=|A\cap C_{1}|\) follows. The edge set \(C_{2}\) consists of all edges induced by the vertices of \(X\), together with another set of mutually vertex-disjoint 2-cycles that share no vertex with \(X\). It is easy to verify that \(A\cap C_{2}\) contains an edge from each of the 2-cycles in question, as well as a directed tree containing all vertices of \(X\). Thus, \(\operatorname{rank}(C_{2})=|A\cap C_{2}|\) holds. Since \(A\) is an arborescence, \(\operatorname{rank}(C_{3})=\operatorname{rank}(E)=|V|=|A\cap C_{3}|\) is obvious. Observe that for each \(i\in\{1,2,3\}\) we have \(\operatorname{span}(C_{i})=C_{i}\), and hence \(\operatorname{rank}(C_{i})=|A\cap C_{i}|\) implies \(\operatorname{span}(A\cap C_{i})=C_{i}\). It remains to see that \(A\subseteq E(\mathcal{C})\). First, \(A(w)=(r,w)\) is the unique incoming edge of \(w\) with \(\mathcal{C}\)-level 3. For some \(v\in S\), \(\operatorname{\mathsf{lev}}_{\mathcal{C}}^{*}(v_{0})=2\) while \(\operatorname{\mathsf{lev}}_{\mathcal{C}}^{*}(v_{1})=3\), and by their preferences both \(A(v_{0})=(v_{1},v_{0})\) and \(A(v_{1})=(w,v_{1})\) are in \(E(\mathcal{C})\). For some \(v\in V(H)\setminus S\), \(\operatorname{\mathsf{lev}}_{\mathcal{C}}^{*}(v_{0})=\operatorname{\mathsf{ lev}}_{\mathcal{C}}^{*}(v_{1})=3\), and hence both \(A(v_{0})=(w,v_{0})\) and \(A(v_{1})=(v_{0},v_{1})\) are in \(E(\mathcal{C})\). Finally, consider an edge \(e=uv\in E(H)\) with \(\sigma(e)=v\in S\). As \(\operatorname{\mathsf{lev}}_{\mathcal{C}}^{*}(e_{u})\leq 3\), and since \(e_{u}\) prefers \((w,e_{u})\) to \((u_{0},e_{u})\), we know that the edge \(A(e_{u})=(w,e_{u})\in C_{2}\) is contained in \(E(\mathcal{C})\). By contrast, since \(v\in S\) implies \(v_{0}\in X\), we obtain \(\operatorname{\mathsf{lev}}_{\mathcal{C}}^{*}(e_{v})=2\), and therefore the edge \(A(e_{v})=(e_{u},e_{v})\in C_{1}\) is contained in \(E(\mathcal{C})\). By Lemma 2.2, this proves that \(A\) is indeed a popular arborescence. Figure 1: Illustration of the reduction in the proof of Theorem 1.4. Figure (a) illustrates the construction showing a subgraph of \(G\), assuming that the input graph \(H\) contains an edge \(e=uv\). 
Edges in \(E_{1}\), \(E_{2}\), and \(E_{3}\) are depicted with double red, single blue, and dashed green lines, respectively. Edges marked with two, one, and zero crossbars have cost \(\infty\), \(1\), and \(0\), respectively. Figure (b) illustrates the popular arborescence \(A\) in bold, assuming \(v\in S\) and \(u\notin S\). The chain \(C_{1}\subsetneq C_{2}\subsetneq C_{3}=E\) certifying the popularity of \(A\) is shown using grey and dotted ellipses for edges in \(C_{1}\) and \(C_{2}\), respectively. Popular Arborescences with Forced/Forbidden Edges We prove Theorem 1.5 in this section. Observe that the problem of deciding if there exists a popular arborescence \(A\) such that \(A\supseteq E^{+}\) for a given set \(E^{+}\subseteq E\) of _forced_ edges can be reduced to the problem of deciding if there exists a popular arborescence \(A\) such that certain edges are _forbidden_ for \(A\). Let \(V^{\prime}\subseteq V\) be the set of those vertices \(v\) such that \(\delta(v)\cap E^{+}\neq\emptyset\); clearly, we may assume \(|\delta(v)\cap E^{+}|=1\) for each \(v\in V^{\prime}\). Let \(E^{\prime}=\bigcup_{v\in V^{\prime}}(\delta(v)\setminus E^{+})\). Since \(A\supseteq E^{+}\) if and only if \(A\cap E^{\prime}=\emptyset\), it follows that the problem of deciding if there exists a popular arborescence \(A\) such that \(E^{+}\subseteq A\) and \(E^{-}\cap A=\emptyset\) reduces to the problem of deciding if there exists a popular arborescence \(A\) such that \(A\cap E_{0}=\emptyset\) for a set \(E_{0}\subseteq E\) of forbidden edges. #### 5.0.1 Forbidden edges We present our algorithm that decides if \(G\) admits a popular arborescence that avoids \(E_{0}\) for a given subset \(E_{0}\) of \(E\) as Algorithm 2. The only difference from the original popular arborescence algorithm (Algorithm 1) is in line 4: the new algorithm finds a lexicographically maximal branching in the set \(E(\mathcal{C})\setminus E_{0}\) instead of \(E(\mathcal{C})\). Recall that \(\operatorname{rank}(E)=|V|=n\). ``` 1:Initialize \(p=1\) and \(C_{1}=E\). \(\triangleright\) Initially we set \(\mathcal{C}=\{E\}\). 2:while\(p\leq n\)do 3: Compute the edge set \(E(\mathcal{C})\) from the current multichain \(\mathcal{C}\). 4: Find a branching \(I\subseteq E(\mathcal{C})\setminus E_{0}\) that lexicographically maximizes \((|I\cap C_{1}|,\dots,|I\cap C_{p}|)\). 5:if\(|I\cap C_{i}|=\operatorname{rank}(C_{i})\) for every \(i=1,\dots,p\)then return \(I\). 6: Let \(k\) be the minimum index such that \(|I\cap C_{k}|<\operatorname{rank}(C_{k})\). 7: Update \(C_{k}\leftarrow\operatorname{span}(I\cap C_{k})\). 8:if\(k=p\)then\(p\gets p+1\), \(C_{p}\gets E\), and \(\mathcal{C}\leftarrow\mathcal{C}\cup\{C_{p}\}\). 9: Return "\(G\) has no popular arborescence that avoids \(E_{0}\)". ``` **Algorithm 2** The popular arborescence algorithm with the forbidden edge set \(E_{0}\) Theorem 6.1.: _Let \(E_{0}\subseteq E\). The instance \(G=(V\cup\{r\},E)\) admits a popular arborescence \(A\) such that \(A\cap E_{0}=\emptyset\) if and only if Algorithm 2 returns a popular arborescence with no edge of \(E_{0}\)._ Proof.: The easy side is to show that if Algorithm 2 returns an arborescence \(I\), then (i) \(I\) is popular and (ii) \(I\cap E_{0}=\emptyset\). As done in Section 3, let us prune the multichain \(\mathcal{C}\) into a chain \(\mathcal{C}^{\prime}\). Because \(I\subseteq E(\mathcal{C})\setminus E_{0}\) and \(E(\mathcal{C})\subseteq E(\mathcal{C}^{\prime})\), we have \(I\subseteq E(\mathcal{C}^{\prime})\setminus E_{0}\). 
Since \(I\subseteq E(\mathcal{C}^{\prime})\) and \(|I\cap C_{i}^{\prime}|=\operatorname{rank}(C_{i}^{\prime})\) (and hence \(\operatorname{span}(I\cap C_{i}^{\prime})=C_{i}^{\prime}\)) for every \(C_{i}^{\prime}\in\mathcal{C}^{\prime}\), it follows from Lemma 2.2 that \(I\) is a popular arborescence. We now show the converse. Suppose that \(G\) admits a popular arborescence \(A\) with \(A\cap E_{0}=\emptyset\). Let \(\mathcal{D}=\{D_{1},\dots,D_{q}\}\) be a dual certificate for \(A\). Then we have \(A\subseteq E(\mathcal{D})\setminus E_{0}\). It suffices to show that Algorithm 2 maintains the following invariant: the multichain \(\mathcal{C}=\{C_{1},\dots,C_{p}\}\) maintained in the algorithm satisfies \(p\leq q\) and \(D_{i}\subseteq C_{i}\) for any \(i=1,2,\dots,p\). We can show a variant of Lemma 3.1, i.e., we can show that when \(C_{k}\) is updated in the algorithm, \(D_{k}\subseteq\operatorname{span}(I\cap C_{k})\) holds where \(I\) is a lexicographically maximal branching in \(E(\mathcal{C})\setminus E_{0}\). The proof of Lemma 3.1 works almost as it is. Recall that we sequentially find elements \(f_{1},e_{1},f_{2},e_{2},\dots\) in the proof of Lemma 3.1. For each \(j=1,2,\dots\), in addition to the condition \(f_{j}\in E(\mathcal{C})\), we have \(f_{j}\not\in E_{0}\) since \(f_{j}\in A\subseteq E\setminus E_{0}\). By this, \(I_{j}=(I\cap C_{k})+f_{1}-e_{1}+f_{2}\dots-e_{j-1}+f_{j}\) satisfies \(I_{j}\subseteq E(C)\setminus E_{0}\) for each \(j\). Hence the proof of Lemma 3.1 works with "lex-maximality subject to \(I\subseteq E(\mathcal{C})\setminus E_{0}\)" replacing "lex-maximality subject to \(I\subseteq E(\mathcal{C})\)". ## 7 Minimum Unpopularity Margin Arborescence We prove Theorem 1.6 in this section. It is easy to see that the problem is in \(\mathsf{NP}\), since given an arborescence \(A\) we can verify \(\mu(A)\leq k\) efficiently, assuming that a dual certificate for \(A\) (i.e., a solution for LP2 with objective value \(k\)) is provided. To prove \(\mathsf{NP}\)-hardness, we present a reduction from the following \(\mathsf{NP}\)-hard variant of the Exact 3-Cover problem [17]. The input contains a set \(U\) of size \(3n\) and a set family \(\mathcal{S}=\{S_{1},\dots,S_{3n}\}\) where \(S_{i}\subseteq U\) and \(|S_{i}|=3\) for each \(S_{i}\in\mathcal{S}\), and each \(u\in U\) is contained in exactly three sets from \(\mathcal{S}\). The task is to decide whether there exist \(n\) sets in \(\mathcal{S}\) whose union is \(U\). Our reduction draws inspiration from the reduction used in [21, Theorem 4.6] which proved the NP-hardness of the \(k\)-unpopularity margin branching problem when vertices have partial order preferences. Recall that this problem was shown to be polynomial-time solvable when vertices have weak rankings [21]. Note also that Theorem 1.6 does not follow from the NP-hardness of either the \(k\)-unpopularity margin branching problem [21] or the \(k\)-unpopularity margin assignment problem [20]. To show the NP-hardness of the \(k\)-unpopularity margin arborescence problem when vertices have strict rankings, we construct a directed graph \(G=(V\cup\{r\},E=E_{1}\cup E_{2}\cup E_{3})\) as follows; see Figure 2 for an illustration. For each \(u\in U\) we construct a gadget \(G_{u}\) whose vertex set is \(\{u_{0},u_{1}\}\cup A_{u}\cup B_{u}\) where \(A_{u}=\{a_{u,1},a_{u,2},a_{u,3}\}\) and \(B_{u}=\{b_{u,1},b_{u,2},b_{u,3}\}\). 
First we add four 2-cycles, with all their edges in \(E_{1}\), on vertex sets \(\{a_{u,i},b_{u,i}\}\) for each \(i=1,2,3\), as well as on \(\{u_{0},u_{1}\}\); these \(8|U|\) edges comprise \(E_{1}\). We next add edges of \(E_{2}\): first, we stitch together the three 2-cycles on \(A_{u}\cup B_{u}\) with edges \((a_{u,3},b_{u,2})\), \((a_{u,2},b_{u,1})\), and \((a_{u,1},b_{u,3})\); second, we add all possible edges between \(\{u_{0},u_{1}\}\) and \(A_{u}\), creating a bidirected \(K_{2,3}\). We denote the unique 6-cycle on \(A_{u}\cup B_{u}\) as \(C_{u}\). This finishes the construction of our gadget \(G_{u}\). To complete the definition of \(G\), it remains to define \(E_{3}\). To this end, for each \(u\in U\) we fix an arbitrary ordering over the three sets of \(\mathcal{S}\) containing \(u\), and denote them as \(S(u,1)\), \(S(u,2)\), and \(S(u,3)\). We then let \[E_{3}=\{(r,u_{0}),(r,u_{1}):u\in U\}\cup\{(b_{u,i},b_{v,j}):\exists S\in \mathcal{S}\text{ s.t. }S=S(u,i)=S(v,j)\}.\] To define the preferences of each vertex in \(G\), we let all vertices prefer edges of \(E_{1}\) to those in \(E_{2}\), which in turn are preferred to edges of \(E_{3}\). Whenever some vertex has more than one incoming edge in some \(E_{i}\), \(i\in\{1,2,3\}\), then it orders them in some fixed strict order with the only constraint that edges from \(u_{0}\) are preferred to edges from \(u_{1}\) for each \(u\in U\). We are going to show that our instance of Exact 3-Cover is solvable if and only if \(G\) admits an arborescence with \(\mu(A)\leq 2n\). First, assume that there exists some \(\mathcal{T}\subseteq\mathcal{S}\) of size \(n\) that covers each \(u\in U\) exactly once. Let \(\sigma(u)\) denote the index in \(\{1,2,3\}\) for which \(S(u,\sigma(u))\in\mathcal{T}\). We then let \[A=\bigcup_{u\in U}\{(r,u_{0}),(u_{0},u_{1}),(u_{0},a_{u,\sigma(u)}),(a_{u, \sigma(u)},b_{u,\sigma(u)})\}\cup(C_{u}\setminus\{e\in C_{u}:e\text{ is incident to }b_{u,\sigma(u)}\}).\] Note that \(A\) is an arborescence in \(G\). To prove that the unpopularity margin of \(A\) is at most \(2n\), we will use the fact that, by definition, \(\mu(A)=\max_{A^{\prime}}\phi(A^{\prime},A)-\phi(A,A^{\prime})\) is the optimal value of LP1. Therefore, to show that \(\mu(A)\leq 2n\) it suffices to give a dual feasible solution with objective value \(2n\). To this end, we define a chain \(\mathcal{C}=\{C_{1},C_{2},C_{3}\}\) with \(C_{1}\subsetneq C_{2}\subsetneq C_{3}=E\) by setting \[C_{1} =\{(a_{u,i},b_{u,i}),(b_{u,i},a_{u,i}):u\in U,i\in\{1,2,3\}\},\] \[C_{2} =\bigcup_{\{u,v,z\}\in\mathcal{T}}\{e\in E:e\text{ has both endpoints in }V(G_{u})\cup V(G_{v})\cup V(G_{z})\}.\] Figure 2: Illustration of a gadget \(G_{u}\) in the proof of Theorem 1.6. Preferences are encoded using line types and colors as in Figure 2. Note that \(\mathrm{rank}(C_{1})=3|U|\), \(\mathrm{rank}(C_{2})=(3\cdot 8-1)n=7|U|+2n\), and \(\mathrm{rank}(C_{3})=8|U|\). To define a feasible solution \((\vec{y},\vec{\alpha})\) for LP2, for each \(S\subseteq E\) we let \(y_{S}=1\) if \(S\in\mathcal{C}\), and \(y_{S}=0\) otherwise; we also set \[\alpha_{a_{u,i}} =\left\{\begin{array}{ll}-3&\quad\text{if $i\neq\sigma(u)$,}\\ -2&\quad\text{if $i=\sigma(u)$,}\end{array}\right. \alpha_{u_{0}}=-1,\] \[\alpha_{b_{u,i}} =\left\{\begin{array}{ll}-2&\quad\text{if $i\neq\sigma(u)$,}\\ -3&\quad\text{if $i=\sigma(u)$,}\end{array}\right. \alpha_{u_{1}}=-2,\] for each \(u\in U\). See Figure 3 for an illustration. 
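To make the gadget and the dual values concrete, here is a small sketch of a single gadget \(G_{u}\) together with the \(\alpha\)-values displayed above. It is an illustrative aid only: the function name `gadget`, the parameter `sigma_u`, and the string vertex labels are our own choices rather than notation used in the proof.

```
def gadget(u, sigma_u):
    # Edge sets E1, E2 of gadget G_u and the alpha-values defined above.
    # sigma_u is the index in {1, 2, 3} with S(u, sigma_u) in the cover T.
    a = {i: f"a_{u}_{i}" for i in (1, 2, 3)}
    b = {i: f"b_{u}_{i}" for i in (1, 2, 3)}
    u0, u1 = f"{u}_0", f"{u}_1"

    # E1: four 2-cycles, on {a_i, b_i} for i = 1, 2, 3 and on {u0, u1}
    pairs = [(a[i], b[i]) for i in (1, 2, 3)] + [(u0, u1)]
    E1 = pairs + [(y, x) for x, y in pairs]

    # E2: the three stitching edges closing the 6-cycle C_u, plus a
    # bidirected K_{2,3} between {u0, u1} and A_u
    E2 = [(a[3], b[2]), (a[2], b[1]), (a[1], b[3])]
    E2 += [(x, a[i]) for x in (u0, u1) for i in (1, 2, 3)]
    E2 += [(a[i], x) for x in (u0, u1) for i in (1, 2, 3)]

    # alpha-values of the dual solution (y, alpha) displayed above
    alpha = {u0: -1, u1: -2}
    for i in (1, 2, 3):
        alpha[a[i]] = -2 if i == sigma_u else -3
        alpha[b[i]] = -3 if i == sigma_u else -2
    return E1, E2, alpha

E1, E2, alpha = gadget("u", sigma_u=2)
assert len(E1) == 8                     # 8|U| edges of E1 in total
assert sum(alpha.values()) == -18       # the alpha-values sum to -18 per gadget
```

The per-gadget sum of \(-18\) is exactly what makes the objective value computed next collapse to \(2n\).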
The objective value of \((\vec{y},\vec{\alpha})\) is \[\sum_{C_{i}\in\mathcal{C}}\mathrm{rank}(C_{i})+\sum_{v\in V}\alpha_{v}=3|U|+7| U|+2n+8|U|-18|U|=2n.\] Therefore, to prove that \(A\) has unpopularity margin at most \(2n\), it suffices to show that \((\vec{y},\vec{\alpha})\) is a feasible solution for LP2, as stated by Claim 7.1 below. The proofs of claims marked by an asterisk (\(\star\)) are deferred to Appendix B. **Claim 7.1**.: _[\(\star\)] \((\vec{y},\vec{\alpha})\) is a feasible solution for LP2._ For the other direction, assume that \(G\) admits an arborescence \(A\) with \(\mu(A)\leq 2n\). Let \(B\) be an arborescence that yields an optimal solution for LP1, maximizing \(\phi(B,A)-\phi(A,B)\leq 2n\). First note that we can assume that \(A\) is Pareto-optimal in the sense that there is no arborescence that is weakly preferred by all vertices to \(A\), and strictly preferred by at least one vertex to \(A\). Similarly, we can choose \(B\) to be Pareto-optimal as well. Consequently, for any two edges \(e,e^{\prime}\in E_{1}\) forming a 2-cycle, both \(A\) and \(B\) uses at least one of \(e\) and \(e^{\prime}\). For some \(X\subseteq V\) and two arborescences \(A^{\prime}\) and \(A^{\prime\prime}\), let \(\phi_{X}(A^{\prime},A^{\prime\prime})\) denote the number of vertices in \(X\) that prefer \(A^{\prime}\) to \(A^{\prime\prime}\). We say that a gadget \(G_{u}\) is _clean_, if \(\phi_{V(G_{u})}(B,A)-\phi_{V(G_{u})}(A,B)\leq 0\). By \(\mu(A)=\phi(B,A)-\phi(A,B)=\sum_{u\in U}\phi_{V(G_{u})}(B,A)-\phi_{V(G_{u})}( A,B)\leq 2n\), we know that there are at least \(|U|-2n=n\) clean gadgets. Let \(U^{\star}=\{u:G_{u}\text{ is clean}\}\). **Claim 7.2**.: _[\(\star\)] If \(G_{u}\) is clean, then a unique edge of \(A\) enters \(G_{u}\), and it comes from \(r\)._ By Claim 7.2, for each \(u\in U^{\star}\) there exists a vertex \(\hat{u}\in\{u_{0},u_{1}\}\) for which \((r,\hat{u})\in A\). We can also assume w.l.o.g. that \(A\) and \(B\) coincide on \(G_{u}\), since otherwise we can replace \(B\) with the arborescence \(B^{\star}=B\setminus\{\delta(x):x\in V(G_{u}),u\in U^{\star}\}\cup\{A(x):x\in V (G_{u}),u\in U^{\star}\}\), since \(B^{\star}\) is also optimal for LP2. Furthermore, we get that for each \(u\in U^{\star}\) there exists some \(i\in\{1,2,3\}\) for which \(A(a_{u,i})\) comes from \(\{u_{0},u_{1}\}\); let \(\sigma(u)\) denote this index. Figure 3: Illustration of the arborescence \(A\) in the proof of Theorem 1.6, shown in bold, together with a feasible dual solution \((\vec{y},\vec{\alpha})\) certifying \(\mu(A)\leq 2n\). The figure assumes \(\sigma(u)=2\). The chain \(C_{1}\subsetneq C_{2}\subsetneq C_{3}=E\) is shown using grey and dotted ellipses for edges in \(C_{1}\) and \(C_{2}\), respectively, while the values \(\alpha_{v}\), \(v\in V\), are written within the corresponding vertices. **Claim 7.3**.: _[_4_]_ _If \(u\in U^{\star}\), then the tail of each edge \(f\in\delta(b_{u,\sigma(u)})\cap E_{3}\) is a descendant of \(\hat{u}\) in \(B\)._ We claim that \(\mathcal{T}=\{S(u,\sigma(u)):u\in U^{\star}\}\) is a solution to our instance of Exact 3-Cover. First observe that \(\mathcal{T}\) contains at least \(n\) sets by \(|U^{\star}|\geq n\). It remains to show that the sets in \(\mathcal{T}\) are pairwise disjoint. We say that \(G_{v}\) is _assigned_ to \(u\in U^{\star}\), if \(v\in S(u,\sigma(u))\). It suffices to show that no gadget \(G_{v}\) can be assigned to more than one vertices in \(U^{\star}\). 
Assume for the sake of contradiction that \(G_{v}\) is assigned to both \(u\) and \(w\) for two distinct vertices \(u,w\in U^{\star}\). Then by Claim 7.3 there are two vertices in \(B_{v}\), one of them a descendant of \(\hat{u}\), the other a descendant of \(\hat{w}\). Note that neither \(\hat{u}\) nor \(\hat{w}\) is a descendant of the other, since both \((r,\hat{u})\) and \((r,\hat{w})\) are edges in \(A\), and hence, in \(B\) (recall that \(A\) and \(B\) coincide on \(G_{u}\) and on \(G_{w}\)). This means that there are two distinct edges entering \(B_{v}\), one from a descendant of \(\hat{u}\), the other from a descendant of \(\hat{w}\). Thus for some \(j\in\{1,2,3\}\), the edges \(B(b_{v,j})\) and \(B(b_{v,j+1})\) are both in \(E_{3}\), implying also \((b_{v,j},a_{v,j})\in B\) and \((b_{v,j+1},a_{v,j+1})\in B\), where indices are taken modulo 3 (so \(a_{v,4}=a_{v,1}\) and \(b_{v,4}=b_{v,1}\)). However, this contradicts the Pareto-optimality of \(B\), since replacing \(B(b_{v,j})\) with \((a_{v,j+1},b_{v,j})\) in \(B\) results in an arborescence that \(b_{v,j}\) prefers to \(B\), with all other vertices being indifferent between the two. This shows that any two sets in \(\mathcal{T}\) are disjoint, proving the correctness of our reduction. ## 8 Conclusions We considered the popular arborescence problem, which asks to determine whether a given directed rooted graph, in which vertices have preferences over incoming edges, admits a popular arborescence or not and to find one if so. We provided a polynomial-time algorithm to solve this problem, which affirmatively answers an open problem posed in 2019 [24]. Our algorithm and its correctness proof work in the generality of matroid intersection (where one of the matroids is a partition matroid), which means that we also solved the popular common base problem. Furthermore, we observed that the popular common independent set problem, which includes the popular colorful forest problem as a special case, can be reduced to the popular common base problem, and hence can be solved by our algorithm. Utilizing structural observations, we also proved that the min-cost popular common independent set problem is tractable if preferences are weak rankings. On the intractability side, we proved that the min-cost popular arborescence problem and the \(k\)-unpopularity margin arborescence problem are both NP-hard even for strict preferences. Note that the min-cost problem is NP-hard for _popular common bases_ (a fact implied by the NP-hardness of the popular assignment problem shown in [20], as well as by Theorem 1.4), while it is tractable for _popular common independent sets_ by Theorem 1.3. By analogy, one may expect the problem of finding a common independent set with unpopular margin at most \(k\) to be polynomial-time solvable. However, this is not the case (unless \(\mathsf{P}=\mathsf{NP}\)), since the \(k\)-unpopularity matching problem is NP-hard even for strict rankings [26]. Note that the \(k\)-unpopularity margin branching problem is polynomial-time solvable when preferences are weak rankings, as shown in [21], but this does not contradict the above fact: branchings and matchings are both special cases of common independent sets (where one matroid is a partition matroid), but neither of them includes the other. ### Acknowledgments We are grateful for inspiring discussions on the popular arborescence problem to Chien-Chung Huang, Satoru Iwata, Tamas Kiraly, Jannik Matuschke, and Ulrike Schmidt-Kraepelin. 
We thank the anonymous reviewers for their valuable comments. Telikepalli Kavitha is supported by the Department of Atomic Energy, Government of India, under project no. RTI4001. Kazuhisa Makino is partially supported by JSPS KAKENHI Grant Numbers JP20H05967, JP19K22841, and JP20H00609. Ildiko Schlotter is supported by the Hungarian Academy of Sciences under its Momentum Programme (LP2021-2) and its Janos Bolyai Research Scholarship, and by the Hungarian Scientific Research Fund (OTKA grant K124171). Yu Yokoi is supported by JST PRESTO Grant Number JPMJPR212B. This work was partially supported by the joint project of Kyoto University and Toyota Motor Corporation, titled "Advanced Mathematical Science for Mobility Society." ## Appendix A Examples of Algorithm Execution We illustrate how Algorithm 1 works using some examples. We provide three instances of the popular arborescence problem. In all of these instances, a digraph is given as \(G=(V\cup\{r\},E)\) with \(V=\{a,b,c,d\}\), and each node \(v\in V\) has a strict preference on \(\delta(v)\). For better readability, for a multichain \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) with \(C_{1}\subseteq\cdots\subseteq C_{p}\) we will also use the notation \(\langle C_{1},\ldots,C_{p}\rangle\). ### Example 1 This instance is similar to the one illustrated in Section 1; the only difference is that now the edge \((r,d)\) is deleted. In contrast to the case where \((r,d)\) exists, this instance admits a popular arborescence, which is found by Algorithm 1 as follows. The preference orders for the four vertices are as follows: \[(b,a)\succ_{a}(c,a)\succ_{a}(r,a)\] \[(a,b)\succ_{b}(d,b)\succ_{b}(r,b)\] \[(d,c)\succ_{c}(a,c)\succ_{c}(r,c)\] \[(c,d)\succ_{d}(b,d).\] For convenience, we denote by \(E_{1}\), \(E_{2}\), and \(E_{3}\) the sets of the first, second and third choice edges, respectively. That is, \(E_{1}=\{(b,a),(a,b),(d,c),(c,d)\}\), \(E_{2}=\{(c,a),(d,b),(a,c),(b,d)\}\), and \(E_{3}=\{(r,a),(r,b),(r,c)\}\). **Algorithm Execution.** Below we describe the steps in our algorithm. 1. \(p=1\) and \(C_{1}=E\). Then \(E(\mathcal{C})=E_{1}\) and \(I=\{(a,b),(c,d)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=2<4=\text{rank}(C_{1})\), the set \(C_{1}\) is updated to \(\text{span}(I\cap C_{1})=E_{1}\). Since \(C_{1}=C_{p}\) is updated, \(p\) is incremented and \(E\) is added to \(\mathcal{C}\) as \(C_{2}\). 2. \(p=2\) and \(\langle C_{1},C_{2}\rangle=\langle E_{1},E\rangle\). Then \(E(\mathcal{C})=E_{1}\cup E_{2}\) and \(I=\{(a,b),(c,d),(a,c)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=2=\text{rank}(C_{1})\) and \(|I\cap C_{2}|=3<4=\text{rank}(C_{2})\), the set \(C_{2}\) is updated to \(\text{span}(I\cap C_{2})=E_{1}\cup E_{2}\). Since \(C_{2}=C_{p}\) is updated, \(p\) is incremented and \(E\) is added to \(\mathcal{C}\) as \(C_{3}\). 3. \(p=3\) and \(\langle C_{1},C_{2},C_{3}\rangle=\langle E_{1},\,E_{1}\cup E_{2},\,E\rangle\). Then \(E(\mathcal{C})=\{(c,d)\}\cup E_{2}\cup E_{3}\) and \(I=\{(c,d),(c,a),(d,b),(r,c)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=1<2=\text{rank}(C_{1})\), the set \(C_{1}\) is updated to \(\text{span}(I\cap C_{1})=\{(c,d),(d,c)\}\). 4. \(p=3\) and \(\langle C_{1},C_{2},C_{3}\rangle=\langle\{(c,d),(d,c)\},\,E_{1}\cup E_{2},\,E\rangle\). 
Then we have \(E(\mathcal{C})=\{(r,a),(b,a),(r,b),(a,b),(r,c),(a,c),(c,d),(b,d)\}\) (all edges on the figure to the right) and \(I=\{(r,a),(a,b),(a,c),(c,d)\}\) (thick edges on the figure to the right) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{i}|=\text{rank}(C_{i})\) holds for \(i=1,2,3\), the algorithm returns \(I\). Note that \(I^{\prime}=\{(r,b),(b,a),(a,c),(c,d)\}\) is also a possible output of the algorithm. Indeed, both \(I\) and \(I^{\prime}\) are popular arborescences. ### Example 2 We next demonstrate how the algorithm works for an instance that admits no popular arborescences. Consider the instance illustrated in the introduction. For the reader's convenience, we include the same figure again. As observed there, this instance has no popular arborescence. We denote by \(E_{1}\), \(E_{2}\), and \(E_{3}\) the sets of the first, second and third rank edges, respectively. Note that, unlike in Example 1, here \(E_{3}\) contains \((r,d)\). **Algorithm Execution** 1. The first step is the same as Step 1 in Example 1. That is, \(p=1\), \(C_{1}=E\), \(E(\mathcal{C})=E_{1}\), and \(I=\{(a,b),(c,d)\}\) is found as a lex-maximal branching in \(E(\mathcal{C})\). Then, \(C_{1}\) is updated to \(\operatorname{span}(I\cap C_{1})=E_{1}\), \(p\) is incremented, and \(E\) is added to \(\mathcal{C}\) as \(C_{2}\). 2. The second step is also the same as Step 2 in Example 1. That is, \(p=2\), \(\langle C_{1},C_{2}\rangle=\langle E_{1},E\rangle\), \(E(\mathcal{C})=E_{1}\cup E_{2}\), and \(I=\{(a,b),(c,d),(a,c)\}\) is found as a lex-maximal branching in \(E(\mathcal{C})\). Then, \(C_{2}\) is updated to \(\operatorname{span}(I\cap C_{2})=E_{1}\cup E_{2}\), \(p\) is incremented, and \(E\) is added to \(\mathcal{C}\) as \(C_{3}\). 3. \(p=3\) and \(\langle C_{1},C_{2},C_{3}\rangle=\langle E_{1},E_{1}\cup E_{2},E\rangle\). Then \(E(\mathcal{C})=E_{2}\cup E_{3}\) (compared to Example 1, here \((r,d)\) is included while \((c,d)\) is excluded) and \(I=\{(a,c),(b,d),(r,a),(r,b)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=0<2=\operatorname{rank}(C_{1})\), the set \(C_{1}\) is updated to \(\operatorname{span}(I\cap C_{1})=\emptyset\). 4. \(p=3\) and \(\langle C_{1},C_{2},C_{3}\rangle=\langle\emptyset,E_{1}\cup E_{2},E\rangle\). Then \(E(\mathcal{C})=E_{1}\cup E_{3}\) and \(I=\{(a,b),(c,d),(r,a),(r,c)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=\operatorname{rank}(C_{1})\) and \(|I\cap C_{2}|=2<3=\operatorname{rank}(C_{2})\), the set \(C_{2}\) is updated to \(\operatorname{span}(I\cap C_{2})=E_{1}\). 5. \(p=3\) and \(\langle C_{1},C_{2},C_{3}\rangle=\langle\emptyset,E_{1},E\rangle\). Then \(E(\mathcal{C})=E_{1}\cup E_{2}\) and \(I=\{(a,b),(c,d),(a,c)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). (Observe that these \(E(\mathcal{C})\) and \(I\) are the same as Step 2.) Since \(|I\cap C_{i}|=\operatorname{rank}(C_{i})\) for \(i=1,2\) and \(|I\cap C_{3}|=3<4=\operatorname{rank}(C_{3})\), the set \(C_{3}\) is updated to \(\operatorname{span}(I\cap C_{3})=E_{1}\cup E_{2}\), \(p\) is incremented, and \(E\) is added to \(\mathcal{C}\) as \(C_{4}\). 6. \(p=4\) and \(\langle C_{1},C_{2},C_{3},C_{4}\rangle=\langle\emptyset,E_{1},E_{1}\cup E_{2},E\rangle\). Then, as in Step 3, \(E(\mathcal{C})=E_{2}\cup E_{3}\) and \(I=\{(a,c),(b,d),(r,a),(r,b)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). 
Since \(|I\cap C_{1}|=\operatorname{rank}(C_{1})\) and \(|I\cap C_{2}|=0<2=\operatorname{rank}(C_{2})\), the set \(C_{2}\) is updated to \(\operatorname{span}(I\cap C_{2})=\emptyset\). 7. \(p=4\) and \(\langle C_{1},C_{2},C_{3},C_{4}\rangle=\langle\emptyset,\emptyset,E_{1}\cup E _{2},E\rangle\). By the same argument as in Step 4, the set \(C_{3}\) is updated to \(E_{1}\). 8. \(p=4\) and \(\langle C_{1},C_{2},C_{3},C_{4}\rangle=\langle\emptyset,\emptyset,E_{1},E\rangle\). By the same argument as in Step 5, the set \(C_{4}\) is updated to \(E_{1}\cup E_{2}\), \(p\) is incremented, and \(E\) is added to \(\mathcal{C}\) as \(C_{5}\). 9. \(p=5\) and \(\langle C_{1},C_{2},C_{3},C_{4},C_{5}\rangle=\langle\emptyset,\emptyset,E_{1}, E_{1}\cup E_{2},E\rangle\). Since \(p=5>4=n=|V|\), the algorithm halts with returning "\(G\) has no popular arborescence." The reader might observe that whenever \(C_{1}\) becomes empty in the algorithm, then by Lemma 3.1 we can conclude that the instance admits no popular arborescence, since the dual certificate contains only non-empty sets (Lemma 2.2) and hence \(D_{1}\subseteq C_{1}=\emptyset\) is not possible. Therefore, we could in fact stop the algorithm already in Step 3 when \(C_{1}\) gets updated to \(\emptyset\). Nevertheless, the algorithm will reach a correct answer even without using this observation, as illustrated by the above example. ### Example 3. We next provide an example that shows the importance of considering multichains. During the algorithm's execution on this instance, \(\mathcal{C}\) does become a multichain that is not a chain. The preferences of the four vertices are as follows: \[(b,a)\succ_{a}(r,a)\] \[(c,b)\succ_{b}(a,b)\] \[(d,c)\succ_{c}(b,c)\] \[(c,d)\] where \((c,d)\) is the unique incoming edge of \(d\). For convenience, we denote by \(E_{abcd}\), \(E_{bcd}\), and \(E_{cd}\) the edge sets of the induced subgraphs for the vertex sets \(\{a,b,c,d\}\), \(\{b,c,d\}\), and \(\{c,d\}\), respectively. That is, \(E_{abcd}=E\setminus\{(r,a)\}\), \(E_{bcd}=\{(b,c),(c,b),(c,d),(d,c)\}\), and \(E_{cd}=\{(c,d),(d,c)\}\). Note that \(\{(r,a),(a,b),(b,c),(c,d)\}\) is the unique arborescence in this instance, and hence it is a popular arborescence. **Algorithm Execution** 1. \(p=1\) and \(C_{1}=E\). Then \(E(\mathcal{C})=\{(b,a),(c,b),(d,c),(c,d)\}\) and \(I=\{(b,a),(c,b),(c,d)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=3<4=\mathrm{rank}(C_{1})\), the set \(C_{1}\) is updated to \(\mathrm{span}(I\cap C_{1})=E_{abcd}\). Since \(C_{1}=C_{p}\) is updated, \(p\) is incremented and \(E\) is added to \(\mathcal{C}\) as \(C_{2}\). 2. \(p=2\) and \(\langle C_{1},C_{2}\rangle=\langle E_{abcd},E\rangle\) (shown by braces on the right). Then \(E(\mathcal{C})=\{(r,a),(b,a),(c,b),(d,c),(c,d)\}\) (all edges on the right) and \(I=\{(b,a),(c,b),(c,d)\}\) (thick edges on the right) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=\mathrm{rank}(C_{1})\) and \(|I\cap C_{2}|=3<4=\mathrm{rank}(C_{2})\), \(C_{2}\) is updated to \(\mathrm{span}(I\cap C_{2})=E_{abcd}\). Since \(C_{2}=C_{p}\) is updated, \(p\) is incremented and \(E\) is added to \(\mathcal{C}\) as \(C_{3}\). 3. \(p=3\) and \(\langle C_{1},C_{2},C_{3}\rangle=\langle E_{abcd},\,E_{abcd},\,E\rangle\) (so \(C_{1}=C_{2}\)). Then \(E(\mathcal{C})=\{(r,a),(c,b),(d,c),(c,d)\}\). Note that \((b,a)\) is not in \(E(\mathcal{C})\) as \(\mathsf{lev}_{\mathcal{C}}((b,a))=1\) while \(\mathsf{lev}_{\mathcal{C}}((r,a))=3\). 
\(I=\{(r,a),(c,b),(c,d)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=2<3=\mathrm{rank}(C_{1})\), the set \(C_{1}\) is updated to \(\mathrm{span}(I\cap C_{1})=E_{bcd}\). 4. \(p=3\) and \(\langle C_{1},C_{2},C_{3}\rangle=\langle E_{bcd},\,E_{abcd},\,E\rangle\). Then, \(E(\mathcal{C})=E\setminus\{(b,c)\}\) and \(I=\{(b,a),(c,b),(c,d)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{i}|=\mathrm{rank}(C_{i})\) for \(i=1,2\) and \(|I\cap C_{3}|=3<4=\mathrm{rank}(C_{3})\), the set \(C_{3}\) is updated to \(\mathrm{span}(I\cap C_{3})=E_{abcd}\). Since \(C_{3}=C_{p}\) is updated, \(p\) is incremented and \(E\) is added to \(\mathcal{C}\) as \(C_{4}\). 5. \(p=4\) and \(\langle C_{1},C_{2},C_{3},C_{4}\rangle=\langle E_{bcd},\,E_{abcd},\,E_{abcd},E\rangle\). \(E(\mathcal{C})=E\setminus\{(b,a),(b,c)\}\) and \(I=\{(r,a),(c,b),(c,d)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=\mathrm{rank}(C_{1})\) and \(|I\cap C_{2}|=2<3=\mathrm{rank}(C_{2})\), the set \(C_{2}\) is updated to \(\mathrm{span}(I\cap C_{2})=E_{bcd}\). 6. \(p=4\) and \(\langle C_{1},C_{2},C_{3},C_{4}\rangle=\langle E_{bcd},\,E_{bcd},\,E_{abcd},E\rangle\). Then \(E(\mathcal{C})=E\setminus\{(b,c),(c,b)\}\) and \(I=\{(r,a),(a,b),(c,d)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{1}|=1<2=\mathrm{rank}(C_{1})\), the set \(C_{1}\) is updated to \(\mathrm{span}(I\cap C_{1})=E_{cd}\). 7. \(p=4\) and \(\langle C_{1},C_{2},C_{3},C_{4}\rangle=\langle E_{cd},\,E_{bcd},\,E_{abcd},E\rangle\). Then \(E(\mathcal{C})=E\) and \(I=\{(r,a),(a,b),(b,c),(c,d)\}\) is a lex-maximal branching in \(E(\mathcal{C})\). Since \(|I\cap C_{i}|=\mathrm{rank}(C_{i})\) holds for \(i=1,2,3,4\), the algorithm returns \(I\). ## Appendix B Deferred proofs Here we present all proofs that we omitted from the main body of the paper, namely, from Section 7. For the convenience of the reader, we re-state each claim before providing its proof. **Claim 7.1**: _[\(\star\)] (\(\vec{y},\vec{\alpha}\)) is a feasible solution for LP2._ We need to verify that \[|\{C\in\mathcal{C}:e\in C\}|+\alpha_{v}\geq\mathsf{wt}_{A}(e) \tag{2.4}\] holds for each edge \(e\) entering some vertex \(v\). First assume \(e\in C_{1}\), in which case \(e\) is contained in three sets of \(\mathcal{C}\). If \(e\in C_{1}\cap A\), then \(\mathsf{wt}_{A}(e)=0\) and \(\alpha_{v}=-3\) ensures (2.4). If \(e\in C_{1}\setminus A\), then \(\mathsf{wt}_{A}(e)=1\) but \(\alpha_{v}=-2\), so (2.4) is again satisfied. Second, assume \(e\in C_{2}\), in which case \(e\) is contained in two sets from \(\mathcal{C}\). If \(e\in C_{2}\cap A\), then \(\alpha_{v}=-2\), which implies (2.4). If \(e\in C_{2}\setminus A\), then we distinguish between two cases: if \(v=u_{0}\) for some \(u\in U\), then \(\mathsf{wt}_{A}(e)=1\) and \(\alpha_{v}=-1\); otherwise \(\mathsf{wt}_{A}(e)=-1\) and \(\alpha_{v}\geq-3\) (note that here we used that all vertices \(a_{u,i}\) prefer \((u_{0},a_{u,i})\) to \((u_{1},a_{u,i})\)). Hence, \(e\) again satisfies (2.4). Third, assume \(e\in C_{3}\), in which case \(e\) is contained in one set from \(\mathcal{C}\). Let \(G_{u}\) be the gadget entered by \(e\). If \(e=(r,u_{0})\in A\), then \(\mathsf{wt}_{A}(e)=0\) and \(\alpha_{v}=\alpha_{u_{0}}=-1\), and thus (2.4) holds. Otherwise \(\mathsf{wt}_{A}(e)=-1\). Let \(T\) be the set in \(\mathcal{T}\) containing \(u\). 
Note that either \(v=u_{1}\) or \(v=b_{u,j}\) for some \(j\neq\sigma(u)\), because all edges entering \(b_{u,\sigma(u)}\) are contained in \(C_{2}\), since they each originate in some gadget \(G_{v}\) with \(v\in T\). Therefore, we have \(\alpha_{v}=-2\) in both cases, which implies (2.4) for \(e\).

**Claim 7.2**: _[\(\star\)] If \(G_{u}\) is clean, then a unique edge of \(A\) enters \(G_{u}\), and it comes from \(r\)._

Assume for the sake of contradiction that the claim does not hold for some \(u\in U^{\star}\); this means that \(A\) must reach \(G_{u}\) through an edge \(e\in E_{3}\) pointing to some vertex of \(B_{u}\); let \(b_{u,j}\) denote this vertex. Let \(u_{h}\) be the vertex where \(B\) enters \(\{u_{0},u_{1}\}\); then \((u_{h},u_{1-h})\in B\). Define \(B^{\prime}\) as follows:
\[B^{\prime}=B\setminus\{\delta(x):x\in V(G_{u})\}\cup\{(r,u_{1-h}),(u_{1-h},u_{h}),(u_{0},a_{u,j+1}),(a_{u,j+1},b_{u,j+1})\}\cup(C_{u}\setminus\delta(a_{u,j+1})\setminus\delta(b_{u,j+1}))\]
where indices are taken modulo \(3\) (so \(a_{u,4}=a_{u,1}\) and \(b_{u,4}=b_{u,1}\)). Observe that \(B^{\prime}\) is an arborescence. If \((u_{0},a_{u,j+1})\notin A\), then \((b_{u,j+1},a_{u,j+1})\in A\), and thus vertices \(u_{h}\), \(b_{u,j}\), and \(b_{u,j+1}\) all prefer \(B^{\prime}\) to \(A\), while vertices \(u_{1-h}\) and \(a_{u,j+1}\) prefer \(A\) to \(B^{\prime}\). If \((u_{0},a_{u,j+1})\in A\), then vertices \(u_{h}\) and \(b_{u,j}\) prefer \(B^{\prime}\) to \(A\), vertex \(u_{1-h}\) prefers \(A\) to \(B^{\prime}\), while \(a_{u,j+1}\) and \(b_{u,j+1}\) are indifferent between them; note that \((a_{u,j+1},b_{u,j+1})\in A\cap B^{\prime}\). Furthermore, if \(b_{u,j-1}\) prefers \(A\) to \(B^{\prime}\), then \((a_{u,j-1},b_{u,j-1})\in A\), and therefore \(a_{u,j-1}\) prefers \(B^{\prime}\) to \(A\). Summing up all these facts, we obtain \(\phi_{V(G_{u})}(B^{\prime},A)-\phi_{V(G_{u})}(A,B^{\prime})\geq 1\), which in turn implies \(\phi(B^{\prime},A)-\phi(A,B^{\prime})>\phi(B,A)-\phi(A,B)\), a contradiction to our choice of \(B\).

**Claim 7.3**: _[\(\star\)] If \(u\in U^{\star}\), then the tail of each edge \(f\in\delta(b_{u,\sigma(u)})\cap E_{3}\) is a descendant of \(\hat{u}\) in \(B\)._

Define \(B_{f}\) as follows:
\[B_{f}=B\setminus\{\delta(x):x\in V(C_{u})\text{ or }x=\hat{u}\}\cup\{f,(a_{u,\sigma(u)},\hat{u})\}\cup(C_{u}\setminus\delta(b_{u,\sigma(u)})).\]
Observe that there is an edge from \(\hat{u}\) to the other vertex of \(\{u_{0},u_{1}\}\) shared by \(A\) and \(B_{f}\). Suppose that \(B_{f}\) is an arborescence. Note that vertices \(\hat{u}\) and \(a_{u,\sigma(u)}\) prefer \(B_{f}\) to \(A\), while vertex \(b_{u,\sigma(u)}\) prefers \(A\) to \(B_{f}\) (because \((a_{u,\sigma(u)},b_{u,\sigma(u)})\in A\)). Furthermore, if some \(b_{u,i}\), \(i\neq\sigma(u)\), prefers \(A\) to \(B_{f}\), then \(a_{u,i}\) prefers \(B_{f}\) to \(A\). Hence, \(\phi_{V(G_{u})}(B_{f},A)-\phi_{V(G_{u})}(A,B_{f})\geq 1\), which also implies \(\phi(B_{f},A)-\phi(A,B_{f})>\phi(B,A)-\phi(A,B)\), contradicting our choice of \(B\). Hence, \(B_{f}\) cannot be an arborescence, which can only happen if the tail of \(f\) is a descendant of \(\hat{u}\) in \(B\).

## Appendix C Extensions and Related Results

### Popularity under size constraints

As mentioned in the introduction, the popular largest common independent set problem can be reduced to the popular common base problem.
More generally, we can reduce the popular size \([\ell,u]\) common independent set problem to the popular common base problem, where the goal of the former problem is to find a common independent set that is popular within the set of all common independent sets whose size is at least \(\ell\) and at most \(u\) (if such a solution exists). We now describe the reduction. Let \(E=E_{1}\mathbin{\dot\cup}\cdots\mathbin{\dot\cup}E_{n}\) be a given partition of \(E\) and \(M=(E,\mathcal{I})\) be a given matroid. We define a new instance as follows. For each \(i\in\{1,\ldots,n\}\), we create a new element \(e_{i}\) and extend the domain of \(\succ_{i}\) to \(E_{i}\cup\{e_{i}\}\), where \(e_{i}\) is the unique worst element. The new partition is defined as \(E^{\prime}:=\biguplus_{i=1}^{n}(E_{i}\cup\{e_{i}\})\). We define a new matroid \(M^{\prime}=(E^{\prime},\mathcal{I}^{\prime})\) by
\[\mathcal{I}^{\prime}=\{\,X\subseteq E^{\prime}:X\cap E\in\mathcal{I},\,|X\cap E|\leq u,\,|X\cap\{e_{1},\ldots,e_{n}\}|\leq n-\ell,\,|X|\leq n\,\}.\]
Note that we can assume that the rank of \(M\) (i.e., the size of a base in \(M\)) is at least \(\ell\), since otherwise the given instance clearly has no solution. Therefore, the rank of \(M^{\prime}\) is \(n\).

There exists a one-to-one correspondence between common independent sets of size in \([\ell,u]\) in the original instance and common bases of the new instance. Suppose \(I\) is a common independent set with \(\ell\leq|I|\leq u\) in the original instance. Let \(B\) be obtained from \(I\) by adding \(e_{i}\) for every unassigned \(i\in\{1,\ldots,n\}\) (that is, where \(I\cap E_{i}=\emptyset\)). Then \(B\) is a common base in the new instance. Conversely, given a common base of the new instance, we can obtain a common independent set satisfying the size constraint by projecting out the dummy elements. Furthermore, \(\phi(I,I^{\prime})=\phi(B,B^{\prime})\) holds for any common independent sets \(I\) and \(I^{\prime}\) of the original instance and their corresponding bases \(B\) and \(B^{\prime}\). Thus, the reduction is complete. The reduction used in Section 4 (to reduce the popular colorful forest problem to the popular colorful base problem) is the special case of this reduction where \(u=n\) and \(\ell=0\).

### Popularity under category-wise size constraints

We can also use our popular common base algorithm (Algorithm 1) to solve the problem of finding a common independent set that is popular under a certain kind of diversity constraint. Similarly to the above, suppose that a partition \(E=E_{1}\mathbin{\dot\cup}\cdots\mathbin{\dot\cup}E_{n}\) and a matroid \(M=(E,\mathcal{I})\) are given. We regard \(\{1,\ldots,n\}\) as the set of agents. Suppose that the set \(\{1,\ldots,n\}\) is partitioned into \(q\) categories \(P_{1}\mathbin{\dot\cup}\cdots\mathbin{\dot\cup}P_{q}\), and each category \(P_{k}\) is associated with integers \(\ell_{k}\) and \(u_{k}\) where \(\ell_{k}\leq u_{k}\). We call a common independent set \(X\subseteq E\) _admissible_ if, for each \(k=1,\ldots,q\), we have \(\ell_{k}\leq|\,\{\,i\in P_{k}:E_{i}\cap X\neq\emptyset\,\}\,|\leq u_{k}\). That is, a set \(X\) is admissible if, among the agents in each category \(P_{k}\), at least \(\ell_{k}\) and at most \(u_{k}\) agents are assigned an element.
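For concreteness, this admissibility condition can be checked directly from the partition and the category data. The following minimal Python sketch uses an illustrative encoding of our own (Python sets for \(X\) and the classes \(E_{i}\), dictionaries for the categories and their bounds); it is not notation used elsewhere in the paper.

```python
def is_admissible(X, E_parts, categories, lower, upper):
    """Category-wise condition: for each category P_k, the number of agents
    i in P_k with X ∩ E_i nonempty must lie between lower[k] and upper[k]."""
    for k, P_k in categories.items():
        assigned = sum(1 for i in P_k if X & E_parts[i])  # agents of P_k assigned an element of X
        if not (lower[k] <= assigned <= upper[k]):
            return False
    return True
```

The reduction described next enforces exactly this condition through the ranks of the auxiliary matroids.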
The problem of finding a common independent set that is popular within the set of admissible common independent sets can be reduced to the popular common base problem as follows. Similarly to the case of size constraints above, for each \(i\in\{1,\ldots,n\}\), we introduce a dummy element \(e_{i}\) that is worst in \(i\)'s preferences. Moreover, for each category \(P_{k}\), we create a set \(D_{k}\) of dummy agents with \(|D_{k}|=u_{k}-\ell_{k}\). With each agent \(j\in D_{k}\) we associate a set \(\{f_{j},g_{j}\}\) of two new elements, and these are tied in the preferences of \(j\), that is, \(f_{j}\not\succ_{j}g_{j}\) and \(g_{j}\not\succ_{j}f_{j}\). Thus, the new ground set is \(E^{*}=\bigcup_{i=1}^{n}(E_{i}\cup\{e_{i}\})\cup\bigcup_{j\in D_{1}\cup\cdots\cup D_{q}}\{f_{j},g_{j}\}\), and its partition classes are the sets \(E_{i}\cup\{e_{i}\}\) for \(i\in\{1,\ldots,n\}\) and the sets \(\{f_{j},g_{j}\}\) for \(j\in D_{k}\) and \(k\in\{1,\ldots,q\}\).

We define a matroid on \(E^{*}\). First, for \(k=1,\ldots,q\), let \(F_{k}:=\{\,e_{i}:i\in P_{k}\,\}\cup\{\,f_{j}:j\in D_{k}\,\}\) and let \((F_{k},\mathcal{I}_{k})\) be a uniform matroid defined by \(\mathcal{I}_{k}=\{\,X\subseteq F_{k}:|X|\leq|P_{k}|-\ell_{k}\,\}\). Next, let \(E^{\prime}:=E\cup\{\,g_{j}:j\in D_{1}\cup\cdots\cup D_{q}\,\}\) and define a matroid \((E^{\prime},\mathcal{I}^{\prime})\) as the truncation of the direct sum of \(M\) and the free matroid on \(\{\,g_{j}:j\in D_{1}\cup\cdots\cup D_{q}\,\}\), that is, \(\mathcal{I}^{\prime}:=\{\,X\subseteq E^{\prime}:X\cap E\in\mathcal{I},|X|\leq\sum_{k=1}^{q}u_{k}\,\}\). Let \((E^{*},\mathcal{I}^{*})\) be the direct sum of all these matroids, i.e., \(\mathcal{I}^{*}\) is defined as
\[\mathcal{I}^{*}=\{\,X\subseteq E^{*}:X\cap E\in\mathcal{I},\ |X\cap E^{\prime}|\leq\sum_{k=1}^{q}u_{k},\ |X\cap F_{k}|\leq|P_{k}|-\ell_{k}\ \text{ for }k=1,\ldots,q\,\}.\]
We can assume that the size of a base in \((E,\mathcal{I})\) is at least \(\sum_{k=1}^{q}\ell_{k}\), since otherwise the instance clearly has no admissible set. As we have \(|\,\{\,g_{j}:j\in D_{1}\cup\cdots\cup D_{q}\,\}\,|=\sum_{k=1}^{q}(u_{k}-\ell_{k})\), the size of a base in the matroid \((E^{\prime},\mathcal{I}^{\prime})\) is exactly \(\sum_{k=1}^{q}u_{k}\). Also, the size of a base in each \((F_{k},\mathcal{I}_{k})\) is \(|P_{k}|-\ell_{k}\) (since \(|F_{k}|=|P_{k}|+u_{k}-\ell_{k}\)). Thus, the size of a base of the matroid \((E^{*},\mathcal{I}^{*})\) is \(\sum_{k=1}^{q}(|P_{k}|+u_{k}-\ell_{k})\), which equals the number of agents.

We now explain how to transform admissible common independent sets of the original instance to common bases of the new instance, and vice versa. Let \(I\) be an admissible common independent set of the original instance. For each \(k=1,\ldots,q\), let \(C_{k}\subseteq P_{k}\) be the set of agents \(i\) in \(P_{k}\) with \(I\cap E_{i}=\emptyset\). Since \(I\) is admissible, \(|P_{k}|-u_{k}\leq|C_{k}|\leq|P_{k}|-\ell_{k}\). Set \(B=I\) and augment \(B\) by adding elements in the following manner. For all agents in \(C_{k}\), add the corresponding \(e_{i}\) elements to \(B\). Note that \(|P_{k}|-\ell_{k}-|C_{k}|\) is at least \(0\) and at most \(u_{k}-\ell_{k}\). Take \(|P_{k}|-\ell_{k}-|C_{k}|\) agents \(j\) from \(D_{k}\) arbitrarily and add the corresponding \(f_{j}\) elements to \(B\). For the remaining \(|D_{k}|-(|P_{k}|-\ell_{k}-|C_{k}|)=u_{k}-|P_{k}|+|C_{k}|\) agents \(j\) in \(D_{k}\), we add the corresponding \(g_{j}\) elements to \(B\).
Thus, all agents are assigned elements. Furthermore, we see that the set \(B\) satisfies \(B\cap E\in\mathcal{I}\), \(|B\cap E^{\prime}|=|I|+\sum_{k=1}^{q}(u_{k}-|P_{k}|+|C_{k}|)=\sum_{k=1}^{q}u_{k}\) (note that \(\sum_{k=1}^{q}(|P_{k}|-|C_{k}|)=|I|\)), and \(|B\cap F_{k}|=|P_{k}|-\ell_{k}\) for each \(k=1,\ldots,q\). Thus, \(B\) is a common base in the new instance.

Conversely, let \(B\) be a common base of the new instance and let \(I\) be obtained by deleting all dummy elements from \(B\). Clearly, \(I\) is a common independent set of the original instance. As \(B\) is a base in \(\mathcal{I}^{*}\), we have \(|B\cap F_{k}|=|P_{k}|-\ell_{k}\) for each \(k=1,\ldots,q\). Since \(F_{k}=\{\,e_{i}:i\in P_{k}\,\}\cup\{\,f_{j}:j\in D_{k}\,\}\), this implies \(|B\cap\{\,e_{i}:i\in P_{k}\,\}|\leq|P_{k}|-\ell_{k}\). As \(|\{\,f_{j}:j\in D_{k}\,\}|=u_{k}-\ell_{k}\), it also follows that \(|B\cap\{\,e_{i}:i\in P_{k}\,\}|\geq|P_{k}|-u_{k}\). Thus, we have \(|P_{k}|-u_{k}\leq|B\cap\{\,e_{i}:i\in P_{k}\,\}|\leq|P_{k}|-\ell_{k}\), which is equivalent to \(\ell_{k}\leq|\,\{\,i\in P_{k}:B\cap E_{i}\neq\emptyset\,\}\,|\leq u_{k}\). Thus, \(I\) is admissible in the original instance. We can also observe that \(\phi(I,I^{\prime})=\phi(B,B^{\prime})\) holds for any admissible common independent sets \(I\) and \(I^{\prime}\) of the original instance and their corresponding bases \(B\) and \(B^{\prime}\) in the new instance. Therefore, a popular admissible common independent set in the original instance corresponds to a popular common base of the new instance.

**Popular fractional solutions.** The notion of popularity can be extended to fractional solutions, or equivalently, probability distributions over integral solutions. A fractional/mixed solution \(x\) is popular if there is no fractional (in fact, integral) solution more popular than \(x\). It was shown in [23] using the minimax theorem that popular mixed matchings always exist and such a fractional/mixed matching can be computed in polynomial time. The same proof shows that a popular fractional (largest) common independent set always exists and that such a fractional solution can be computed in polynomial time by optimizing over the matroid intersection polytope. An integral solution \(I\) is _strongly popular_ if \(\phi(I,I^{\prime})>\phi(I^{\prime},I)\) for all solutions \(I^{\prime}\neq I\). As observed in [3] in the context of matchings, if a strongly popular solution exists, then it is the unique popular fractional solution. Thus there is a polynomial-time algorithm for the strongly popular (largest) common independent set problem.
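To make the reductions of this appendix concrete, the following minimal Python sketch builds the independence oracle of the auxiliary matroid \(M^{\prime}=(E^{\prime},\mathcal{I}^{\prime})\) from the size-\([\ell,u]\) reduction above, given an independence oracle for \(M\). The function names and the encoding of the dummy elements \(e_{i}\) as pairs `('dummy', i)` are our own illustrative choices.

```python
def make_size_constrained_oracle(is_independent, n, lo, hi):
    """Independence oracle for M' = (E', I'):
    X in I'  iff  X ∩ E in I,  |X ∩ E| <= hi,
                  |X ∩ {e_1,...,e_n}| <= n - lo,  and  |X| <= n.
    Dummy elements e_i are encoded as ('dummy', i); every other element
    of X is assumed to belong to the original ground set E."""
    def is_independent_new(X):
        dummies = {x for x in X if isinstance(x, tuple) and x[0] == 'dummy'}
        originals = set(X) - dummies
        return (is_independent(originals)
                and len(originals) <= hi
                and len(dummies) <= n - lo
                and len(X) <= n)
    return is_independent_new
```

Together with appending \(e_{i}\) to agent \(i\)'s preference list as its unique worst element, this oracle specifies the reduced instance on which the popular common base algorithm is run.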
2302.08991
Bridging scales with Machine Learning: From first principles statistical mechanics to continuum phase field computations to study order disorder transitions in LixCoO2
LixTMO2 (TM=Ni, Co, Mn) forms an important family of cathode materials for Li-ion batteries, whose performance is strongly governed by Li composition-dependent crystal structure and phase stability. Here, we use LixCoO2 (LCO) as a model system to benchmark a machine learning-enabled framework for bridging scales in materials physics. We focus on two scales: (a) assemblies of thousands of atoms described by density functional theory-informed statistical mechanics, and (b) continuum phase field models to study the dynamics of order-disorder transitions in LCO. Central to the scale bridging is the rigorous, quantitatively accurate, representation of the free energy density and chemical potentials of this material system by coarsegraining formation energies for specific atomic configurations. We develop active learning workflows to train recently developed integrable deep neural networks for such high-dimensional free energy density and chemical potential functions. The resulting, first principles-informed, machine learning-enabled, phase-field computations allow us to study LCO cathodes' phase evolution in terms of temperature, morphology, charge cycling and particle size.
G. H. Teichert, S. Das, M. Faghih Shojaei, J. Holber, T. Mueller, L. Hung, V. Gavini, K. Garikipati
2023-02-17T16:50:23Z
http://arxiv.org/abs/2302.08991v1
Bridging scales with Machine Learning: From first principles statistical mechanics to continuum phase field computations to study order-disorder transitions in Li\({}_{x}\)CoO\({}_{2}\) ###### Abstract Li\({}_{x}\)_TMO\({}_{2}\)_ (TM=Ni, Co, Mn) forms an important family of cathode materials for Li-ion batteries, whose performance is strongly governed by Li composition-dependent crystal structure and phase stability. Here, we use Li\({}_{x}\)CoO\({}_{2}\) (LCO) as a model system to benchmark a machine learning-enabled framework for bridging scales in materials physics. We focus on two scales: (a) assemblies of thousands of atoms described by density functional theory-informed statistical mechanics, and (b) continuum phase field models to study the dynamics of order-disorder transitions in LCO. Central to the scale bridging is the rigorous, quantitatively accurate, representation of the free energy density and chemical potentials of this material system by coarse-graining formation energies for specific atomic configurations. We develop active learning workflows to train recently developed integrable deep neural networks for such high-dimensional free energy density and chemical potential functions. The resulting, first principles-informed, machine learning-enabled, phase-field computations allow us to study LCO cathodes' phase evolution in terms of temperature, morphology, charge cycling and particle size. + Footnote †: journal: ## 1 Introduction Layered oxides of type Li\({}_{x}\)_TMO\({}_{2}\)_(\(x\in[0,1]\), TM=Ni, Mn, Co, hence called NMC materials) are commonly used cathodes in Li-ion batteries because of the tunability of the chemistry specific to an application, as well as their high energy density and charging rate. Of these, LiCoO\({}_{2}\) (LCO) has been the choice for consumer electronics, while Ni-rich compositions are attractive for high energy density batteries, important for electrification of transportation. NMCs undergo phase transitions during cycling, which can, in some cases, affect their performance. LCO suffers deleterious stacking sequence phase transitions from the O3 to H1-3 structure below \(x=1/3\)[1]. Quantitative control of cycling and structural stability of layered NMC oxides requires a first principles-based understanding of the thermodynamics of Li intercalation, its impact on the crystal structure's stability and long-term electrochemical properties, and a modeling framework that spans scales from atoms to the continuum. However, scale-bridging is a hard problem and prior studies, as we outline below, have treated the different length scales in isolation. Here, we present a key advance in further developing a scale bridging framework [2; 3] by rigorously and systematically tying first principles energetics with the free energy--in a high dimensional space accounting for all symmetries--describing the phase evolution. This is demonstrated for the O3 phase of LCO with insights to the free energy underpinnings of the thermodynamic phase evolution and order-disorder transitions under Li intercalation. The rich phase-behavior of LCO has been extensively studied by experiments. A first order phase transition has been identified for compositions \(0.75\leq x\leq 0.94\), caused by a metal-insulator transition [4, 5]. Reimers and Dahn [4] reported an ordering at \(x=1/2\), confirmed as a row ordering [6, 7]. Shao-Horn et al. [6] found evidence of ordering at \(x=1/3\) at \(\sim\)100 K. 
Additionally, charge ordering Co\({}^{3+}\) and Co\({}^{4+}\) atoms at \(x=1/2\) and \(x=2/3\) was observed at 175 K, while these charge states' occupation was random at room temperature [8, 7]. The transition from the O3 to the H1-3 structure also has been identified for \(x<0.33\)[9, 10]. There also have been numerous computational studies of LCO. _Ab initio_ thermodynamic approaches combining density functional theory (DFT) and statistical mechanics [1, 11] have been used to systematically predict phase diagrams, free energy data, and voltage curves. The first-principles methods have been combined with experiment and CALPHAD models [12] to define Gibbs free energy functions [13, 10] using traditional Redlich-Kister polynomials. DFT predictions have been combined with experimental data, such as in phase field modeling of an observed metal-insulator phase transition [14]. However, while the link between _ab initio_ and statistical mechanics computations is well established, first-principles methods have not been included in a comprehensive scale bridging framework that connects to continuum treatments such as phase field models to study the thermodynamics and kinetics of phase transitions in these systems. The current work demonstrates a rigorous framework combining advanced first-principles methods with our recent work in machine learning to bridge across DFT-informed statistical mechanics and continuum scale simulations (Figure 1a). Our use of O3 layered LCO to benchmark the approach is motivated by the extensive studies of this chemistry. To our knowledge, the systematic scale bridging demonstrated here has not been presented for this cathode material. We perform DFT+\(U\) calculations [15] to capture electronic correlation in transition metal oxides with localized \(d\) orbitals. Combined with van der Waals functionals, this validates the DFT approaches by close agreement with experimental voltages in layered LCO [16]. A cluster expansion Hamiltonian is parameterized by the DFT computations and used in Semi-Grand Canonical Monte Carlo sampling to compute Li intercalation thermodynamics as a function of temperature and chemical potential, and to construct temperature - composition phase diagrams [17, 18]. The crystallographic structure of a material determines the ordered arrangements that inserted atoms can adopt on the lattice at certain compositions. The free energy is a function of the composition, parameters describing these orderings, as well as other variables such as strain and temperature. For the regimes of LCO's phase stability considered here, six parameters determined from symmetry group considerations fully describe its ordering (SS1.2.3, 3.2.2). The statistical mechanics simulations yield free energy density derivatives; i.e., chemical potentials, with respect to the composition and order parameters. The machine learning framework that is central to this work adaptively learns a seven-dimensional representation of the free energy density as a function of composition and six order parameters from free energy derivative data. Strain and vibrational entropy have not been included; therefore mechanics and heat transport are not accounted for. Integrable deep neural networks (IDNNs) previously introduced by the authors [2, 3], can be trained to derivative data, e.g. chemical potentials obtained through Monte Carlo sampling, and analytically integrated to recover the antiderivative function, e.g. the free energy density function needed in phase field models. 
This requires adaptive sampling of non-convex regions, extrema, boundaries of admissible regions and high error points of the seven-dimensional free energy density. These machine learnt thermodynamic representations have a quantifiable precision controlled by convergence tolerances. They are used to confirm a match with extensively reported phase diagrams for LCO, as well as experimental voltage measurements. We also perform large-scale DFT calculations to compute the anti-phase boundary energies and interface energies between the different phases. Finally, using the obtained free energy densities and anti-phase boundary/interface energies, we carry out phase field studies on the dynamics of an order-disorder transition in isolated particles as well as during charge-discharge cycles on single LCO particles at temperatures of practical interest. ### Formation energies and configurations from DFT The mean voltages predicted from DFT+\(U\) for Li composition \(x=\{0,1\},\{0,0.5\},\{0.5,1\}\) and increasing values of \(U\) are plotted in Figure 1b. We found \(U=2.5\) eV for calculation of LCO formation energies to provide a good match with the experimental voltages reported in Ref. [16] for \(x=\{0,1\}\) and \(x=\{0.5,1\}\). DFT+\(U\) computations were carried out on 333 configurations identified by the CASM (Clusters' Approach to Statistical Mechanics) software [22]. The calculated formation energies and associated convex hull with the predicted ground states are plotted in Figure 1c. The unit cell volume for LiCoO\({}_{2}\) was calculated to be 32.502 A\({}^{3}\). Figure 1d shows Li in three ordered configurations encountered in this work at \(x=1/2\). We found the zig-zag ordering to be the ground state with a formation energy \(E_{\mathrm{f}}=-0.170\) eV rather than the two row configurations (\(E_{\mathrm{f}}=-0.150\) and \(-0.164\) eV). Charge splitting appears in our DFT results only for the row configuration on the left in Figure 1d. ### Statistical mechanics #### 1.2.1 Cluster expansions for \(E_{\mathrm{f}}\) A cluster expansion was developed for the formation energy, \(E_{\mathrm{f}}\), within the CASM code using a genetic algorithm that selected 35 out of an initial 221 basis functions. The RMS error for this cluster expansion prediction was 2.49 meV per primitive cell, obtained under 10-fold cross-validation. The corresponding effective cluster interactions (ECIs) are shown in Figure 1e. The cluster expansion predicted \(E_{\mathrm{f}}\) values are plotted with those from DFT+\(U\) in Figure 1c. Note that the respective convex hulls match in formation energies and configurations. #### 1.2.2 Monte Carlo simulations of ordered/disordered structures Li content as a function of chemical potential and temperature was computed through Semi-Grand Canonical Monte Carlo for temperatures ranging from 200 K to 400 K. Unbiased and biased (umbrella) sampling were used [24; 25; 26]. Two-phase regions are easily seen as gaps in the composition values of the unbiased data (Figure 2a). These data were used to construct a phase diagram of O3 LCO, for Li compositions \(x\geq 1/3\) that is shown in Figure 2c. From the Monte Carlo results, the order-disorder transition temperature for the ordering at \(x\approx 1/2\) is slightly greater than 330 K, consistent with experiment [4]. A fairly good agreement also exists between the computed voltage profile at 300 K and experimental measurements [23] ( Figure 2b). The order-disorder transition is visible in the fluctuation around \(x\approx 1/2\). 
Similar to past first-principles studies on LCO [1; 18], however, the calculations lack the metal-insulator transition that leads to the two-phase region observed over \(x\sim 0.7\)-0.9. Therefore, the corresponding voltage plateau is not captured in the simulations. The Monte Carlo results in this work, and the predicted voltage, are only for the O3 structure and, therefore, valid for compositions \(x\geq 1/3\). #### 1.2.3 Symmetry-adapted order parameters Symmetry-adapted order parameters precisely represent the distinct variants of the zig-zag ordering that was identified by DFT as having the lowest energy of all possible decorations of the triangular Li sub-lattice at the composition of \(x=1/2\). The zig-zag ordering has 12 variants that are shown in Figure 3a. They arise from the 3 rotations \(\times\) 4 translations that belong to the symmetry group of the triangular lattice. The mutually commensurate supercell that includes the supercells of all the rotations and translations was computed and includes 32 unique sublattice sites on two successive Li layers (Figures 3b-3c). This yields a 32-dimensional basis on which 12 vectors \(\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(12)}\}\) uniquely describe the 12 variants (Figures 3c-3d). A more efficient representation of the 12 variants is obtained by seven order parameters, \(\eta_{0},\ldots,\eta_{6}\) corresponding to the first seven rows of the matrix \(\mathbf{\eta}\) in Figure 3e, with its remaining rows being zeros. Of these, \(\eta_{0}=x\) corresponds to the composition averaged over all 32 sublattice sites. It is associated with a single distinct eigenvalue of an orthogonal matrix \(\mathbf{Q}\in\mathbb{R}^{32\times 32}\) that is invariant to the symmetry group of the zig-zag ordering (Figure 3e and SS3.2.2). The other six order parameters are associated with eigenvectors of one of the degenerate eigenvalues and represent the 12 variants by \(\eta_{1},\ldots\eta_{6}=\pm 1/2\) in Figure 3e. ### Large-scale DFT calculations for the anti-phase boundary energies between ordered variants The coherent anti-phase boundary energy between the different LCO variants at the ordering composition \(x=1/2\) is another key input required for the phase-field simulations. Using the variants defined by \(\eta_{1}=1/2\) and \(\eta_{3}=1/2\) (Figure 3f) we have carried out large-scale DFT computations that predict an anti-phase boundary energy of \(\gamma=30.9\) mJ/m\({}^{2}\) (see SS3.1.1). Guided by the estimate of a factor of 1/2 relating the order-disorder interface energy to the anti-phase boundary energy [27], we use \(\sim 15.45\) mJ/m\({}^{2}\) for the order-disorder interface energy. Active learning an integrable deep neural network (IDNN) for the seven-dimensional free energy surface Figures 4a, b, show respectively, a schematic and the active learning curves with respect to IDNN models indexed by the training dataset number. See SS3.3 for details on IDNNs. The MSE is reported for each IDNN model using the the last (25\({}^{\text{th}}\)) dataset. The best model (lowest MSE) at each temperature is marked by a solid circle. Models with hyperparameter searches performed are marked by open circles. Figure 4c shows the free energy density surfaces as \(\eta_{0}-\eta_{1}\) and \(\eta_{1}-\eta_{2}\) slices for the best model at each temperature. The active learning resulted in IDNNs with two hidden layers and 173 neurons per layer at 260 K, three hidden layers and 163 neurons per layer at 300 K and two hidden layers with 193 neurons per layer at 340 K. 
#### 1.4.1 Symmetry-respecting functions as IDNN features The features for IDNN representations of the free energy density are the following polynomials, up to sixth order in \(\eta_{0},\ldots\eta_{6}\). They are invariant under transformations of the triangular Li sublattice that map between the ordered variants. Therefore, IDNN representations in terms of these polynomials inherit symmetry with respect to the free energy. These symmetry-respecting IDNN features are determined by applying the Reynolds operator to monomials in \(\eta_{0},\ldots,\eta_{6}\)[28]: \[h_{1} =\eta_{0}\] \[h_{2} =\frac{2}{3}\sum_{i=1}^{6}\eta_{i}^{2}\] \[h_{3} =\frac{8}{3}\sum_{i=1}^{6}\eta_{i}^{4}\] \[h_{4} =\frac{4}{3}\big{[}\left(\eta_{1}^{2}+\eta_{2}^{2}\right)\left( \eta_{3}^{2}+\eta_{4}^{2}+\eta_{5}^{2}+\eta_{6}^{2}\right)+\] \[\left(\eta_{3}^{2}+\eta_{6}^{2}\right)\left(\eta_{4}^{2}+\eta_{5 }^{2}\right)\big{]}\] \[h_{5} =\frac{16}{3}\left(\eta_{1}^{2}\eta_{2}^{2}+\eta_{3}^{2}\eta_{6} ^{2}+\eta_{4}^{2}\eta_{5}^{2}\right)\] \[h_{6} =\frac{32}{3}\sum_{i=1}^{6}\eta_{i}^{6}\] \[h_{12} =64\sqrt{5}\eta_{1}\eta_{2}\eta_{3}\eta_{4}\eta_{5}\eta_{6} \tag{1}\] ### Phase field simulations The representations learnt by the IDNN show that the decrease in free energy density for heterogeneous nucleation of an ordered phase at 260 K is 1 meV/unit cell, while at 300 K it is 0.5 meV/unit cell (Figure S1). Using the DFT results for anti-phase boundary energy \(\gamma=30.9\) mJ/m\({}^{2}\) and order-disorder interface energy \(\sim 15.45\) mJ/m\({}^{2}\), we obtain a far higher nucleation rate of the ordered phase at 260 K than at 300 K. Figure 5a shows the nucleation rates _versus_ applied voltage and the associated composition of the disordered matrix (\(x_{\text{mat}}\)) obtained at 260 and 300 K (see SS3 for further details). At 340 K only the disordered phase exists, as seen in the phase diagram (Figure 2c) and the learnt free energy density surfaces (Figure 4c). The scale bridging framework is applied to two phase field studies of LCO cathodes involving the order-disorder transition around \(x=1/2\). The gradient parameters in the Cahn-Hilliard and Allen-Cahn phase field equations (9-10) were found to be \(\chi_{0}=1\times 10^{-4}\) mJ/m (\(1.88\times 10^{-4}\) mJ/m) and \(\chi_{1},\ldots,\chi_{6}=2.12\times 10^{-8}\) mJ/m (\(4.91\times 10^{-8}\) mJ/m) from the anti-phase boundary/interface energies and the IDNN representations of the free energy surfaces at 260 K (300 K) temperature (see SS3.5). The first phase field study considers the dynamics of the order-disorder transition in two-dimensional particles without external Li fluxes. Figure 5b shows a dynamics closely resembling spinodal decomposition followed by Ostwald ripening of ordered phase domains in a particle of diameter 1 \(\mu\)m, which has an average composition \(x=0.525\) at 260 K and \(x=0.53\) at 300 K. These compositions lie in the order-disorder region of the phase diagram. At each temperature, the first row shows composition, \(x\) and the second shows the orderings \(\eta_{1},\ldots,\eta_{6}\). Investigation of the seven-dimensional free energy surface shows that a single one out of twelve variants forms within each ordered region (SS7 and Fig S7). These dynamics were not run until equilibrium microstructures were obtained (at 260 and 300 K) because of the long computation time needed. 
However, the mass fractions of the ordered and disordered phases predicted by the coupled phase field equations at each temperature are in good agreement with the lever rule applied to the phase diagram after accounting for the slight shift in order-disorder boundaries as predicted by the IDNN learnt free energy density (SS87, SS8, and Fig S8). This suggests that near-equilibrium states have been attained in the computations. We call attention to the greater numbers of ordered regions and the narrower anti-phase boundaries/interfaces at 260 K relative to 300 K due to the correspondingly lower boundary/interface energies. Note the higher nucleation rates but the slower motion of anti-phase boundaries at 260 K over 300 K. The nucleation rate calculations appear in SSS3(see Figures S1-S4). The interface widths are larger between the ordered variants corresponding to \(\eta_{i}=\pm 1/2\) than a pair of variants represented by \(\eta_{i}=1/2\) or \(-1/2\) and \(\eta_{j}=1/2\) or \(-1/2\), where \(i\neq j\), instances of which are shown as an inset on the fifth row of Figure 5b. This results from symmetries learnt by the IDNN that are additional to those imposed as features above via the Reynolds operator (SS1.4.1), and are discussed further below. The second phase field study is also on two-dimensional particles at 260 and 300 K, and is focused on the effect of charge cycling of an LCO particle, by specifying time-varying externally applied Li fluxes [29]. See (Figure 6). At each temperature, the final configuration of ordered variants in a disordered matrix from the study in Figure 5 was taken as the initial condition. An initial current density of 0.6677 A m\({}^{-2}\) was applied. The applied current density was adjusted to -1.0683, 0.6677, -0.3338, and 0 A m\({}^{-2}\) at 10, 16, 26, and 40 s, respectively. This cycling corresponds to C-rates of 3.125, -5, 3.125, -1.5625, and 0 C, where a positive sign denotes discharging and negative is charging. Over the composition range, discharging, a-c and e-g in Figure 6, injects Li, driving the particle further into the disordered regime and shrinking the ordered domains; conversely, charging, c-e and g-i, extracts Li, making the ordered domains grow. Note the continued growth/shrinkage of different ordered domains as the anti-phase boundaries migrate and decrease in curvature. This cycle at 260 (300) K results in a net discharging, starting from the average Li composition of 0.5425 (0.5225) and increasing that to an average final composition of 0.5455 (0.5254), over the voltage plateau of \(\sim 4.15\) V (see Figure 2b). ## 2 Discussion Our study has been an extensive one, beginning with hundreds of DFT computations to inform tens of thousands of semi-grand canonical Monte Carlo calculations. The active learning framework guided Monte Carlo sampling for data to train IDNN representations of continuum, homogeneous free energy density functions. With gradient free energies informed by other large-scale DFT calculations of anti-phase boundary energies, this continuum free energy description enabled detailed first principles-informed phase field studies. To our knowledge, a systematic scale bridging framework of this nature has not been presented for a materials system. Guided by machine learning, it endows our predictions with quantifiable precision at every stage. 
In order to place our work in context, we address its scope, open questions and future developments starting with the DFT calculations, and proceeding through first principles statistical mechanics and machine learning to the continuum computations. We have carried out DFT+\(U\) calculations on the O3-layering structure, which is the stable form of LCO for \(x\geq 1/3\). The \(U\) parameter was obtained by matching with experimental measurements of the average voltage between \(x=0\) and \(x=1\), \(x=0\) and \(x=1/2\), and \(x=1/2\) and \(x=1\) (see Figure 1b). A different approach could be to choose \(U\) for an optimal fit across the entire composition range in Figure 2a by iteratively updating the cluster expansion for \(E_{\text{f}}\) and making voltage predictions at intermediate \(x\). Empirical approaches to determine the \(U\) parameter can be completely circumvented by using the self-consistent Hubbard method [30]. Moving on to the configurations predicted by our DFT study, we note the experimental evidence for the row orderings at \(x=1/2\) in Figure 1d [4, 6, 7]. Takahashi, et al. [7] presented the row ordering on the left, and Shao-Horn, et al. [6] suggested a different row ordering in the center. In their DFT study, Van der Ven, et al. [1] found the row ordering in the center to have the lowest formation energy, followed closely by the zig-zag ordering on the right. They make the observation, however, that the energy difference, of order 1 meV in their work, is small enough that it is difficult to determine the ground state with certainty solely from DFT calculations. The differences in formation energies between our results and those of earlier DFT work are likely due to our using DFT+\(U\), vdW-DF, and generalized gradient approximations (GGA), as opposed to the local-density approximation (LDA) used in the prior study. The scale bridging framework remains applicable to the other orderings, although presented here for the zig-zag case. Given this zig-zag structure, the symmetry-adapted order parameters \(\eta_{0},\ldots,\eta_{6}\) are important to the scale-bridging framework by enabling a more efficient representation of the 12 variants in \(\mathbb{R}^{7}\) compared to the sub-lattice vectors \(\mathbf{x}\in\mathbb{R}^{32}\), since \(\eta_{i}=\pm 1/2\), \(i=1,\ldots,6\) represent perfect orderings (Figure 3e). Monte Carlo sampling in \(\mathbb{R}^{7}\) over regions of higher error, in the free energy wells, over the order-disorder transitions and near the bounds in \(\mathbf{\eta}\)-space, however, presents a challenge, which we have approached with the active learning workflow (Figure 4a), which, as Figure 4b shows, improves the IDNN training over datasets, but not monotonically. As the Monte-Carlo sampled data grow with active learning-guided exploitation and exploration, an increasing complexity of representation is needed for the IDNN homogeneous free energy density model. New hyperparameter searches were imposed if the two most recent IDNN model updates showed increasing MSE against the most recent dataset. This IDNN model plotted over two-dimensional slices of \(\mathbb{R}^{7}\) (Figure 4c) is more complex at 300 K with three hidden layers of 163 neurons than at 260 K (two hidden layers, 173 neurons each) and 340 K (two hidden layers, 193 neurons each). In addition to the homogeneous free energy density, the energy of the anti-phase boundary between any two of the 12 variants also is needed for phase field computations. 
The large-scale DFT computations yielded anti-phase boundary energy \(\gamma=30.9\) mJm\({}^{-2}\) between \(\eta_{1}=1/2\) and \(\eta_{3}=1/2\) variants (Figure 3f). However, order-disorder interface energy computation presents difficulties for systematic convergence studies since randomly sampled disordered configurations change with size. We have therefore followed the approach suggested in the literature [27] that the order-disorder interface energy is approximately half the order-order anti-phase boundary energy. While the state of order/disorder causes local atomic relaxation, the lattice remains coherent. Therefore, the elastic stresses at anti-phase boundaries and order-disorder interfaces are not expected to be large, and these transitions, while important to the phase dynamics, may not contribute to the degradation as discussed in the experimental literature [31, 32, 33]. This machine learning-enabled scale bridging from first-principles statistical mechanics to continuum free energy functions allows quantitatively rigorous phase field computations. These reveal complexity that is linkable to the lower scales: Thicker anti-phase boundaries form between (A) ordered variants represented by extreme values of the same parameter, \(\eta_{i}=\pm 1/2\) than between (B) variants represented by \(\eta_{i}=1/2\) or \(-1/2\) and \(\eta_{j}=1/2\) or \(-1/2\), \(i\neq j\) (SS1.5, Figure 5b bottom row). Type A must include a disordered region \((\eta_{1},\ldots,\eta_{6}=0)\), which does not appear in type B. This is explained by the two-dimensional homogeneous free energy density slices in Figure 4c. Consider an example of type B with \(x=\eta_{0}=1/2\) and \(\eta_{3},\ldots,\eta_{6}=0\). The homogeneous free energy density surface has paths between domains with \(\eta_{1}=1/2\) or \(-1/2\) and \(\eta_{2}=1/2\) or \(-1/2\) that do not pass through \(\eta_{1},\eta_{2}=0\). Consequently, the transition between regions with orderings of type B does not include a fully disordered interface, and appears as a sharp transition (Figure 5b, bottom inset). In contrast, as also seen in Figure 4c, for \(x=\eta_{0}=1/2\) and \(\eta_{3},\ldots,\eta_{6}=0\) the path between type A variants: \((\eta_{1},\eta_{2})=(-1/2,0)\) and \((\eta_{1},\eta_{2})=(1/2,0)\) has to pass through the point \(x=\eta_{0}=1/2\) and \((\eta_{1},\ldots,\eta_{6})=(0,\ldots,0)\)-a fully disordered region. This anti-phase boundary appears with a larger width. The Allen-Cahn equation (10) shows that in a physical particle at equilibrium, the order parameter profiles along paths that are perpendicular to type A anti-phase boundaries are well approximated by the tanh function of position. For type B variants, the IDNN learns scale bridging-informed, physics-constrained symmetries of the form \(\mu_{i}=\mu_{j}\) along paths \(\eta_{i}=\pm\eta_{j}\pm 1/2\) in \(\mathbf{\eta}\in\mathbb{R}^{7}\). These symmetries are distinct from those imposed on the IDNN features in SS1.4.1, Equation (1). In addition to equilibrium structures, the phase dynamics of order-disorder transition are of interest. They require nucleation of an ordered variant at composition \(x_{\text{nuc}}\) from a disordered matrix at \(x_{\text{mat}}\), which depends on the associated free energy density decrease. For homogeneous nucleation, our scale-bridging results in \(\Delta g_{\text{hom}}^{260}\approx 2\Delta g_{\text{hom}}^{300}\). A significantly higher nucleation rate is thus attained at 260 K than at 300 K. 
Under an external field \(V\) we have a critical free energy decrease for heterogeneous nucleation \(\Delta g_{\text{het}}^{*}=f(\theta)(16\pi\gamma^{2}/3(\Delta g_{\text{hom}}^{3} \pm V(x_{\text{mat}}-x_{\text{nuc}}))\) depending on the sign of \(V\), where \(f(\theta)\) is a function of the contact angle of the nucleus. We find \(x_{\text{mat}}-x_{\text{nuc}}\sim 0.01\). See SS3 and Figures S1b, S1c [34, 35, 36, 37]. With \(x_{\text{nuc}}=1/2\), Figure 5a shows the variation of nucleation rate and probability with voltage and \(x_{\text{mat}}\). The very steep, almost discontinuous, transition from zero to exponentially high rates and probability from 0 to 1 guarantees that the formation of the ordered phases is essentially instantaneous at either 260 K or 300 K once the combination of voltage and \(x_{\text{mat}}\) is favorable. More insight to the order-disorder transition at \(x=1/2\) can be gained from the IDNN homogeneous free energy density function model itself. An examination of the lowest free energy paths followed on this seven-dimensional manifold, in combination with the phase field dynamics of the Cahn-Hilliard equation for \(\eta_{0}\) and the Allen-Cahn equation for \(\eta_{1}\), can provide detailed insight to these transitions. If ordering were ignored, we obtain a one-dimensional homogeneous free energy density parameterized by only the composition, \(x=\eta_{0}\). In the neighborhood of \(x=1/2\) non-convexities appear that suggest spinodal decomposition as a phase separation mechanism (SS1.5, SSS4, Figs S4, and S8a). However, an analysis of the seven-dimensional IDNN free energy density model shows that, in the ordered region five of six order parameters remain at zero, while one changes nearly discontinuously (with respect to \(\eta_{0}\)) to \(\sim 1/2\) (see SSS7, Figure S9). Two-dimensional slices (Fig 4c and Fig S7) viewed in the \(g-\eta_{0}\) plane show a discontinuity in slope; i.e., chemical potential \(\mu_{0}\). With a one-dimensional free energy density learnt by the IDNN using only \(\eta_{0}\) as input this \(\mu_{0}\)-discontinuity gets smoothed to a non-convexity of the \(g-\eta_{0}\) curve (SSS7 and Fig S8a) leading to the suggestion of spinodal decomposition noted above. The full seven-dimensional IDNN model, however, reveals a more complex trajectory: As \(\eta_{0}\) enters the region of ordering, a non-convexity is encountered in the \(g-\eta_{0}-\eta_{i}\) space for some \(i\in\{1,\dots,6\}\). Fig S7 shows this for the \(g-\eta_{0}-\eta_{1}\) space exploiting the symmetry with respect to \(\eta_{1},\dots,\eta_{6}\). However, it is not a non-convexity of \(g\) with respect to \(\eta_{0}\) alone, but one that is aligned with eigenvectors of the Hessian \(\partial^{2}g/\partial\eta_{0}\partial\eta_{1}\) for \((\eta_{2},\dots,\eta_{6})=(0,\dots,0)\). This is a spinodal decomposition in the \(g-(\eta_{0},\dots,\eta_{6})\) space, and not fully described by considering only the \(g-\eta_{0}\) space. The seven-dimensional perspective resolves the phase transition that determines the chemical potentials, and thus the voltage, at the lower lithiation limit of LCO. Thereby, it guides a complete understanding of battery performance and ultimately, its control. Figure S8b locates the points related to the above analysis on the phase diagram. Enabled by machine learning, the scale-bridging reveals details of the dynamics under quantifiable precision that is directly connected to first principles statistical mechanics and DFT calculations. 
As seen in Figure 5b, notable variations can arise in the order-disorder morphology of a cathode particle undergoing rapid charge-discharge cycles in a temperature range of \(260-300\) K (and up to 330 K) that falls within the window of relevance for actual batteries. Order-disorder transitions do not create lattice mismatches, and therefore are not expected to cause stresses-induced damage [38, 33]. The absence of order-disorder transitions above 340 K for the composition range studied (Figures 2c and 4c) eliminates even the low stresses that may otherwise form at anti-phase boundaries. This could motivate doping with elements that stabilize the disordered structure. Such a mechanism has relevance to the Al and La doped stabilization of the order-disorder transition as observed recently [33]. Smaller particles have a more uniform morphology. This is well-understood: The length scale of the transient microstructure in the simulations is defined by the gradient coefficient, which is related to the anti-phase boundary and interfacial energies, and is independent of the particle size. As the particle size decreases from 1 \(\mu\)m to 50 nm, the result is a less complex microstructure. See SSS4 Figure S6 and Ref [39]. Current cycling 50 nm particles at the rates considered here showed no differences with temperature. The particles are small enough that the relative values of the mobility, particle size, and microstructure length scale lead the Li atoms to diffuse uniformly and extremely rapidly in comparison with the time scale of the simulation such that any temperature effects are not seen (Figure S6). We note that for compositions \(x<1/3\), DFT+\(U\) studies will need to be completed for the H1-3 structure, which is the stable form over the lower compositional range, and subjected to our scale-bridging framework, thus extending it to the entire composition range. A systematic incorporation of additional effects remains. These include elasticity, changes in lattice structure and vibrational entropy, which can be scaled up to continuum thermomechanics. An extension to the equations of electrochemistry at the active particle-electrolyte scale will allow the study of phenomena such as the mosaic instability observed in nanoparticulate batteries [40]. Our predictions rest upon the fidelity of the DFT methods, which also are far from _ab initio_. Therefore, we do not claim to present quantitatively accurate computational predictions. Yet, we have furnished a scale-bridging framework to systematically infuse continuum phase field computations with quantitative information from first principles statistical mechanics with a precision that can be tightened. This will allow us to study how morphology evolves under various microstructure-specific, environmental, kinetic and cycling conditions at a computational fidelity not previously accessed with regard to the length scales. From such simulations we can make connections to observations in the experimental literature on LCO [38; 31; 33; 32; 41]. 
## 3 Methods ### DFT calculations for energies The formation energy \(E\) of a configuration with Li composition \(x\) can be calculated using total energy values computed using density functional theory (DFT) according to the following equation [1]: \[E_{\text{Li}_{x}\text{CoO}_{2}}=E^{\text{tot}}-xE_{\text{LiCoO}_{2}}^{\text {tot}}-(1-x)E_{\text{CoO}_{2}}^{\text{tot}} \tag{2}\] where \(E^{\text{tot}}\), \(E_{\text{LiCoO}_{2}}^{\text{tot}}\), and \(E_{\text{CoO}_{2}}^{\text{tot}}\) are the total energies for the given configuration, LiCoO\({}_{2}\), and CoO\({}_{2}\), respectively. We use a simplified rotational-invariant formulation of DFT+\(U\)[15] and a vdW-DF exchange correlation functional, namely the optB88 exchange correlation functional [42; 43; 44], to calculate the formation energy on a chosen subset of LCO configurations using Quantum Espresso[45; 46]. The CASM (Clusters' Approach to Statistical Mechanics) software suite was used to identify a set of configurations with the O3 crystal structure for parametrizing the cluster expansion. We calibrate the Hubbard \(U\) parameter to match experimental average lithiation voltages over various ranges of Li composition. The average voltage from \(x_{1}\) to \(x_{2}\) is calculated using the following equation [47; 48; 16]: \[V=-\frac{E_{\text{Li}_{x_{2}}\text{CoO}_{2}}-E_{\text{Li}_{x_{1}}\text{CoO }_{2}}-(x_{2}-x_{1})E_{\text{Li}}}{(x_{2}-x_{1})e} \tag{3}\] where \(E\) is the calculated total energy from DFT and \(e\) is the charge on an electron. We compute the average voltage for Li composition: \(\{x_{1},x_{2}\}=\{0,1\},\{0,1/2\}\) and \(\{1/2,1\}\). By comparing with experimental voltages (Figure 1b), we select an appropriate value of \(U=2.5\) eV. Additional details on the DFT calculations for formation energy are provided in SSS1.1. #### 3.1.1 Interface energy Large-scale DFT computations were performed for the coherent interface energy of the anti-phase boundary between the \(\eta_{1}=1/2\) and \(\eta_{3}=1/2\) ordered LCO rotational variants. These computations were performed using the DFT-FE software [49; 50; 51; 52], a recently developed massively parallel open-source code for large-scale real-space Kohn-Sham DFT studies based on a finite-element discretization. We employ the PBE exchange-correlation functional and the optimized norm-conserving Vanderbilt pseudopotentials (ONCV) [53] from the Pseudo Dojo library [54]. Table S1 shows the energies and system sizes used to determine the anti-phase boundary energy \(\gamma\), which is estimated to be 30.9 mJ/m\({}^{2}\). Details of the periodic simulation cells, treatment of anti-phase boundaries, accounting of elastic misfit strain energy, and convergence tolerances are provided in SSSI.1.1. ### Statistical mechanics #### 3.2.1 Cluster expansion for formation energy We adopt cluster expansions to access the large numbers of configurations needed in the statistical mechanics studies. The formation energy is written as \(E_{\mathrm{f}}(\mathbf{\sigma})\), where \(\mathbf{\sigma}\) is the configuration vector with the occupancy variable \(\sigma_{i}=1\) if Li occupies the site \(i\) and \(\sigma_{i}=0\) if the site contains a vacancy. We define \(E_{\mathrm{f}}(\mathbf{\sigma})\) by computing the formation energy with DFT for a subset of configurations and use these values to parameterize a cluster expansion [55; 56] as a rapidly queryable surrogate for the formation energy. 
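As a quick numerical illustration of Eqs. (2) and (3) above, the following Python sketch evaluates a formation energy and an average lithiation voltage from total energies. The energy values in the example call are arbitrary placeholders for illustration only, not DFT results from this work.

```python
def formation_energy(E_tot, x, E_LiCoO2_tot, E_CoO2_tot):
    """Eq. (2): formation energy of a Li_x CoO2 configuration, referenced to
    the end members LiCoO2 (x = 1) and CoO2 (x = 0); energies per formula unit."""
    return E_tot - x * E_LiCoO2_tot - (1.0 - x) * E_CoO2_tot


def average_voltage(E_x1_tot, E_x2_tot, x1, x2, E_Li_tot):
    """Eq. (3): average voltage between Li compositions x1 and x2.
    With all energies in eV, division by the elementary charge is implicit,
    so the result is directly in volts."""
    return -(E_x2_tot - E_x1_tot - (x2 - x1) * E_Li_tot) / (x2 - x1)


# Placeholder total energies (eV per formula unit), for illustration only:
E_Li, E_CoO2, E_LiCoO2 = -1.9, -20.0, -26.0
V_mean = average_voltage(E_CoO2, E_LiCoO2, 0.0, 1.0, E_Li)  # mean voltage over x in {0, 1}
```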
We use the CASM software[22], which facilitates the construction and parameterization of cluster expansion Hamiltonians and their use in Monte Carlo simulations, to select configurations for the DFT computations and perform the statistical mechanics calculations in this work [57; 58; 59]. Details on the definition of cluster basis functions, effective cluster interaction coefficients, regression techniques and algorithms to choose among the candidate basis functions appear in SSSI.2.1. #### 3.2.2 Symmetry-adapted order parameters We follow Natarajan, et al [60] for identifying symmetry-adapted order parameters representing the variants in Figure 3a. The symmetry group, \(P\), of the zig-zag ordering consists of 384 unique linear transformations between the 12 variants each represented by a matrix in \(\mathbb{R}^{32\times 32}\). Following the algorithm in Ref. [61], we constructed a \(P\)-invariant matrix and performed its eigenvalue decomposition resulting in eight nonzero eigenvalues: two distinct, two repeated three times, and four repeated six times, and eight corresponding sets of eigenvectors, for a total of 32 eigenvectors forming the rows of the orthogonal transformation matrix \(\mathbf{Q}\in\mathbb{R}^{32\times 32}\), that maps a sublattice composition vector \(\mathbf{x}\in\mathbb{R}^{32}\) to order parameter space \(\mathbf{\eta}\in\mathbb{R}^{32}\). Using zero indexing, components \(\eta_{7}^{(i)}\) through \(\eta_{31}^{(i)}\) of \(\mathbf{\eta}\in\mathbb{R}^{32}\) are zero for all variants \(i=1,\ldots,12\), and are irrelevant for describing the zig-zag ordering (Figure 3e). Of the seven relevant order parameters, \(\eta_{0}\) is associated with one of the distinct eigenvalues and corresponds to the composition averaged over all 32 sublattice sites; i.e., \(\eta_{0}=x\). The other six order parameters, \(\eta_{1},\ldots,\eta_{6}\) are associated with one of the degenerate eigenvalues that has six corresponding eigenvectors. Since the free energy density is invariant under transformations of the triangular Li sublattice that map between the ordered variants, the IDNN representation is presented with features that are symmetric functions of \(\eta_{0},\ldots\eta_{6}\) under these transformations. Monomials of up to sixth order were chosen and subjected to the Reynolds operator, by summing: \[h(\mathbf{\eta})=\sum_{\mathbf{M}^{(n)}\in\mathcal{M}}f(\mathbf{M}^{(n)}\mathbf{\eta}), \tag{4}\] where for each \(\mathbf{M}\in P\) we have \(\mathbf{M}^{\eta}=\mathbf{Q}\mathbf{M}\). This operation yields the P-invariant polynomial functions in Eq. (1) as IDNN features. Further details on the symmetry-respecting functions of the order parameters in \(\mathbf{\eta}\) space are available in SSSS5 #### 3.2.3 Monte Carlo sampling Given \(E_{\mathrm{f}}(\mathbf{\sigma})\), we sample within the semi-grand canonical ensemble, in which the chemical potential is specified and the corresponding composition and/or order parameters are determined through ensemble averaging. This approach, however, does not produce data within the unstable regions associated with phase separation. However, phase field simulations require free energy information within these two-phase regions in order to consistently resolve phase interfaces. Additional Monte Carlo calculations were performed for temperatures of 260 K, 300 K, and 340 K using bias potentials for umbrella sampling within the unstable regions of the order-disorder transition [60; 24; 62; 25; 26]. 
The partition function is:
\[\Theta=\sum_{\mathbf{\sigma}}\exp\left(-\frac{E(\mathbf{\sigma})+\sum_{i=0}^{6}\phi_{i}(\eta_{i}(\mathbf{\sigma})-\kappa_{i})^{2}}{k_{B}T}\right) \tag{5}\]
where \(\phi_{i}\) and \(\kappa_{i}\) determine the curvature and center of the bias potential, respectively, and the inner sum is over the composition and six order parameters. The ensemble average of the composition \(\langle\eta_{0}\rangle\) and each order parameter \(\langle\eta_{i}\rangle\), \(i=1,\ldots,6\), is related to its corresponding chemical potential through the bias parameters:
\[\frac{1}{M}\mu_{i}\Big{|}_{\langle\mathbf{\eta}\rangle}=-2\phi_{i}(\langle\eta_{i}\rangle-\kappa_{i}),\qquad i=0,\ldots,6 \tag{6}\]
The Monte Carlo calculations are run with an allowed variance in ensemble averages \(\langle\eta_{i}\rangle=3\times 10^{-4}\) as a convergence criterion, from which the precision of the order parameter is computed as \(\text{Var}(\langle\eta_{i}\rangle-\kappa_{i})\). The precision in \(\langle\mu_{i}\rangle\) follows using (6). Additional details on the Monte Carlo simulations are provided in SS1.2.3.

### Integrable deep neural networks (IDNNs) for free energy representations

The IDNN representation is obtained for the free energy density function by training on derivative data: chemical potentials as labels and the corresponding symmetry-invariant functions of composition or order parameter as features (SS3.2.2) [2; 3]. The integrability of the IDNN follows from the fundamental theorem of calculus since it is the derivative of a fully-connected deep neural network (DNN). A DNN is a function \(Y(\mathbf{X},\mathbf{W},\mathbf{b})\) whose input gradient represents the ensemble averaged chemical potentials \(\langle\mu_{i}\rangle\), with arguments or inputs \(\mathbf{X}\) representing the symmetry-invariant functions of composition and order parameters \(\langle\eta_{i}\rangle\), \(i=0,\ldots,6\), weights \(\mathbf{W}\), and biases \(\mathbf{b}\). DNN training is an optimization problem for the weights and biases, given the dataset \(\{(\widehat{\mathbf{X}}_{\theta},\widehat{Y}_{\theta})\}\). Here, however, the dataset \(\{(\widehat{\mathbf{X}}_{\theta},\widehat{Y}_{\theta})\}\) is not available. Instead, we have the derivative dataset \(\{(\widehat{\mathbf{X}}_{\theta},\widehat{y}_{\theta_{k}})\}\), where \(\widehat{y}_{\theta_{k}}\) corresponds to the partial derivative of \(\widehat{Y}_{\theta}\) with respect to the \(k^{\text{th}}\) component of \(\widehat{\mathbf{X}}_{\theta}\). By defining the IDNN as the gradient of \(Y\) with respect to its inputs \(\mathbf{X}\), i.e. \(\partial Y(\mathbf{X},\mathbf{W},\mathbf{b})/\partial X_{k}\), the training/optimization problem is:
\[\widehat{\mathbf{W}},\widehat{\mathbf{b}}=\operatorname*{arg\,min}_{\mathbf{W},\mathbf{b}}\,\sum_{k=1}^{n}\text{MSE}\left(\frac{\partial Y(\mathbf{X},\mathbf{W},\mathbf{b})}{\partial X_{k}}\Big{|}_{\widehat{\mathbf{X}}_{\theta}},\widehat{y}_{\theta_{k}}\right) \tag{7}\]
The optimized weights \(\widehat{\mathbf{W}}\) and biases \(\widehat{\mathbf{b}}\) are used with the IDNN function \(\partial Y(\mathbf{X},\widehat{\mathbf{W}},\widehat{\mathbf{b}})/\partial X_{k}\) to predict the chemical potential. The same weights and biases are used in its antiderivative DNN function \(Y(\mathbf{X},\widehat{\mathbf{W}},\widehat{\mathbf{b}})\) to predict the homogeneous free energy density.
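The training problem in Eq. (7) can be prototyped in any framework that differentiates a network with respect to its inputs. The following is a minimal PyTorch sketch of the idea only; the architecture, activation, data, and hyperparameters are illustrative stand-ins and not those used in this work.

```python
import torch
import torch.nn as nn

class FreeEnergyDNN(nn.Module):
    """Small dense network Y(X); its gradient with respect to X plays the role of the IDNN."""
    def __init__(self, n_inputs=7, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, width), nn.Softplus(),
            nn.Linear(width, width), nn.Softplus(),
            nn.Linear(width, 1))

    def forward(self, X):
        return self.net(X)

def idnn_loss(model, X, dY_targets):
    """Eq. (7): mean-squared error between dY/dX_k and the target derivative data
    (chemical potentials sampled by Monte Carlo in this paper)."""
    X = X.requires_grad_(True)
    Y = model(X).sum()
    dYdX = torch.autograd.grad(Y, X, create_graph=True)[0]
    return ((dYdX - dY_targets) ** 2).mean()

# Toy stand-in data (random placeholders, not Monte Carlo results):
X = torch.rand(256, 7)          # network inputs (e.g., invariant features)
mu = torch.rand(256, 7) - 0.5   # target derivatives of Y at these inputs

model = FreeEnergyDNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = idnn_loss(model, X, mu)
    loss.backward()
    opt.step()

g = model(X).detach()  # the antiderivative Y, i.e. the free energy density surrogate
```

Smooth activations (Softplus here) keep the input gradient differentiable; the layer counts and widths reported above for 260, 300, and 340 K were instead selected by the active-learning workflow and its hyperparameter searches.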
### Sampling and active learning workflow

IDNN training to represent the chemical potential for the zig-zag ordering requires sampling data in the seven-dimensional \(\mathbf{\eta}\) space. Uniform, dense sampling in this space would require a prohibitive number of Monte Carlo simulations. Instead, we focus on physically significant regions with difficult-to-capture features. These include the energy wells related to the variants of the zig-zag ordering and the divergent behavior of the chemical potential at the boundaries of the order parameter space, including the composition end members at \(\eta_{0}=x=\{0,1\}\). Some general, unguided _exploration_ sampling of the order parameter space is also performed to capture overall trends. We improve the partially trained IDNN by combining exploration with _exploitation_ sampling in areas with high point-wise error. The active learning workflow iterates over cycles of exploration sampling, IDNN training, and exploitation sampling until a stopping criterion is met. This sampling must be carried out within the boundaries of the admissible domain in \(\mathbf{\eta}\)-space. We use the Billiard Walk [63] random sampling algorithm in this space. More detail is available in §S1.4. The workflow (Figure 4a,b) forced a new search for the IDNN hyperparameters (a) on the second workflow iteration, and (b) whenever the mean square error (MSE) calculated for the two previous IDNN models on the most recent dataset increased from one model to the next. If the MSE decreased, the workflow instead continued training the previous IDNN on the most recent data (Figure 4b).

### Phase field theory and associated computational framework

Neglecting elastic effects, the total free energy of the system is: \[\Pi[x,\widehat{\mathbf{\eta}}]=\int_{\Omega}\left(f(x,\widehat{\mathbf{\eta}})+\frac{1}{2}\chi_{0}|\nabla x|^{2}+\sum_{i=1}^{6}\frac{1}{2}\chi_{i}|\nabla\widehat{\eta}_{i}|^{2}\right)\,\mathrm{d}V \tag{8}\] where \(\chi_{i}\) are the gradient parameters, and \(f(x,\widehat{\mathbf{\eta}})\) is the free energy density, represented by the analytically integrated DNN in this work. The chemical potentials \(\widetilde{\mu}_{i}\) used in the phase field equations are variational derivatives of the total free energy: \(\widetilde{\mu}_{0}:=\delta\Pi/\delta x\) and \(\widetilde{\mu}_{i}:=\delta\Pi/\delta\widehat{\eta}_{i}\), \(i=1,\dots,6\) (the \(\widehat{\bullet}\) notation is used for clarity; however, \(\widehat{\eta}_{i}=\eta_{i}\) from previous sections): \[\widetilde{\mu}_{0}=\frac{\partial f}{\partial x}-\chi_{0}\nabla^{2}x,\qquad\widetilde{\mu}_{i}=\frac{\partial f}{\partial\widehat{\eta}_{i}}-\chi_{i}\nabla^{2}\widehat{\eta}_{i},\;i=1,\dots,6 \tag{9}\] The Cahn-Hilliard phase field equation [64] models the dynamics of the conserved composition, and the Allen-Cahn equation [65] models the nonconserved order parameter fields, respectively: \[\frac{\partial x}{\partial t}=\nabla\cdot(\widetilde{M}\nabla\widetilde{\mu}_{0}),\quad\text{and}\quad\frac{\partial\widehat{\eta}_{i}}{\partial t}=-L\widetilde{\mu}_{i},\qquad i=1,\dots,6 \tag{10}\] where \(\widetilde{M}\) is the mobility and \(L\) is a kinetic coefficient.
We substitute in the chemical potentials and write the governing equations in weak form, to be solved using a mixed finite element method: \[0 =\int_{\Omega}\left(w_{x}\frac{\partial x}{\partial t}+\widetilde{M}\nabla w_{x}\cdot\nabla\tilde{\mu}_{0}\right)\mathrm{d}V-\int_{\partial\Omega}w_{x}j_{n}\mathrm{d}S \tag{11}\] \[0 =\int_{\Omega}\left[w_{\tilde{\mu}_{0}}\left(\tilde{\mu}_{0}-\frac{\partial f}{\partial x}\right)-\chi_{0}\nabla w_{\tilde{\mu}_{0}}\cdot\nabla x\right]\mathrm{d}V \tag{12}\] \[0 =\int_{\Omega}\left[w_{i}\frac{\partial\widehat{\eta}_{i}}{\partial t}+L\left(w_{i}\frac{\partial f}{\partial\widehat{\eta}_{i}}+\chi_{i}\nabla w_{i}\cdot\nabla\widehat{\eta}_{i}\right)\right]\mathrm{d}V,\qquad i=1,\dots,6 \tag{13}\] where \(w_{x}\), \(w_{\tilde{\mu}_{0}}\), and \(w_{i}\) are weighting functions. For the equations written in this mixed formulation, the following boundary conditions have been applied to \(x\), \(\tilde{\mu}_{0}\), and \(\widehat{\eta}_{i}\), \(i=1,\dots,6\), on \(\partial\Omega\), where \(\mathbf{n}\) is the outward unit normal and \(j_{n}\) is an influx: \(\nabla x\cdot\mathbf{n}=0\), \(\widetilde{M}\nabla\tilde{\mu}_{0}\cdot\mathbf{n}=j_{n}\), and \(\nabla\widehat{\eta}_{i}\cdot\mathbf{n}=0\), \(i=1,\dots,6\). We define a composition-dependent surrogate function for the diffusivity \(D\) at 300 K, using the predicted values from Ref. [66]: \(D=0.01\exp(-274(1.05-x)(0.47-x)(1-x))\), and approximate \(D\) at other temperatures as outlined in §S1.5. The diffusivity surrogate function and the predicted data appear in Figure S5, where the effective vibrational frequency \(\nu^{*}\) is reported to be on the order of \(10^{13}\) s\({}^{-1}\) [66]. The mobility \(\widetilde{M}\) is related to \(D\) by the equation \(\widetilde{M}=Dx/(k_{B}T)\) [29]. We solve an inverse problem for the gradient parameters \(\chi_{i}\) in Eq. (8) by constraining the phase field calculation of the interface energy to agree with the DFT results in §3.1.1; see §S1.5. The simulations are performed using the finite element method with the mechanoChemFEM code (available at github.com/mechanoChem/mechanoChemFEM), which is based on the deal.II [67] library. Adaptive meshing with hanging nodes and adaptive time stepping are used. Further details on the phase field methods are provided in §S1.5.

## Acknowledgements

We thank Brian Puchala and Anirudh Natarajan for their insight regarding the CASM software and related methods, as well as Chirranjeevi Gopal and Muratahan Aykol for their suggestions on DFT methods for layered oxides. We gratefully acknowledge the support of Toyota Research Institute, Award #849910, "Computational framework for data-driven, predictive, multi-scale and multi-physics modeling of battery materials". This work has also been supported in part by National Science Foundation DMREF grant #1729166, "Integrated Framework for Design of Alloy-Oxide Structures". Additional support was provided by Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR0011199002, "Artificial Intelligence guided multi-scale multi-physics framework for discovering complex emergent materials phenomena". We acknowledge the support of the U.S. Army Research Office through the DURIP grant W911NF1810242, which provided computational resources for this work. Simulations in this work were run on the Great Lakes HPC cluster at the University of Michigan.
Additional simulations were performed using resources provided by the Extreme Science and Engineering Discovery Environment (XSEDE) Comet at the San Diego Supercomputer Center and Stampede2 at the Texas Advanced Computing Center through allocations TG-DMR180072 and TG-MCH200011. XSEDE is supported by National Science Foundation grant number ACI-1548562. Some simulations were performed using resources provided by the NSF via grant 1531752 MRI: Acquisition of Conflux, A Novel Platform for Data-Driven Computational Physics (Tech. Monitor: Ed Walker), with additional support by the University of Michigan.

## Author contributions

The study's scale bridging methodology was developed by KG and GHT; electronic structure studies were planned by GHT, SD and VG; electronic structure calculations were performed by GHT and SD; statistical mechanics and phase field studies were carried out by GHT, JH and MFS; the results were analyzed by all the authors; the paper was written by all the authors.

## Competing interests

The authors declare no competing interests.

Figure 1: (a) Flowchart outlining the data, computational methods and machine learning-enabled linkages that bridge from the atomic up to the continuum scale. The Li atoms are green, Co are blue, and O are red (visuals generated using VESTA [19]). (b) Calculated voltage across various compositions as a function of U, compared with experimental voltages [20; 21]. (c) Formation energies and convex hull predicted by the cluster expansion (CLEX) are compared with those from the DFT calculations. The configurations predicted by the cluster expansion to lie on the convex hull are the same as those on the DFT convex hull. (d) Three configurations with Li concentration \(x=0.5\) and their corresponding calculated formation energies. The filled circles represent Li atoms within a Li layer, and the empty circles represent Li atoms in the adjacent Li layer. (e) ECI values associated with each basis function in the cluster expansion.

Figure 2: (a) Chemical potential values at 300 K, predicted using Monte Carlo simulations with and without bias potentials. (b) Comparison of the predicted voltage at 300 K (blue) with the experimental voltage (red) from Ref. [23]. (c) Phase diagram for the O3 structure for LCO with the ordering of interest at \(x=1/2\), based on Monte Carlo results. The gaps in unbiased sample points in (a) correspond to order-disorder phase transitions (white region) at 300 K.

Figure 3: (a) The zig-zag ordering has 12 variants, resulting from combinations of 3 rotations and 4 translations. (b) The supercells corresponding to each variant are also shown. Note that these can remain invariant under translations, but not under rotations. The filled circles represent Li atoms occupying the same layer; empty circles are Li atoms in the layer below (or above). (c) The mutually commensurate supercell that includes all the supercells is also shown with its 32 sublattice sites. (d) One example of the 12 variants of zig-zag ordering is shown in its sublattice representation. (e) The transformation \(\mathbf{\eta}=\mathbf{Qx}\) from sublattice to order parameter space. The 12 zig-zag variants in the \(\mathbf{\eta}\) space have zeroes in rows 8-32. Therefore seven order parameters are needed to describe the composition and ordering. (f) Atomic configurations for anti-phase boundary (dashed lines) energy computations between ordered variants \(\eta_{1}=1/2\) (left) and \(\eta_{3}=1/2\) (right). The yellow path traces the change in zig-zag ordering.
Figure 4: (a) A schematic of the active learning workflow with exploration, training and exploitation to guide additional sampling. (b) Active learning curves for the IDNN with respect to model index for 260, 300, and 340 K. Each model corresponds to progressively larger datasets. Note the logarithmic scale on the y-axis. Open circles indicate new hyperparameter searches to update the IDNN. Losses are MSEs for the final (25\({}^{\text{th}}\)) dataset. Solid circles mark the best IDNN model at each temperature. (c) The free energy density surfaces plotted as \(\eta_{0}-\eta_{1}\) slices (\(\eta_{2},\dots,\eta_{6}=0\)) and \(\eta_{1}-\eta_{2}\) slices (\(\eta_{0}=1/2\), \(\eta_{3},\dots\eta_{6}=0\)). The ordered variants correspond to the wells located at \(\eta_{0}=x\sim 1/2\) and \(\eta_{1},\dots,\eta_{6}\sim\pm 1/2\) at 260 and 300 K. At 340 K the only well is at \(\eta_{1},\dots,\eta_{6}=0\) for all values of \(\eta_{0}\); i.e., only the disordered form exists. Figure 5: (a) Nucleation rates at 260 K and 300 K as functions of the voltage and the composition of a potential nucleation site in the disordered region \(x_{\rm mat}\), and the corresponding nucleation probabilities \(P_{n}\) for \(\Delta t=1e-4\) sec. The dark and light colors show \(P_{n}=0\) and \(P_{n}=1\), respectively (b) 2D phase field simulations at 260 K (300 K) showing the Li composition in a 1 \(\mu\)m diameter particle. The initial Li composition was randomly perturbed about \(x=0.5425\) (\(x=0.5225\)) and no boundary flux. The inset shows some of the widest order-order interfaces at 300 K formed by \(\eta_{1}=\pm\frac{1}{2}\) (left), \(\eta_{3}=\pm\frac{1}{2}\) (middle), and \(\eta_{6}=\pm\frac{1}{2}\) (right). Figure 6: Phase field results showing the Li composition field resulting from applying a cycling current density for 260 and 300 K, for a 1 \(\mu\)m diameter particle. The dashed blue line shows the applied C-rate, where a positive sign denotes discharging and negative is charging, and the red lines show the corresponding average Li composition of the particle. ## References * [1] A. Van der Ven, M. K.. Aydinol, G. Ceder, G. Kresse, and J. Hafner. First-principles investigation of phase stability in Li\({}_{x}\)CoO\({}_{2}\). _Phys. Rev. B_, 58:2975-2987, 1998. * 216, 2019. * [3] G.H. Teichert, A.R. Natarajan, A. Van der Ven, and K. Garikipati. Scale bridging materials physics: Active learning workflows and integrable deep neural networks for free energy function representations in alloys. _Computer Methods in Applied Mechanics and Engineering_, 371:113281, 2020. * [4] J. N. Reimers and J. R. Dahn. Electrochemical and in situ x-ray diffraction studies of lithium intercalation in Li\({}_{x}\)CoO\({}_{2}\). _Journal of The Electrochemical Society_, 139(8):2091-2097, Aug 1992. * [5] M. Menetrier, I. Saadoune, S. Levasseur, and C. Delmas. The insulator-metal transition upon lithium deintercalation from LiCoO\({}_{2}\): Electronic properties and 7Li NMR study. _Journal of Materials Chemistry_, 9(5):1135-1140, 1999. cited By 337. * [6] Y. Shao-Horn, S. Levasseur, F. Weill, and C. Delmas. Probing lithium and vacancy ordering in O3 layered Li\({}_{x}\)CoO\({}_{2}\) (x\(\approx\)0.5). _Journal of The Electrochemical Society_, 150(3):A366, 2003. * [7] Y Takahashi, N Kijima, K Tokiwa, T Watanabe, and J Akimoto. Single-crystal synthesis, structure refinement and electrical properties of Li\({}_{0.5}\)CoO\({}_{2}\). _Journal of Physics: Condensed Matter_, 19(43):436202, Sep 2007. * [8] T. Motohashi, T. Ono, Y. 
Sugimoto, Y. Masubuchi, S. Kikkawa, R. Kanno, M. Karppinen, and H. Yamauchi. Electronic phase diagram of the layered cobalt oxide system Li\({}_{x}\)CoO\({}_{2}\) (\(0.0\leq x\leq 1.0\)). _Phys. Rev. B_, 80:165114, Oct 2009. * 1090, 2004. * 15, 2013. * [11] C. Wolverton and A. Zunger. First-principles prediction of vacancy order-disorder and intercalation battery voltages in Li\({}_{x}\)CoO\({}_{2}\). _Phys. Rev. Lett._, 81:606-609, Jul 1998. * [12] L. Kaufman and H. Bernstein. _Computer Calculation of Phase Diagrams_. Academic Press, NY, 1970. * 218, 2011. * [14] N. Nadkarni, T. Zhou, D. Fraggedakis, T. Gao, and M. Z. Bazant. Modeling the metal-insulator phase transition in LiXCoO\({}_{2}\) for energy and information storage. _Advanced Functional Materials_, 29(40):1902821, 2019. * [15] M. Cococcioni and S. de Gironcoli. Linear response approach to the calculation of the effective interaction parameters in the LDA + U method. _Phys. Rev. B_, 71:035105, Jan 2005. * [16] M. Aykol, S. Kim, and C. Wolverton. van der waals interactions in layered lithium cobalt oxides. _The Journal of Physical Chemistry C_, 119(33):19053-19058, 2015. * [17] C. B. Gopal and A. van de Walle. Ab initio thermodynamics of intrinsic oxygen vacancies in ceria. _Phys. Rev. B_, 86:134117, Oct 2012. * [18] ME Arroyo y de Dompablo, A Van der Ven, and G Ceder. First-principles calculations of lithium ordering and phase stability on li x nio 2. _Physical Review B_, 66(6):064112, 2002. * [19] K. Momma and F. Izumi. VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data. _J. Appl. Crystallogr._, 44:1272-1276, 2011. * [20] GG Amatucci, JM Tarascon, and LC Klein. Coo2, the end member of the li x coo2 solid solution. _Journal of The Electrochemical Society_, 143(3):1114, 1996. * [21] H Xia, L Lu, Y Sh Meng, and G Ceder. Phase transitions and high-voltage electrochemical behavior of licoo2 thin films grown by pulsed laser deposition. _Journal of The Electrochemical Society_, 154(4):A337, 2007. * [22][https://github.com/prisms-center/CASMcode](https://github.com/prisms-center/CASMcode). CASM: A Clusters Approach to Statistical Mechanics, v0.3.dev, 2018. * [23] G. G. Amatucci, J. M. Tarascon, and L. C. Klein. CoO\({}_{2}\), the end member of the Li\({}_{x}\)CoO\({}_{2}\) solid solution. _Journal of The Electrochemical Society_, 143(3):1114-1123, Mar 1996. * 199, 1977. * [25] B. Sadigh, P. Erhart, A. Stukowski, A. Caro, E. Martinez, and L. Zepeda-Ruiz. Scalable parallel monte carlo algorithm for atomistic simulations of precipitation in alloys. _Phys. Rev. B_, 85:184203, May 2012. * [26] B. Sadigh and P. Erhart. Calculation of excess free energies of precipitates via direct thermodynamic integration across phase boundaries. _Phys. Rev. B_, 86:134204, Oct 2012. * 3001, 1998. * [28] M. S. Dresselhaus, G. Dresselhaus, and A. Jorio. _Group theory: application to the physics of condensed matter_. Springer Science & Business Media, 2007. * [29] T. Jiang, S. Rudraraju, A. Roy, A. Van der Ven, K. Garikipati, and M. L. Falk. Multiphysics simulations of lithiation-induced stress in Li\({}_{1+x}\)Ti\({}_{2}\)O\({}_{4}\) electrode particles. _The Journal of Physical Chemistry C_, 120(49):27871-27881, 2016. * [30] I. Timrov, N. Marzari, and M. Cococcioni. Self-consistent hubbard parameters from density-functional perturbation theory in the ultrasoft and projector-augmented wave formulations. _Phys. Rev. B_, 103:045141, Jan 2021. * [31] S. H. Choi, J-W. Son, Y. S. Yoon, and J. Kim. 
Particle size effects on temperature-dependent performance of LiCoO\({}_{2}\) in lithium batteries. _Journal of power sources_, 158(2):1419-1424, 2006. * [32] F. Leng, C. M. Tan, and M. Pecht. Effect of temperature on the aging rate of li ion battery operating above room temperature. _Scientific reports_, 5(1):1-12, 2015. * [33] J. P. Pender, G. Jha, D. H. Youn, J. M Ziegler, I. Andoni, E. J. Choi, A. Heller, B. S. Dunn, P. S. Weiss, R. M. Penner, et al. Electrode degradation in lithium-ion batteries. _ACS nano_, 14(2):1243-1295, 2020. * [34] FK LeGoues and HI Aaronson. Influence of crystallography upon critical nucleus shapes and kinetics of homogeneous fcc-fcc nucleation--iv. comparisons between theory and experiment in cuco alloys. _Acta Metallurgica_, 32(10):1855-1864, 1984. * [35] J.P Simmons, C Shen, and Y Wang. Phase field modeling of simultaneous nucleation and growth by explicitly incorporating nucleation events. _Scripta Materialia_, 43(10):935-942, 2000. * [36] J.P. Simmons, Youhai Wen, C. Shen, and Y.Z. Wang. Microstructural development involving nucleation and growth phenomena simulated with the phase field method. _Materials Science and Engineering: A_, 365(1):136-143, 2004. Multiscale Materials Modelling. * [37] R. P. Sear. Nucleation: theory and applications to protein solutions and colloidal suspensions. _Journal of Physics: Condensed Matter_, 19(3):033101, jan 2007. * [38] H. Wang, Y-I. Jang, B. Huang, D. R. Sadoway, and Y-M. Chiang. TEM study of electrochemical cycling-induced damage and disorder in LiCoO\({}_{2}\) cathodes for rechargeable lithium batteries. _Journal of the Electrochemical Society_, 146(2):473, 1999. * [39] Gregory H Teichert, Sambit Das, Muratahan Aykol, Chirranjeevi Gopal, Vikram Gavini, and Krishna Garikipati. Li\({}_{x}\)CoO\({}_{2}\) phase stability studied by machine learning-enabled scale bridging between electronic structure, statistical mechanics and phase field theories. _arXiv preprint arXiv:2104.08318_, 2021. * [40] B. Orvananos, T. R. Ferguson, H-C. Yu, M. Z. Bazant, and K. Thornton. Particle-level modeling of the charge-discharge behavior of nanoparticulate phase-separating li-ion battery electrodes. _Journal of The Electrochemical Society_, 161(4):A535, 2014. * [41] A. J. Merryweather, C. Schnedermann, Q. Jacquet, C. P. Grey, and A. Rao. Operando optical tracking of single-particle ion dynamics in batteries. _Nature_, 594(7864):522-528, 2021. * [42] T. Thonhauser, V. R. Cooper, S. Li, A. Puzder, P. Hyldgaard, and D. C. Langreth. Van der Waals density functional: Self-consistent potential and the nature of the van der Waals bond. _Phys. Rev. B_, 76:125112, Sep 2007. * [43] J. Klimes, D. R. Bowler, and A. Michaelides. Chemical accuracy for the van der Waals density functional. _Journal of Physics: Condensed Matter_, 22(2):022201, Dec 2009. * [44] D C Langreth, B I Lundqvist, S D Chakarova-Kack, V R Cooper, M Dion, P Hyldgaard, A Kelkkanen, J Kleis, Lingzhu Kong, Shen Li, P G Moses, E Murray, A Puzder, H Rydberg, E Schroder, and T Thonhauser. A density functional for sparse matter. _Journal of Physics: Condensed Matter_, 21(8):084203, Jan 2009. * [45] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. 
Smogunov, P. Umari, and R. M. Wentzcovitch. Quantum espresso: a modular and open-source software project for quantum simulations of materials. _Journal of Physics: Condensed Matter_, 21(39):395502 (19pp), 2009. * [46] P Giannozzi, O Andreussi, T Brumme, O Bunau, M Buongiorno Nardelli, M Calandra, R Car, C Cavazzoni, D Ceresoli, M Cococcioni, N Colonna, I Carnimeo, A Dal Corso, S de Gironcoli, P Delugas, R A DiStasio Jr, A Ferretti, A Floris, G Fratesi, G Fugallo, R Gebauer, U Gerstmann, F Giustino, T Gorni, J Jia, M Kawamura, H-Y Ko, A Kokalj, E Kucukbenli, M Lazzeri, M Marsili, N Marzari, F Mauri, N L Nguyen, H-V Nguyen, A Otero de-la Roza, L Paulatto, S Ponce, D Rocca, R Sabatini, B Santra, M Schlipf, A P Seitsonen, A Smogunov, I Timrov, T Thonhauser, P Umari, N Vast, X Wu, and S Baroni. Advanced capabilities for materials modelling with quantum espresso. _Journal of Physics: Condensed Matter_, 29(46):465901, 2017. * [47] M. K. Aydinol, A. F. Kohan, G. Ceder, K. Cho, and J. Joannopoulos. Ab initio study of lithium intercalation in metal oxides and metal dichalcogenides. _Phys. Rev. B_, 56:1354-1365, Jul 1997. * [48] B. Meredig, A. Thompson, H. A. Hansen, C. Wolverton, and A. van de Walle. Method for locating low-energy solutions within \({\rm DFT}+u\). _Phys. Rev. B_, 82:195128, Nov 2010. * [49] S. Das, P. Motamarri, V. Subramanian, D. M. Rogers, and V. Gavini. DFT-FE 1.0: A massively parallel hybrid cpu-gpu density functional theory code using finite-element discretization. _Computer Physics Communications_, 280:108473, 2022. * a massively parallel adaptive finite-element code for large-scale density functional theory calculations. _Computer Physics Communications_, 246:106853, 2020. * [51] P. Motamarri and V. Gavini. Configurational forces in electronic structure calculations using kohn-sham density functional theory. _Phys. Rev. B_, 97:165132, Apr 2018. * [52] P. Motamarri, M.R. Nowak, K. Leiter, J. Knap, and V. Gavini. Higher-order adaptive finite-element methods for kohn-sham density functional theory. _Journal of Computational Physics_, 253:308-343, 2013. * [53] D. R. Hamann. Optimized norm-conserving Vanderbilt pseudopotentials. _Phys. Rev. B_, 88:239906, 1995. * [54] M. J. van Setten, M. Giantomassi, E. Bousquet, M. J. Verstraete, D. R. Hamann, X. Gonze, and G-M. Rignanese. The pseudodojo: Training and grading a 85 element optimized norm-conserving pseudopotential table. _Comput. Phys. Commun._, 226:39-54, 2018. * [55] J. M. Sanchez, F. Ducastelle, and D. Gratias. Generalized cluster description of multicomponent systems. _Physica A_, 128:334-350, 1984. * 176. Academic Press, 1994. * [57] A. Van der Ven, J. C. Thomas, Q. Xu, and J. Bhattacharya. Linking the electronic structure of solids to their thermodynamic and kinetic properties. _Mathematics and Computers in Simulation_, 80(7):1393-1410, 2010. * [58] J. C. Thomas and A. Van der Ven. Finite-temperature properties of strongly anharmonic and mechanically unstable crystal phases from first principles. _Physical Review B_, 88(21):214111-214111, December 2013. * [59] B. Puchala and A. Van der Ven. Thermodynamics of the Zr-O system from first-principles calculations. _Phys. Rev. B_, 88:094108, Sep 2013. * [60] A. R. Natarajan, J. C. Thomas, B. Puchala, and A. Van der Ven. Symmetry-adapted order parameters and free energies for solids undergoing order-disorder phase transitions. _Phys. Rev. B_, 96:134204, Oct 2017. * [61] J. C. Thomas and A. Van der Ven. 
The exploration of nonlinear elasticity and its efficient parameterization for crystalline materials. _Journal of the Mechanics and Physics of Solids_, 107:76-95, October 2017. * 1467, 2004. * a new sampling algorithm for control and optimization. _IFAC Proceedings Volumes_, 47(3):6123-6128, 2014. 19th IFAC World Congress. * [64] J. W. Cahn and J. E. Hilliard. Free energy of a nonuniform system. I Interfacial energy. _The Journal of Chemical Physics_, 28:258-267, 1958. * [65] S. M. Allen and J. W. Cahn. A microscopic theory for antiphase boundary motion and its application to antiphase boundary coarsening. _Acta Metallurgica_, 27:1085-1091, 1979. * [66] A. Van der Ven and G Ceder. Lithium diffusion in layered Li\({}_{x}\)CoO\({}_{2}\). _Electrochemical and Solid-State Letters_, 3(7):301, 2000. * [67] D. Arndt, W. Bangerth, D. Davydov, T. Heister, L. Heltai, M. Kronbichler, M. Maier, J-P. Pelteret, B. Turcksin, and D. Wells. The deal.II library, version 8.5. _Journal of Numerical Mathematics_, 25(3), April 2017. Supplementary Information: Machine learning-enabled scale bridging between first principles statistical mechanics and continuum phase field computations for phase stability of Li\({}_{x}\)CoO\({}_{2}\) G.H. Teichert S. Das M. Faghih Shojaei J. Holber T. Mueller L. Hung V. Gavinia K. Garikipati Department of Mechanical Engineering, University of Michigan Toyota Research Institute, Los Altos, CA Department of Materials Science & Engineering, University of Michigan Michigan Institute for Computational Discovery & Engineering, University of Michigan Applied Physics Program, University of Michigan Materials Science & Engineering, Johns Hopkins University ###### Abstract ## S1 Detailed Methods ### Density functional theory methods The formation energy \(E\) of a configuration with Li composition \(x\) can be calculated using total energy values computed using density functional theory (DFT) according to the following equation [1]: \[E_{\text{Li}_{x}\text{CoO}_{2}}=E^{\text{tot}}-xE_{\text{LiCoO}_{2}}^{\text{ tot}}-(1-x)E_{\text{CoO}_{2}}^{\text{tot}}\] (S1) where \(E^{\text{tot}}\), \(E_{\text{LiCoO}_{2}}^{\text{tot}}\), and \(E_{\text{CoO}_{2}}^{\text{tot}}\) are the total energies for the given configuration, LiCoO\({}_{2}\), and CoO\({}_{2}\), respectively. In performing DFT calculations for Li\({}_{x}\)CoO\({}_{2}\), Aykol and co-authors found that it is important to include not only the Hubbard correction (DFT+\(U\)), which reduces self-interaction errors [2; 3; 4]), but to also include the van der Waals (vdW) interactions [5]. The effect of the vdW interactions allows the voltage predicted by DFT to match the experimental voltage when using an appropriately tuned value of \(U\). While both vdW-corrections and vdW-density functionals (vdW-DF) improve the DFT results, predictions are most consistent and accurate with vdW-DF. In this workflow, we use a simplified rotational-invariant formulation of DFT+\(U\)[6] and a vdW-DF exchange correlation functional, namely the optB88 exchange correlation functional [7; 8; 9; 10; 5; 11; 12], to calculate the formation energy on a chosen subset of LCO configurations. The CASM (Clusters' Approach to Statistical Mechanics) software suite was used to identify a set of configurations with the O3 crystal structure for parametrizing the cluster expansion. Fully-relaxed DFT calculations were completed for 333 of these configurations, with CASM automatically adjusting the size of the k-points grid according to atomic configuration. 
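To illustrate Eq. (S1) and the convex hull construction used above to identify ground-state configurations, the sketch below computes formation energies from total energies and extracts the compositions on the lower convex hull; the numerical values are fictitious placeholders, not DFT results.

```python
# Sketch of Eq. (S1) and the lower convex hull over composition. The total
# energies below are fictitious placeholders, not DFT results.
import numpy as np
from scipy.spatial import ConvexHull

def formation_energy(E_tot, x, E_LiCoO2, E_CoO2):
    """Eq. (S1): reference a configuration's total energy to the end members."""
    return E_tot - x * E_LiCoO2 - (1.0 - x) * E_CoO2

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])                    # Li compositions
E_tot = np.array([-42.10, -44.95, -47.95, -50.60, -53.30])   # placeholder totals
E_f = formation_energy(E_tot, x, E_LiCoO2=E_tot[-1], E_CoO2=E_tot[0])

# Lower convex hull of the (x, E_f) points: hull edges whose outward normal
# points downward. Configurations at these vertices are predicted ground states.
hull = ConvexHull(np.column_stack([x, E_f]))
ground_states = sorted({v for eq, s in zip(hull.equations, hull.simplices)
                        if eq[1] < 0 for v in s})
print(E_f)                  # end members are zero by construction
print(x[ground_states])     # compositions on the lower hull
```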
We perform the ground-state DFT calculations with geometry optimization in Quantum Espresso [13; 14], using projector augmented-wave (PAW) pseudopotentials calculated with the Perdew-Burke-Ernzerhof (PBE) functional from PSlibrary 1.0.0 [15; 16]. The values for the wave function and charge density cutoffs are chosen in two steps. Starting with the cutoff values suggested in the pseudopotential file for Co, we first increase only the charge density cutoff until the total energy converges to \(<\)1 meV/atom. Next, we increase both the wave function and the charge density cutoffs, maintaining the ratio of the two values, until the total energy again converges to \(<\)1 meV/atom, giving a wave function cutoff of 55 Ha and a charge density cutoff of 301.5 Ha. A k-point grid of \(6\times 6\times 3\) is also used to ensure total energy convergence within \(<\)1 meV/atom. Structural optimization is performed until cell stress and ionic forces are under 0.5 kbar and 0.00005 Ha/Bohr, respectively. The crystal structure for O3-LiCoO\({}_{2}\) is shown in Figure 1a. In this work, we calibrate the Hubbard \(U\) parameter to match experimental average lithiation voltages over various ranges of Li composition. To do this, we compute the voltage at increasing \(U\) for given composition values of \(x_{1}\) and \(x_{2}\)[17], using crystal structures previously reported from experiments. The average voltage from \(x_{1}\) to \(x_{2}\) is calculated using the following equation [18; 5]: \[V=-\frac{E_{\text{Li}_{x_{2}}\text{CoO}_{2}}-E_{\text{Li}_{x_{1}}\text{CoO }_{2}}-(x_{2}-x_{1})E_{\text{Li}}}{(x_{2}-x_{1})e}\] (S2) where \(E\) is the calculated total energy from DFT and \(e\) is the charge on an electron. We compute the average voltage for Li composition: \(\{x_{1},x_{2}\}=\{0,1\},\{0,1/2\}\) and \(\{1/2,1\}\). By comparing with experimental voltages (Figure 1b), we select an appropriate value of \(U=2.5\) eV. The development of charge ordering was monitored through the magnetic moments in the DFT calculations [5]. Evidence of charge splitting appears in the DFT results in this work for the row ordering on the left in Figure 1d, but not in the other row ordering or in the zig-zag ordering. Charge ordering of Co\({}^{3+}\) and Co\({}^{4+}\) atoms has been experimentally observed at low temperatures and Li compositions of \(x=1/2\) and \(x=2/3\)[19]. #### s1.1.1 DFT calculations for interface energy Large-scale DFT computations were performed for the coherent interface energy of the anti-phase boundary between the \(\eta_{1}=1/2\) and \(\eta_{3}=1/2\) ordered LCO rotational variants. These computations were performed using the DFT-FE software [20; 21; 22; 23], a recently developed massively parallel open-source code for large-scale real-space Kohn-Sham DFT studies based on a finite-element discretization. We employ the PBE exchange-correlation functional and the optimized norm-conserving Vanderbilt pseudopotentials (ONCV) [24] from the Pseudo Dojo library [25]. All numerical parameters in DFT-FE were chosen such that the ground-state energies converged to an accuracy of 1.5 meV/atom. Additionally, ionic forces and cell stresses were relaxed to under 5 meV/A and 0.5 Kbar respectively, and Fermi-Dirac smearing with a temperature of 500 K was used for all simulations. In order to compute the interface energy, we considered two types of simulation cells. 
The first was a periodic simulation cell representing the interface system, constructed with equal number of mutually commensurate (MC) supercells (see SSS1.2.2) for each ordered variant resulting in two interfaces upon accounting for the periodicity of the simulation domain. The MC supercells were arranged along the lattice vector that is not part of the plane forming the interface, as shown in Figure 3f for a two MC supercell case. Additionally, we consider another periodic simulation cell to represent the bulk, constructed using one MC supercell corresponding to either of the ordered variants. Since the corresponding supercells are related through rotations, the bulk energies are equal for both the ordered variants. Next, we performed ground-state DFT calculations on the interface system and the bulk system with full structural relaxation of ionic forces and cell stress. The ionic forces and cell stresses are relaxed self-consistently where each cell stress relaxation update involves a full ionic forces relaxation keeping cell vectors fixed. The total interface energy per unit area of interface, considered to be the average of the two interfaces and denoted by \(\gamma^{\prime}\), is given by the following energy difference \[\gamma^{\prime}=\frac{E_{\text{(Var1+Var2)}}-2N_{\text{cell}}E_{\text{bulk}}}{2 A}\,,\] (S3) where \(N_{\text{cell}}\) denotes the number of MC supercells for each variant, \(A\) denotes the area of the interface after structural relaxation, and the factor of 2 in the denominator accounts for the averaging over the two interfaces in the periodic system. The interface energy \(\gamma^{\prime}\) is composed of two contributions. The first is a short-ranged term due to the local deviation of the atomic arrangement across the interface, which is denoted by \(\gamma\). The second is a long-ranged contribution due to the elastic misfit strain to maintain the coherency of the interface. Although the misfit elastic strain decays away from the interface, it is not computationally feasible to resolve it by explicit DFT calculations with periodic supercells. Due to the use of periodic boundary conditions in the interface calculations, the misfit elastic strain contributes an energy to \(\gamma^{\prime}\) that scales linearly with number of MC supercells: \[\gamma^{\prime}=\gamma+(k_{1}+k_{2})\times N_{\rm cell}\,,\] (S4) where \(k_{1}\) and \(k_{2}\) denote the misfit elastic energy per MC supercell for the two ordered variants. Next, the linear scaling elastic misfit strain energy contribution to \(\gamma^{\prime}\) is eliminated by finding the intercept of Eq. (S4) at \(N_{\rm cell}=0\) to obtain \(\gamma\). A similar approach has been employed in previous atomistic calculations of coherent interface energies [26; 27]. Table S1 shows the energies and system sizes used to determine \(\gamma\) in the LCO system, where we consider \(N_{\rm cell}\) ranging from 1-3 and obtain the expected close to a linear relationship between \(\gamma^{\prime}\) and \(N_{\rm cell}\). Finally, \(\gamma\) is estimated to be 30.9 mJ/m\({}^{2}\). We additionally remark that the above calculations required simulation domains consisting of \(\sim\) 500-1400 atoms (4,000-12,000 electrons) combined with around 200 geometry updates for each simulation domain size, indicating the large-scale nature of these calculations. ### Statistical mechanics In this work, we follow the statistical mechanics approach outlined by Van der Ven, et al. 
[1] in their first-principles study of Li\({}_{x}\)CoO\({}_{2}\) (LCO), and of Natarajan, et al. [28], among others. #### s1.2.1 Cluster expansion for formation energy Given the expense of DFT calculations, we adopt cluster expansions to access the large numbers of configurations needed in the statistical mechanics studies. The formation energy is written as \(E_{\rm f}(\mathbf{\sigma})\), where \(\mathbf{\sigma}\) is the configuration vector with the occupancy variable \(\sigma_{i}=1\) if Li occupies the site \(i\) and \(\sigma_{i}=0\) if the site contains a vacancy. As is common, we define \(E_{\rm f}(\mathbf{\sigma})\) by computing the formation energy with DFT for a subset of configurations and use these values to parameterize a cluster expansion [29; 30] as a rapidly queryable surrogate for the formation energy. We use the CASM software[31], which facilitates the construction and parameterization of cluster expansion Hamiltonians and their use in Monte Carlo simulations, to select configurations for the DFT computations and perform the statistical mechanics calculations in this work [32; 33; 34]. Candidate configurations are chosen for O3-Li\({}_{x}\)CoO\({}_{2}\) with Li compositions from \(x=0\) to \(x=1\). A cluster is a collection of sites on the Li sub-lattice. Given a cluster of sites \(\alpha=\{i,j,\ldots,k\}\), a polynomial \(\phi_{\alpha}\) can be defined as the product of occupancy variables of those sites, i.e. \(\phi_{\alpha}=\sigma_{i}\sigma_{j}\cdots\sigma_{k}\). A cluster expansion is a linear combination of the polynomials \(\phi_{\alpha}\), leading to the following form for the formation energy: \[E_{\rm f}(\mathbf{\sigma})=V_{0}+\sum_{\alpha}V_{\alpha}\phi_{\alpha}(\mathbf{\sigma }),\] (S5) where the coefficients \(V_{0}\) and \(V_{\alpha}\) are optimized for a "best fit", and are called effective cluster interactions (ECI). We fit a cluster expansion to the formation energy calculated using DFT for 333 configurations for the O3 host structure using the CASM code. A sparse regression technique that combines a genetic algorithm with weighted linear regression is used to perform the fit. In this approach, a number of candidate basis functions are created using singletons, pairs, triplets and quadruplets of lattice sites that are within a given distance from each other. The genetic algorithm is used to select a subset of these basis functions to include in the cluster expansion. The coefficients of each subset of basis functions are calculated using linear regression. We use cluster types up to quadruplets, with maximum lattice site distances of 24 \(\AA\) for pairs, 8 \(\AA\) for triplets and 6 \(\AA\) for quadruplets, for a total of 221 candidate basis functions. During the linear regression, we apply weights, \(w_{i}\) to bias the fit towards greater accuracy for configurations on or near the convex hull using the energy difference, \(\Delta E_{i}^{\rm hull}\) between data point \(i\) and the convex hull at that configuration: \[w_{i}=15\exp\left(-\frac{\Delta E_{i}^{\rm hull}}{0.005}\right)+0.5\] (S6) Additionally, the genetic algorithm is constrained to select basis functions such that the convex hull constructed from the cluster expansion predictions for all 333 configurations consists of the same configurations as in the DFT convex hull. 
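The following sketch illustrates the cluster expansion of Eq. (S5) and the hull-distance weighting of Eq. (S6) used in the regression; the cluster list, ECI values, and correlation matrix are placeholders, and the summation over symmetry-equivalent clusters performed by CASM is omitted.

```python
# Sketch of Eq. (S5) and the weighting of Eq. (S6). Clusters and ECI values are
# placeholders; the symmetry-equivalent-cluster sum performed by CASM is omitted.
import numpy as np

def cluster_expansion_energy(sigma, V0, clusters, eci):
    """Eq. (S5): E_f(sigma) = V0 + sum_alpha V_alpha * prod_{i in alpha} sigma_i."""
    return V0 + sum(V * np.prod(sigma[list(alpha)])
                    for alpha, V in zip(clusters, eci))

def hull_weight(dE_hull):
    """Eq. (S6): bias the regression toward configurations near the convex hull."""
    return 15.0 * np.exp(-dE_hull / 0.005) + 0.5

def fit_eci(Phi, E, dE_hull):
    """Weighted least squares for the ECI vector V in E ~ Phi @ V."""
    W = np.diag(hull_weight(dE_hull))
    return np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ E)

# Toy usage: 8 Li sites; a point, a pair, and a triplet cluster (fictitious ECIs).
sigma = np.array([1, 0, 1, 1, 0, 0, 1, 0])
clusters = [(0,), (0, 1), (2, 3, 6)]
eci = np.array([-0.02, 0.015, -0.004])
print(cluster_expansion_energy(sigma, V0=-0.1, clusters=clusters, eci=eci))
```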
#### s1.2.2 Symmetry-adapted order parameters In addition to composition, symmetry-adapted order parameters are useful for observing and tracking order-disorder transitions, as well as for identifying the various translational and rotational variants of a given ordering, such as those appearing in Figure 3a for the zig-zag ordering at composition \(x\approx 1/2\). The method for identifying symmetry-adapted order parameters is laid out elsewhere; e.g., see Natarajan, et al. [28]. In this process supercells are identified corresponding to each variant (Figure 3b). A mutually commensurate supercell is then identified that encompasses each distinct supercell and is therefore sufficient for representing all the 12 variants. For the zig-zag ordering it includes 32 Li sublattice sites (Figures 3c-3d). The sublattice compositions of this supercell form a basis that can describe each variant as a vector \(\mathbf{x}\in\mathbb{R}^{32}\), where each component is 1 if the corresponding sublattice sites are fully occupied by Li and 0 otherwise. The symmetry group, \(P\), of the zig-zag ordering consists of 384 unique linear transformations between the 12 variants each represented by a matrix in \(\mathbb{R}^{32\times 32}\). Following the algorithm from Thomas and Van der Ven [35], we constructed a \(P\)-invariant matrix and performed its eigenvalue decomposition. It resulted in eight nonzero eigenvalues: two distinct, two repeated three times, and four repeated six times, and eight corresponding sets of eigenvectors, for a total of 32 eigenvectors. The eigenvectors formed the rows of the orthogonal transformation matrix \(\mathbf{Q}\in\mathbb{R}^{32\times 32}\), that maps a sublattice composition vector \(\mathbf{x}\) to a vector \(\mathbf{\eta}\) with 32 order parameters. The subset of order parameters relevant to the zig-zag ordering was identified by operating with \(\mathbf{Q}\) on the vectors \(\mathbf{x}^{(i)}\), \(i=1,\ldots,12\) describing the variants of that ordering. Using zero indexing, components \(\eta_{7}^{(i)}\) through \(\eta_{31}^{(i)}\) of \(\mathbf{\eta}\in\mathbb{R}^{32}\) are zero for all variants \(i=1,\ldots,12\), and are irrelevant for describing the zig-zag ordering (Figure 3c). This defines the seven relevant order parameters, \(\eta_{0},\ldots,\eta_{6}\). The first of these, \(\eta_{0}\), is associated with one of the distinct eigenvalue and corresponds to the composition averaged over all 32 sublattice sites; i.e., \(\eta_{0}=x\). The other six order parameters, \(\eta_{1},\ldots,\eta_{6}\) are associated with one of the degenerate eigenvalues that has six corresponding eigenvectors. Since the free energy density is invariant under transformations of the triangular Li sublattice that map between the ordered variants, the IDNN representation is presented with features that are symmetric functions of \(\eta_{0},\ldots\eta_{6}\) under these transformations. Monomials of up to sixth order were chosen and subjected to the Reynolds operator, by summing: \[h(\mathbf{\eta})=\sum_{\mathbf{M}^{(\eta)}\in\mathcal{M}}f(\mathbf{M}^{(\eta)}\mathbf{\eta}),\] (S7) where for each \(\mathbf{M}\in P\) we have \(\mathbf{M}^{\eta}=\mathbf{Q}\mathbf{M}\). This operation yields the P-invariant polynomial functions in Eq. (1) as IDNN features. 
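A minimal sketch of the Reynolds operator in Eq. (S7) follows: a monomial of the order parameters is symmetrized by summing over the group operations mapped into \(\mathbf{\eta}\) space. The two-element placeholder group below stands in for the 384 operations of \(P\).

```python
# Sketch of the Reynolds operator, Eq. (S7). The two-element group here is a
# placeholder standing in for the 384 operations of P mapped into eta space.
import numpy as np

def reynolds(f, group_eta):
    """Return h(eta) = sum over M_eta in the group of f(M_eta @ eta)."""
    return lambda eta: sum(f(M @ eta) for M in group_eta)

def monomial(eta):
    """An example third-order monomial of the relevant order parameters."""
    return eta[1] ** 2 * eta[2]

# Placeholder group: identity plus the operation that swaps eta_1 and eta_2.
I = np.eye(32)
S = np.eye(32)
S[1, 1] = S[2, 2] = 0.0
S[1, 2] = S[2, 1] = 1.0
h = reynolds(monomial, [I, S])

eta = np.zeros(32)
eta[0], eta[1], eta[2] = 0.5, 0.5, 0.25
print(h(eta), h(S @ eta))   # equal: h is invariant under the placeholder group
```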
#### s1.2.3 Monte Carlo sampling Given \(E_{\text{f}}(\mathbf{\sigma})\), we sample within the semi-grand canonical ensemble, in which the chemical potential is specified and the corresponding composition and/or order parameters are determined through ensemble averaging. The partition function for the semi-grand canonical ensemble is the following: \[\Theta=\sum_{\mathbf{\sigma}}\exp\left(-\frac{E(\mathbf{\sigma})-M\widehat{\mathbf{\eta}}( \mathbf{\sigma})\cdot\widehat{\mathbf{\mu}}}{k_{B}T}\right)\] (S8) where \(\widehat{\mathbf{\eta}}=\langle\eta_{0},\ldots,\eta_{6}\rangle^{\text{T}}\) represents the reduced vector of order parameters with \(\eta_{0}=x\) and \(\widehat{\mathbf{\mu}}=\langle\mu_{0},\ldots,\mu_{6}\rangle\) is the corresponding vector of chemical potentials, \(M\) is the number of reference supercells that tile the configuration, \(k_{B}\) is the Boltzmann constant, and \(T\) is the temperature. Each chemical potential component is the derivative of the free energy with respect to its corresponding composition or order parameter component. The chemical potential associated with Li composition is related to the voltage as a function of composition, \(V(x)\) as \[V(x)=-\frac{\mu_{0}^{\mathrm{cathode}}(x)-\mu_{0}^{\mathrm{anode}}}{e},\] (S9) where \(\mu_{0}^{\mathrm{cathode}}\) is the chemical potential with respect to Li for the cathode (LCO) and \(\mu_{0}^{\mathrm{anode}}\) is the chemical potential of the anode. Taking the anode to be Li metal, \(\mu_{0}^{\mathrm{anode}}\) is a constant equal to the Gibbs free energy of Li [18]. It is necessary for the two chemical potentials to have a consistent reference, however. We approximate the Gibbs free energy of Li with the total energy calculated using DFT, but the chemical potential for LCO reported from Monte Carlo sampling is based on formation energies that were re-referenced to the end members, according to the linear relation in Eq. (S1). Since the chemical potential is the derivative of the free energy with respect to composition, this re-referencing shifted the chemical potential of LCO by a constant \(k\) equal to the derivative of Eq. (S1) with respect to \(x\), i.e. \(k:=-E_{\mathrm{LiCoO}_{2}}^{\mathrm{tot}}+E_{\mathrm{CoO}_{2}}^{\mathrm{ tot}}\). Therefore, we reverse the shift in the chemical potential by subtracting \(k\) from \(\mu_{0}^{\mathrm{cathode}}(x)\) before computing the voltage according to Eq. (S9). This imposes a consistent reference for both \(\mu_{0}^{\mathrm{cathode}}\) and \(\mu_{0}^{\mathrm{anode}}\). A limitation with Monte Carlo sampling within the semi-grand canonical ensemble is that it does not produce data within the unstable regions associated with phase separation. While this is sufficient for delineating phase diagrams and predicting voltage, phase field simulations require free energy information within these two-phase regions in order to consistently resolve phase interfaces. Additional Monte Carlo calculations were performed for temperatures of 260 K, 300 K, and 340 K, this time by umbrella sampling using bias potentials to sample within the unstable regions of the order-disorder transition [28, 36, 37, 38, 39]. Since the fluctuations in the chemical potential around \(\eta_{0}=x=1/2\) and for the locations of wells at \(\eta_{1},\ldots,\eta_{6}\approx\pm 1/2\) are important for representing the ordering in the phase field models at temperatures below \(\sim\)330 K, dense sampling of chemical potential data was performed in those regions for 260 K and 300 K. 
Additional sampling was also performed near the extreme compositions of \(\eta_{0}=x=\{0,1\}\) and order parameters \(\eta_{1},\ldots,\eta_{6}=\pm 1/2\) to help capture the divergent nature of the chemical potential, and in the unstable regions. The partition function used in this case, then, is the following: \[\Theta=\sum_{\boldsymbol{\sigma}}\exp\left(-\frac{E(\boldsymbol{\sigma})+ \sum_{i=0}^{6}\phi_{i}(\eta_{i}(\boldsymbol{\sigma})-\kappa_{i})^{2}}{k_{B}T}\right)\] (S10) where \(\phi_{i}\) and \(\kappa_{i}\) determine the curvature and center of the bias potential, respectively, and the inner sum is over the composition and six order parameters. Usually, \(\phi_{i}\) can be fixed at an appropriate value while \(\kappa_{i}\) is varied to sample across the desired composition and order parameter space within the Monte Carlo sampling routine. The ensemble average of the composition \(\langle\eta_{0}\rangle\) and each order parameter \(\langle\eta_{i}\rangle\), \(i=1,\ldots,6\) is related to its corresponding chemical potential through the bias parameters: \[\frac{1}{M}\mu_{i}\Big{|}_{\langle\boldsymbol{\eta}\rangle}=-2\phi_{i}( \langle\eta_{i}\rangle-\kappa_{i}),\qquad i=0,\ldots,6\] (S11) Regions with orderings are recognized by the steepness and positive slope of the curve traced out by connecting the chemical potential data points. This indicates a strongly convex free energy well and thermodynamic preference for these structures over the corresponding composition intervals. An ordering is also predicted at \(x=1/3\), appearing below approximately 235 K. This ordering has not been seen experimentally at room temperature, but some evidence has been reported at low temperatures [40], which is consistent with our results. The improved accuracy of the predicted order-disorder temperatures over previous first-principles work [1] is likely due to the advanced methods of DFT+\(U\) with van der Waals interactions included (SS1.1). The Monte Carlo calculations are run with a variance in ensemble averages \(\langle\eta_{i}\rangle=3\times 10^{-4}\) from which the precision of the order parameter is computed as \(\mathrm{Var}(\langle\eta_{i}\rangle-\kappa_{i})\). The precision in \(\langle\mu_{i}\rangle\) follows using (S11). The computational runtime of the smallest, average, and largest calculation are (17 s, 97 s, 64941 s) and (1 s, 166 s, and 13700 s) for standard sampling and umbrella sampling, respectively. Standard sampling has 5487 points on average per Monte Carlo simulation, and umbrella sampling has 52789. ### Integrable deep neural networks (IDNNs) for free energy representations The IDNN representation is obtained for the free energy density function by training on derivative data in the form of pairs of chemical potential-the label-and corresponding (symmetry-invariant functions of) composition or order parameter-the features [41; 42]. The integrability of the IDNN is built into its structure by constructing it as the derivative form of a standard fully-connected deep neural network (DNN). While it is straightforward to differentiate the equations describing the standard DNN to derive the equations for the IDNN, modern deep learning libraries make this step unnecessary. The user can simply construct a standard DNN and apply a gradient operation to create the IDNN, which is then used for training. 
Mathematically, a DNN can be denoted by a function \(Y(\mathbf{X},\mathbf{W},\mathbf{b})\) representing the ensemble averaged chemical potentials \(\langle\mu_{i}\rangle\), with arguments or inputs \(\mathbf{X}\in\mathbb{R}^{7}\) representing the composition \(\langle x\rangle=\langle\eta_{0}\rangle\) and remaining order parameters \(\langle\eta_{i}\rangle\), \(i=1,\dots 6\), weights \(\mathbf{W}\), and biases \(\mathbf{b}\). Training the DNN involves the solution of an optimization problem for the weights and biases, given the dataset \(\{(\widehat{\mathbf{X}}_{\theta},\widehat{Y}_{\theta})\}\): \[\widehat{\mathbf{W}},\widehat{\mathbf{b}}=\operatorname*{arg\,min}_{\mathbf{W},\mathbf{b}} \operatorname{MSE}\left(Y(\mathbf{X},\mathbf{W},\mathbf{b})\Big{|}_{\widehat{\mathbf{X}}_{ \theta}},\widehat{Y}_{\theta}\right)\] (S12) The case of interest here is when the dataset \(\{(\widehat{\mathbf{X}}_{\theta},\widehat{Y}_{\theta})\}\) is not available. Instead, we have the derivative dataset \(\{(\widehat{\mathbf{X}}_{\theta},\widehat{y}_{\theta_{k}})\}\), where \(\widehat{y}_{\theta_{k}}\) corresponds to the partial derivative of \(\widehat{Y}_{\theta}\) with respect to the \(k^{\text{th}}\) component of \(\widehat{\mathbf{X}}_{\theta}\). To use these data, the IDNN is defined as the gradient of \(Y\) with respect to its inputs \(\mathbf{X}\), i.e. \(\partial Y(\mathbf{X},\mathbf{W},\mathbf{b})/\partial X_{k}\). The training is defined as follows: \[\widehat{\mathbf{W}},\widehat{\mathbf{b}}=\operatorname*{arg\,min}_{\mathbf{W},\mathbf{b}} \,\sum_{k=1}^{n}\operatorname{MSE}\left(\frac{\partial Y(\mathbf{X},\mathbf{W},\mathbf{b })}{\partial X_{k}}\Big{|}_{\widehat{\mathbf{X}}_{\theta}},\widehat{y}_{\theta_{k }}\right)\] (S13) The resulting optimized weights \(\widehat{\mathbf{W}}\) and biases \(\widehat{\mathbf{b}}\) can be used with the function \(\partial Y(\mathbf{X},\widehat{\mathbf{W}},\widehat{\mathbf{b}})/\partial X_{k}\) to return a prediction of the chemical potential. Its antiderivative is exactly represented by using the same weights and biases in the function \(Y(\mathbf{X},\widehat{\mathbf{W}},\widehat{\mathbf{b}})\). For the current work, \(\partial Y(\mathbf{X},\widehat{\mathbf{W}},\widehat{\mathbf{b}})/\partial X_{k}\) gives the IDNN representation of the chemical potentials, and \(Y(\mathbf{X},\widehat{\mathbf{W}},\widehat{\mathbf{b}})\) is the DNN representation of the free energy. ### Sampling and active learning workflow In order to train an IDNN that represents the chemical potential data for composition plus the six order parameters associated with the zig-zag ordering, we must sample data in a seven-dimensional space. Rather than sampling uniformly and densely across the entire space, which would require a potentially prohibitive number of Monte Carlo simulations, we focus on the regions that are both significant physically and have features that may be difficult to capture. These regions include the energy wells related to the variants of the zig-zag ordering and the divergent behavior of the chemical potential at the boundaries of the order parameter space, including the composition end members at \(\eta_{0}=x=\{0,1\}\). Some general, unguided sampling of the order parameter space is also performed to capture overall trends. Thus, this first method of sampling involves the _exploration_ of areas known _a priori_ to be of interest. 
Since the purpose of the sampling is to develop a surrogate model of the free energy, we combine the exploration sampling with a second method of sampling that _exploits_ the partially trained surrogate to identify additional areas with data that may be helpful in improving the surrogate model. For the current workflow, this exploitation consists simply of identifying a specified number of data points at which the surrogate model shows high point-wise error. The high error suggests that the landscape in this region is difficult to capture, and so more data are sampled here. The combined exploration/exploitation sampling approach forms the active learning workflow, where a cycle of exploration sampling, IDNN training, and exploitation sampling is followed until a stopping criterion is met. Before sampling within the space, it is necessary to define the boundaries of the space. The sublattice parameter space is the unit hypercube in 32 dimensions, and \(\widehat{\mathbf{Q}}\), the reduced matrix obtained from \(\mathbf{Q}\) (in Figure 3e) by restriction to its first seven rows, transforms from the sublattice space to the order parameter space. A uniform sampling of the sublattice space, transformed through \(\widehat{\mathbf{Q}}\), does not produce a uniform sampling of the order parameters, however, due in part to the dimension reduction. Instead, we define bounding planes in the order parameter space. First, we restrict the space so that \(\eta_{7}\) through \(\eta_{31}\) are equal to zero, i.e., any ordering can only be a variant of the zig-zag ordering. Using this restriction, the relationship \(\mathbf{\eta}=\mathbf{Q}\mathbf{x}\), and applying the physical constraint that the sublattice compositions satisfy \(0\leq x_{i}\leq 1\), \(i=0,\ldots,31\), we can use the inverse of the full \(\mathbf{Q}\) matrix to define the following upper bounding planes: \[\sum_{j=0}^{6}Q_{ij}^{-1}\eta_{j}\leq 1,\qquad i=0,\ldots,31\] (S14) and lower bounding planes: \[\sum_{j=0}^{6}Q_{ij}^{-1}\eta_{j}\geq 0,\qquad i=0,\ldots,31\] (S15) where we have used zero indexing of vectors in \(\mathbb{R}^{32}\) and \(\mathbf{Q}\in\mathbb{R}^{32\times 32}\). We use the Billiard Walk [43] random sampling algorithm to sample within the bounding planes. A summary of the algorithm is as follows: given an initial point within the bounding planes, a random trajectory and length are chosen. The trajectory is followed until reaching a bounding plane, at which point the trajectory is updated according to its reflection off the plane. The reflections continue until reaching the full length of the trajectory, which defines the next point in the Billiard Walk. This process is repeated until the desired number of globally sampled internal points is reached. A byproduct of the Billiard Walk is a collection of quasi-random boundary points. We take a random subset of these boundary points to help capture the divergent behavior of the chemical potentials at the boundary during early iterations of the workflow. The areas around the end members and energy wells are randomly sampled within a hypercube of side length 0.15. We also explicitly sample along and near the order-disorder transition paths in the order parameter space at \(\eta_{0}=x=0.5\). The active learning workflow (Figure 4a,b) guided sampling with exploration, training and exploitation near the bounds, high-error points, wells and unstable regions of the reduced-order \(\widehat{\mathbf{\eta}}\in\mathbb{R}^{7}\) subspace. A minimal sketch of the admissible-domain test and a single Billiard Walk move is given below.
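The sketch below checks the admissible-domain conditions of Eqs. (S14)-(S15) and performs one Billiard Walk move, using a coarse ray-marching reflection rather than the event-driven algorithm of Ref. [43]; the toy domain (A equal to the identity) stands in for the true \(32\times 7\) block of \(\mathbf{Q}^{-1}\).

```python
# Sketch of the admissible-domain test, Eqs. (S14)-(S15), and one Billiard Walk
# move. Coarse ray-marching reflection is used instead of the event-driven
# algorithm of Ref. [43]; A = identity is a toy stand-in for Q^{-1}[:, :7].
import numpy as np

rng = np.random.default_rng(0)

def inside(eta7, A):
    """Eqs. (S14)-(S15): all implied sublattice compositions lie in [0, 1]."""
    x = A @ eta7
    return np.all(x >= 0.0) and np.all(x <= 1.0)

def billiard_step(eta7, A, tau=0.1, n_sub=1000):
    """One Billiard Walk move with reflections off the bounding planes."""
    d = rng.normal(size=eta7.shape)
    d /= np.linalg.norm(d)                     # random direction
    h = rng.exponential(tau) / n_sub           # random trajectory length, split up
    p = eta7.copy()
    for _ in range(n_sub):
        q = p + h * d
        x = A @ q
        if np.any(x < 0.0) or np.any(x > 1.0):
            # reflect off the most violated plane a_i . eta = 0 or a_i . eta = 1
            a = A[np.argmax(np.maximum(-x, x - 1.0))]
            n = a / np.linalg.norm(a)
            d = d - 2.0 * np.dot(d, n) * n
            q = p + h * d
            if not inside(q, A):               # corner case: skip this sub-step
                q = p
        p = q
    return p

A = np.eye(7)                                  # toy domain: unit hypercube in eta
p = np.full(7, 0.5)
samples = []
for _ in range(10):
    p = billiard_step(p, A)
    samples.append(p)
```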
The workflow forced a new search for the IDNN hyperparameters (a) on the second workflow iteration, and (b) whenever the mean square error (MSE) calculated for the two previous IDNN models on the most recent dataset increased from one model to the next. If the MSE decreased, the workflow instead continued training the previous IDNN on the most recent data (Figure 4b).

### Phase field theory and associated computational framework

The evolution of microstructure and phase changes can be modeled using the phase field equations. The Cahn-Hilliard equation [44] models the dynamics of conserved quantities, such as composition, while nonconserved order parameter fields are modeled using the Allen-Cahn equation [45]. When neglecting elastic effects, the total free energy of the system with composition and \(n=6\) order parameters can be described as follows: \[\Pi[x,\widehat{\mathbf{\eta}}]=\int\limits_{\Omega}\left(f(x,\widehat{\mathbf{\eta}})+\frac{1}{2}\chi_{0}|\nabla x|^{2}+\sum_{i=1}^{6}\frac{1}{2}\chi_{i}|\nabla\widehat{\eta}_{i}|^{2}\right)\,\mathrm{d}V\] (S16) where \(\chi_{i}\) are the gradient parameters, and \(f(x,\widehat{\mathbf{\eta}})\) is the free energy density, represented by the analytically integrated DNN in this work. The chemical potentials \(\widetilde{\mu}_{i}\) used in the phase field equations are given by the variational derivatives of the total free energy, such that \(\widetilde{\mu}_{0}:=\delta\Pi/\delta x\) and \(\widetilde{\mu}_{i}:=\delta\Pi/\delta\widehat{\eta}_{i}\), \(i=1,\ldots,n\) (the \(\widehat{\bullet}\) notation is used for clarity; however, \(\widehat{\eta}_{i}=\eta_{i}\) from previous sections): \[\widetilde{\mu}_{0} =\frac{\partial f}{\partial x}-\chi_{0}\nabla^{2}x\] (S17) \[\widetilde{\mu}_{i} =\frac{\partial f}{\partial\widehat{\eta}_{i}}-\chi_{i}\nabla^{2}\widehat{\eta}_{i},\qquad i=1,\ldots,n\] (S18) The Cahn-Hilliard and Allen-Cahn equations, respectively, are the following: \[\frac{\partial x}{\partial t} =\nabla\cdot(\widetilde{M}\nabla\widetilde{\mu}_{0})\] (S19) \[\frac{\partial\widehat{\eta}_{i}}{\partial t} =-L\widetilde{\mu}_{i},\qquad i=1,\ldots,n\] (S20) where \(\widetilde{M}\) is the mobility and \(L\) is a kinetic coefficient. We substitute in the equations for the chemical potentials and write the governing equations in weak form to be solved using a mixed finite element method: \[0 =\int_{\Omega}\left(w_{x}\frac{\partial x}{\partial t}+\widetilde{M}\nabla w_{x}\cdot\nabla\widetilde{\mu}_{0}\right)\mathrm{d}V-\int_{\partial\Omega}w_{x}j_{n}\mathrm{d}S\] (S21) \[0 =\int_{\Omega}\left[w_{\tilde{\mu}_{0}}\left(\tilde{\mu}_{0}-\frac{\partial f}{\partial x}\right)-\chi_{0}\nabla w_{\tilde{\mu}_{0}}\cdot\nabla x\right]\mathrm{d}V\] (S22) \[0 =\int_{\Omega}\left[w_{i}\frac{\partial\widehat{\eta}_{i}}{\partial t}+L\left(w_{i}\frac{\partial f}{\partial\widehat{\eta}_{i}}+\chi_{i}\nabla w_{i}\cdot\nabla\widehat{\eta}_{i}\right)\right]\mathrm{d}V,\qquad i=1,\ldots,n\] (S23) where \(w_{x}\), \(w_{\tilde{\mu}_{0}}\), and \(w_{i}\) are weighting functions.
For the equations written in this mixed formulation, the following Neumann boundary conditions have been applied to \(x\), \(\tilde{\mu}_{0}\), and \(\widehat{\eta}_{i}\), \(i=1,\ldots,n\), on \(\partial\Omega\), where \(\mathbf{n}\) is the outward unit normal and \(j_{n}\) is an influx: \[\nabla x\cdot\mathbf{n} =0\] (S24) \[\widetilde{M}\nabla\tilde{\mu}_{0}\cdot\mathbf{n} =j_{n}\] (S25) \[\nabla\widehat{\eta}_{i}\cdot\mathbf{n} =0,\qquad i=1,\ldots,n\] (S26) To compare the behavior of LCO at different temperatures, we perform phase field simulations at 260 and 300 K, informed by data from the Monte Carlo computations at these temperatures. The analytically integrated free energy DNNs for each of these temperatures are used to represent \(f(x,\widehat{\mathbf{\eta}})\) in Eq. (S16). We define the following composition-dependent surrogate function for the diffusivity \(D\) at 300 K, using the predicted values from Ref. [46]: \[D=0.01\exp(-274(1.05-x)(0.47-x)(1.-x))\] (S27) The diffusivity surrogate function and the predicted data appear in Figure S5, where the effective vibrational frequency \(\nu^{*}\) is reported to be on the order of \(10^{13}\) s\({}^{-1}\) [46]. The mobility \(\widetilde{M}\) is related to \(D\) by the equation \(\widetilde{M}=Dx/(k_{B}T)\) [47]. The value of \(D\) is multiplied by 4 to approximate the diffusivity at 340 K and is divided by 4 for 260 K. This approximation was obtained from studies in which Li diffusion in LCO was computed by a combination of DFT and kinetic Monte Carlo that accounted for the composition-dependent variation of the migration energy barrier [46]. These computations reported the composition-dependent diffusivity at 300 and 400 K, with \(D\) at 400 K being approximately 30 times the diffusivity at 300 K, from which we have drawn the approximation of a factor of 4 for every 40 K. We obtain values of the gradient parameters \(\chi_{i}\) in Eq. (S16) such that the phase field calculation of the interface energy agrees with the DFT results given in Section S1.1.1. Let \(\gamma\) and \(\beta\) be the anti-phase boundary energies of two ordered LCO variants at composition \(x=0.5\), calculated by DFT and phase field theory, respectively. Similarly, let \(\widehat{\gamma}\) and \(\widehat{\beta}\) denote the interface energies between an ordered variant at \(x=0.5\) and a disordered matrix at \(x>0.5\) and \(\eta_{i}=0,i=1,\ldots,6\), calculated by DFT and phase field theory, respectively. One cannot define a disordered LCO atomic structure with any degree of uniqueness, making consistent calculation of \(\widehat{\gamma}\) challenging. Therefore, we use the estimate \(\widehat{\gamma}=\frac{1}{2}\gamma\) [48]. We then define an inverse phase field problem: Given \(\gamma\), find \(\chi_{0}\) and \(\chi=\chi_{1}=\ldots=\chi_{6}\) such that \(\sqrt{(\beta-\gamma)^{2}+(\widehat{\beta}-\frac{\gamma}{2})^{2}}<\varepsilon\), where we set the threshold \(\varepsilon=0.1\) mJ/m. We compute \(\beta\) and \(\widehat{\beta}\) on a thin rectangular domain with length and height of 0.2 and 0.005 m, respectively. We obtain the optimal \(\chi_{0}\) and \(\chi\) by performing an exhaustive two-dimensional grid search. The inferred values are: \(\chi_{0}=1\times 10^{-4}\) mJ/m at 260 K and \(\chi_{0}=1.88\times 10^{-4}\) mJ/m at 300 K, and \(\chi_{1},\ldots,\chi_{6}=2.12\times 10^{-8}\) mJ/m at 260 K and \(\chi_{1},\ldots,\chi_{6}=4.91\times 10^{-8}\) mJ/m at 300 K. At these values, the minimized function is 0.05 mJ/m at 260 K and 0.08 mJ/m at 300 K.
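For reference, the diffusivity surrogate of Eq. (S27) and the mobility relation \(\widetilde{M}=Dx/(k_{B}T)\) can be evaluated as sketched below. The continuous factor-of-4-per-40-K scaling is our generalization of the stated x4 (340 K) and /4 (260 K) adjustments, and the units of \(D\) follow the fitted data of Ref. [46] (not restated here), so this is a sketch rather than the production parameterization used in the simulations.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K (choice of units is illustrative)

def diffusivity(x, T):
    """Composition-dependent Li diffusivity surrogate, Eq. (S27)/(S42).

    The fit is reported at 300 K; D is scaled by a factor of 4 per 40 K,
    generalizing the x4 (340 K) and /4 (260 K) adjustments in the text.
    """
    d300 = 0.01 * np.exp(-274.0 * (1.05 - x) * (0.47 - x) * (1.0 - x))
    return d300 * 4.0 ** ((T - 300.0) / 40.0)

def mobility(x, T):
    """Cahn-Hilliard mobility, M = D*x/(k_B*T), following Ref. [47]."""
    return diffusivity(x, T) * x / (K_B * T)

# Example: values entering simulations at x = 0.525 for the three temperatures.
for T in (260.0, 300.0, 340.0):
    print(T, diffusivity(0.525, T), mobility(0.525, T))
```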
The simulations are performed using the finite element method with the mechanoChemFEM code\({}^{1}\), which is based on the deal.II [49] library, and are run on the Great Lakes HPC cluster at the University of Michigan. Adaptive meshing with hanging nodes and adaptive time stepping are used. Footnote 1: Code available at github.com/mechanoChem/mechanoChemFEM

## S2 Data on DFT calculations for anti-phase boundary energies

## S3 Explicit nucleation

The idea of defining explicit nucleation in phase field simulations using the nucleation rate from classical nucleation theory was proposed by Simmons et al. [50; 51]. The probability \(P_{n}\) of a nucleus forming in an element within a time step \(\Delta t\) is given by: \[P_{n}=1-\exp(-J^{*}\Delta t)\] (S28) The nucleation rate \(J^{*}\) within an element is given by \[J^{*}=ZN\beta^{*}\exp\left(-\frac{\Delta G^{*}}{k_{B}T}\right)\exp\left(\frac{\tau}{t}\right)\] (S29) where \(Z\) is the Zeldovich factor, \(N\) is the number of atoms (Li sites in our case) in the element, \(\beta^{*}\) is the frequency factor, \(\Delta G^{*}\) is the activation energy to create a stable nucleus, \(k_{B}\) is Boltzmann's constant, and \(T\) is the temperature. We neglect the incubation term containing \(\tau\). Expressions for \(Z\) and \(\beta^{*}\) are given by [52]: \[Z = \frac{3v_{\rm o}}{4\pi^{3/2}}\left[\frac{\Delta G^{*}}{k_{B}T}\right]^{1/2}\frac{1}{{R^{*}}^{3}}\] (S30) \[\beta^{*} = \frac{4\pi{R^{*}}^{2}Dx_{0}}{a^{4}}\] (S31) where \(v_{\rm o}\) is the average volume per (Li) atom in the ordered phase (roughly twice the unit cell volume), \(R^{*}\) is the critical radius, \(D\) is the diffusivity (see Eq. (S42)), \(x_{0}\) is the average composition, and \(a\) is the average lattice parameter of the nucleus and matrix phases (approximated as the cube root of the unit cell volume). We can combine the three terms in the prefactor and simplify slightly, letting \(v_{\rm e}:=Nv_{\rm o}\) denote the volume of the element: \[ZN\beta^{*}=\frac{3v_{\rm e}Dx_{0}}{R^{*}a^{4}}\left[\frac{\Delta G^{*}}{\pi k_{B}T}\right]^{1/2}\] (S32) A similar expression for the nucleation rate, as well as an explanation of how to derive \(\Delta G^{*}\), is presented in a review of classical nucleation theory by Sear [53]. The change in free energy for forming a nucleus is equal to the change in bulk energy plus the interfacial energy that is introduced. For a spherical precipitate, the total change in energy \(\Delta G\) is the following \[\Delta G=-\frac{4}{3}\pi R^{3}\Delta G_{v}+4\pi R^{2}\gamma\] (S33) where \(R\) is the radius of the precipitate, \(\Delta G_{v}\) is the decrease in free energy per unit volume (i.e., a positive \(\Delta G_{v}\) signifies a drop in the free energy density; more on this term later), and \(\gamma\) is the interfacial energy. The barrier \(\Delta G^{*}\) is the maximum of this energy, so we differentiate with respect to \(R\), set the result equal to zero, and solve for the critical radius, \(R^{*}\): \[0 =-4\pi{R^{*}}^{2}\Delta G_{v}+8\pi R^{*}\gamma\] (S34) \[\implies R^{*} =\frac{2\gamma}{\Delta G_{v}}\] (S35) Substituting back into Eq. (S33) gives the barrier for homogeneous nucleation \[\Delta G_{\rm hom}^{*} =-\frac{4}{3}\pi\left(\frac{2\gamma}{\Delta G_{v}}\right)^{3}\Delta G_{v}+4\pi\left(\frac{2\gamma}{\Delta G_{v}}\right)^{2}\gamma\] (S36) \[=\frac{16\pi\gamma^{3}}{3\Delta G_{v}^{2}}\] (S37) Strictly speaking, this result applies to homogeneous nucleation.
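As a concrete reference for Eqs. (S28)-(S37), the sketch below computes the critical radius, barrier, combined prefactor, rate, and nucleation probability for an element. It is a minimal illustration rather than the implementation used in the simulations: SI units are assumed, the incubation term is neglected as in the text, and the heterogeneous factor \(f(\theta)\) (introduced next) is exposed as an optional argument applied to the barrier in both the prefactor and the exponential.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_probability(dG_v, gamma, D, x0, a, v_e, T, dt, f_theta=1.0):
    """Probability of forming a nucleus in an element of volume v_e within dt.

    Implements Eqs. (S28)-(S37): critical radius R*, barrier dG*, the combined
    prefactor Z*N*beta* of Eq. (S32), the rate J*, and the probability P_n.
    f_theta = 1 gives homogeneous nucleation; f_theta = 0.5 is the flat-boundary
    heterogeneous value used in the text.
    """
    R_star = 2.0 * gamma / dG_v                                      # Eq. (S35)
    dG_star = f_theta * 16.0 * np.pi * gamma**3 / (3.0 * dG_v**2)    # Eqs. (S37)-(S38)
    prefactor = (3.0 * v_e * D * x0 / (R_star * a**4)
                 * np.sqrt(dG_star / (np.pi * K_B * T)))             # Eq. (S32)
    J_star = prefactor * np.exp(-dG_star / (K_B * T))                # Eq. (S29), no incubation
    return 1.0 - np.exp(-J_star * dt)                                # Eq. (S28)
```

In the workflow, \(\Delta G_{v}\) would come from the parallel-tangent construction described below and \(\gamma\) from the DFT anti-phase boundary energies.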
For heterogeneous nucleation, the barrier is multiplied by an additional factor \(f(\theta)\), which is a function of the angle \(\theta\) between the interface and the (flat) boundary: \[\Delta G_{\rm het}^{*} =f(\theta)\Delta G_{\rm hom}^{*}\] (S38) \[f(\theta) =\frac{1}{2}-\frac{3}{4}\cos\theta+\frac{1}{4}\cos^{3}\theta\] (S39) Note that \(f(\pi/2)=0.5\), which is the value that is used here. Poduri and Chen [54] suggest calculating the driving force, \(\Delta G_{v}\), by drawing the tangent line with respect to \(x\) of the disordered free energy density at the composition of the potential nucleation site, which is currently disordered (labeled \(x_{\rm mat}\) in Figure S1a). \(\Delta G_{v}\) is the largest difference between this disordered tangent line and the ordered free energy density. This occurs at the composition (labeled \(x_{\rm nuc}\) in Figure S1) at which the ordered tangent line is parallel to the disordered tangent line. When the disordered phase is at a composition that gives a coincident or common tangent line with the ordered phase, \(\Delta G_{v}\) is zero (see Figure S1b). The free energy density DNN has units of eV/unit cell. With a unit cell volume of 32.502 Å\({}^{3}\), the conversion factor to J/m\({}^{3}\) is \(4.926\times 10^{9}\frac{\rm J/m^{3}}{\rm eV/unit\ cell}\). Figure S2 shows the driving force \(\Delta G_{v}\) at a given composition within the disordered phase at 300 K. Within the operating range of a battery (i.e., \(x\geq 0.5\)), \(\Delta G_{v}\) has a maximum of \(3.563\times 10^{6}\) J/m\({}^{3}\) at \(x=0.5\) and becomes zero around \(x=0.537\). To explore the effect of interfacial energy, Figure S3 shows \(\Delta G\) and \(R\) as functions of interfacial energy, using the largest driving force of \(\Delta G_{v}=3.563\times 10^{6}\) J/m\({}^{3}\). Under an applied voltage, \(V\), the critical free energy for heterogeneous nucleation is \[\Delta G_{\mathrm{het}}^{*}=\frac{16\pi\gamma^{3}}{3(\Delta G_{v}-V\Delta x)^{2}}\] (S40) for \(\Delta x=x_{\mathrm{nuc}}-x_{\mathrm{mat}}\). The corresponding rate of heterogeneous nucleation is then \[J^{*}=ZN\beta^{*}\exp\left(-\frac{\Delta G_{\mathrm{het}}^{*}}{k_{B}T}\right)\exp\left(\frac{\tau}{t}\right)\] (S41) The corresponding panel (b) in the main text shows the heterogeneous nucleation rate _versus_ \(V\) and \(x_{\mathrm{mat}}\), accounting for the dependence of \(x_{\mathrm{nuc}}\) upon \(x_{\mathrm{mat}}\) as illustrated in the plots of Figure S1.

## S4 Thermodynamic representation ignoring ordering

Figure S4 shows the IDNN representation obtained by training only on the free energy/chemical potential data as a function of composition, with composition as the only feature. These are one-dimensional representations that were obtained without consideration of ordering [55]. The corresponding phase microstructures are also shown. Figure S5 shows the diffusivity's dependence on composition, fit to data from Ref. [46]. The fit function is: \[D(x)=0.01\exp(-274(1.05-x)(0.47-x)(1.-x))\] (S42) Results from the phase field simulations initialized with the Li composition randomly perturbed about \(x=0.525\), with no boundary flux, are shown in Figure S6 for the 50 nm particle. As with the larger particles, spinodal decomposition occurs at 260 and 300 K, but not at 340 K. However, equilibrium is reached between 100 and 10,000 times more quickly with the smaller particles, depending on the temperature. Additionally, while the equilibrium microstructures are comparable for both sizes, the transient microstructure is much simpler for the smaller particle.
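Returning to the nucleation driving force described above, the parallel-tangent construction can be sketched numerically as follows. Here `g_dis` and `g_ord` are placeholder callables standing in for 1D slices of the disordered and ordered free energy densities (e.g., from the IDNN), and the bracketing interval passed to `brentq` is an assumption that must contain the parallel-tangent composition.

```python
from scipy.optimize import brentq

def driving_force(g_dis, g_ord, x_mat, x_lo=0.3, x_hi=0.7, h=1e-5):
    """Parallel-tangent estimate of the nucleation driving force Delta G_v.

    Following the construction attributed to Poduri and Chen [54]:
    (1) take the tangent of the disordered free energy g_dis at x_mat,
    (2) find x_nuc where the ordered curve g_ord has the same slope,
    (3) Delta G_v is the gap between that tangent line and g_ord(x_nuc).
    """
    dgdx = lambda g, x: (g(x + h) - g(x - h)) / (2.0 * h)   # central difference
    slope = dgdx(g_dis, x_mat)
    x_nuc = brentq(lambda x: dgdx(g_ord, x) - slope, x_lo, x_hi)
    tangent_at_xnuc = g_dis(x_mat) + slope * (x_nuc - x_mat)
    dG_v = tangent_at_xnuc - g_ord(x_nuc)   # > 0 when ordering lowers the energy
    return dG_v, x_nuc

# Purely illustrative toy curves (not the LCO free energies):
g_dis = lambda x: 2.0 * (x - 0.6) ** 2
g_ord = lambda x: 2.0 * (x - 0.5) ** 2 - 0.01
print(driving_force(g_dis, g_ord, x_mat=0.55))
```

If the free energy is in eV/unit cell, the result would then be multiplied by the conversion factor of \(4.926\times 10^{9}\) quoted above to obtain \(\Delta G_{v}\) in J/m\({}^{3}\).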
The results from the cycling simulation for the 50 nm particles presented in Figure S6 show that, unlike with the larger particles, the Li composition field is essentially the same for all three temperatures. This, again, matches the experimental findings of Choi et al. [56], which show consistent performance across the temperature range of 258 to 333 K.

## S5 Symmetry functions

The 12 symmetric variants are energetically equivalent, so it is important to build that symmetry into the IDNN. We do this by making the IDNN a function of symmetry functions that encapsulate the symmetry of the system. The symmetry functions themselves are functions of the order parameters. For example, consider a simpler case (e.g., a B2 system) where the ordering can be defined by a single order parameter, with a positive value representing one variant and a negative value the second variant. The energy should be the same whether the order parameter is positive or negative, i.e., the energy function is symmetric about \(\eta=0\). To build this in, we can make the energy a function of \(\eta^{2}\), since \(\eta^{2}\) encapsulates the appropriate symmetry. With more order parameters, the symmetry functions become more complicated, but the basic idea is the same. The algorithm for defining symmetry functions is described by Thomas and Van der Ven [35] in Appendix C (and Section 5) of their paper. The idea is to start with a complete set of monomials of a given order, constructed from the order parameters, and to apply the Reynolds operator to each of them. Note that since we are now operating on the order parameter vectors \(\mathbf{\eta}\) instead of the sublattice vectors \(\mathbf{x}\), we first convert the transformation matrices \(\mathbf{M}_{i}^{(x)}\) through \(\mathbf{M}_{i}^{(\eta)}=\mathbf{Q}\mathbf{M}_{i}^{(x)}\mathbf{Q}^{-1}\). Many of the symmetrized monomials will reduce to the zero function, and there will likely be duplicates among the resulting nonzero functions, but those that do survive are symmetry invariant. Also, in most cases the final functions are no longer monomials. The process begins with first-order monomials and then continues with monomials of increasing order until a sufficient number of symmetry invariant functions are obtained. Not all individual symmetry functions reflect the same symmetries. For example, the function \[f(\mathbf{\eta})=\eta_{1}\eta_{2}\eta_{3}\eta_{4}\eta_{5}\eta_{6}\] (S43) captures the fact that the free energy can change if the sign of a single order parameter is flipped, a characteristic that is missed in, say, this lower-order symmetry function: \[f(\mathbf{\eta})=\eta_{1}^{2}+\eta_{2}^{2}+\eta_{3}^{2}+\eta_{4}^{2}+\eta_{5}^{2}+ \eta_{6}^{2}\] (S44) The Reynolds operator is applied by acting with each transformation on the monomial's input, in turn, and summing the results, as in the following: \[h(\mathbf{\eta})=\sum_{\mathbf{M}^{(\eta)}\in\mathcal{M}}f(\mathbf{M}^{(\eta)}\mathbf{\eta})\] (S45) where \(\mathcal{M}\) is the group of all transformation matrices \(\mathbf{M}^{(\eta)}\) acting on the order parameter space, \(h(\mathbf{\eta})\) is a symmetry invariant function, and \(f(\mathbf{\eta})\) is the initial monomial.
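As a concrete illustration of Eq. (S45), the sketch below symmetrizes a monomial with SymPy. The list `group_mats` of transformation matrices \(\mathbf{M}^{(\eta)}\) (obtained from \(\mathbf{M}^{(x)}\) via \(\mathbf{Q}\mathbf{M}^{(x)}\mathbf{Q}^{-1}\)) is assumed to be available from the symmetry analysis and is not reproduced here; the two-element group in the example is a hypothetical toy case, not the LCO group.

```python
import sympy as sp

def reynolds(exponents, group_mats):
    """Apply the Reynolds operator, Eq. (S45), to the monomial prod eta_i**n_i.

    exponents : tuple of integers n_i (one per order parameter)
    group_mats: list of SymPy matrices M^(eta) representing the symmetry group
    Returns the (possibly zero) symmetrized polynomial h(eta).
    """
    n = len(exponents)
    eta = sp.Matrix(sp.symbols(f"eta0:{n}"))
    monomial = lambda v: sp.Mul(*(v[i] ** e for i, e in enumerate(exponents)))
    h = sum(monomial(M * eta) for M in group_mats)
    return sp.expand(h)

# Toy two-variant group (hypothetical): eta -> eta and eta -> -eta.
toy_group = [sp.eye(1), -sp.eye(1)]
print(reynolds((1,), toy_group))   # 0: the odd monomial is not invariant
print(reynolds((2,), toy_group))   # 2*eta0**2: the even monomial survives
```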
We can represent the monomial \(f(\mathbf{\eta})\) with an exponent vector \(\mathbf{n}\) and an associated coefficient \(a^{(\mathbf{n})}\): \[f(\mathbf{\eta})=a^{(\mathbf{n})}\prod_{m=1}^{\text{len}(\mathbf{n})}\eta_{m}^{n_{m}}\] (S46) The final symmetry functions used here include up through order six and are the following: \[h_{1} =\eta_{0}\] (S47) \[h_{2} =\frac{2}{3}\sum_{i=1}^{6}\eta_{i}^{2}\] (S48) \[h_{3} =\frac{8}{3}\sum_{i=1}^{6}\eta_{i}^{4}\] (S49) \[h_{4} =\frac{4}{3}\left[\left(\eta_{1}^{2}+\eta_{2}^{2}\right)\left( \eta_{3}^{2}+\eta_{4}^{2}+\eta_{5}^{2}+\eta_{6}^{2}\right)+\left(\eta_{3}^{2} +\eta_{6}^{2}\right)\left(\eta_{4}^{2}+\eta_{5}^{2}\right)\right]\] (S50) \[h_{5} =\frac{16}{3}\left(\eta_{1}^{2}\eta_{2}^{2}+\eta_{3}^{2}\eta_{6} ^{2}+\eta_{4}^{2}\eta_{5}^{2}\right)\] (S51) \[h_{6} =\frac{32}{3}\sum_{i=1}^{6}\eta_{i}^{6}\] (S52) \[h_{7} =\frac{8}{3}\big{[}\left(\eta_{1}^{4}+\eta_{2}^{4}\right)\left( \eta_{3}^{2}+\eta_{4}^{2}+\eta_{5}^{2}+\eta_{6}^{2}\right)+\left(\eta_{3}^{4} +\eta_{6}^{4}\right)\left(\eta_{4}^{2}+\eta_{5}^{2}\right)+\] (S53) \[\qquad\qquad\left(\eta_{1}^{2}+\eta_{2}^{2}\right)\left(\eta_{3} ^{4}+\eta_{4}^{4}+\eta_{5}^{4}+\eta_{6}^{4}\right)+\left(\eta_{3}^{2}+\eta_{6 }^{2}\right)\left(\eta_{4}^{4}+\eta_{5}^{4}\right)\bigg{]}\] \[h_{8} =\frac{16}{3}\big{[}\eta_{1}^{2}\eta_{2}^{2}(\eta_{3}^{2}+\eta_{ 4}^{2}+\eta_{5}^{2}+\eta_{6}^{2})+\eta_{3}^{2}\eta_{6}^{2}(\eta_{1}^{2}+\eta_ {2}^{2}+\eta_{4}^{2}+\eta_{5}^{2})+\] (S54) \[\qquad\qquad\eta_{4}^{2}\eta_{5}^{2}(\eta_{1}^{2}+\eta_{2}^{2}+ \eta_{3}^{2}+\eta_{6}^{2})\big{]}\] \[h_{9} =\frac{32}{3}\left(\eta_{1}^{4}\eta_{2}^{2}+\eta_{1}^{2}\eta_{2}^ {4}+\eta_{3}^{4}\eta_{6}^{2}+\eta_{3}^{2}\eta_{6}^{4}+\eta_{4}^{4}\eta_{5}^{2} +\eta_{4}^{2}\eta_{5}^{4}\right)\] (S55) \[h_{10} =8(\eta_{1}^{2}+\eta_{2}^{2})(\eta_{3}^{2}+\eta_{6}^{2})(\eta_{4} ^{2}+\eta_{5}^{2})\] (S56) \[h_{11} =\frac{64}{5}\big{[}\eta_{1}\eta_{2}(\eta_{3}^{2}-\eta_{6}^{2})( \eta_{4}^{2}-\eta_{5}^{2})+\] \[\qquad\qquad\eta_{3}\eta_{6}(\eta_{1}^{2}-\eta_{2}^{2})(\eta_{4} ^{2}-\eta_{5}^{2})+\] (S57) \[\qquad\qquad\eta_{4}\eta_{5}(\eta_{1}^{2}-\eta_{2}^{2})(\eta_{3} ^{2}-\eta_{6}^{2})\big{]}\] \[h_{12} =64\sqrt{5}\eta_{1}\eta_{2}\eta_{3}\eta_{4}\eta_{5}\eta_{6}\] (S58) ## S6 Monte Carlo Precision Each Monte Carlo run finds the ensemble average for \(\mathbf{\eta}\) and \(\mathbf{\mu}\) for a given \(\mathbf{\phi}\), \(\mathbf{\kappa}\), and temperature. The Monte Carlo calculations used in the 7D IDNN have a precision of 0.0003 for the order parameters with a confidence of 0.95. When the Monte Carlo simulation converges, \(\mathbf{\mu}\) is then computed using the ensemble average. We found the precision of the order parameter does not affect the determined relationship between \(\mathbf{\eta}\) and \(\mathbf{\mu}\) and therefore does not affect the accuracy of the data used to train the IDNN. However, it does effect what data is used to train the IDNN, as the determined \(\mathbf{\eta}\) given the inputs is less accurate. We found that a higher precision is needed at 300K and 340K to have a consistent relationship between \(\mathbf{\kappa}\) and \(\mathbf{\eta}\). When precise sampling is needed, higher precision on the order parameters would be necessary. For our current active learning method, there is randomness in both the global and local sampling, such that additional randomness in the \(\mathbf{\eta}\) used to find \(\mathbf{\mu}\) is not an issue. 
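Returning to the symmetry functions listed above, Eqs. (S47)-(S58) can be transcribed directly into code. The following sketch (our transcription, not the released implementation) evaluates the twelve invariants that serve as input features to the IDNN.

```python
def symmetry_functions(eta):
    """Evaluate the invariants h1..h12 of Eqs. (S47)-(S58).

    eta : sequence (eta0, eta1, ..., eta6)
    Returns the list [h1, ..., h12].
    """
    e0, e1, e2, e3, e4, e5, e6 = eta
    sq = [e1**2, e2**2, e3**2, e4**2, e5**2, e6**2]
    h1 = e0
    h2 = (2 / 3) * sum(sq)
    h3 = (8 / 3) * sum(s**2 for s in sq)
    h4 = (4 / 3) * ((e1**2 + e2**2) * (e3**2 + e4**2 + e5**2 + e6**2)
                    + (e3**2 + e6**2) * (e4**2 + e5**2))
    h5 = (16 / 3) * (e1**2 * e2**2 + e3**2 * e6**2 + e4**2 * e5**2)
    h6 = (32 / 3) * sum(s**3 for s in sq)
    h7 = (8 / 3) * ((e1**4 + e2**4) * (e3**2 + e4**2 + e5**2 + e6**2)
                    + (e3**4 + e6**4) * (e4**2 + e5**2)
                    + (e1**2 + e2**2) * (e3**4 + e4**4 + e5**4 + e6**4)
                    + (e3**2 + e6**2) * (e4**4 + e5**4))
    h8 = (16 / 3) * (e1**2 * e2**2 * (e3**2 + e4**2 + e5**2 + e6**2)
                     + e3**2 * e6**2 * (e1**2 + e2**2 + e4**2 + e5**2)
                     + e4**2 * e5**2 * (e1**2 + e2**2 + e3**2 + e6**2))
    h9 = (32 / 3) * (e1**4 * e2**2 + e1**2 * e2**4 + e3**4 * e6**2
                     + e3**2 * e6**4 + e4**4 * e5**2 + e4**2 * e5**4)
    h10 = 8 * (e1**2 + e2**2) * (e3**2 + e6**2) * (e4**2 + e5**2)
    h11 = (64 / 5) * (e1 * e2 * (e3**2 - e6**2) * (e4**2 - e5**2)
                      + e3 * e6 * (e1**2 - e2**2) * (e4**2 - e5**2)
                      + e4 * e5 * (e1**2 - e2**2) * (e3**2 - e6**2))
    h12 = 64 * 5**0.5 * e1 * e2 * e3 * e4 * e5 * e6
    return [h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12]
```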
## S7 Lowest Free Energy Surface

The free energy surface is plotted in the \(\eta_{0}\)-\(\eta_{1}\) space with the lowest free energy curve for each \(\eta_{0}\) (with \(\eta_{2}\)-\(\eta_{6}\) held at 0) plotted in red (Figure S7). This free energy curve is found by predicting the free energy using the 7D IDNN for each point in the \(\eta_{0}\) and positive \(\eta_{1}\) space and determining the lowest free energy at each \(\eta_{0}\). Because the order parameters are related by symmetry, this also holds for negative \(\eta_{1}\) and for all other order parameters. For 260 K and 300 K (with \(\eta_{2}\)-\(\eta_{6}\) held at 0) there is a discontinuity in the derivative of the lowest free energy, corresponding to \(\eta_{1}\) jumping from 0 to 0.33-0.39. At 340 K the lowest energy state is always disordered. The lowest free energy curves in 7D are plotted in Figure S8a. These curves are found by evaluating data points over the complete \(\mathbf{\eta}\) space, under the conditions that \(\eta_{1}\)-\(\eta_{6}\) are positive and in decreasing order (since the order parameters are related by symmetry) and that they satisfy the conditions outlined in Equations 15 and 16 of the main text. For each \(\eta_{0}\), we then determine the \(\eta\) values that correspond to the lowest free energy. For 300 K, only one order parameter is nonzero along the lowest free energy curve. This curve is shown in Figures S8a and S7. For 260 K, there are two distinct non-disordered regions. In one region, both \(\eta_{1}\) and \(\eta_{2}\) are nonzero and jump from 0 to \(\sim\)0.18. In the second region \(\eta_{2}\) goes to 0 and \(\eta_{1}\) goes to \(\sim\)0.37. The points at which these discontinuities occur are shown by the red points in Figure S8. In addition, the \(\eta_{0}\)-\(\eta_{1}\) curve for 260 K from Figure S7 is shown as the green curve in Figure S8a, and its discontinuities are shown by the green points in Figure S8.

## S8 Phase Diagram

Our phase diagram (Figure S8b) is constructed using the free energy predicted by the 1D IDNN trained on the results of 1D Monte Carlo with umbrella sampling. Tangent lines constructed on the free energy curve, at Li compositions greater than and less than 0.5, are used to find the boundaries of the region of phase instability. The points used in the phase diagram are shown as blue diamonds in Figure S8a. The boundary of the spinodal decomposition region can also be determined from the second derivative of the free energy: within this region the free energy curve has negative curvature (\(\frac{\partial^{2}g}{\partial x^{2}}<0\)), while outside it the curvature is positive. For the 1D Monte Carlo, the second derivative of the free energy in the two-phase region is indeed negative, and the boundary of the region can be found where \(\frac{\partial^{2}g}{\partial x^{2}}=0\). These points are plotted as blue circles in Figure S8a. For the 7D IDNN, the region in which the Hessian of the free energy has all negative eigenvalues is found only outside the bounds on \(\eta_{1}\)-\(\eta_{6}\) described in Section 4.4 (Equations 15 and 16) of the main text. We expect this accounts for the difference between the 1D and the 7D curve. This means we cannot find the boundaries of the spinodal decomposition region using eigenvalues. Instead, we examine the \(\eta\) values corresponding to the lowest free energy curve. There is a discontinuity in \(\mu_{0}\) where \(\eta_{1}\) jumps; these jumps are shown in Figure S9.
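Both boundary constructions described in this section can be sketched numerically. In the snippet below, `g` is a placeholder callable for the 1D free energy (e.g., the 1D IDNN), `eta0` and `eta1` are arrays tracing the lowest-free-energy path from the 7D IDNN, and the jump tolerance is an illustrative choice rather than a value from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def spinodal_from_curvature(g, x_lo, x_hi, h=1e-4, n=2001):
    """Boundaries of the spinodal region from d2g/dx2 = 0 (1D construction)."""
    d2g = lambda x: (g(x + h) - 2.0 * g(x) + g(x - h)) / h**2
    xs = np.linspace(x_lo, x_hi, n)
    vals = np.array([d2g(x) for x in xs])
    roots = []
    for a, b, fa, fb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:                       # sign change brackets a root
            roots.append(brentq(d2g, a, b))
    return roots

def boundaries_from_jumps(eta0, eta1, jump_tol=0.05):
    """Approximate boundaries from discontinuities in eta1 along the
    lowest-free-energy path (the 7D construction described above)."""
    eta0, eta1 = np.asarray(eta0), np.asarray(eta1)
    jumps = np.abs(np.diff(eta1)) > jump_tol
    return eta0[1:][jumps]
```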
These discontinuities are plotted as the green and red points in Figure S8. Because we do not observe a zero or negative \(\frac{\partial^{2}g}{\partial x^{2}}\) for the 7D IDNN, we use the discontinuities to approximate the transition between the disordered and two-phase regions. From Figure S8 we can see that the discontinuity occurs at a similar \(\eta_{0}\) to the 1D disorder-spinodal region boundary. The boundary between the ordered and spinodal decomposition regions is more difficult to find in 7D, and additional sampling might be needed to properly capture this boundary in the IDNN. We do expect the 1D free energy and the lowest free energy from the 7D IDNN to show the same trends. The remaining discrepancy could be caused by inaccuracies in the Monte Carlo calculations, suboptimal sampling, or a suboptimal machine learning model.

Figure S4: Comparison of IDNN fits with the data for 260 K, 300 K, and 340 K: (a) the chemical potential data used for training, as sampled with the bias potentials, and the IDNN prediction, (b) numerically differentiated data and the derivative of the chemical potential IDNN, and (c) the analytically integrated free energy DNN. (d) Cahn-Hilliard phase field computations of the order-disorder transition based on the free energy density function parameterized by composition alone. Order parameters are not included; therefore, the ordered variants are not represented. The disordered regions A, C, E and ordered regions B, D are labelled.

Figure S5: Li diffusivity fit to data.

Figure S8: (a) The lowest free energy curves as determined by the IDNN. The curve for the 1D IDNN is shown in blue. The lowest free energy in the complete \(\eta\) space is shown by the 7D curve in red. For 260 K we additionally plot the lowest free energy curve in the \(\eta_{0}\)-\(\eta_{1}\) space, shown in green. The points used to create the phase diagram in (b) are shown by the blue diamonds. These points occur at (0.453, 0.485, 0.520, 0.560) for T260 and (0.469, 0.489, 0.512, 0.540) for T300. For both (a) and (b), the points where the second derivative of the 1D IDNN free energy is zero are shown by the blue circles. These points occur at (0.4667, 0.4848, 0.5181, 0.5387) for T260 and (0.475, 0.4857, 0.5147, 0.5307) for T300. The discontinuities in the free energy of the 7D IDNN are represented by the green and red points. For T260 the discontinuities in the full \(\eta\) space (red points) are found at (0.466, 0.471, 0.540, 0.546). In the \(\eta_{0}\)-\(\eta_{1}\) space the green points are found at (0.468, 0.544). For T300 the discontinuities are found at (0.474, 0.525). The boxes in (a) highlighting the curvature of the two-phase region have a linear term in \(\eta_{0}\) added to the free energy, \(g^{\prime}=g+l\eta_{0}\), to exaggerate that curvature. For T260, \(l\) is 0.05 and \(-0.1002\) for the left and right boxes, respectively. Similarly, for T300, \(l\) is 0.037 and \(-0.065\).

Figure S9: The \(\eta\) values corresponding to the lowest free energy curve. The red and blue curves show how \(\eta_{1}\) and \(\eta_{2}\) vary along the lowest free energy curve at 260 K. The yellow curve shows how \(\eta_{1}\) varies at 260 K with \(\eta_{2}\)-\(\eta_{6}\) held at zero. The purple curve shows how \(\eta_{1}\) varies at 300 K with \(\eta_{2}\)-\(\eta_{6}\) held at zero.
2301.00645
Development of Bethe-Salpeter theory for dealing with unstable system
In the framework of relativistic quantum field theory, the solution of homogeneous Bethe-Salpeter equation for two-body bound state can not describe unstable system, so we develop Bethe-Salpeter theory to investigate resonance which is regarded as an unstable two-body system. Based on Bethe-Salpeter wave function, we consider the time evolution of two-body bound state determined by the total Hamiltonian. The total matrix element for arbitrary decay channel is expressed in terms of the Heisenberg picture, and Mandelstam's approach is generalized to calculate the matrix element between bound states with respect to arbitrary value of the final state energy. Some innovations to Feynman diagram are made so that the key features of dispersion relation can be more clearly exhibited. This new resonance theory in quantum field theory is applied to investigate exotic particle which is considered as an unstable meson-meson molecular state.
Xiaozhao Chen, Xiaofu Lü
2023-01-02T13:08:19Z
http://arxiv.org/abs/2301.00645v2
# Resonance in quantum field theory ###### Abstract In the framework of relativistic quantum field theory, the solution of Bethe-Salpeter equation for bound state can not describe resonance, so we develop Bethe-Salpeter equation to investigate resonance which is regarded as an unstable state created by two Heisenberg field operators. Based on Bethe-Salpeter wave function, we consider the temporal evolution of two-body bound state determined by the total Hamiltonian. The total matrix element for arbitrary decay channel is expressed in terms of the Heisenberg picture, and Mandelstam's approach is generalized to calculate the matrix element between bound states with respect to arbitrary energy. Some innovations to Feynman diagram are made so that the key features of dispersion relation can be more clearly exhibited. This resonance theory in quantum field theory is applied to investigate exotic particle which is considered as an unstable meson-meson molecular state. pacs: 12.40.Yx, 14.40.Rt, 12.39.Ki Introduction Many exotic particles have been discovered in experiment and many possible alternative interpretations beyond quark-antiquark state have been proposed in theory [1; 2; 3; 4; 5; 6]. Among these interpretations, Bethe-Salpeter (BS) equation is frequently used to investigate the properties of exotic particles which are considered as two-body bound states [7; 8; 9; 10]. In quantum field theory, BS equation is a nonperturbative method [11; 12; 13], which should be only applied to deal with two-body bound state in the strict sense. However, in experiments exotic particles are unstable states, so these exotic particles are resonances which can not be completely treated as stationary two-body bound states. More importantly, present field theory seldom involves the issue concerning unstable two-body system. In this work, resonance is regarded as an unstable state created by two Heisenberg field operators. We develop BS equation to deal with resonance in the framework of relativistic quantum field theory and illustrate this theory based on BS equation for exotic meson resonance. In the previous works about hadronic molecule states [1; 2; 3; 8; 9; 10], exotic particles were considered as meson-meson bound states. Solving BS equations for meson-meson bound states, the authors of these works obtained the masses and BS wave functions. The "bare" mass of meson-meson bound state was regarded as mass of exotic meson resonance and the mass shift for molecular state due to decay channels has seldom been considered [1; 2; 3; 8; 9; 10]. As well-known, all decay channels of resonance should contribute to its physical mass. Therefore, it is necessary to seek a development of BS equation for dealing with resonance. In this paper, exotic meson resonance is considered as an unstable meson-meson molecular state. Based on BS wave function for meson-meson bound state, we can provide a description for the prepared state and then study the temporal evolution of meson-meson molecular state determined by the total Hamiltonian. Using dispersion relation, the Heisenberg picture and Mandelstam's approach, we obtain the shift for energy level of resonance and then the physical mass is used to calculate its decay width. An innovative Feynman diagram is introduced, in which the key features of dispersion relation can be exhibited clearly. 
Resonance Theory in Quantum Field Theory Let us begin with the interaction Lagrangian for the coupling of light quark fields to light meson fields as in effective theory at low energy QCD [9] \[\mathscr{L}_{I}=ig_{0}\bar{\mathcal{Q}}\gamma_{5}\mathbb{P}\mathcal{Q}+ig_{0}^{ \prime}\bar{\mathcal{Q}}\gamma_{\mu}\mathbb{V}_{\mu}\mathcal{Q}+g_{\sigma}\bar {\mathcal{Q}}^{\prime}\mathcal{Q}^{\prime}\sigma, \tag{1}\] where \(\bar{\mathcal{Q}}=(\bar{u},\bar{d},\bar{s})\), \(\bar{\mathcal{Q}}^{\prime}=(\bar{u},\bar{d})\), \(g\) represents the corresponding meson-quark coupling constant, \(\mathbb{P}\) and \(\mathbb{V}\) are the octet pseudoscalar and nonet vector meson matrices, respectively. From this Lagrangian, we have investigated the light meson interaction with quarks in heavy mesons and obtained the interaction of heavy meson with light meson through the heavy meson form factor [8; 14]. Using path integrals, one can obtain a homogeneous integral equation for arbitrary bound state composed of two mesons. In symbolic notation BS equation may be written as \[(S^{(1)-1}S^{(2)-1}+\mathcal{V})\chi=0, \tag{2}\] where \(\chi\) represents BS wave function, the kernel \(\mathcal{V}\) is the sum of all irreducible graphs, \(S^{(1)}\) and \(S^{(2)}\) represent meson propagators, respectively. Solving this BS equation, one can obtain the bare mass \(M_{0}\) and BS wave function \(\chi_{P}(x_{1},x_{2})\) for this meson-meson bound state with momentum \(P=(\mathbf{P},i\sqrt{\mathbf{P}^{2}+M_{0}^{2}})\). We emphasize that the kernel \(\mathcal{V}\) is defined in two-body channel so \(\mathcal{V}\) is not complete interaction. The kernel plays a central role for making two-body system to be a stable state, but it can not provide any motive for decay process. Since resonance decays spontaneously into other particles, we can suppose that at the times \(t_{1}=0\) and \(t_{2}=0\) this unstable state has been prepared to decay. This prepared state (ps) can be described by the ground-state BS wave function which has the form \[\mathscr{X}_{a}^{\text{ps}}=\chi_{P}(\mathbf{x}_{1},t_{1}=0,\mathbf{x}_{2},t_{ 2}=0)=\frac{1}{(2\pi)^{3/2}}\frac{1}{\sqrt{2E(P)}}e^{i\mathbf{P}\cdot(\eta_{1 }\mathbf{x}_{1}+\eta_{2}\mathbf{x}_{2})}\chi_{P}(\mathbf{x}_{1}-\mathbf{x}_{2 }), \tag{3}\] where \(E(p)=\sqrt{\mathbf{p}^{2}+m^{2}}\) and \(\eta_{1}+\eta_{2}=1\).Then the time evolution of this system determined by the total Hamiltonian \(H\) has the explicit form \[\mathscr{X}(t)=e^{-iHt}\mathscr{X}_{a}^{\text{ps}}=\frac{1}{2\pi i}\int_{C_{2} }d\epsilon e^{-i\epsilon t}\frac{1}{\epsilon-H}\mathscr{X}_{a}^{\text{ps}}, \tag{4}\] where \(G(\epsilon)=(\epsilon-H)^{-1}\) is the Green's function and the contour \(C_{2}\) runs from \(ic_{r}+\infty\) to \(ic_{r}-\infty\) in energy-plane. The Green's function can be represented by scattering matrix [15] \[G_{a}(\epsilon)=(\chi_{a}^{\text{ps}},G(\epsilon)\chi_{a}^{\text{ps}})=\frac{1 }{\epsilon-M_{0}-(2\pi)^{3}T_{a}(\epsilon)}, \tag{5}\] where \(\chi_{a}^{\rm ps}\) represents \((2\pi)^{-3/2}[2E(P)]^{-1/2}\chi_{P}({\bf x}_{1}-{\bf x}_{2})\) in Eq. (3). The proof of Eq. (5) has been given by Ref. [15]. This work will give \(T_{a}(\epsilon)\) in the framework of relativistic quantum field theory. 
In field theory the operator \(T(\epsilon)\) is just the scattering matrix with energy \(\epsilon\), and \(T_{a}(\epsilon)\) is the \(T\)-matrix element between two bound states, which should be defined as \(\langle a\ {\rm out}|a\ {\rm in}\rangle=\langle a\ {\rm in}|a\ {\rm in} \rangle-i(2\pi)^{4}\delta^{(4)}(P-P)T_{a}(\epsilon)\). Because of the analyticity of \(T_{a}(\epsilon)\), we define \[T_{a}(\epsilon)={\mathbb{D}}(\epsilon)-i{\mathbb{I}}(\epsilon), \tag{6}\] where \(\epsilon\) approaches the real axis from above, \({\mathbb{D}}\) and \({\mathbb{I}}\) are the real and imaginary parts, respectively. When there is only one decay channel, we can use the unitarity of \(T_{a}(\epsilon)\) to obtain [15] \[2{\mathbb{I}}(\epsilon)=\sum_{b}(2\pi)^{4}\delta^{(3)}({\bf P}_{b}-{\bf P}) \delta(E_{b}-\epsilon)|T_{ba}(\epsilon)|^{2}, \tag{7}\] where the final 4-vector momentum is \(P_{b}=({\bf P}_{b},iE_{b})\) and the \(T\)-matrix element \(T_{ba}(\epsilon)\) is defined as \(\langle b\ {\rm out}|a\ {\rm in}\rangle=-i(2\pi)^{4}\delta^{(3)}({\bf P}_{b}-{\bf P}) \delta(E_{b}-\epsilon)T_{ba}(\epsilon)\). The delta-function in Eq. (7) means that the energy \(\epsilon\) in scattering matrix is equal to the total energy \(E_{b}\) of the final state, and \(\sum_{b}\) represents summing over all final states. For \(E_{b}=\epsilon\), we also denote the total energy of the final state by \(\epsilon\) and \({\mathbb{I}}(\epsilon)\) becomes a function of the final state energy. Using dispersion relation for the function \(T_{a}(\epsilon)\), we obtain \[{\mathbb{D}}(\epsilon)=-\frac{{\cal P}}{\pi}\int_{\epsilon_{M}}^{\infty}\frac {{\mathbb{I}}(\epsilon^{\prime})}{\epsilon^{\prime}-\epsilon}d\epsilon^{ \prime}. \tag{8}\] The symbol \({\cal P}\) means that this integral is a principal value integral and the variable of integration is the total energy \(\epsilon^{\prime}\) of the final state. The function \({\mathbb{I}}(\epsilon^{\prime})\) should be calculated over the real interval \(\epsilon_{M}<\epsilon^{\prime}<\infty\). As usual the momentum of initial bound state \(a\) is set as \(P=(0,0,0,iM_{0})\) in the rest frame and \(\epsilon_{M}\) denotes the sum of all particle masses in the final state. Let us suppose that there are several decay channels and the final state \(b\) may contain \(n\) composite particles and \(n^{\prime}\) elementary particles in decay channel \(c^{\prime}\). From Eq. (7), we have \[{\mathbb{I}}(\epsilon^{\prime})= \frac{1}{2}\sum_{c^{\prime}}\int d^{3}Q^{\prime}_{1}...d^{3}Q^{ \prime}_{n^{\prime}}d^{3}Q_{1}...d^{3}Q_{n}(2\pi)^{4}\delta^{(4)}(Q^{\prime}_{ 1}+...+Q_{n}-P^{\epsilon^{\prime}})\sum_{\rm spins}|T_{(c^{\prime};b)a}( \epsilon^{\prime})|^{2}, \tag{9}\] where \(Q^{\prime}_{1}...Q^{\prime}_{n^{\prime}},Q_{1}...Q_{n}\) are the momenta of final particles, \(P^{\epsilon^{\prime}}=(0,0,0,i\epsilon^{\prime})\), \(T_{(c^{\prime};b)a}(\epsilon^{\prime})\) is the \(T\)-matrix element with respect to \(\epsilon^{\prime}\), \(\sum_{\rm spins}\) represents summing over final spins and averaging over initial spins, and \(\sum_{c^{\prime}}\) represents summing over all open and virtual decay channels. In Eq. (9) the energy in scattering matrix is equal to the total energy \(\epsilon^{\prime}\) of the final state \(b\) which extends from \(\epsilon_{M}\) to \(+\infty\), i.e., \(\epsilon_{M}<\epsilon^{\prime}<\infty\), while the bare mass \(M_{0}\) and BS amplitude of initial bound state \(a\) have been specified. It is obvious that the final state may be a "virtual" state. From Eq. 
(9), we have \(\mathbb{I}(\epsilon^{\prime})>0\) for \(\epsilon^{\prime}>\epsilon_{M}\) and \(\mathbb{I}(\epsilon^{\prime})=0\) for \(\epsilon^{\prime}\leqslant\epsilon_{M}\), which is the reason that the integration in dispersion relation (8) ranges from \(\epsilon_{M}\) to \(+\infty\). We emphatically introduce the \(T\)-matrix element \(T_{(c^{\prime};b)a}(\epsilon^{\prime})\) as follows. In our theory \(T_{(c^{\prime};b)a}(\epsilon^{\prime})\) must involve bound state, so this matrix element can not be calculated as ordinary \(S\)-matrix element. Using the Heisenberg picture, we can obtain the total matrix element between a final state \(|Q^{\prime}_{1}...Q^{\prime}_{n^{\prime}},Q_{1}...Q_{n}\) out\(\rangle\) and a specified initial bound state \(|P\) in\(\rangle\) \[\begin{split}-iR_{(c^{\prime};b)a}(\epsilon^{\prime})=& \langle Q^{\prime}_{1}...Q^{\prime}_{n^{\prime}},Q_{1}...Q_{n}\ \mbox{out}|P\ \mbox{in} \rangle\\ =& i^{2n^{\prime}}\int d^{4}z_{1}...d^{4}z_{n^{\prime}}f^{ *}_{Q^{\prime}_{1}}(z_{1})...f^{*}_{Q^{\prime}_{n^{\prime}}}(z_{n^{\prime}}) S^{\prime-1}_{z_{1}}...S^{\prime-1}_{z_{n^{\prime}}}\\ &\times\langle Q_{1}...Q_{n}\ \mbox{out}|T\phi(z_{1})... \psi(z_{i})...\bar{\psi}(z_{j})...\phi(z_{n^{\prime}})|P\ \mbox{in} \rangle.\end{split} \tag{10}\] Here \(\phi\) and \(\psi\) represent boson and fermion field operators in the Heisenberg picture, respectively. The functions \(f\) are solutions to the corresponding free field equations of motion, and \(S^{\prime}\) represents free field propagator. Of great interest is the matrix element of a time-order product of Heisenberg field operators between bound states in Eq. (10). Mandelstam's approach is generalized to evaluate the bound state matrix element with respect to \(\epsilon^{\prime}\) \[\begin{split}&\langle Q_{1}...Q_{n}\ \mbox{out}|T\phi(z_{1})...\psi(z_{i})...\bar{\psi}(z_{j})...\phi(z_{n^{\prime}}) |P\ \mbox{in}\rangle\\ =&\int d^{4}y_{1}d^{4}y_{2}...d^{4}y_{2n-1}d^{4}y_{ 2n}d^{4}x_{1}d^{4}x_{2}\\ &\times\bar{\chi}_{Q_{1}}(y_{1},y_{2})...\bar{\chi}_{Q_{n}}(y_{2 n-1},y_{2n})\mathbb{T}(y_{1}...y_{2n};z_{1}...z_{i}...z_{j}...z_{n^{\prime}};x_{1},x_{ 2})\chi_{P}(x_{1},x_{2}),\end{split} \tag{11}\] where \(\mathbb{T}(y_{1}...y_{2n};z_{1}...z_{i}...z_{j}...z_{n^{\prime}};x_{1},x_{2})\) is the two-particle irreducible Green's function, \(\bar{\chi}\) and \(\chi\) are BS wave functions for the final and initial bound states, respectively. The function \(\mathbb{T}\) can, in principle, be evaluated by means of perturbation theory. It is necessary to emphasize that the general matrix element (11) is calculated with respect to \(\epsilon^{\prime}\), and the energy in \(\mathbb{T}\) is equal to the final state energy \(\epsilon^{\prime}\) extending from \(\epsilon_{M}\) to \(+\infty\), i.e., \(\epsilon_{M}<\epsilon^{\prime}<\infty\). Therefore the final state may be a virtual state, while the traditional Feynman diagram represents only the physical case \(\epsilon^{\prime}=M_{0}\). In this paper we introduce a Feynman diagram to represent the virtual states, called _virtual Feynman diagram_, shown in Figure 1. The crosses in virtual Feynman diagram mean that the energy in \(\mathbb{T}\) is equal to the final state energy \(\epsilon^{\prime}\) extending from \(\epsilon_{M}\) to \(+\infty\) while the bare mass \(M_{0}\) of initial bound state is specified. When \(\epsilon^{\prime}=M_{0}\), the crosses in virtual Feynman diagram disappear and it becomes the traditional Feynman diagram. 
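Numerically, the principal-value integral in Eq. (8) can be evaluated with a Cauchy-weighted quadrature once \(\mathbb{I}(\epsilon^{\prime})\) is known from Eq. (9). The sketch below uses SciPy for the principal value; the function standing in for \(\mathbb{I}\) is purely illustrative, and the truncation energy is an assumption that must be large enough for \(\mathbb{I}\) to have decayed.

```python
import numpy as np
from scipy.integrate import quad

def real_part_from_dispersion(I, eps, eps_M, eps_max=50.0):
    """D(eps) = -(P/pi) * Int_{eps_M}^{inf} I(e') / (e' - eps) de', Eq. (8).

    The principal value on the interval containing the pole is handled by
    quad's Cauchy weight; the remaining tail is integrated normally.
    """
    assert eps_M < eps < eps_max
    pv, _ = quad(I, eps_M, eps_max, weight="cauchy", wvar=eps)
    tail, _ = quad(lambda e: I(e) / (e - eps), eps_max, np.inf)
    return -(pv + tail) / np.pi

# Toy threshold function standing in for I(eps') (illustrative only).
I_toy = lambda e: (e - 3.9) * np.exp(-(e - 3.9)) if e > 3.9 else 0.0
print(real_part_from_dispersion(I_toy, eps=3.95, eps_M=3.9))
```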
Removing delta-function factor \((2\pi)^{4}\delta^{(4)}(Q^{\prime}_{1}+...+Q_{n}-P^{\epsilon^{\prime}})\) in \(R_{(\epsilon^{\prime};b)a}(\epsilon^{\prime})\), we obtain \(T_{(\epsilon^{\prime};b)a}(\epsilon^{\prime})\). To illustrate this, we imagine that the initial bound state (\(MS\)) is composed of two heavy vector mesons (\(VM\) and \(\overline{VM}\)) and the final state contains a heavy meson (\(HM\)) and a light meson (\(LM\)). If a bound state with spin \(j\) is created by two massive vector fields, its BS wave function can be defined as \(\chi^{j}_{P(\lambda\tau)}(x_{1},x_{2})=\langle 0|TA_{\lambda}(x_{1})A^{\dagger}_{ \tau}(x_{2})|P,j\rangle\) and we have given the general form for this BS wave function \(\chi^{j}_{\lambda\tau}(P,p)\) in the momentum representation, where \(p\) is the relative momentum of two vector fields [8; 10]. This BS wave function should satisfy the equation \[\chi^{j}_{\lambda\tau}(P,p)= -\int\frac{d^{4}p^{\prime}}{(2\pi)^{4}}\Delta_{F\lambda\theta}(p ^{\prime}_{1}){\cal V}_{\theta\theta^{\prime},\kappa^{\prime}\kappa}(p,p^{ \prime};P)\chi^{j}_{\theta^{\prime}\kappa^{\prime}}(P,p^{\prime})\Delta_{F \kappa\tau}(p^{\prime}_{2}), \tag{12}\] where \({\cal V}_{\theta\theta^{\prime},\kappa^{\prime}\kappa}\) is the interaction kernel, \(p^{\prime}_{1}\) and \(p^{\prime}_{2}\) are the momenta carried by two vector fields, \(\Delta_{F\lambda\theta}(p^{\prime}_{1})\) and \(\Delta_{F\kappa\tau}(p^{\prime}_{2})\) are the propagators for the spin 1 fields. Owing to the effective interaction Lagrangian at low energy QCD (1), we have to consider that the heavy meson is a bound state composed of a quark and an antiquark and investigate the interaction of light meson with quarks in heavy meson. Through the heavy meson form factor describing the heavy meson structure, we have obtained the interaction kernel between two heavy vector mesons (\(VM\) and \(\overline{VM}\)) derived from one light meson (\(\sigma\), \(\omega\), \(\rho\), \(\phi\)) exchange in Refs. [8, 16]. In our previous works [8, 14], BS equation (12) has been solved and the bare mass \(M_{0}\) and BS wave function \(\chi_{\lambda\tau}^{j}(P,p)\) for the bound state composed of two heavy vector mesons have been obtained. In this paper, we are interested only in mass shift for molecular state and do not repeat the procedure for solving BS equation. 
Taking into account the internal structure of heavy mesons (\(VM\), \(\overline{VM}\) and \(HM\)) and retaining the lowest order value of \(\mathbb{T}\)[9], we can obtain the \(T\)-matrix element with respect to \(\epsilon^{\prime}\) in the momentum representation \[\begin{split} T_{(\epsilon^{\prime};b)a}(\epsilon^{\prime})=& \frac{ig^{\prime}_{0}\varepsilon_{\mu}^{\bar{\epsilon}^{\prime}}(Q^{\prime}) \varepsilon_{\nu}^{\bar{\epsilon}^{\prime}}(Q)}{(2\pi)^{9/2}\sqrt{8E_{H}(Q)E_ {L}(Q^{\prime})E(P)}}\int\frac{d^{4}kd^{4}p}{(2\pi)^{8}}\text{Tr}[S_{F}^{ \mathcal{D}}(p_{2})\bar{\Gamma}_{\nu}^{H}(Q,q)S_{F}^{\mathcal{C}}(p_{1})\\ &\times\Gamma_{\lambda}^{V}(p_{1}^{\prime},k)S_{F}^{\mathcal{A}}( p_{3})\gamma_{\mu}S_{F}^{\mathcal{B}}(p_{4})\Gamma_{\tau}^{\bar{V}}(p_{2}^{ \prime},k^{\prime})\chi_{\lambda\tau}^{j}(P,p)],\end{split} \tag{13}\] where \(p_{1},p_{3},p_{4},p_{2}\) are the momenta of four quarks; \(p_{1}^{\prime}\) and \(p_{2}^{\prime}\) are the momenta of two heavy vector mesons; \(q\), \(k\) and \(k^{\prime}\) are the relative momenta between quark and antiquark in heavy mesons, respectively; \(\varepsilon(p)\) is the polarization vector of vector meson with momentum \(p\), \(\Gamma^{H}(K,k)\) represents BS amplitude of heavy meson, \(S_{F}(p)\) is the quark propagator and its superscript is a flavor label, shown as Figure 2. In our approach, the initial bound state is considered as a four-quark state [9], so the generalized BS amplitude of initial bound state should be \(\Gamma_{\lambda}^{V}(p_{1}^{\prime},k)\chi_{\lambda\tau}^{j}(P,p)\Gamma_{ \tau}^{\bar{V}}(p_{2}^{\prime},k^{\prime})\), which has been specified. In Figure 2(a), the energy in \(\mathbb{T}\) is equal to the energy of final state which is a virtual state, and then the quark momenta in left-hand side of crosses depend on the final state energy and the momenta in right-hand side depend on the initial state energy, i.e., \(p_{1}-p_{2}-p_{3}+p_{4}=Q+Q^{\prime}=P^{\epsilon^{\prime}}\) and \(p_{1}^{\prime}-p_{2}^{\prime}=P\). In the rest frame, we have \(P=(0,0,0,iM_{0})\), \(P^{\epsilon^{\prime}}=(0,0,0,i\epsilon^{\prime})\) and \(\epsilon_{M}<\epsilon^{\prime}<\infty\). When \(\epsilon^{\prime}=M_{0}\), the crosses in Figure 2(a) disappear and Figure 2(a) becomes the traditional Feynman diagram, shown as Figure 2(b). From Figure 2(b), we have calculated the matrix element \(T_{(\epsilon^{\prime};b)a}(M_{0})\) with bare mass for meson-meson bound state and obtained the decay width \(\Gamma(M_{0})\) with bare mass [9]. Finally, we can expect that \(G_{a}(\epsilon)\) has a pole on the second Riemann sheet from Eq. (5) \[\epsilon_{0}\cong M_{0}+(2\pi)^{3}[\mathbb{D}(M_{0})-i\mathbb{I}(M_{0})]=M-i \Gamma(M_{0})/2, \tag{14}\] where \(\Delta M=(2\pi)^{3}\mathbb{D}(M_{0})\) is the shift for energy level of resonance and \(M=M_{0}+(2\pi)^{3}\mathbb{D}(M_{0})\) is the physical mass for resonance. The bare mass \(M_{0}\) of two-body bound state is obtained by solving BS equation (2), which should not be the mass of physical resonance. \(\Gamma(M_{0})\) with bare mass also should not be the width of physical resonance, which should depend on its physical mass \(M\). Replacing \(M_{0}\) in the momentum of initial bound state by \(M\) and setting \(\epsilon^{\prime}=M\), we can calculate the matrix element \(T_{(c^{\prime};b)a}(M)\) from Eq. (10) and obtain the decay width \(\Gamma\) for physical resonance. 
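As a small numerical illustration of Eq. (14), the pole position, physical mass, and width follow directly from the bare mass and the real and imaginary parts of \(T_{a}\) evaluated at \(M_{0}\); the input numbers below are placeholders, not values computed for any particular state.

```python
import numpy as np

def resonance_pole(M0, D_at_M0, I_at_M0):
    """Approximate pole of G_a on the second Riemann sheet, Eq. (14).

    pole ~ M0 + (2*pi)^3 * (D - i*I);  M = Re(pole);  Gamma = -2*Im(pole).
    """
    pole = M0 + (2.0 * np.pi) ** 3 * (D_at_M0 - 1j * I_at_M0)
    return pole, pole.real, -2.0 * pole.imag

# Placeholder inputs in GeV (illustrative only).
pole, M, Gamma = resonance_pole(3.95, D_at_M0=-1.2e-4, I_at_M0=5.0e-5)
print(M, Gamma)
```

As discussed above, the physical width is then obtained by re-evaluating the imaginary part at the shifted mass \(M\) rather than at \(M_{0}\).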
Up to now, a systematical and accurate theoretical approach in the framework of relativistic quantum field theory to investigate resonance has been established. In this paper, we only explore exotic meson resonance which is considered as an unstable meson-meson molecular state. The extension of our approach to more general resonances is straightforward, while the interaction Lagrangian may be modified. ## III Example As an illustration, we investigate exotic state \(\chi_{c0}(3915)\)[17; 18; 19], once named _X_(3915). In experiments two strong decay modes of \(\chi_{c0}(3915)\) have been observed: \(J/\psi\omega\) and \(D^{+}D^{-}\). Figure 2: Matrix element between bound states in the momentum representation. The momenta in the final state satisfy \(Q+Q^{\prime}=P^{\epsilon^{\prime}}\) and the momentum of the initial state is \(P\). The solid lines denote quark propagators. In the rest frame, we have \(P=(0,0,0,iM_{0})\), \(P^{\epsilon^{\prime}}=(0,0,0,i\epsilon^{\prime})\) and \(\epsilon_{M}<\epsilon^{\prime}<\infty\). In diagram (a) the final state is a virtual state and the crosses mean that the momenta of quark propagators depend on the final state energy; in diagram (b) the crosses disappear when \(\epsilon^{\prime}=M_{0}\). Here, we assume that the isoscalar \(\chi_{c0}(3915)\) is a mixed state of two unstable molecular states \(D^{*0}\bar{D}^{*0}\) and \(D^{*+}D^{*-}\) with spin-parity quantum numbers \(0^{+}\). Firstly, we consider the mixed state of two bound states \(D^{*0}\bar{D}^{*0}\) and \(D^{*+}D^{*-}\), and this bare state can be denoted as \(1/\sqrt{2}|D^{*0}\bar{D}^{*0}\rangle+1/\sqrt{2}|D^{*+}D^{*-}\rangle\). In Refs. [8, 10, 14], we have obtained the bare mass \(M_{0}\) and BS wave function \(\chi_{\lambda\tau}^{0^{+}}(P,p)\). In this paper, our attention is focused on the mass shift due to all decay channels and the decay width of physical resonance. Let us list all decay channels which are fully open or only just "virtual". The narrow state \(\chi_{c0}(3915)\) was discovered in 2005 [17] and for a long time a series of experiments only observed one strong decay mode of \(\chi_{c0}(3915)\): \(J/\psi\omega\) denoted as \(c_{1}^{\prime}\). In 2020 LHCb Collaboration observed another decay channel \(D^{+}D^{-}\)[19] denoted as \(c_{2}^{\prime}\). Though the neutral channel \(D^{0}\bar{D}^{0}\) still has not been observed, this neutral channel should exist for the isospin conservation, which is denoted as \(c_{3}^{\prime}\). Because the total energy \(\epsilon^{\prime}\) of the final state extends from \(\epsilon_{M}\) to \(+\infty\), we obtain one virtual channel \(D^{*}\bar{D}^{*}\) derived from the interaction Lagrangian (1), denoted as \(c_{4}^{\prime}\). Since bound state lies below the threshold, i.e., \(M_{0}<M_{D^{*}}+M_{\bar{D}^{*}}\), the virtual channel \(c_{4}^{\prime}\) can not occur inside the physical world. Then we can apply Eqs. (10) and (11) to evaluate the \(T\)-matrix element \(T_{(c^{\prime};b)a}(\epsilon^{\prime})\) for arbitrary decay channel. In Figure 2, \(VM\) and \(\overline{VM}\) become \(D^{*}\) and \(\bar{D}^{*}\), respectively; \(HM\) becomes \(J/\psi\) and \(LM\) becomes \(\omega\); and decay channel \(J/\psi\omega\) can be exhibited by these two Feynman diagrams. Applying Eq. 
(9), we obtain the function \[\mathbb{I}_{1}(\epsilon^{\prime})= \frac{1}{2}\int d^{3}Qd^{3}Q^{\prime}(2\pi)^{4}\delta^{(4)}(Q+Q^{ \prime}-P^{\epsilon^{\prime}})\sum_{\rm spins}|T_{(c_{1}^{\prime};b)a}( \epsilon^{\prime})|^{2}, \tag{15}\] where \(T_{(c_{1}^{\prime};b)a}(\epsilon^{\prime})\) is the bound state matrix element with respect to \(\epsilon^{\prime}\). These heavy mesons \(J/\psi\), \(D^{*}\) and \(\bar{D}^{*}\) are considered as quark-antiquark bound states, and \(T_{(c_{1}^{\prime};b)a}(\epsilon^{\prime})\) has been given by Eq. (13), where flavor labels \({\cal C}={\cal D}\) and \({\cal A}={\cal B}\) represent \(c\)-quark and light quark, respectively. The meson-quark coupling constant \(g_{0}^{\prime}\) becomes \(g_{\omega}\) and \(g_{\omega}^{2}=2.42/2\) was obtained within QCD sum rules approach [9]. BS amplitudes of heavy vector mesons \(J/\psi\) and \(D^{*}\) have the form \(\Gamma_{\lambda}^{V}(K,k)=(\gamma_{\lambda}+K_{\lambda}\gamma\cdot K/M_{V}^{2} ){\rm exp}(-k^{2}/\omega_{V}^{2})\), where \(\omega_{J/\psi}\)=0.826GeV and \(\omega_{D^{*}}\)=1.50GeV [9, 13]. These momenta in Figure 2(a) become \(p_{1}=(Q+Q^{\prime})/2+p+k\), \(p_{2}=(Q^{\prime}-Q)/2+p+k\), \(p_{3}=k\), \(p_{4}=Q^{\prime}+k\), \(p_{1}^{\prime}=p+P/2\), \(p_{2}^{\prime}=p-P/2\), \(Q+Q^{\prime}=P^{\epsilon^{\prime}}=(0,0,0,i\epsilon^{\prime})\) and \(P=(0,0,0,iM_{0})\). Using Eq. (13), we can calculate the \(T\)-matrix element \(T_{(c_{1}^{\prime};b)a}(\epsilon^{\prime})\) with respect to arbitrary energy \(\epsilon^{\prime}\) for channel \(c_{1}^{\prime}\). From Eq. (15), we obtain the function \(\mathbb{I}_{1}(\epsilon^{\prime})\) for channel \(c^{\prime}_{1}\) and dispersion relation (8) becomes \[\mathbb{D}_{1}(M_{0})=-\frac{\mathcal{P}}{\pi}\int_{\epsilon_{c^{\prime}_{1},M}} ^{\infty}\frac{\mathbb{I}_{1}(\epsilon^{\prime})}{\epsilon^{\prime}-M_{0}}d \epsilon^{\prime}, \tag{16}\] where \(\epsilon_{c^{\prime}_{1},M}=M_{J/\psi}+M_{\omega}\). The \(T\)-matrix element \(T_{(c^{\prime}_{1};b)a}(\epsilon^{\prime})\) and the function \(\mathbb{I}_{1}(\epsilon^{\prime})\) for channel \(c^{\prime}_{1}\) are calculated over the real interval \(\epsilon_{c^{\prime}_{1},M}<\epsilon^{\prime}<\infty\), and we obtain the mass shift \(\Delta M_{1}=(2\pi)^{3}\mathbb{D}_{1}(M_{0})\) due to channel \(c^{\prime}_{1}\). For decay channel \(D^{+}D^{-}\), we obtain the function \(\mathbb{I}_{2}(\epsilon^{\prime})\) from Eq. (9) \[\mathbb{I}_{2}(\epsilon^{\prime})= \frac{1}{2}\int d^{3}Q_{1}d^{3}Q_{2}(2\pi)^{4}\delta^{(4)}(Q_{1} +Q_{2}-P^{\epsilon^{\prime}})\sum_{\rm spins}|T_{(c^{\prime}_{2};b)a}( \epsilon^{\prime})|^{2}, \tag{17}\] where \(T_{(c^{\prime}_{2};b)a}(\epsilon^{\prime})\) represents the bound state matrix element with \(\epsilon^{\prime}\). Considering the lowest order term of \(\mathbb{T}\), we obtain \(T_{(c^{\prime}_{2};b)a}(\epsilon^{\prime})\) represented graphically by Figure 3, where \(p_{1}-p_{2}-p_{3}+p_{4}=p_{1}-p_{2}-q_{3}+q_{4}=Q_{1}+Q_{2}=P^{\epsilon^{\prime }}\), \(p^{\prime}_{1}-p^{\prime}_{2}=P\), and the crosses mean that the momenta of quark propagators and the momentum \(w\) of the exchanged light meson depend on \(Q_{1}\) and \(Q_{2}\). 
The \(T\)-matrix element with respect to \(\epsilon^{\prime}\) for channel \(c^{\prime}_{2}\) becomes \[\begin{split} T_{(c^{\prime}_{2};b)a}(\epsilon^{\prime})=& \frac{-ig^{2}}{(2\pi)^{9/2}\sqrt{8E_{D^{+}}(Q_{1})E_{D^{-}}(Q_{2})E(P)}} \int\frac{d^{4}kd^{4}k^{\prime}d^{4}p}{(2\pi)^{12}}{\rm Tr}[S^{d}_{F}(q_{3}) \\ &\times\bar{\Gamma}^{D^{+}}(Q_{1},q)S^{c}_{F}(p_{1})\Gamma^{D^{*} }_{\lambda}(p^{\prime}_{1},k)S^{l}_{F}(p_{3}){\cal O}^{dl}(p_{3},q_{3})]\chi^{0 ^{+}}_{\lambda\tau}(P,p)\Delta_{F}(w)\\ &\times{\rm Tr}[S^{l}_{F}(p_{4})\Gamma^{\bar{D}^{*}}_{\tau}(p^{ \prime}_{2},k^{\prime})S^{c}_{F}(p_{2})\bar{\Gamma}^{D^{-}}(Q_{2},q^{\prime}) S^{d}_{F}(q_{4}){\cal O}^{dl}(q_{4},p_{4})],\end{split} \tag{18}\] where \(q\), \(q^{\prime}\), \(k\) and \(k^{\prime}\) are the relative momenta between quark and antiquark in heavy mesons, respectively; the meson-quark coupling constants \(g\) were obtained within QCD sum rules approach [8, 16], \({\cal O}^{dl}(p,q)\) represents the meson-quark vertex, \(\Delta_{F}(w)\) is the light meson propagator, the superscript of quark propagator \(S_{F}(p)\) is flavor label, and \(l=u,d\) represents the \(u,d\)-antiquark in heavy vector meson \(D^{*0}\) or \(D^{*+}\), respectively. BS amplitude of heavy pseudoscalar meson \(D^{+}\) has the form \(\Gamma^{D^{+}}(K,k)=i\gamma_{5}{\rm exp}(-k^{2}/\omega_{D}^{2})\), where \(\omega_{D}\)=1.50GeV [13]. The meson-quark vertex \({\cal O}^{dl}(p,q)\) is unit matrix for one-\(\sigma\) exchange; and it becomes \(\gamma_{\mu}\) for one light vector meson exchange. From Eqs. (17), (8) and (14), we obtain the mass shift \(\Delta M_{2}\) due to channel \(c^{\prime}_{2}\) \[\Delta M_{2}=(2\pi)^{3}\mathbb{D}_{2}(M_{0})=-\frac{\mathcal{P}}{\pi}\int_{ \epsilon_{c^{\prime}_{2},M}}^{\infty}\frac{(2\pi)^{3}\mathbb{I}_{2}(\epsilon^ {\prime})}{\epsilon^{\prime}-M_{0}}d\epsilon^{\prime}, \tag{19}\] where \(\epsilon_{c^{\prime}_{2},M}=M_{D^{+}}+M_{D^{-}}\). The \(T\)-matrix element \(T_{(c^{\prime}_{2};b)a}(\epsilon^{\prime})\) and the function \(\mathbb{I}_{2}(\epsilon^{\prime})\) for channel \(c^{\prime}_{2}\) are calculated over the real interval \(\epsilon_{c^{\prime}_{2},M}<\epsilon^{\prime}<\infty\). Following the same procedure as for channels \(c^{\prime}_{1}\) and \(c^{\prime}_{2}\), we can calculate the mass shifts \(\Delta M_{3}\) and \(\Delta M_{4}\) due to the channel \(c^{\prime}_{3}\) and virtual channel \(c^{\prime}_{4}\), respectively. Since the isospin conservation, we have the constituent quark masses \(m_{u}=m_{d}=0.33\)GeV, \(m_{c}=1.55\)GeV [6] and the meson masses \(M_{\sigma}=0.45\)GeV, \(M_{\omega}=0.782\)GeV, \(M_{\rho}=0.775\)GeV, \(M_{\phi}=1.019\)GeV, \(M_{D^{*0}}=M_{D^{*+}}=2.007\)GeV, \(M_{D^{0}}=M_{D^{+}}=1.865\)GeV, \(M_{J/\psi}=3.097\)GeV [20]. By doing the numerical calculation, we obtain the mass shifts \(\Delta M_{i}(i=1,2,3,4)\) due to three open decay channels \(J/\psi\omega\), \(D^{+}D^{-}\), \(D^{0}\bar{D}^{0}\) and one virtual channel \(D^{*}\bar{D}^{*}\), respectively. Subsequently, the mass \(M\) for physical resonance \(\chi_{c0}(3915)\) can be applied to calculate its decay width. Replacing \(M_{0}\) by \(M\) in Eq. (13) and setting \(\epsilon^{\prime}=M\), we calculate the matrix element \(T_{(c^{\prime}_{1};b)a}(M)\) and obtain that the width for physical decay model \(\chi_{c0}(3915)\to J/\psi\omega\) is \(\Gamma_{1}=2(2\pi)^{3}\mathbb{I}_{1}(M)\). Replacing \(M_{0}\) by \(M\) in Eq. 
(18) and setting \(\epsilon^{\prime}=M\), we calculate the matrix element \(T_{(c^{\prime}_{2};b)a}(M)\) and obtain that the width for physical decay model \(\chi_{c0}(3915)\to D^{+}D^{-}\) is \(\Gamma_{2}=2(2\pi)^{3}\mathbb{I}_{2}(M)\). For the isospin conservation, it is easy to obtain the width \(\Gamma_{3}\) for physical decay model \(\chi_{c0}(3915)\to D^{0}\bar{D}^{0}\). Our numerical results are presented in Table 1, and the mass \(M\) and full width \(\Gamma\) are in good agreement with experimental data. Furthermore, the calculated \(D^{+}D^{-}\) width \(\Gamma_{2}\) is very small compared with the calculated \(J/\psi\omega\) width \(\Gamma_{1}\), and then we can explain why the decay model \(\chi_{c0}(3915)\to D^{+}D^{-}\) had not been observed in experiments for a long time. Therefore, this work provides a further verification for the molecular hypothesis of \(\chi_{c0}(3915)\) and predicts the exact values of these strong decay widths \(\Gamma_{1}(\chi_{c0}(3915)\to J/\psi\omega)\), \(\Gamma_{2}(\chi_{c0}(3915)\to D^{+}D^{-})\) and \(\Gamma_{3}(\chi_{c0}(3915)\to D^{0}\bar{D}^{0})\). In this paper we emphatically illuminate the physical meaning of resonance theory in quantum field theory, and the details in computational process will be shown in our future article. ## IV Conclusion We recognize that resonance can not be completely treated as a stationary bound state and provide a reasonable and feasible scheme to describe resonance in the framework of relativistic quantum field theory. Based on BS wave function, we provide a description of the prepared state and investigate the temporal evolution of two-body bound state as determined by the total Hamiltonian. According to dispersion relation, the total matrix elements for all decay channels should be calculated with respect to arbitrary energy, and these matrix elements are expressed in terms of the Heisenberg picture. Mandelstam's approach is generalized to calculate the matrix element between bound states with arbitrary energy, which is exhibited in virtual Feynman diagram. Finally, the mass and decay width for physical resonance are obtained. In this paper, we illustrate resonance theory in quantum field theory by reference to the example of exotic meson which is considered as an unstable meson-meson molecular state, and obviously our work can be extended to more general resonances and creates a new paradigm for investigating hadron resonances. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grants No. 11705104 and No. 11801323; Shandong Provincial Natural Science Foundation, China under Grants No. ZR2016AQ19 and No. ZR2016AM31; and SDUST Research Fund \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Quantity & \(M_{0}\) & \(\Delta M_{1}\) & \(\Delta M_{2}\) & \(\Delta M_{3}\) & \(\Delta M_{4}\) & \(M\) & \(\Gamma_{1}\) & \(\Gamma_{2}\) & \(\Gamma_{3}\) & \(\Gamma\) \\ \hline this work & 3953.7 & \(-24.0\) & \(-1.4\) & \(-1.4\) & \(-4.6\) & 3922.3 & 22.3 & 1.5 & 1.5 & 25.3 \\ \hline PDG[20] & & & & & & 3921.7\(\pm\)1.8 & & & & 18.8\(\pm\)3.5 \\ \hline \end{tabular} \end{table} Table 1: Mass \(M\) and width \(\Gamma\) for physical resonance \(\chi_{c0}(3915)\). \(M_{0}\) is the bare mass of mixed state of two bound states \(D^{*0}\bar{D}^{*0}\) and \(D^{*+}D^{*-}\), \(\Delta M_{i}\) is the calculated shift due to \(i\)th decay channel. (Dimensioned quantities in MeV.) under Grant No. 2018TDJH101.
2307.15878
Explaining Full-disk Deep Learning Model for Solar Flare Prediction using Attribution Methods
This paper contributes to the growing body of research on deep learning methods for solar flare prediction, primarily focusing on highly overlooked near-limb flares and utilizing the attribution methods to provide a post hoc qualitative explanation of the model's predictions. We present a solar flare prediction model, which is trained using hourly full-disk line-of-sight magnetogram images and employs a binary prediction mode to forecast $\geq$M-class flares that may occur within the following 24-hour period. To address the class imbalance, we employ a fusion of data augmentation and class weighting techniques; and evaluate the overall performance of our model using the true skill statistic (TSS) and Heidke skill score (HSS). Moreover, we applied three attribution methods, namely Guided Gradient-weighted Class Activation Mapping, Integrated Gradients, and Deep Shapley Additive Explanations, to interpret and cross-validate our model's predictions with the explanations. Our analysis revealed that full-disk prediction of solar flares aligns with characteristics related to active regions (ARs). In particular, the key findings of this study are: (1) our deep learning models achieved an average TSS=0.51 and HSS=0.35, and the results further demonstrate a competent capability to predict near-limb solar flares and (2) the qualitative analysis of the model explanation indicates that our model identifies and uses features associated with ARs in central and near-limb locations from full-disk magnetograms to make corresponding predictions. In other words, our models learn the shape and texture-based characteristics of flaring ARs even at near-limb areas, which is a novel and critical capability with significant implications for operational forecasting.
Chetraj Pandey, Rafal A. Angryk, Berkay Aydin
2023-07-29T03:18:56Z
http://arxiv.org/abs/2307.15878v1
# Explaining Full-disk Deep Learning Model for Solar Flare Prediction using Attribution Methods ###### Abstract Solar flares are transient space weather events that pose a significant threat to space and ground-based technological systems, making their precise and reliable prediction crucial for mitigating potential impacts. This paper contributes to the growing body of research on deep learning methods for solar flare prediction, primarily focusing on highly overlooked near-limb flares and utilizing the attribution methods to provide a post hoc qualitative explanation of the model's predictions. We present a solar flare prediction model, which is trained using hourly full-disk line-of-sight magnetogram images and employs a binary prediction mode to forecast \(\geq\)M-class flares that may occur within the following 24-hour period. To address the class imbalance, we employ a fusion of data augmentation and class weighting techniques; and evaluate the overall performance of our model using the true skill statistic (TSS) and Heidke skill score (HSS). Moreover, we applied three attribution methods, namely Guided Gradient-weighted Class Activation Mapping, Integrated Gradients, and Deep Shapley Additive Explanations, to interpret and cross-validate our model's predictions with the explanations. Our analysis revealed that full-disk prediction of solar flares aligns with characteristics related to active regions (ARs). In particular, the key findings of this study are: (1) our deep learning models achieved an average TSS\(\sim\)0.51 and HSS\(\sim\)0.35, and the results further demonstrate a competent capability to predict near-limb solar flares and (2) the qualitative analysis of the model's explanation indicates that our model identifies and uses features associated with ARs in central and near-limb locations from full-disk magnetograms to make corresponding predictions. In other words, our models learn the shape and texture-based characteristics of flaring ARs even when they are at near-limb areas, which is a novel and critical capability that has significant implications for operational forecasting. Keywords:Solar flares Deep learning Explainable AI. ## 1 Introduction Solar flares are temporary occurrences on the Sun that can generate abrupt and massive eruptions of electromagnetic radiation in its outermost atmosphere. These events happen when magnetic energy, accumulated in the solar atmosphere, is suddenly discharged, leading to a surge of energy that spans a wide range of wavelengths, from radio waves to X-rays. They are considered critical phenomena in space weather forecasting, and predicting solar flares is essential to understanding and preparing for their effects on Earth's infrastructure and technological systems. The National Oceanic and Atmospheric Administration (NOAA) classifies solar flares into five groups based on their peak X-ray flux level, namely A, B, C, M, and X, which represent the order of the flares from weakest to strongest [8] and are commonly referred to as NOAA/GOES flare classes, where GOES stands for Geostationary Operational Environmental Satellite. M- and X-class flares, which are rare but significant, are the strongest flares that can potentially cause near-Earth impacts, including disruptions in electricity supply chains, airline traffic, satellite communications, and radiation hazards to astronauts in space. This makes them of particular interest to researchers studying space weather. 
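For concreteness, these class boundaries can be expressed as a small helper that maps a GOES peak X-ray flux to its NOAA flare class (an illustrative sketch using the standard decade thresholds in \(Wm^{-2}\); it is not part of the paper's code):

```python
# Illustrative helper: map a GOES peak X-ray flux (W/m^2) to its NOAA flare class
# using the standard decade boundaries (A < 1e-7 <= B < 1e-6 <= C < 1e-5 <= M < 1e-4 <= X).
def goes_flare_class(peak_flux: float) -> str:
    if peak_flux < 1e-7:
        return "A"
    if peak_flux < 1e-6:
        return "B"
    if peak_flux < 1e-5:
        return "C"
    if peak_flux < 1e-4:
        return "M"
    return "X"

assert goes_flare_class(5.3e-5) == "M"   # an M5.3 flare
assert goes_flare_class(2.2e-4) == "X"   # an X2.2 flare
```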
Therefore, developing better methods to predict solar flares is necessary to prepare for the effects of space weather on Earth. Active regions (ARs) are typically characterized by strong magnetic fields that are concentrated in sunspots. These magnetic fields can become highly distorted and unstable, leading to the formation of plasma instabilities and the release of energy in the form of flares and other events [41]. Most operational flare forecasts target these regions of interest and issue predictions for individual ARs, which are the main initiators of space weather events. In order to produce a comprehensive forecast for the entire solar disk using an AR-based model, a heuristic function is used to combine the output flare probabilities (\(P_{FL}(AR_{i})\)) for each active region (AR) [29]. The resulting probability, \(P_{aggregated}=1-\prod_{i}\left[1-P_{FL}(AR_{i})\right]\), represents the likelihood of at least one AR producing a flare, assuming that the flaring events from different ARs are independent. However, there are two main issues with this approach for operational systems. Firstly, magnetic field measurements, which are the primary feature used by AR-based models, are subject to projection effects that distort measurements when ARs are closer to the limb. As a result, the aggregated full-disk flare probability is restricted to ARs in central locations, typically within \(\pm 30^{\circ}\)[11], \(\pm 45^{\circ}\)[20] to \(\pm 70^{\circ}\) of the disk center [12]. Secondly, the heuristic function assumes that all ARs are equally important and independent of one another, which limits the accuracy of full-disk flare prediction probability. In contrast, full-disk models use complete magnetograms covering the entire solar disk, which are used to determine shape-based parameters such as size, directionality, borders, and inversion lines[13]. Although projection effects still exist in these images, full-disk models can learn from the near-limb areas and provide a complementary element to AR-based models by predicting flares that occur in these regions [28]. Machine learning and deep learning methods are currently being applied to predict solar flares, with experimental success and interdisciplinary collaboration from researchers in various fields [26], [25], [11], [20], [28], [14], [42]. Although these approaches have improved image classification and computer vision, they learn complex data representations, resulting in so-called black-box models. The decision-making process of these models is obscured, which is crucial for operational forecasting communities. To address this issue, several attribution meth ods, or post hoc analysis methods, have been developed to explain and interpret the decisions made by deep neural networks. These methods focus on analyzing trained models and do not contribute to the model's parameters during training. In this study, we develop a convolutional neural network (CNN) based full-disk model for predicting solar flares with a magnitude of \(\geq\)M-class flares. We evaluate and explain the model's performance using three attribution methods: Guided Gradient-weighted Class Activation Mapping (Guided Grad-CAM) [32], Integrated Gradients [39], and Deep Shapley Additive Explanations (Deep SHAP) [22]. Our analysis reveals that our model's decisions are based on the characteristics corresponding to ARs, and it successfully predicts flares appearing on near-limb regions of the Sun. The rest of this paper is organized as follows. In Sec. 
2, we present the related work on flare forecasting. In Sec. 3, we present our methodology with data preparation and model architecture. In Sec. 4, we provide a detailed description of all three attribution methods used as methods of explanation. In Sec. 5, we present our experimental evaluation. In Sec. 6, we discuss the interpretation of our models, and in Sec. 7, we present our conclusion and future work. ## 2 Related Work Currently, there are four main types of methods in use for predicting solar flares, which include (i) human-based prediction techniques based on empirical observations [6], [7] (ii) statistical approaches [18], [19] (iii) numerical simulations based on physics-based models [17], [15], and (iv) data-driven models which made use of machine learning and deep learning techniques [4], [11], [20], [2], [28], [27]. The application of machine learning in predicting solar flares has seen significant progress due to recent advances. In one such application of machine learning, a multi-layer perceptron model based on machine learning was employed for predicting \(\geq\)C- and \(\geq\)M-class flares in [25] using 79 manually selected physical precursors derived from multi-modal solar observations. Later, a CNN-based model was developed for predicting \(\geq\)C-, \(\geq\)M-, and \(\geq\)X-class flares using solar AR patches extracted from line-of-sight (LoS) magnetograms within \(\pm 30^{\circ}\) of the central meridian in [11], taking advantage of the increasing popularity of deep learning models. [20] also used a CNN-based model to predict \(\geq\)C- and \(\geq\)M-class flares within 24 hours using AR patches located within \(\pm 45^{\circ}\) of the central meridian. To address the class imbalance issue, they employed undersampling and data augmentation techniques. However, while undersampling led to higher experimental accuracy scores, it often failed to deliver similar real-time performance [1]. It is worth noting that both of these models have limited operational capability as they are restricted to a small portion of the observable disk in central locations (\(\pm 30^{\circ}\) and \(\pm 45^{\circ}\)). In addition, in [30], a CNN-based hybrid model was introduced which combined GoogleLeNet [40] and DenseNet [10]. The model was trained using a large volume of data from both the Helioseismic and Magnetic Imager (HMI) instrument onboard Solar Dynamics Observatory (SDO) and magnetograms from the Michelson Doppler Imager (MDI) onboard the Solar and Heliospheric Observatory (SOHO). The aim of this model was to predict the occurrence of \(\geq\)C-class flares within the next 24 hours. However, it is important to note that these two instruments are not currently cross-calibrated for forecasting purposes, which may result in spurious or incomplete patterns being identified. More recently, an AlexNet-based [16] full-disk flare prediction model was presented in [28]. The authors provided a black-box model, but training and validation were limited due to a lower temporal resolution. To interpret a CNN-based solar flare prediction model trained with AR patches, [3] used an occlusion-based method, and [43] presented visual explanation methods using daily observations of solar full-disk LoS magnetograms at 00:00 UT. They applied Grad-CAM [32] and Guided Backpropagation [36] to explore the relationship between physical parameters and the occurrence of C-, M-, and X-class flares. However, these methods had limitations in predicting near-limb flares. 
Recently, [38] evaluated two additional attribution methods, DeepLIFT [34] and Integrated Gradients [39], for interpreting CNNs trained on AR patches from central locations, i.e., within \(\pm 70^{\circ}\), for predicting solar flares. In this paper, a CNN-based model is presented for predicting \(\geq\)M-class flares, which was trained using full-disk LoS magnetogram images. The contributions of this study are threefold: (i) demonstrating an overall improvement in the performance of a full-disk solar flare prediction model, (ii) utilizing recent attribution methods to provide explanations of our model's decisions, and (iii) for the first time, demonstrating the capability of predicting flares in near-limb regions of the Sun, which are traditionally difficult to predict with AR-based models. ## 3 Data and Model We used compressed images of full-disk LoS solar magnetograms obtained from HMI/SDO, available publicly in near real-time via Helioviewer1[23]. We sampled the magnetogram images every hour of the day, starting at 00:00 and ending at 23:00, from December 2010 to December 2018. We collected a total of 63,649 magnetogram images and labeled them using a 24-hour prediction window based on the maximum peak X-ray flux (converted to NOAA/GOES flare classes) within the next 24 hours, as illustrated in Fig. 1. To elaborate, if the maximum X-ray intensity of a flare was weaker than M (i.e., \(<10^{-5}Wm^{-2}\)), we labeled the observation as "No Flare" (NF: \(<\)M), and if it was \(\geq\)M, we labeled it as "Flare" (FL: \(\geq\)M). This resulted in 54,649 instances for the NF class and 9,000 instances for the FL class. The detailed class-wise distribution of our data is shown in Fig. 2(a). Finally, we created a non-chronological split of our data into four temporally non-overlapping tri-monthly partitions for our cross-validation experiments. We created this partitioning by dividing the data timeline from December 2010 to December 2018 into four partitions. Partition-1 contained data from January to March, Partition-2 contained data from April to June, Partition-3 contained data from July to September, and Partition-4 contained data from October to December, as shown in Fig. 2(b). Due to the scarcity of \(\geq\)M-class flares, the overall distribution of the data is highly imbalanced, with FL:NF \(\sim\)1:6. In our study, we employed transfer learning with a pre-trained VGG-16 model [35] for solar flare prediction. To use the pre-trained weights for our 1-channel input magnetogram images, we duplicated the channels twice, as the pre-trained model requires a 3-channel image for input. Additionally, we used 7\(\times\)7 adaptive average pooling after feature extraction by the convolutional layers and prior to the fully-connected layers to match the dimensions of our 1-channel, 512\(\times\)512 images. This ensures efficient utilization of the pre-trained weights, irrespective of the architecture of the VGG-16 model, which is designed to receive 224\(\times\)224, 3-channel images. Figure 1: A visual representation of the data labeling process using hourly observations of full-disk LoS magnetograms with a prediction window of 24 hours. Here, 'FL' and 'NF' indicate Flare and No Flare for binary prediction (\(\geq\)M-class flares). The gray-filled circles indicate hourly spaced timestamps for magnetogram instances. Figure 2: (a) The total number of hourly sampled magnetogram images per flare class. (b) Label distribution into four tri-monthly partitions for predicting \(\geq\)M-class flares.
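A minimal PyTorch sketch of the channel duplication and pooling adaptation described above is given below; variable names are illustrative, a recent torchvision is assumed, and the authors' actual implementation (available in their repository) may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: adapt a pre-trained VGG-16 to 1-channel, 512x512 magnetograms.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.avgpool = nn.AdaptiveAvgPool2d((7, 7))   # 7x7 adaptive average pooling before the FC layers
model.classifier[6] = nn.Linear(4096, 2)       # binary head: FL vs. NF

def forward_magnetogram(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, 1, 512, 512) single-channel magnetogram
    x = x.repeat(1, 3, 1, 1)                   # duplicate the channel twice -> 3 channels
    return model(x)

logits = forward_magnetogram(torch.randn(4, 1, 512, 512))
print(logits.shape)                            # torch.Size([4, 2])
```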
Our model comprises 13 convolutional layers, each followed by a rectified linear unit (ReLU) activation, five max pool layers, one average pool layer, and three fully connected layers, as illustrated in Fig. 3. ## 4 Attribution Methods Deep learning models are often seen as black boxes due to their intricate data representations, making them difficult to understand and leading to issues of inconsistency in the discovered patterns [21]. Attribution methods are post hoc approaches for model interpretation that provide insights into the decision-making process of trained CNN models without influencing the training process. These methods generate an attribution vector, or heat map, of the same size as the input, where each element in the vector represents the contribution of the corresponding input element to the model's decision. Attribution methods can be broadly classified into two main categories: perturbation-based and gradient-based [9]. Perturbation-based methods modify parts of the input to create new inputs and compute the attribution by measuring the difference between the outputs of the original and modified inputs. However, this approach can lead to inconsistent interpretations due to the creation of Out-of-Distribution (OoD) data caused by random perturbations [31]. In contrast, gradient-based methods calculate the gradients of the output with respect to the extracted features or input using backpropagation, enabling attribution scores to be estimated more efficiently and robustly to input perturbations [24]. Therefore, in this study, we employed three recent gradient-based methods to evaluate our models due to their reliability and computational efficiency. Our primary objective is to provide a visual analysis of the decisions made by our model and identify the characteristics of magnetogram images that trigger specific decisions by cross-validating the generated explanations from all three methods, which can clarify the predictive output of the models and help with operational forecasting under critical conditions. Figure 3: The architecture of our full-disk solar flare prediction model. **Guided Grad-CAM:** The Guided Gradient-weighted Class Activation Mapping (Guided Grad-CAM) method [32] combines the strengths of Grad-CAM and guided backpropagation [36]. Grad-CAM produces a coarse localization map of important regions in the image by using class-specific gradient information from the final convolutional layer of a CNN, while guided backpropagation calculates the gradient of the output with respect to the input, highlighting important pixels detected by neurons. While Grad-CAM attributions are class-discriminative and useful for localizing relevant image regions, they do not provide fine-grained pixel importance like guided backpropagation [5]. Guided Grad-CAM combines the fine-grained pixel details from guided backpropagation with the coarse localization advantages of Grad-CAM and generates its final localization map by performing an element-wise multiplication between the upsampled Grad-CAM attributions and the guided backpropagation output. **Integrated Gradients:** Integrated Gradients (IG) [39] is an attribution method that explains a model's output by analyzing its features. To be more specific, IG calculates the path integral of gradients along a straight line connecting the baseline feature to the input feature in question.
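For reference, this path integral is commonly written as follows (the standard definition from [39], with \(F\) denoting the model output for the target class, \(x\) the input, and \(x^{\prime}\) the baseline): \[IG_{i}(x)=(x_{i}-x^{\prime}_{i})\times\int_{0}^{1}\frac{\partial F(x^{\prime}+\alpha(x-x^{\prime}))}{\partial x_{i}}\,d\alpha,\] and in practice the integral is approximated by a Riemann sum over a small number of interpolation steps.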
A baseline reference is required for this method, which represents the absence of a feature in the original image and can be a zero vector or noise; we used a zero vector of the size of the input as a baseline for our computation. IG is preferred for its completeness property, which states that the sum of integrated gradients for all features equals the difference between the model's output with the given input and the baseline input values. This property allows for attributions to be assigned to each individual feature and, when added together, should yield the output value itself [37]. **Deep SHAP:** SHAP values, short for SHapley Additive exPlanations [22], utilize cooperative game theory [33] to enhance the transparency and interpretability of machine learning models. This method quantifies the contribution or importance of each feature on the model's prediction rather than evaluating the quality of the prediction itself. In the case of deep-learning models, Deep SHAP [22] improves upon the DeepLIFT algorithm [34] by estimating the conditional expectations of SHAP values using a set of background samples. For each input sample, the DeepLIFT attribution is computed with respect to each baseline, and the resulting attributions are averaged. This method assumes feature independence and explains the model's output through the additive composition of feature effects. Although it assumes a linear model for each explanation, the overall model across multiple explanations can be complex and non-linear. Similar to IG, Deep SHAP also satisfies the completeness property [37]. ## 5 Experimental Evaluation ### Experimental Settings We trained a full-disk flare prediction model using Stochastic Gradient Descent (SGD) as an optimizer and Negative Log-Likelihood (NLL) as the objective function. To apply NLL loss, we used logarithmic-softmax activation on the raw logits from the output layer. Our model was initialized with pre-trained weights from the VGG-16 model [35]. We further trained the model for 50 epochs with a batch size of 64 using dynamic learning rates (initialized at 0.001 and halved every 5 epochs). To address the class imbalance issue, we used data augmentation and class weights in the loss function. Specifically, we applied three augmentation techniques (vertical flipping, horizontal flipping, and rotations of +5\({}^{\circ}\) to -5\({}^{\circ}\)) during the training phase to explicitly augment the minority FL-class three times. However, this still left the dataset imbalanced, so we adjusted the class weights inversely proportional to the class frequencies after augmentations and penalized misclassifications made in the minority class. To improve the generalization of our model without introducing bias in the test set, we applied data augmentation exclusively during the training phase, and we opted for augmentation over oversampling and undersampling as the latter two may lead to overfitting of the model [2]. Finally, we conducted 4-fold cross-validation experiments using tri-monthly partitions to train our models. We assess the overall performance of our models using two forecast skills scores: True Skill Statistics (TSS, in Eq. 1) and Heidke Skill Score (HSS, in Eq. 2), derived from the elements of confusion matrix: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). In this context, FL and NF represent positive and negative classes respectively. 
\[TSS=\frac{TP}{TP+FN}-\frac{FP}{FP+TN} \tag{1}\] \[HSS=2\times\frac{TP\times TN-FN\times FP}{P\times(FN+TN)+(TP+FP)\times N} \tag{2}\] where N = TN + FP and P = TP + FN. TSS and HSS values range from -1 to 1, where 1 indicates all correct predictions, -1 represents all incorrect predictions, and 0 represents no skill. In contrast to TSS, HSS is an imbalance-aware metric, and it is common practice to use HSS for solar flare prediction models due to the high class-imbalance ratio present in the datasets; for a balanced dataset, the two metrics are equivalent, as discussed in [1]. Lastly, we report the subclass and overall recall for flaring instances (M- and X-class), which is calculated as \((\frac{TP}{TP+FN})\), to demonstrate the prediction sensitivity. To reproduce this work, the source code and detailed experimental results can be accessed from our open source repository 2. Footnote 2: explainFDvgg16:[https://bitbucket.org/gsudmlab/explainfdvgg16/src/main/](https://bitbucket.org/gsudmlab/explainfdvgg16/src/main/) ### Evaluation We performed 4-fold cross-validation using the tri-monthly dataset for evaluating our models. Our models have on average TSS\(\sim\)0.51 and HSS\(\sim\)0.35, which improves over the performance of [28] by \(\sim\)4% in terms of TSS (reported \(\sim\)0.47) and is competitive in terms of HSS (reported \(\sim\)0.35). In addition, we evaluate our results for correctly predicted and missed flare counts for class-specific flares (X-class and M-class) in central locations (within \(\pm\)70\({}^{\circ}\)) and near-limb locations (beyond \(\pm\)70\({}^{\circ}\)) of the Sun, as shown in Table 1. We observe that our models made correct predictions for \(\sim\)89% of the X-class flares and \(\sim\)77% of the M-class flares in central locations. Similarly, our models show a compelling performance for flares appearing on near-limb locations of the Sun, where \(\sim\)77% of the X-class and \(\sim\)52% of the M-class flares are predicted correctly. This is important because, to our knowledge, the prediction of near-limb flares is often overlooked. More false positives in M-class are expected because of the model's inability to distinguish bordering class [C4+ to C9.9] flares from \(\geq\)M-class flares, which we have observed empirically in our prior work [27] as well. Overall, we observed that \(\sim\)86% and \(\sim\)70% of the X-class and M-class flares, respectively, are predicted correctly by our models. We also quantitatively and qualitatively evaluated our models' effectiveness by spatially analyzing their performance with respect to the locations of the M- and X-class flares responsible for the labels. To conduct our analysis, we spatially binned the responsible flares (maximum X-ray flux within the next 24h) and analyzed whether these instances were correctly (TP) or incorrectly (FN) predicted. For this, we used the predictions of our models on the validation sets from the 4-fold cross-validation experiments. Here, each bin represents a 5\({}^{\circ}\) by 5\({}^{\circ}\) spatial cell in the Heliographic Stonyhurst (HGS) coordinate system (i.e., latitude and longitude). For each subgroup represented in a spatial cell, we calculate the recall for M-class, X-class, and combined M- and X-class flares separately to assess the models' sensitivity at a fine-grained level. The heatmaps demonstrating the spatial distribution of recall scores of our models can be seen in Fig. 4.
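As an aside, the TSS and HSS values reported in this section (and the per-cell recall) follow directly from confusion-matrix counts; the helper below is an illustrative sketch and is not taken from the paper's repository.

```python
def skill_scores(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """Return (TSS, HSS) computed from confusion-matrix counts, following Eqs. (1)-(2)."""
    p, n = tp + fn, tn + fp          # P = TP + FN, N = TN + FP
    tss = tp / (tp + fn) - fp / (fp + tn)
    hss = 2.0 * (tp * tn - fn * fp) / (p * (fn + tn) + (tp + fp) * n)
    return tss, hss

# Synthetic example counts (not values from the paper):
print(skill_scores(tp=90, fn=10, fp=40, tn=360))   # TSS = 0.8
```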
These heatmaps allow us to pinpoint the locations where our models were more effective in making accurate predictions and vice versa. We observed that our models demonstrated reasonable performance overall, particularly for X-class flares, in both near-limb and central locations. However, we also observed a higher number of false negatives around near-limb locations for M-class flares. In particular, we demonstrate that the full-disk model proposed in this paper can predict flares appearing at near-limb locations of the Sun with great accuracy, which is a crucial addition to operational forecasting systems. \begin{table} \begin{tabular}{r c c c c c c} \hline \hline & \multicolumn{3}{c}{Within \(\pm\)70\({}^{\circ}\)} & \multicolumn{3}{c}{Beyond \(\pm\)70\({}^{\circ}\)} \\ Flare-Class & TP & FN & Recall & TP & FN & Recall \\ \hline X-Class & 597 & 71 & 0.89 & 164 & 48 & 0.77 \\ M-Class & 4,464 & 1,366 & 0.77 & 1,197 & 1,093 & 0.52 \\ Total (X\&M) & 5,061 & 1,437 & 0.78 & 1,361 & 1,141 & 0.54 \\ \hline \hline \end{tabular} \end{table} Table 1: Counts of correctly (TP) and incorrectly (FN) classified X- and M-class flares in central (\(|longitude|\leq\pm\)70\({}^{\circ}\)) and near-limb locations. The recall across different location groups is also presented. Counts are aggregated across folds. Figure 4: A heat map showcasing recall for individual FL-Class (X- and M-class flares) and when combined (\(\geq\)M-class flares), binned into 5\({}^{\circ}\)\(\times\) 5\({}^{\circ}\) cells according to the flare locations used as the label. The flare events beyond \(\pm\)70\({}^{\circ}\) longitude (separated by a vertical red line) represent near-limb events. Note: A red cross in a white cell represents locations with zero correct predictions, while white cells without a red cross represent locations with no available instances. ## 6 Discussion In this section, we interpret the visual explanations generated using the attribution methods mentioned earlier for correctly predicted near-limb flares and for the model's high confidence in an incorrect prediction. As the major focus of this study is on near-limb flares, we interpret the predictions of our model for an east-limb X4.9-class flare (note that East and West are reversed in solar coordinates) observed on 2014-02-25 at 00:39:00 UTC with a visual explanation generated using all three attribution methods. For this, we used an input image at 2014-02-24 19:00:00 UTC (\(\sim\)6 hours prior to the flare event), where the sunspot for the corresponding flare becomes visible in the magnetogram image. We observed that while all three methods highlight features corresponding to an AR in the magnetogram, Guided Grad-CAM and Deep SHAP provide finer details by suppressing noise compared to IG, as shown in Fig. 5. Furthermore, the visualization of attribution maps suggests that for this particular prediction, although barely visible, the region responsible for the flare event is considered important and hence contributes to the consequent decision. Figure 5: A visual explanation of a correctly predicted near-limb (East) FL-class instance. (FL-Image): Annotated full-disk magnetogram at flare start time, showing flare location (green flag) and NOAA ARs (red flags). (Input Image): Actual magnetogram from the dataset. Overlays (GGCAM, IG, SHAP) depict the input image overlayed with attributions, and Maps (GGCAM, IG, SHAP) showcase the attribution maps obtained from Guided Grad-CAM, Integrated Gradients, and Deep SHAP, respectively.
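Attribution maps of the kind shown in Figs. 5-7 can be generated with standard libraries; the sketch below uses the Captum package as one possible route (an assumption for illustration; the authors' repository may use a different implementation), applied to a VGG-16 adapted as in Sec. 3.

```python
import torch
from torchvision import models
from captum.attr import IntegratedGradients, GuidedGradCam, DeepLiftShap

# Stand-ins for the trained model and a pre-processed magnetogram (channel already duplicated).
model = models.vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 2)
model.eval()
x = torch.randn(1, 3, 512, 512)
target = 1                                  # index of the FL ("Flare") class

ig_attr  = IntegratedGradients(model).attribute(x, baselines=torch.zeros_like(x), target=target)
ggc_attr = GuidedGradCam(model, layer=model.features[28]).attribute(x, target=target)  # last conv layer
# DeepLift-based methods expect a distribution of baselines and may require replacing
# in-place ReLUs in the backbone before use.
shap_attr = DeepLiftShap(model).attribute(x, baselines=torch.zeros(8, 3, 512, 512), target=target)
```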
The explanation for this east-limb example further shows that as soon as a region becomes visible, the pixels covering the AR on the east limb are activated. Similarly, we analyze another case of a correctly predicted near-limb flare (west limb) of the Sun. For this, we provide a case of an X2.3-class flare observed on 2013-10-29T21:42:00 UTC, where we used an input image at 2013-10-29T03:00:00 UTC (\(\sim\)19 hours prior to the flare event), shown in Fig. 6. We observed that the model focuses on specific ARs, including the relatively smaller AR on the west limb, even though other ARs are present in the magnetogram image. This shows that our models are capable of identifying the relevant AR even when there is a severe projection effect. Figure 6: A visual explanation of a correctly predicted near-limb (West) FL-class instance. (FL-Image): Annotated full-disk magnetogram at flare start time, showing flare location (green flag) and NOAA ARs (red flags). (Input Image): Actual magnetogram from the dataset. Overlays (GGCAM, IG, SHAP) depict the input image overlayed with attributions, and Maps (GGCAM, IG, SHAP) showcase the attribution maps obtained from Guided Grad-CAM, Integrated Gradients, and Deep SHAP, respectively. Similarly, to analyze a case of false positive, we present an example of a C8.5 flare observed on 2011-02-18 at 06:27:00 UTC, and to explain the result, we used an input magnetogram instance at 2011-02-17 07:00:00 UTC (\(\sim\)23.5 hours prior to the event). We observed that the model's flaring probability for this particular instance is about 0.93. Therefore, we seek a visual explanation of this prediction using all three interpretation methods. Similar to the observations from our positive prediction, the visualization rendered using Guided Grad-CAM and Deep SHAP provides smoothed details and reveals that out of the three ARs present in the magnetogram, only two of them are activated, while the AR on the west limb is not considered important for this prediction, as shown in Fig. 7. Although an incorrect prediction, the visual explanation shows that the model's decision is based on an AR which is, in fact, responsible for the eventual C8.5 flare event. This incorrect prediction can be attributed to the interference of these bordering class flares, which is problematic for binary flare prediction models. Figure 7: A visual explanation of an incorrectly predicted NF-class instance. (FL-Image): Annotated full-disk magnetogram at flare start time, showing flare location (green flag) and NOAA ARs (red flags). (Input Image): Actual magnetogram from the dataset. Overlays (GGCAM, IG, SHAP) depict the input image overlayed with attributions, and Maps (GGCAM, IG, SHAP) showcase the attribution maps obtained from Guided Grad-CAM, Integrated Gradients, and Deep SHAP, respectively. ## 7 Conclusion and Future Work In this paper, we employed three recent gradient-based attribution methods to interpret the predictions made by our binary flare prediction model based on VGG-16, which was trained to predict \(\geq\)M-class flares. We addressed the issue of flares occurring in near-limb regions of the Sun, which has been widely ignored, and our model demonstrated competent performance for such events. Additionally, we assessed the model's predictions with visual explanations, indicating that the decisions were primarily based on characteristics related to ARs in the magnetogram instance.
Despite the model's enhanced ability, it still suffers from a high false positive rate due to high C-class flares. In an effort to address this problem, we plan to examine the unique features of each flare class to create a more effective method for segregating these classes based on background flux and generate a new set of labels that better handle border class flares. Moreover, our models currently only examine spatial patterns in our data, but we intend to broaden this work to include spatiotemporal models to improve performance. ## Acknowledgements This work is supported in part under two NSF awards #2104004 and #1931555, jointly by the Office of Advanced Cyberinfrastructure within the Directorate for Computer and Information Science and Engineering, the Division of Astronomical Sciences within the Directorate for Mathematical and Physical Sciences, and the Solar Terrestrial Physics Program and the Division of Integrative and Collaborative Education and Research within the Directorate for Geosciences. This work is also partially supported by the National Aeronautics and Space Administration (NASA) grant award #80NSSC22K0272. ## Ethical Statement Space weather forecasting research raises several ethical implications that must be considered. It is important to note that the data used for the full-disk deep learning model for solar flare prediction is publicly available as a courtesy of NASA/SDO and the AIA, EVE, and HMI science teams - and not subject to data privacy and security concerns. The use of SDO images for non-commercial purposes and public education and information efforts is strongly encouraged and requires no expressed authorization. However, it is still essential to consider the ethical implications associated with developing and using a full-disk deep learning model for solar flare prediction, particularly in terms of fairness, interpretability, and transparency. It is crucial to ensure that the model is developed and used ethically and responsibly to avoid any potential biases or negative impacts on individuals or communities. Moreover, post hoc analysis for full-disk deep learning models for solar flare prediction should avoid giving wrongful assumptions of causality and false trust. While these models may have robust and novel forecast skills, it is crucial to understand the scarcity of extreme solar events and the skill scores used to assess model performance. We note that these models are not perfect and have limitations that should be considered when interpreting their predictions. Therefore, it is important to use these models with caution and to consider multiple sources of information when making decisions, especially when in operations, related to space weather events. By being transparent about the limitations and uncertainties associated with these models, we can ensure that they are used ethically and responsibly to mitigate any potential harm to individuals or communities. Furthermore, the impact of space weather events can range from minor disruptions to significant damage to critical infrastructure, such as power grids, communication systems, and navigation systems, with the potential to cause significant economic losses. Therefore, it is crucial to ensure public safety, particularly for astronauts and airline crew members, by providing information about potential dangers associated with space weather events. 
Finally, it is imperative to ensure that space weather forecasting research is used for peaceful purposes, i.e., for early detection and, in part, for avoiding vulnerabilities that may be caused by extreme space weather events.
2308.02559
DLSIA: Deep Learning for Scientific Image Analysis
We introduce DLSIA (Deep Learning for Scientific Image Analysis), a Python-based machine learning library that empowers scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of tasks in image analysis to be used in downstream data processing, or for experiment-in-the-loop computing scenarios. DLSIA features easy-to-use architectures such as autoencoders, tunable U-Nets, and parameter-lean mixed-scale dense networks (MSDNets). Additionally, we introduce sparse mixed-scale networks (SMSNets), generated using random graphs and sparse connections. As experimental data continues to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts CNN complexities, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration, and advance research in scientific image analysis.
Eric J Roberts, Tanny Chavez, Alexander Hexemer, Petrus H. Zwart
2023-08-02T21:32:41Z
http://arxiv.org/abs/2308.02559v2
# DLSIA: Deep Learning for Scientific Image Analysis ###### Abstract We introduce DLSIA (Deep Learning for Scientific Image Analysis), a Python-based machine learning library that empowers scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of tasks in image analysis to be used in downstream data processing, or for experiment-in-the-loop computing scenarios. DLSIA features easy-to-use architectures such as autoencoders, tunable U-Nets, and parameter-lean mixed-scale dense networks (MSDNets). Additionally, we introduce sparse mixed-scale networks (SMSNets), generated using random graphs and sparse connections. As experimental data continues to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts CNN complexities, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration, and advance research in scientific image analysis. ## I Introduction ### _Purpose & Motivation_ Scientific image analysis forms a crucial component of numerous workflows at user facilities, generating an abundance of datasets that each possess unique characteristics. Given the distinct nature of these datasets, the need frequently arises to craft custom solutions tailored to individual experiments. Convolutional neural networks (CNNs), along with other machine learning tools, prove extremely valuable in this regard, capable of addressing a variety of analysis needs and producing insightful results. The unique aspect of scientific data analysis in such settings often necessitates the creation of bespoke solutions tailored to individual experiments, providing optimal results given the data's specific characteristics. CNNs, along with a host of other machine learning tools, present themselves as exceptionally suitable for such tasks due to their flexibility and the broad array of potential applications they cater to. ### _Background & Prior Art_ Convolutional neural networks (CNNs) have emerged as a transformative class of machine learning models specifically designed to unravel patterns and extract meaningful features from various forms of data. Having gained significant popularity in the scientific community, CNNs are _particularly_ well-suited for tackling image analysis tasks, including object detection, image classification, and pixel-by-pixel semantic segmentation. The unique strength of CNNs lies in their ability to autonomously learn discriminative features directly from the data itself, eliminating the need for laborious manual feature engineering. By training on large datasets with labeled examples, CNNs can learn to recognize specific objects, identify anomalies, or detect subtle patterns. Moreover, CNNs remain a versatile tool, allowing researchers from different backgrounds to choose from a variety of different CNN architectures that can denoise, reconstruct, and segment images [1, 2, 3, 4, 5, 6], or perform higher level tasks from among their diverse scientific disciplines, including automated structure and material classification and data-driven discovery in X-ray scattering [7, 8, 9, 10, 11], biological [12, 13], crystallographic [14, 15, 16], and signal processing [17, 18, 19, 20] settings. 
Inspired by the structure and hierarchy of visual representations in the animal and human visual system in which each individually specialized cortical areas work in tandem to identify objects in a visual field [21], CNNs apply small filters across the input data in the form of several adjacently connected convolutional layers to learn relevant features and progressively transform sequences of raw pixel intensities into higher-level representations. This layered computation and hierarchical abstraction empowers CNNs to discern intricate patterns and identify key features in data. Contrasted with earlier, more-traditional fully-connected neural networks (FCNs) [22], CNNs enforce a more-localized learning of image features and require _far_ fewer weights to learn, resulting in deeper CNN architectures with more-targeted learning [23]. While the widespread adaptability of CNNs has made them a prevalent tool across various scientific domains, not all scientific researchers possess the expertise or knowledge required to construct and train these networks effectively. Access to user-friendly libraries with pre-built networks is invaluable for individuals lacking deep understanding of CNNs. These libraries offer a convenient way to deploy CNNs without dealing with network architecture intricacies. Researchers can focus on their domain expertise by leveraging these libraries instead of building CNNs from scratch. The flexibility of these libraries enables iterative experimentation, allowing researchers to easily swap network architectures and adjust hyperparameters to find the best configurations for their problems. Access to state-of-the-art networks saves time and resources, while promoting interdisciplinary collaboration by abstracting the complexities of CNN construction and training, as researchers can focus on their areas of expertise while leveraging the power of CNNs for their analyses. In summary, the prevalence of CNNs in the sciences necessitates user-friendly libraries that simplify their construction and training, allowing scientists to stay at the forefront of CNN research without the need for extensive expertise in deep learning. To address these challenges and expedite the process of incorporating machine learning into scientific image analysis workflows, we introduce DLSIA (Deep Learning for Scientific Image Analysis), a Python-based, general-purpose machine learning library offering a flexible and customizable environment for generating custom CNN architectures and an extensive suite of tools designed to empower scientists and researchers from diverse scientific domains, including beamline scientists, biologists, and researchers in X-ray scattering. DLSIA, found here [https://dlsia.readthedocs.io/en/latest/](https://dlsia.readthedocs.io/en/latest/), enables a seamless integration of custom CNN architectures and other advanced machine learning methods into common workflows, providing researchers with the means to rapidly test and implement different analysis approaches within a unified framework, dramatically increasing efficiency and adaptability. Whether the task at hand involves image classification, anomaly detection, or any other complex pattern recognition, DLSIA offers a streamlined, efficient platform that enables users to explore, compare, and customize a wide array of CNN architectures, facilitating a systematic investigation of what works, what doesn't, and what is best suited for their specific scientific problems. 
### _The DLSIA software library_ The core focus of DLSIA lies in its ability to bridge the gap between cutting-edge deep learning techniques and the challenges encountered in scientific image analysis. By offering a comprehensive collection of user-customizable CNNs, including autoencoders, tunable U-Nets (TUNet), mixed-scale dense networks (MSDNet), and more novel randomized sparse mixed-scale networks (SMSNet), DLSIA allows researchers to harness the power of state-of-the-art deep learning while tailoring a network architecture to the specific demands of their scientific investigations. This flexibility empowers users to fine-tune CNNs, select appropriate layers, optimize hyperparameters, and explore diverse architectural variations, enabling a comprehensive exploration of the rich design space inherent in deep learning-based image analysis. DLSIA facilitates seamless integration with various scientific datasets and promotes reproducible research through its intuitive and extensible PyTorch Application Programming Interface (API). It offers a rich set of functionalities for data preprocessing, model training, validation, and evaluation, while also providing convenient visualization tools to aid in the interpretation and analysis of results. With its user-centric design philosophy, DLSIA aims to empower scientists across domains to leverage the potential of CNNs for scientific image analysis, ultimately accelerating discoveries and advancing research in a wide range of scientific fields. The rest of the manuscript is organized as follows: Sect. II offers an in-depth look at the CNN architectures offered; Sect. III describes the different utility functions, data loaders, training regimens, and uncertainty quantification available to DLSIA users; we validate DLSIA CNN architectures through various applications on experimental data in Sect. IV; and Sect. V concludes with a discussion of DLSIA results and viability. ## II DLSIA Deep Convolutional Neural Networks Convolutional neural networks (CNNs) are deep learning models that excel at visual data analysis. In general, CNNs capture features by applying many convolutional filters, or kernels, to local regions of the data via several adjacently connected convolutional layers. The filters are square matrices with adjustable weights that serve as "windows" observing a specific region of the image. By learning the filters' weights via network training and optimization, CNNs can identify various features within the image. We highlight below the different CNN architectures available in the DLSIA software library. Each available network varies in its sequencing of layers and addition of nonlinear activation, pooling, and normalization layers to decompose images into complex hierarchical structures and increase the expressive power. But true to the original goal of DLSIA, all networks are fully customizable with an array of user-specified hyperparameters available to toggle. ### _Tunable U-Nets_ Included in the DLSIA software suite is a tunable variant of U-Nets (TUNets), a popular and effective deep convolutional neural network [24]. Inspired by autoencoders (Sect. II-B) and first introduced for the segmentation of biomedical images, its distinctive U-shaped architecture consists of typically-mirrored contractive encoder and expansive decoder halves. Contextual information and features are captured by the contractive encoder phase, made up of a predefined number of layers \(d\).
Each individual layer consists of stacked \(3\times 3\) unpadded convolutional operators, nonlinear activation (typically in the form of the rectified linear unit (ReLU) function), and batch normalization to expedite the learning process [25]. Between each layer, max-pooling operations reduce the spatial dimensionality to ease computational costs, introduce translational equivariance [26], and encourage higher-level feature extraction. Next, in the upsampling phase of the expansive decoder half, each layer mirrors the stacked convolution-activation-normalization block, while transposed convolutions between layers recover the previously compressed spatial dimensions. This effectively projects the encoder's learned features into the higher resolutions of the original image space to predict a pixel-by-pixel semantic segmentation. In total, the modified upsampling phase offers many more convolutional channels in which the contextual information may be propagated, allowing one to combine and correlate local features of an image with its behavior at larger length scales [27, 28]. Moreover, long-reaching skip connections are introduced in the form of channel-wise concatenations of intermediate feature maps between adjacent contractive and expansive phases, allowing for an aggregation of multi-scale feature representation at different network stages [29, 30, 31, 32]. TUNet performance on different applications relies significantly on the various hyperparameters that govern the network architecture [33, 34, 35]. As such, the DLSIA API offers full flexibility in creating and deploying TUNets of custom sizes and morphology by allowing the user to define the four following architecture-governing hyperparameters: 1. Depth \(d\): the number of layers in the TUNet. A depth of \(d\) will contain \(d\) layers of dual convolutions and accompanying intralayer operations in each of the encoder and decoder phases, with \(d-1\) mirrored max-pooling, up-convolutions, and concatenation steps between each layer. 2. Number of initial base channels \(c_{b}\): the input data is mapped to this number of feature channels after the initial convolution. 3. Growth rate \(r\): the growth rate/decay rate of feature channels between successive layers. 4. Hidden rate \(r_{h}\): the growth rate/decay rate of feature channels within each individual layer, between each layer's successive convolutions. Additionally, DLSIA defaults to ReLU nonlinear activation and batch normalization after each convolution operation, though the user is free to apply any activation or normalization consistent with PyTorch syntax. A U-Net schematic of depth \(d=4\) is shown in Fig. 1 depicting the order of operations and evolution of channels and spatial dimensions along the contracting and expanding halves. We note that the growth and hidden rates of feature channel growth and decay may be non-integers. ### _Convolutional Autoencoder_ Convolutional autoencoders are a deep, unsupervised neural network framework generally tasked with learning feature extraction for the purpose of reconstructing the input [36, 37]. While relatively simple in structure and acting as a precursor to the U-Net encoder-decoder structure, autoencoders use convolutional layers and max-pooling operations between adjacent layers to exploit the feature extraction properties in the beginning encoder half of its architecture. As shown in Fig. 2, the encoder half terminates at a single-dimensional latent space of features, often referred to as the latent space representation.
This informational "bottleneck" forces the network to learn only the most important features and contextual information. The second half of the network, the decoder, concludes with alternating transposed convolutions and blocks of dual convolutions to project the information back to the input space and learn the reconstruction of the input data. The DLSIA instantiation of autoencoders once again reflects that of the tunable U-Nets. Users once again may find the autoencoder with the appropriate expressive power to suit their needs by toggling the number of layers \(d\), the initial number of base channels \(c_{b}\), and the growth rate \(r\) of the convolutional channels. Additionally, users are free to experiment with different sizes of latent space vectors with the \(c_{lat}\) hyperparameter. Fig. 1: Diagram of a two-dimensional, four-layer tunable U-Net congruent with input data of \(c_{in}\) channels and spatial dimensions \(m\) and \(n\). Among the user-defined hyperparameters on display are the initial base channels \(c_{b}\) and the channel growth factor \(r\), both of which control the size of the network, and thus its potential expressive power. The hidden growth rate \(r_{h}\) is set to 1 for simplicity. We note that DLSIA easily accommodates volumetric data by simply replacing all convolutions (and associated layer normalization) with their three-dimensional counterparts. Fig. 2: Schematic overview of a two-layer autoencoder congruent with input data of \(c_{in}\) channels and spatial dimensions \(m\) and \(n\). DLSIA provides the flexibility to adjust the following hyperparameters: initial base channels \(c_{b}\), channel growth factor \(r\), and length of latent space vector \(c_{lat}\). ### _Mixed-scale Dense Convolutional Neural Networks_ The mixed-scale dense network (MSDNet) was developed as a deep learning framework with a relatively simple architecture containing roughly two to three orders of magnitude _fewer_ trainable parameters [38, 39] than U-Nets and typical encoder-decoder networks. This reduction in model complexity reduces the risk of overfitting, a common problem in neural networks that occurs when the model fits the training data too closely, resulting in poor generalization [40, 41]. MSDNets reduce the model complexity in two ways. Firstly, to probe image features at different length scales and preserve dimensionality between all network layers, dilated convolutions [42] replace upscaling and downscaling operations typically found in convolutional neural networks. Like their non-dilated counterparts, convolutions of integer dilation \(l\) consist of the same square kernel, though the kernel's receptive field is expanded by spacing neighboring entries \((l-1)\) pixels apart; e.g. a two-dimensional \(3\times 3\) dilated convolution with \(l=10\) has \(9\) pixels between each of the vertically- and horizontally-adjacent entries in the kernel matrix, resulting in a receptive field of \(21\times 21\) pixels. Secondly, as depicted in a 3-layer MSDNet diagram in Fig. 3, layers associated with different length scales are mixed together by densely connecting _all_ potential pairs of layers. This dense connectivity leads to several advantages, including maximum feature reusability, recovery of spatial information lost in the early layers, alleviation from the vanishing gradient problem [25] that plagues deep or stacked networks [43], and more robust model convergence and finer-grained predictions [32].
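To make the mixed-scale, densely connected idea concrete, the sketch below implements a miniature MSDNet-style block in plain PyTorch (an illustration of the concept only, not the DLSIA implementation): each layer receives the concatenation of the input and all previous layers, applies a \(3\times 3\) convolution with its own dilation (padded so spatial dimensions are preserved), ReLU, and batch normalization, and a final \(1\times 1\) convolution linearly combines all layers.

```python
import torch
import torch.nn as nn

class TinyMSDNet(nn.Module):
    """Illustrative mixed-scale dense block; not the DLSIA implementation."""
    def __init__(self, in_channels=1, out_channels=2, depth=10, max_dilation=5):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(depth):
            d = i % max_dilation + 1                    # one simple layer-to-dilation assignment
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_channels + i, 1, kernel_size=3, dilation=d, padding=d),
                nn.ReLU(),
                nn.BatchNorm2d(1),
            ))
        # 1x1 convolution over the input and all intermediate layers
        self.final = nn.Conv2d(in_channels + depth, out_channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connectivity
        return self.final(torch.cat(feats, dim=1))

y = TinyMSDNet()(torch.randn(2, 1, 64, 64))
print(y.shape)   # torch.Size([2, 2, 64, 64])
```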
MSDNets are distinct from encoder-decoder networks, as the _same_ set of operations are applied to all densely-connected layers: dilated \(3\times 3\) convolutions with a layer-specific dilation, ReLU nonlinear activation, and batch normalization to expedite training. The final output layer is computed by replacing dilated convolutions with \(1\times 1\) non-dilated convolutions. These single-pixel filters result in a linear combination, with learnable weights, of all pixels at a single position among the previous layers. Overall, MSDNets have a much simpler architecture than the aforementioned U-Net design. As a result, the DLSIA API requires only two main hyperparameters with which to govern the network architecture, namely: 1. depth \(d\): the number of network layers. 2. maximum dilation \(l_{m}\): the maximum integer dilation of the network; i.e., each layer \(d_{i}\) is assigned integer dilation \(i\mod l_{m}\). Alternatively (custom dilations), DLSIA users can manually assign specific dilations to each layer with a vector of length \(d\); e.g., cycling through dilations of size \([1,2,4,8,16]\) ten times in a network with \(d=50\). ### _Sparse Mixed-scale Convolutional Neural Networks_ Mixed-scale dense networks are designed to require a minimal number of parameters, yet the resulting networks may still be trimmed down using pruning approaches. For instance, results from the graph-based pruning method LEAN [44] demonstrate that large MSDNets can be reduced to 0.5% of their original size without sacrificing significant performance. Given the high quality in performance of pruned networks in general [45, 46, 47], it would be advantageous to be able to create _pre-pruned_ networks from scratch, aimed at producing networks that are as lean as possible with the lowest chances of overfitting. In this communication, we aim to produce this type of network by using a stochastic approach that yields random networks with configurable complexity. These sparse mixed-scale networks (SMSNets), shown in Fig. 4, are stochastically configured, both topologically with varying random connections and morphologically with convolutions of different dilations assigned to each connection. This random nature of model architectures produces additional diversity among the models, making them suitable for ensemble methods [48, 49]. Each SMSNet is produced using the following user-specified hyperparameters: 1. \(d\): number of nodes between the input (I) node and the output (O) node. 2. \(k_{\min}\), \(k_{\max}\): the global minimum and maximum number of edges per node. By default, these are set to \(1\) and \((d+1)\), respectively. Adjustments are made at the node level based on depth. 3. \(LL_{\gamma}\): the degree distribution parameter. The number of edges \(n_{j}\) at node \(j\) is a random number drawn from a distribution with density proportional to \(\exp(-\gamma n_{j})\), with \(n_{j}\in[\min(k_{\min},d-j),\min(k_{\max},d-j)]\). 4. \(LL_{\alpha}\): skip-connection distribution parameter governing the probability for an edge to be assigned between node \(i\) and node \(j\), proportional to \(\exp(-\alpha|i-j|)\). 5. \(PL_{IL}\): the probability for an edge between the input node \(I\) and any of the intermediate hidden nodes \(L\). 6. \(PL_{LO}\): the probability for an edge between an intermediate hidden node \(L\) and the output node \(O\). 7. \(IO\): boolean variable that allows edges between all channels in the input node \(I\) and the output node \(O\). Below, in Sect.
IV-B, we leverage predictions from an ensemble of several low parameter SMSNets, each with varied architectures generated stochastically and independently using the above hyperparameters available in DLSIA. Fig. 4: Schematic overview of a six-layer sparse mixed-scale network (SMSNet). Network nodes consist of the input data \(I\), six intermediate (hidden) layers \(L\), and output data \(O\). All nodes/layers are sparsely connected via convolution filters, represented by dashed, dotted, and solid lines. For sake of simplicity, connections between input-to-output (\(IO\)) channels are not shown. Fig. 3: Schematic of a three-layer mixed-scale dense network (MSDNet) with \(c_{in}\) and \(c_{out}\) number of input and output channels. Blue, green, and red solid lines represent \(3\times 3\) dilated convolutions between all possible pairs of input and intermediate layers, with different dilations assigned to each color. Black dashed lines on the bottom connecting all input and intermediate layers to the output layer represent \(1\times 1\) convolutional operators, amounting to a linear sum between individual pixels at each position among all non-output layers. ## III Utility functions In addition to custom CNN architectures, DLSIA offers a number of tools to assist in the end-to-end training process. 1. **Training Scripts:** DLSIA offers comprehensive training scripts for effortlessly loading data and customizing training instances. Researchers can easily fine-tune a range of essential parameters, including optimizer selection, learning rate, learning schedulers, gradient clipping, early stopping, and automatic mixed precision. This flexibility ensures that users can tailor their training process to the unique demands of their scientific image analysis tasks, while efficiently optimizing model performance. 2. **Custom Loss Functions:** In addition to standard classification loss functions such as the cross entropy provided by PyTorch, DLSIA provides a collection of custom loss functions designed to tackle specific challenges in scientific image analysis. The Dice loss [50] is an alternative to the cross entropy loss that measures the overlap between predicted and ground truth masks. The Focal loss [51] aids in handling imbalanced datasets by prioritizing hard-to-classify samples during training. The Tversky loss [52] offers a fine-tuned balance between false positives and false negatives, granting users more control over the desired trade-offs during training. 3. **Random Data Loaders:** In PyTorch, random data splitters are often used for creating separate training, validation, and testing datasets from a larger dataset, a crucial step in training a robust machine learning model. These tools, such as the RandomSplit function, work by randomly assigning a certain proportion of the dataset to each subset. This ensures an unbiased distribution of data points, aiding in preventing overfitting and improving the generalization capability of the model. In essence, random data splitters provide a quick and efficient method to divide datasets, paving the way for effective model training and evaluation processes. While random data splitters in PyTorch excel in scenarios with large data volumes, their effectiveness can diminish in segmentation problems with a shortage of images. This is because they operate at the image level, meaning they can't split and shuffle small data sets effectively for robust training and testing. 
To overcome this limitation, DLSIA introduces random data loaders that perform splitting at a more granular pixel level, creating randomized disjoint sets. This allows for more representative distributions of training and validation data, even in situations with limited images, leading to better model performance and generalizability. 4. **Conformal Estimation Methods:** DLSIA offers conformal estimation methods [53] enabling researchers to determine confidence intervals for their model predictions. By quantifying uncertainty in predictions, calibrated prediction sets with user-specified coverage are provided, allowing one to make informed decisions in critical applications. ## IV Applications using DLSIA We use DLSIA in the following examples to build end-to-end deep learning workflows. Section IV-A uses MSDNets and tunable U-Nets for inpainting purposes. Here, network training was performed on a single \(40\) GB capacity Nvidia A100 GPU. Additionally, the experiments in Sects. IV-B and IV-C, validating SMSNet ensembling and autoencoder latent space clustering, were performed on a single \(24\) GB memory capacity Nvidia RTX 3090 GPU, along with a 20-thread I9-10900X Intel Core CPU for loading, distributing, and receiving work calls to and from the GPU. All training was performed using the ADAM optimizer [54]. ### _Inpainting X-ray Scattering images with U-Nets and MSDNets_ Image inpainting is a restoration process that estimates the contents of missing regions within images and videos. Several machine learning (ML) approaches exist for inpainting [55, 56], chief among them being competing dual-model generative adversarial networks (GANs) [57, 58] and partial convolutional operators which augment traditional convolutional layers with adaptive kernel masking [59]. While inpainting has recently gained popularity in non-scientific communities for its ability to blindly fill in pictures of heavily masked faces, inpainting in X-ray scattering sciences is limited to only a handful of previous studies, which heavily exploit symmetry [60]. Since beamline scientists are currently using ML-based algorithms to process the large amount of data they collect [61], it is of great importance to reconstruct the missing regions to avoid introducing distortion and bias into the post-processing ML analysis. Hence, DLSIA was employed to inpaint the missing pixel information in vertical and horizontal detector gaps in X-ray scattering datasets. In this 2022 study, published in [62], ground truth information against which to train exists for the missing horizontal gap data, though gap information is entirely nonexistent for the vertical bars. To alleviate this constraint, data augmentation was performed. Outlined in Fig. 5, this augmentation process artificially introduced vertical bar gaps in new positions which contained ground truth data behind them. Fig. 5: Inpainting data augmentation process to artificially present new vertical gaps with ground truth information behind them. **a)** Input data is **b)** cropped into seven overlapping images, introducing new vertical gaps in one of four positions in the non-highlighted images. **c)** Highlighted images constitute the original input, but artificial gaps are randomly inserted in one of the four new gap positions. Two distinct CNNs, a U-Net and an MSDNet, were used for learning to inpaint the gaps. Once the data augmentation steps were complete, nearly 15,000 training images were used, of which three are shown in Fig. 5c.
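The augmentation outlined in Fig. 5 can be sketched roughly as follows. This is an illustrative NumPy sketch only; the crop width, stride, gap width, and the four candidate gap positions are placeholder values rather than the settings used in [62]:

```python
import numpy as np

def make_training_pairs(image, crop_w=256, stride=128, gap_w=16, rng=None):
    """Crop overlapping windows and mask an artificial vertical gap in each.

    Returns (masked_input, target, mask) triples; the target retains the ground
    truth behind the artificial gap so a network can be trained to inpaint it.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape
    pairs = []
    for x0 in range(0, w - crop_w + 1, stride):
        crop = image[:, x0:x0 + crop_w].copy()
        # Pick one of four candidate positions for the artificial vertical gap.
        gap_x = rng.choice([crop_w // 5, 2 * crop_w // 5,
                            3 * crop_w // 5, 4 * crop_w // 5])
        mask = np.zeros_like(crop, dtype=bool)
        mask[:, gap_x:gap_x + gap_w] = True
        masked = crop.copy()
        masked[mask] = 0.0                 # simulate the missing detector gap
        pairs.append((masked, crop, mask))
    return pairs
```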
The \(L_{1}\) loss metric, which gauges differences between gap predictions and ground truth, was chosen as the target function to minimize. The \(L_{2}\) loss was also tested but resulted in more blurring, as is consistent with previous inpainting studies [63]. Of several different network architectures tested, a depth-4 U-Net with \(\sim\)\(8.56\) million parameters and a 200-layer MSDNet with \(\sim\)\(0.18\) million parameters were the best performing networks, both achieving correlation coefficient scores of \(>0.998\) between predicted gaps and ground truth. These predictions are displayed in Fig. 6. Fig. 6: Inpainting of X-ray scattering vertical and horizontal detector gaps using U-Net and MSDNet. ### _Detecting 3d fibers in X-ray tomographic reconstructions of concrete using SMSNet ensembling_ Fiber reinforcement in concrete plays a fundamental role in enhancing the material's properties, delivering increased tensile strength, superior shrinkage control, and enhanced flex-induced crack, blast, and fire resistances [64, 65]. As concrete naturally excels under compression resistance but lags in tensile strength, fibers serve to augment this tensional weakness, ensuring the material can endure greater tensile stresses. Furthermore, fibers significantly contribute to the concrete's toughness and durability, providing heightened resistance to impact, abrasion damage, and shrinkage-related cracking. Simultaneously, the integral role of fibers in mitigating shrinkage throughout the curing process and the concrete's lifetime ensures overall enhanced longevity of the structure. Understanding the structural distribution of fibers within the concrete matrix is pivotal for comprehending the properties of the composite material and consequently the design of better concrete mixtures. Fiber distribution, orientation, and density greatly impact the overall performance of the concrete, influencing its strength, ductility, and fracture resistance. This characterization can be achieved through techniques such as X-ray tomography. However, concrete is a complex mixture comprising various components, such as cement, aggregates, and fibers. Isolating and identifying fibers within the voluminous and intricate 3D data obtained from X-ray tomography is not a straightforward task. In this context, we used a publicly available dataset and performed manual binary segmentation of select fiber sections to curate a ground truth for supervised learning. Manual segmentation using Napari software [66] consisted of the sparse and incomplete hand-annotation of only 6 fibers, comprising \(\sim\)\(245,000\) labeled pixels with a \(10:2\) background-to-foreground ratio. Hand-annotations focused on a diverse representation of cases and contrasts among the fibers and are displayed in Fig. 7a. This selection was restricted to a few locations with the focus of balancing accuracy - particularly when labeling the border between classes - and overall speed of annotation to maintain a manageable workload. The prepared data was then subjected to an ensemble of five DLSIA-instantiated sparse mixed-scale networks (SMSNets), each with a different stochastically generated architecture and approximately 45,000 parameters. The multi-network mean prediction probabilities are displayed in Fig. 7d. However, we choose to leverage the multi-network standard deviation and keep only those pixel predictions whose probability remains over \(50\%\) after subtracting a single standard deviation, pictured in Fig. 7e.
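A minimal sketch of this aggregation rule, together with the connected-component instance labeling and "dusting" described next, might look as follows. The array shapes, the voxel-count threshold, and the use of cc3d.connected_components are assumptions made for illustration, not the exact pipeline used in this work:

```python
import numpy as np
import cc3d  # external package "connected-components-3d"

def aggregate_and_label(prob_stack, min_voxels=500):
    """prob_stack: (n_networks, Z, Y, X) per-network fiber probability maps."""
    mean = prob_stack.mean(axis=0)
    std = prob_stack.std(axis=0)
    # Keep only voxels whose probability stays above 50% after subtracting one std.
    fiber_mask = (mean - std) > 0.5
    # Instance labeling via 3D connected components, then drop ("dust") small blobs.
    labels = cc3d.connected_components(fiber_mask, connectivity=26)
    counts = np.bincount(labels.ravel())
    labels[np.isin(labels, np.flatnonzero(counts < min_voxels))] = 0
    return labels
```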
Fig. 7: Ensemble network predictions of fibers in concrete. **a.** Sparse binary labeling of target fibers (cyan) and background (brown). **b.** Aggregated network predictions. **c.** Cross sectional slice of raw training data. **d.** Probability map of aggregated network predictions. **e.** Probability map with standard deviations subtracted. **f.** Cross sectional view of instance segmented fibers derived from **e.** A subsequent analysis using the external Python package cc3d [67] involved 3d instance segmentation using a decision tree-augmented 3d variant of connected components [68]. Additionally, cc3d allowed for the removal of small connected components - a so-called "dusting" - below some user-defined threshold. Both a histogram of the end-to-end length of the instance segmented fibers and a Hammer projection [69] of the autocorrelation function of the segmented labels over the surface of an origin-centered 30-pixel sphere - essentially measuring the directional distribution of the segmented fibers - are shown in Fig. 8, providing critical insights into the morphology and organization of the segmented fibers that can be used to understand, predict, or design properties of fiber-reinforced concrete. ### _Shape clustering via autoencoder latent space_ We present the results of our clustering approach on the highly compressed autoencoder latent space using synthetic data consisting of \(64\times 64\) tiles, each containing one of four random shapes (circle, triangle, rectangle, and annulus) that are randomly sized and rotated by a random degree around their centers. We apply a 4-layer, 16-base-channel autoencoder that bottlenecks to a \(16\times 1\) sized latent space (or feature space) to reconstruct the input data, optimized on the mean square error loss. To assess the quality of our model reconstruction, we compute the Pearson cross-correlation of the reconstructions against the original images, which yielded a score of approximately 0.98. Once the model is sufficiently trained, we pass new images through the trained autoencoder to obtain their \(16\times 1\) latent space representation. To visualize and analyze the clustering behavior, we further compress the latent space down to two real numbers using U-Map [70], allowing us to generate meaningful scatter plots in Cartesian coordinates. As illustrated in Fig. 9, our approach exhibits clear, distinct clustering results between each of the four shapes. Moreover, the approach handles the variations in shape orientation and size remarkably well, with clear transitions between each shape's size and orientation within each cluster. ## V Discussion and Conclusions We introduce DLSIA (Deep Learning for Scientific Image Analysis), a Python-based deep learning convolutional neural network library aimed at bringing a new level of user-customizability to researchers and their image analysis tasks. Offering simplified network construction, multiple proven network architectures, and an array of tunable training parameters, DLSIA provides a versatile platform allowing users to explore diverse network settings. DLSIA-instantiated networks and workflows were validated through three separate applications: 1) semantic segmentation of fibers in X-ray tomographic reconstruction of concrete data using an ensemble of sparse mixed-scale networks (SMSNets), 2) inpainting of missing gap information in X-ray scattering data using U-Nets and mixed-scale dense networks (MSDNets), and 3) investigation into clustering autoencoder latent space on synthetic shapes data.
## Acknowledgments The above algorithms are implemented in a set of python3 routines, and are available upon request. Additionally, DLSIA modules for custom MSDNet, autoencoder, and U-Net instantiation for segmentation purposes are available within MLEexchange [71, 72]. MLExchange is a DOE-funded, web-based collaborative platform offering accessible machine learning tools for beamline scientists, including a custom API for managing jobs and workflows, adaptive GUI components for labeling data and customizing models, and trained model registration and distillation [73]. We gratefully acknowledge the support of this work by the Laboratory Directed Research and Development Program of Lawrence Berkeley National Laboratory under U.S. Department of Energy Contract No. DE-AC02-05CH11231. Further support originated from the Center for Advanced Mathematics in Energy Research Applications funded via the Advanced Scientific Computing Research and the Basic Energy Sciences programs, which are supported by the Office of Science of the US Department of Energy (DOE) under Contract DE-AC02-05CH11231, and from the National Institute Of General Medical Sciences of the National Institutes of Health (NIH) under Award 5R21GM129649-02. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The inpainting study was performed and partially supported by the US DOE, Office of Science, Office of Basic Energy Sciences Data, Artificial Intelligence and Machine Learning at the DOE Scientific User Facilities program under award No. 107514. Fig. 8: Summary statistics of fiber segmentation predictions. Displayed are **a.** histogram plot of fiber lengths and **b.** equal-area Hammer projection of the autocorrelation function of the 3D segmentation results at a radius of 30 pixels from the origin showing a general anisotropic distribution of the direction of the fibers. Fig. 9: Autoencoder latent space representation, further compressed by U-Map, of randomly sized and oriented shapes.
2306.15589
Quadratic embedding constants of graphs: Bounds and distance spectra
The quadratic embedding constant (QEC) of a finite, simple, connected graph $G$ is the maximum of the quadratic form of the distance matrix of $G$ on the subset of the unit sphere orthogonal to the all-ones vector. The study of these QECs was motivated by the classical work of Schoenberg on quadratic embedding of metric spaces [Ann. of Math., 1935] and [Trans. Amer. Math. Soc., 1938]. In this article, we provide sharp upper and lower bounds for the QEC of trees. We next explore the relation between distance spectra and quadratic embedding constants of graphs - and show two further results: $(i)$ We show that the quadratic embedding constant of a graph is zero if and only if its second largest distance eigenvalue is zero. $(ii)$ We identify a new subclass of nonsingular graphs whose QEC is the second largest distance eigenvalue. Finally, we show that the QEC of the cluster of an arbitrary graph $G$ with either a complete or star graph can be computed in terms of the QEC of $G$. As an application of this result, we provide new families of examples of graphs of QE class.
Projesh Nath Choudhury, Raju Nandi
2023-06-27T16:12:59Z
http://arxiv.org/abs/2306.15589v1
# Quadratic embedding constants of graphs: bounds and distance spectra ###### Abstract. The quadratic embedding constant (QEC) of a finite, simple, connected graph \(G\) is the maximum of the quadratic form of the distance matrix of \(G\) on the subset of the unit sphere orthogonal to the all-ones vector. The study of these QECs was motivated by the classical work of Schoenberg on quadratic embedding of metric spaces [_Ann. of Math._, 1935] and [_Trans. Amer. Math. Soc._, 1938]. In this article, we provide sharp upper and lower bounds for the QEC of trees. We next explore the relation between distance spectra and quadratic embedding constants of graphs - and show two further results: \((i)\) We show that the quadratic embedding constant of a graph is zero if and only if its second largest distance eigenvalue is zero. \((ii)\) We identify a new subclass of nonsingular graphs whose QEC is the second largest distance eigenvalue. Finally, we show that the QEC of the cluster of an arbitrary graph \(G\) with either a complete or star graph can be computed in terms of the QEC of \(G\). As an application of this result, we provide new families of examples of graphs of QE class. Key words and phrases:Distance matrix, distance spectra of graphs, euclidean distance geometry, quadratic embedding constant, tree, double star, cluster of graphs 2 Motivated by Schoenberg's result, the authors in [17] introduced the notion of _quadratic embedding constant_ of \(G\) denoted and defined as \[\text{QEC}(G):=\max\{\langle f,D_{G}f\rangle;\ f\in C(V),\ \langle f,f\rangle=1,\ \langle e,f\rangle=0\}.\] In particular, Schoenberg's result says a graph \(G\) is of QE class if and only if \(\text{QEC}(G)\leq 0\). \(\text{QEC}(G)\) has since been studied by several authors [3, 11, 12, 13, 17]. For instance, the quadratic embedding constants of \(K_{n}\), \(C_{n}\) and complete multipartite graphs are given in [14, 17]. E.g., \(\text{QEC}(K_{m,m})=m-2\); in particular \(K_{m,m}\) is not of QE class for \(m\geq 3\). We now consider trees. For a path graph \(P_{n}\) on \(n\) vertices, the authors in [13] derived a lower and upper bound for \(\text{QEC}(P_{n})\). Mlotkowski [12] recently extended this work by providing the exact value: \(\text{QEC}(P_{n})=\frac{-1}{1+\cos(\pi/n)}\). In [17], authors studied quadratic embedding of graphs and proved that any tree \(T\) is of QE class. Also, from [12, 17], \(\text{QEC}(P_{n})<\text{QEC}(K_{1,n-1})=-2/n\), for \(n\geq 4\). Now our first main result shows that the quadratic embedding constants of any tree on \(n\geq 4\) vertices lies between \(\text{QEC}(P_{n})\) and \(\text{QEC}(K_{1,n-1})\). **Theorem 1.2**.: _Let \(T_{n}\) be a tree on \(n\geq 4\) vertices. Then \(\text{QEC}(P_{n})\leq\text{QEC}(T_{n})\leq\text{QEC}(K_{1,n-1})\). Moreover, the left side equality holds if and only if \(T_{n}=P_{n}\) while the right hand equality holds if and only if \(T_{n}=K_{1,n-1}\)._ **Remark 1.3**.: _In [13], the authors showed that for any tree \(T_{n}\) on \(n\geq 3\) vertices_ \[\text{QEC}(T_{n})\leq-\frac{2}{2n-3}, \tag{1.1}\] _and this inequality is strict for \(n=4,5\) (see [17, Section 5]). A natural problem, which is mentioned in [3], is to find tight upper and lower bound for \(\text{QEC}(T_{n})\), where \(T_{n}\) runs over all trees with \(n\) vertices. By [17, Theorem 2.8], \(\text{QEC}(K_{1,n-1})=-\frac{2}{n}<-\frac{2}{2n-3}\). 
Hence our first main result answers this question._ An interesting topic in spectral graph theory is exploring the relation between distance spectra and the quadratic embedding constants of graphs. The distance matrix of a connected graph is an irreducible, nonnegative, symmetric matrix. By the Perron-Frobenius Theorem, its largest eigenvalue is simple. By the min-max principle for eigenvalues of symmetric matrices [10, Theorem 4.2.6], \[\lambda_{2}(G)\leq\text{QEC}(G)<\lambda_{1}(G). \tag{1.2}\] This leads us to a natural question that has been pursued in the literature: _Characterize the family of graphs that satisfy \(\lambda_{2}(G)=QEC(G)\)._ It is well known that this equality holds for distance regular graphs and transmission regular graphs [3]. In [17], the author showed that \(\lambda_{2}(G)=QEC(G)\) holds for path graphs with an even number of vertices. In this article, we show that this holds for unicyclic graphs with an even cycle, cacti with at least one even cycle, linear hexagonal chains and wheel graphs with odd vertices. In fact, we prove a stronger result: **Theorem 1.4**.: _Let \(G\) be a graph on \(n\geq 2\) vertices. Then \(\lambda_{2}(G)=0\) if and only if \(\text{QEC}(G)=0\)._ **Remark 1.5**.: _By the above theorem, any graph \(G\) with the property that either the second largest distance eigenvalue is zero or \(\operatorname{QEC}(G)=0\), satisfies \(\lambda_{2}(G)=\operatorname{QEC}(G)=0\). Unicyclic graphs with an even cycle, cacti with at least one even cycle, linear hexagonal chains, and a plethora of other graphs in [1] have the second largest eigenvalue zero. Hence, for all those graph classes, \(\operatorname{QEC}(G)\) is the second largest eigenvalue of \(D_{G}\)._ In 1977, Graham-Pollak [7] proved that \(D_{T}\) for any tree \(T\) is nonsingular and \(\det(D_{T})\) depends only on the number of vertices of \(T\). Thus, a path graph with even number of vertices provides an example of a graph with a nonsingular distance matrix and the quadratic embedding constant as a distance eigenvalue. Our next main result gives a new subclass of trees with the quadratic embedding constant as the second largest distance eigenvalue. **Theorem 1.6**.: _Let \(S_{n,n}\) be a double star graph (see Definition 3.3). Then_ \[\operatorname{QEC}(S_{n,n})=\lambda_{2}(S_{n,n})=\frac{-(n+2)+\sqrt{(n+2)^{2}- 8}}{2}.\] _Moreover, in the definition of \(\operatorname{QEC}(S_{n,n})\), the maximum is attained at a vector \(f\in\mathbb{R}^{2n}\) of the form \(\frac{g}{\|g\|}\), where_ \[g=\begin{pmatrix}y\\ -y\end{pmatrix},\ y\in\mathbb{R}^{n}\ \text{and}\ y_{i}=\left\{\begin{array}{cl} \frac{n+\sqrt{(n+2)^{2}-8}}{2},&\text{if}\ i=1,\\ -1,&\text{otherwise.}\end{array}\right. \tag{1.3}\] We, now turn our focus to investigating the quadratic embedding constant of \(G\) using graph operations. It can be challenging to determine the quadratic embedding constant of a graph sometimes, but if we factorize the given \(G\) into smaller graphs whose quadratic embedding constants are known, \(\operatorname{QEC}(G)\) can then be represented in terms of the quadratic embedding constant of the smaller graphs. Thus, a natural question arises: Can the quadratic embedding constant of \(G=G_{1}\#G_{2}\) be computed using \(\operatorname{QEC}(G_{1})\) and \(\operatorname{QEC}(G_{2})\), where \(\#\) denotes some graph operation? 
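For a quick numerical check of such values, note that \(\operatorname{QEC}(G)\) is the largest eigenvalue of \(D_{G}\) compressed to the hyperplane orthogonal to the all-ones vector \(e\). The following NumPy sketch is illustrative only; it reproduces, e.g., \(\operatorname{QEC}(P_{3})=\operatorname{QEC}(K_{1,2})=-2/3\) and \(\operatorname{QEC}(K_{2,2})=0\):

```python
import numpy as np

def qec(D):
    """QEC(G): max of <f, D f> over unit vectors f with <e, f> = 0,
    i.e. the top eigenvalue of D restricted to the hyperplane orthogonal to e."""
    n = D.shape[0]
    # Orthonormal basis of {f : <e, f> = 0}: QR-factor of [e | e_1 ... e_{n-1}].
    A = np.column_stack([np.ones(n), np.eye(n)[:, :-1]])
    Q, _ = np.linalg.qr(A)
    V = Q[:, 1:]                       # columns span the hyperplane orthogonal to e
    return np.linalg.eigvalsh(V.T @ D @ V).max()

D_P3 = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], float)    # path P_3 = star K_{1,2}
D_C4 = np.array([[0, 1, 2, 1], [1, 0, 1, 2],
                 [2, 1, 0, 1], [1, 2, 1, 0]], float)          # 4-cycle = K_{2,2}
print(qec(D_P3))   # approx -0.6667  (= -2/3)
print(qec(D_C4))   # approx  0.0     (= m - 2 with m = 2)
```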
The quadratic embedding constant of \(G\) using several graph operations, e.g., Cartesian product [17], star product [3, 13], lexicographic product and joining [11] of graphs studied in the literature. In this article, we study the cluster \(G_{1}\{G_{2}\}\) of two simple, connected graphs \(G_{1}\) and \(G_{2}\) (See Definition 4.1). Our final result provides \(\operatorname{QEC}(G\{K_{n}\})\) and \(\operatorname{QEC}(G\{K_{1,n-1}\})\), in terms of the quadratic embedding constant of the base graph \(G\). **Theorem 1.7**.: _Let \(n\geq 2\) be an integer and \(G\) be an arbitrary graph. Then_ * _For a complete graph_ \(K_{n}\) _on_ \(n\geq 2\) _vertices_ \[\operatorname{QEC}(G\{K_{n}\})=\frac{(n\operatorname{QEC}(G)-n)+\sqrt{(n \operatorname{QEC}(G)-n)^{2}+4n\operatorname{QEC}(G)}}{2}.\] * _For a star graph_ \(K_{1,n-1}\) _on_ \(n\geq 2\) _vertices_ \[\operatorname{QEC}(G\{K_{1,n-1}\})=\frac{(n\operatorname{QEC}(G)-2)+\sqrt{(n \operatorname{QEC}(G)-2)^{2}+8\operatorname{QEC}(G)}}{2}.\] As applications of this result: (a) We generate new families of examples of graphs of QE class. (b) We obtain the quadratic embedding constant of a simple corona graph. **Organization of the paper:** The remaining sections are devoted to proving our main results above. In section 2, we recall the quadratic embedding constants of a complete bipartite graph and of the star product of graphs, and prove Theorem 1.2. In section 3, we prove Theorems 1.4 and 1.6. We also provide a formula for the quadratic embedding constant of a general double star tree \(S_{m,n}\) - see Theorem 3.6. In the final section, we prove Theorem 1.7, followed by several applications. ## 2. Sharp bound for the Quadratic Embedding Constant of Trees We begin by proving Theorem 1.2: for any tree \(T_{n}\) on \(n\) vertices, \(\operatorname{QEC}(T_{n})\) lies between \(\operatorname{QEC}(P_{n})\) and \(\operatorname{QEC}(K_{1,n-1})\). The proof requires a couple of preliminary results; the first gives the quadratic embedding constant of complete bipartite graphs. **Theorem 2.1**.: _[_17_, Theorem 2.8]_ _Let \(K_{m,n}\) be a complete bipartite graph on \((m+n)\) vertices. Then_ \[\operatorname{QEC}(K_{m,n})=\tfrac{2(mn-m-n)}{m+n}\quad\text{for all }\quad m\geq 1\text{, }n\geq 1.\] _In particular, for a star graph \(K_{1,n-1}\) on \(n\) vertices, \(\operatorname{QEC}(K_{1,n-1})=-\tfrac{2}{n}\)._ The next result concerns the quadratic embedding constant of the star product of two graphs. Recall that the star product of \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\) with respect to distinguished vertices \(\sigma_{1}\in V_{1}\) and \(\sigma_{2}\in V_{2}\) is also called their coalescence, is denoted by \(G_{1}\star G_{2}\) and is obtained by joining \(G_{1}\) and \(G_{2}\) at the vertices \(\sigma_{1}\) and \(\sigma_{2}\). **Proposition 2.2**.: _[_13_, Proposition 4.3]_ _Let \(G_{1}\) and \(G_{2}\) be two graphs on \(n_{1}+1\) and \(n_{2}+1\) vertices with \(Q_{1}=\operatorname{QEC}(G_{1})\) and \(Q_{2}=\operatorname{QEC}(G_{2})\) respectively. 
Let \(M=M(n_{1},n_{2};-Q_{1},-Q_{2})\) be the conditional infimum of_ \[\Phi(\alpha,x^{1},x^{2})=\sum_{i=1}^{2}(-Q_{i})\{\langle x^{i},x^{i}\rangle+ \langle e,x^{i}\rangle^{2}\},\qquad\alpha\in\mathbb{R},\ \ x^{i}\in\mathbb{R}^{n_{i}},\] _subject to_ \[\alpha^{2}+\sum_{i=1}^{2}\langle x^{i},x^{i}\rangle=1\quad\text{and}\quad \alpha+\sum_{i=1}^{2}\langle e,x^{i}\rangle=0.\] _Then, \(\operatorname{QEC}(G_{1}\star G_{2})\leq-M\)._ The third preliminary result establishes a relationship between the quadratic embedding constants of a graph and its subgraphs. **Theorem 2.3**.: _[_17_, Theorem 3.1]_ _Let \(G\) be a connected graph and \(H\) be a connected subgraph of \(G\). If \(H\) is isometrically embedded in \(G\), then \(\operatorname{QEC}(H)\leq\operatorname{QEC}(G)\)._ With these preliminaries at hand, our first main result follows. Proof of Theorem 1.2.: First, we prove the left inequality - \(\operatorname{QEC}(P_{n})\leq\operatorname{QEC}(T_{n})\). Suppose \(T_{n}\) is a tree other than the path graph \(P_{n}\). Then \(T_{n}\) has a vertex that is adjacent to at least three vertices, and the graph induced by them is a subtree \(K_{1,3}\) that is isometrically embedded in \(T_{n}\). By Theorems 2.1 and 2.3, \[-\frac{1}{2}=\operatorname{QEC}(K_{1,3})\leq\operatorname{QEC}(T_{n}). \tag{2.1}\] Also, from [3, Section 4], we have \[\operatorname{QEC}(P_{2})<QEC(P_{3})<\cdots<\operatorname{QEC}(P_{n})< \operatorname{QEC}(P_{n+1})<\cdots\longrightarrow-\frac{1}{2}. \tag{2.2}\] From equation (2.1) and (2.2), we have \(\operatorname{QEC}(P_{n})<\operatorname{QEC}(T_{n})\). Now, we prove the right inequality. Let \(T_{n}\) be a tree with \(n\) vertices distinct from the star graph \(K_{1,n-1}\). We claim that \(\operatorname{QEC}(T_{n})<\operatorname{QEC}(K_{1,n-1})\). We prove this by induction on \(n\geq 4\). In the base case, the only tree other than the star \(K_{1,3}\) on \(4\) vertices is \(P_{4}\) and \(\operatorname{QEC}(P_{4})<-\frac{1}{2}=\operatorname{QEC}(K_{1,3})\). For the induction step, let \(T_{r+1}\) be a tree on \((r+1)\) vertices other than \(K_{1,r}\), with \(r\geq 4\). Then \(T_{r+1}\) has a pendant vertex \(v\) such that the tree \(T_{r+1}-v=T_{r}\) is different from the star graph \(K_{1,r-1}\). Note that \(T_{r+1}=K_{2}\star T_{r}\). By the induction hypothesis, \[\operatorname{QEC}(T_{r})<\operatorname{QEC}(K_{1,r-1})=-\frac{2}{r}. \tag{2.3}\] Note that \(\operatorname{QEC}(K_{2})=-1\). By Proposition 2.2, \[\operatorname{QEC}(T_{r+1})=\operatorname{QEC}(K_{2}\star T_{r}) \leq-M(1,r-1;1,-\operatorname{QEC}(T_{r})). \tag{2.4}\] Since \(-\operatorname{QEC}(T_{r})>\frac{2}{r}\), by [13, Theorem 3.5 and Proposition 2.3], \[M(1,r-1;1,\frac{2}{r})<M(1,r-1;1,-\operatorname{QEC}(T_{r})). \tag{2.5}\] By [13, Theorem 3.5], \(M(1,r-1;1,\frac{2}{r})\) is the minimal solution of \[\frac{1}{1\cdot 1+1-\lambda}+\frac{r-1}{\frac{2}{r}\cdot(r-1)+\frac{2}{r}- \lambda}=\frac{1}{\lambda},\] and a straightforward calculation shows that \(M(1,r-1;1,\frac{2}{r})=\lambda=\frac{2}{r+1}\). Using (2.4) and (2.5), we get \[\operatorname{QEC}(T_{r+1})<-M(1,r-1;1,\frac{2}{r})=-\frac{2}{r+1}= \operatorname{QEC}(K_{1,r}).\] This completes the induction step. ## 3. Quadratic Embedding Constants and distance spectra of graphs We next prove Theorem 1.4. The proof employs the following preliminary result that computes the quadratic embedding constant using Lagrange multipliers. **Proposition 3.1**.: _[_17_, Proposition 4.1]_ _Let \(G=(V,E)\) be a graph on \(n\geq 3\) vertices. 
Identifying \(C(V)\) with \(\mathbb{R}^{n}\), let \(\mathcal{S}(D_{G})\) be the set of all stationary points \((f,\lambda,\mu)\in\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}\) of_ \[\phi(f,\lambda,\mu)=\langle f,D_{G}f\rangle-\lambda(\langle f,f\rangle-1)-\mu \langle e,f\rangle,\] _or equivalently, \((f,\lambda,\mu)\in\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}\) satisfying the system of the following three equations_ \[(D_{G}-\lambda I)f=\tfrac{\mu}{2}\ e\text{, \ \ \ }\langle f,f\rangle=1\ \text{ and \ }\langle e,f\rangle=0.\] _Then \(\mathcal{S}(D_{G})\) is nonempty and_ \[\operatorname{QEC}(G)=\max\{\lambda:(f,\lambda,\mu)\in\mathcal{S}(D_{G})\}.\] Proof of Theorem 1.4.: Suppose that \(\operatorname{QEC}(G)=0\). Then \(\lambda_{2}(G)\leq 0\) by (1.2). We claim that \(\lambda_{2}(G)=0\). Indeed, suppose \(\lambda_{2}(G)\neq 0\). Then \(D_{G}\) is a nonsingular matrix by (1.2). By Proposition 3.1, there exist \(f\in\mathbb{R}^{n}\) and \(\mu\in\mathbb{R}\) such that \[(D_{G}-0\cdot I)f=\tfrac{\mu}{2}\ e,\ \ \ \ \langle f,f\rangle=1\ \ \text{ and }\ \ \langle e,f\rangle=0.\] Since \(D_{G}\) is nonsingular, \(f=\tfrac{\mu}{2}D_{G}^{-1}e\) and \(\tfrac{\mu^{2}}{4}\langle D_{G}^{-1}e,D_{G}^{-1}e\rangle=1\). Thus \[\mu\neq 0\text{ and }\langle e,D_{G}^{-1}e\rangle=0. \tag{3.1}\] By [2, Lemma 8.3] and using (3.1), we have \(\det(D_{G}+J)=\det(D_{G})\). Let \(\beta_{1}\geq\cdots\geq\beta_{n}\) be the eigenvalues of \(D_{G}+J\). Since \(D_{G}\) is symmetric, by [10, Corollary 4.3.12], \(\lambda_{i}(G)\leq\beta_{i}\) for all \(1\leq i\leq n\). Let \(x>0\) be the Perron vector of \(D_{G}\). By the Perron-Frobenius Theorem, any eigenvector corresponding to \(\lambda_{1}(G)\) is a constant multiple of \(x\). Let \(y\) be an arbitrary eigenvector corresponding to \(\lambda_{1}(G)\). Then \[D_{G}y=\lambda_{1}(G)y\ \text{ and }\ \sum_{i=1}^{n}y_{i}\neq 0.\] Since \(Jy\neq 0\), again by [10, Corollary 4.3.12], \(\lambda_{1}(G)<\beta_{1}\). Thus \[\det(D_{G})=\prod_{i=1}^{n}\lambda_{i}(G)\neq\prod_{i=1}^{n}\beta_{i}=\det(D_ {G}+J),\] a contradiction. Hence \(\lambda_{2}(G)\) must be zero. To prove the converse, let \(\lambda_{2}(G)=0\). Then \(\operatorname{QEC}(G)\geq 0\). If \(\operatorname{QEC}(G)=0\), then we are done. Suppose that \(\operatorname{QEC}(G)>0\). Since \(0=\lambda_{2}(G)<\operatorname{QEC}(G)<\lambda_{1}(G)\), the matrix \(D_{G}-\operatorname{QEC}(G)I\) is invertible. Again, by Proposition 3.1, there exist \(f\in\mathbb{R}^{n}\) and \(\mu\in\mathbb{R}\) such that \[(D_{G}-\operatorname{QEC}(G)I)f=\tfrac{\mu}{2}\ e,\ \ \ \ \langle f,f\rangle=1\ \text{ and }\ \ \langle e,f\rangle=0,\] and so \(\det(D_{G}-\operatorname{QEC}(G)I+J)=\det(D_{G}-\operatorname{QEC}(G)I)\). The same argument can be adapted, as in the first half of the proof, to show that \(\lambda_{i}(G)\leq\beta_{i}\) for all \(2\leq i\leq n\) and \(\lambda_{1}(G)<\beta_{1}\). Hence \[\det(D_{G}-\operatorname{QEC}(G)I)=\prod_{i=1}^{n}(\lambda_{i}(G)-\operatorname {QEC}(G))\neq\prod_{i=1}^{n}(\beta_{i}-\operatorname{QEC}(G))=\det(D_{G}- \operatorname{QEC}(G)I+J),\] a contradiction. Thus \(\operatorname{QEC}(G)=0\). As a consequence, we characterize all singular graphs of QE class. **Corollary 3.2**.: _Let \(G\) be a graph of QE class. Then \(D_{G}\) is singular if and only if \(\operatorname{QEC}(G)=0\)._ Proof.: Suppose that \(G\) is a graph of QE class. By Schoenberg's Theorem 1.1, \(\operatorname{QEC}(G)\leq 0\). 
Since \(\lambda_{2}(G)\leq\operatorname{QEC}(G)<\lambda_{1}(G)\), \(\det(D_{G})=0\) if and only if \(\lambda_{2}(G)=0\). Hence the result follows by Theorem 1.4. Next, we will look at graphs with nonsingular distance matrices. In general, the quadratic embedding constant is not a distance eigenvalue for every graph with a nonsingular distance matrix, e.g., complete bipartite graphs \(K_{m,n}\) (\(m\neq n\)), path graphs with an odd number of vertices, star graphs, etc. In Theorem 1.6, we identify a subclass of trees, the double star graphs for which the QEC is an eigenvalue of the distance matrix. To prove Theorem 1.6, we first recall the definition of the double star graph. **Definition 3.3**.: _A double star graph is a tree denoted by \(S_{m,n}\), obtained by adding an edge between the center vertices of the star graphs \(K_{1,m-1}\) and \(K_{1,n-1}\)._ **Example 3.4**.: _Example of a double star graph on \(8\) vertices._ Proof of Theorem 1.6.: Note that \(S_{n,n}=K_{2}\{K_{1,n-1}\}\) (see Definition 4.1). By Theorem 1.7, \[\text{QEC}(S_{n,n})=\frac{-(n+2)+\sqrt{(n+2)^{2}-8}}{2}.\] Next, we show that \(\text{QEC}(S_{n,n})\) is an eigenvalue of \(D_{S_{n,n}}\). Let \(\{1,2,\ldots n,n+1,\ldots,2n\}\) be the vertex set of \(S_{n,n}\) such that one \(K_{1,n-1}\) is induced by \(\{1,2,\ldots,n\}\) with the vertex \(1\) as its centre and the other \(K_{1,n-1}\) is induced by vertices \(\{n+1,n+2,\ldots,2n\}\) with the centre vertex \(n+1\). Then \(D_{S_{n,n}}=\begin{pmatrix}P&Q\\ Q&P\end{pmatrix}\), where \(P=(p_{ij}),Q=(q_{ij})\in\mathbb{R}^{n\times n}\) are two symmetric matrices of the form \[p_{ij}=\left\{\begin{array}{ll}0,&\text{if }i=j,\\ 1,&\text{if }i=1\text{ and }2\leq j\leq n,\\ 2,&\text{otherwise.}\end{array}\right.\quad\text{and}\quad\ q_{ij}=\left\{ \begin{array}{ll}1,&\text{if }i=j=1,\\ 2,&\text{if }i=1\text{ and }2\leq j\leq n,\\ 3,&\text{otherwise.}\end{array}\right.\] Note that \[q_{ij}-p_{ij}=\left\{\begin{array}{ll}3,&\text{if }i=j\text{ and }2\leq i\leq n,\\ 1,&\text{otherwise.}\end{array}\right.\] Define the vectors \(g\) and \(y\) as in (1.3). Then \[(Q-P)y=\frac{-(n+2)+\sqrt{(n+2)^{2}-8}}{2}\Big{(}-\frac{n+\sqrt{(n+2)^{2}-8}} {2},1,1,\ldots,1\Big{)}^{T}\] and \[D_{S_{n,n}}g=\begin{pmatrix}(P-Q)y\\ (Q-P)y\end{pmatrix}=\frac{-(n+2)+\sqrt{(n+2)^{2}-8}}{2}\begin{pmatrix}y\\ -y\end{pmatrix}=\text{QEC}(S_{n,n})\ g.\] Hence \(\text{QEC}(S_{n,n})\) is an eigenvalue of \(D_{S_{n,n}}\). Since \(\lambda_{2}(S_{n,n})\leq\text{QEC}(S_{n,n})<\lambda_{1}(S_{n,n})\), we have \[\lambda_{2}(S_{n,n})=\text{QEC}(S_{n,n})\text{ and }\left\langle\frac{g}{ \parallel g\parallel},D_{S_{n,n}}\frac{g}{\parallel g\parallel}\right\rangle= \text{QEC}(S_{n,n}).\qed\] In the last part of this section, we provide the quadratic embedding constant of a general double star graph \(S_{m,n}\). Note that \(\text{QEC}(S_{m,n})\) is the maximum of \(\langle f,D_{S_{m,n}}f\rangle\) over the uncountable set of all vectors \(f\) in the unit sphere orthogonal to \(e\). Our next result avoid this rout and compute \(\text{QEC}(S_{m,n})\) via the largest root of a polynomial of degree \(3\). The proof requires a basic result that gives us a lower bound for the quadratic embedding constant of any connected graph. Figure 1. \(S_{3,5}\) **Proposition 3.5**.: _[_11_]_ _Let \(G\) be a graph on \(n\) vertices. Then, \(\operatorname{QEC}(G)\geq-1\). 
Moreover, equality holds if and only if \(G\) is a complete graph \(K_{n}\)._ From the above result and Proposition 3.1, to calculate the quadratic embedding constant of a graph that is not complete, it is sufficient to take the maximum over \(\lambda>-1\) such that \((f,\lambda,\mu)\) is a stationary point of \(\phi(f,\lambda,\mu)\). **Theorem 3.6**.: _Let \(m,n\geq 1\) be integers. Then \(\operatorname{QEC}(S_{m+1,n+1})=-2+2t\), where \(t\) is the largest root of the polynomial_ \[(m+n+2)t^{3}-(m+n+1-mn)t^{2}-2mnt+mn=0.\] Proof.: Let \(\{1,2,\ldots,m+1,m+2,\ldots,m+n+2\}\) be the vertex set of \(S_{m+1,n+1}\) such that \(K_{1,m}\) is induced by \(\{1,2,\ldots,m,m+n+1\}\) with the vertex \(m+n+1\) as its centre and \(K_{1,n}\) is induced by vertices \(\{m+1,m+2,\ldots,m+n,m+n+2\}\) with the centre vertex \(m+n+2\). Then, the distance matrix of \(S_{m+1,n+1}\) is an \((m+n+2)\times(m+n+2)\) matrix of the form \[\begin{pmatrix}A&P&Q\\ P^{T}&B&R\\ Q^{T}&R^{T}&C\end{pmatrix},\] where \(A=2(J-I)\in\mathbb{R}^{m\times m}\), \(B=2(J-I)\in\mathbb{R}^{n\times n}\), \(C=(J-I)\in\mathbb{R}^{2\times 2}\), \(P=3J\in\mathbb{R}^{m\times n}\), \[Q=\begin{pmatrix}1&2\\ 1&2\\ \vdots&\vdots\\ 1&2\end{pmatrix}\in\mathbb{R}^{m\times 2}\quad\text{ and }\quad R=\begin{pmatrix}2&1\\ 2&1\\ \vdots&\vdots\\ 2&1\end{pmatrix}\in\mathbb{R}^{n\times 2}.\] Let \(f=(x^{T},y^{T},z^{T})^{T}\in\mathbb{R}^{(m+n+2)}\), where \(x\in\mathbb{R}^{m}\), \(y\in\mathbb{R}^{n}\) and \(z\in\mathbb{R}^{2}\) such that \(\langle e,f\rangle=0\) and \(\langle f,f\rangle=1\). Then a straightforward but longwinded calculation shows that \[\langle f,D_{S_{m+1,n+1}}f\rangle =x^{T}Ax+y^{T}By+z^{T}Cz+2x^{T}Py+2x^{T}Qz+2y^{T}Rz\] \[=2\Big{\{}\Big{(}\sum_{i=1}^{m}x_{i}\Big{)}^{2}-\sum_{i=1}^{m}x_ {i}^{2}\Big{\}}+2\Big{\{}\Big{(}\sum_{j=1}^{n}y_{j}\Big{)}^{2}-\sum_{j=1}^{n} y_{j}^{2}\Big{\}}+\Big{(}\sum_{k=1}^{2}z_{k}\Big{)}^{2}-\sum_{k=1}^{2}z_{k}^{2}\] \[+6\Big{(}\sum_{i=1}^{m}x_{i}\Big{)}\Big{(}\sum_{j=1}^{n}y_{j} \Big{)}+2(z_{1}+2z_{2})\Big{(}\sum_{i=1}^{m}x_{i}\Big{)}+2(2z_{1}+z_{2})\Big{(} \sum_{j=1}^{n}y_{j}\Big{)}\] \[=-2+2\Big{\{}z_{2}^{2}-\Big{(}\sum_{i=1}^{m}x_{i}\Big{)}^{2}-2z_{ 1}\Big{(}\sum_{i=1}^{m}x_{i}\Big{)}\Big{\}}.\] Let \(\Psi(x,y,z)=\Psi(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n},z_{1},z_{2}):=z_{2}^{2} -\Big{(}\sum_{i=1}^{m}x_{i}\Big{)}^{2}-2z_{1}\Big{(}\sum_{i=1}^{m}x_{i}\Big{)}\). Then there exists a symmetric matrix \(M\) such that \(\Psi(x,y,z)=\langle f,Mf\rangle\). Thus \[\operatorname{QEC}(S_{m+1,n+1}) =\max\{\langle f,D_{S_{m+1,n+1}}f\rangle;\ f\in\mathbb{R}^{(m+n+2 )\times(m+n+2)},\ \langle f,f\rangle=1,\ \langle e,f\rangle=0\}\] \[=-2+2\ \max\{\langle f,Mf\rangle;\ f=(x,y,z)^{T},\ \langle f,f\rangle=1,\ \langle e,f\rangle=0\}.\] We use the Lagrange multiplier method to solve the above maximization problem. Set \[F(f,\lambda,\mu):=\langle f,Mf\rangle-\lambda(\langle f,f\rangle-1)-\mu\langle e, f\rangle.\] Then all the stationary points of \(F(f,\lambda,\mu)\) satisfy \((M-\lambda I)f=\frac{\mu}{2}\), \(\langle f,f\rangle=1\) and \(\langle e,f\rangle=0\). Since \(K_{1,m+1}\) is isometrically embedded in \(S_{m+1,n+1}\), it suffices to consider \(\lambda\in\mathbb{R}\) with \(\frac{m+1}{m+2}\leq\lambda<1\). 
By solving \((M-\lambda I)f=\frac{\mu}{2}\), we get \[x_{i} =\frac{\mu}{2}\cdot\frac{\lambda-1}{m-\lambda(m+\lambda)},\ \ 1\leq i \leq m,\] \[y_{j} =-\frac{\mu}{2\lambda},\ \ 1\leq j\leq n,\] \[z_{1} =\frac{\mu}{2}\cdot\frac{\lambda}{m-\lambda(m+\lambda)},\] \[z_{2} =\frac{\mu}{2-2\lambda}.\] Using \(\langle f,f\rangle=1\) and \(\langle e,f\rangle=0\), we have \(p(\lambda)=(m+n+2)\lambda^{3}-(m+n+1-mn)\lambda^{2}-2mn\lambda+mn=0\). So, \[\text{QEC}(S_{m+1,n+1}) =-2+2\ \max\{\langle f,Mf\rangle;\ f=(x,y,z)^{T},\ \langle f,f\rangle=1,\ \langle e,f\rangle=0\}\] \[=-2+2\ \max\{\lambda;\ (M-\lambda I)f=\frac{\mu}{2},\ \langle f,f \rangle=1,\ \langle e,f\rangle=0\}\] \[=-2+2\ \max\{\lambda;\ p(\lambda)=0\}.\] Next, we claim that the roots of \(p(\lambda)=0\) are real. Since \(p(0)=mn>0\), \(p(1)=1>0\), and \(p(\frac{m+n}{m+n+2})=-\Big{(}\frac{m-n}{m+n+2}\Big{)}^{2}\leq 0\), \(p(\lambda)\) has at least one positive root. By Descartes' rule of sign, \(p(\lambda)\) has two positive and one negative root. This completes the proof. ## 4. Quadratic Embedding Constant of Cluster of Graphs In this final section, we prove theorem 1.7 - deriving the quadratic embedding constants of the clusters of an arbitrary graph with \(K_{n}\) and \(K_{1,n-1}\). First we recall the definition of cluster of two graphs in general. **Definition 4.1**.: _[_20_]_ _Let \(G_{1}\) and \(G_{2}\) be two graphs and \(V(G_{1})\) be the vertex set of \(G_{1}\). The cluster \(G_{1}\{G_{2}\}\) is obtained by taking one copy of \(G_{1}\) and \(|V(G_{1})|\) copies of a rooted graph \(G_{2}\), and by identifying the root \((\)should be same for each copy of \(G_{2})\) of the i-th copy of \(G_{2}\) with the i-th vertex of \(G_{1}\), \(i=1,2,\ldots,|V(G_{1})|\)._ **Example 4.2**.: _Example of a cluster graph with \(G_{2}=K_{3}\):_ Cluster graphs have important applications in Chemistry - all composite molecules consisting of some amalgamation over a central submolecule can be understood as generalized cluster graphs. For instance, they can be used to understand some issues in metal-metal interaction in some molecules, since a cluster graph structure can be easily found. In [20], the Kirchhoff index formulae for cluster graphs are presented in terms of the pieces. To prove Theorem 1.7, we need a basic result. The following lemma provides two examples of monotonically increasing functions that will be crucial in proving Theorem 1.7. We only prove the monotonicity of the first function, and that of the second function may be shown similarly. **Lemma 4.3**.: _Let \(Y,Z:\mathbb{R}\to\mathbb{R}\) be two real-valued functions defined by_ \[\text{(i)}\ Y(b):=\tfrac{(nb-n)+\sqrt{(nb-n)^{2}+4nb}}{2}\ \ \ \ \text{ and }\ \ \ \ (\text{ii)}\ Z(b):=\tfrac{(nb-2)+\sqrt{(nb-2)^{2}+8b}}{2},\] _where \(n\geq 2\) is a natural number. Then \(Y\) and \(Z\) are monotonically increasing functions._ Proof.: (i) First, note that the term inside the square root in \(Y(b)\) is always positive. For \(b\geq 0\), it is obvious, while for \(b<0\), \[(nb-n)^{2}+4nb=(nb)^{2}+n^{2}+2nb(2-n)>0.\] Let \(F_{b}:=\sqrt{(b-1)^{2}+\tfrac{4b}{n}}\), where \(b\in\mathbb{R}\). Then \(F_{b}>0\) and \[(b-1)^{2}+\frac{4b}{n}=\Big{(}b-1+\frac{2}{n}\Big{)}^{2}+(\frac{4}{n}-\frac{4 }{n^{2}})>\Big{(}b-1+\frac{2}{n}\Big{)}^{2}=\Big{(}1-b-\frac{2}{n}\Big{)}^{2},\] which implies \(F_{b}>(1-b-\frac{2}{n})\). 
Now, for \(b\geq c\), \[Y(b)-Y(c) =\frac{(nb-n)+\sqrt{(nb-n)^{2}+4nb}}{2}-\frac{(nc-n)+\sqrt{(nc-n) ^{2}+4nc}}{2}\] \[=\frac{n}{2}\Big{\{}(b-c)+\frac{(b-1)^{2}+\tfrac{4b}{n}-(c-1)^{2} -\tfrac{4c}{n}}{\sqrt{(b-1)^{2}+\tfrac{4b}{n}}+\sqrt{(c-1)^{2}+\tfrac{4c}{n}}} \Big{\}}\] \[=\frac{n}{2}\Big{(}\frac{b-c}{F_{b}+F_{c}}\Big{)}(F_{b}+F_{c}+b+ c-2+\frac{4}{n})\geq 0.\] This concludes our proof. With Lemma 4.3 and Proposition 3.5 in hand, we now prove Theorem 1.7. Figure 2. Proof of Theorem 1.7.: (i) For the sake of notional convenience, set \(H:=G\{K_{n}\}\). Let the vertex set of \(G\) be \(V=\{1,2,\ldots,m\}\) and the vertex set of the \(i\)-th copy of \(K_{n}\) be \(V_{i}=\{i,i+m,i+2m\ldots,i+(n-1)m\}\), \(1\leq i\leq m\) (see the labeling of vertices in Figure 2). If \(\tilde{V}\) is the vertex set of \(H\), then \(\tilde{V}=\cup_{i=1}^{m}V_{i}\). Then, \(D_{H}\) can be written as \[\begin{pmatrix}D_{G}&D_{G}+J&D_{G}+J&\ldots&D_{G}+J\\ D_{G}+J&D_{G}+2(J-I)&D_{G}+2J-I&\ldots&D_{G}+2J-I\\ D_{G}+J&D_{G}+2J-I&D_{G}+2(J-I)&\ldots&D_{G}+2J-I\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ D_{G}+J&D_{G}+2J-I&D_{G}+2J-I&\ldots&D_{G}+2(J-I)\end{pmatrix}.\] Let \(S(D_{H})\) be the set of all \((f,\lambda,\mu)\in(C(\tilde{V})\cong\mathbb{R}^{mn})\times\mathbb{R}\times \mathbb{R}\) satisfying \[(D_{H}-\lambda I)f =\frac{\mu}{2}\ e, \tag{4.1}\] \[\langle f,f\rangle =1,\] (4.2) \[\langle e,f\rangle =0. \tag{4.3}\] By Proposition 3.1, \(\text{QEC}(H)=\max\{\lambda:(f,\lambda,\mu)\in S(D_{H})\}\). Suppose \(f=(x^{1T},x^{2T},\ldots,x^{nT})^{T}\), where \(x^{i}\in\mathbb{R}^{m}\) for all \(1\leq i\leq n\). Using (4.3), we have \(\sum_{i=1}^{n}\langle e,x^{i}\rangle=0\) which in turn gives \[J(\sum_{i=1}^{n}x^{i})=\sum_{i=1}^{n}\langle e,x^{i}\rangle e=0. \tag{4.4}\] From (4.1), we obtain a system of \(n\) equations. Using (4.4) and subtracting the \(i\)-th equation from the first equation for \(2\leq i\leq n\), we get the following system of \((n-1)\) equations: \[(\lambda+2)x^{2}+x^{3}+\cdots+x^{n}=\lambda x^{1},\] \[x^{2}+(\lambda+2)x^{3}+\cdots+x^{n}=\lambda x^{1},\] \[\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\] \[x^{2}+x^{3}+\cdots+(\lambda+2)x^{n}=\lambda x^{1}.\] By solving the above system of linear equations, we get \[x^{2}=x^{3}=\cdots=x^{n}=\frac{\lambda}{\lambda+n}x^{1}. \tag{4.5}\] From (4.4) and (4.5), \(\frac{n\lambda+n}{\lambda+n}Jx^{1}=0\) which implies \(Jx^{1}=0\), because \(\lambda>-1\) (by the discussion immediately following Proposition 3.5). Using the aforementioned observations, we can rewrite equations [4.1-4.3] as follows: \[\Big{\{}\Big{(}1+(n-1)\frac{\lambda}{\lambda+n}\Big{)}D_{G}- \lambda I\Big{\}}x^{1} = \frac{\mu}{2}\ e \tag{4.6}\] \[\langle x^{1},x^{1}\rangle = \frac{1}{1+(n-1)\Big{(}\frac{\lambda}{\lambda+n}\Big{)}^{2}}\] (4.7) \[\langle e,x^{1}\rangle = 0. \tag{4.8}\] On the other hand, to calculate \(\operatorname{QEC}(G)\), consider the set \(S(D_{G})\) of all the stationary points \((x,a,\gamma)\in(C(V)\cong\mathbb{R}^{m})\times\mathbb{R}\times\mathbb{R}\) satisfying \[(D_{G}-aI)x=\frac{\eta}{2}\ e,\ \ \ \ \langle x,x\rangle=1,\ \ \ \ \langle e,x\rangle=0. 
\tag{4.9}\] By looking at equations [4.6-4.9], the following relations \[\lambda=\frac{(na-n)+\sqrt{(na-n)^{2}+4na}}{2}\] \[x^{1}=\frac{1}{\sqrt{1+(n-1)\Big{(}\frac{\lambda}{\lambda+n} \Big{)}^{2}}}\ x=\frac{\lambda+n}{\sqrt{(\lambda+n)^{2}+(n-1)\lambda^{2}}}\ x,\] \[\mu=\frac{1+(n-1)\Big{(}\frac{\lambda}{\lambda+n}\Big{)}}{\sqrt{ 1+(n-1)\Big{(}\frac{\lambda}{\lambda+n}\Big{)}^{2}}}\ \eta=\frac{n(\lambda+1)}{\sqrt{(\lambda+n)^{2}+(n-1)\lambda^{2}}}\ \eta,\] give us a one to one correspondence between \(S(D_{G})\) and \(S(D_{H})\). So, \[\operatorname{QEC}(H) =\max\{\lambda:({({x^{1}}^{T},{x^{2}}^{T},\ldots,{x^{nT}})^{T}, \lambda,\mu})\in S(D_{H})\}\] \[=\max\Big{\{}\frac{(na-n)+\sqrt{(na-n)^{2}+4na}}{2}:(x,a,\eta)\in S (D_{G})\Big{\}}.\] Now \(a\leq\operatorname{QEC}(G)\), and in particular for \(\operatorname{QEC}(G)\), there exist \(\tilde{x}\in\mathbb{R}^{m}\) and \(\tilde{\eta}\in\mathbb{R}\) such that \((\tilde{x},\operatorname{QEC}(G),\tilde{\eta})\in S(D_{G})\). Thus \[\frac{(n\operatorname{QEC}(G)-n)+\sqrt{(n\operatorname{QEC}(G)-n)^{2}+4n \operatorname{QEC}(G)}}{2}\leq\operatorname{QEC}(H). \tag{4.10}\] Also, by Lemma 4.3, \(a\leq Q\) implies \[\operatorname{QEC}(H)\leq\frac{(n\operatorname{QEC}(G)-n)+\sqrt{(n \operatorname{QEC}(G)-n)^{2}+4n\operatorname{QEC}(G)}}{2}. \tag{4.11}\] Hence, the conclusion follows from (4.10) and (4.11). (ii) Set \(L:=G\{K_{1,n-1}\}\). Label the vertices of \(L\) as in the preceding part of this proof. Then \[D_{L}=\begin{pmatrix}D_{G}&D_{G}+J&D_{G}+J&\ldots&D_{G}+J\\ D_{G}+J&D_{G}+2(J-I)&D_{G}+2J&\ldots&D_{G}+2J\\ D_{G}+J&D_{G}+2J&D_{G}+2(J-I)&\ldots&D_{G}+2J\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ D_{G}+J&D_{G}+2J&D_{G}+2J&\ldots&D_{G}+2(J-I)\end{pmatrix}.\] Let \(S(D_{L})\) be the set of all stationary points \((f,\lambda,\mu)\in\mathbb{R}^{mn}\times\mathbb{R}\times\mathbb{R}\) satisfying \[(D_{L}-\lambda I)f=\frac{\mu}{2}\ e,\ \ \ \ \langle f,f\rangle=1,\ \ \ \ \langle e,f\rangle=0, \tag{4.12}\] and let \(S(D_{G})\) be the set of all stationary points \((x,a,\eta)\in\mathbb{R}^{m}\times\mathbb{R}\times\mathbb{R}\) satisfying \[(D_{G}-aI)x=\frac{\eta}{2}\ e,\ \ \ \ \langle x,x\rangle=1,\ \ \ \ \langle e,x\rangle=0. \tag{4.13}\] Here, note that \(K_{1,n}\) is isometrically embedded in \(L\). So, \(-\frac{2}{n+1}\leq\lambda\). Thus, it suffices to consider all stationary points in \(S(D_{L})\) with \(-\frac{2}{n}<\lambda\). By a similar argument as in the first part of the proof, we can show that (4.12) reduces to the following equations: \[\Big{\{}\Big{(}1+(n-1)\frac{\lambda}{\lambda+2}\Big{)}D_{G}-\lambda I \Big{\}}x^{1} = \frac{\mu}{2}\ e\] \[\langle x^{1},x^{1}\rangle = \frac{1}{1+(n-1)\Big{(}\frac{\lambda}{\lambda+2}\Big{)}^{2}}\] \[\langle e,x^{1}\rangle = 0,\] where \(f=({x^{1}}^{T},{x^{2}}^{T},\ldots,{x^{n}}^{T})^{T}\), \(x^{i}\in\mathbb{R}^{m}\). \[\mathrm{QEC}(L) =\max\{\lambda:(({x^{1}}^{T},{x^{2}}^{T},\ldots,{x^{nT}})^{T}, \lambda,\mu)\in S(D_{L})\}\] \[=\max\Big{\{}\frac{(na-2)+\sqrt{(na-2)^{2}+8a}}{2}:(x,a,\eta)\in S (D_{G})\Big{\}}.\] The remainder of the proof is same as the last part of the above proof, except for the use of the second part of Lemma 4.3. **Remark 4.4**.: _Since \(\mathrm{QEC}(K_{n})=-1\) and \(\mathrm{QEC}(K_{1,n-1})=-\frac{2}{n}\), using Theorem 1.7, one can verify that \(\mathrm{QEC}(K_{1,n-1}\{K_{m}\})=\mathrm{QEC}(K_{m}\{K_{1,n-1}\})\) if and only if \(m=n\)._ We conclude by discussing a couple of applications of Theorem 1.7. Our first application provides novel families of examples of graphs of QE class. 
**Corollary 4.5**.: _If \(G\) is a graph of QE class, then the graphs \(G\{K_{n}\}\) and \(G\{K_{1,n-1}\}\) are of QE class._ Proof.: If \(G\) is a graph of QE class then \(\mathrm{QEC}(G)\leq 0\). By Lemma 4.3, \[\frac{(n\,\mathrm{QEC}(G)-n)+\sqrt{(n\,\mathrm{QEC}(G)-n)^{2}+4n \,\mathrm{QEC}(G)}}{2} \leq\frac{-n+\sqrt{n^{2}}}{2}\] \[\Rightarrow\mathrm{QEC}(G\{K_{n}\}) \leq 0;\] and \[\frac{(n\,\mathrm{QEC}(G)-2)+\sqrt{(n\,\mathrm{QEC}(G)-2)^{2}+8\, \mathrm{QEC}(G)}}{2} \leq\frac{-2+2}{2}\] \[\Rightarrow\mathrm{QEC}(G\{K_{1,n-1}\}) \leq 0.\] Thus \(G\{K_{n}\}\) and \(G\{K_{1,n-1}\}\) are of QE class. The next application provides the quadratic embedding constant of a simple corona graph. **Definition 4.6**.: _Let \(G_{1}\) and \(G_{2}\) be two graphs defined on disjoint sets of \(m\) and \(n\) vertices, respectively. The corona \(G_{1}\circ G_{2}\) of \(G_{1}\) and \(G_{2}\) is defined as the graph obtained by taking one copy of \(G_{1}\) and \(m\) copies of \(G_{2}\), and then joining the \(i\)th vertex of \(G_{1}\) to every vertex in the \(i\)th copy of \(G_{2}\). If \(G_{2}=K_{1}\), then the corona \(G\circ K_{1}\) is called a simple corona graph._ **Corollary 4.7**.: _Let \(G\circ K_{1}\) be a simple corona graph. Then,_ \[\mathrm{QEC}(G\circ K_{1})=\mathrm{QEC}(G)-1+\sqrt{1+(\mathrm{QEC}(G))^{2}}.\] Observe that for any tree \(T\), \(T\{K_{1,n-1}\}\) is again a tree. As a final application of Theorem 1.7, we give a better lower and upper bound for this class of trees than Theorem 1.2. **Corollary 4.8**.: _Let \(T\) be a tree on \(m\) vertices. Then_ \[\operatorname{QEC}(P_{mn})<\operatorname{QEC}(P_{m}\{K_{1,n-1}\})\leq \operatorname{QEC}(T\{K_{1,n-1}\})\leq\operatorname{QEC}(K_{1,m-1}\{K_{1,n-1 }\})<\operatorname{QEC}(K_{1,mn-1}).\] Proof.: Note that the number of vertices in \(T\{K_{1,n-1}\}\) is \(mn\). By Theorems 1.2, 1.7 and Lemma 4.3, \[\operatorname{QEC}(P_{m}\{K_{1,n-1}\})\leq\operatorname{QEC}(T\{K_{1,n-1}\}) \leq\operatorname{QEC}(K_{1,m-1}\{K_{1,n-1}\})\] Again, by Theorem 1.2, \[\operatorname{QEC}(P_{mn})<\operatorname{QEC}(P_{m}\{K_{1,n-1}\})\quad\text{ and}\quad\operatorname{QEC}(K_{1,m-1}\{K_{1,n-1}\})<\operatorname{QEC}(K_{1,mn-1}). \tag{4.14}\] This concludes the proof. ## Acknowledgments We thank Apoorva Khare for a detailed reading of an earlier draft and for providing valuable feedback. P.N. Choudhury was partially supported by INSPIRE Faculty Fellowship research grant DST/INSPIRE/04/2021/002620 (DST, Govt. of India), and IIT Gandhinagar Internal Project: IP/IITGN/MATH/PNC/2223/25. R. Nandi was supported by IIT Gandhinagar Post-Doctoral Fellowship IP/IITGN/MATH/PC/2223/20.
2310.08558
Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate Exploration Bias
It is desirable for policies to optimistically explore new states and behaviors during online reinforcement learning (RL) or fine-tuning, especially when prior offline data does not provide enough state coverage. However, exploration bonuses can bias the learned policy, and our experiments find that naive, yet standard use of such bonuses can fail to recover a performant policy. Concurrently, pessimistic training in offline RL has enabled recovery of performant policies from static datasets. Can we leverage offline RL to recover better policies from online interaction? We make a simple observation that a policy can be trained from scratch on all interaction data with pessimistic objectives, thereby decoupling the policies used for data collection and for evaluation. Specifically, we propose offline retraining, a policy extraction step at the end of online fine-tuning in our Offline-to-Online-to-Offline (OOO) framework for reinforcement learning (RL). An optimistic (exploration) policy is used to interact with the environment, and a separate pessimistic (exploitation) policy is trained on all the observed data for evaluation. Such decoupling can reduce any bias from online interaction (intrinsic rewards, primacy bias) in the evaluation policy, and can allow more exploratory behaviors during online interaction which in turn can generate better data for exploitation. OOO is complementary to several offline-to-online RL and online RL methods, and improves their average performance by 14% to 26% in our fine-tuning experiments, achieves state-of-the-art performance on several environments in the D4RL benchmarks, and improves online RL performance by 165% on two OpenAI gym environments. Further, OOO can enable fine-tuning from incomplete offline datasets where prior methods can fail to recover a performant policy. Implementation: https://github.com/MaxSobolMark/OOO
Max Sobol Mark, Archit Sharma, Fahim Tajwar, Rafael Rafailov, Sergey Levine, Chelsea Finn
2023-10-12T17:50:09Z
http://arxiv.org/abs/2310.08558v1
# Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate Exploration Bias ###### Abstract It is desirable for policies to optimistically explore new states and behaviors during online reinforcement learning (RL) or fine-tuning, especially when prior offline data does not provide enough state coverage. However, exploration bonuses can bias the learned policy, and our experiments find that naive, yet standard use of such bonuses can fail to recover a performant policy. Concurrently, pessimistic training in offline RL has enabled recovery of performant policies from static datasets. Can we leverage offline RL to recover better policies from online interaction? We make a simple observation that a policy can be trained from scratch on all interaction data with pessimistic objectives, thereby decoupling the policies used for data collection and for evaluation. Specifically, we propose _offline retraining_, a policy extraction step at the end of online fine-tuning in our Offline-to-Online-to-Offline (OOO) framework for reinforcement learning (RL). An optimistic (_exploration_) policy is used to interact with the environment, and a _separate_ pessimistic (_exploitation_) policy is trained on all the observed data for evaluation. Such decoupling can reduce any bias from online interaction (intrinsic rewards, primacy bias) in the evaluation policy, and can allow more exploratory behaviors during online interaction which in turn can generate better data for exploitation. OOO is complementary to several offline-to-online RL and online RL methods, and improves their average performance by 14% to 26% in our fine-tuning experiments, achieves state-of-the-art performance on several environments in the D4RL benchmarks, and improves online RL performance by 165% on two OpenAI gym environments. Further, OOO can enable fine-tuning from incomplete offline datasets where prior methods can fail to recover a performant policy. The implementation can be found here: [https://github.com/MaxSobolMark/OOO](https://github.com/MaxSobolMark/OOO). ## 1 Introduction Offline reinforcement learning (Lange et al., 2012; Levine et al., 2020) provides a principled foundation for pre-training policies from previously collected data, handling challenges such as distributional shift and generalization, while online reinforcement learning (RL) is more often concerned with challenges that pertain to deciding what kind of data should be collected - i.e., exploration. The ability to reuse data from prior tasks is particularly critical for RL in domains where data acquisition is expensive, such as in robotics, healthcare, operations research, and other domains. However, reusing data from prior tasks can be challenging when the data is suboptimal and does not provide enough coverage. For example, while a robotic grasping dataset may be useful for a range of tasks, such an offline dataset is insufficient for learning how to grasp and then hammer a nail. Online reinforcement learning becomes relevant in the context of fine-tuning offline RL policies; in particular, exploration bonuses (Bellemare et al., 2016; Pathak et al., 2017; Tang et al., 2017; Burda et al., 2018) can increase the state coverage by rewarding the agent for visiting novel states. In principle, as the state coverage increases, exploration bonuses decay to zero and a performant policy is recovered.
However, we find that the typical interaction budgets often preclude exploring the state space sufficiently and indeed, in complex real-world environments, we expect the exploration budget to be incommensurate with the environment size. As a result, the novelty bonuses bias the policy towards exploratory behavior, rather than performantly completing the task. Can we recover maximally performant policies while preserving the benefits of increased state coverage from exploration bonuses? Prior works have often combined the offline and online paradigms by first pretraining on offline data and then finetuning online (Nair et al., 2020; Kostrikov et al., 2021; Nakamoto et al., 2023). However, the complementary strengths of these approaches can also be leveraged the other way, employing offline RL to _retrain policies_ collected with highly exploratory online algorithms, which themselves might have been initialized from offline data. Such an offline-to-online-to-offline training regimen provides an appealing way to decouple exploration from exploitation, which simplifies both stages, allowing the use of high optimism for state-covering exploration which is more likely to find high-reward regions but less likely to yield optimal policies, together with conservative or pessimistic policy recovery to maximally exploit the discovered high-reward behaviors. This insight forms the basis for our proposed framework, _Offline-to-Online-to-Offline_ (OOO) RL where we use an optimistic _exploration_ policy to interact with the environment, and pessimistically train an _exploitation_ policy on all the data seen thus far for evaluation, visualized in Figure 1. The offline retraining allows the exploitation policy to maximize extrinsic rewards exclusively, removing any bias introduced by the intrinsic reward bonuses. As a consequence, the exploitation policy can recover a performant policy even when the exploration policy is behaving suboptimally for the task reward in favor of further exploration. More subtly, this allows the exploration policy to search for better states and rewards. Ultimately, the exploration policy can generate better data, and the final performance is less sensitive to the balance between intrinsic and extrinsic rewards (Taiga et al., 2019; Chen et al., 2022). Concretely, we propose the OOO framework for RL, where we augment the conventional offline-to-online finetuning algorithms with an _offline retraining_ step before evaluation. OOO pre-trains an exploration policy on a combination of task rewards and exploration bonuses, and continues to optimize the combined rewards during the online fine-tuning. For evaluation at any time \(t\), OOO uses offline retraining to output a pessimistic policy on _all_ the data seen till time \(t\), including the offline data and all data collected by the exploration policy. OOO is a flexible framework that can be combined with several prior online RL algorithms, offline-to-online RL methods, and exploration bonuses. In this work, we experiment with implicit \(Q\)-learning (IQL) (Kostrikov et al., 2021) and Cal-QL (Nakamoto et al., 2023) for training the exploration and exploitation policies when prior offline data is available, and RLPD (Ball et al., 2023) for online RL. We use random network distillation (RND) (Burda et al., 2018) as the exploration bonus, though other novelty bonuses can be used in OOO. 
We evaluate OOO on fine-tuning tasks for Adroit manipulation and FrankaKitchen from the D4RL benchmark (Fu et al., 2020), improving the average performance of IQL by 26% and Cal-QL by 14%. Further, we improve the performance of RLPD by 165% on sparse locomotion tasks, and find that on challenging exploration problems, OOO can recover non-trivial performance where prior offline-to-online fine-tuning methods fail. Figure 1: When the offline data does not provide enough coverage of the state space (1a), fine-tuning can benefit from the use of exploration bonuses to more broadly cover the state space (1b). However, the policies themselves can be suboptimal for the task reward, as they optimize for a combination of task reward and exploration bonus (1c, blue trajectory). OOO trains a separate pessimistic policy to recover a performant policy for evaluation (1c, red trajectory), allowing the exploration policy to search for better states and rewards. ## 2 Related Work **Offline-to-Online Fine-Tuning.** Following successes in supervised learning, the offline pre-training (offline RL) with online fine-tuning literature has seen significant interest in recent years. Some prior works aim to propose RL algorithms that are simultaneously suitable for both offline and online RL Peng et al. (2019); Kumar et al. (2020); Nair et al. (2020); Kostrikov et al. (2021); Ghasemipour et al. (2021); Chen et al. (2021), hence making them applicable to offline-to-online fine-tuning. However, these methods may be too conservative, and indeed, we find that our approach can lead to greater performance in practice by improving exploration in the online phase. Other methods have specifically targeted the offline-to-online fine-tuning setting Lee et al. (2022); Hong et al. (2022); Nakamoto et al. (2023); Yu and Zhang (2023); Zhang et al. (2023), often by changing a regularization or pessimism term to be less conservative during the online phase, or even dispensing with offline training altogether and simply using efficient off-policy RL methods with replay buffers initialized from prior data Vecerik et al. (2017); Song et al. (2022); Ball et al. (2023). These works generally do not study the use of explicit exploration bonuses, and, more significantly, their final policy corresponds to the last iterate of online RL, as is common for standard online RL exploration methods. While our approach also addresses online exploration initialized from offline data, our main contribution lies in adding an additional _offline_ extraction step after the exploration phase, to obtain the best exploitation policy independently of the effect of exploration bonuses. **Exploration in Deep Reinforcement Learning.** Balancing exploration and exploitation is one of the cornerstones of reinforcement learning research Sutton and Barto (2018); Amin et al. (2021). There is an extensive body of work that uses extrinsic exploration bonuses coupled with reward based optimization Bellemare et al. (2016); Ostrovski et al. (2017); Tang et al. (2017); Fu et al. (2017); Pathak et al. (2017); Burda et al. (2018). One standard approach is count-based exploration Strehl and Littman (2008), which maintains statistics of the visitation counts to state-action pairs and encourages exploration of less-visited parts of the state space. Such methods enjoy certain theoretical guarantees on their performance in the tabular case Alexander L. Strehl (2005); Rashid et al.
(2020) and have been successfully scaled to high-dimensional domains, for example by replacing count-based strategies with more powerful density models Bellemare et al. (2016); Ostrovski et al. (2017). Although successful, prior works have found it hard to balance intrinsic and extrinsic rewards Taiga et al. (2019); Chen et al. (2022) in high dimensional domains in practice. OOO specifically addresses this balancing challenge by decoupling the exploration and exploitation agent. Some prior works analyze the challenge of offline RL by controlling the state distribution used for training, where the data is generated either in _tandem_ by a passive or frozen policy Ostrovski et al. (2021) or by a state-covering exploratory policy Yarats et al. (2022). Schafer et al. (2021) goes further and considers a decoupled policy learning similar to ours, sans the offline retraining, which we find is critical for successful decoupled policy learning in Section 5.3. Moreover, most exploration works primarily consider pure online RL, whereas we focus on offline-to-online fine-tuning. **Self-Supervised Exploration.** Several prior works study self-supervised exploration Eysenbach et al. (2018); Pathak et al. (2019); Sharma et al. (2019); Sekar et al. (2020); Jin et al. (2020); Zhang et al. (2020); Laskin et al. (2021); Zhang et al. (2021). These methods decouple out of necessity: because the task reward is not available during pre-training, these methods first pre-train an exploration policy without task rewards and then train an exploitation policy when the reward becomes available. We find that decoupling exploration and exploitation is still helpful even when the task reward is available throughout pre-training and fine-tuning. ## 3 Problem Setup We formulate the problem as a Markov decision process (MDP) \(\mathcal{M}\equiv(\mathcal{S},\mathcal{A},\mathcal{T},r,\rho_{0},\gamma)\), where \(\mathcal{S}\) denotes the state space, \(\mathcal{A}\) denotes the action space, \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\mapsto\mathbb{R}_{ \geq 0}\) denotes the transition dynamics, \(r:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\) denotes the reward function, \(\gamma\in[0,1)\) denotes the discount factor and \(\rho_{0}:\mathcal{S}\mapsto\mathbb{R}_{\geq 0}\) denotes the initial state distribution. For a policy \(\pi:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}_{\geq 0}\), the value function is defined as \(V^{\pi}(s)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\mid s_{ 0}=s\right]\). The optimal value function \(V^{*}=\max_{\pi}V^{\pi}\). Similarly, the state-action value function \(Q^{\pi}(s,a)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})|s_{0}=s,a_{0}=a\right]\). We are given an offline dataset of interactions \(\mathcal{D}_{\text{off}}=\{(s,a,s^{\prime},r)\}\) collected using an (unknown) behavior policy \(\pi_{\beta}\), where \(s\in\mathcal{S}\), \(a\in\mathcal{A}\) such that \(r\sim r(s,a)\) and \(s^{\prime}\sim\mathcal{T}(\cdot\mid s,a)\). Given a budget \(K\) of online interactions with \(\mathcal{M}\), the goal is to maximize \(V^{\pi_{K}}\), where \(\pi_{t}\) denotes the policy output of the algorithm after \(t\) steps of online interaction with the environment. We will have an offline dataset of interactions for majority of our experiments, but, we also experiment without an offline dataset, that is, the online RL case. 
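To make the evaluation objective concrete, the following minimal sketch (not from the paper) estimates \(V^{\pi}\) by Monte-Carlo rollouts under the initial state distribution; the goal of fine-tuning is to maximize this quantity for the policy output after the budget of \(K\) online steps. Here `env` is assumed to expose a Gymnasium-style `reset`/`step` interface and `policy` is any state-to-action callable; both names are hypothetical placeholders.

```python
import numpy as np

def estimate_value(env, policy, gamma=0.99, num_episodes=10, max_steps=1000):
    """Monte-Carlo estimate of V^pi under the initial state distribution rho_0."""
    returns = []
    for _ in range(num_episodes):
        state, _ = env.reset()                      # Gymnasium-style API assumed
        total, discount = 0.0, 1.0
        for _ in range(max_steps):
            action = policy(state)
            state, reward, terminated, truncated, _ = env.step(action)
            total += discount * reward              # accumulate discounted reward
            discount *= gamma
            if terminated or truncated:
                break
        returns.append(total)
    return float(np.mean(returns))
```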
## 4 Offline-to-Online-to-Offline Reinforcement Learning (OOO) The aim of this section is to develop a framework for learning a decoupled set of policies, one targeted towards exploration for online interaction and one targeted towards exploitation for evaluation. Such decoupling allows us to optimistically explore the environment, while removing the bias from the intrinsic rewards in the evaluation policy (see Section 5.1 for a didactic example). In general, an algorithm is not tied to using the same policy for exploring to gather data from the environment and for deployment after training has concluded. While this fact is implicitly used in most standard RL algorithms when a noisy policy is executed in the environment online, for example, \(\epsilon\)-greedy in DQN (Mnih et al., 2015), Gaussian noise in SAC (Haarnoja et al., 2018) or OU noise in DDPG (Lillicrap et al., 2015), our framework, OOO, embraces this fully by introducing an _offline retraining_ step, learning an exploitation policy for evaluation independent of the policy used for interaction with the environment (dubbed the exploration policy). First, we present the general OOO framework in Section 4.1, which can be instantiated with any RL algorithm and exploration bonus. In Section 4.2, we discuss how the decoupled policy learning framework can allow us to rebalance the data distribution for better exploitation policies, especially for hard exploration problems. Finally, in Section 4.3, we discuss how to instantiate OOO, using IQL as the base RL algorithm and RND as the exploration bonus. ### The OOO Framework Our framework consists of two policies: (a) an exploration policy to interact with the environment during fine-tuning, with the associated set of parameters denoted by \(\mathcal{V}_{\text{explore}}\), and (b) an exploitation policy used for evaluation and deployment after training, associated with parameters \(\mathcal{V}_{\text{exploit}}\). Note that the choice of parameters depends on the algorithm used to instantiate the OOO framework; the parameters are treated abstractly in this section. To effectively utilize the fine-tuning budget of \(K\) interactions, the exploration policy is pre-trained on the offline data \(\mathcal{D}_{\text{off}}\). In contrast to prior works, this update will involve some amount of optimism, for example, through an exploration bonus. We assume this is abstracted into opt_update associated with the learning algorithm. Next, the exploration policy interacts with the environment \(\mathcal{M}\) for \(K\) steps, incrementally adding to the online buffer \(\mathcal{D}_{\text{on}}\) and updating on \(\mathcal{D}_{\text{off}}\cup\mathcal{D}_{\text{on}}\). For simplicity, we assume that the same opt_update is used, though it can be different in principle. After the online interaction budget is exhausted, we introduce the _offline retraining_ step, where an exploitation policy is trained on all the interaction data \(\mathcal{D}_{\text{off}}\cup\mathcal{D}_{\text{on}}\) with some pessimistic offline RL algorithm (abstracted as pessm_update). The pessimism is necessary as we only have access to a finite dataset at the end of online interaction. The resulting meta-algorithm is presented in Algorithm 1. In theory, one can repeatedly recompute \(\mathcal{V}_{\text{exploit}}\) after every step \(t\) and output a policy \(\pi_{\text{exploit}}^{t}\), though this can be expensive in practice.
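The three phases can be summarized by the following sketch, a simplified stand-in for Algorithm 1 rather than the authors' code; `init_params`, `opt_update`, `pessm_update`, and `act` are hypothetical placeholders for the base RL algorithm's parameter initialization, its optimistic and pessimistic update rules, and action selection with the exploration policy.

```python
def ooo(env, offline_data, init_params, opt_update, pessm_update, act, K):
    """Offline-to-Online-to-Offline (OOO) training loop, schematically."""
    # Phase 1 (offline): optimistically pre-train the exploration parameters.
    explore_params = opt_update(init_params(), offline_data)

    # Phase 2 (online): collect K transitions with the exploration policy,
    # continuing to update it on the union of offline and online data.
    online_data = []
    state, _ = env.reset()                          # Gymnasium-style API assumed
    for _ in range(K):
        action = act(explore_params, state)
        next_state, reward, terminated, truncated, _ = env.step(action)
        online_data.append((state, action, reward, next_state, terminated))
        explore_params = opt_update(explore_params, offline_data + online_data)
        state = env.reset()[0] if (terminated or truncated) else next_state

    # Phase 3 (offline retraining): train a fresh exploitation policy
    # pessimistically on all observed data; this policy is used for evaluation.
    exploit_params = pessm_update(init_params(), offline_data + online_data)
    return exploit_params
```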
Prior works generally couple the parameters, that is, \(\mathcal{V}_{\text{explore}}=\mathcal{V}_{\text{exploit}}\), with the exception of perhaps adding noise when executing actions in the environment. The optimal policy in the presence of exploration bonuses coincides with the optimal task policy only when the exploration bonus goes to 0, which does not happen in a finite budget of interactions for general-purpose exploration bonuses. For example, a simple count bonus \(r_{i}(s,a)=1/\sqrt{n(s,a)}\) goes to 0 when the state-action visitation count \(n(s,a)\) goes to \(\infty\). As our experiments will show, practical exploration bonuses maintain non-trivial values throughout training and thus bias the policy performance. When learning a decoupled set of parameters, the exploitation policy can be trained exclusively on the task rewards (pessimistically) to recover a performant policy free from the bias introduced by these non-trivial exploration rewards. Note that such decoupled training is computationally more expensive, as we first train \(\mathcal{V}_{\text{explore}}\) and interact using the exploration policy, and then train a separate set of parameters \(\mathcal{V}_{\text{exploit}}\) to compute the output. However, for several practical decision making problems, computation is cheaper than online data collection, and our proposed decoupled training requires no additional interaction with the environment. ### Reweighting Data for Exploitation For sparse-reward problems with hard exploration, the data collected online can be heavily imbalanced as the sparse reward is rarely collected. Offline policy extraction can be ineffective with such imbalanced datasets (Hong et al., 2023), affecting the quality of the exploitation policy. The decoupled policy learning allows us to rebalance the data distribution when learning the exploitation policy (Hong et al., 2023; Singh et al., 2022), without affecting the data distribution or policy learning for online exploration. Specifically, let \(\mathcal{D}=\mathcal{D}_{\text{on}}\cup\mathcal{D}_{\text{off}}\) denote all the transition data from the environment, and let \(\mathcal{D}_{\text{higher}}=\{(s,a,s^{\prime},r)\in\mathcal{D}\mid r=1\}\) denote the set of transitions achieving the sparse reward. To increase the probability of training on high reward transitions in pessm_update, we upsample transitions from \(\mathcal{D}_{\text{higher}}\) by sampling transitions \((s,a,s^{\prime},r)\sim\alpha\text{Unif}(\mathcal{D}_{\text{higher}})+(1-\alpha)\text{Unif}(\mathcal{D})\) for some \(\alpha\in[0,1]\). Similar strategies for data rebalancing can be derived for dense rewards as well. ### An Example Instantiation using IQL and RND The OOO framework makes minimal assumptions on how to instantiate \(\mathcal{V}\) or opt_update and pessm_update. While we present results for several choices of base RL in Section 5, we present a detailed example with IQL as the base reinforcement learning algorithm for both opt_update and pessm_update, and RND as the exploration bonus used in opt_update. As discussed in Appendix A, the SARSA-like update in IQL avoids pessimistic \(Q\)-value estimates for states not in \(\mathcal{D}_{\text{off}}\), making it more amenable to online fine-tuning. RND can be an effective exploration bonus in large and continuous state spaces, where count-based bonuses can be hard to extend (Burda et al., 2018; Zhu et al., 2020).
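Before giving the exact instantiation, here is a minimal sketch, with illustrative names and shapes rather than the authors' code, of the two generic ingredients used below: an RND-style novelty bonus added to the task reward for the exploration policy only, and the rebalanced sampling of Section 4.2 used inside pessm_update for sparse rewards.

```python
import numpy as np
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """RND: a frozen random target f_hat and a trained predictor f_theta;
    the squared prediction error serves as the intrinsic (novelty) reward."""
    def __init__(self, state_dim, feat_dim=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
        self.target, self.predictor = mlp(), mlp()
        for p in self.target.parameters():
            p.requires_grad_(False)          # the target network stays fixed

    def bonus(self, states):
        # per-state novelty: prediction error against the frozen target
        return ((self.predictor(states) - self.target(states)) ** 2).mean(-1)

def augmented_reward(task_reward, rnd, states, coef):
    """r_tilde = r + coef * bonus, used only when updating the exploration policy."""
    with torch.no_grad():
        return task_reward + coef * rnd.bonus(states)

def sample_rebalanced(dataset_size, success_idx, batch_size, mix=0.5, seed=None):
    """Draw batch indices from mix*Unif(sparse-reward transitions) + (1-mix)*Unif(all data)."""
    rng = np.random.default_rng(seed)
    n_succ = int(round(mix * batch_size))
    succ = rng.choice(success_idx, size=n_succ, replace=True)
    rest = rng.integers(0, dataset_size, size=batch_size - n_succ)
    return np.concatenate([succ, rest])
```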
In particular, we can instantiate \(\mathcal{V}_{\text{exploit}}=\{\pi_{\text{exploit}},Q_{\text{exploit}},\hat{Q}_{\text{exploit}},V_{\text{exploit}}\}\) and \(\mathcal{V}_{\text{explore}}=\{\pi_{\text{explore}},Q_{\text{explore}},\hat{Q}_{\text{explore}},V_{\text{explore}},f_{\theta},\hat{f},\alpha\}\). The pessm_update directly follows the update equations for IQL in Appendix A. For the exploration policy, opt_update follows the same equations, except we add the intrinsic reward to the task reward, that is, we optimize \(\tilde{r}(s,a)=r(s,a)+\alpha\|f_{\theta}(s)-\hat{f}(s)\|^{2}\), where \(\alpha\) controls the trade-off between the task reward and the exploration bonus. Along with the policy parameters, the RND predictor network is updated to regress to the target network \(\hat{f}\) in opt_update. ## 5 Experiments Our experimental evaluation studies the effectiveness of decoupling exploration and exploitation on a range of tasks with varied levels of exploration. We study a didactic example in Section 5.1, motivating the need for aggressive exploration and a decoupled exploitation policy to recover a performant policy. In Section 5.2, we instantiate OOO with several recent offline-to-online RL algorithms, IQL (Kostrikov et al., 2021) and Cal-QL (Nakamoto et al., 2023), and an online RL algorithm, RLPD (Ball et al., 2023), and evaluate on several environments with emphasis on hard exploration. Finally, we conduct several ablation studies in Section 5.3 that aim to isolate the effect of decoupled training, analyze the performance of OOO in the context of the primacy bias effect, and understand the role of offline retraining for exploitation policies. The experimental details related to implementation, hyper-parameters and environment setups can be found in Appendix B. ### Illustrating the Exploration Bias: A Didactic Example We consider a somewhat exaggerated example to understand when decoupling exploration and exploitation in the spirit of the OOO framework can be effective. For this experiment, we consider a point-mass environment, shown in Figure 2, where the objective is to reach the goal location marked by the red apple, to receive a sparse reward of +1. There is a wall in the environment and the agent has to find a path to the goal that avoids the wall. The offline data in the environment demonstrates a suboptimal path that goes over the wall, but the optimal path goes under the wall. How do we learn the optimal policy that goes under the wall? When pre-training and fine-tuning using IQL, the policy only explores around the suboptimal data in the offline dataset, and thus recovers the suboptimal policy in the process. To learn a policy that takes the shorter path under the wall, the agent first needs to substantially increase state coverage. To that end, exploration bonuses can be effective. For this experiment, we consider a simple count-based bonus \(r_{i}(s,a)=c/\sqrt{n(s,a)+1}\), where \(c\) is the coefficient determining the relative weight of the intrinsic reward compared to the sparse extrinsic reward. Note that because the offline data already provides a suboptimal path, the coefficient on the intrinsic reward needs to be high to incentivize the agent to explore more broadly beyond the offline data. As shown in Figure 2c, with a coefficient of 5.0, the state coverage can drastically improve, and includes visitations to the goal through the optimal path. So, are we done? As shown in Figure 2, the policy performs worse than IQL.
Since several parts of the environment have not been explored and the weight on the intrinsic reward is large, the agent is incentivized to keep exploring the rest of the state space. And indeed, the optimal policy for \(c=5.0\) does not even seek to reach the goal. Can we still recover the optimal policy? Given the data collected by the agent thus far, it is clear that a path to the goal that goes under the wall can be recovered. However, the intrinsic rewards bias the policy towards exploring other parts of the state space. In our decoupled framework OOO, offline retraining allows the exploitation policy to maximize just the extrinsic reward, i.e. the sparse goal reaching reward, without any bias from intrinsic rewards. Trained pessimistically, the exploitation policy can indeed recover the optimal path, as shown in Figure 2. Overall, this example illustrates that offline retraining can recover a performant policy long before the exploration bonus decays to 0 when using our framework OOO. ### Offline-to-Online Fine-Tuning and Online RL Evaluation **Standard Offline-to-Online Benchmarks.** To evaluate OOO, we first consider six standard offline-to-online fine-tuning environments. Specifically, we consider three sparse reward Adroit environments where the objective is to complete manipulation tasks using a high-dimensional dexterous hand (Rajeswaran et al., 2017), and three FrankaKitchen environments from the D4RL benchmark (Fu et al., 2020), where the objective is to complete a sequence of tasks in a simulated kitchen environment with many common household objects. We instantiate OOO using two recent offline-to-online RL methods, **IQL** (Kostrikov et al., 2021) and **Cal-QL** (Nakamoto et al., 2023). Figure 2: (2a) The agent (robot) is given access to demonstrations that take sub-optimal trajectories (going over the wall) to reach the goal (apple). (2b) Visitation counts for online fine-tuning using IQL are shown (darker blue represents higher visitation count). IQL does not explore outside of the support of the offline dataset. (2c) Adding a visitation-count bonus to IQL increases the state-space coverage, and the resulting replay buffer contains trajectories that reach the goal going under the wall, but contains no optimal trajectories. While the exploration policy itself is quite suboptimal due to bias from intrinsic rewards, an exploitation policy trained offline on the same data can recover the optimal policy (2d). We use RND (Burda et al., 2018) as the exploration bonus for all environments. In addition to IQL and Cal-QL, we benchmark our method against: (1) **TD3 + RND**: The offline data is loaded into the replay buffer for TD3 (Fujimoto et al., 2018) and an RND reward is added to the reward function to aid with the exploration challenge, and (2) **PEX** (Zhang et al., 2023), which maintains two copies of the policy, one from offline pre-training and one fine-tuned online, and then adaptively composes them online. We report the results of this evaluation in Table 1. First, the results of OOO depend on the base RL algorithm itself, so we are interested in comparing the performance of OOO relative to the performance of the base algorithm itself. We find that for IQL, OOO can improve the average performance by 26.4% for the same fine-tuning budget. Particularly notable are the improvements on relocate-binary-v0 (over 165% improvement) and kitchen-complete-v0 (over 45% improvement).
For Cal-QL, we first improve the official public implementation1 by applying the same Monte-Carlo lower-bound for \(Q\)-values of uniformly sampled actions, and not just the actions sampled from the policy. We provide complete details for this improvement in Appendix B.3. With this improvement, we find that Cal-QL can nearly saturate the benchmark in the budget used by other methods, i.e., 1M steps for Adroit and 4M steps for FrankaKitchen (which are some of the hardest standard environments). With a reduced budget of 250K steps for fine-tuning on all environments, we find that OOO can improve the average Cal-QL performance by 14.3%. Overall, these results suggest OOO can further improve the performance of state-of-the-art methods by allowing effective use of exploration bonuses. Complete learning curves for typical fine-tuning budgets are provided in Figure 8, and OOO (Cal-QL) provides the best reported results to our knowledge, achieving almost 99% average success rate on the FrankaKitchen environments. Footnote 1: [https://github.com/nakamotoo/Cal-QL](https://github.com/nakamotoo/Cal-QL) **Harder Exploration Problems.** To evaluate the effectiveness of exploration bonuses, we further consider environments requiring extended exploration. First, we consider two environments from D4RL: (1) antmaze-goal-missing-large-v2 is built on top of antmaze-large-diverse-v2, and (2) maze2d-missing-data-large-v1 is built on top of maze2d-large-v1 from D4RL, and involves controlling a point mass through the same maze as above. We reduce the offline data by removing transitions in the vicinity of the goal, thus providing an exploration challenge during fine-tuning. We also consider hammer-truncated-expert-v1, built on hammer-expert-v1 (Rajeswaran et al., 2017). It consists of controlling a high-dimensional dexterous hand to pick up a hammer and hit a nail. The expert demonstrations are truncated to 20 timesteps, such that they only demonstrate grasping the hammer, but not hitting the nail, and thus have no successful trajectories. Finally, we consider standard locomotion environments from OpenAI gym (Brockman et al., 2016), specifically sparse reward ant-sparse-v2 and halfcheetah-sparse-v2 from (Rengarajan et al., 2022). The latter environments are purely online, i.e. there is no offline data. We again instantiate OOO with IQL and Cal-QL for these experiments, and compare against IQL, Cal-QL, PEX and TD3+RND. For the online RL environments, we instantiate OOO with RLPD (Ball et al., 2023), a recent state-of-the-art online RL algorithm. For these environments, we find two changes to be necessary: an increased weight on the exploration bonus, and upweighting transitions with a reward of 1 as described in Section 4.2. We find in Table 2 that OOO instantiated with IQL substantially outperforms all other methods. In particular, OOO (IQL) can learn a goal-reaching policy with non-trivial success rate on antmaze-goal-missing-large-v2 when IQL completely fails to do so.
We found IQL to be better suited for such hard exploration than Cal-QL as the base algorithm, though OOO improves the performance of Cal-QL as well. \begin{table} \begin{tabular}{c|l|c c|c c|c c} \hline \hline Domain & Task & TD3 + RND & PEX & Cal-QL \# 250k & OOO (Cal-QL) \# 250k & IQL & OOO (IQL) \\ \hline \multirow{3}{*}{Adroit} & relocate-binary-v0 & 1 (0.8) & 0 (0) & 11.67 (4.4) & 25.6 (10) & 23 (12.9) & 61 (6) \\ & door-binary-v0 & 2 (1.5) & 0 (0) & 87.17 (6.5) & 90.8 (3.7) & 84 (5.6) & 94 (2.6) \\ & pen-binary-v0 & 59 (18) & 6 (2.2) & 74 (15.6) & 96.4 (1.5) & 97 (1.1) & 97 (1.3) \\ \hline \multirow{3}{*}{FrankaKitchen} & kitchen-partial-v0 & 0 (0) & 77 (6.5) & 85 (4.9) & 71.6 (1.1) & 85 (10.4) & 99 (1.1) \\ & kitchen-mixed-v0 & 0 (0) & 41 (5) & 70.5 (3.6) & 69 (2) & 63 (5.3) & 85 (6.6) \\ & kitchen-complete-v0 & 0 (0.3) & 91 (5.1) & 65.25 (2.1) & 96.5 (0.4) & 48 (2.4) & 70 (0.8) \\ \hline \hline - & average & 10.3 & 35.8 & 65.6 & **75 [+14.3\%]** & 66.7 & **84.3 [+26.4\%]** \\ \hline \hline \end{tabular} \end{table} Table 1: Normalized returns after 1M fine-tuning steps for Adroit and 4M steps for FrankaKitchen (unless noted otherwise). Mean and standard error across 5 seeds are reported. The improvement percentage for average performance is relative to the base RL algorithm. For performance on the locomotion environments in Table 3, not only does OOO improve the performance of RLPD, which fails to see successes, but it improves over the performance of RLPD + RND by 165%. Interestingly, OOO (RLPD) is able to recover a performant policy for some seeds where the exploration policy for RLPD + RND fails to reach the goal at all, resembling the scenario discussed in Section 5.1, suggesting that OOO can improve the learning stability. We further analyze this in Section 5.3. Overall, the large improvements in these environments suggest that offline retraining can be even more effective when substantial exploration is required. ### Empirical Analysis **Intrinsic rewards do not vanish during training.** Figure 3 shows the average intrinsic reward the agent has received over the last thousand steps on antmaze-goal-missing-large-v2 and relocate-binary-v0. Notice that in neither case do the intrinsic rewards converge to zero, and in the case of antmaze-goal-missing-large-v2 they increase over time. Since the intrinsic rewards do not vanish, this implies that agents that continue optimizing for intrinsic rewards on these environments will continue dealing with the exploration/exploitation trade-off even when there is enough high-quality data in the replay buffer available to train a strong policy. **What explains the improved performance in OOO RL?** There are a few possible hypotheses for the performance improvements from using OOO. _Hypothesis 1: The increased state coverage induces a favorable policy, and sufficiently overcomes the bias from non-zero intrinsic rewards_. We compare the performance of IQL, IQL + RND and OOO (IQL) on some interesting cases in Figure 4 (comparison on all environments is in Figure 12), where OOO trains the exploitation policy exactly on the data collected online by IQL + RND. While the use of RND can improve the performance in several cases, OOO can improve performance even when IQL + RND does not improve the performance, or even hurts the performance (for example, relocate).
So while the increased state coverage can eventually be used to learn a good exploitation policy, as OOO does, the policy learned in the process does not necessarily utilize the increased state coverage. _Hypothesis 2: Mitigating primacy bias explains the improved performance_. Recent works suggest that reinitializing \(Q\)-value networks during training can mitigate the primacy bias (Nikishin et al., 2022), where \(Q\)-value networks that have lost plasticity by overfitting to initial data lead to suboptimal policies. Figure 3 (_right_) shows a comparison between OOO and training an exploitation policy from scratch while still using the intrinsic rewards the exploration agent uses at that timestep (OOO + Frozen RND). The RND reward predictor isn't trained further while training the exploitation policy. Mitigating primacy bias does not sufficiently explain the improvement in performance, as (OOO + Frozen RND) substantially underperforms OOO. These ablations suggest that mitigating the exploration bias by removing the intrinsic reward when training the exploitation policy leads to improved performance under OOO. [Table 2 (normalized returns after 500K fine-tuning steps; mean and standard error over 5 seeds) and Table 3 (goal-reaching success rate after 1.5M fine-tuning steps for AntMaze; mean and standard error over 5 seeds) are garbled in the source extraction and their entries are omitted.] Figure 3: (_left_) Intrinsic RND rewards do not decay to zero over the course of online fine-tuning for antmaze-goal-missing-large-v2 and relocate-binary-v0. Surprisingly, intrinsic rewards increase over time for the AntMaze environment, increasing the bias in the policy performance. (_right_) The improvements in policy performance are not explained by the primacy bias phenomenon. **Exploitation requires pessimistic learning.** Do exploitation policies need to be trained with a pessimistic algorithm? Yarats et al. (2022) suggest that given sufficient exploration and state coverage, standard RL can recover performant policies without pessimism. Indeed, prior work on decoupled policy learning trains the exploitation policy using standard RL (Schafer et al., 2021). To test this, we train the exploitation policy using TD3 (Fujimoto et al., 2018) instead of IQL in Figure 5. We find that standard RL fails to recover a performant exploitation policy despite having exactly identical data to OOO, as the \(Q\)-values explode when using TD3 (_right_). Using RND does increase state coverage, but for large environments such as those evaluated in our experiments, pessimism in offline retraining is a critical component for learning a performant policy.
Figure 4: Comparing the performance of the policy learned with and without policy decoupling over the course of online fine-tuning. While the RND exploration bonus is critical for increasing the state coverage, OOO greatly improves the performance by training a separate exploitation policy and mitigating the bias from non-zero intrinsic rewards. Figure 5: We evaluate if we need pessimistic training for the exploitation policy. 5a and 5b compare the performance when IQL is used to train the exploitation policy in OOO, with TD3 to train the exploitation policy, on antmaze-goal-missing-large and relocate-binary-v0. TD3 fails to recover a useful policy on both environments, despite having the same data as IQL. 5c shows the average Q-values on each batch through the training of the exploitation policy. The explosion of \(Q\)-values explains the poor performance of TD3, and justifies the use of a conservative algorithm for training the exploitation policy. ## 6 Conclusion We present OOO RL, a simple framework for reinforcement learning that enables effective policy extraction for online RL and offline-to-online fine-tuning by leveraging offline retraining to mitigate the bias from intrinsic rewards. Our key insight is that exploration bonuses do not vanish during online reinforcement learning, especially not by the end of the small budgets of online fine-tuning. As a result, existing approaches that incorporate exploration bonuses learn a final policy that can be biased towards being too exploratory. We propose a simple solution, which is to decouple the final policy from the exploration policy, and train a separate set of parameters using standard offline RL techniques on all interaction data available. This decoupling also allows us to more aggressively incorporate exploration bonuses, thereby improving both the coverage of the online data and the final policy used for evaluation. Our experiments verify that our approach significantly improves several prior approaches in a broad range of scenarios, especially in sparse reward tasks that necessitate exploration, or when the offline dataset does not provide sufficient coverage to learn a performant policy. As noted in Section 4.1, training an exploitation policy from scratch can be computationally expensive. Further, the evaluation performance can be sensitive to exploitation hyper-parameters. While some guidelines for offline policy selection exist (Kumar et al., 2020), better offline model selection can further improve the performance of OOO. ## 7 Reproducibility Statement We provide details of the experimental setup, implementation, and hyper-parameters in Appendix B. We experiment with open-source simulation benchmarks, details of which are provided in Section 5 and Appendix B.
2301.06234
Regularity results for mixed local and nonlocal double phase functionals
We investigate the De Giorgi-Nash-Moser theory for minimizers of mixed local and nonlocal functionals modeled after \[ v \mapsto \int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\dfrac{|v(x)-v(y)|^{p}}{|x-y|^{n+sp}}\,dxdy+\int_{\Omega}a(x)|Dv|^{q}\,dx, \] where $0<s<1<p \le q$ and $a(\cdot) \ge 0$. In particular, we prove H\"older regularity and Harnack's inequality under possibly sharp assumptions on $s,p,q$ and $a(\cdot)$.
Sun-Sig Byun, Ho-Sik Lee, Kyeong Song
2023-01-16T02:19:02Z
http://arxiv.org/abs/2301.06234v1
# Regularity results for mixed local and nonlocal double phase functionals ###### Abstract. We investigate the De Giorgi-Nash-Moser theory for minimizers of mixed local and nonlocal functionals modeled after \[v\mapsto\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|v(x)-v(y)|^{p}}{|x-y|^{n+sp}}\,dxdy+\int_{\Omega}a(x)|Dv|^{q}\,dx,\] where \(0<s<1<p\leq q\) and \(a(\cdot)\geq 0\). In particular, we prove Holder regularity and Harnack's inequality under possibly sharp assumptions on \(s,p,q\) and \(a(\cdot)\). Key words and phrases: Mixed local and nonlocal functionals, Double phase, Local boundedness, Holder continuity, Harnack's inequality. 2020 Mathematics Subject Classification: Primary: 49N60; Secondary: 35R11, 47G20, 35B65, 35R05. S. Byun was supported by NRF-2022R1A2C1009312. H. Lee was supported by NRF-2020R1C1C1A01013363. K. Song was supported by NRF-2021R1A4A1027378. ## 1. Introduction In this paper, we study regularity properties of minimizers of mixed local and nonlocal double phase functionals whose model is \[\mathcal{E}_{0}(v;\Omega)\coloneqq\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|v(x)-v(y)|^{p}}{|x-y|^{n+sp}}\,dxdy+\int_{\Omega}a(x)|Dv|^{q}\,dx, \tag{1.1}\] where the exponents satisfy \[0<s<1<p\leq q \tag{1.2}\] and the modulating coefficient \(a(\cdot)\) is nonnegative. [The remainder of the original introduction, including assumption (1.3) on \(a(\cdot)\) and the discussion of related literature, is garbled in this extraction and omitted; it concludes by observing that one cannot directly compare (1.1) with a local problem as in [26].] We thus develop a different method motivated by the ones in [1, 3, 23], whose crucial tools include the expansion of positivity results described in Lemmas 4.2 and 4.3 below. For fractional \(p\)-Laplacian type problems, analogous results are proved in [23, Lemma 6.3], but their proofs are not directly applicable to our double phase setting. To overcome this difficulty, we first make use of the local boundedness of minimizers and the Holder continuity of \(a(\cdot)\) to establish an improved Caccioppoli estimate given in Lemma 4.1 below. Then we prove Lemma 4.2 by considering two alternatives, say "the nonlocal phase" and "the mixed phase". Moreover, for Harnack's inequality, we take into account the case that \(sp>n\) as well, where optimal assumptions on \(s,p,q\) and \(a(\cdot)\) accordingly change. We also describe the precise nonlocal contribution of minimizers to our results via nonlocal tails. ### Assumptions and main results We actually consider a general functional of the type \[\mathcal{E}(u;\Omega)\coloneqq\iint_{\mathcal{C}_{\Omega}}|u(x)-u(y)|^{p}K_{sp}(x,y)\,dxdy+\int_{\Omega}a(x)F(x,Du)\,dx,\] where \(F:\Omega\times\mathbb{R}^{n}\to\mathbb{R}\) is a Caratheodory function such that \[\Lambda^{-1}|\xi|^{q}\leq F(x,\xi)\leq\Lambda|\xi|^{q} \tag{1.4}\] for some \(\Lambda>1\), and \(K_{sp}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) is a symmetric kernel with order \((s,p)\); i.e., it is a measurable function satisfying \[\frac{\Lambda^{-1}}{|x-y|^{n+sp}}\leq K_{sp}(x,y)=K_{sp}(y,x)\leq\frac{\Lambda}{|x-y|^{n+sp}} \tag{1.5}\] for a.e. \((x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\). Let us introduce relevant function spaces which will be used throughout the paper. We denote by \[\mathcal{A}(\Omega)\coloneqq\left\{v:\mathbb{R}^{n}\to\mathbb{R}\ \Big{|}\ v|_{\Omega}\in L^{p}(\Omega)\ \text{and}\ \mathcal{E}_{0}(v;\Omega)<\infty\right\}, \tag{1.6}\] and \[L^{p-1}_{sp}(\mathbb{R}^{n})\coloneqq\left\{v:\mathbb{R}^{n}\to\mathbb{R}\ \Big{|}\int_{\mathbb{R}^{n}}\frac{|v(x)|^{p-1}}{(1+|x|)^{n+sp}}\,dx<\infty\right\}. \tag{1.7}\] Then we define minimizers of the functional \(\mathcal{E}\) as follows.
**Definition 1.1**.: We say that \(u\in\mathcal{A}(\Omega)\) is a minimizer of \(\mathcal{E}\) if \[\mathcal{E}(u;\Omega)\leq\mathcal{E}(v;\Omega) \tag{1.8}\] for any measurable function \(v:\mathbb{R}^{n}\to\mathbb{R}\) with \(v=u\) a.e. in \(\mathbb{R}^{n}\setminus\Omega\). The first theorem is about the local boundedness of minimizers in the case that \(sp\leq n\). **Theorem 1.2** (Local boundedness).: _Assume (1.2)-(1.5) for the functional \(\mathcal{E}\). Suppose that \(s,p,q\) satisfy_ \[\begin{cases}p\leq q\leq\frac{np}{n-sp}&\quad\text{when}\;\,sp<n,\\ p\leq q<\infty&\quad\text{when}\;\,sp=n.\end{cases} \tag{1.9}\] _Then every minimizer \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) of \(\mathcal{E}\) is locally bounded in \(\Omega\)._ The second theorem is concerned with the Holder continuity of bounded minimizers in the case that \(sp\leq n\). For this, we additionally assume the Holder continuity of \(a(\cdot)\): \[|a(x)-a(y)|\leq[a]_{\alpha}|x-y|^{\alpha},\quad\alpha\in(0,1] \tag{1.10}\] for every \(x,y\in\Omega\). **Theorem 1.3** (Holder continuity).: _Assume (1.2), (1.4) and (1.5) for the functional \(\mathcal{E}\). Suppose that (1.10) holds for \(a(\cdot):\Omega\to\mathbb{R}\). Let \(s,p\) and \(q\) satisfy \(sp\leq n\) and_ \[q\leq sp+\alpha.\] _Then every minimizer \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) of \(\mathcal{E}\) which is locally bounded in \(\Omega\) is locally Holder continuous in \(\Omega\). Moreover, for any open set \(\Omega^{\prime}\Subset\Omega\), there exists \(\gamma\in(0,1)\) depending only on \(n,s,p,q,\Lambda,\alpha,[a]_{\alpha}\) and \(\|u\|_{L^{\infty}(\Omega^{\prime})}\) such that \(u\in C^{\gamma}_{\mathrm{loc}}(\Omega^{\prime})\)._ Finally, in order to state a nonlocal version of Harnack's inequality, we define the tail as follows: \[\mathrm{Tail}(v;x_{0},R):=\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\frac{|v( x)|^{p-1}}{|x-x_{0}|^{n+sp}}\,dx. \tag{1.11}\] For any subset \(\Omega_{0}\Subset\Omega\), let us denote \[\texttt{data}(\Omega_{0})\coloneqq\begin{cases}n,s,p,q,\Lambda,\alpha,[a]_{ \alpha},\|u\|_{L^{\infty}(\Omega_{0})}&\quad\text{when}\quad sp\leq n,\\ n,s,p,q,\Lambda,\alpha,[a]_{\alpha},[u]_{W^{s,p}(\Omega_{0})}&\quad\text{when} \quad sp>n.\end{cases}\] Then we have the following. **Theorem 1.4** (Harnack's inequality).: _Assume (1.2), (1.4) and (1.5) for the functional \(\mathcal{E}\). Suppose that (1.10) for \(a(\cdot)\). Let \(s,p,q\) and \(\alpha\) satisfy_ \[\begin{cases}q\leq sp+\alpha&\quad\text{when}\,\,\,sp\leq n,\\ q\leq p+\frac{p\alpha}{n}+\frac{(s-1)pq}{n}&\quad\text{when}\,\,\,sp>n.\end{cases} \tag{1.12}\] _Let \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) be a minimizer of \(\mathcal{E}\) which is nonnegative in a ball \(B_{16R}=B_{16R}(x_{0})\Subset\Omega\). When \(sp\leq n\), assume further that \(u\) is bounded in \(B_{16R}\). Then_ \[\sup_{B_{R}}u\leq c\inf_{B_{R}}u+c\left[R^{sp}\mathrm{Tail}(u_{-};x_{0},2R) \right]^{\frac{1}{p-1}} \tag{1.13}\] _holds for a constant \(c=c(\texttt{data}(B_{2R}))\), where \(u_{-}=\max\{-u,0\}\)._ **Remark 1.5**.: We can obtain the following result for general minimizers by combining the results of Theorems 1.2 and 1.3. 
Namely, under the same assumptions on \(K_{sp}\), \(F\) and \(a(\cdot)\) as in Theorem 1.3, every minimizer \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) of \(\mathcal{E}\) is locally Holder continuous in \(\Omega\), provided \[\begin{cases}p\leq q\leq\min\left\{\frac{np}{n-sp},sp+\alpha\right\}&\quad \text{when}\,\,\,sp<n,\\ p\leq q\leq sp+\alpha=n+\alpha&\quad\text{when}\,\,\,sp=n.\end{cases}\] Also, we can combine the results of Theorems 1.2 and 1.4 as follows. With the same assumptions as Theorem 1.4 for \(K_{sp}\), \(F\) and \(a(\cdot)\), if \[\begin{cases}p\leq q\leq\min\left\{\frac{np}{n-sp},sp+\alpha\right\}&\quad \text{when}\,\,\,sp<n,\\ p\leq q\leq sp+\alpha=n+\alpha&\quad\text{when}\,\,\,sp=n,\\ p\leq q\leq p+\frac{p\alpha}{n}+\frac{(s-1)pq}{n}&\quad\text{when}\,\,\,sp>n, \end{cases}\] then we have the estimate (1.13) for every minimizer \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) of \(\mathcal{E}\). We organize the paper as follows. Section 2 is devoted to basic notations and inequalities which will be used throughout the paper. In Section 3, we obtain Caccioppoli estimates to prove Theorem 1.2. In Section 4, we prove the expansion of positivity lemma. Finally, in Section 5 we prove Theorems 1.3 and 1.4. ## 2. Preliminaries For \(x_{0}\in\mathbb{R}^{n}\) and \(r>0\), \(B_{r}(x_{0})\) is the open ball in \(\mathbb{R}^{n}\) with center \(x_{0}\) and radius \(r\). We omit the center of a ball if it is not important in the context. Throughout the paper, \(c\) is a general constant with \(c\geq 1\), and its value may differ from each line. The notation \(f\eqsim g\) means that there is a constant \(c\geq 1\) such that \(\frac{1}{c}f\leq g\leq cf\). We write \(v_{\pm}\coloneqq\max\{\pm v,0\}\) for a measurable function \(v\). Additionally, if \(v\) is integrable over a measurable set \(S\) with \(0<|S|<\infty\), we denote the integral average over \(S\) by \[(v)_{S}=\mathchoice{{\vbox{\hbox{$-$}}\kern-7.499886pt}}{{\vbox{ \hbox{$-$}}\kern-6.374903pt}}{{\vbox{\hbox{$-$}} \kern-4.499931pt}}{{\vbox{\hbox{$-$}}\kern-3.749943pt}}\!\int_{S}v\,dx= \frac{1}{|S|}\int_{S}v\,dx.\] When \(S\subset\Omega\), we also denote \[a_{S}^{+}\coloneqq\sup_{x\in S}a(x)\qquad\text{and}\qquad a_{S}^{-}\coloneqq \inf_{x\in S}a(x).\] We recall the definition and basic properties of fractional Sobolev spaces; for more details, see [31]. With an open set \(U\subseteq\mathbb{R}^{n}\), \(s\in(0,1)\) and \(p\geq 1\), the fractional Sobolev space \(W^{s,p}(U)\) consists of all measurable functions \(v:U\to\mathbb{R}\) with \[\|v\|_{W^{s,p}(U)}\coloneqq\|v\|_{L^{p}(U)}+[v]_{W^{s,p}(U)}=\left(\int_{U}|v |^{p}\,dx\right)^{\frac{1}{p}}+\left(\int_{U}\int_{U}\frac{|v(x)-v(y)|^{p}}{|x -y|^{n+sp}}\,dxdy\right)^{\frac{1}{p}}<\infty.\] We denote the \(s\)-fractional Sobolev conjugate of \(p\) by \[p_{s}^{*}=\begin{cases}\frac{np}{n-sp}&\text{if }sp<n,\\ \text{any number in }(p,\infty)&\text{if }sp\geq n.\end{cases}\] Then we have the following embedding of \(W^{s,p}(U)\), which holds, for instance, when \(U\) is a Lipschitz domain (see for instance [31]): * If \(sp\leq n\), then \(W^{s,p}(U)\hookrightarrow L^{p_{s}^{*}}(U)\). * If \(sp>n\), then \(W^{s,p}(U)\hookrightarrow C^{0,s-\frac{3}{p}}(U)\). Moreover, we recall the corresponding fractional Sobolev-Poincare type inequality on balls. **Lemma 2.1** ([44, 45]).: _Let \(s\in(0,1)\) and \(p\geq 1\). 
For any \(v\in W^{s,p}(B_{r})\) there holds_ \[\left(\mathchoice{{\vbox{\hbox{$-$}}\kern-7.499886pt}}{{\vbox{ \hbox{$-$}}\kern-6.374903pt}}{{\vbox{\hbox{$-$}} \kern-4.499931pt}}{{\vbox{\hbox{$-$}}\kern-3.749943pt}}\!\int_{B_{r}}|v-(v)_ {B_{r}}|^{p_{s}^{*}}\,dx\right)^{\frac{p}{p_{s}^{*}}}\leq cr^{sp}\mathchoice{{ \vbox{\hbox{$-$}}\kern-7.499886pt}}{{\vbox{\hbox{$-$}} \kern-6.374903pt}}{{\vbox{\hbox{$-$}}\kern-4.499931pt}}{{\vbox{ \hbox{$-$}}\kern-3.749943pt}}\!\int_{B_{r}}\int_{B_{r}}\frac{|v(x)-v(y)|^{p} }{|x-y|^{n+sp}}\,dydx \tag{2.1}\] _for a constant \(c=c(n,s,p)\). Moreover, if \(sp>n\), then there holds_ \[[v]_{C^{0,s-\frac{n}{p}}(B_{r})}\leq c[v]_{W^{s,p}(B_{r})} \tag{2.2}\] _for a constant \(c=c(n,s,p)\)._ Recalling (1.1) and (1.6), we see that \[\mathcal{A}(\Omega)\subset W^{s,p}(\Omega).\] Also, (2.1) implies that \[\mathcal{A}(\Omega)\subset L^{q}(\Omega)\quad\text{if }\begin{cases}p\leq q\leq \frac{np}{n-sp}&\text{when }sp<n,\\ p\leq q<\infty&\text{when }sp\geq n.\end{cases}\] We next recall the tail space and tail given in (1.7) and (1.11), respectively. Note that our definition of nonlocal tail is slightly different from those in [23, 29, 30]. Observe that if \(v\in L^{q_{0}}(\mathbb{R}^{n})\) for some \(q_{0}\geq p-1\), or if \(v\in L^{p-1}(B_{R}(0))\cap L^{\infty}(\mathbb{R}^{n}\setminus B_{R}(0))\) for some \(R>0\), then \(v\in L^{p-1}_{sp}(\mathbb{R}^{n})\). In particular, we have \(W^{s,p}(\mathbb{R}^{n})\subset L^{p-1}_{sp}(\mathbb{R}^{n})\). From the inequality \[\frac{1+|x|}{|x-x_{0}|}\leq\frac{1+|x-x_{0}|+|x_{0}|}{|x-x_{0}|}\leq 1+\frac{1+|x_{0} |}{R}\qquad\text{for }x\in\mathbb{R}^{n}\setminus B_{R}(x_{0}),\] we have \(\text{Tail}(v;x_{0},R)<\infty\) for any \(v\in L^{p-1}_{sp}(\mathbb{R}^{n})\) and \(B_{R}(x_{0})\subset\mathbb{R}^{n}\). If the center \(x_{0}\) is not important, then we omit it and simply write \(\text{Tail}(v;x_{0},R)\equiv\text{Tail}(v;R)\). ### Useful lemmas We collect some inequalities which will be used in the proof of main theorems. The following lemma will be used in the proof of Theorem 1.2; its proof is essentially the same as that of [9, Lemma 2.4]. **Lemma 2.2**.: _Let the constants \(s,p\) and \(q\) satisfy (1.2) and (1.9). Then for any \(f\in W^{s,p}(B_{r})\) and any constant \(L_{0}\geq 0\), we have_ \[\fint_{B_{r}}\left(\left|\frac{f}{r^{s}}\right|^{p}+L_{0}\left| \frac{f}{r}\right|^{q}\right)\,dx \leq cL_{0}r^{(s-1)q}\left(\fint_{B_{r}}\int_{B_{r}}\frac{|f(x)-f( y)|^{p}}{|x-y|^{n+sp}}\,dxdy\right)^{\frac{q}{p}}\] \[\quad+c\left(\frac{|\mathrm{supp}f|}{|B_{r}|}\right)^{\frac{rp}{n }}\fint_{B_{r}}\int_{B_{r}}\frac{|f(x)-f(y)|^{p}}{|x-y|^{n+sp}}\,dxdy\] \[\quad+c\left(\frac{|\mathrm{supp}f|}{|B_{r}|}\right)^{p-1}\fint_{ B_{r}}\left(\left|\frac{f}{r^{s}}\right|^{p}+L_{0}\left|\frac{f}{r}\right|^{q} \right)\,dx\] _for some constant \(c=c(n,s,p,q)\) independent of \(L_{0}\)._ The following lemma is originated from [28] and used in [3]. **Lemma 2.3**.: _Let \(u\in W^{1,1}(B)\) for some ball \(B\subset\mathbb{R}^{n}\) and \(m,l\in\mathbb{R}\) with \(m<l\). Then we have_ \[(l-m)|B\cap\{u\leq m\}|^{1-\frac{1}{n}}\leq\frac{c(n)|B|}{|B\cap\{u\geq l\}|} \int_{B\cap\{m<u\leq l\}}|Du|\,dx.\] Finally, we need the iteration lemma from [37, Lemma 7.1]. **Lemma 2.4**.: _Let \(\{y_{i}\}_{i=0}^{\infty}\) be a sequence of nonnegative numbers with the inequality_ \[y_{i+1}\leq b_{1}b_{2}^{i}y_{i}^{1+\beta},\quad i=0,1,2,\ldots\] _for some constants \(b_{1},\beta>0\) and \(b_{2}>1\). 
If_ \[y_{0}\leq b_{1}^{-1/\beta}b_{2}^{-1/\beta^{2}},\] _then \(y_{i}\to 0\) as \(i\to\infty\)._ ## 3. Caccioppoli estimates and local boundedness First, we show a Caccioppoli type estimate with tail, which plays an important role throughout the paper. **Lemma 3.1**.: _Let \(u\in\mathcal{A}(\Omega)\cap L_{sp}^{p-1}(\Omega)\) be a minimizer of \(\mathcal{E}\) under the assumptions (1.2)-(1.5). Then for any ball \(B_{2r}=B_{2r}(x_{0})\Subset\Omega\) and \(0<\rho<\sigma\leq r\) we have_ \[\int_{B_{\rho}}\int_{B_{\rho}}\frac{|w_{\pm}(x)-w_{\pm}(y)|^{p}} {|x-y|^{sp}}\frac{dxdy}{|x-y|^{n}}+\int_{B_{\rho}}a(x)|Dw_{\pm}|^{q}\,dx\] \[\qquad+\int_{B_{\rho}}w_{\pm}(x)\left[\int_{\mathbb{R}^{n}}\frac{ w_{\mp}^{p-1}(y)}{|x-y|^{n+sp}}\,dy\right]\,dx\] \[\qquad\qquad\leq\frac{c}{(\sigma-\rho)^{p}}\int_{B_{\sigma}}\int _{B_{\sigma}}\frac{|w_{\pm}(x)+w_{\pm}(y)|^{p}}{|x-y|^{(s-1)p}}\frac{dxdy}{| x-y|^{n}}+\frac{c}{(\sigma-\rho)^{q}}\int_{B_{\sigma}}a(x)\left|w_{\pm}\right|^{q}\,dx\] \[\qquad\qquad+\frac{c\sigma^{n+sp}}{(\sigma-\rho)^{n+sp}}[\mathrm{ Tail}(w_{\pm};\sigma)]\int_{B_{\sigma}}w_{\pm}\,dx\] _for some \(c=c(n,s,p,q,\Lambda)\), where \(w_{\pm}\coloneqq(u-k)_{\pm}\) with \(k\geq 0\)._ Proof.: We only prove the estimate for \(w_{+}\), since the proof of the one for \(w_{-}\) is similar. Choose two radii \(\rho_{1},\sigma_{1}\) satisfying \(\rho\leq\rho_{1}<\sigma_{1}\leq\sigma\) and then a cut-off function \(\phi\in C_{0}^{\infty}(B_{\sigma_{1}+\rho_{1}})\) satisfying \(0\leq\phi\leq 1\), \(\phi=1\) on \(B_{\rho_{1}}\) and \(|D\phi|\leq\frac{4}{\sigma_{1}-\rho_{1}}\). We test (1.8) with \(v=u-\phi w_{+}\). Then, since \(u=v\) in \(\mathbb{R}^{n}\setminus B_{\sigma}\), we have \[\begin{split} 0&\leq\iint_{\mathcal{C}_{\Omega}}(|v(x)-v(y) |^{p}-|u(x)-u(y)|^{p})K_{sp}(x,y)\,dxdy\\ &\quad+\int_{\Omega}a(x)(F(x,Dv)-F(x,Du))\,dx\\ &\leq\int_{B_{\sigma}}\int_{B_{\sigma}}(|v(x)-v(y)|^{p}-|u(x)-u(y )|^{p})K_{sp}(x,y)\,dxdy\\ &\quad+2\int_{\mathbb{R}^{n}\setminus B_{\sigma}}\int_{B_{\sigma }}(|v(x)-v(y)|^{p}-|u(x)-u(y)|^{p})K_{sp}(x,y)\,dxdy\\ &\quad+\int_{B_{\sigma}}a(x)(F(x,Dv)-F(x,Du))\,dx\\ =:I_{1}+I_{2}+I_{3}.\end{split} \tag{3.1}\] Both \(I_{1}\) and \(I_{2}\) are estimated in the same way as in the proof of [23, Proposition 7.5]: \[\begin{split} I_{1}+I_{2}&\leq c\int_{B_{\sigma_{1} }\setminus B_{\rho_{1}}}\int_{B_{\sigma_{1}}\setminus B_{\rho_{1}}}\frac{|w_{ +}(x)-w_{+}(y)|^{p}}{|x-y|^{sp}}\frac{dxdy}{|x-y|^{n}}\\ &\quad+\frac{c}{(\sigma_{1}-\rho_{1})^{p}}\int_{B_{\sigma}}\int_ {B_{\sigma}}\frac{|w_{+}(x)+w_{+}(y)|^{p}}{|x-y|^{(s-1)p}}\frac{dxdy}{|x-y|^{ n}}\\ &\quad-\frac{1}{c}\int_{B_{\rho_{1}}}\int_{B_{\rho_{1}}}\frac{|w_ {+}(x)-w_{+}(y)|^{p}}{|x-y|^{sp}}\frac{dxdy}{|x-y|^{n}}\\ &\quad-\frac{1}{c}\int_{B_{\rho_{1}}}\int_{\mathbb{R}^{n}}\frac{w _{-}(y)^{p-1}w_{+}(x)}{|x-y|^{sp}}\frac{dydx}{|x-y|^{n}}\\ &\quad+c\frac{\sigma^{n+sp}}{(\sigma_{1}-\rho_{1})^{n+sp}}\left[ \operatorname{Tail}(w_{+};x_{0},\sigma)\right]\int_{B_{\sigma}}w_{+}\,dx\end{split} \tag{3.2}\] with \(c=c(n,s,p,\Lambda)\). 
For \(I_{3}\), we note that \(\operatorname{supp}(u-v)\subset A^{+}(k,\sigma_{1})\coloneqq\{x\in B_{\sigma _{1}}:u(x)\geq k\}\), which implies \[\begin{split} I_{3}&=\int_{A^{+}(k,\sigma_{1})}a(x )(F(x,Dv)-F(x,Du))\,dx\\ &\leq\Lambda\int_{A^{+}(k,\sigma_{1})}a(x)|Dv|^{q}\,dx-\Lambda^{- 1}\int_{A^{+}(k,\sigma_{1})}a(x)|Du|^{q}\,dx\\ &\leq\Lambda\int_{A^{+}(k,\sigma_{1})}a(x)|Dv|^{q}\,dx-\Lambda^{- 1}\int_{A^{+}(k,\sigma_{1})}a(x)|Dw_{+}|^{q}\,dx.\end{split}\] Here, we observe that \[\begin{split}|Dv|^{q}&=|Du-(D\phi)w_{+}-\phi(Dw_{+}) |^{q}\\ &=|(1-\phi)Dw_{+}-(D\phi)w_{+}|^{q}\\ &\leq c|(1-\phi)Dw_{+}|^{q}+c\left|\frac{w_{+}}{\sigma_{1}-\rho_{ 1}}\right|^{q}\end{split}\] holds in \(A^{+}(k,\sigma_{1})\). In turn, we have \[\begin{split} I_{3}&\leq c\int_{B_{\sigma_{1}}}a(x )\left(|(1-\phi)Dw_{+}|^{q}+\left|\frac{w_{+}}{\sigma_{1}-\rho_{1}}\right|^{q} \right)\,dx-\Lambda^{-1}\int_{B_{\sigma_{1}}}a(x)|Dw_{+}|^{q}\,dx\\ &\leq c\int_{B_{\sigma_{1}}\setminus B_{\rho_{1}}}a(x)|Dw_{+}|^{q }\,dx+c\int_{B_{\sigma}}a(x)\left|\frac{w_{+}}{\sigma_{1}-\rho_{1}}\right|^{q }\,dx-\Lambda^{-1}\int_{B_{\sigma_{1}}}a(x)|Dw_{+}|^{q}\,dx\end{split} \tag{3.3}\] for a constant \(c=c(n,q,\Lambda)\). Combining the above estimates (3.1), (3.2) and (3.3), we find \[\begin{split}&\int_{B_{\rho_{1}}}\int_{B_{\rho_{1}}}\frac{|w_{+}(x)-w _{+}(y)|^{p}}{|x-y|^{sp}}\frac{dxdy}{|x-y|^{n}}+\int_{B_{\rho_{1}}}a(x)|Dw_{+}| ^{q}\,dx\\ &\qquad+\int_{B_{\rho_{1}}}\int_{\mathbb{R}^{n}}\frac{w_{-}(y)^{p -1}w_{+}(x)}{|x-y|^{sp}}\frac{dydx}{|x-y|^{n}}\\ &\qquad\leq c\left(\int_{B_{\sigma_{1}}\setminus B_{\rho_{1}}} \int_{B_{\sigma_{1}}\setminus B_{\rho_{1}}}\frac{|w_{+}(x)-w_{+}(y)|^{p}}{|x -y|^{sp}}\frac{dxdy}{|x-y|^{n}}+\int_{B_{\sigma_{1}}\setminus B_{\rho_{1}}}a(x) |Dw_{+}|^{q}\,dx\right)\\ &\qquad+\frac{c}{(\sigma_{1}-\rho_{1})^{p}}\int_{B_{\sigma}}\int_{ B_{\sigma}}\frac{|w_{+}(x)+w_{+}(y)|^{p}}{|x-y|^{(s-1)p}}\frac{dxdy}{|x-y|^{n}}\\ &\qquad+\frac{c}{(\sigma_{1}-\rho_{1})^{q}}\int_{B_{\sigma}}a(x )\,|w_{+}|^{q}\,\,dx\\ &\qquad+\frac{c\sigma^{n+sp}}{(\sigma_{1}-\rho_{1})^{n+sp}}[ \operatorname{Tail}(w_{+};x_{0},\sigma)]\int_{B_{\sigma}}w_{+}\,dx.\end{split} \tag{3.4}\] Now, we define for \(t>0,\) \[\begin{split}\Phi(t)&=\int_{B_{t}}\int_{B_{t}}\frac {|w_{+}(x)-w_{+}(y)|^{p}}{|x-y|^{sp}}\frac{dxdy}{|x-y|^{n}}\\ &\qquad+\int_{B_{t}}a(x)|Dw_{+}|^{q}\,dx+\int_{B_{t}}\int_{ \mathbb{R}^{n}}\frac{w_{-}(y)^{p-1}w_{+}(x)}{|x-y|^{sp}}\frac{dydx}{|x-y|^{n}}. \end{split}\] Then (3.4) reads as \[\begin{split}\Phi(\rho_{1})\leq c(\Phi(\sigma_{1})-\Phi(\rho_{1}) )&+\frac{c}{(\sigma_{1}-\rho_{1})^{p}}\int_{B_{\sigma}}\int_{B_{ \sigma}}\frac{|w_{+}(x)+w_{+}(y)|^{p}}{|x-y|^{(s-1)p}}\frac{dxdy}{|x-y|^{n}}\\ &+\frac{c}{(\sigma_{1}-\rho_{1})^{q}}\int_{B_{\sigma}}a(x)\,|w_{+} |^{q}\,\,dx\\ &+\frac{c\sigma^{n+sp}}{(\sigma_{1}-\rho_{1})^{n+sp}}[\operatorname {Tail}(w_{+};x_{0},\sigma)]\int_{B_{\sigma}}w_{+}\,dx\end{split}\] with \(c=c(n,s,p,q,\Lambda).\) Now, the technical lemma [26, Lemma 2.5] gives the conclusion. We now prove the local boundedness result in Theorem 1.2. 
Proof of Theorem 1.2.: Throughout the proof, we denote \[H_{0}(t)\coloneqq t^{p}+\|a\|_{L^{\infty}}t^{q}\qquad(t\geq 0).\] Fix a ball \(B_{r}\equiv B_{r}(x_{0})\Subset\Omega\) with \(r\leq 1.\) Let \(r/2\leq\rho<\sigma\leq r\) and \(k>0.\) We define the upper level set \[A^{+}(k,\rho)\coloneqq\{x\in B_{\rho}:u(x)\geq k\}.\] Applying Lemma 2.2 with \(f\equiv(u-k)_{+},\) we obtain (3.5) For fixed \(0<h<k\), we see that \[(u(x)-h)_{+}=u(x)-h\geq k-h\quad\text{and}\quad(u(x)-h)_{+}=u(x)-h\geq u(x)-k=(u(x )-k)_{+}\] for \(x\in A^{+}(k,\rho)\subset A^{+}(h,\rho)\). Then we find \[\fint_{B_{\rho}}(u-k)_{+}\,dx\leq\fint_{B_{\rho}}(u-h)_{+}\left(\frac{(u-h)_{+ }}{k-h}\right)^{p-1}\,dx\leq\frac{1}{(k-h)^{p-1}}\fint_{B_{\sigma}}H_{0}((u-h) _{+})\,dx\] and \[\begin{split}\frac{|A^{+}(k,\rho)|}{|B_{\rho}|}& \leq\frac{1}{|B_{\rho}|}\int_{A^{+}(k,\rho)}\frac{(u-h)_{+}^{p}}{(k-h)^{p}}\,dx \\ &\leq\frac{1}{(k-h)^{p}|B_{\rho}|}\int_{A^{+}(h,\rho)}H_{0}((u-h) _{+})\,dx\\ &\leq\frac{1}{(k-h)^{p}}\fint_{B_{\rho}}H_{0}((u-h)_{+})\,dx. \end{split} \tag{3.6}\] By Lemma 3.1, we have \[\fint_{B_{\rho}}\int_{B_{\rho}}\frac{|f(x)-f(y)|^{p}}{|x-y|^{n+sp }}\,dxdy\] \[\leq\frac{c}{(\sigma-\rho)^{p}}\fint_{B_{\sigma}}(u(x)-h)_{+}^{p }\int_{B_{\sigma}}\frac{1}{|x-y|^{n+(s-1)p}}\,dydx+\frac{c\|a\|_{L^{\infty}}}{ (\sigma-\rho)^{q}}\fint_{B_{\sigma}}(u-h)_{+}^{q}\,dx\] \[\quad+c\left(\frac{\sigma^{n+sp}}{(\sigma-\rho)^{n+sp}}\text{Tail }(f;\sigma)\right)\fint_{B_{\sigma}}(u-h)_{+}\,dx\] \[\leq\frac{c\rho^{(1-s)p}}{(\sigma-\rho)^{p}}\fint_{B_{\sigma}}(u- h)_{+}^{p}\,dx+\frac{c\|a\|_{L^{\infty}}}{(\sigma-\rho)^{q}}\fint_{B_{\sigma}}(u -h)_{+}^{q}\,dx\] \[\quad+\frac{c\sigma^{n+sp}}{(\sigma-\rho)^{n+sp}}(\text{Tail}(f; \sigma))\fint_{B_{\sigma}}(u-h)_{+}\,dx\] \[\leq\frac{c}{(\sigma-\rho)^{q}}\fint_{B_{\sigma}}H_{0}((u-h)_{+}) \,dx+\frac{c\text{Tail}(f;\sigma)}{(\sigma-\rho)^{n+sp}}\fint_{B_{\sigma}}(u-h )_{+}\,dx.\] Recalling \(f\equiv(u-k)_{+}\) and combining the above estimate with (3.5)-(3.6) yield \[\rho^{-sp}\fint_{B_{\rho}}H_{0}((u-k)_{+})\,dx\] \[\leq\frac{c\rho^{(s-1)q}}{(\sigma-\rho)^{q^{2}/p}}\left(\fint_{B_{ \sigma}}H_{0}((u-h)_{+})\,dx\right)^{\frac{q}{p}}\] \[\quad+\frac{c}{(k-h)^{q/p^{\prime}}}\frac{\rho^{(s-1)q}[\text{Tail }((u-k)_{+};\sigma)]^{q/p}}{(\sigma-\rho)^{(n+sp)q/p}}\left(\fint_{B_{\sigma}}H_ {0}((u-h)_{+})\,dx\right)^{\frac{q}{p}}\] \[\quad+\frac{c}{(k-h)^{sp^{2}/n}}\frac{1}{(\sigma-\rho)^{q}} \left(\fint_{B_{\sigma}}H_{0}((u-h)_{+})\,dx\right)^{1+\frac{sp}{n}}\] \[\quad+\frac{c\text{Tail}((u-k)_{+};\sigma)}{(k-h)^{sp^{2}/n+p-1}( \sigma-\rho)^{n+sp}}\left(\fint_{B_{\sigma}}H_{0}((u-k)_{+})\,dx\right)^{1+ \frac{sp}{n}}\] \[\quad+\frac{cr^{-q}}{(k-h)^{p(p-1)}}\left(\fint_{B_{\sigma}}H_{ 0}((u-h)_{+})\,dx\right)^{p}.\] Now, for \(i=0,1,2,\dots\) and \(k_{0}>1\), we denote \[\sigma_{i}\coloneqq\frac{r}{2}(1+2^{-i}),\quad k_{i}\coloneqq 2k_{0}(1-2^{-i-1}) \quad\text{and}\quad y_{i}\coloneqq\int_{A^{+}(k_{i},\sigma_{i})}H_{0}((u-k_{i} )_{+})\,dx.\] Since \(H_{0}(u)\in L^{1}(\Omega)\) from (2.1) and (1.9), it follows that \[y_{0}=\int_{A^{+}(k_{0},r)}H_{0}((u-k_{0})_{+})\,dx\quad\longrightarrow\quad 0 \quad\text{as}\quad k_{0}\to\infty.\] Consider a large number \(k_{0}>1\) for which \[y_{i}\leq y_{i-1}\leq\cdots\leq y_{0}\leq 1,\quad i=1,2,\ldots.\] Then since \(u\in L^{p-1}_{sp}(\mathbb{R}^{n})\) and so \(\mathrm{Tail}((u-k_{i})_{+};\sigma_{i})\leq\mathrm{Tail}(u;r/2)<\infty\), we obtain \[y_{i+1} \leq\tilde{c}\left(2^{iq^{2}/p}y_{i}^{q/p}+2^{i(q/p^{\prime}+(n+sp )q/p)}y_{i}^{q/p}\right.\] 
\[\qquad\quad\left.+2^{i(sp^{2}/n+q)}y_{i}^{1+(sp/n)}+2^{i(sp^{2}/n +p+n+sp)}y_{i}^{1+(sp/n)}+2^{ip(p-1)}y_{i}^{p}\right)\] \[\leq\tilde{c}2^{\theta i}y_{i}^{1+\beta}\] for some constant \(\tilde{c}>0\) depending on \(n,s,p,q,\Lambda,\|a\|_{L^{\infty}},r\) and \(\mathrm{Tail}(u;r/2)\), where \[\theta=\max\left\{\frac{q^{2}}{p},\frac{q}{p^{\prime}}+(n+sp)\frac{q}{p},\frac {sp^{2}}{n}+q,\frac{sp^{2}}{n}+p+n+sp,p(p-1)\right\}\] and \[\beta=\min\left\{\frac{q}{p}-1,\frac{sp}{n},p-1\right\}.\] Now we select a constant \(k_{0}\) sufficiently large to satisfy \[y_{0}\leq\tilde{c}^{-1/\beta}2^{-\theta/\beta^{2}}.\] Then Lemma 2.4 yields \[y_{\infty}=\int_{A^{+}(2k_{0},r/2)}H_{0}((u-2k_{0})_{+})\,dx=0,\] and so \(u\leq k_{0}\) a.e. in \(B_{r/2}\). Applying the same argument to \(-u\), we finally obtain \(u\in L^{\infty}(B_{r/2})\). ## 4. Expansion of positivity Throughout this section we assume that \(K_{sp}\) is symmetric and satisfies (1.5). We suppose (1.4) for \(F\), and let \(a(\cdot)\) satisfy (1.3) and (1.10). Also, assume that \(s,p,q\) and \(\alpha\) satisfy (1.2) and (1.12). **Lemma 4.1**.: _Let \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) be a minimzer of \(\mathcal{E}\), and let \(B_{R}\Subset\Omega\) be a ball with \(R\leq 1\). When \(sp\leq n\), assume further that \(u\) is bounded in \(B_{R}\). Then for \(w_{\pm}\coloneqq(u-k)_{\pm}\) with \(|k|\leq\|u\|_{L^{\infty}(B_{R})}\), we have_ \[\begin{split}&[w_{\pm}]^{p}_{W^{s,p}(B_{\rho})}+a_{B_{R}}^{-}[w_{ \pm}]^{q}_{W^{1,q}(B_{\rho})}+\int_{B_{\rho}}w_{\pm}(x)\left[\int_{\mathbb{R}^ {n}}\frac{w_{\mp}^{p-1}(y)}{|x-y|^{n+sp}}\,dy\right]\,dx\\ &\leq c\left(\frac{r}{r-\rho}\right)^{n+q}\left[\frac{\|w_{\pm} \|_{L^{p}(B_{r})}^{p}}{r^{sp}}+a_{B_{R}}^{-}\frac{\|w_{\pm}\|_{L^{q}(B_{r})}^{ \dagger}}{r^{q}}+\|w_{\pm}\|_{L^{1}(B_{r})}\mathrm{Tail}(w_{\pm};r)\right]\end{split} \tag{4.1}\] _for a constant \(c=c(\texttt{data}(B_{R}))\), whenever \(B_{\rho}\subset B_{r}\subset B_{R}\) are concentric balls with \(R/2\leq\rho\leq r\leq R\)._ Proof.: Lemma 3.1 directly implies \[\begin{split}&[w_{\pm}]^{p}_{W^{s,p}(B_{\rho})}+a_{B_{R}}^{-}[w_{ \pm}]^{q}_{W^{1,q}(B_{\rho})}+\int_{B_{\rho}}w_{\pm}(x)\left[\int_{\mathbb{R}^ {n}}\frac{w_{\mp}^{p-1}(y)}{|x-y|^{n+sp}}\,dy\right]\,dx\\ &\leq\frac{c}{(r-\rho)^{p}}\int_{B_{r}}\int_{B_{r}}\frac{|w_{\pm}( x)+w_{\pm}(y)|^{p}}{|x-y|^{(s-1)p}}\frac{dxdy}{|x-y|^{n}}+\frac{c}{(r-\rho)^{q}} \int_{B_{r}}a(x)|w_{\pm}|^{q}\,dx\\ &\quad+\frac{cr^{n+q}}{(r-\rho)^{n+q}}\|w_{\pm}\|_{L^{1}(B_{r})} \mathrm{Tail}(w_{\pm};r).\end{split} \tag{4.2}\] We estimate the first integral in the right-hand side by using the symmetry of \(x\) and \(y\): \[\frac{c}{(r-\rho)^{p}}\int_{B_{r}}\int_{B_{r}}\frac{|w_{\pm}(x)+w_{ \pm}(y)|^{p}}{|x-y|^{(s-1)p}}\frac{dxdy}{|x-y|^{n}}\] \[\leq\frac{c}{(r-\rho)^{p}}\int_{B_{r}}\int_{B_{r}}\frac{|w_{\pm}(x )|^{p}}{|x-y|^{(s-1)p}}\frac{dxdy}{|x-y|^{n}}\] \[\leq\frac{c}{(r-\rho)^{p}}\int_{B_{r}}|w_{\pm}(x)|^{p}\int_{B_{2r} (x)}\frac{1}{|x-y|^{n+(s-1)p}}\,\,dydx\] \[\leq\frac{cr^{(1-s)p}}{(r-\rho)^{p}}\|w_{\pm}\|_{L^{p}(B_{r})}^{p} =c\left(\frac{r}{r-\rho}\right)^{p}\frac{\|w_{\pm}\|_{L^{p}(B_{r})}^{p}}{r^{ sp}}.\] For the second one, we use (1.10) and the fact that \(R/2\leq r\leq R\) to have \[\frac{1}{(r-\rho)^{q}}\int_{B_{r}}a(x)|w_{\pm}|^{q}\,dx\leq\frac{c}{(r-\rho)^ {q}}\int_{B_{r}}a_{B_{R}}^{-}|w_{\pm}|^{q}\,dx+\frac{c}{(r-\rho)^{q}}\int_{B_{ r}}r^{\alpha}|w_{\pm}|^{q}\,dx. 
\tag{4.3}\] Here, when \(sp\leq n\), we use \(\eqref{eq:1.12}_{1}\) in order to estimate \[\frac{c}{(r-\rho)^{q}}\int_{B_{r}}r^{\alpha}|w_{\pm}|^{q}\,dx\leq\frac{c\|w_{ \pm}\|_{L^{\infty}(B_{r})}^{q-p}}{(r-\rho)^{q}}\int_{B_{r}}r^{\alpha}|w_{\pm}| ^{p}\,dx\leq c\left(\frac{r}{r-\rho}\right)^{q}\frac{\|w_{\pm}\|_{L^{p}(B_{r}) }^{p}}{r^{sp}} \tag{4.4}\] with \(c=c(\texttt{data}(B_{r}))\). When \(sp>n\), note that \(\eqref{eq:1.12}_{2}\) is equivalent to \[q\leq sp+\alpha+\left(s-\frac{n}{p}\right)(q-p),\] which together with (2.2) implies \[\left(\operatorname*{osc}_{B_{r}}u\right)^{q-p}=\left(\frac{\operatorname*{ osc}_{B_{r}}u}{r^{s-\frac{n}{p}}}\right)^{q-p}r^{(s-\frac{n}{p})(q-p)}\leq c [u]_{W^{s,p}(B_{r})}^{q-p}r^{q-sp-\alpha}.\] In turn, we obtain \[\frac{c}{(r-\rho)^{q}}\int_{B_{r}}r^{\alpha}|w_{\pm}|^{q}\,dx\leq\frac{c( \operatorname*{osc}_{B_{r}}u)^{q-p}}{(r-\rho)^{q}}\int_{B_{r}}r^{\alpha}|w_{ \pm}|^{p}\,dx\leq c\left(\frac{r}{r-\rho}\right)^{q}\frac{\|w_{\pm}\|_{L^{p}( B_{r})}^{p}}{r^{sp}} \tag{4.5}\] for a constant \(c=c(\texttt{data}(B_{r}))\). Combining (4.2)-(4.3) with each of (4.4) and (4.5), and then recalling the fact \(p\leq q\), in any case we conclude with (4.1). In the following, with \(B_{r}\in\Omega\) being any ball, we denote \[G_{B_{r}}(t)\coloneqq\frac{t^{p}}{r^{sp}}+a_{B_{r}}^{-}\frac{t^{q}}{r^{q}} \quad\text{and}\quad g_{B_{r}}(t)\coloneqq\frac{t^{p-1}}{r^{sp}}+a_{B_{r}}^{-} \frac{t^{q-1}}{r^{q}}\qquad\text{for }t\geq 0. \tag{4.6}\] Now we prove the following key lemma. **Lemma 4.2**.: _Let \(u\in\mathcal{A}(\Omega)\cap L_{sp}^{p-1}(\mathbb{R}^{n})\) be a minimizer of \(\mathcal{E}\) which is nonnegative in a ball \(B_{4R}\Subset\Omega\) with \(R\leq 1\). When \(sp\leq n\), assume further that \(u\) is bounded in \(B_{4R}\). Suppose that_ \[|B_{2R}\cap\{u\geq t\}|\geq\nu|B_{2R}| \tag{4.7}\] _for some \(\nu\in(0,1)\) and \(t>0\). Then for any \(\delta\in(0,\frac{1}{8}]\), if_ \[\operatorname{Tail}(u_{-};4R)\leq g_{B_{4R}}(\delta t), \tag{4.8}\] _then_ \[|B_{2R}\cap\{u<2\delta t\}|\leq\frac{c_{1}}{\nu^{\max\{\frac{2q}{q-1},\frac{2 n}{n-1}\}}}\left(\delta^{\frac{p-1}{2}}+\frac{1}{|\log\delta|^{\frac{n}{n-1}\frac{q-1}{ q}}}\right)|B_{2R}|, \tag{4.9}\] _where \(c_{1}=c_{1}(\texttt{data}(B_{4R}))\)._ Proof.: We may assume that all the balls are centered at the origin. We first observe that, for any \(k\geq 0\) and \(\zeta\geq 1\), \[\|(u-k)_{-}\|_{L\zeta(B_{4R})}^{\zeta}=\int_{A^{-}(k,4R)}(k-u(x))^{\zeta}\,dx\leq |A^{-}(k,4R)|k^{\zeta}\leq|B_{4R}|k^{\zeta}. \tag{4.10}\] Fix any \(l\geq\frac{\delta t}{2}\). We apply (4.8) to obtain \[\begin{split}\mathrm{Tail}((u-l)_{-};4R)&=\int_{ \mathbb{R}^{n}\setminus B_{4R}}\frac{(l-u(x))_{+}^{p-1}}{|x|^{n+sp}}\,dx\\ &\leq c\left[l^{p-1}\int_{\mathbb{R}^{n}\setminus B_{4R}}\frac{ dx}{|x|^{n+sp}}+\int_{\mathbb{R}^{n}\setminus B_{4R}}\frac{u_{-}(x)^{p-1}}{|x|^{n+sp}} \,dx\right]\\ &=c\left[R^{-sp}l^{p-1}+\mathrm{Tail}(u_{-};4R)\right]\\ &\leq cg_{B_{4R}}(l).\end{split} \tag{4.11}\] Then by Lemma 4.1, (4.10) and (4.11), it follows that \[\begin{split}&\int_{B_{2R}}\int_{B_{2R}}\frac{(u(x)-l)_{+}^{p-1} (u(y)-l)_{-}}{|x-y|^{n+sp}}\,dxdy+a_{B_{4R}}^{-}\int_{B_{2R}}|D(u-l)_{-}|^{q} \,dx\\ &\leq c\left[\frac{\|(u-l)_{-}\|_{L^{p}(B_{4R})}^{p}}{R^{sp}}+a_{ B_{4R}}^{-}\frac{\|(u-l)_{-}\|_{L^{q}(B_{4R})}^{q}}{R^{q}}+\|(u-l)_{-}\|_{L^{1}(B_{4R })}\mathrm{Tail}((u-l)_{-};4R)\right]\\ &\leq cG_{B_{4R}}(l)|B_{R}|\end{split} \tag{4.12}\] for any \(l\geq\frac{\delta t}{2}\) with \(c=c(\texttt{data}(B_{4R}))\). 
Here, if \(a_{B_{4R}}^{-}=0\), then we have
\[\int_{B_{2R}}\int_{B_{2R}}\frac{(u(x)-l)_{+}^{p-1}(u(y)-l)_{-}}{|x-y|^{n+sp}}\,dxdy\leq c_{1}\left(\frac{l}{R^{s}}\right)^{p}|B_{R}|;\]
in this case, (4.9) follows in the same way as in [23, Lemmas 6.3 and 6.5]. We now consider the case \(a_{B_{4R}}^{-}>0\). Note that we have \(u\in W^{1,q}(B_{4R})\) in this case. We consider the following two cases:
\[\left(\frac{\delta t}{R^{s}}\right)^{p}\geq a_{B_{4R}}^{-}\left(\frac{\delta t}{R}\right)^{q}\quad\text{and}\quad\left(\frac{\delta t}{R^{s}}\right)^{p}<a_{B_{4R}}^{-}\left(\frac{\delta t}{R}\right)^{q}. \tag{4.13}\]
_Step 1: The case \((4.13)_{1}\)._ In this case, by (4.7) and (4.12) with \(l=4\delta t\), we have
\[\begin{split}\left(\frac{4\delta t}{R^{s}}\right)^{p}|B_{2R}|&\stackrel{{(4.12)}}{{\geq}}\frac{1}{c}\int_{B_{2R}}\int_{B_{2R}}\frac{(u(x)-4\delta t)_{+}^{p-1}(u(y)-4\delta t)_{-}}{|x-y|^{n+sp}}\,dxdy\\ &\geq\frac{1}{cR^{n+sp}}\int_{B_{2R}\cap\{u\geq t\}}(u(x)-4\delta t)^{p-1}\,dx\int_{B_{2R}\cap\{u<2\delta t\}}(4\delta t-u(y))\,dy\\ &\geq\frac{\delta t^{p}}{cR^{n+sp}}|B_{2R}\cap\{u\geq t\}||B_{2R}\cap\{u<2\delta t\}|\\ &\stackrel{{(4.7)}}{{\geq}}\frac{\delta\nu t^{p}}{cR^{sp}}|B_{2R}\cap\{u<2\delta t\}|.\end{split}\]
Then it follows that
\[|B_{2R}\cap\{u<2\delta t\}|\leq\frac{c\delta^{p-1}}{\nu}|B_{2R}| \tag{4.14}\]
with \(c=c(\texttt{data}(B_{4R}))\). We start to deal with the case \((4.13)_{2}\). We choose \(i\in\mathbb{N}\) satisfying
\[2^{-i-1}\leq 2\delta<2^{-i} \tag{4.15}\]
and further distinguish two subcases:
\[\left(\frac{t}{R^{s}}\right)^{p}<a_{B_{4R}}^{-}\left(\frac{t}{R}\right)^{q}\quad\text{and}\quad\left(\frac{t}{R^{s}}\right)^{p}\geq a_{B_{4R}}^{-}\left(\frac{t}{R}\right)^{q}. \tag{4.16}\]
_Step 2: The case \((4.13)_{2}\) and \((4.16)_{1}\)._ Note from \((4.13)_{2}\) that
\[\left(\frac{2^{-i}t}{R^{s}}\right)^{p}=2^{p}\left(\frac{2^{-i-1}t}{R^{s}}\right)^{p}\leq 2^{p}\left(\frac{2\delta t}{R^{s}}\right)^{p}\leq 2^{p}a_{B_{4R}}^{-}\left(\frac{2\delta t}{R}\right)^{q}\leq 2^{p}a_{B_{4R}}^{-}\left(\frac{2^{-i}t}{R}\right)^{q}.\]
Then together with \((4.16)_{1}\) we have
\[\left(\frac{kt}{R^{s}}\right)^{p}\leq 2^{p}a_{B_{4R}}^{-}\left(\frac{kt}{R}\right)^{q}\quad\text{for all }k\in[2^{-i},1].
\tag{4.17}\]
Denoting \(A_{j}\coloneqq B_{2R}\cap\{2^{-j}t<u\leq 2^{-j+1}t\}\), it follows from Lemma 2.3 that for all \(j\in\{1,\ldots,i\}\),
\[\begin{split} 2^{-j}t|B_{2R}\cap\{u\leq 2^{-j}t\}|^{1-\frac{1}{n}}&\leq\frac{c|B_{2R}|}{|B_{2R}\cap\{u\geq 2^{-j+1}t\}|}\int_{A_{j}}|Du|\,dx\\ &\leq\frac{c}{\nu}\int_{A_{j}}|Du|\,dx,\end{split} \tag{4.18}\]
where for the last inequality we have used
\[|B_{2R}\cap\{u\geq 2^{-j+1}t\}|\geq|B_{2R}\cap\{u\geq t\}|\geq\nu|B_{2R}|.\]
Then it follows that
\[\nu(2^{-j}t)|B_{2R}\cap\{u\leq 2^{-j}t\}|^{\frac{n-1}{n}}\leq c\int_{A_{j}}|Du|\,dx.\]
Moreover, by Holder's inequality, we find that
\[\int_{A_{j}}|Du|\,dx\leq|A_{j}|^{\frac{1}{q^{\prime}}}\left(\int_{B_{2R}}|D(u-2^{-j+1}t)_{-}|^{q}\,dx\right)^{\frac{1}{q}}.\]
Combining the last two displays, we have
\[\begin{split} a_{B_{4R}}^{-}\nu^{q}(2^{-j}t)^{q}|B_{2R}\cap\{u\leq 2^{-j}t\}|^{\frac{n-1}{n}q}&\leq c|A_{j}|^{q-1}a_{B_{4R}}^{-}\int_{B_{2R}}|D(u-2^{-j+1}t)_{-}|^{q}\,dx\\ &\overset{(4.12)}{\leq}c|A_{j}|^{q-1}G_{B_{4R}}(2^{-j+1}t)|B_{2R}|\\ &\overset{(4.17)}{\leq}c|A_{j}|^{q-1}a_{B_{4R}}^{-}\left(\frac{2^{-j+1}t}{R}\right)^{q}|B_{2R}|,\end{split}\]
and so
\[\nu^{\frac{q}{q-1}}|B_{2R}\cap\{u\leq 2^{-j}t\}|^{\frac{n-1}{n}\frac{q}{q-1}}\leq c\left(\frac{|B_{2R}|}{R^{q}}\right)^{\frac{1}{q-1}}|A_{j}|\leq cR^{\frac{n-q}{q-1}}|A_{j}|.\]
Then since \((B_{2R}\cap\{u\leq 2^{-i}t\})\subset(B_{2R}\cap\{u\leq 2^{-j}t\})\) for all \(j\in\{1,\ldots,i\}\), there holds
\[\nu^{\frac{q}{q-1}}|B_{2R}\cap\{u\leq 2^{-i}t\}|^{\frac{n-1}{n}\frac{q}{q-1}}\leq cR^{\frac{n-q}{q-1}}|A_{j}|. \tag{4.19}\]
We recall the definition of \(A_{j}\) and sum up (4.19) over \(j\in\{1,\ldots,i\}\), to discover
\[\begin{split} i\nu^{\frac{q}{q-1}}|B_{2R}\cap\{u\leq 2^{-i}t\}|^{\frac{n-1}{n}\frac{q}{q-1}}&\leq cR^{\frac{n-q}{q-1}}\sum_{j=1}^{i}|A_{j}|\\ &\leq cR^{\frac{n-q}{q-1}}|B_{2R}\cap\{2^{-i}t<u\leq 2t\}|\\ &\leq cR^{\frac{n-q}{q-1}}|B_{2R}|\eqsim|B_{2R}|^{\frac{n-1}{n}\frac{q}{q-1}},\end{split}\]
and so
\[|B_{2R}\cap\{u\leq 2^{-i}t\}|\leq\frac{c|B_{2R}|}{\nu^{\frac{n}{n-1}}i^{\frac{n}{n-1}\frac{q-1}{q}}}.\]
Recalling (4.15), one can easily conclude that
\[|B_{2R}\cap\{u\leq 2\delta t\}|\leq\frac{c|B_{2R}|}{\nu^{\frac{n}{n-1}}|\log\delta|^{\frac{n}{n-1}\frac{q-1}{q}}} \tag{4.20}\]
with \(c=c(\texttt{data}(B_{4R})).\)
_Step 3: The case \((4.13)_{2}\) and \((4.16)_{2}\)._ In this case, let \(\beta\in\{1,\ldots,i\}\) be such that
\[\left(\frac{2^{-\beta}t}{R^{s}}\right)^{p}<a_{B_{4R}}^{-}\left(\frac{2^{-\beta}t}{R}\right)^{q}\quad\text{but}\quad\left(\frac{2^{-\beta+1}t}{R^{s}}\right)^{p}\geq a_{B_{4R}}^{-}\left(\frac{2^{-\beta+1}t}{R}\right)^{q}.\]
By using (4.12) for \(l=2^{-\beta+2}t,\) we have
\[\left(\frac{2^{-\beta+2}t}{R^{s}}\right)^{p}|B_{2R}|\stackrel{{(4.12)}}{{\geq}}\frac{1}{c}\int_{B_{2R}}\int_{B_{2R}}\frac{(u(x)-2^{-\beta+2}t)_{+}^{p-1}(u(y)-2^{-\beta+2}t)_{-}}{|x-y|^{n+sp}}\,dxdy\]
\[\geq\frac{1}{cR^{n+sp}}\int_{B_{2R}\cap\{u\geq t\}}(u(x)-2^{-\beta+2}t)^{p-1}\,dx\int_{B_{2R}\cap\{u\leq 2^{-\beta+1}t\}}(2^{-\beta+2}t-u(y))\,dy\]
\[\geq\frac{1}{cR^{n+sp}}2^{-\beta+1}t^{p}|B_{2R}\cap\{u\geq t\}||B_{2R}\cap\{u\leq 2^{-\beta+1}t\}|\]
\[\stackrel{{(4.7)}}{{\geq}}\frac{1}{cR^{sp}}2^{-\beta+1}\nu t^{p}|B_{2R}\cap\{u\leq 2^{-\beta+1}t\}|.\]
Consequently,
\[|B_{2R}\cap\{u\leq 2^{-\beta+1}t\}|\leq\frac{c(2^{-\beta+1})^{p-1}}{\nu}|B_{2R}|.\]
On the other hand, by the same computations as in (4.18), we discover that for all \(j\in\{\beta,\ldots,i\},\)
\[2^{-\beta}t|B_{2R}\cap\{u\leq 2^{-\beta}t\}|^{1-\frac{1}{n}}\leq\frac{c}{\nu }\int_{A_{j}}|Du|\,dx.\] Following the same arguments used in (4.18)-(4.19), we have the inequality \[\nu^{\frac{q}{q-1}}|B_{2R}\cap\{u\leq 2^{-i}t\}|^{\frac{n-1}{n}\frac{q}{q-1}} \leq cR^{\frac{n-q}{q-1}}|A_{j}|.\] Sum \(j=\beta,\beta+1,\ldots,i,\) to find \[\nu^{\frac{q}{q-1}}(i-\beta+1)|B_{2R}\cap\{u\leq 2^{-i}t\}|^{ \frac{n-1}{n}\frac{q}{q-1}} \leq cR^{\frac{n-q}{q-1}}\sum_{j=\beta}^{i}|A_{j}|\] \[\leq cR^{\frac{n-q}{q-1}}|B_{2R}\cap\{2^{-i}t<u\leq 2^{-\beta+1}t\}|\] \[\leq cR^{\frac{n-q}{q-1}}\frac{(2^{-\beta+1})^{p-1}}{\nu}|B_{2R}| \eqsim\frac{2^{-\beta(p-1)}}{\nu}|B_{2R}|^{\frac{n-1}{n}\frac{q}{q-1}}.\] Thus \[|B_{2R}\cap\{u\leq 2^{-i}t\}|\leq c|B_{2R}|\left(\frac{2^{-\beta(p-1)}}{\nu^{1 +\frac{q}{q-1}}(i-\beta+1)}\right)^{\frac{n}{n-1}\frac{q-1}{q}},\] which together with (4.15) implies \[|B_{2R}\cap\{u\leq 2\delta t\}|\leq c|B_{2R}|\left(\frac{2^{-\beta(p-1)}}{\nu^ {1+\frac{q}{q-1}}\left(\frac{-\log\delta}{\log 2}-\beta+1\right)}\right)^{\frac{n}{n-1} \frac{q-1}{q}}.\] We first consider the case that \(2^{-\beta}\leq\delta^{\frac{1}{2}}.\) Recall (4.15) and the fact that \(\beta\leq i-1,\) to observe \[\frac{-\log\delta}{\log 2}\geq\beta+2.\] Then we have \[|B_{2R}\cap\{u\leq 2\delta t\}|\leq c|B_{2R}|\left(\frac{\delta^{\frac{p-1}{2}}}{ \nu^{1+\frac{q}{q-1}}}\right)^{\frac{n}{n-1}\frac{q-1}{q}}\leq c|B_{2R}|\left( \frac{\delta^{\frac{p-1}{2}}}{\nu^{\frac{2q}{q-1}}}\right)^{\frac{n}{n-1} \frac{q-1}{q}}. \tag{4.21}\] We next consider the case \(2^{-\beta}>\delta^{\frac{1}{2}}.\) Then \(\frac{-\log\delta}{2\log 2}>\beta,\) and so \[\begin{split}|B_{2R}\cap\{u\leq 2\delta t\}|&\leq c|B_{2R}| \left(\frac{1}{\nu^{\frac{2q}{q-1}}\left(-\frac{\log\delta}{2\log 2}+1\right)}\right)^{\frac{n}{n-1}\frac{q-1}{q}}\\ &\leq c|B_{2R}|\left(\frac{1}{\nu^{\frac{2q}{q-1}}|\log\delta|} \right)^{\frac{n}{n-1}\frac{q-1}{q}}.\end{split} \tag{4.22}\] Combining (4.14), (4.20), (4.21) and (4.22), we conclude with \[|B_{2R}\cap\{u\leq 2\delta t\}|\leq\frac{c}{\nu^{\max\left\{\frac{2q}{q-1}, \frac{2n}{n-1}\right\}}}\left(\delta^{\frac{p-1}{2}}+\frac{1}{|\log\delta|^{ \frac{n}{n-1}\frac{q-1}{q}}}\right)|B_{2R}|,\] which completes the proof. Using Lemma 4.2, we now show the expansion of positivity. **Lemma 4.3**.: _Let \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) be a minimizer of \(\mathcal{E}\) which is nonnegative in a ball \(B_{4R}\Subset\Omega\) with \(R\leq 1.\) When \(sp\leq n\), assume further that \(u\) is bounded in \(B_{4R}.\) Suppose that_ \[|B_{2R}\cap\{u\geq t\}|\geq\nu|B_{2R}|\] _for some \(\nu\in(0,1)\) and \(t>0.\) Then there exists \(\delta=\delta(\mathtt{data}(B_{4R}),\nu)\in(0,\frac{1}{8}]\) such that if_ \[\mathrm{Tail}(u_{-};4R)\leq g_{B_{4R}}(\delta t), \tag{4.23}\] _then we have \(u\geq\delta t\) in \(B_{R}\)._ Proof.: If \(a^{-}_{B_{4R}}=0,\) then (4.23) follows from [23, Lemma 6.5]. Hence we only consider the case \(a^{-}_{B_{4R}}>0,\) in which \(u\in W^{1,q}(B_{2R}).\) Choose numbers \(h,k\) such that \(\delta t\leq h<k\leq 2\delta t\) and radii \(\rho,r\) such that \(2R\leq\rho<r\leq 4R.\) Define \[A^{-}(h,\rho)=B_{\rho}\cap\{u\leq h\}.\] Note that we can always choose \[\kappa\coloneqq\frac{p_{s}^{*}}{p}<\frac{q^{*}}{q}. \tag{4.24}\] Indeed, if \(q<n,\) then we have \(\kappa=n/(n-sp)<n/(n-q)=q^{*}/q\) from (1.2). If \(q\geq n,\) then we can choose the number \(q^{*}\) large enough to satisfy (4.24). 
Recalling that \(u\in W^{1,q}(B_{2R})\), we now apply Sobolev's embedding theorem and (2.1) to have
\[\left[\left(\frac{k-h}{\rho^{s}}\right)^{p}+a^{-}_{B_{4R}}\left(\frac{k-h}{\rho}\right)^{q}\right]\left(\frac{|A^{-}(h,\rho)|}{|B_{\rho}|}\right)^{\frac{1}{\kappa}}\]
\[\leq\left(\fint_{B_{\rho}}\left[\left(\frac{(u-k)_{-}}{\rho^{s}}\right)^{p}+a^{-}_{B_{4R}}\left(\frac{(u-k)_{-}}{\rho}\right)^{q}\right]^{\kappa}\,dx\right)^{\frac{1}{\kappa}}\]
\[\leq c\fint_{B_{\rho}}\int_{B_{\rho}}\frac{|(u(x)-k)_{-}-(u(y)-k)_{-}|^{p}}{|x-y|^{n+sp}}\,dxdy+c\fint_{B_{\rho}}a^{-}_{B_{4R}}|D(u-k)_{-}|^{q}\,dx\]
\[\quad+c\fint_{B_{\rho}}\left[\left(\frac{(u-k)_{-}}{\rho^{s}}\right)^{p}+a^{-}_{B_{4R}}\left(\frac{(u-k)_{-}}{\rho}\right)^{q}\right]\,dx\]
\[\leq c\fint_{B_{\rho}}\int_{B_{\rho}}\frac{|(u(x)-k)_{-}-(u(y)-k)_{-}|^{p}}{|x-y|^{n+sp}}\,dxdy+c\fint_{B_{\rho}}a^{-}_{B_{4R}}|D(u-k)_{-}|^{q}\,dx\]
\[\quad+c\left[\left(\frac{\delta t}{\rho^{s}}\right)^{p}+a^{-}_{B_{4R}}\left(\frac{\delta t}{\rho}\right)^{q}\right]\frac{|A^{-}(k,r)|}{|B_{r}|},\]
where we have also used the fact that \(k\leq 2\delta t.\) Applying Lemma 4.1 to the right-hand side of the above display, and then recalling (4.6) and the fact that \(\rho\in[2R,4R]\), we find
\[G_{B_{4R}}(k-h)\left(\frac{|A^{-}(h,\rho)|}{|B_{\rho}|}\right)^{\frac{1}{\kappa}} \tag{4.25}\]
\[\leq\frac{c}{|B_{r}|}\left(\frac{r}{r-\rho}\right)^{n+q}\left[\frac{\|(u-k)_{-}\|^{p}_{L^{p}(B_{r})}}{r^{sp}}+a^{-}_{B_{4R}}\frac{\|(u-k)_{-}\|^{q}_{L^{q}(B_{r})}}{r^{q}}\right]\]
\[\quad+\frac{c}{|B_{r}|}\left(\frac{r}{r-\rho}\right)^{n+q}\|(u-k)_{-}\|_{L^{1}(B_{r})}\cdot\mathrm{Tail}((u-k)_{-};r)\]
\[\quad+cG_{B_{4R}}(\delta t)\frac{|A^{-}(k,r)|}{|B_{r}|}.\]
For the right-hand side of the above inequality, note that
\[\|(u-k)_{-}\|^{\theta}_{L^{\theta}(B_{r})}=\int_{A^{-}(k,r)}(k-u(x))^{\theta}\,dx\leq|A^{-}(k,r)|k^{\theta} \tag{4.26}\]
for any \(\theta\geq 1.\) Also, since \(r\in[2R,4R]\) and \(k\in[\delta t,2\delta t]\), we discover
\[\mathrm{Tail}((u-k)_{-};r)=\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{(k-u(x))_{+}^{p-1}}{|x|^{n+sp}}\,dx \tag{4.27}\]
\[\leq c\left[k^{p-1}\int_{\mathbb{R}^{n}\setminus B_{\rho}}\frac{dx}{|x|^{n+sp}}+\int_{\mathbb{R}^{n}\setminus B_{4R}}\frac{u_{-}(x)^{p-1}}{|x|^{n+sp}}\,dx\right]\]
\[=c\left[\rho^{-sp}k^{p-1}+\mathrm{Tail}(u_{-};4R)\right]\]
\[\overset{(4.23)}{\leq}cg_{B_{4R}}(\delta t).\]
Connecting (4.26) and (4.27) to (4.25), we have
\[G_{B_{4R}}(k-h)\left(\frac{|A^{-}(h,\rho)|}{|B_{\rho}|}\right)^{\frac{1}{\kappa}}\leq c\left(\frac{r}{r-\rho}\right)^{n+q}G_{B_{4R}}(k)\frac{|A^{-}(k,r)|}{|B_{r}|}\]
and so
\[\frac{|A^{-}(h,\rho)|}{|B_{\rho}|}\leq c\left(\frac{r}{r-\rho}\right)^{(n+q)
\kappa}\left(\frac{G_{B_{4R}}(k)}{G_{B_{4R}}(k-h)}\right)^{\kappa}\left(\frac{ |A^{-}(k,r)|}{|B_{r}|}\right)^{\kappa}. \tag{4.28}\] For \(i\in\mathbb{N}\cup\{0\}\), define \[r_{i}=(1+2^{-i})R,\quad k_{i}=(1+2^{-i})\delta t\quad\text{and}\quad\phi_{i} \coloneqq\frac{|A^{-}(k_{i},r_{i})|}{|B_{r_{i}}|}.\] Accordingly, we apply (4.28) with the choices \(h=k_{i}\), \(k=k_{i-1}\), \(\rho=r_{i}\) and \(r=r_{i-1}\). Then, since \(k_{i-1}-k_{i}=2^{-i}\delta t\) and \(\frac{r_{i-1}}{r_{i-1}-r_{i}}\leq 2^{i}\), we arrive at \[\phi_{i}\leq c_{2}2^{i(n+2q)\kappa}\phi_{i-1}^{\kappa}\] with \(c_{2}=c_{2}(\texttt{data}(B_{4R}))\). Choose \(\delta=\delta(\texttt{data}(B_{4R}),\nu)\in(0,\frac{1}{8}]\) such that \[\tau\equiv\tau(\delta)\coloneqq\frac{c_{1}}{\nu^{\max\{\frac{2q}{q-1},\frac{ 2q}{n-1}\}}}\left(\delta^{\frac{p-1}{2}}+\frac{1}{|\log\delta|^{\frac{n}{n-1} \frac{q-1}{q}}}\right)\leq c_{2}^{-\frac{1}{1-\kappa}}2^{-(n+2q)\frac{\kappa} {(1-\kappa)^{2}}},\] where \(c_{1}\) is the constant determined in Lemma 4.2. Then we apply Lemma 4.2 in order to have \[\phi_{0}=\frac{|A^{-}(2\delta t,2R)|}{|B_{2R}|}\leq c_{2}^{-1/(\kappa-1)}2^{-( n+2q)\kappa/(\kappa-1)^{2}}.\] Therefore, Lemma 2.4 implies \(\lim_{i\to\infty}\phi_{i}=0\), and we conclude that \(u\geq\delta t\) in \(B_{R}\). ## 5. Proof of Theorems 1.3 and 1.4 In this section, we prove Theorems 1.3 and 1.4. As in the previous section, we assume that \(K_{sp}\) is symmetric and satisfy (1.5), \(F\) satisfies (1.4), and \(a(\cdot)\) satisfies (1.3) and (1.10). Also, we assume that \(s,p,q\) and \(\alpha\) satisfy (1.2) and (1.12). ### Proof of Theorem 1.3 By translation, without loss of generality we assume \(x_{0}=0\). Let \(\delta\in(0,1/8]\) be the constant defined in Lemma 4.3. Choose \(\gamma=\gamma(\texttt{data}(B_{4R}))\in(0,1)\) such that \[0<\gamma\leq\min\left\{\frac{s}{2},\log_{4}\left(\frac{2}{2-\delta}\right)\right\} \tag{5.1}\] and \[\int_{4}^{\infty}\frac{(\rho^{\gamma}-1)^{p-1}}{\rho^{1+sp}}\,d\rho\leq\frac{s \delta^{p-1}}{8^{p+1}n|B_{1}|}. \tag{5.2}\] Observe that the left-hand side of (5.2) is an increasing function of \(\gamma\). Thus, if (5.2) holds for \(\gamma\) determined in (5.1), then (5.2) holds for any \(\beta\leq\gamma\) as well. We choose the number \[j_{0}\coloneqq\left\lceil\frac{2}{sp}\log_{4}\left(\frac{2^{p}\left(1+n|B_{1}| /(sp)\right)}{\delta^{p-1}}\right)\right\rceil, \tag{5.3}\] where \(\lceil t\rceil\) denotes the least integer greater than or equal to \(t\). We will show that there exist a non-decreasing sequence \(\{m_{i}\}\) and a non-increasing sequence \(\{M_{i}\}\) such that for any \(i\in\mathbb{N}\cup\{0\}\), \[m_{i}\leq u\leq M_{i}\ \ \text{in}\ \ B_{4^{1-i}R}\quad\text{and}\quad M_{i}-m_{i}=4 ^{-\gamma i}L, \tag{5.4}\] where \(L\) is defined as \[L\coloneqq 2^{1+sj_{0}}\|u\|_{L^{\infty}(B_{4R})}+[(4R)^{sp}\text{Tail}(u;4R)]^{ \frac{1}{p-1}}. \tag{5.5}\] We use strong induction on \(i\). Let \(m_{i}\coloneqq-4^{-\gamma i}L/2\) and \(M_{i}\coloneqq 4^{-\gamma i}L/2\) with \(i=0,\dots,j_{0}\). Then from (5.1) and (5.5) we notice that (5.4) holds for \(i=0,\dots,j_{0}\). Indeed, we have \[m_{i}=-4^{-\gamma i}L/2\leq-4^{-si/2}2^{sj_{0}}\|u\|_{L^{\infty}(B_{4R})}=-2^{- si}2^{sj_{0}}\|u\|_{L^{\infty}(B_{4R})}\leq-\|u\|_{L^{\infty}(B_{4R})}\leq u(x)\] for a.e. \(x\in B_{4^{1-i}R}\); a similar argument also shows that \(u\leq M_{i}\) a.e. in \(B_{4^{1-i}R}\). Now, we choose an integer \(j\geq j_{0}\) and assume the sequences \(\{m_{i}\}\) and \(\{M_{i}\}\) are constructed for \(i\in\{1,\dots,j\}\). 
Then we are going to prove (5.4) for \(i=j+1\) by constructing \(m_{j+1}\) and \(M_{j+1}\) properly. We start by observing that either
\[\left|B_{4^{1-j}R/2}\cap\left\{u\geq m_{j}+\frac{M_{j}-m_{j}}{2}\right\}\right|\geq\frac{1}{2}\left|B_{4^{1-j}R/2}\right| \tag{5.6}\]
or
\[\left|B_{4^{1-j}R/2}\cap\left\{u\geq m_{j}+\frac{M_{j}-m_{j}}{2}\right\}\right|<\frac{1}{2}\left|B_{4^{1-j}R/2}\right|. \tag{5.7}\]
We set
\[w\coloneqq\begin{cases}u-m_{j}&\text{if (5.6) holds},\\ M_{j}-u&\text{if (5.7) holds},\end{cases}\]
so that, by the induction hypothesis, \(w\geq 0\) in \(B_{4^{1-j}R}\) and, in either case,
\[\left|B_{4^{1-j}R/2}\cap\left\{w\geq\frac{M_{j}-m_{j}}{2}\right\}\right|\geq\frac{1}{2}\left|B_{4^{1-j}R/2}\right|.\]
Applying the expansion of positivity in Lemma 4.3 (with \(R\) replaced by \(4^{-j}R\)) to \(w\), with \(t=\frac{M_{j}-m_{j}}{2}\) and \(\nu=\frac{1}{2}\), where the required tail condition is guaranteed by the choices of \(\gamma\) in (5.1)-(5.2), of \(j_{0}\) in (5.3) and of \(L\) in (5.5), we obtain
\[w\geq\delta\,\frac{M_{j}-m_{j}}{2}\quad\text{in }B_{4^{-j}R}.\]
In the case (5.6) we set \(M_{j+1}\coloneqq M_{j}\) and \(m_{j+1}\coloneqq M_{j}-4^{-\gamma(j+1)}L\), while in the case (5.7) we set \(m_{j+1}\coloneqq m_{j}\) and \(M_{j+1}\coloneqq m_{j}+4^{-\gamma(j+1)}L\); since \(4^{-\gamma}\geq 1-\frac{\delta}{2}\) by (5.1), these choices satisfy \(m_{j+1}\leq u\leq M_{j+1}\) in \(B_{4^{-j}R}\) and \(M_{j+1}-m_{j+1}=4^{-\gamma(j+1)}L\). We thus conclude that (5.4) holds for any \(i\in\mathbb{N}\cup\{0\}\), and Holder continuity of \(u\) follows in a standard way.

### Harnack's inequality

We next prove Harnack's inequality in Theorem 1.4.
The following lemma can be proved in a very similar way to Lemma 4.3.
**Lemma 5.1**.: _Let \(u\in\mathcal{A}(\Omega)\cap L_{sp}^{p-1}(\mathbb{R}^{n})\) be a minimizer of \(\mathcal{E}\) which is nonnegative in a ball \(B_{16R}\Subset\Omega\) with \(R\leq 1\). When \(sp\leq n\), assume further that \(u\) is bounded in \(B_{16R}\). Suppose that_
\[|B_{R}\cap\{u\geq t\}|\geq\nu^{k}|B_{R}|\]
_for some \(t>0\) and \(\nu\in(0,1)\). Then there exists \(\delta=\delta(\texttt{data}(B_{16R}),\nu)\in(0,\frac{1}{8}]\) such that if_
\[\mathrm{Tail}(u_{-};16R)\leq g_{B_{16R}}(\delta^{k}t),\]
_then we have \(u\geq\delta^{k}t\) in \(B_{R}\)._
Using Lemma 5.1, we have the following:
**Lemma 5.2**.: _Let \(u\in\mathcal{A}(\Omega)\cap L_{sp}^{p-1}(\mathbb{R}^{n})\) be a minimizer of \(\mathcal{E}\) which is nonnegative in a ball \(B_{16R}\Subset\Omega\) with \(R\leq 1\). When \(sp\leq n\), assume further that \(u\) is bounded in \(B_{16R}\). Then there exist constants \(\varepsilon_{0}\in(0,1)\) and \(c\geq 1\), both depending on \(\texttt{data}(B_{16R})\), such that_
\[\left(\fint_{B_{R}}u^{\varepsilon_{0}}\,dx\right)^{\frac{1}{\varepsilon_{0}}}\leq\inf_{B_{R}}u+cg_{B_{16R}}^{-1}(\mathrm{Tail}(u_{-};16R)). \tag{5.9}\]
Proof.: We assume that \(u\) does not vanish identically on \(B_{R}\), otherwise there is nothing to prove. Let \(\delta\in(0,\frac{1}{8}]\) be the constant determined in Lemma 5.1 with the choice \(\nu=\frac{1}{2}\). We accordingly set
\[\varepsilon_{0}\coloneqq\frac{\log\nu}{2\log\delta}=\frac{1}{2\log_{\frac{1}{2}}\delta}\in(0,1). \tag{5.10}\]
We claim that for any \(t\geq 0\),
\[\inf_{B_{R}}u+g_{B_{16R}}^{-1}(\mathrm{Tail}(u_{-};16R))\geq\delta\left(\frac{|A^{+}(t,R)|}{|B_{R}|}\right)^{\frac{1}{2\varepsilon_{0}}}t. \tag{5.11}\]
We only consider the case \(t\in[0,\sup_{B_{R}}u)\), otherwise (5.11) is trivial. For each \(t\in[0,\sup_{B_{R}}u)\), let \(k=k(t)\) be the unique integer satisfying
\[\log_{\frac{1}{2}}\frac{|A^{+}(t,R)|}{|B_{R}|}\leq k<1+\log_{\frac{1}{2}}\frac{|A^{+}(t,R)|}{|B_{R}|}. \tag{5.12}\]
Then (5.10) and (5.12) imply
\[\delta^{k}\geq\delta\left(\frac{|A^{+}(t,R)|}{|B_{R}|}\right)^{\frac{1}{2\varepsilon_{0}}}. \tag{5.13}\]
We assume that
\[\mathrm{Tail}(u_{-};16R)<g_{B_{16R}}(\delta^{k}t),\]
otherwise (5.11) again follows directly. Now, observe that (5.12) implies
\[|A^{+}(t,R)|\geq 2^{-k}|B_{R}|.\]
Then we are in a position to apply Lemma 5.1, which gives
\[u\geq\delta^{k}t\quad\text{in }B_{R},\]
and so
\[\inf_{B_{R}}u+g_{B_{16R}}^{-1}(\mathrm{Tail}(u_{-};16R))\geq\delta^{k}t. \tag{5.14}\]
Combining (5.14) and (5.13), we have (5.11). At this point, an argument similar to that in the proof of [23, Proposition 6.8] yields (5.9).
Now we have the following local sup-estimate.
**Lemma 5.3**.: _Let \(u\in\mathcal{A}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) be a minimizer of \(\mathcal{E}\) and \(B_{2r}\equiv B_{2r}(z)\Subset\Omega\) a ball. When \(sp\leq n\), assume further that \(u\) is bounded in \(B_{2r}\).
Then for any \(\delta\in(0,1)\), we have_ \[\sup_{B_{r}}u_{+}\leq c_{\delta}G_{B_{2r}}^{-1}\left(\fint_{B_{2r}}G_{B_{2r}}(u _{+})\,dx\right)+\delta\,g_{B_{r}}^{-1}\left(\mathrm{Tail}(u_{+};r)\right) \tag{5.15}\] _with \(c=c(\mathsf{data}(B_{2r}))\) and \(c_{\delta}=c_{\delta}(\mathsf{data}(B_{2r}),\delta)\)._ Proof.: For any \(j\in\mathbb{N}\cup\{0\}\), we write \[r_{j}=(1+2^{-j})r,\quad B_{j}=B_{r_{j}},\quad k_{j}=(1-2^{-j-1})2k_{0},\quad w_ {j}=(u-k_{j})_{+}.\] Observe that \[r<r_{j+1}<r_{j}<2r,\quad k_{j}<k_{j+1},\quad w_{j+1}\leq w_{j}.\] By using Lemma 4.1, we have \[\begin{split}&[w_{j+1}]_{W^{a,p}(B_{j+1})}^{p}+a_{B_{2r}}^{-}[w_{j +1}]_{W^{1,q}(B_{j+1})}^{q}\\ &\leq c\left(\frac{r_{j}}{r_{j}-r_{j+1}}\right)^{n+q}\left[\int_{ B_{j}}\left(\frac{w_{j}}{r_{j}^{s}}\right)^{p}dx+a_{B_{2r}}^{-}\int_{B_{j}} \left(\frac{w_{j}}{r_{j}}\right)^{q}\,dx+\|w_{j}\|_{L^{1}(B_{j})}(\mathrm{Tail }(w_{j};r_{j}))\right]\\ &\leq c\left(\frac{r_{j}}{r_{j}-r_{j+1}}\right)^{n+q}\left[\int_{ B_{j}}G_{B_{2r}}(w_{j})\,dx+\left(\int_{B_{j}}w_{j}\,dx\right)(\mathrm{Tail}(w_{j};r _{j}))\right]\end{split} \tag{5.16}\] for a constant \(c=c(\mathsf{data}(B_{2r}))\), where we have also used the relation that \[r_{j}\eqsim r\quad\Rightarrow\quad G_{B_{2r}}(t)\eqsim\left(\frac{t}{r_{j}^{ s}}\right)^{p}+a_{B_{2r}}^{-}\left(\frac{t}{r_{j}}\right)^{q}\quad\forall\;t\geq 0.\] Now, with \(\kappa\) defined in (4.24), we use (2.1) and Sobolev's embedding theorem to find \[\fint_{B_{j+1}}G_{B_{2r}}(w_{j+1})\,dx\leq\left(\frac{|A^{+}(k_{j +1},r_{j+1})|}{|B_{j+1}|}\right)^{\frac{1}{\kappa^{\prime}}}\left(\fint_{B_{j+1 }}[G_{B_{2r}}(w_{j+1})]^{\kappa}\,dx\right)^{\frac{1}{\kappa}}\] \[\leq c\left(\frac{|A^{+}(k_{j+1},r_{j+1})|}{|B_{j+1}|}\right)^{ \frac{1}{\kappa^{\prime}}}\left(\fint_{B_{j+1}}\int_{B_{j+1}}\frac{|w_{j+1}(x )-w_{j+1}(y)|^{p}}{|x-y|^{n+sp}}\,dxdy+a_{B_{2r}}^{-}\fint_{B_{j+1}}|Dw_{j+1}|^ {q}\,dx\right)\] \[\quad+c\left(\frac{|A^{+}(k_{j+1},r_{j+1})|}{|B_{j+1}|}\right)^{ \frac{1}{\kappa^{\prime}}}\fint_{B_{j+1}}G_{B_{2r}}(w_{j+1})\,dx.\] We also observe that \[\begin{split}&|A^{+}(k_{j+1},r_{j+1})|\leq\frac{1}{G_{B_{2r}}(k_{j +1}-k_{j})}\int_{A^{+}(k_{j},r_{j})}G_{B_{2r}}(w_{j})\,dx,\\ \fint_{B_{j+1}}w_{j+1}\,dx\leq\frac{1}{g_{B_{2r}}(k_{j+1}-k_{j})} \fint_{B_{j}}G_{B_{2r}}(w_{j})\,dx.\end{split} \tag{5.17}\] Combining (5.16)-(5.17), we find \[\begin{split}\fint_{B_{j+1}}G_{B_{2r}}(w_{j+1})\,dx& \leq\frac{c}{[G_{B_{2r}}(k_{j+1}-k_{j})]^{1/\kappa^{\prime}}} \left(\frac{r_{j}}{r_{j}-r_{j+1}}\right)^{n+q}\left(1+\frac{\mathrm{Tail}(w_{j };r_{j})}{g_{B_{2r}}(k_{j+1}-k_{j})}\right)\\ &\quad\cdot\left(\fint_{B_{j}}G_{B_{2r}}(w_{j})\,dx\right)^{1+ \frac{1}{\kappa^{\prime}}}.\end{split} \tag{5.18}\] Denoting \[a_{j}\eqsim\frac{1}{|B_{r}|}\int_{A^{+}(k_{j},r_{j})}G_{B_{2r}}(w_{j})\,dx,\] and recalling the definitions of \(k_{j}\) and \(r_{j}\), we see that (5.18) becomes \[a_{j+1}\leq\frac{c2^{(n+4)j}}{[G_{B_{2r}}(2^{-j}k_{0})]^{1/\kappa^{\prime}}} \left(1+\frac{\operatorname{Tail}(u_{+};r)}{g_{B_{2r}}(2^{-j}k_{0})}\right)a_{j }^{1+\frac{1}{\kappa^{\prime}}}.\] Here, if \(k_{0}\) is so large that \[\frac{\operatorname{Tail}(u_{+};r)}{g_{B_{2r}}(k_{0}/\delta)}\leq\frac{\delta^ {p}[\operatorname{Tail}(u_{+};r)]}{g_{B_{2r}}(k_{0})}\leq 1, \tag{5.19}\] then \[a_{j+1}\leq\frac{c_{2}2^{\theta j}}{\delta^{p}[G_{B_{2r}}(k_{0})]^{1/\kappa}}a_ {j}^{1+\frac{1}{\kappa^{\prime}}}\] holds for a constant \(c_{2}=c_{2}(\texttt{data}(B_{2r}))\), where \[\theta\coloneqq\frac{p}{\kappa^{\prime}}+n+p+q-1.\] We now fix 
\[k_{0}=G_{B_{2r}}^{-1}\left[\left(\frac{c_{2}}{\delta^{p}}\right)^{\kappa^{ \prime}}2^{\theta(\kappa^{\prime})^{2}}\int_{B_{2r}}G_{B_{2r}}(u_{+})\,dx \right]+\delta g_{B_{2r}}^{-1}\left(\operatorname{Tail}(u_{+};r)\right).\] Then (5.19) holds, and moreover we can apply Lemma 2.4 to conclude that \(a_{j}\to 0\) as \(j\to\infty\). In turn, an elementary manipulation gives the desired estimate (5.15). We are now ready to prove Theorem 1.4. Proof of Theorem 1.4.: By translation, we assume that \(x_{0}\) is the origin. _Step 1: Tail estimates._ First, we claim that for any \(z\in B_{R}\) and \(0<r\leq 2R\), \[\operatorname{Tail}(u_{+};z,r)\leq cg_{B_{r}(z)}\left(\sup_{B_{r}(z)}u\right)+ c\operatorname{Tail}(u_{-};z,r) \tag{5.20}\] holds for a constant \(c=c(\texttt{data}(B_{2R}))\). Indeed, denoting \(M\coloneqq\sup_{B_{r}(z)}u\), we apply (4.1) with \(k\equiv 2M\) to have \[\begin{split} I_{1}&\coloneqq\int_{B_{r/2}(z)}(u(x )-2M)_{-}\left[\int_{\mathbb{R}^{n}}\frac{(u(y)-2M)_{+}^{p-1}}{|x-y|^{n+sp}}\, dy\right]\,dx\\ &\leq c\left[\frac{\|(u-2M)_{-}\|_{L^{p}(B_{r}(z))}^{p}}{r^{sp}}+ a_{B_{r}(z)}^{-}\frac{\|(u-2M)_{-}\|_{L^{q}(B_{r}(z))}^{q}}{r^{q}}\right]\\ &\quad+c\|(u-2M)_{-}\|_{L^{1}(B_{r}(z))}\operatorname{Tail}((u-2M )_{-};z,r/2)\\ &\eqqcolon I_{2}.\end{split} \tag{5.21}\] For \(I_{1}\), we first notice that \[|x-y|\leq 2|y-z|\quad\text{for any}\quad x\in B_{r}(z)\quad\text{and}\quad y \in\mathbb{R}^{n}\setminus B_{r}(z).\] Also, from [23, Lemma 4.4] we obtain \[(u(y)-2M)_{+}^{p-1}\geq\min\{1,2^{2-p}\}u_{+}(y)^{p-1}-2^{p-1}M^{p-1}.\] From the above two observations and the fact that \(u\leq M\) on \(B_{r}(z)\), it follows that \[\begin{split}&\int_{B_{\frac{p}{2}}(z)}(u(x)-2M)_{-}\left[\int_{ \mathbb{R}^{n}}\frac{(u(x)-2M)_{+}^{p-1}}{|x-y|^{n+sp}}\,dy\right]\,dx\\ &\geq 2^{-n-sp}M\int_{B_{\frac{p}{2}(z)}}\left[\int_{\mathbb{R}^{n} \setminus B_{r}(z)}\frac{\min\{1,2^{2-p}\}u_{+}(y)^{p-1}-2^{p-1}M^{p-1}}{|y-z| ^{n+sp}}\,dy\right]\,dx\\ &\geq\frac{Mr^{n-sp}}{c}\operatorname{Tail}(u_{+};z,r)-cr^{n- sp}M^{p}.\end{split}\] On the other hand, since \(u\geq 0\) on \(B_{r}(z)\), we have \[I_{2}\leq cr^{n-sp}\left(M^{p}+a_{B_{r}(z)}^{-}r^{sp-q}M^{q}+M\mathrm{ Tail}(u_{-};z,r)\right).\] Merging the above two estimates together with (5.21) directly gives (5.20) as follows: \[\mathrm{Tail}(u_{+};z,r) \leq cr^{sp}\left(\frac{M^{p-1}}{r^{sp}}+a_{B_{r}(z)}^{-}\frac{M ^{q-1}}{r^{q}}+\frac{1}{r^{sp}}\mathrm{Tail}(u_{-};z,r)\right)\] \[\leq c\left(g_{B_{r}(z)}(M)+\mathrm{Tail}(u_{-};z,r)\right).\] _Step 2: Proof of (1.13)._ With \(\delta_{1}\in(0,1]\) being any number, we use Lemma 5.3 to have \[\sup_{B_{r}(z)}u\leq c_{\delta_{1}}G_{B_{r}(z)}^{-1}\left(\fint_{B_{2r}(z)}G_{ B_{r}(z)}(u)\,dx\right)+\delta_{1}g_{B_{r}(z)}^{-1}(\mathrm{Tail}(u_{+};z,r)),\] where \(c=c(\mathsf{data}(B_{2R}))\). Combining this estimate with (5.20), we find \[\sup_{B_{r}(z)}u \leq c_{\delta_{1}}G_{B_{r}(z)}^{-1}\left(\fint_{B_{2r}(z)}G_{B_ {r}(z)}(u)\,dx\right)+\delta_{1}g_{B_{r}(z)}^{-1}\left(g_{B_{r}(z)}\left(\sup_ {B_{r}(z)}u\right)+\mathrm{Tail}(u_{-};z,r)\right)\] \[\leq c_{\delta_{1}}G_{B_{r}(z)}^{-1}\left(\fint_{B_{2r}(z)}G_{B_ {r}(z)}(u)\,dx\right)+c\delta_{1}\left(\sup_{B_{r}(z)}u+g_{B_{r}(z)}^{-1}( \mathrm{Tail}(u_{-};z,r))\right). \tag{5.22}\] We next recall the exponent \(\varepsilon_{0}\in(0,1)\) determined in (5.10). 
Using Jensen's inequality with the convex function \(t\mapsto[G_{B_{r}(z)}^{-1}(t)]^{q}\), and then Young inequality with conjugate exponents \(q/(q-\varepsilon_{0})\) and \(q/\varepsilon_{0}\), where \(\varepsilon_{0}\) is determined in Lemma 5.2, we obtain \[G_{B_{r}(z)}^{-1}\left(\fint_{B_{2r}(z)}G_{B_{r}(z)}(u)\,dx\right) \leq\left(\fint_{B_{2r}(z)}u^{q}\,dx\right)^{\frac{1}{q}} \tag{5.23}\] \[\leq\left(\sup_{B_{2r}(z)}u\right)^{\frac{q-\varepsilon_{0}}{q}} \left(\fint_{B_{2r}(z)}u^{\varepsilon_{0}}\,dx\right)^{\frac{1}{q}}\] \[\leq\delta_{2}\sup_{B_{2r}(z)}u+c_{\delta_{2}}\left(\fint_{B_{2r}( z)}u^{\varepsilon_{0}}\,dx\right)^{\frac{1}{q_{0}}}\] for any \(\delta_{2}>0\). Combining (5.22) and (5.23) and taking \(\delta_{1},\delta_{2}\) sufficiently small, we obtain \[\sup_{B_{r}(z)}u\leq\frac{1}{2}\sup_{B_{2r}(z)}u+c\left(\fint_{B_{2r}(z)}u^{ \varepsilon_{0}}\,dx\right)^{\frac{1}{\varepsilon_{0}}}+c[r^{sp}\mathrm{Tail}( u_{-};z,r)]^{\frac{1}{p-1}}, \tag{5.24}\] where we have also used the fact that \(g_{B_{r}(z)}^{-1}(t)\leq(r^{sp}t)^{\frac{1}{p-1}}\) for any \(t\geq 0\). Now, let \(R\leq\rho<\tau\leq 2R\) be fixed. By employing (5.24) along with a suitable covering argument, we arrive at \[\sup_{B_{\rho}}u\leq\frac{1}{2}\sup_{B_{\tau}}u+\frac{c}{(\tau-\rho)^{n/q}}\|u \|_{L^{\varepsilon_{0}}(B_{2R})}+c[R^{sp}\mathrm{Tail}(u_{-};R)]^{\frac{1}{p-1 }}.\] Then an application of the technical lemma [23, Lemma 4.11] gives \[\sup_{B_{R}}u\leq c\left(\fint_{B_{2R}}u^{\varepsilon_{0}}\,dx\right)^{\frac{ 1}{\varepsilon_{0}}}+c\left[R^{sp}\mathrm{Tail}(u_{-};R)\right]^{\frac{1}{p-1}},\] which with (5.9) yields the desired Harnack's inequality (1.13).
2301.09107
A 'moment-conserving' reformulation of GW theory
We show how to construct an effective Hamiltonian whose dimension scales linearly with system size, and whose eigenvalues systematically approximate the excitation energies of GW theory. This is achieved by rigorously expanding the self-energy in order to exactly conserve a desired number of frequency-independent moments of the self-energy dynamics. Recasting $GW$ in this way admits a low-scaling O[$N^4$] approach to build and solve this Hamiltonian, with a proposal to reduce this further to O[$N^3$]. This relies on exposing a novel recursive framework for the density response moments of the random phase approximation (RPA), where the efficient calculation of its starting point mirrors the low-scaling approaches to compute RPA correlation energies. The frequency integration of $GW$ which distinguishes so many different GW variants can be performed without approximation directly in this moment representation. Furthermore, the solution to the Dyson equation can be performed exactly, avoiding analytic continuation, diagonal approximations or iterative solutions to the quasiparticle equation, with the full-frequency spectrum obtained from the complete solution of this effective static Hamiltonian. We show how this approach converges rapidly with respect to the order of the conserved self-energy moments, and is applied across the GW100 benchmark dataset to obtain accurate $GW$ spectra in comparison to traditional implementations. We also show the ability to systematically converge all-electron full-frequency spectra and high-energy features beyond frontier excitations, as well as avoiding discontinuities in the spectrum which afflict many other GW approaches.
Charles J. C. Scott, Oliver J. Backhouse, George H. Booth
2023-01-22T12:00:01Z
http://arxiv.org/abs/2301.09107v4
# A'moment-conserving' reformulation of GW theory ###### Abstract We show how to construct an effective Hamiltonian whose dimension scales linearly with system size, and whose eigenvalues systematically approximate the excitation energies of \(GW\) theory. This is achieved by rigorously expanding the self-energy in order to exactly conserve a desired number of frequency-independent moments of the self-energy dynamics. Recasting \(GW\) in this way admits a low-scaling \(\mathcal{O}[N^{4}]\) approach to build and solve this Hamiltonian, with a proposal to reduce this further to \(\mathcal{O}[N^{3}]\). This relies on exposing a novel recursive framework for the density response moments of the random phase approximation (RPA), where the efficient calculation of its starting point mirrors the low-scaling approaches to compute RPA correlation energies. The frequency integration of \(GW\) which distinguishes so many different \(GW\) variants can be performed without approximation directly in this moment representation. Furthermore, the solution to the Dyson equation can be performed exactly, avoiding analytic continuation, diagonal approximations or iterative solutions to the quasiparticle equation, with the full-frequency spectrum obtained from the complete solution of this effective static Hamiltonian. We show how this approach converges rapidly with respect to the order of the conserved self-energy moments, and is applied across the \(GW100\) benchmark dataset to obtain accurate \(GW\) spectra in comparison to traditional implementations. We also show the ability to systematically converge all-electron full-frequency spectra and high-energy features beyond frontier excitations, as well as avoiding discontinuities in the spectrum which afflict many other \(GW\) approaches. ## I Introduction Despite the phenomenal success of density functional theory (DFT) in electronic structure, its standard approach is both conceptually and (often) practically ill-suited for an accurate description of the energy levels in a material or chemical system [1]. These quantities are however essential for predictions of fundamental bandgaps and other charged excitation properties which govern the photo-dynamics, transport and response properties of a system. Into this, _GW_ theory has grown in popularity, first for materials and more recently for molecular systems, as a post-mean-field approach to obtain charged excitation spectra in a principled diagrammatic fashion, free from empiricism [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. The \(GW\) approach is based on Hedin's equations [15; 16], and in its most common formulation builds a self-energy to dress a reference description of the quasi-particles of a system (generally from DFT or Hartree-Fock (HF)) with an infinite resummation of all 'bubble' diagrams. These diagrams make up the random phase approximation (RPA) [17; 18; 19; 20], and physically describes all collective quantum charge fluctuations in the electron density from the reference state arising from their correlated mutual Coulomb repulsion. This dynamically screens the effective interaction between the constituent quasi-particles of the system, whose physics generally dominates in small gapped semi-conducting systems. The use of this RPA screened interaction in \(GW\) has therefore become widespread, correcting many of the failures of DFT for spectral properties. 
At the core of \(GW\) theory is a convolution, between the Green's function of the system, \(G(\omega)\), and the screened Coulomb interaction \(W_{p}(\omega)\), obtained (in general) at the RPA level of theory. This provides the dynamical part of the self-energy, \(\Sigma(\omega)\), formally written as \(\Sigma(\omega)=(i/2\pi)\int d\omega^{\prime}e^{i\eta\omega^{\prime}}G(\omega+ \omega^{\prime})W_{p}(\omega^{\prime})\). There are many different variants of \(GW\) theory [3; 13; 14; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34], which primarily differ due to i) the choice (or absence) of self-consistency conditions on \(G(\omega)\) and/or \(W_{p}(\omega)\) [21], ii) the approach to find the quasi-particle energies once \(\Sigma(\omega)\) is obtained (i.e. application of Dyson's equation), and iii) different approximations to perform the frequency integration in the convolution itself. A numerically exact formulation of this convolution entails an \(\mathcal{O}[N^{6}]\) scaling step, required to find the entire set of poles in \(W_{p}(\omega)\) from the RPA. However, there are a number of techniques to approximate this frequency integration which can reduce this scaling (generally down to \(\mathcal{O}[N^{3}-N^{4}]\)) based on plasmon pole approximations, analytic continuation, contour deformation and explicit grid resolved approaches for the dynamics of these quantities, amongst others. All of these approaches compress and approximate the dynamical resolution of the key quantities in order to simplify the resulting convolutional integral. Another key difference between the approaches rests on how the quasi-particle energies are updated from their mean-field molecular orbital (MO) energy starting point, once the self-energy has been constructed. This is formally an application of Dyson's equation, but is commonly approximated via a self-consistent solution to a quasi-particle (QP) equation entailing a diagonal approximation to the self-energy. This is valid when the quasiparticle energies are far from the self-energy poles, thereby asserting that the \(GW\) largely just provides a shift in the original MO energies, rather than introducing significant quasiparticle renormalization, additional satellite peaks from state splitting or relaxation of the mean-field electron density. These assumptions can however break down (especially in more correlated systems), while the numerical solution of the QP equation can also converge to different ('spurious') results based on the specifics of how it is solved. A thorough study of the discrepancies due to different approaches to both the QP equation solution and the frequency integral in the convolution can be found in Ref. [12]. In this work, we introduce a new approach to this frequency integration and reformulate \(GW\) theory with a number of desirable properties, while retaining a low scaling. The key step is that the self-energy is not represented as an explicitly dynamical quantity, but instead in terms of a series of static matrices representing the _moments_ of its frequency-dependence up to a given order. These can be directly obtained, and from them a compressed representation of the full self-energy can be algebraically constructed which only has a number of poles which scales linearly with system size, but nonetheless exactly preserves the moment distribution of the exact \(GW\) self-energy dynamics up to the desired order [45; 46; 47; 48; 49]. 
This order can be systematically increased to more finely resolve the full dynamical dependence of the \(GW\) self-energy. The dynamical information is therefore implicitly recast into a small number of _static_ matrices, each of which can be obtained in \(\mathcal{O}[N^{4}]\) time (with a proposed \(\mathcal{O}[N^{3}]\) algorithm also given). This removes the need for the definition of any frequency or time grids in which to resolve dynamical quantities, spectral broadening, finite temperatures, Fourier transforms or analytic continuation, with all dynamics implicitly represented by this series of static quantities. Furthermore, once these spectral moments of the self-energy are obtained, the QP equation and restrictions to a diagonal self-energy representation can be entirely removed, with an exact application of Dyson's equation possible in this 'moment' representation. This leads to the self-energy represented as a small number of explicit poles at specific energies which, taken together, exactly reproduce the moment distribution of the dynamics as described. This allows for a simple construction of the full frequency dependence of the resulting quasi-particle spectrum (including any additional emergent satellite structures from the correlations) via diagonalization in an 'upfolded' and explicit Hamiltonian representation. Moment expansions have a long history in the representation of dynamical quantities, with use in numerical approaches [50; 45], characterizing sum rules [51; 52], and as physical observables in their own right [53; 54; 55]. In Sec. II we show how these spectral moments of the self-energy can be directly constructed from moments of the Green's function and the two-point density-density response function from RPA, without any further approximation. We show how these can be used to directly obtain the full-frequency \(GW\) spectrum without the requirement of an explicit grid. In Sec. III we show how the RPA can be fully reformulated as a series expansion of moments of the density-density (dd) response, and in Sec. IV show how they can be efficiently obtained in \(\mathcal{O}[N^{4}]\) cost, based on ideas from the seminal work in 2010 by Furche and collaborators on low-scaling approaches for the RPA correlation energy [56]. Furthermore, in Sec. V we propose an approach to further reduce the scaling of the whole algorithm directly (rather than asymptotically) to cubic cost with system size, without invoking screening or locality assumptions. We then apply the approach in Sec. VI to the molecular \(GW100\) test set frequently used to benchmark \(GW\) implementations, demonstrating a rapid convergence of the moment expansion, and accurate and efficient results across this test set for the \(G_{0}W_{0}\) level of theory. ## II Moment-truncated GW theory In \(GW\) theory, the dynamical part of the self-energy, obtained as the convolution of \(G(\omega)\) and \(W_{p}(\omega)\), can be formally expanded as a sum over the \(\mathcal{O}[N^{2}]\) neutral excitations of RPA theory (representing the poles of the screened Coulomb interaction) and the charged excitations of the reference Green's function. In the absence of self-consistency (the most common '\(G_{0}W_{0}\)' formulation of the method, which we exclusively consider in this work), the Green's function is just given from the \(\mathcal{O}[N]\) mean-field molecular orbital energies.
This allows the self-energy to be explicitly evaluated in the frequency-domain as \[\begin{split}\Sigma_{pq}(\omega)=\sum_{\nu}\sum_{ia,jb,k}\frac{(pk|ia)(X^{\nu}_{ia}+Y^{\nu}_{ia})(X^{\nu}_{jb}+Y^{\nu}_{jb})(qk|jb)}{\omega-(\epsilon_{k}-\Omega_{\nu})-i0^{+}}\\ +\sum_{\nu}\sum_{ia,jb,c}\frac{(pc|ia)(X^{\nu}_{ia}+Y^{\nu}_{ia})(X^{\nu}_{jb}+Y^{\nu}_{jb})(qc|jb)}{\omega-(\epsilon_{c}+\Omega_{\nu})+i0^{+}}.\end{split} \tag{1}\] In these expressions, \(\Omega_{\nu}\) are the neutral excitation energies from RPA theory, and \(X^{\nu}_{ia}\) and \(Y^{\nu}_{ia}\) are the excitation and de-excitation amplitude components associated with this excitation. These are expanded in the basis of hole and particle spin-orbitals of the reference mean-field, represented by the indices \(i,j,k\) (\(a,b,c\)) of dimension \(o\) (\(v\)) respectively, and with orbital energies denoted by \(\epsilon_{x}\). The bare two-electron integrals are denoted by \((pk|ia)\) in standard Mulliken ('chemists') notation, which are therefore screened by the RPA reducible density response. More details on these quantities are given in Sec. III. The first term in Eq. 1 therefore represents the 'lesser' part, and the second term the 'greater' part of the full \(G_{0}W_{0}\) self-energy. Exact evaluation of the RPA excitations (\(\Omega_{\nu}\)) scales as \(\mathcal{O}[N^{6}]\), rendering it unsuitable for large-scale implementations. However, in this work, we are only interested in evaluating the spectral moments of the resulting self-energy, and finding a compressed representation of the self-energy with fewer poles, but which by construction matches the spectral moments up to a desired order. These frequency-independent spectral moments are defined separately for the greater and lesser parts, and represent the \(n^{\rm th}\)-order moments of the resulting dynamical distributions, as \[\Sigma^{(n,<)}_{pq} =-\frac{1}{\pi}\int_{-\infty}^{\mu}\,{\rm Im}[\Sigma(\omega)_{pq}]\omega^{n}d\omega \tag{2}\] \[=(-1)^{n}\left.\frac{d^{n}\Sigma(\tau)_{pq}}{d\tau^{n}}\right|_{\tau=0^{-}}, \tag{3}\] and similarly \[\Sigma^{(n,>)}_{pq} =\frac{1}{\pi}\int_{\mu}^{\infty}\,{\rm Im}[\Sigma(\omega)_{pq}]\omega^{n}d\omega \tag{4}\] \[=(-1)^{n}\left.\frac{d^{n}\Sigma(\tau)_{pq}}{d\tau^{n}}\right|_{\tau=0^{+}}, \tag{5}\] where \(\mu\) represents the chemical potential of the system. This exposes the relationship of these spectral moments to a Taylor expansion of the short-time dynamics of the greater and lesser parts of the self-energy, with the moments defining the integrated weight, mean, variance, skew, and higher-order moments of the dynamical distribution of each element of the self-energy in the frequency domain. Applied to the \(GW\) self-energy of Eq. 1, the moments can be constructed as \[\Sigma^{(n,<)}_{pq}=\sum_{\nu}\sum_{ia,jb}\sum_{k}\left[(pk|ia)(X^{\nu}_{ia}+Y^{\nu}_{ia})(\epsilon_{k}-\Omega_{\nu})^{n}(X^{\nu}_{jb}+Y^{\nu}_{jb})(qk|jb)\right], \tag{6}\] \[\Sigma^{(n,>)}_{pq}=\sum_{\nu}\sum_{ia,jb}\sum_{c}\left[(pc|ia)(X^{\nu}_{ia}+Y^{\nu}_{ia})(\epsilon_{c}+\Omega_{\nu})^{n}(X^{\nu}_{jb}+Y^{\nu}_{jb})(qc|jb)\right]. \tag{7}\] The moment distribution of a convolution of two quantities can be expressed via the binomial theorem as a sum of products of the moments of the individual quantities. This enables us to split apart the expressions above into products of the individual Green's function and density-density response moments.
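To make these definitions concrete, the short NumPy sketch below evaluates such spectral moments for a self-energy written as an explicit sum over poles (as in Eq. 1), i.e. \(\Sigma^{(n)}_{pq}=\sum_{x}v_{px}v_{qx}^{*}E_{x}^{n}\); the helper name, array shapes and toy numbers are illustrative assumptions, not the production algorithm developed below.

```python
import numpy as np

def pole_moments(energies, residues, nmom):
    """Moments Sigma^(n)_{pq} = sum_x residues[p, x] residues[q, x]^* energies[x]^n
    of a self-energy given as an explicit sum over poles (illustrative sketch only).

    energies : (npole,) pole positions, e.g. (eps_k - Omega_nu) for the lesser sector.
    residues : (norb, npole) coupling vectors, e.g. sum_ia (pk|ia)(X + Y)^nu_ia.
    """
    return np.array([(residues * energies**n) @ residues.conj().T
                     for n in range(nmom)])

# Toy hole-sector example: two poles below the chemical potential, three orbitals.
energies = np.array([-0.9, -1.4])
residues = np.array([[0.08, 0.01],
                     [0.02, 0.07],
                     [0.01, 0.03]])
moments = pole_moments(energies, residues, nmom=4)
print(moments[0])  # zeroth moment: integrated spectral weight of the lesser self-energy
```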
Defining the \(n^{\rm th}\)-order spectral moments of the RPA density-response, summed over both particle-hole excitation and de-excitation components, as \[\eta^{(n)}_{ia,jb}=\sum_{\nu}(X^{\nu}_{ia}+Y^{\nu}_{ia})\Omega^{n}_{\nu}(X^{\nu}_{jb}+Y^{\nu}_{jb}), \tag{8}\] we can rewrite Eqs. 6 and 7 as \[\Sigma^{(n,<)}_{pq}=\sum_{ia,jb,k}\sum_{t=0}^{n}{n\choose t}(-1)^{t}\epsilon_{k}^{n-t}(pk|ia)\eta^{(t)}_{ia,jb}(qk|jb) \tag{9}\] \[\Sigma^{(n,>)}_{pq}=\sum_{ia,jb,c}\sum_{t=0}^{n}{n\choose t}\epsilon_{c}^{n-t}(pc|ia)\eta^{(t)}_{ia,jb}(qc|jb). \tag{10}\] Evaluating the self-energy spectral moments of Eqs. 9-10 up to a desired order \(n\) represents the central step of the proposed 'moment-conserving' \(GW\) formulation, defining the convolution between \(G(\omega)\) and \(W_{p}(\omega)\) in this moment expansion of the dynamics. In Sec. III we show how the RPA can be reformulated to define specific constraints on the relations between different orders of the RPA density-response moments, \(\eta^{(n)}_{ia,jb}\). These relations are subsequently used in Sec. IV to demonstrate how the self-energy moments can be evaluated in \(\mathcal{O}[N^{4}]\) scaling, with Sec. V going further to propose a cubic scaling algorithm for their evaluation (and therefore full \(GW\) algorithm). In addition to these moments representing the dynamical part of the self-energy, we also require a static (exchange) part of the self-energy, \(\mathbf{\Sigma}_{\infty}\), which can be calculated as \[\mathbf{\Sigma}_{\infty}=\mathbf{K}[\mathbf{D}]-\mathbf{V}_{\rm xc}, \tag{11}\] where \(\mathbf{K}[\mathbf{D}]\) is the exchange matrix evaluated with the reference density matrix. This reference density matrix is found from a prior mean-field calculation via a self-consistent Fock or Kohn-Sham single-particle Hamiltonian, \(\mathbf{f}[\mathbf{D}]\), with \(\mathbf{V}_{\rm xc}\) being the exchange-correlation potential used in \(\mathbf{f}\). Note that for a Hartree-Fock reference, this static self-energy contribution is zero. ### Full \(GW\) spectrum from self-energy moments Once the moments of the self-energy are found, it is necessary to obtain the resulting dressed \(GW\) excitations and spectrum. While this is formally an application of Dyson's equation, the most common approach is to find each \(GW\) excitation explicitly via a self-consistent solution (or linearized approximation) of the quasiparticle equation, while assuming a diagonal self-energy in the MO basis [12]. This assumption neglects physical effects due to electron density relaxation and mixing or splitting of quasiparticle states in more strongly correlated systems. In this work, we allow for an exact invocation of Dyson's equation, which can be achieved straightforwardly in this moment domain of the effective dynamics, allowing extraction of quasi-particle weights associated with transitions, and a full matrix-valued form of the resulting \(GW\) Green's function over all frequencies, with all poles obtained analytically without artificial broadening. This is achieved by constructing an 'upfolded' representation of an effective Hamiltonian, consisting of coupling between a physical and 'auxiliary' space (with the latter describing the effect of the moment-truncated self-energy).
Specifically, we seek an effective static Hamiltonian, \[\mathbf{\tilde{H}}=\begin{bmatrix}\mathbf{f}+\mathbf{\Sigma}_{\infty}&\mathbf{\tilde{W}}\\ \mathbf{\tilde{W}}^{\dagger}&\mathbf{\tilde{d}}\end{bmatrix}, \tag{12}\] whose eigenvalues are the charged excitation energies at the level of the moment-truncated \(GW\), with quasiparticle weights and Dyson orbitals explicitly obtained from the projection of the corresponding eigenvectors into the physical (MO) space. The full Green's function can therefore be constructed as \[\mathbf{G}(\omega)=\left(\omega\mathbf{I}-\mathbf{f}-\mathbf{\Sigma}_{\infty}-\mathbf{\tilde{W}}(\omega\mathbf{I}-\mathbf{\tilde{d}})^{-1}\mathbf{\tilde{W}}^{\dagger}\right)^{-1}. \tag{13}\] Such upfolded representations have been considered previously in diagrammatic theories, in a recasting of GF2 theory in terms of its moments [46; 57; 47] as well as more recently to \(GW\) amongst others [58; 59; 60; 61; 62; 63]. For 'exact' \(G_{0}W_{0}\), this auxiliary space (i.e. the dimension of \(\mathbf{d}\)) must scale as \(\mathcal{O}[N^{3}]\) [60; 61]. However, in the moment truncation, \(\mathbf{\tilde{W}}\) and \(\mathbf{\tilde{d}}\) can be directly constructed such that their effect exactly matches that of a truncated set of conserved \(GW\) self-energy moments (separately in the particle and hole sectors), yet rigorously scales in dimension as \(\mathcal{O}[nN]\), where \(n\) is the number of conserved self-energy moments. This allows for a complete diagonalization of \(\mathbf{\tilde{H}}\), obtaining all excitations in a single shot, and a reconstruction of the full \(GW\) Green's function from its Lehmann representation in \(\mathcal{O}[(nN)^{3}]\) computational effort, avoiding the need for any grids or iterative solutions once \(\mathbf{\tilde{H}}\) is found. To find this effective upfolded representation of the moment-conserving dynamics, we modify the block Lanczos procedure to ensure the construction of a \(\mathbf{\tilde{H}}\) of minimal size, whose effective hole and particle self-energy moments exactly match the ones from Eqs. 9-10. We first proceed by splitting the auxiliary space into a space denoting the effect of the hole (lesser) and particle (greater) self-energy, and consider each in turn. Focusing on the lesser self-energy, we can construct an _exact_ upfolded self-energy representation [63], via inspection from Eq. 1, with \[\mathbf{W}_{p,k\nu}=\sum_{ia}(pk|ia)(X^{\nu}_{ia}+Y^{\nu}_{ia}) \tag{14}\] \[\mathbf{d}_{k\nu,l\nu^{\prime}}=(\epsilon_{k}-\Omega_{\nu})\delta_{k,l}\delta_{\nu,\nu^{\prime}}, \tag{15}\] where we remove the tilde above upfolded auxiliary quantities when denoting the exact upfolded \(GW\) self-energy components.
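As a self-contained numerical illustration of this upfolded structure (with arbitrary toy values standing in for \(\mathbf{f}+\mathbf{\Sigma}_{\infty}\), \(\mathbf{W}\) and \(\mathbf{d}\)), the sketch below checks that diagonalizing the upfolded matrix and projecting onto the physical block reproduces the downfolded Green's function of Eq. 13; it is a minimal sketch of the identity being used, not the implementation of this work.

```python
import numpy as np

# Toy 'upfolded' Hamiltonian: 2 physical orbitals, 3 auxiliary (self-energy) poles.
f_stat = np.diag([-0.5, 0.3])                # f + Sigma_inf in the MO basis (toy values)
d = np.diag([-1.2, -0.8, 1.5])               # auxiliary pole energies
W = np.array([[0.10, 0.05, 0.02],
              [0.03, 0.08, 0.12]])           # physical-auxiliary couplings

H = np.block([[f_stat, W],
              [W.conj().T, d]])
evals, evecs = np.linalg.eigh(H)             # all charged excitations in one diagonalization
qp_weights = np.sum(np.abs(evecs[:2, :])**2, axis=0)  # weight of each state in the MO space

# Downfolded Green's function at a test frequency (cf. Eq. 13), with broadening eta.
omega, eta = 0.1, 1e-2
z = omega + 1j * eta
sigma_dyn = W @ np.linalg.inv(z * np.eye(3) - d) @ W.conj().T
G = np.linalg.inv(z * np.eye(2) - f_stat - sigma_dyn)

# Identical result from the Lehmann representation of the upfolded eigen-pairs.
G_lehmann = evecs[:2, :] @ np.diag(1.0 / (z - evals)) @ evecs[:2, :].conj().T
assert np.allclose(G, G_lehmann)
```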
We now consider the projection of the exact \(GW\) upfolded matrix representation into a truncated block tridiagonal form, as \[\mathbf{\tilde{H}}_{\text{tri}} =\mathbf{\tilde{q}}^{(j),\dagger}\begin{bmatrix}\mathbf{f}+\mathbf{ \Sigma}_{\infty}&\mathbf{W}\\ \mathbf{W}^{\dagger}&\mathbf{d}\end{bmatrix}\mathbf{\tilde{q}}^{(j)}\] \[=\begin{bmatrix}\mathbf{f}+\mathbf{\Sigma}_{\infty}&\mathbf{L}&& &\mathbf{0}\\ \mathbf{L}^{\dagger}&\mathbf{M}_{1}&\mathbf{C}_{1}&&&\\ &\mathbf{C}_{1}^{\dagger}&\mathbf{M}_{2}&\mathbf{C}_{2}&&\\ &&\mathbf{C}_{2}^{\dagger}&\mathbf{M}_{3}&\ddots&\\ &&\ddots&\ddots&\mathbf{C}_{j-1}\\ \mathbf{0}&&&\mathbf{C}_{j-1}^{\dagger}&\mathbf{M}_{j}\end{bmatrix}, \tag{16}\] where we define \(\mathbf{\tilde{q}}^{(j)}\) as \[\mathbf{\tilde{q}}^{(j)}=\begin{bmatrix}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{q}^{(j)}\end{bmatrix}. \tag{17}\] The \(\mathbf{q}^{(j)}\) are block Lanczos vectors of depth \(j\), which form a recursive Krylov space as \(\mathbf{q}^{(j)}=\begin{bmatrix}\mathbf{q}_{1}&\mathbf{q}_{2}&\cdots&\mathbf{ q}_{j}\end{bmatrix}\), ensuring that when taken together, they project to a block tridiagonal representation of the upfolded self-energy with \(j\) on-diagonal blocks as shown. The action of this block Lanczos tridiagonalization of the upfolded (hole or particle) self-energy is to exactly conserve these spectral moments of the self-energy [64; 65]. This block tridiagonal representation is equivalent to a truncated continued fraction form [66; 50], widely used in the representation of dynamical quantities [67; 68], and even as an expansion previously considered within \(GW\) theory [45]. We therefore seek to reformulate the Lanczos recursion in terms of just these moments, rather than the action of the full upfolded Hamiltonian which we seek to avoid due to its scaling. The initial couplings \(\mathbf{L}\) and Lanczos vectors \(\mathbf{q}_{1}\) can be found via a QR factorisation of the exact \(GW\) couplings \(\mathbf{W}\), as \[\mathbf{W}^{\dagger}=\mathbf{q}_{1}\mathbf{L}^{\dagger}. \tag{18}\] However, this will scale poorly, and so we can rewrite this to directly compute \(\mathbf{L}\) from the computed self-energy moments, rather than requiring manipulations of the full auxiliary space. Via the Cholesky QR algorithm [69; 70], we can relate \(\mathbf{L}\) to the zeroth order self-energy moment as \[\mathbf{L}^{\dagger}=\left(\mathbf{W}\mathbf{W}^{\dagger}\right)^{\frac{1}{2} }=\left(\mathbf{\Sigma}^{(0)}\right)^{\frac{1}{2}}, \tag{19}\] where the indication of the sector of the self-energy has been dropped, with this process considered independently for the hole and particle (lesser and greater respectively) parts of the self-energy. The initial Lanczos vector can then be computed as \(\mathbf{q}_{1}=\mathbf{W}^{\dagger}\mathbf{L}^{-1,\dagger}\). Subsequent block Lanczos vectors can then be defined according to the standard three-term recurrence \[\mathbf{q}_{i+1}\mathbf{C}_{i}^{\dagger}=\left[\mathbf{d}\mathbf{q}_{i}- \mathbf{q}_{i}\mathbf{M}_{i}-\mathbf{q}_{i-1}\mathbf{C}_{i-1}\right] \tag{20}\] where the on-diagonal blocks are defined as \[\mathbf{M}_{i}=\mathbf{q}_{i}^{\dagger}\mathbf{d}\mathbf{q}_{i}. \tag{21}\] In order to recast this process in terms of the self-energy moments directly, we wish to express the block Lanczos recurrence in terms of the inner space of the Lanczos vectors rather than spanning the large auxiliary space. The choice of initial vectors in Eq. 
18 permits the definition \[\mathbf{S}^{(n)}_{1,1}=\mathbf{q}^{\dagger}_{1}\mathbf{d}^{n}\mathbf{q}_{1}=\mathbf{L}^{-1}\mathbf{\Sigma}^{(n)}\mathbf{L}^{-1,\dagger}, \tag{22}\] where the subscript indices on \(\mathbf{S}\) indicate the pair of block Lanczos vectors between which \(\mathbf{d}^{n}\) is projected. These provide the initialisation of the recurrence terms, and have a dimension which scales linearly with system size (the same as the input self-energy moments). These \(\mathbf{\Sigma}^{(n)}\) matrices are therefore the input to the procedure, defined by Eqs. 9-10. One can then use the definition of the three-term recurrence in Eq. 20 to express all definitions in terms of these moments, without formal reference to the large auxiliary space quantities (\(\mathbf{d}\) or \(\mathbf{W}\)), as \[\mathbf{S}^{(n)}_{i+1,i}=\mathbf{q}^{\dagger}_{i+1}\mathbf{d}^{n}\mathbf{q}_{i}=\mathbf{C}^{-1}_{i}\left[\mathbf{S}^{(n+1)}_{i,i}-\mathbf{M}_{i}\mathbf{S}^{(n)}_{i,i}-\mathbf{C}^{\dagger}_{i-1}\mathbf{S}^{(n)}_{i-1,i}\right], \tag{23}\] \[\mathbf{S}^{(n)}_{i+1,i+1}=\mathbf{q}^{\dagger}_{i+1}\mathbf{d}^{n}\mathbf{q}_{i+1}=\mathbf{C}^{-1}_{i}\left[\mathbf{S}^{(n+2)}_{i,i}+\mathbf{M}_{i}\mathbf{S}^{(n)}_{i,i}\mathbf{M}_{i}+\mathbf{C}^{\dagger}_{i-1}\mathbf{S}^{(n)}_{i-1,i-1}\mathbf{C}_{i-1}-P(\mathbf{S}^{(n+1)}_{i,i}\mathbf{M}_{i})+P(\mathbf{M}_{i}\mathbf{S}^{(n)}_{i,i-1}\mathbf{C}_{i-1})-P(\mathbf{S}^{(n+1)}_{i,i-1}\mathbf{C}_{i-1})\right]\mathbf{C}^{-1,\dagger}_{i}, \tag{24}\] where the permutation operator \(P\) is defined as \(P(\mathbf{Z})=\mathbf{Z}+\mathbf{Z}^{\dagger}\). For a Hermitian theory we can write \(\mathbf{S}_{i,j}=\mathbf{S}^{\dagger}_{j,i}\), and assume the zeroth order Lanczos vector to be zero i.e. \(\mathbf{S}^{(n)}_{i,0}=\mathbf{S}^{(n)}_{0,j}=\mathbf{S}^{(n)}_{0,0}=\mathbf{0}\). Additionally, the orthogonality of the Lanczos vectors requires that \(\mathbf{S}^{(0)}_{i,j}=\delta_{ij}\mathbf{I}\). By considering again the Cholesky QR algorithm, the off-diagonal \(\mathbf{C}\) matrices can therefore be computed as \[\mathbf{C}^{2}_{i}=\left[\mathbf{S}^{(2)}_{i,i}+\mathbf{M}^{2}_{i}+\mathbf{C}^{\dagger}_{i-1}\mathbf{C}_{i-1}-P(\mathbf{S}^{(1)}_{i,i}\mathbf{M}_{i})-P(\mathbf{S}^{(1)}_{i,i-1}\mathbf{C}_{i-1})\right], \tag{25}\] and the on-diagonal \(\mathbf{M}\) matrices can be found using Eq. 21 \[\mathbf{M}_{i}=\mathbf{q}^{\dagger}_{i}\mathbf{d}\mathbf{q}_{i}=\mathbf{S}^{(1)}_{i,i}. \tag{26}\] These recurrence relations allow the calculation of the on- and off-diagonal blocks resulting in the block tridiagonal form of the Hamiltonian in Eq. 16. Despite the apparent complexity of the recurrence relations, this algorithm contains no step scaling greater than \(\mathcal{O}[N^{3}]\), by eliminating the explicit reference to the full upfolded Hamiltonian, whilst still conserving the exact self-energy moments by construction. It can be seen from Eq. 14 that the auxiliary space formally couples to both the hole and particle physical sectors (both occupied and virtual MOs). To allow for this coupling (which is critical for generating effective higher order diagrams and to avoid a 'non-Dyson' approximation [60; 61; 71; 72; 73; 74; 75; 76]), the solution of the Dyson equation using the compressed self-energy, i.e. diagonalisation of Eq. 12, requires the combination of the block tridiagonal Hamiltonian in Eq. 16 resulting from both the hole and particle self-energy moments. This is constructed as \[\tilde{\mathbf{H}}=\begin{bmatrix}\mathbf{f}+\mathbf{\Sigma}_{\infty}&\tilde{\mathbf{W}}\\ \tilde{\mathbf{W}}^{\dagger}&\tilde{\mathbf{d}}\end{bmatrix}=\begin{bmatrix}\mathbf{f}+\mathbf{\Sigma}_{\infty}&\tilde{\mathbf{W}}^{<}&\tilde{\mathbf{W}}^{>}\\ \tilde{\mathbf{W}}^{<,\dagger}&\tilde{\mathbf{d}}^{<}&\mathbf{0}\\ \tilde{\mathbf{W}}^{>,\dagger}&\mathbf{0}&\tilde{\mathbf{d}}^{>}\end{bmatrix}, \tag{27}\] where \(\tilde{\mathbf{W}}^{\lessgtr}\) are equal to the \(\mathbf{L}^{\lessgtr}\) matrix padded by zeros, \[\tilde{\mathbf{W}}^{\lessgtr}=\begin{bmatrix}\mathbf{L}^{\lessgtr}&\mathbf{0}&\cdots&\mathbf{0}\end{bmatrix}, \tag{28}\] and \(\tilde{\mathbf{d}}^{\lessgtr}\) are defined by the block tridiagonal elements \[\tilde{\mathbf{d}}^{\lessgtr}=\begin{bmatrix}\mathbf{M}^{\lessgtr}_{1}&\mathbf{C}^{\lessgtr}_{1}&&\mathbf{0}\\ \mathbf{C}^{\lessgtr,\dagger}_{1}&\mathbf{M}^{\lessgtr}_{2}&\ddots&\\ &\ddots&\ddots&\mathbf{C}^{\lessgtr}_{j-1}\\ \mathbf{0}&&\mathbf{C}^{\lessgtr,\dagger}_{j-1}&\mathbf{M}^{\lessgtr}_{j}\end{bmatrix}, \tag{29}\] with the hole and particle sectors coupling into the block Lanczos space on the left and the right respectively. This ensures conservation of both the separate hole and particle moments of the self-energy, as well as conservation in the central moments according to their sum. The compressed Hamiltonian can be returned to a diagonal representation of the self-energy by diagonalising \(\tilde{\mathbf{d}}\) and appropriately rotating \(\tilde{\mathbf{W}}\) into this basis. The eigenvalues of \(\tilde{\mathbf{H}}\) are moment-conserving approximations to those of the exact upfolded Hamiltonian, and the corresponding eigenvectors \(\mathbf{u}\) can be transformed into Dyson orbitals via \(\mathbf{LPu}\), where \(\mathbf{P}\) is a projection into the physical space, and the \(\mathbf{L}\) is required to transform the physical component of the eigenvectors back to the MO representation. This process conserves exactly the first \(2j\) hole and particle self-energy moments. Commonly, a notation referring to the number of iterations of the block Lanczos recurrence \(n_{\mathrm{iter}}\) is used; in this notation the \(n_{\mathrm{iter}}=0\) calculation corresponds to the inclusion of only a single on-diagonal block \(\mathbf{M}_{1}\), with modified couplings \(\mathbf{L}\) to the physical space. As such, in this notation the number of conserved moments equals \(2n_{\mathrm{iter}}+2\), i.e. up to and including the \((2n_{\mathrm{iter}}+1)^{\mathrm{th}}\)-order moment. This is the same number of moments as required as input to the recurrence relations, and therefore the algorithm conserves all the moments used as input, which should be up to an odd order. After \(n_{\mathrm{iter}}\) applications of this algorithm to both the lesser and greater self-energy sectors, this results in \(N(2n_{\mathrm{iter}}+3)\) quasiparticle states (demonstrating the potential to capture satellite features with these additional poles). As such, application of this algorithm becomes theoretically equivalent to a full diagonalisation of the exact upfolded Hamiltonian in the limit of \(n_{\rm iter}\sim N^{2}\). ## III Density response moments in the RPA Having described the overall approach in Sec.
II, what remains for a practical implementation is to ensure that the \(GW\) self-energy moments described in Eqs. 9-10 can be computed efficiently. As a first step towards this, in this section we show how the RPA can be motivated from the perspective of the two-point density-density (dd) response moments of Eq. 8, which are central quantities to obtain in this approach to \(GW\) theory. We find that we can reformulate RPA entirely in terms of these dd-moments of the system and a strict recursive form for their inter-relation [77]. This recursive relation between the moments is a direct result of the fact that the RPA can be written as a quadratic Hamiltonian in Bosonic operators [78; 79; 63; 80; 5]. This effectively ensures that all information required to build the 2-point RPA dd-response is contained in the first two spectral moments, analogous to how all the information on the density of states in mean-field theory (quadratic in Fermionic operators) is contained in the first two Green's function moments (i.e. the one-body density matrix and Fock matrix). We start from the Casida formulation of RPA [81; 82], as a generalized eigenvalue decomposition \[\begin{bmatrix}\mathbf{A}&\mathbf{B}\\ -\mathbf{B}&-\mathbf{A}\end{bmatrix}\begin{bmatrix}\mathbf{X}&\mathbf{Y}\\ \mathbf{Y}&\mathbf{X}\end{bmatrix}=\begin{bmatrix}\mathbf{X}&\mathbf{Y}\\ \mathbf{Y}&\mathbf{X}\end{bmatrix}\begin{bmatrix}\mathbf{\Omega}&\mathbf{0} \\ \mathbf{0}&-\mathbf{\Omega}\end{bmatrix}, \tag{30}\] where the left and right eigenvectors form the biorthogonal set as \[\begin{bmatrix}\mathbf{X}&\mathbf{Y}\\ \mathbf{Y}&\mathbf{X}\end{bmatrix}^{T}\begin{bmatrix}\mathbf{X}&\mathbf{Y}\\ -\mathbf{Y}&-\mathbf{X}\end{bmatrix}=\begin{bmatrix}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&-\mathbf{I}\end{bmatrix}. \tag{31}\] This biorthogonality ensures an inverse relationship between \((\mathbf{X}+\mathbf{Y})\) and \((\mathbf{X}-\mathbf{Y})^{T}\), as \[(\mathbf{X}+\mathbf{Y})(\mathbf{X}-\mathbf{Y})^{T}=(\mathbf{X}+\mathbf{Y})^{T }(\mathbf{X}-\mathbf{Y})=\mathbf{I}. \tag{32}\] The \(\mathbf{A}\) and \(\mathbf{B}\) matrices are defined as \[A_{ia,jb} =(\epsilon_{a}-\epsilon_{i})\,\delta_{ij}\delta_{ab}+\mathcal{K }_{ia,bj} \tag{33}\] \[B_{ia,jb} =\mathcal{K}_{ia,jb}. \tag{34}\] Here, \(\mathcal{K}\) is an interaction kernel which couples particle-hole excitations and de-excitations. In the traditional RPA (without second-order exchange), this coupling is taken to be the same for excitations, de-excitations and their coupling, given by the static, bare Coulomb interaction, \(\mathcal{K}_{ia,jb}=(ia|jb)=\mathcal{K}_{ia,bj}\). Hole and particle orbital energies are given by \(\epsilon_{i}\) and \(\epsilon_{a}\) respectively, defining the irreducible polarizability of the system from the reference state in \(\mathbf{A}\). Upon diagonalization, the eigenvectors defined by \(X_{ia,\nu}\) and \(Y_{ia,\nu}\) define the coefficients of the RPA excitations in the particle-hole and hole-particle basis, with energies \(\Omega_{\nu}\), with \(\mathbf{\Omega}\) therefore a diagonal matrix of the positive (neutral) RPA excitation energies. These neutral excitations define the poles of the full RPA reducible density-density (dd) response function, which can be constructed \[\chi(\omega)=\begin{bmatrix}\mathbf{X}&\mathbf{Y}\\ \mathbf{Y}&\mathbf{X}\end{bmatrix}\begin{bmatrix}\omega\mathbf{I}-\mathbf{ \Omega}&\mathbf{0}\\ \mathbf{0}&-\omega\mathbf{I}-\mathbf{\Omega}\end{bmatrix}^{-1}\begin{bmatrix} \mathbf{X}&\mathbf{Y}\\ \mathbf{Y}&\mathbf{X}\end{bmatrix}^{T}. 
\tag{35}\] Note that this matrix is formally equivalent to the alternative directly dynamical construction from \(\chi(\omega)=(\mathbf{P}(\omega)^{-1}-\mathcal{K})^{-1}\), where \(\mathbf{P}(\omega)\) is the irreducible polarizability of the reference state. Considering the positive-frequency part of the dd-response (noting that the negative frequency part is symmetric due to the bosonic-like symmetry of Eq. 35), we can write a more compact form of the dd-response as \[\eta(\omega)=(\mathbf{X}+\mathbf{Y})(\omega\mathbf{I}-\mathbf{\Omega})^{-1} (\mathbf{X}+\mathbf{Y})^{T}, \tag{36}\] which sums contributions from particle-hole and hole-particle fluctuations together, and from which optical properties such as dynamic polarizabilities can be computed [83]. However, in this work we are interested in the order-by-order moments of the spectral distribution of Eq. (36) over all RPA excitation energies, which is given as \[\eta^{(n)}_{ia,jb}=-\frac{1}{\pi}\int_{0}^{\infty}\mathrm{Im}[\eta_{ia,jb}( \omega)]\omega^{n}d\omega. \tag{37}\] The non-negative integer index \(n\) denotes the order of this static dd spectral moment information. Performing this integration results in the direct construction of the dd-moments as defined in Eq. 8, which can be written more compactly in matrix form as \[\eta^{(n)}=(\mathbf{X}+\mathbf{Y})\mathbf{\Omega}^{n}(\mathbf{X}+\mathbf{Y}) ^{T}, \tag{38}\] and constitutes a central object of interest in this work, required for the GW self-energy moment construction of Eqs. 9-10. [84] We now show that the RPA can be entirely reformulated in terms of the dd-moments, Eq. (38), without loss of information, and expose constraints on the relationship between the moments of different order at the RPA level. Firstly, we note that from the definition of the original eigendecomposition of Eq. 30, along with insertion of a resolution of the identity via Eq. 32, we find a relation between the first two dd-moments as \[\eta^{(1)} =(\mathbf{X}+\mathbf{Y})\mathbf{\Omega}(\mathbf{X}+\mathbf{Y})^{ T}=(\mathbf{A}-\mathbf{B}), \tag{39}\] \[=\eta^{(0)}(\mathbf{A}+\mathbf{B})\eta^{(0)}, \tag{40}\] noting that \((\mathbf{A}+\mathbf{B})=(\mathbf{X}-\mathbf{Y})\mathbf{\Omega}(\mathbf{X}- \mathbf{Y})^{T}\). By taking the sum and difference of the two Casida equations of Eq. 30, we also find \[(\mathbf{A}+\mathbf{B})\left(\mathbf{X}+\mathbf{Y}\right) =(\mathbf{X}-\mathbf{Y})\,\mathbf{\Omega} \tag{41}\] \[(\mathbf{A}-\mathbf{B})\left(\mathbf{X}-\mathbf{Y}\right) =(\mathbf{X}+\mathbf{Y})\,\mathbf{\Omega}, \tag{42}\] from which an equation for the square of the RPA excitations can be found as \[\left(\mathbf{A}-\mathbf{B}\right)\left(\mathbf{A}+\mathbf{B}\right)\left( \mathbf{X}+\mathbf{Y}\right)=\left(\mathbf{X}+\mathbf{Y}\right)\mathbf{\Omega}^ {2}, \tag{43}\] which has previously appeared in the RPA literature [85]. By right-multiplying by \(\left(\mathbf{X}+\mathbf{Y}\right)^{T}\), this leads to a relation between the zeroth and second dd-moment, as \[\left(\mathbf{A}-\mathbf{B}\right)\!\left(\mathbf{A}+\mathbf{B}\right)\!\eta^{ \left(0\right)}=\eta^{\left(2\right)}. \tag{44}\] The above equations can be further generalized as a recursive relation to generate all higher-order moments from lower-order ones, as \[\eta^{\left(m\right)} =\left(\mathbf{A}-\mathbf{B}\right)\!\left(\mathbf{A}+\mathbf{B} \right)\!\eta^{\left(m-2\right)} \tag{45}\] \[=[\eta^{\left(0\right)}\!\left(\mathbf{A}+\mathbf{B}\right)]^{m }\eta^{\left(0\right)}. 
\tag{46}\] While these are important relations in themselves, they also illustrate that all the RPA excitations and weights in the dd-response of Eq. 36 are implicitly accessible without requiring the explicit solution to the Casida equation (\(\mathbf{X}\), \(\mathbf{Y}\) and \(\mathbf{\Omega}\) matrices). This reformulation just requires knowledge of \(\eta^{(0)}\) as the central variable (which can be defined independently of the original equations via Eq. 40), the \(\mathbf{A}\) and \(\mathbf{B}\) matrices defining the system and their interactions, and the recursive relations of Eq. 46. As an aside, the Tamm-Dancoff approximation (TDA) sets \(\mathbf{B}=0\), which dramatically simplifies the resulting expressions due to the lack of correlation in the ground state, with \(\eta^{(0)}=\mathbf{I}\), and \(\eta^{(n)}=\mathbf{A}^{n}\). This reflects that the 2-RDM in the TDA is equivalent to that of mean-field theory. Finally, we note in passing that the ground state correlation energy from the RPA can be similarly formulated in terms of the zeroth-order dd-moment, as \[E_{\text{corr}}^{\text{RPA}}=\frac{1}{2}\text{Tr}[\eta^{(0)}\left(\mathbf{A}+\mathbf{B}\right)-\mathbf{A}]. \tag{47}\] Related expressions for the RPA correlation energy can be found in Eqs. 39-40 of Ref. [86] in terms of the quantities \(\eta^{(0)}\), \(\mathbf{A}+\mathbf{B}\) and \(\mathbf{A}-\mathbf{B}\) (where there, these quantities are referred to as \(\mathbf{Q}_{l}^{\text{dRPA}}\), \(\varepsilon\) and \(\varepsilon+2\mathbf{K}\)). Equivalence between these expressions (as well as the more commonly used expression found in Ref. [87]) can be seen by noting (using Eqs. 32, 41 and 42) that \[\text{Tr}\left[\eta^{(0)}\left(\mathbf{A}+\mathbf{B}\right)\right] \tag{48}\] \[=\text{Tr}\left[(\eta^{(0)})^{-1}\left(\mathbf{A}-\mathbf{B}\right)\right] \tag{49}\] \[=\text{Tr}\left[\left(\left(\mathbf{A}-\mathbf{B}\right)^{\frac{1}{2}}\left(\mathbf{A}+\mathbf{B}\right)\left(\mathbf{A}-\mathbf{B}\right)^{\frac{1}{2}}\right)^{\frac{1}{2}}\right]=\text{Tr}\left[\mathbf{\Omega}\right]. \tag{50}\] Overall, this perspective on the RPA in terms of dd-moments is key to opening new avenues such as the ones explored in this work. ## IV Efficient evaluation of self-energy and density response moments Given the recasting of the RPA dd-response in Sec. III in terms of its lowest order moment (Eq. 40) and recursion to access the higher moments via Eq. 46, we now consider the efficient \(\mathcal{O}[N^{4}]\) evaluation of these quantities which are central to the approach in this work, thus avoiding their formal \(\mathcal{O}[N^{6}]\) construction via Eq. 38. The derivation here is heavily inspired by the seminal RI approach of Furche to compute RPA correlation energies [56], with key adaptations to target these dd-moments to arbitrary order, rather than the correlation energy. We first employ a standard low-rank decomposition of the two-electron repulsion integrals (e.g. via density fitting or Cholesky decomposition) as \[\left(ia|jb\right)\simeq\sum_{P}V_{ia,P}V_{jb,P}=\mathbf{V}\mathbf{V}^{T}, \tag{51}\] where we use \(P,Q,\dots\) to index elements of this auxiliary (RI) basis, whose dimension \(N_{\text{aux}}\) scales as \(\mathcal{O}[N]\) with system size. We define an intermediate quantity \[\tilde{\eta}_{ia,P}^{(n)}=\sum_{jb}\eta_{ia,jb}^{(n)}V_{jb,P}.
\tag{52}\] If this intermediate can be efficiently found, then the greater self-energy moment of Eq. 10 can be rewritten as \[\Sigma_{pq}^{\left(n,>\right)}=\sum_{t=0}^{n}\binom{n}{t}\left(\epsilon_{c}^{n -t}V_{pc,Q}\left(V_{qc,P}\left(\tilde{\eta}_{ia,P}^{\left(t\right)}V_{ia,Q} \right)\right)\right), \tag{53}\] where the brackets indicate the order of contractions in order to preserve \(\mathcal{O}[N^{4}]\) scaling, and einstein summation is implied. The lesser self-energy moment of Eq. 9 can be recast in an analogous fashion.[88] Obtaining all dd-moments of the form of Eq. 52 up to order \(n\) can be simply reduced to knowledge of the first two moments \(\tilde{\eta}_{ia,P}^{\left(0\right)}\) and \(\tilde{\eta}_{ia,P}^{\left(1\right)}\), via use of the recursive relationship between the moments as given by Eq. 45, as for even moment orders, \[\tilde{\eta}^{\left(n\right)}=[(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})] ^{n/2}\tilde{\eta}^{\left(0\right)} \tag{54}\] and for odd moment orders, \[\tilde{\eta}^{\left(n\right)}=[(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})] ^{(n-1)/2}\tilde{\eta}^{\left(1\right)} \tag{55}\] where we have omitted explicit indices for brevity. Ensuring that an \(\mathcal{O}[N^{4}]\) scaling is retained in this recursion relies on \(\left(\mathbf{A}-\mathbf{B}\right)\!\left(\mathbf{A}+\mathbf{B}\right)\) admitting a form where it can be written as a diagonal plus low-rank correction. For the RPA, this is true since (from Eqs. 33-34), \[\left(\mathbf{A}-\mathbf{B}\right)_{ia,jb}=(\epsilon_{a}-\epsilon_{i})\delta_{ ij}\delta_{ab}=\mathbf{D} \tag{56}\] is a purely diagonal matrix, while using Eq. 51 we can cast \(\left(\mathbf{A}+\mathbf{B}\right)\) into an appropriate form as \[\left(\mathbf{A}+\mathbf{B}\right)=\mathbf{D}+2\mathcal{K}=\mathbf{D}+2 \sum_{P}V_{ia,P}V_{jb,P}. \tag{57}\] We therefore express the low-rank asymmetrically decomposed form of \((\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})\) in a general fashion as a diagonal plus asymmetric low-rank part, as \[(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})=\mathbf{D}^{2}+\mathbf{S}_{L} \mathbf{S}_{R}^{T}, \tag{58}\] where for the RPA, \(\mathbf{D}\) is defined by Eq. 56, \(\mathbf{S}_{L}=\mathbf{D}\mathbf{V}\) and \(\mathbf{S}_{R}=2\mathbf{V}\). Future work will explore other analogous approaches where \((\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})\) can be decomposed in this way, for applicability to e.g. the Bethe-Salpeter equation or other RPA variants with (screened) exchange contributions [89; 90; 62]. From this low-rank decomposition and the recursive definition of Eqs. 54-55, a fixed number of dd-moments of the form of Eq. 52 can be found in \(\mathcal{O}[N^{4}]\) time, provided the original \(\tilde{\eta}^{(0)}\) and \(\tilde{\eta}^{(1)}\) values are known. We now consider how to obtain these initial low-order dd-moments efficiently. From the definitions of Eqs. 39 and 52, and specifying the standard RPA definition of Eq. 56, we find that it is straightforward to efficiently construct the first moment, as \[\tilde{\eta}^{(1)}=\mathbf{D}\mathbf{V}=\mathbf{S}_{L}. \tag{59}\] The zeroth-order dd-moment can be constructed via a rapidly-convergent numerical integration, as we will show. From Eq. 40, we can write \[(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})=\left(\eta^{(0)}(\mathbf{A}+ \mathbf{B})\right)^{2}, \tag{60}\] from which we can find an expression for \(\eta^{(0)}\) as \[\eta^{(0)}=[(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})]^{\frac{1}{2}}( \mathbf{A}+\mathbf{B})^{-1}. 
\tag{61}\] We note that for RPA to be well-defined with positive excitation energies, \((\mathbf{A}-\mathbf{B})\) and \((\mathbf{A}+\mathbf{B})\) must both be positive-definite matrices [85]. Using Eqs. 57 and 58, we can write the low-rank RPA form of this as \[\tilde{\eta}^{(0)}=(\mathbf{D}^{2}+\mathbf{S}_{L}\mathbf{S}_{R}^{T})^{\frac{ 1}{2}}(\mathbf{D}+2\mathbf{V}\mathbf{V}^{T})^{-1}\mathbf{V}. \tag{62}\] We first consider the evaluation of the second half of this expression, which we denote as \(\mathbf{T}\). We can use the Woodbury matrix identity to rewrite it as \[\mathbf{T} =(\mathbf{D}+2\mathbf{V}\mathbf{V}^{T})^{-1}\mathbf{V} \tag{63}\] \[=\mathbf{D}^{-1}\mathbf{V}-2\mathbf{D}^{-1}\mathbf{V}(\mathbf{I} +2\mathbf{V}^{T}\mathbf{D}^{-1}\mathbf{V})^{-1}\mathbf{V}^{T}\mathbf{D}^{-1} \mathbf{V}. \tag{64}\] This now only requires the inversion of the diagonal matrix, \(\mathbf{D}\), and a matrix of dimension \(N_{\text{aux}}\), with the overall \(ov\times N_{\text{aux}}\) matrix able to be constructed in \(\mathcal{O}[N_{\text{aux}}^{3}+N_{\text{aux}}^{2}ov]\) time. Having constructed \(\mathbf{T}\), we can complete the evaluation of \(\tilde{\eta}^{(0)}\) using the definition of the matrix square-root as an integration in the complex plane [91], \[\mathbf{M}^{\frac{1}{2}}=\frac{1}{\pi}\int_{-\infty}^{\infty}\left(\mathbf{I} -z^{2}(\mathbf{M}+z^{2}\mathbf{I})^{-1}\right)dz. \tag{65}\] From Eq. 62, this results in \[\tilde{\eta}^{(0)} =(\mathbf{D}^{2}+\mathbf{S}_{L}\mathbf{S}_{R}^{T})^{\frac{1}{2}} \mathbf{T} \tag{66}\] \[=\frac{1}{\pi}\int_{-\infty}^{\infty}\left(\mathbf{I}-z^{2}( \mathbf{D}^{2}+\mathbf{S}_{L}\mathbf{S}_{R}^{T}+z^{2}\mathbf{I})^{-1}\right) \mathbf{T}dz. \tag{67}\] We can modify this integrand into one more efficient for numerical integration, via another application of the Woodbury matrix identity to reduce the scaling of the matrix inverse. We also simplify the notation by introducing the intermediates, \[\mathbf{F}(z) =(\mathbf{D}^{2}+z^{2}\mathbf{I})^{-1} \tag{68}\] \[\mathbf{Q}(z) =\mathbf{S}_{R}^{T}\mathbf{F}(z)\mathbf{S}_{L}, \tag{69}\] where \(\mathbf{F}(z)\) is a diagonal matrix in the \(ov\) space, and \(\mathbf{Q}(z)\) is a \(N_{\text{aux}}\times N_{\text{aux}}\) matrix which can be constructed in \(\mathcal{O}[N_{\text{aux}}^{2}ov]\) time. This casts Eq. 67 into the form \[\tilde{\eta}^{(0)}=\frac{1}{\pi}\int_{-\infty}^{\infty}\left[\mathbf{I}-z^{2} \mathbf{F}(z)\left(\mathbf{I}-\mathbf{S}_{L}(\mathbf{I}+\mathbf{Q}(z))^{-1} \mathbf{S}_{R}^{T}\mathbf{F}(z)\right)\right]\mathbf{T}dz. \tag{70}\] For each value of the integration variable \(z\), the integrand is a matrix of size \(ov\times N_{\text{aux}}\), which can be constructed in \(\mathcal{O}[N^{4}]\) scaling, rendering it efficient for numerical quadrature. Along with the results of Eqs. 59, 58, 54 and 55, this therefore completes the ambition of constructing a fixed number of dd-response moments needed for the moment-truncated \(GW\) method as defined in Eq. 52, in no more than \(\mathcal{O}[N^{4}]\) scaling (and \(\mathcal{O}[N^{3}]\) memory). However, manipulations of the resulting integrand and choice of quadrature points can further improve the efficiency of their construction by ensuring a faster decay of the integrand and separating components which can be analytically integrated. 
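The toy NumPy/SciPy sketch below illustrates the diagonal-plus-low-rank structure being exploited here: it builds a small \((\mathbf{A}-\mathbf{B})\) and \((\mathbf{A}+\mathbf{B})\) pair, forms \(\tilde{\eta}^{(0)}\) and \(\tilde{\eta}^{(1)}\), and generates higher dd-moments via the recursion of Eqs. 54-58, checking against the dense definition. For brevity the reference \(\eta^{(0)}\) is obtained from a dense matrix square root rather than the contour-integral/Woodbury route described above; all sizes and names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
ov, naux = 6, 3                               # toy particle-hole and auxiliary dimensions
D = np.diag(rng.uniform(0.5, 2.0, ov))        # (A - B): diagonal orbital-energy differences
V = rng.normal(scale=0.1, size=(ov, naux))    # density-fitted integrals V_{ia,P}
ApB = D + 2.0 * V @ V.T                       # (A + B) = D + 2 V V^T (Eq. 57)

# Dense reference: eta^(0) = [(A-B)(A+B)]^{1/2} (A+B)^{-1} (Eq. 61), then Eq. 52.
eta0 = np.real(sqrtm(D @ ApB)) @ np.linalg.inv(ApB)
eta0_tilde = eta0 @ V
eta1_tilde = D @ V                            # Eq. 59

# Higher moments from the low-rank recursion (Eqs. 54-55, 58), avoiding ov x ov products.
S_L, S_R = D @ V, 2.0 * V
def apply_AmB_ApB(M):
    return D @ (D @ M) + S_L @ (S_R.T @ M)    # action of (A-B)(A+B) = D^2 + S_L S_R^T

eta2_tilde = apply_AmB_ApB(eta0_tilde)
eta3_tilde = apply_AmB_ApB(eta1_tilde)

# Consistency with the dense recursion eta^(n) = [eta^(0)(A+B)]^n eta^(0) (Eq. 46).
assert np.allclose(eta2_tilde, eta0 @ ApB @ eta0 @ ApB @ eta0 @ V)
assert np.allclose(eta3_tilde, eta0 @ ApB @ eta0 @ ApB @ eta0 @ ApB @ eta0 @ V)
```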
This derivation is given in Appendix A, and results in a final \(\mathcal{O}[N_{\text{aux}}^{2}ov+N_{\text{aux}}^{3}]\) expression to evaluate for the zeroth-order dd-moment as \[\tilde{\eta}^{(0)} =\mathbf{D}\mathbf{T}\] \[+\frac{1}{\pi}\int_{0}^{\infty}e^{-t\mathbf{D}}\mathbf{S}_{L}\mathbf{S}_{R}^{T}e^{-t\mathbf{D}}\mathbf{T}dt\] \[+\frac{1}{\pi}\int_{-\infty}^{\infty}z^{2}\mathbf{F}(z)\mathbf{S}_{L}\left((\mathbf{I}+\mathbf{Q}(z))^{-1}-\mathbf{I}\right)\mathbf{S}_{R}^{T}\mathbf{F}(z)\mathbf{T}dz. \tag{71}\] The first numerical integral in Eq. 71 (where the integrand decays exponentially) is computed via Gauss-Laguerre quadrature, while the second (where the integrand decays as \(\mathcal{O}[z^{-4}]\)) is evaluated via Clenshaw-Curtis quadrature. A comparison of the decay of the original and refined integrands is shown in the inset to Fig. 1. The scaling of the grid spacing of both numerical quadratures is optimised to ensure exact integration of the trace of a diagonal approximation for the integrand, analogous to the grid optimization discussed in Ref. [56]. For numerical robustness, we optimize the quadrature for evaluating \(\text{Tr}\big[[(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})]^{\frac{1}{2}}\big]\), rather than the full integrand. We write a diagonal approximation to \((\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})\) as \[\mathbf{M}^{D}=\mathbf{D}^{2}+\mathbf{S}_{L}^{D}(\mathbf{S}_{R}^{D})^{T}, \tag{72}\] with \[(S_{L}^{D})_{ia,R} =(S_{L})_{ia,Q}(S_{R})_{ia,Q}\delta_{R,ia} \tag{73}\] \[(S_{R}^{D})_{jb,R} =\delta_{R,jb}. \tag{74}\] This is the same form as Eq. (58), but contains only diagonal matrices (denoted by superscript '\(D\)' labels) and has an auxiliary space of size \(ov\). As such, the exact square root and all quantities within the numerical integrations can be obtained in \(\mathcal{O}[ov]\) computational time. We then seek to ensure the trace of the difference between the exact and numerically integrated estimate of \(\mathbf{M}^{D^{\frac{1}{2}}}\) vanishes. This is achieved for both integrals in Eq. (71), with the analytic form for the first integral given by \(\frac{1}{2\pi}\text{diag}(\mathbf{D}^{-1}\mathbf{S}_{L}\mathbf{S}_{R}^{T})=I^{\text{offset}}\) and the second numerical integral as \(\mathbf{M}^{D^{\frac{1}{2}}}-\mathbf{D}-I^{\text{offset}}=I^{\text{int}}\). Writing this explicitly, given quadrature points and weights \(\{z_{i},w_{i}\}\) for an \(n_{p}\)-point infinite or semi-infinite quadrature, we seek to scale our points by a factor \(a\), which is a root of the objective functions \[O^{\text{offset}}(a) =I^{\text{offset}}-\frac{1}{\pi}\sum_{i}^{n_{p}}aw_{i}\text{Tr}(e^{-2\mathbf{D}az_{i}}\mathbf{S}_{L}\mathbf{S}_{R}^{T}) \tag{75}\] \[O^{\text{int}}(a) =I^{\text{int}}-\frac{1}{\pi}\sum_{i}^{n_{p}}aw_{i}\text{Tr}(I^{\text{D}}(az_{i})) \tag{76}\] \[I^{\text{D}}(z) =z^{2}\mathbf{F}^{D}(z)\mathbf{S}_{L}^{D}\left(\left(\mathbf{I}+\mathbf{Q}^{D}(z)\right)^{-1}-\mathbf{I}\right)(\mathbf{S}_{R}^{D})^{T}\mathbf{F}^{D}(z), \tag{77}\] where Eq. 75 is minimized to optimize the grid for the first integral of Eq. 71, and Eq. 76 minimized for the second integral. This can be done via either simple root-finding or minimization, and gives a robust optimization of the integration grids in \(\mathcal{O}[ov]\) computational cost. The resulting exponential convergence of the zeroth dd-moment estimate with number of quadrature points, along with the error estimates derived in App. B, are shown in Fig. 1 for both numerical integrands.
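In the same spirit, the brief sketch below optimizes the scale factor of a Gauss-Laguerre grid so that a diagonal-model version of the first (exponentially decaying) integral is reproduced, mirroring the idea behind Eqs. 75-76; it uses a simple bounded minimization rather than the root finding described above, and the function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimize_laguerre_scale(d_diag, SLSR_diag, npts=12):
    """Scale a Gauss-Laguerre grid so that the diagonal-model integral
    (1/pi) int_0^inf sum_ia SLSR_ia exp(-2 d_ia t) dt is reproduced exactly;
    the exact value is sum_ia SLSR_ia / d_ia / (2 pi). Illustrative sketch only."""
    z, w = np.polynomial.laguerre.laggauss(npts)
    w = w * np.exp(z)                       # convert to weights for int_0^inf g(t) dt
    exact = np.sum(SLSR_diag / d_diag) / (2.0 * np.pi)

    def objective(a):
        integrand = np.sum(SLSR_diag[:, None]
                           * np.exp(-2.0 * d_diag[:, None] * a * z[None, :]), axis=0)
        return abs(exact - a * np.sum(w * integrand) / np.pi)

    res = minimize_scalar(objective, bounds=(1e-3, 1e3), method="bounded")
    return res.x, objective(res.x)

# Toy orbital-energy differences and diagonal couplings for the model integrand.
d_diag = np.array([0.6, 0.9, 1.4, 2.2])
SLSR_diag = np.array([0.05, 0.02, 0.04, 0.01])
a_opt, residual = optimize_laguerre_scale(d_diag, SLSR_diag)
print(a_opt, residual)
```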
We find that as few as 12 quadrature points are sufficient for high accuracy in the results of this work, while the number of points is expected to increase for systems with a small or vanishing spectral gap. Figure 1: Exponential convergence of the numerical integration (NI) error for \(\eta^{(0)}\) in Eq. 71 with respect to integration points, for the singlet oxygen dimer in a cc-pVTZ basis at equilibrium (1.207 Å) bond length. Also included are the two error estimates of the true NI error, used to check convergence and estimate the required number of points (see App. B, Eq. B9 for the ‘Nested Fit’ and Eq. B4 for the ‘Lower Bound’ definitions). Inset: The originally derived integrand (Eq. 70), and the form optimized for efficient NI given in Eq. 71 and derived in App. A, showing the faster decay. ## V Reduction to Cubic-Scaling GW With this reformulation of \(GW\) in terms of the moments of the self-energy, it is possible to further reduce the scaling to cubic in time and quadratic in memory with respect to system size, in common with the lowest-scaling \(GW\) approaches [24; 29; 92; 93]. We stress that this is not an asymptotic scaling after exploiting screening and locality arguments, but rather a formal scaling exploiting further rank reduction of quantities. To do this, we employ a _double_ factorization of the Coulomb integrals, allowing them to be written as a product of five rank-2 tensors, as \[(ia|jb)\simeq\sum_{P,Q}X_{iP}X_{aP}Z_{PQ}X_{jQ}X_{bQ}. \tag{78}\] This factorizes the orbital product into separate terms, and is also known as tensor hypercontraction or CP decomposition, used for various recent low-scaling formulations of quantum chemical methods, where the dimension of the \(P\) and \(Q\) space rigorously grows linearly with system size [94]. Use of this doubly-factorized form has also been previously suggested for reduced-scaling RPA and particle-particle RPA schemes [95; 96]. This form for the integrals can be directly constructed with controllable errors in \(\mathcal{O}[N^{3}]\) time [97; 98]. Once found, the \(Z_{PQ}\) can be symmetrically decomposed as \(Z_{PQ}=Y_{PR}Y_{QR}\). By replacing the density-fitted integral tensor \(V_{iaR}\) in the above expressions with the fully factorized form \(X_{iP}X_{aP}Y_{RP}\), the contractions to form the moments of the \(GW\) self-energy, and hence the Green's function and quasi-particle spectrum, naturally follow as \(\mathcal{O}[N^{3}]\) with formation of appropriate intermediates. We also require the numerical computation of the partially transformed dd-moment, \(Z_{QS}X_{aS}X_{iS}\eta^{(0)}_{ia,jb}X_{jR}X_{bR}Z_{PR}\). Inspired by the low-scaling approach taken to RPA correlation energies in the work of Refs. [99], this can be achieved with the use of a contracted double-Laplace transform in the place of the original numerical integration procedure. This factorizes the squared energy denominator \(\mathbf{F}(z)\), allowing the occupied and unoccupied indices to be contracted independently, similar to the space-time approaches to \(GW\) [93]. While this becomes a two-dimensional numerical integral, optimal quadrature grids can be calculated in a minimax sense [100; 101]. Applied to Eq. 68, this contracted double-Laplace transform takes the form \[F_{ia,ia}(z)=D_{ia,ia}\int_{p=0}^{\infty}\frac{\sin{zp}}{z}e^{-pD_{ia,ia}}dp, \tag{79}\] which allows the key matrix \(\mathbf{Q}(z)\) of Eq.
69 to be obtained as \[Q_{PS}(z)=2\int_{p=0}^{\infty}\frac{\sin{zp}}{z}Y_{PQ}(A_{QR}(p)-B_{QR}(p))Y_{SR}dp \tag{80}\] where both intermediates \[A_{QR}(p) =X_{iQ}X_{aQ}\epsilon_{a}e^{-\epsilon_{a}p}e^{\epsilon_{i}p}X_{iR}X_{aR} \tag{81}\] \[B_{QR}(p) =X_{iQ}X_{aQ}\epsilon_{i}e^{-\epsilon_{a}p}e^{\epsilon_{i}p}X_{iR}X_{aR} \tag{82}\] can be evaluated in \(\mathcal{O}[N^{3}]\) cost. Further contractions in the evaluation of Eq. 71 also follow naturally with cubic scaling. An alternative approach to reduce the scaling (to _asymptotically_ linear) without requiring the doubly-factorized integrals is to screen the atomic orbital density-fitted integral contributions (constructed with the overlap metric), along with the double-Laplace transform, exploiting locality as has been recently performed for the RPA correlation energy [99; 101]. An explicit numerical demonstration of this reduction to cubic cost via the double factorization of the Coulomb tensor of Eq. 78 will follow in forthcoming work, with numerical results in the rest of this work employing the quartic scaling algorithm described in Sec. IV. ## VI Results ### Comparison to quasiparticle GW approaches We first consider the convergence of the moment-truncated \(G_{0}W_{0}\) algorithm compared to more traditional implementations, as found in the PySCF software package [102; 103; 104; 105]. As found to be more effective for molecular systems due to the importance of exact exchange, we perform all calculations on a restricted Hartree-Fock reference [11]. In Fig. 2 we consider the convergence of the first ionization potential (IP) of singlet O\({}_{2}\) with conserved self-energy moment order. We compare this to two \(G_{0}W_{0}\) implementations, one of which performs an exact frequency integration (denoted 'full QP-\(G_{0}W_{0}\)', which scales as \(\mathcal{O}[N^{6}]\)), and one which performs an analytic continuation (AC) of the imaginary-frequency self-energy to real frequencies via a fit to Padé approximants in order to perform the convolution (denoted 'AC QP-\(G_{0}W_{0}\)', which scales as \(\mathcal{O}[N^{4}]\)) [106; 105; 44]. However, both of these implementations also solve the diagonal approximation to the quasiparticle equation in solving for each state, effectively imposing a diagonal approximation to the self-energy in the MO basis. This is avoided in our work; however, we can constrain a similar diagonal approximation by simply removing the off-diagonal components of our computed self-energy moments. This does not result in a significant computational saving in our approach, and therefore is only relevant for comparison purposes when considering the effect of this neglected off-diagonal part of the self-energy.
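For context on the analytic-continuation comparison, the sketch below implements a generic Thiele continued-fraction Padé fit of a model self-energy sampled on the imaginary axis and evaluates it near the real axis; this is the type of continuation underlying AC-\(G_{0}W_{0}\) schemes, but it is a standalone illustration with arbitrary model parameters, not the routine used in the implementations cited above.

```python
import numpy as np

def thiele_pade_fit(z, f):
    """Inverse-difference (Thiele) coefficients of a continued-fraction Pade interpolant."""
    z = np.asarray(z, dtype=complex)
    a = np.array(f, dtype=complex)
    for p in range(1, len(z)):
        a[p:] = (a[p - 1] - a[p:]) / ((z[p:] - z[p - 1]) * a[p:])
    return a

def thiele_pade_eval(w, z, a):
    """Evaluate the continued fraction at frequencies w (typically near the real axis)."""
    w = np.asarray(w, dtype=complex)
    A = np.ones_like(w)
    for p in range(len(a) - 1, 0, -1):
        A = 1.0 + a[p] * (w - z[p - 1]) / A
    return a[0] / A

# Model diagonal self-energy element: a handful of real-axis poles, sampled on the
# imaginary axis (as in AC-G0W0) and continued back towards real frequencies.
rng = np.random.default_rng(1)
poles = rng.uniform(-2.0, 2.0, 8)
weights = rng.uniform(0.01, 0.1, 8)
def sigma(z):
    return np.sum(weights / (np.asarray(z)[..., None] - poles), axis=-1)

z_im = 1j * np.linspace(0.1, 4.0, 16)
coeffs = thiele_pade_fit(z_im, sigma(z_im))
w = np.linspace(-3.0, 3.0, 13) + 0.05j       # slightly off the real axis
print(np.max(np.abs(thiele_pade_eval(w, z_im, coeffs) - sigma(w))))
```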
Figure 2: Convergence of the first \(G_{0}W_{0}\) IP of singlet O\({}_{2}\) in a cc-pVDZ basis with respect to the number of conserved self-energy moments. Also shown is the same quasiparticle state computed from traditional \(G_{0}W_{0}\) via an exact frequency integration (‘Full’) and analytic continuation approach to \(G_{0}W_{0}\) (‘AC’), albeit both imposing a diagonal approximation in their solution to the QP equation. In the moment expansion, we also consider a diagonal approximation, by explicitly removing the non-diagonal parts of our computed self-energy moments (‘Diagonal \(\Sigma\) moment’). All approaches find an IP of 8.49 eV to within 10 meV, with the difference between the full-frequency and AC approaches being 5 meV, and the relaxation of the diagonal approximation also accounting for small \(\sim\)5 meV differences; the reference Hartree-Fock IP for comparison is 9.73 eV. As can be seen in Fig. 2, the IP converges rapidly with moment order, with the full self-energy moments converging slightly faster and more systematically without the diagonal approximation (something also observed in other applications). The diagonal approximation converges to the 'exact' frequency integration as expected, with our more complete (non-diagonal) self-energy moment approach only very slightly different, indicating the relative importance of the non-diagonal self-energy components in this system and lack of significant correlation-induced coupling between the mean-field quasiparticle states. Furthermore, the analytic continuation approach is also highly accurate for this system, introducing an error of only 5 meV compared to the full frequency integration. We have furthermore numerically verified that our approach scales as \(\mathcal{O}[N^{4}]\), with computational cost comparable to the analytic continuation approach. Having demonstrated correctness compared to a high-scaling exact frequency implementation, we can compare our results to AC-\(G_{0}W_{0}\) across a larger test set to consider the moment truncation convergence. We use the established '\(GW100\)' benchmark test set, where many \(GW\) (and other excited state method) implementations have been rigorously compared [12; 107; 108; 47]. This benchmark set contains 102 diverse systems, with the IP of the molecules in the set ranging from \(\sim 4-25\) eV, and featuring molecules with bound metal atoms (including metal diatomics and clusters), strongly ionic bonding, and molecules with a strongly delocalised electronic structure. The molecules range in size from simple atomic systems to the five canonical nucleobases and large aliphatic and aromatic hydrocarbons, providing a suitable range in system size. In Fig. 3 we consider the discrepancy in the first IP, electron affinity (EA), and quasiparticle gap across all systems as the order of conserved self-energy moments increases in a realistic def2-TZVPP basis, again compared to AC-\(G_{0}W_{0}\). Errors in individual systems, along with the mean signed error (MSE) across the set (white circle), are shown for each conserved moment order. This MSE for the first IP decreases from -0.142 eV for the lowest order moment conservation, to -11 meV when up to the 11\({}^{\text{th}}\)-order moment is conserved, with the EA errors generally a little larger. Similarly, the gap calculations converge to an MSE of -34.8 meV, with standard deviation about the AC-\(G_{0}W_{0}\) result of only 91 meV across all systems. We note that there may also be small differences arising in the comparison with AC-\(G_{0}W_{0}\) due to the approximations of the frequency integration via analytically continued quantities, as well as the differences in whether off-diagonal parts of the self-energy are included. These will contribute to the discrepancy between the methods at each order. However, while the comparison is not strictly equivalent, and these errors will therefore be overestimated compared to an exact-frequency and non-diagonal \(G_{0}W_{0}\) limit, the general trend, convergence and level of accuracy which can be reached with moment order is likely to be similar. Figure 3: Errors in the IP, EA and gap for each system in the \(GW100\) benchmark set, compared to AC-\(G_{0}W_{0}\) in a def2-TZVPP basis, for each order of self-energy moment conservation. White circles show the mean signed error (MSE) aggregated over the test set for the given moment truncation in each quantity.
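As an aside, the sketch below illustrates how IP, EA and gap values of this kind can be read off from the eigen-pairs of the effective Hamiltonian, by selecting hole- and particle-like solutions through their weight in the physical (MO) block; the weight threshold, sign conventions and toy numbers are assumptions for illustration only.

```python
import numpy as np

def frontier_excitations(evals, evecs, nphys, mu=0.0, weight_tol=0.1):
    """Pick IP/EA-like quasiparticle energies from an upfolded-Hamiltonian
    eigendecomposition by their weight in the physical (MO) block.
    Threshold and sign conventions here are illustrative choices only."""
    weights = np.sum(np.abs(evecs[:nphys, :])**2, axis=0)   # quasiparticle weights
    keep = weights > weight_tol
    occ = keep & (evals < mu)                               # hole-like (ionization) branch
    vir = keep & (evals >= mu)                              # particle-like (attachment) branch
    homo, lumo = np.max(evals[occ]), np.min(evals[vir])
    return -homo, -lumo, lumo - homo                        # IP, EA, quasiparticle gap

# Toy upfolded Hamiltonian: 2 physical orbitals coupled to 3 self-energy poles.
f_stat = np.diag([-0.6, 0.4])
W = np.array([[0.08, 0.04, 0.03],
              [0.02, 0.09, 0.11]])
d = np.diag([-1.3, -0.9, 1.6])
H = np.block([[f_stat, W], [W.T, d]])
evals, evecs = np.linalg.eigh(H)
print(frontier_excitations(evals, evecs, nphys=2))
```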
It is important to put the scale of the moment truncation convergence in the context of the overall accuracy of the \(G_{0}W_{0}\) method for these systems. In Fig. 4 we therefore compare the aggregated mean absolute error (MAE) in the moment-truncated \(G_{0}W_{0}\) IP and EA values over this \(GW100\) test set to highly accurate CCSD(T) calculations on the separate charged and neutral systems, which are often used as a more faithful benchmark to compare against than experiment. We can therefore see the convergence of the moment truncation to the AC-\(G_{0}W_{0}\) values compared to the intrinsic error in the method. This intrinsic error is found to dominate over the error due to the self-energy moment truncation for higher numbers of conserved moments. We note however that the mean error compared to CCSD(T) decreases systematically with increasing numbers of moments for these systems, which contrasts with observed behaviour for moment-truncated GF2 theory (where lower-order self-energy truncations were found to give rise to more accurate overall excitations) [46]. It is therefore natural to also consider whether a simple extrapolation can improve the moment-truncated results towards an infinite moment limit. We therefore apply a linear extrapolation of the excitation energies to the infinite moment limit from the two most complete moment calculations of each system. We can see from Fig. 4 that these results continue the trend of the MAE across the test set, slightly overshooting the AC-\(G_{0}W_{0}\) comparison, albeit noting the other potential sources of discrepancy between these values discussed earlier.

Figure 3: Errors in the IP, EA and gap for each system in the \(GW100\) benchmark set, compared to AC-\(G_{0}W_{0}\) in a def2-TZVPP basis, for each order of self-energy moment conservation. White circles show the mean signed error (MSE) aggregated over the test set for the given moment truncation in each quantity.

### Full frequency spectra

One of the strengths of the moment-conserving approach to \(GW\) of this work is the ability to obtain all excitations from a given order of truncation in a single complete diagonalization of the effective Hamiltonian of Eq. 27. This allows full frequency spectra to be obtained, with the approximation not expected to bias significantly towards accuracy in any particular energy range, making it suitable for \(G_{0}W_{0}\) excitations beyond frontier excitations. Description of low-lying states is a particular challenge for many other \(GW\) approaches, with analytic continuation becoming less reliable, and alternatives like contour deformation scaling as \(\mathcal{O}[N^{5}]\) in general to obtain the full spectrum [41]. In Fig. 5 we therefore show a series of spectra plotted on the real frequency axis for the guanine nucleobase in a def2-TZVPP basis set over a 100 eV energy window about the quasiparticle gap, taken from the \(GW100\) benchmark set, and compared to the AC-\(G_{0}W_{0}\) full-frequency spectrum. The convergence of the spectrum is shown for a series of conserved \(G_{0}W_{0}\) moment orders, from the HF level up to the AC-\(G_{0}W_{0}\) spectrum. The AC-\(G_{0}W_{0}\) spectrum is also shown 'behind' the other spectra, to allow the deviations to be observed for each moment order. It can be seen that the full-frequency spectrum rapidly converges with conserved self-energy moment order, even for high-energy states where the HF approximation is poor.
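The full-frequency spectra discussed here are straightforward to assemble from the pole energies and weights delivered by the effective-Hamiltonian diagonalization. The sketch below is a generic illustration (the `energies` and `weights` arrays and the broadening `eta` are assumed inputs), together with a Wasserstein-style comparison of two such spectra of the kind used below to quantify convergence.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def spectral_function(freqs, energies, weights, eta=0.1):
    """Lorentzian-broadened spectral function A(w) built from pole energies and weights.

    energies, weights: hypothetical pole positions and spectral weights from the single
    diagonalization of the effective Hamiltonian; eta is an arbitrary broadening (eV).
    """
    freqs = np.asarray(freqs)
    energies = np.asarray(energies)
    weights = np.asarray(weights)
    lorentz = eta / ((freqs[:, None] - energies[None, :]) ** 2 + eta ** 2)
    return (weights[None, :] * lorentz).sum(axis=1) / np.pi

def spectral_distance(freqs, spec_a, spec_b):
    """Wasserstein ('earth-mover') distance between two spectra treated as distributions."""
    return wasserstein_distance(freqs, freqs, u_weights=spec_a, v_weights=spec_b)
```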
The similarity of each spectrum to the AC-\(G_{0}W_{0}\) result across the full frequency range can be rigorously quantified via the Wasserstein or 'earth-mover' metric, which describes the similarity between probability distributions. This metric is shown as the value inside each plot, indicating a rapid and robust convergence of the spectral features from the mean-field to the full \(G_{0}W_{0}\) spectrum with increasing numbers of included moments. This Wasserstein metric convergence plateaus at the \(\sim 7^{\rm th}\)-order conserved moments, with further orders not improving this metric further. This could be due to the numerical precision of the algorithm, or fundamental approximations in the AC-\(G_{0}W_{0}\) such as the precision of the analytic continuation, or the diagonal approximation to the self-energy. Furthermore, it should be noted that the moment-conserving \(GW\) approximation will rigorously have a larger number of poles included in its spectrum compared to those \(GW\) approaches which rely on an iterative solution to the QP equation that considers the change to a single MO at a time. These additional peaks are likely very low weighted for this weakly-correlated system, yet could be contributing to this discrepancy with AC-\(G_{0}W_{0}\) in the Wasserstein metric. We consider this point in more detail in the next section.

Figure 4: Mean absolute errors (eV) for the IP (\(x\)-axis) and EA (\(y\)-axis) across the \(GW100\) benchmark set in a def2-TZVPP basis compared to accurate CCSD(T) values. AC-\(G_{0}W_{0}\) values, as well as moment-truncated results, are shown, with the number in each data point marker giving the number of conserved moments. Extrapolations of individual data points from the 9th- and 11th-order conserved moment values are also provided, denoted by the truncation label '\(\circ\)'.

Figure 5: Spectral functions for guanine in a def2-TZVPP basis set, for HF, AC-\(G_{0}W_{0}\), and the moment-conserving \(G_{0}W_{0}\) approach with zero to five block Lanczos iterations, thereby conserving up to the 11th-order self-energy moment. The values indicated in the spectra give the Wasserstein metric taken with respect to the AC-\(G_{0}W_{0}\) spectrum, quantifying the difference between the spectral distributions. The AC-\(G_{0}W_{0}\) spectrum is indicated transparently behind the other spectra to ease visualisation of the convergence, and was calculated using an iterative diagonal approximation to the quasiparticle equation.

### Multiple solutions and additional spectral features

The fact that many low-scaling \(GW\) implementations rely on an iterative solution to solve the quasiparticle equation can be a source of error and loss of robustness. This is because when self-energy poles are found near quasiparticle energies, the \(GW\) poles can split into multiple peaks, where the final excitation energy converged to can depend sensitively on the specifics of the root-finding algorithm used to solve the QP equation. This was highlighted in Ref. [12] as a significant source of error, where a number of simple systems were found to exhibit several poles close to the HOMO energy level (with these solutions spanning a range of up to 1 eV). The specifics of which pole is converged to (with undesired solutions called 'spurious') then depended on initial conditions, choices of optimization method, and specifics of the self-energy construction or linearization of the QP equation.
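To make the sensitivity of such iterative solutions concrete, the sketch below applies a Newton iteration to the diagonal quasiparticle equation \(\omega=\epsilon_{p}+\mathrm{Re}\,\Sigma_{pp}(\omega)\). The callable `sigma`, standing in for \(\Sigma_{pp}(\omega)\), and the starting guess are hypothetical; different starting points can converge to different roots when self-energy poles lie close to the quasiparticle energy, which is exactly the behaviour discussed above.

```python
def solve_qp(eps_p, sigma, omega0=None, tol=1e-8, maxiter=100, delta=1e-4):
    """Newton solution of the diagonal QP equation  w = eps_p + Re[Sigma_pp(w)].

    sigma: a callable returning the diagonal self-energy of the orbital of interest at a
    given frequency (a hypothetical stand-in for Sigma_pp); eps_p is the mean-field energy.
    Different omega0 starting points may converge to different roots near self-energy poles.
    """
    omega = eps_p if omega0 is None else omega0
    dsigma = 0.0
    for _ in range(maxiter):
        residual = eps_p + sigma(omega).real - omega
        # finite-difference derivative of the self-energy for the Newton step
        dsigma = (sigma(omega + delta).real - sigma(omega - delta).real) / (2.0 * delta)
        step = residual / (1.0 - dsigma)
        omega += step
        if abs(step) < tol:
            break
    Z = 1.0 / (1.0 - dsigma)  # quasiparticle renormalization factor at the solution
    return omega, Z
```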
The requirement to select one of these multiple solutions to the QP equation can also manifest in undesirable discontinuities in the excitation energies as e.g. the molecular geometry or correlation strength changes, as described in Refs. [109, 110, 111]. An indicator for the presence of these 'spurious' poles and multiple solutions is the magnitude of the quasiparticle weight or renormalization factor evaluated at the quasiparticle energy, defined as \(Z_{p}=(1-\frac{\partial\Sigma_{pp}(\epsilon_{p})}{\partial\omega})^{-1}\), which indicates the approximate weight in the quasiparticle solutions. Since the moment-conserving \(GW\) approach obtains all poles in one step (including satellites and low-weighted features), all the excitations can be characterized by their quasiparticle weight, and either selected as specific excitation energies, or all excitations included in the spectrum to ensure smooth changes with molecular geometry. Points of discontinuity will therefore manifest as the presence of multiple lower-weighted solutions at a given energy, giving a smooth change to a broadened feature in the spectrum near self-energy poles.

To demonstrate this, we consider the same simple system as Ref. [111], observing the \(GW\) quasiparticle energies as a function of the inter-nuclear distance in the H\({}_{2}\) dimer in a 6-31G basis set. Figure 6 shows quasiparticle energies, self-energies and quasiparticle renormalisation factors for AC-\(G_{0}W_{0}\), and the moment-conserving \(G_{0}W_{0}\) approach with both up to the 1\({}^{\rm st}\)- and 11\({}^{\rm th}\)-order conserved self-energy moments in each sector. The self-energies plotted are the diagonal elements corresponding to the particular MO, evaluated at the respective quasiparticle energy. The figure shows the HOMO and first three unoccupied states found in this system, with the AC-\(G_{0}W_{0}\) (first row) exhibiting discontinuities in the LUMO+2 state at slightly compressed geometries, and discontinuities in the LUMO+1 state at slightly stretched geometries. As discussed, these changing solutions arise from the specifics of the root-finding in the solution to the QP equation, which will generally (but not always) converge to the solution with the largest quasiparticle weight among the multiple options, indicated by the renormalization factor. These discontinuous changes between states are also shown to coincide with poles in the self-energy (second column) at the MO energies for a given separation. These AC-\(G_{0}W_{0}\) results are essentially the same as those found in Ref. [111].

When only the first two self-energy moments are conserved in each sector (second row), the approximation to the self-energy renders its pole structure sufficiently sparse that its poles are pushed far from the MO energies at all geometries for these states. While this regularization removes the discontinuities, this significant approximation renders the renormalization factors close to one at all points, indicating only small changes from the original MOs. The final row represents a \(G_{0}W_{0}\) calculation with up to the 11\({}^{\rm th}\)-order conserved moments. With this finer resolution of the self-energy dynamics, the structure of the self-energy closely matches the one from AC-\(G_{0}W_{0}\); however, the multiple solutions are all found simultaneously, with their changing quasiparticle weights shown.
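Because the moment-conserving solution exposes every pole with its energy, quasiparticle weight, and orbital character at once, choosing a single excitation (by weight, or by overlap with a particular MO, as discussed next) reduces to a simple ranking. The helper below is schematic only; the `poles` structure is a hypothetical (energy, weight, coefficients) list, not the data layout of any particular code.

```python
def select_quasiparticle(poles, mo_index, criterion="overlap"):
    """Pick one excitation from the full pole manifold obtained in a single diagonalization.

    poles: hypothetical list of (energy, qp_weight, coeffs) tuples, where coeffs[p] is the
           projection of the pole onto MO p.
    criterion: 'overlap' -> maximum |<MO_p|pole>|^2;  'weight' -> largest quasiparticle weight.
    """
    if criterion == "overlap":
        key = lambda pole: abs(pole[2][mo_index]) ** 2
    else:
        key = lambda pole: pole[1]
    energy, weight, _ = max(poles, key=key)
    return energy, weight
```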
The points of discontinuity are replaced by the presence of multiple solutions at similar energies and with competing quasiparticle weights, providing broad spectral features at those points. If a single solution is required, the specific excitation can be selected from the manifold rigorously based on e.g. the maximum overlap with the MO of interest (shown as thicker lines in the plot) or the largest quasiparticle weight, both of which are readily available. This removes the uncertainty in the energies of the states based on the unphysical specifics of the QP solution algorithm, without incurring additional complexity or cost in the moment-conserving algorithm. Furthermore, the relaxation of the diagonal approximation of the self-energy in this approach is expected to be more significant at these points of multiple solutions, where mixing between the different MOs is expected to be more pronounced.

Figure 6: Quasiparticle energies, self-energies, and renormalisation factors for the H\({}_{2}\) dimer in a 6-31G basis set with varying bond lengths. Shown are results for AC-\(G_{0}W_{0}\) and moment-conserved \(G_{0}W_{0}\) with zero and five iterations, thereby conserving up to the 1\({}^{\text{st}}\)- and 11\({}^{\text{th}}\)-order moments, respectively. The self-energy shown is the diagonal element corresponding to the particular orbital, evaluated at the quasiparticle energy. The renormalisation factor corresponds to \(Z_{p}=(1-\frac{\partial\Sigma_{pp}(\epsilon_{p})}{\partial\omega})^{-1}\), with larger values indicating a more quasiparticle-like excitation. The transparent lines in the lower panel show the existence of multiple solutions, broadening spectral features near the self-energy poles; where a single dominant solution can be chosen (defined here by maximum overlap with the corresponding MO), it is denoted by the thicker line.

## VII Conclusions and outlook

In this work, we present a reformulation of the \(GW\) theory of quasiparticle excitations, based around a systematic expansion and conservation of the spectral moments of the self-energy. This contrasts with other approaches designed to approximate the central frequency integration of \(GW\) theory, which use e.g. grid expansions, analytic continuation or contour deformation in order to effect a scaling reduction from the exact theory. The moment expansion presented in this work has appealing features arising from the avoidance of an iterative solution to the quasiparticle equation for each state (avoiding 'spurious' solutions), diagonal self-energy approximations, or requirements for analytic continuation of dynamical quantities. It allows for all excitation energies and weights to be obtained directly in a non-iterative single diagonalization of a small effective Hamiltonian, controlled by a single parameter governing the number of conserved self-energy moments. Full RPA screening and particle-hole coupling in the self-energy are included, captured with \(\mathcal{O}[N^{4}]\) computational scaling via a numerical one-dimensional integration, with a reduction to cubic scaling also proposed. This approach is enabled by a recasting of the RPA in terms of the moments of the density-density response function. Applied across the \(GW100\) test set, we find rapid convergence to established \(GW\) methodology results for both state-specific and full spectral properties, with errors due to the incompleteness of the moment expansion many times smaller than the inherent accuracy of the method. The formulation follows relatively closely from previous low-scaling approaches to RPA correlation energies, enabling these codes to be adapted to low-scaling \(GW\) methods with relatively little effort.
Going forwards, we will aim to test the limits of the moment-truncated \(GW\) formulation, pushing it to larger systems including the solid state and different reference states, lower-scaling variants, and the inclusion of various self-consistent flavors of the theory (beyond the \(G_{0}W_{0}\) implementation here). The reformulation of RPA in terms of a recursive moment expansion also lends itself to a low-scaling implementation of the Bethe-Salpeter equation for neutral excitations, which we will explore in the future, as well as other beyond-RPA approaches. Finally, we will also explore the connections of this moment expansion to kernel polynomial approaches, which expand spectral quantities in terms of Chebyshev and other orthogonal polynomial expansions [112].

## Code and data availability

Open-source code for reproduction of all results in this paper, along with examples, can be found at [https://github.com/BoothGroup/momentGW](https://github.com/BoothGroup/momentGW). The repository also includes the data used in this paper relating to the \(GW100\) benchmark.

## Acknowledgements

The authors thank Filipp Furche and Johannes Tolle for useful feedback on the manuscript. G.H.B. gratefully acknowledges support from the Royal Society via a University Research Fellowship, as well as funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 759063. We are grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/P020194/1 and EP/T022213/1). Further computational resources were awarded under the embedded CSE programme of the ARCHER2 UK National Supercomputing Service ([http://www.archer2.ac.uk](http://www.archer2.ac.uk)).
2303.16886
End-to-End $n$-ary Relation Extraction for Combination Drug Therapies
Combination drug therapies are treatment regimens that involve two or more drugs, administered more commonly for patients with cancer, HIV, malaria, or tuberculosis. Currently there are over 350K articles in PubMed that use the "combination drug therapy" MeSH heading with at least 10K articles published per year over the past two decades. Extracting combination therapies from scientific literature inherently constitutes an $n$-ary relation extraction problem. Unlike in the general $n$-ary setting where $n$ is fixed (e.g., drug-gene-mutation relations where $n=3$), extracting combination therapies is a special setting where $n \geq 2$ is dynamic, depending on each instance. Recently, Tiktinsky et al. (NAACL 2022) introduced a first of its kind dataset, CombDrugExt, for extracting such therapies from literature. Here, we use a sequence-to-sequence style end-to-end extraction method to achieve an F1-Score of $66.7\%$ on the CombDrugExt test set for positive (or effective) combinations. This is an absolute $\approx 5\%$ F1-score improvement even over the prior best relation classification score with spotted drug entities (hence, not end-to-end). Thus our effort introduces a state-of-the-art first model for end-to-end extraction that is already superior to the best prior non end-to-end model for this task. Our model seamlessly extracts all drug entities and relations in a single pass and is highly suitable for dynamic $n$-ary extraction scenarios.
Yuhang Jiang, Ramakanth Kavuluru
2023-03-29T17:55:50Z
http://arxiv.org/abs/2303.16886v1
# End-to-End \(n\)-ary Relation Extraction ###### Abstract Combination drug therapies are treatment regimens that involve two or more drugs, administered more commonly for patients with cancer, HIV, malaria, or tuberculosis. Currently there are over 350K articles in PubMed that use the combination drug therapy MeSH heading with at least 10K articles published per year over the past two decades. Extracting combination therapies from scientific literature inherently constitutes an \(n\)-ary setting relation extraction problem. Unlike in the general \(n\)-ary setting where \(n\) is fixed (e.g., drug-gene-mutation relations where \(n=3\)), extracting combination therapies is a special setting where \(n\geq 2\) is dynamic, depending on each instance. Recently, Tiktinsky et al. (NAACL 2022) introduced a first of its kind dataset, CombrugExt, for extracting such therapies from literature. Here, we use a sequence-to-sequence style end-to-end extraction method to achieve an F1-Score of \(66.7\%\) on the CombrugExt test set for positive (or effective) combinations. This is an absolute \(\approx 5\%\) F1-score improvement even over the prior best relation classification score with spotted drug entities (hence, not end-to-end). Thus our effort introduces a state-of-the-art first model for end-to-end extraction that is already superior to the best prior non end-to-end model for this task. Our model seamlessly extracts all drug entities and relations in a single pass and is highly suitable for dynamic \(n\)-ary extraction scenarios. + Footnote †: Research reported in this paper was supported by the National Library of Medicine of the National Institutes of Health (NIH) under Award Number R01LM031240. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. of recognizing all drugs through named entity recognition (NER) first and then checking if subsets from the list identified represent a CDT. Toward more efficient single-pass CDT extraction, we adapt the sequence-to-sequence style relation extraction approach recently introduced by Giorgi et al. [3], called Seq2Rel. Sequence-to-sequence models have been originally popularized for machine translation. This is typically carried out in an encoder-decoder architecture where the encoder processes the source sentence with the encoder output used by the decoder to output tokens, in order, of the target sentence. This encoder-decoder setup has been shown to be powerful and flexible enough to represent relation extraction tasks. This was found to be specifically more advantageous in end-to-end settings where negative examples are not explicitly annotated in training data and in \(n\)-ary extraction scenarios. The general idea is to have the input sentence processed by the encoder, whose output is then used by the decoder to generate relations as a target sequence. This is done through (1) a so called _linearization schema_ that helps represent and interpret the output sequence as relations (more in Section III) and (2) a copy-mechanism where the decoder is constrained to output only tokens that are observed in the sequence input to the encoder (unlike from the full vocabulary of the target language in machine translation). This enables the decoder to output spans of the input text that correspond to entities and special tokens that correspond to relation labels connecting entities. 
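As an illustration of this copy mechanism, the decoder's next-token distribution can be restricted by masking: only tokens that occur in the source (plus the task's special tokens) remain eligible at each step. The sketch below uses PyTorch-style tensors and is a generic illustration rather than the actual Seq2Rel implementation; the token-id inputs are hypothetical.

```python
import torch

def restrict_to_copy(logits, input_token_ids, special_token_ids):
    """Mask next-token logits so only source tokens and special tokens can be generated.

    logits:            tensor of shape (vocab_size,) with raw decoder scores
    input_token_ids:   ids of tokens appearing in the source sentence / abstract
    special_token_ids: ids of tokens such as @DRUG@, @POS@, @COMB@ and @NOCOMB@
    """
    allowed_ids = torch.tensor(sorted(set(input_token_ids) | set(special_token_ids)))
    mask = torch.zeros_like(logits, dtype=torch.bool)
    mask[allowed_ids] = True
    return logits.masked_fill(~mask, float("-inf"))
```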
With this introductory setup, our main contributions in CDT extraction can be summarized as follows * We adapt the Seq2Rel [3] approach through a linearization schema that fits the end-to-end CDT extraction task for the **CombDrugExt** dataset [2]. We show that a model built with this architecture achieves an end-to-end F1-score that is \(\approx 5\%\) better than the **non** end-to-end prior best score for effective drug combination extraction. * We also develop a modified linearization schema for Seq2Rel that not only outputs relations but also all entities in an exhaustive manner (regardless of whether they participate in a relation). Though this was not a proposed task for **CombDrugExt**, we show that this approach not only helps with drug NER, but also leads to better overall end-to-end relation extraction performance when considering both effective and non-effective combinations. * We examine patterns in false positives and false negatives through error analysis. We also study how context length affects performance in this approach through simple ablation experiments. The code to reproduce our results is available here: [https://github.com/bionlproc/end-to-end-CombDrugExt](https://github.com/bionlproc/end-to-end-CombDrugExt). The original dataset is already publicly available: [https://github.com/allenai/drug-combo-extraction](https://github.com/allenai/drug-combo-extraction). ## II The **CombDrugExt** Dataset In this section we outline the characteristics of the **CombDrugExt** dataset, the associated \(n\)-ary combination extraction task as designed by Tiktinsky et al. [2], and the end-to-end variation we consider in this paper. The full dataset consists of 1600 manually annotated abstracts, each of which mentions between 2 and 15 drugs. 840 of these abstracts describe one or more drug combinations that have a positive effect, with the number of drugs in each such combination varying from 2 to 11. The remaining 760 abstracts either contain mentions of drugs that are not used in combination or discuss combinations of drugs that do not have a combined positive effect. The abstracts (with associated annotations) are subdivided into training, validation, and test splits, which we use for our experiments and evaluation. ### _CDT Relation classes_ The relations are categorized into three classes: (a). "Positive combination" (POS), where the text suggests that the drugs are used together in a treatment and are described or implied to have a positive effect; (b). "Non-positive combination" (COMB), where the input sentence indicates that the drugs are used together in a treatment, but there is no evidence in the text to suggest that it is effective; (c). "Not a combination" (NOCOMB), referring to an instance where there is no indication in the text to suggest that the drugs are used together at all. An example for each class is shown in Table I (along with the linearized output representations for training/inference, which we discuss a little later in the manuscript). At this juncture it is important to state two important aspects of the task: 1. The annotation (and extraction) task setup by Tiktinsky et al. [2] does not involve identifying the target condition or disease for which the combination is being considered. This target is only relevant for the POS and COMB relations as there isn't a notion of a target for a set of drugs that do not even participate in a combination (i.e., for the NOCMB class). 
Even though this is suboptimal, we stick with this task definition introduced by Tiktinsky et al. [2] in the rest of this paper. We intend to extend **CombDrugExt** and the associated task to also identify the target disease for POS and COMB relations in a future effort. For example, the first sentence in Table I would have the target attribute of "hepatocellular carcinoma." 2. Each input instance is provided as an abstract within which a **single sentence** that has all the drugs involved is designated for extraction. As such, this is a sentence level end-to-end extraction task, with the provision of a broader abstract context which contains the sentence2. Models ought to process the sentence and may optionally consider the context surrounding it in the full abstract. Multiple combinations are allowed as output for each sentence. We note that Tiktinsky et al. [2] assume that the entities are already spotted and hence solve the relation classification problem; we propose an end-to-end solution that operates on input text without known drug spans. Footnote 2: We recognize that this is not the most generic formulation; ideally, all entities should not be required to be present in the same sentence. However, considering the setup in the **CombDrugExt** dataset, we make this assumption. ### _Formal end-to-end task definition_ Succinctly, given a sentence that contains drug names and an enclosing context (here, abstract), the end-to-end CDT extraction task is to identify all stated sets of drug spans that correspond to either a POS or COMB relation; for sentences where no such relations exist, output the set of all drug mentions with a NOCOMB label. More formally, the task is to consider an input instance \(X=(C,i)\) where \(C=[S_{1},\ldots,S_{n}]\) is an ordered list of \(n\) sentences (here, all the sentences in an abstract) and \(1\leq i\leq n\) is an index of a sentence \(S_{i}\) that is designated as the input sentence. Let \(D(S_{i})=\{(d^{1}_{start},d^{1}_{end}),\ldots,(d^{m}_{start},d^{m}_{end})\}\) be the set of \(m>=2\) spans of drug mentions in \(S_{i}\) (which is not part of the input instance in our end-to-end assumption). The extraction model output should be a result set \(R(X)=\{(A_{j},y_{j})\}\) where \(A_{j}\in\mathcal{P}(D(S_{i}))\) is a drug combination from the power set of \(D(S_{i})\) and \(y_{j}\in\{\texttt{POS},\texttt{COMB}\}\). When \(X\) does not contain any combinations, the model output is simply the singleton \(R(X)=\{(D(S_{i}),\texttt{NOCOMB})\}\). ### _Data and task characteristics_ There are already few \(n\)-ary tasks in general in biomedical relation extraction as the focus tends to be more on binary relations, which is reasonable. As indicated in prior sections, the dynamic nature of \(n\) in this task is unique and it becomes trickier as drugs that are part of a combination and other drugs that are mentioned in a different sense may all occur in the same input sentence. The model ought to separate these two sets and may also need to identify multiple POS/COMB relations within the same sentence -- 16% of sentences containing drug combinations in CombrugExt have more than one combination. Around 70% of POS combinations are binary, 19% are \(3\)-ary, and over 5% are \(4\)-ary3. Although the task is in one sense sentence level, the enclosing context of the full abstract is essential in several cases. The drug combination and the evidence whether it is effective may be far apart in the abstract. 
In fact, in an extreme case, sentences containing the effectiveness evidence and the participating drug names were separated by 41 sentences. Other linguistic phenomena including coordination, numerical reasoning, and (biomedical) world knowledge may also need to be considered in arriving at combinations. Footnote 3: Although these were counted to throw light on data characteristics, the proportions were not used in designing or evaluating the model. ## III End-to-End Model for CDT Extraction In this effort, we use the sequence-to-sequence neural architecture Seq2Rel by Giorgi et al. [3], which was designed for flexible end-to-end relation extraction. Compared with pipeline architectures that are particularly expensive for the dynamic \(n\)-ary nature of CDT extraction, Seq2Rel can be adapted easily to extract relations (and involved entities) in a single pass. Recall that our input sequence is a sentence with an enclosing full abstract context and the output is a list of 2-tuples each of which is a set of set of drugs and the corresponding label (POS, COMB, or NOCOMB). Before we describe the architecture, we first present the representation of these output 2-tuples. ### _Linearization schema for CDT_ A sentence is easy and natural to represent as a sequence. However, representing 2-tuples that correspond to the output for the end-to-end task (as described in Section II-B) needs a clear specification, which we call the linearization schema. It involves special tokens that represent different relation labels in CombrugExt (as shown in Table II) and the special token @DRUG@ to represent a drug entity. The representation is straightforward in that it lists all the drug spans forming a combination, in the order that was originally provided by Tiktinksky et al. [2], each followed by the @DRUG@ token with a suitable combination label token (from Table II)) as the final token. For a simple example shown in the POS relation in Table I (Row 1), the linearization is shown right below it where the two drugs spans are followed by the @POS@ token. If more than two drugs are part of the combination, all of them are generated before the \begin{table} \begin{tabular}{c c} \hline \hline Relation labels & Special token \\ \hline Positive combination & @POS@ \\ Non-positive combination & @COMB@ \\ Not a combination & @NOCOMB@ \\ \hline \hline \end{tabular} \end{table} Table II: Special tokens for relations in CombrugExt \begin{table} \begin{tabular}{c l l} \hline \hline Relation Classes & Example & Comment \\ \hline \multirow{3}{*}{Positive Combination} & INPUT: Codelivery of **sorafenib** and **curcurcinin** by directed self-assembled nanoparticles enhances therapeutic effect on hepatocellular carcinoma. & **sorafenib** and **curcinin** are an effective combination. \\ \cline{2-3} & OUTPUT: **sorafenib** @DRUG@ **curcinin** @PRUG@ **@POS@** \\ \hline \multirow{3}{*}{Non-pos. Combination} & INPUT: Patients received **docetaxel** 35 mg/m(2) and **irinotecan** 60 mg/m(2), intravenously, on Days 1 and 8, every 21 days, until disease progression. & A combination (**docetaxel** and **irinotecan**) is employed with no evidence of effectiveness. \\ \cline{2-3} & OUTPUT: **docetaxel** @DRUG** irinotecan** @RUUG** @COMB@ \\ \hline \multirow{3}{*}{Not a Combination} & INPUT: The results showed that **lamotrigine** did not produce any change in cognitive function, while **carbamazepine** produced cognitive dysfunction. & **lamotrigine** and **carbamazepine** are used separately. 
\\ \cline{2-3} & OUTPUT: **lamotrigine** @DRUG@ **carbamazepine** @RUUG** @NOCOMB@ \\ \hline \hline \end{tabular} \end{table} Table I: Examples of different drug combination relations indicating sentences and corresponding linearized model outputs. label is output. Outputs with more than one combination are also represented the same way with the combination order determined by the ordering provided by dataset creators [2]. For instance, consider the sentence: "In non-metastatic castration-resistant prostate cancer, two second-generation anti-androgens, **apalutamide** and **enzalutamide**, when used in combination with **ADT**, have demonstrated a significant benefit in metastasis-free survival." Here the expected output is: **apalutamide**@DRUG@ **ADT** @DRUG@@POS@ **enzalutamide**@DRUG@ **ADT** @DRUG@@@POS@. Here, **ADT** is used in combination with each of the other two drugs as asserted in the sentence. ### _Model architecture_ The Seq2Rel architecture is shown in Figure 1 populated with an example for CDT extraction. A BERT based encoder [4] takes the input sentence (along with accompanying context from the full abstract) and maps each token to a contextual embedding. A single layer LSTM [5] serves as the decoder for generating the serialized output sequence as discussed in Section III-A. The probability \(P(Y|X)\) for the serialized representation \(Y\) of \(R(X)\) (from Section II-B) for input \(X\) is computed using the chain rule involving probability estimates for prefixes of \(Y\) given \(X\) as is common in encoder-decoder architectures for sequence-to-sequence modeling. The loss function is the average cross-entropy per target word. We refer the reader to Chapter 9.7 of Jurafsky and Martin [6] for further details. Given this setup is prone to generating arbitrary tokens that do not appear in the input sequence (as the output sequence is typically taken from the entire vocabulary), generation is restricted to special tokens and only tokens from the input sentence through a copy mechanism [7] as implemented by Giorgi et al. [3]. This allows the decoder to obtain a probability distribution over input and special tokens at each time step, thus ensuring that only expected units are generated in the serialized output. ### _Implementation, training, and post-processing_ We used PubMedBERT [8] as the encoder given it is pretrained from scratch with PubMed abstracts and articles and was also used by the baselines for this task [3]. The single layer LSTM with random initialized weights was used. We trained the model for 130 epochs, with a learning rate of 2e-5 for encoder and a learning rate of 1.21e-4 for decoder. We trained the model on Google Colab ([https://colab.research.google.com](https://colab.research.google.com)), which took nearly 2.5 hours. Due to how WordPiece tokenization works in BERT models, we noticed our model outputs extra spaces around the hyphen ("-") character in drugs containing it. As an example, consider the sentence: "Nal-IRI with **5-fluorouracil** (5-FU) and leucovorin or gemcitabine plus cisplatin in advanced biliary tract cancer- the NIFE trial." Note the extra spaces around hyphen in the serialized output: "**5 - fluorouracil** @DRUG@ leucovorin @DRUG@@COMB@ gemcitabine @DRUG@ cisplatin @DRUG@@COMB@." So we simply post-processed such outputs to remove these extra spaces (around hyphens) to match strings in the input sentence. 
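To make the linearization schema of Section III-A and the hyphen post-processing of Section III-C concrete, the sketch below builds a serialized target string from a list of combinations and parses a generated string back into (drug set, label) pairs. It is our own illustrative reimplementation with hypothetical helper names, not the released code.

```python
import re

SPECIAL = {"POS": "@POS@", "COMB": "@COMB@", "NOCOMB": "@NOCOMB@"}

def linearize(combinations):
    """combinations: list of (drug_list, label) pairs, e.g. [(["sorafenib", "curcumin"], "POS")]."""
    parts = []
    for drugs, label in combinations:
        for drug in drugs:
            parts.append(f"{drug} @DRUG@")
        parts.append(SPECIAL[label])
    return " ".join(parts)

def fix_hyphens(text):
    """Undo WordPiece artifacts such as '5 - fluorouracil' -> '5-fluorouracil'."""
    return re.sub(r"\s*-\s*", "-", text)

def parse(generated):
    """Parse a generated sequence back into (frozenset_of_drugs, label) tuples."""
    relations = []
    # each relation chunk ends with one of the label tokens
    pattern = re.compile(r"(.*?)(@POS@|@COMB@|@NOCOMB@)", re.S)
    for chunk, label_token in pattern.findall(generated):
        drugs = [fix_hyphens(d.strip()) for d in chunk.split("@DRUG@") if d.strip()]
        label = [k for k, v in SPECIAL.items() if v == label_token][0]
        relations.append((frozenset(drugs), label))
    return relations

# e.g. parse("sorafenib @DRUG@ curcumin @DRUG@ @POS@")
# -> [(frozenset({"sorafenib", "curcumin"}), "POS")]
```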
## IV Main Experiments and Results We evaluate our model based on the F1-score (for comparison with prior efforts) while examining both precision and Figure 1: The Seq2Rel architecture for CDT extraction with an input sentence and a serialized output of drug combinations recall. We use the "exact match" criterion where a relation is correct if (a). all drugs in the combination are identified and match the gold spans and (b). the relation label is correct. With this notion of correctness, F1-score is the harmonic mean of \[Recall=\#correct\_relations/\#all\_gold\_relations\quad\text{and}\] \[Precision=\#correct\_relations/\#predicted\_relations.\] There are two types of evaluations reported in the original effort by Tiktinsky et al. [2] and we do the same here. The first one is the F1-score for POS class and the second one is a relaxed setup where POS and COMB classes are combined into the same class ANY-COMB and report its F1-score. The natural way to model the original 3-class setup is to use all three tags (from Table II), which inherently gives per-class F1-scores. But to compute the ANY-COMB scores, we just collapse the POS and COMB labels to the same ANY-COMB label and compute the ANY-COMB F1-score. A slightly different way is to use a 2-class set up during the training and testing through a different labeling scheme upfront. For the POS classifier, we simply use two tags: @POS@ and @NON-POS@, where the latter is the tag given to instances in the union of COMB and NOCMB. For the ANY-COMB classifier, we use a new tag @ANY-COMB@ for instances in the union of POS and COMB classes and the @NOCMB@ is the tag for the other class. We show the scores obtained through both the 3-class and 2-class modeling in Table III4. We note that both approaches achieved similar scores. The the 3-way setup results in a slightly better POS score and the 2-way labeling improves by \(1.6\%\) in ANY-COMB F1-score compared with the 3-class approach. Precision and recall are also close to each other but overall recall is slightly less than precision. Footnote 4: Note that the first row of this table is computed using a single 3-way classifier while the second row is based on two binary classifiers. Next, we compare our results with prior state-of-the-art in Table IV, where the last row contains the F1-scores of our 3-way model (from Table III). We see that for both POS and ANY-COMB settings, our Seq2Rel model has better performance. Compared with the penultimate row, which shows the prior SoTA score on this dataset, we have an almost \(5\%\) improvement in POS score even though our model is end-to-end while all other rows in the table assume drugs names are spotted. All other models are trained with the PURE method [10], a span-based approach where entity markers are used to glean signal for relation classification. The gains for ANY-COMB are not as substantial (a 1.7% improvement over prior SoTA) using this approach. However, the 2-way model (from Table III) produced a \(3.3\%\) improvement in ANY-COMB F1-score. If we look at per-class F1-scores for non-POS classes, the 3-way model has an F1-score of \(70.5\%\) for NOCMB, but it only has an \(F_{1}\) score of \(26.3\%\) on COMB class; this is to be expected as COMB (non-positive combinations) is the least frequent class in the dataset (nearly 23% of training data) and it is harder to distinguish it from the more frequent POS class. Furthermore, Tiktinsky et al. 
[2] also convey that for over 2/3 of the instances manual annotation necessitated consideration of additional context outside the sentence containing the drug names. But all results shown in Tables III and IV are trained on just the sentence containing the drug names. However, our results on including additional context from the enclosing full abstract context were underwhelming (more later). ## V Ablation and Error Analyses To get at the relative importance of different ingredients of our models, we ran a small ablation experiment for the 2-way ANY-COMB model shown in Table III (2nd row). In Table V, we observe that removing the 2-way approach and moving to a 3-way approach results in a \(1.6\%\) dip in the score. If we remove the post-processing step involving hyphenated entities (as discussed in Section III-C), the performance decreases by around \(4\%\). This is not surprising because \(10\%\) of entities contain at least one hyphen in the training data. The final row in Table V shows the scores if we used a simple semicolon (:) symbol instead of the special @DRUG@ token to denote and separate drug entities in the linearization schema (Section III-A). Since there is only one entity type involved, we thought entity-specific tags may not be necessary. However, we see that using ";" instead of @DRUG@ lowers F1-score by over \(3\%\). This is potentially due to the role semicolon plays in general English and overloading its functionality to also represent drug entities may have had unintended consequences in the final model. We now discuss four different types of errors we often noticed in the model output. 1. **Handling many-to-one attachments**: These are sentences where a drug is described to be used in combination with another drug from a list of candidates, leading to multiple combinations each with two drugs. In these cases, the language may be misinterpreted by our model to create larger combinations with more than two drugs. For example, consider the sentence: "After successful phase II studies, recent phase III trials established combinations of **chlorambucil** with anti-CD20 antibodies such as **rituximab**, **ofatumumab** and **obinutuzumab** as a valuable treatment option for these patients." Here the gold prediction has three combinations each with two drugs: **chlorambucil** @DRUG@ **rituximab** @DRUG@ **@poly@ chlorambucil** @DRUG@ **ofatumumab** @DRUG@ **chlorambucil** @DRUG@ **obinutuzumab** @DRUG@POS@. However, the model incorrectly predicted a \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Methods & \multicolumn{2}{c}{POS} & \multicolumn{2}{c}{Combination} & \multicolumn{2}{c}{Any Combination} \\ & \(F_{1}\) & P & R & \(F_{1}\) & P & R \\ \hline 3-way classification & **66.7** & 68.1 & 65.3 & 71.1 & 73.5 & 68.9 \\ 2-way classification & 66.4 & 69.1 & 64.0 & **72.7** & 74.1 & 71.3 \\ \hline \hline \end{tabular} \end{table} Table III: Our main results (precision, recall, F1-score) using 3-way and 2-way classifier modeling for CDT extraction. combination with all four drugs mentioned: **chlorambucil** @DRUG@**rituximab** @DRUG@**ofatumumab** @DRUG@**obinuutzumab** @DRUG@**@POS@** 2. **Recognizing drugs that are not part of a combination**: There are occasions where the language contained in the sentence may not be conclusive enough to distinguish between a real combination in comparison with drugs that are being discussed as individual treatments. 
To drive this home, let's look at this sentence: "**Dexamethasone** and **piroiccam** provided in the diet were found to significantly inhibit lung tumors induced by 60 mg/kg vinyl carbamate at 24 weeks whereas **myoinositol** also provided in the diet, did not significantly inhibit tumor formation." Here the gold annotation indicated that **Dexamethasone** and **piroiccam** are in a POS relation but our model predicted that all three drugs (including **myo-inositol**) are part of a size-3 COMB relation. The model was unable to notice that **myo-inositol** is not part of the combination. 3. **Distinguishing between POS and COMB relations**: At times, it was not straightforward from the intra-sentence context to determine whether a combination is effective or non-positive. Getting these relations right may need more complex multi-hop reasoning across different sentences of the abstract. Case in point is this sentence: "Randomized trial of **lenalidomide** alone versus **lenalidomide** plus **rituximab** in patients with recurrent follicular lymphoma : CALGB 50401 (Alliance)." The gold annotation for this is a POS relation consisting of **lenalidomide** and **rituximab**. However, the model predicted a COMB relation, which seems quite plausible when we examine the sentence. In fact, searching for this sentence on PubMed shows that this is the title of an article describing a clinical trial. The title does not give away enough information about whether the combination was effective or not, but the full abstract which was used by human annotators explicitly says that the combination is effective. 4. **Need for external domain knowledge**: Training data may not have enough examples of indirect and implicit ways of communicating efficacy of medications. As such, external domain knowledge about biomedical concepts maybe needed to correctly capture certain relations. To demonstrate this, consider the sentence: "Growth inhibition and apoptosis were significantly higher in BxPC-3, HPAC, and PANC-1 cells treated with **celecoxib** and **erlotinib** than cells treated with either **celecoxib** or **erlotinib**." Here efficacy is implicitly conveyed through the phrases "growth inhibition" and "apoptosis" and the quantifier "higher". The former two refer to concepts in disease mechanisms indicating that tumor cell production is lowered or tumor cells are dying, both implying a potential therapeutic effect. Thus, the gold POS relation between **celecoxib** and **erlotinib** is missed by our model, which predicted as a COMB link. ## VI Experiments with Longer Contexts Thus far all our experiments were conducted using the input sentence that contained the drug names without considering any surrounding context from the enclosing full abstract. As was already alluded to in Section IV, 2/3 of the instances in **CombDrugExt** needed human annotators to consider other sentences outside the input sentence containing drug names [2] to determine the correct labels. This behooves us to build models that consider \(n\) sentences to the left and right of the main input sentence. We varied \(n\) from 1 to 4 to better assess Seq2Rel's ability to work with a broader context. We added a special [SEP] token surrounding the relation-bearing sentence to inform the model that the task only considers drug entities and combinations expressed in the target sentence while it is still allowed to consider the neighboring context outside it. 
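A sketch of how such inputs can be assembled is shown below: the relation-bearing sentence is wrapped in the marker tokens and padded with up to `n` neighbouring sentences on each side. The function is our own illustration of the construction described above, not the original implementation.

```python
def build_input(sentences, target_idx, n):
    """Wrap the relation-bearing sentence in [SEP] markers and add n sentences of context.

    sentences:  ordered list of abstract sentences
    target_idx: index of the sentence containing all candidate drugs
    n:          number of context sentences to include on each side
    """
    left = sentences[max(0, target_idx - n):target_idx]
    right = sentences[target_idx + 1:target_idx + 1 + n]
    marked = ["[SEP]", sentences[target_idx], "[SEP]"]
    return " ".join(left + marked + right)
```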
We present our evaluation results in Table VII, which clearly shows that considering additional context did not really help our situation. The first row (copy pasted from the last row of our main results Table IV) is clearly better than all other rows. A phenomenon that can be observed is that as \(n\) goes from 1 to 4, the recall increases monotonously (and so does \begin{table} \begin{tabular}{l c} \hline \hline **Model** & ANY-COMB\(F_{1}\) \\ \hline **Full 2-way model** & **72.7** \\ w/o. 2-way classification & 71.1 \\ w/o. post-processing & 68.8 \\ w/o. @DRUG@ entity type token & 69.5 \\ \hline \hline \end{tabular} \end{table} Table V: ANY-COMB ablated \(F_{1}\) scores on the test set. \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **Positive Combination \(F_{1}\)** & **Any Combination \(F_{1}\)** \\ \hline PURE (w/SciBERT) & 44.6 & 50.2 \\ PURE (w/BlueBERT) & 41.2 & 47.3 \\ PURE (w/BioBERT) & 45.4 & 46.7 \\ PURE (w/PubMedBERT) & 50.7 & 55.9 \\ \hline PURE (w/PubMedBERT+DAPT) & 61.8 & 69.4 \\ \hline Seq2Rel (w/PubMedBERT) **(end-to-end)** & **66.7** & **71.1** \\ \hline \hline \end{tabular} \end{table} Table IV: Our results (last row) compared with different baseline foundation models (DAPT means with continued domain-adaptive pretraining [9]) trained with the PURE method [10]. The baseline results (first 5 rows) are from Tiktinsky et al. [2]. the F1-score). This gives us the impression that more context seems to help but it is still not able to beat the score when no additional context is used. This is in contrast with the findings of Tiktinsky et al. [2] where extra context appeared to help. However, there is a crucial difference between their and our evaluation process -- the entities are provided and locked-in before the relations are predicted for them but our model is end-to-end and has to generate entities from the input. As such, the extra context may have confused our model in fetching the right entities, while it could have helped their span based approach with fixed entity locations. ## VII Drug NER Plus CST Experiment Please recall that the way the CDT extraction task was designed originally (Section II-B), the model only outputs drug entities that are part of a POS or COMB relations; drugs that are not part of any such relations are not output unless the input is a sentence where neither POS nor COMB relations occur. Thus named entity recognition (NER) of drugs in a comprehensive sense is not integral to the CDT task. This is a reasonable setting because in many cases, one may not be interested in drugs that do not form part of an interesting combination. However, there might be other scenarios where one may want to output all drug mentions even if they are not part of a relation. This could be for computing distributional statistics or to design more complex knowledge discovery applications. Since the annotation of CombrugExt does contain all drug spans (even those that do not participate in a relation), we can design a extended CDT task where both entities and relations are required to be output (regardless of whether entities are part of a relation). This is the task we model in this section. To also capture all drug entities, for POS and COMB relations, we extend the original linearization scheme by pre-pending all the entities mentioned in the sentence before relations are enumerated. For NOCOMB instances, only the entities are to be output (as the relation is self-explanatory when we do not see POS or COMB labels). 
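A sketch of the corresponding target construction is shown below, using the @NER@ delimiter token introduced with Table VI in the next paragraph; as before, this is illustrative code with hypothetical names rather than the released implementation.

```python
def linearize_with_ner(all_drugs, combinations):
    """Extended schema: list every drug mention first, then enumerate the relations.

    all_drugs:    every drug span in the sentence, in their original order
    combinations: list of (drug_list, label) pairs with label in {"POS", "COMB"}; an empty
                  list signals a NOCOMB instance, for which only the entity list is emitted.
    """
    target = "; ".join(all_drugs) + " @NER@"
    for drugs, label in combinations:
        target += " " + "; ".join(drugs) + f" @{label}@"
    return target

# e.g. linearize_with_ner(["dexamethasone", "piroxicam", "myo-inositol"],
#                         [(["dexamethasone", "piroxicam"], "POS")])
# -> "dexamethasone; piroxicam; myo-inositol @NER@ dexamethasone; piroxicam @POS@"
```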
Our extended linearization scheme is shown in Table VI, where we can see the new special token @NER@ as a delimiter to indicate the end of the entity list before relation enumeration begins. We also replaced the @DRUG@ token with a semicolon here for simplification, especially since we are already listing all drugs upfront. The first POS example shown in the table has three drugs listed for NER using this extended schema; **myo-inositol** would not have been part of the output in the original schema. From Table VIII, we see that a Seq2Rel model trained with this updated linearization schema (for both NER and CDT) achieves an F1-score of \(94\%\) for drug NER. However, there is a dip of around \(1\%\) in POS F1-score compared to the approach without the NER output (last row of Table IV). But surprisingly the ANY-COMB F1-score is clearly better (by at least \(1.3\%\)) than the corresponding scores obtained through both 2-way and 3-way models as per the original approach (from Table III). Given the ability to achieve high NER score, along with high scores for POS (within \(1\%\) of SoTA) and ANY-COMB (new SoTA), we believe this approach of jointly accounting for both entities and relations is better than the original approach that outputs only relations. ## VIII Left-to-right linearization schema In Section III-A, linearization simply followed the order in which the drug spans and combinations were ordered by the creators of CombrugExt. While this order was often the left-to-right order in the input sentence, this was not always the case. However, the original Seq2Rel method [3] uses a strict left-to-right ordering of entity spans as observed in the input sentence during training time. Of course, at test time, ordering is irrelevant as we are matching sets (not lists). We also experimented with this left-to-right ordering. That is, in the \begin{table} \begin{tabular}{l l} \hline \hline Relations & Example \\ \hline \multirow{3}{*}{Positive Combination} & **Dexamethasone** and **piroicam** provided in the diet were found to significantly inhibit lung tumors induced by 60 mg/kg vinyl carbamate at 24 weeks whereas **myo-inositol** also provided in the diet, did not significantly inhibit tumor formation. \\ \cline{2-3} & **Dexamethasone**; **piroicam**; **myo-inositol @NER@ Dexamethasone**; **piroicam @POS@** \\ \hline \multirow{3}{*}{Not a Combination} & The results showed that **lamotrigine** did not produce any change in cognitive function, while **carbamazepine** produced cognitive dysfunction. \\ \cline{2-3} & **lamotrigine**; **carbamazepine**@NER@** \\ \hline \hline \end{tabular} \end{table} Table VI: Examples of the extended linearization schema for both NER and relation extraction \begin{table} \begin{tabular}{l c c c} \hline \hline Model & \(F_{1}\) & \(P\) & \(R\) \\ \hline No additional context & **66.7** & 68.1 & 65.3 \\ 1 sentence of context & 58.3 & 63.2 & 54.0 \\ 2 sentences of context & 62.8 & 68.5 & 58.0 \\ 3 sentences of context & 65.7 & 69.9 & 62.0 \\ 4 sentences of context & 66.2 & 67.8 & 64.7 \\ \hline \hline \end{tabular} \end{table} Table VII: The effect of considering a context of \(n\) sentences on either side of the relation-bearing sentence. 
\begin{table} \begin{tabular}{l c c c} \hline \hline Task & \(F_{1}\) & P & R \\ \hline NER & 94.0 & 93.5 & 94.5 \\ \hline POS Combination CDT & 65.8 & 64.9 & 66.7 \\ Any Combination CDT & 74.0 & 75.9 & 72.2 \\ \hline \hline \end{tabular} \end{table} Table VIII: End-to-end drug NER and CDT extraction precision, recall, and \(F_{1}\)-scores for POS and ANY-COMB using the linearization schema from Table VI. linearized output, a combination involving drugs that appear earlier in the sentence appears before another combination whose constituent drugs appear later. If two combinations share a drug, the order of the remaining unshared drugs in the input sentence determines combination order in the output5. The results were very similar to results showed thus far in the paper with minor variations: \(<1\%\) performance shifts that are not always in favor of the original or left-to-right order. So we believe these variations are not worth discussing in the rest of this paper. Footnote 5: More formally, the drug spans form a finite total order based on their left to right positioning in the input sentence. Our linearization strategy is based on the well-known extension of a total order on a finite set (drug spans) to a total order on its power set (drug combinations). ## IX Related Work Biomedical relation extraction (RE) has mostly focused on a sub-problem, where the entities are assumed to be already provided, making it a relation classification problem [11, 12, 13, 14]. In the end-to-end setting, raw textual input is provided and the task is to simultaneously output both entities and relations. While there is merit in considering non end-to-end settings in complex RE problems, overall the end-to-end setup is more realistic for assessment and evaluation for application design. There has been a recent surge in end-to-end approaches in the general RE domain. These can be classified into two main types: (1) A pipeline approach that feeds the output of an NER model to a relation classification model [10]. (2) A joint modeling approach that simultaneously extracts both entities and relations [15, 16, 17]. In particular, sequence-to-sequence models for end-to-end relation extraction have been on the rise [18, 19, 3, 7] that tend to fall in the joint modeling group of methods. Especially, these approaches easily lend themselves to the dynamic \(n\)-ary nature of the CDT extraction task and hence we employ them in this paper. There are few efforts in biomedical natural language processing that handle drug-disease treatment relations. Kilicoglu et al. [20] present a broad coverage hybrid system to extract different types of relations, which also includes treatment relations. Dumitrache et al. [21] release one of the few publicly available datasets with both expert annotated and crowd sourced treatment relation annotations. \(n\)-ary relation extraction datasets are also not widely available in biomedicine with the major exception of drug-gene-mutation relations [22]. Finally, Tiktinsky et al. [2] are the first to annotate and publicly release a dataset for drug combinations, which forms the main focus of our manuscript. ## X Conclusion In this study, we adapted an end-to-end relation extraction approach based on sequence-to-sequence methods for extracting drug combinations that are part of combination drug therapies. The resulting Seq2Rel models use linearization schemes to encode entities and relations as output sequences for training the models and to use them in the generation mode for testing. 
We showed that this approach results in new state-of-the-art performances on the CombDrugExt dataset in the end-to-end setting that improve even over non end-to-end baselines (by nearly \(5\%\) in F1-score), established by prior efforts. We see five important directions for future efforts: * As highlighted in Section II-A, the CombDrugExt dataset does not include the target disease, which we believe is central to any future work on this problem. We propose to extend textbfCombDrugExt by specifically adding this extra piece of information to make it a more complete dataset for CDT extraction. * As per our results in Section VI, our method, as it stands, is unable to take advantage of the additional full abstract context available. We plan to further adapt Seq2Rel methods to take advantage of potential multi-hop links that are present in the broader context of the enclosing abstract. Based on manual examination of some errors, we believe co-references of the participating drug combinations may need to be handled more carefully. * A minor issue with order selection in linearization (Sections III-A and VIII) is that during training time, we unfairly penalize the target tokens if they do not exactly match the order we specify, even if they are in essence representing the same combination. We believe that an ensemble model where each constituent model trains with a different valid output order that nevertheless encodes the same combinations may perform much better than any single model trained with a fixed order. This is a hypothesis we aim to test in the future. * It may be important to qualify our end-to-end formulation in Section II-B with a caveat. Although we do not assume that the drug spans are available upfront, we still assume that the sentence that contains the names of all participating drugs is available. Ideally, for CDT extraction to be truly end-to-end, our method ought to extract relations given the full abstract without any index of the sentence containing the potential combination. A future direction would be to operate without the knowledge of the sentence containing the drug names. However, this might need additional human curated annotations and may be a more expensive process. * In Section V we identified different types of errors, one of which was due to the dependence on domain knowledge about treatment mechanisms through which drugs operate; we surmised that this knowledge may not have been available from the language variation in the training examples. A future direction would be to imbue this domain knowledge into the modeling process for CDT extraction, potentially through external knowledge graphs of biomedical processes. Of course, in this case, for fair comparison, this external knowledge should also be provided in some manner along with the dataset for benchmarking by the wider community.
2303.07753
A functorial approach to monomorphism categories II: Indecomposables
We investigate the (separated) monomorphism category $\operatorname{mono}(Q,\Lambda)$ of a quiver $Q$ over an Artin algebra $\Lambda$. We construct an epivalence from $\overline{\operatorname{mono}}(Q,\Lambda)$ to $\operatorname{rep}(Q,\overline{\operatorname{mod}}\, \Lambda)$, where $\operatorname{mod}\Lambda$ is the category of finitely generated modules and $\overline{\operatorname{mod}}\, \Lambda$ and $\overline{\operatorname{mono}}(Q,\Lambda)$ denote the respective injectively stable categories. Furthermore, if $Q$ has at least one arrow, then we show that this is an equivalence if and only if $\Lambda$ is hereditary. In general, it induces a bijection between indecomposable objects in $\operatorname{rep}(Q,\overline{\operatorname{mod}}\, \Lambda)$ and non-injective indecomposable objects in $\operatorname{mono}(Q,\Lambda)$. We show that the generalized Mimo-construction, an explicit minimal right approximation into $\operatorname{mono}{(Q,\Lambda)}$, gives an inverse to this bijection. Using this, we describe the indecomposables in the monomorphism category of a radical-square-zero Nakayama algebra, and give a bijection between the indecomposables in the monomorphism category of two artinian uniserial rings of Loewy length $3$ with the same residue field. These results are proved using free monads on an abelian category, in order to avoid the technical combinatorics arising from quiver representations. The setup also specializes to representations of modulations. In particular, we obtain new results on the singularity category of the algebras $H$ which were introduced by Geiss, Leclerc, and Schr\"oer in order to extend their results relating cluster algebras and Lusztig's semicanonical basis to symmetrizable Cartan matrices. We also recover results on the $\iota$quivers algebras which were introduced by Lu and Wang to realize $\iota$quantum groups via semi-derived Hall algebras.
Nan Gao, Julian Külshammer, Sondre Kvamme, Chrysostomos Psaroudakis
2023-03-14T09:59:09Z
http://arxiv.org/abs/2303.07753v3
# A functorial approach to monomorphism categories II: Indecomposables ###### Abstract. We investigate the (separated) monomorphism category \(\operatorname{mono}(Q,\Lambda)\) of a quiver \(Q\) over an Artin algebra \(\Lambda\). We show that there exists an epivalence (called a representation equivalence in the terminology of Auslander) from \(\operatorname{\overline{mono}}(Q,\Lambda)\) to \(\operatorname{rep}(Q,\operatorname{\overline{mod}}\Lambda)\), where \(\operatorname{mod}\Lambda\) is the category of finitely generated modules and \(\operatorname{\overline{mod}}\Lambda\) and \(\operatorname{\overline{mono}}(Q,\Lambda)\) denote the respective injectively stable categories. Furthermore, if \(Q\) has at least one arrow, then we show that this is an equivalence if and only if \(\Lambda\) is hereditary. In general, the epivalence induces a bijection between indecomposable objects in \(\operatorname{rep}(Q,\operatorname{\overline{mod}}\Lambda)\) and non-injective indecomposable objects in \(\operatorname{mono}(Q,\Lambda)\), and we show that the generalized Mimo-construction, an explicit minimal right approximation into \(\operatorname{mono}(Q,\Lambda)\), gives an inverse to this bijection. We apply these results to describe the indecomposables in the monomorphism category of a radical-square-zero Nakayama algebra, and to give a bijection between the indecomposables in the monomorphism category of two artinian uniserial rings of Loewy length \(3\) with the same residue field. The main tool to prove these results is the language of a free monad of an exact endofunctor on an abelian category. This allows us to avoid the technical combinatorics arising from quiver representations. The setup also specializes to more general settings, such as representations of modulations. In particular, we obtain new results on the singularity category of the algebras \(H\) which were introduced by Geiss, Leclerc, and Schröer in order to extend their results relating cluster algebras and Lusztig's semicanonical basis to symmetrizable Cartan matrices. We also recover results on the \(\iota\)quiver algebras which were introduced by Lu and Wang to realize \(\iota\)quantum groups via semi-derived Hall algebras. ###### Contents * 1 Introduction * 2 Preliminaries * 3 The monomorphism category of the free monad * 4 Injective objects * 5 The equivalence * 6 The Mimo-construction * 7 A characterization of the indecomposable objects in \(\operatorname{Mono}(X)\) * 8 Applications to quiver representations over Artin algebras * 9 Applications to modulations ## 1. Introduction The fundamental theorem of finitely generated abelian groups is one of the oldest classification results in algebra. It was implicitly stated by Kronecker in [14, SS1], and later more explicitly by Frobenius and Stickelberger in [15, IV, p. 231]. After its proof people started studying subgroups of finite abelian groups, with work by Miller [13, 14], Hilton [15], and Birkhoff [16]. In this case one isn't only interested in the abstract groups, but also in their embedding. The classification of all such embeddings can be reduced to determining all indecomposable embeddings into a finitely generated \(\mathbb{Z}/(p^{n})\)-module, for \(p\) a prime. As we will see, if two Artin algebras have equivalent injectively stable module categories, then there is a bijection between the non-injective indecomposable objects in their corresponding monomorphism categories. 
We discuss consequences of this in more detail in Subsection 8.1. Theorem A is particularly useful for comparing local uniserial rings of finite Loewy length, such as \(\mathbb{Z}/(p^{n})\) and \(\,\Bbbk[x]/(x^{n})\). On the one hand, there are several results on the representation theory of monomorphism categories over \(\,\Bbbk[x]/(x^{n})\). For example, the representation type of \(\operatorname{mono}(\mathbb{A}_{m},\operatorname{mod}\Bbbk[x]/(x^{n}))\) for \[\mathbb{A}_{m}=\mathtt{1}\to\mathtt{2}\to\cdots\to\mathtt{m}\] was given in [10], and the classification of indecomposables and the Auslander-Reiten quiver in the representation-finite and tame cases when \(m\leq 3\) were given in [11, 12]. On the other hand, there haven't been many new results for \(\Lambda=\mathbb{Z}/(p^{n})\), which might be surprising because of the similarity between \(\mathbb{Z}/(p^{n})\) and \(\,\Bbbk[x]/(x^{n})\), both being local uniserial rings of Loewy length \(n\). However, it can be explained by the lack of certain tools like covering theory. As a prominent example, the Auslander-Reiten quiver of \(\operatorname{mono}(\mathbb{A}_{2},\operatorname{mod}\Bbbk[x]/(x^{6}))\) was determined in [12] using covering theory. The analogous question for \(\operatorname{mono}(\mathbb{A}_{2},\operatorname{mod}\mathbb{Z}/(p^{6}))\) is the Birkhoff problem, which is still open. In general there is a hope that the representation theory of \(\operatorname{mono}(Q,\operatorname{mod}\Bbbk[x]/(x^{n}))\) and \(\operatorname{mono}(Q,\operatorname{mod}\mathbb{Z}/(p^{n}))\) is similar. The following result goes a long way towards confirming this when \(n\leq 3\). It is proved using Theorem A and the fact that there is a stable equivalence \[\overline{\operatorname{mod}}\,\mathbb{F}_{p}[x]/(x^{n})\cong\overline{ \operatorname{mod}}\,\mathbb{Z}/(p^{n})\] in this case. Here \(\mathbb{F}_{p}\) denotes the finite field with \(p\) elements. **Theorem B** (Theorem 8.13).: _Let \(Q\) be a finite acyclic quiver and \(n\) an integer less than or equal to \(3\). Then, there exists a bijection which preserves partition vectors between indecomposable objects in \(\operatorname{mono}(Q,\operatorname{mod}\mathbb{F}_{p}[x]/(x^{n}))\) and in \(\operatorname{mono}(Q,\operatorname{mod}\mathbb{Z}/(p^{n}))\)._ By partition vector we mean the following: For a representation \((M_{\mathtt{i}},M_{\alpha})\) over \(R=\mathbb{F}_{p}[x]/(x^{n})\) or \(R=\mathbb{Z}/(p^{n})\) each \(M_{\mathtt{i}}\) can be written as \(M_{\mathtt{i}}=R/\mathtt{m}^{n_{1}}\oplus R/\mathtt{m}^{n_{2}}\oplus\cdots \oplus R/\mathtt{m}^{n_{k}}\) where \(\mathtt{m}\) is the maximal ideal of \(R\) and \(n_{1}\geq n_{2}\geq\cdots\geq n_{k}\). The associated sequence of numbers \((n_{1},n_{2},\cdots,n_{k})\) is the partition of \(M_{\mathtt{i}}\). Doing this for each vertex \(\mathtt{i}\) gives rise to the partition vector of the representation \((M_{\mathtt{i}},M_{\alpha})\). Another application of Theorem A is a Gabriel-style classification of representation-finite monomorphism categories over \(\operatorname{rad}^{2}\)-zero Nakayama Artin algebras. Here \(m\) is the number of simple \(\Lambda\)-modules and \(t\) is the number of non-injective simple \(\Lambda\)-modules. **Theorem C** (Theorem 8.17).: _Let \(Q\) be a finite connected acyclic quiver and let \(\Lambda\) be a non-semisimple \(\operatorname{rad}^{2}\)-zero Nakayama Artin algebra. Then \(\operatorname{mono}(Q,\Lambda)\) is of finite type if and only if \(Q\) is Dynkin. 
In this case, the number of indecomposable objects is \(m\cdot|Q_{0}|+t\cdot|\Phi^{+}|\), where \(|\Phi^{+}|\) denotes the number of positive roots corresponding to the Dynkin type._ We don't assume our algebra to be linear over a field. In particular, the result applies to \(\mathbb{Z}/(p^{2})\). Assume \(\mathcal{B}\) is the module category of an Artin algebra. Then Theorem A provides a bijection \[\left\{\begin{aligned} &\text{Isomorphism classes of non-injective}\\ &\text{indecomposable objects in }\operatorname{mono}(Q,\mathcal{B})\end{aligned}\right\} \cong\left\{\begin{aligned} &\text{Isomorphism classes of}\\ &\text{indecomposable objects}\\ &\text{in }\operatorname{rep}(Q,\overline{\mathcal{B}})\end{aligned}\right\} \tag{1.0.1}\] given by the functor \(\operatorname{mono}(Q,\mathcal{B})\to\overline{\operatorname{mono}}(Q, \mathcal{B})\to\operatorname{rep}(Q,\overline{\mathcal{B}})\). In many cases, \(\operatorname{rep}(Q,\overline{\mathcal{B}})\) is easier to study than \(\operatorname{mono}(Q,\mathcal{B})\). For example, if \(\mathcal{B}\) is the module category of a \(\operatorname{rad}^{2}\)-zero Nakayama Artin algebra, then \(\overline{\mathcal{B}}\) is just the module category of a product of skew fields, and hence \(\operatorname{rep}(Q,\overline{\mathcal{B}})\) can be computed using classical methods. This is how Theorem C is shown. To make best use of the bijection (1.0.1), we would like to construct its inverse explicitly, so that we can obtain a description of the indecomposables in \(\operatorname{mono}(Q,\mathcal{B})\) from the ones in \(\operatorname{rep}(Q,\overline{\mathcal{B}})\). We explain the procedure to do this. Let \((B_{\mathtt{i}},B_{\alpha})\) be an object in \(\operatorname{rep}(Q,\overline{\mathcal{B}})\). For each vertex \(\mathtt{i}\), choose an object \(\hat{B}_{\mathtt{i}}\) in \(\mathcal{B}\) with no nonzero injective summands and which is isomorphic to \(B_{\mathtt{i}}\) in \(\overline{\mathcal{B}}\). For each arrow \(\alpha\colon\mathtt{i}\to\mathtt{j}\), choose a lift \(\hat{B}_{\alpha}\colon\hat{B}_{\mathtt{i}}\to\hat{B}_{\mathtt{j}}\) of \(B_{\alpha}\) to \(\mathcal{B}\). This gives a (non-unique) representation \((\widehat{B}_{\mathbf{i}},\widehat{B}_{\alpha})\) in \(\operatorname{rep}(Q,\mathcal{B})\). Next, take the minimal right \(\operatorname{mono}(Q,\mathcal{B})\)-approximation of \((\widehat{B}_{\mathbf{i}},\widehat{B}_{\alpha})\). This is denoted \(\operatorname{Mimo}(\widehat{B}_{\mathbf{i}},\widehat{B}_{\alpha})\) and called the Mimo-construction. It was introduced in [10] for \(\mathbb{A}_{2}\) and generalized in [11] to finite acyclic quivers. For an explicit formula, see Example 6.9 or [11, Section 3a]. **Theorem D** (Theorems 7.9 and 8.1).: _Let \(Q\) be a finite acyclic quiver and \(\mathcal{B}=\operatorname{mod}\Lambda\) for an Artin algebra \(\Lambda\). The association \((B_{\mathbf{i}},B_{\alpha})\mapsto\operatorname{Mimo}(\widehat{B}_{\mathbf{i} },\widehat{B}_{\alpha})\) above gives an inverse to (1.0.1)._ This theorem holds more generally if \(\mathcal{B}\) has injective envelopes and is noetherian, artinian, or locally noetherian. To complete the picture, we give a description of the indecomposable injective objects in \(\operatorname{mono}(Q,\mathcal{B})\). 
They are precisely given by \(f_{!}(J(\mathbf{i}))=(f_{!}(J(\mathbf{i}))_{\mathbf{j}},f_{!}(J(\mathbf{i}))_ {\alpha})\), where \(J\) is indecomposable injective in \(\mathcal{B}\), the object \(J(\mathbf{i})\) is \(J\) placed at the vertex \(\mathbf{i}\) (and \(0\) at all other vertices), and \[f_{!}(J(\mathbf{i}))_{\mathbf{k}}=\bigoplus_{\begin{subarray}{c}p\in Q_{ \geq 0}\\ s(p)=\mathbf{i},\,t(p)=\mathbf{k}\end{subarray}}J\quad\text{and}\quad f_{!}(J( \mathbf{i}))_{\alpha}\colon\bigoplus_{\begin{subarray}{c}p\in Q_{\geq 0}\\ s(p)=\mathbf{i},\,t(p)=\mathbf{k}\end{subarray}}J\to\bigoplus_{\begin{subarray}{c }p\in Q_{\geq 0}\\ s(p)=\mathbf{i},\,t(p)=\mathbf{l}\end{subarray}}J \tag{1.0.2}\] for an arrow \(\alpha\colon\mathbf{k}\to\mathbf{l}\), where \(f_{!}(J(\mathbf{i}))_{\alpha}\) is induced by the identity map \(J\xrightarrow{1}J\) between the components indexed by paths \(p\) and \(\alpha p\). We use this and Theorem D to give an explicit description of the indecomposable objects in monomorphism categories for linearly oriented \(\mathbb{A}_{n}\)-quivers, for a non-linearly oriented \(\mathbb{A}_{4}\)-quiver, and for the Kronecker quiver, see Subsections 8.3 and 8.4. In most of the proofs we use the more abstract language of monads and Eilenberg-Moore categories, similar to [1]. This is to avoid the technical combinatorics arising from quiver representations, e.g. from the Mimo-construction and \(f_{!}\) above. To see how they relate, consider the endofunctor \[X\colon\mathcal{C}\to\mathcal{C},\qquad X\big((B_{\mathbf{i}})_{\mathbf{i}\in Q_{0} }\big)=\left(\bigoplus_{\alpha\in Q_{1},\,t(\alpha)=\mathbf{j}}B_{s(\alpha)}\right)_ {\mathbf{j}\in Q_{0}}\] on \(\mathcal{C}=\prod_{\mathbf{i}\in Q_{0}}\mathcal{B}\). The data of a representation of \(Q\) in \(\mathcal{B}\) is equivalent to an object \(C\in\mathcal{C}\) and a morphism \(X(C)\to C\). Furthermore, the representation lies in the monomorphism category if and only if \(X(C)\to C\) is a monomorphism. Similarly, the data of a representation of \(Q\) in \(\overline{\mathcal{B}}\) is equivalent to an object \(C\in\overline{\mathcal{C}}\) and a morphism \(X(C)\to C\) in \(\overline{\mathcal{C}}\). It follows from this that there are equivalences \[\operatorname{rep}(Q,\mathcal{B})\cong\mathcal{C}^{T(X)}\quad\text{and}\quad \operatorname{Mono}(X)\cong\operatorname{Mono}(Q,\mathcal{B})\quad\text{and} \quad\operatorname{rep}(Q,\overline{\mathcal{B}})\cong\overline{\mathcal{C} }^{T(X)}\] where \(\mathcal{C}^{T(X)}\) and \(\overline{\mathcal{C}}^{T(X)}\) denote the Eilenberg-Moore categories of the free monad \(T(X)\) on \(\mathcal{C}\) and \(\overline{\mathcal{C}}\), respectively, and \(\operatorname{Mono}(X)\) is the full subcategory of \(\mathcal{C}^{T(X)}\) where \(X(C)\to C\) is a monomorphism. In this language several of the constructions become easier and more conceptual. For example, \(f_{!}\) is the left adjoint of the forgetful functor \(f^{*}\colon\mathcal{C}^{T(X)}\to\mathcal{C}\) from the Eilenberg-Moore category, and the Mimo-construction, given by a rather complicated explicit formula, is just obtained by taking a particular pushout, see Definitions 5.1 and 6.1. Our results hold for any exact, locally nilpotent endofunctor \(X\) which preserves injectives on an abelian category. The constructions and proofs use the Eilenberg-Moore category of its free monad as illustrated above. Since we are working in a more general setting, our result also applies to generalizations of quiver representations, such as representations of modulations, see Example 3.9. 
They are known under several different names in the literature (and with varying hypotheses), such as representations of pro-species of algebras in [12], representations of phyla in [1], representations over diagrams of abelian categories in [10], and twisted representations in [13]. Monomorphism categories of modulations have connections to Gorenstein homological algebra [12, 10], and to cotorsion pairs and model structures [10]. An important class of examples is given by pro-species over selfinjective rings. More explicitly, given a quiver \(Q\), one associates to each vertex \(\mathbf{i}\) a selfinjective algebra \(\Lambda_{\mathbf{i}}\), and to each arrow \(\alpha\colon\mathbf{i}\to\mathbf{j}\) a \(\Lambda_{\mathbf{i}}\)-\(\Lambda_{\mathbf{j}}\)-bimodule \(M_{\alpha}\) which is projective as a left \(\Lambda_{\mathbf{i}}\)-module and as a right \(\Lambda_{\mathbf{j}}\)-module. It turns out that the monomorphism category of the corresponding modulation is equal to the category of Gorenstein projective modules over the tensor algebra \(T(M)\) of \(M=\bigoplus_{\alpha\in Q_{1}}M_{\alpha}\) over \(\Lambda=\prod_{\mathtt{i}\in Q_{0}}\Lambda_{\mathtt{i}}\), and its stable category is equivalent to the singularity category of \(T(M)\), see [10]. One of the most important classes of pro-species over selfinjective rings arises from the GLS-algebras \(H=H(C,D,\Omega)\) introduced in [11]. Here \(C=(c_{\mathtt{i},\mathtt{j}})\) is a symmetrizable Cartan matrix with symmetrizer \(D=\operatorname{diag}(d_{\mathtt{i}})\) and orientation \(\Omega\). We can apply our results to study the indecomposable non-projective Gorenstein projective \(H\)-modules, or equivalently the indecomposable objects in the singularity category of \(H\). An algebra is called Cohen-Macaulay finite if it has finitely many finitely generated indecomposable Gorenstein-projective modules up to isomorphism. **Theorem E**.: _Let \(C=(c_{\mathtt{i},\mathtt{j}})_{\mathtt{i},\mathtt{j}\in I}\) be a symmetrizable Cartan matrix with symmetrizer \(D=\operatorname{diag}(d_{\mathtt{i}}\mid\mathtt{i}\in I)\) and orientation \(\Omega\). Assume \(d_{\mathtt{i}}\leq 2\) for all \(\mathtt{i}\in I\). Let \(I^{\prime}\subseteq I\) be the subset of all elements \(\mathtt{i}\) for which \(d_{\mathtt{i}}=2\), and let \(C|_{I^{\prime}\times I^{\prime}}=(c_{\mathtt{i},\mathtt{j}})_{\mathtt{i}, \mathtt{j}\in I^{\prime}}\) be the corresponding submatrix of \(C\). Then \(H=H(C,D,\Omega)\) is Cohen-Macaulay finite if and only if \(C|_{I^{\prime}\times I^{\prime}}\) is Dynkin as a symmetric Cartan matrix. Furthermore, in this case there is a bijection between the positive roots of \(C|_{I^{\prime}\times I^{\prime}}\) and the isomorphism classes of indecomposable objects in the singularity category of \(H\)._ More generally, we show that there is a bijection between isomorphism classes of indecomposable objects in the singularity category of \(H\), and finite-dimensional indecomposable representations over the quiver determined by the Cartan matrix \(C|_{I^{\prime}\times I^{\prime}}\) with orientation \(\Omega|_{I^{\prime}\times I^{\prime}}\). For this no finiteness assumptions are necessary. It is a consequence of a more general result on modulations over \(\operatorname{rad}^{2}\)-zero cyclic Nakayama algebras, see Theorem 9.3. 
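As a small sanity check of Theorem E, consider the following instance (our own illustration, using only the data in the statement of the theorem):
\[
C=\begin{pmatrix}2&-1\\-1&2\end{pmatrix}\ (\text{type }\mathbb{A}_{2}),\qquad D=\operatorname{diag}(2,2),\qquad\text{so}\qquad I^{\prime}=I=\{1,2\},\quad C|_{I^{\prime}\times I^{\prime}}=C.
\]
Here \(D\) is a symmetrizer since \(DC=2C\) is symmetric, and \(C|_{I^{\prime}\times I^{\prime}}\) is Dynkin of type \(\mathbb{A}_{2}\). Hence, for either orientation \(\Omega\), the algebra \(H(C,D,\Omega)\) should be Cohen-Macaulay finite with exactly \(|\Phi^{+}(\mathbb{A}_{2})|=3\) isomorphism classes of indecomposable objects in its singularity category.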
This bijection is obtained from an analogue of the epivalence in Theorem A for modulations, and an analogue of Theorem D gives an explicit description of the indecomposable object in the singularity category associated to a given representation. The more general result also applies to modules over the \(\iota\)quiver algebras studied in [14, 15, 16], see Subsection 9.2. There the category of Gorenstein projective modules plays an important role, in particular since it is used to realize \(\iota\)quantum groups via Hall algebras [14, 15], and since for Dynkin quivers it is equivalent to the category of projectives over the regular Nakajima-Keller-Scherotzke categories [14]. The structure of the paper is as follows. Section 2 contains the necessary background on monads and exact categories. In Section 3 we define the monomorphism category of the free monad on an endofunctor. The section is divided into four parts: In Subsection 3.1 we define the free monad of an endofunctor, and provide examples of it. In Subsection 3.2 we restrict to the abelian case. Subsection 3.3 introduces the top functor and recalls its basic properties. Finally, Subsection 3.4 introduces the key player of the paper, the monomorphism category. Section 4 deals with injective objects and the existence of injective envelopes for the monomorphism category. In Section 5 we prove Theorem A. It starts with a discussion of contravariant finiteness of the monomorphism category in Subsection 5.1, proceeds with the proof of the existence of the equivalence in Subsection 5.2, and finishes by discussing the hereditary case in Subsection 5.3. Section 6 introduces the Mimo-construction in our language. Subsection 6.1 deals with the general case while Subsection 6.2 makes the setup explicit in the case of modulations. Section 7 contains the proof of Theorem D, with a short discussion on maximal injective summands in the beginning. Section 8 discusses applications to representations of quivers over Artin algebras. Subsection 8.1 is on stable equivalences and the induced bijections between indecomposables in monomorphism categories. In Subsection 8.2 we prove Theorem B and discuss connections to other results in the literature. In Subsection 8.3 we prove Theorem C, and explicitly compute indecomposables for monomorphism categories over \(\operatorname{rad}^{2}\)-zero Nakayama algebras, using Theorem D. In Subsection 8.4 we compute the indecomposables in the monomorphism category of the Kronecker quiver over \(\Bbbk[x]/(x^{2})\). In Section 9 we apply our results to representations of modulations over \(\operatorname{rad}^{2}\)-zero cyclic Nakayama algebras to obtain Theorem 9.3. We then consider \(\iota\)quiver algebras and modules over the GLS algebras in Subsection 9.2, and finish by proving Theorem E from the introduction. ## 2. Preliminaries ### Notation We ignore set-theoretic issues in this paper. All categories are assumed to be additive and idempotent-complete, and all functors are assumed to be additive. For functors \(F\colon\mathcal{C}\to\tilde{\mathcal{C}}\) and \(G\colon\tilde{\mathcal{C}}\to\mathcal{C}\) we write \(F\dashv G\) to denote that \(F\) is left adjoint to \(G\). The unit and counit of the adjunction are denoted by \(\eta\colon\operatorname{Id}\to GF\) and \(\varepsilon\colon FG\to\operatorname{Id}\), respectively. 
For a ring \(\Lambda\) we let \(\operatorname{Mod}\Lambda\), respectively \(\operatorname{mod}\Lambda\), denote the category of right \(\Lambda\)-modules, respectively finitely presented right \(\Lambda\)-modules. Throughout the paper \(\Bbbk\) is a commutative ring. ### Monads and modules over monads In this subsection we recall the definition of a monad and its Eilenberg-Moore category, depending on the context also called the category of algebras or the category of modules over the monad. Our main examples arise from modulations on quivers, see Example 3.9. **Definition 2.1**.: Let \(\mathcal{C}\) be a category. A _monad_ on \(\mathcal{C}\) is a tuple \((T,\eta,\mu)\) where \(T\colon\mathcal{C}\to\mathcal{C}\) is a functor and \(\eta\colon\operatorname{Id}\to T\) and \(\mu\colon T^{2}\to T\) are natural transformations such that the usual associativity and unit diagrams commute, i.e. \(\mu_{M}\circ T(\mu_{M})=\mu_{M}\circ\mu_{T(M)}\) and \(\mu_{M}\circ T(\eta_{M})=1_{T(M)}=\mu_{M}\circ\eta_{T(M)}\) for every object \(M\) in \(\mathcal{C}\). By standard abuse of notation we sometimes denote a monad \((T,\eta,\mu)\) simply by \(T\). An important source of monads is given by adjunctions: If \(\eta\) and \(\varepsilon\) denote the unit and counit of an adjunction \(L\dashv R\), then the tuple \((R\circ L,\eta,R(\varepsilon_{L}))\) defines a monad. We refer to [10, Section VI.1, p.138] for more details on this. Conversely, monads give rise to adjunctions via the Eilenberg-Moore category, which we recall next. This is sometimes called the category of algebras or the category of modules over the monad in the literature. **Definition 2.2**.: Let \((T,\eta,\mu)\) be a monad on \(\mathcal{C}\). 1. A _\(T\)-module_ is a pair \((M,h)\) where \(M\) is an object in \(\mathcal{C}\) and \(h\colon T(M)\to M\) is a morphism satisfying \(h\circ\eta_{M}=1_{M}\) and \(h\circ\mu_{M}=h\circ T(h)\). 2. The _Eilenberg-Moore category_\(\mathcal{C}^{T}\) of \(T\) is the category whose objects are \(T\)-modules, and where a morphism \(g\colon(M,h)\to(M^{\prime},h^{\prime})\) between \(T\)-modules is given by a morphism \(g\colon M\to M^{\prime}\) in \(\mathcal{C}\) satisfying \(g\circ h=h^{\prime}\circ T(g)\). 3. \(f^{*}\colon\mathcal{C}^{T}\to\mathcal{C}\) denotes the forgetful functor given by \(f^{*}(M,h)=M\) and \(f^{*}(g)=g\). 4. \(f_{!}\colon\mathcal{C}\to\mathcal{C}^{T}\) denotes the functor given by \(f_{!}(M)=(T(M),\mu_{M})\) and \(f_{!}(g)=T(g)\). **Proposition 2.3**.: _Let \(T\) be a monad on \(\mathcal{C}\). Then \(f^{*}\colon\mathcal{C}^{T}\to\mathcal{C}\) is right adjoint to \(f_{!}\colon\mathcal{C}\to\mathcal{C}^{T}\)._ Proof.: This follows from [10, Theorem VI.2.1]. We fix \(\eta\) and \(\varepsilon\) to be the unit and counit of the adjunction \(f_{!}\dashv f^{*}\). We finish by giving a sufficient criterion for the Eilenberg-Moore category to be abelian. **Proposition 2.4**.: _[_1_, Proposition 5.3]_ _Let \(\mathcal{C}\) be an abelian category, and let \((T,\eta,\mu)\) be a monad on \(\mathcal{C}\). Assume that \(T\) is a right exact functor. The following hold:_ 1. _The Eilenberg-Moore category_ \(\mathcal{C}^{T}\) _is abelian._ 2. _A sequence_ \((M,h)\to(M^{\prime},h^{\prime})\to(M^{\prime\prime},h^{\prime\prime})\) _in_ \(\mathcal{C}^{T}\) _is exact if and only if the sequence_ \(M\to M^{\prime}\to M^{\prime\prime}\) _in_ \(\mathcal{C}\) _is exact._ ### Exact categories Here we recall some basic properties of exact categories, in particular results on injective envelopes. 
An _exact category_ is an additive category \(\mathcal{E}\) endowed with a class of kernel-cokernel pairs, called _conflations_, satisfying certain properties, see [11, Appendix A] or [1] for more details. If \(E_{1}\overset{i}{\to}E_{2}\xrightarrow{p}E_{3}\) is a conflation, then \(i\) is called an _inflation_ and \(p\) is called a _deflation_. Let \(\mathcal{E}^{\prime}\) be a full subcategory of an exact category \(\mathcal{E}\). We say that \(\mathcal{E}^{\prime}\) is _extension-closed_ if for any conflation \(E_{1}\to E_{2}\to E_{3}\) with \(E_{1}\) and \(E_{3}\) in \(\mathcal{E}^{\prime}\), the middle term \(E_{2}\) must be in \(\mathcal{E}^{\prime}\). In this case \(\mathcal{E}^{\prime}\) inherits an exact structure whose conflations are the conflations in \(\mathcal{E}\) where all the terms are in \(\mathcal{E}^{\prime}\) see [1, Lemma 10.20]. An object \(I\) in an exact category \(\mathcal{E}\) is called _injective_ if for any inflation \(E\to E^{\prime}\) the morphism \(\operatorname{Hom}_{\mathcal{E}}(E^{\prime},I)\to\operatorname{Hom}_{\mathcal{E }}(E,I)\) is surjective. The exact category \(\mathcal{E}\) is said to have _enough injectives_ if for any object \(E\) in \(\mathcal{E}\) there exists an inflation \(E\to I\) with \(I\) injective. In this case we let \(\overline{\mathcal{E}}\) denote the quotient of \(\mathcal{E}\) by the ideal of morphisms factoring through an injective object. Let \(g\colon E_{1}\to E_{2}\) be a morphism in \(\mathcal{E}\). We say that \(g\) is _left minimal_ if any morphism \(g^{\prime}\colon E_{2}\to E_{2}\) satisfying \(g^{\prime}\circ g=g\) is an isomorphism. An _injective envelope_ of an object \(E\) in \(\mathcal{E}\) is a left minimal inflation \(i\colon E\to I\) where \(I\) is injective. Note that an injective envelope of an object is unique up to isomorphism. The _radical_ of \(\mathcal{E}\) is the ideal defined by \[\operatorname{Rad}_{\mathcal{E}}(E,E^{\prime})\coloneqq\{g\in\operatorname{ Hom}_{\mathcal{E}}(E,E^{\prime})\mid 1_{E^{\prime}}-g\circ g^{\prime}\text{ is invertible for all }g^{\prime}\in \operatorname{Hom}_{\mathcal{E}}(E^{\prime},E)\}\] for all \(E,E^{\prime}\in\mathcal{E}\). Recall that the radical is symmetric, and so we have the equality \[\operatorname{Rad}_{\mathcal{E}}(E,E^{\prime})\coloneqq\{g\in\operatorname{ Hom}_{\mathcal{E}}(E,E^{\prime})\mid 1_{E}-g^{\prime}\circ g\text{ is invertible for all }g^{\prime}\in \operatorname{Hom}_{\mathcal{E}}(E^{\prime},E)\}.\] We refer to [11, Section 2.1] for more information on injective envelopes, and to [11, Section 2] and [12] for more information on the radical. We only need the following results relating them. The proof of the second claim is based on the argument given in [10, Theorem 1.2]. **Lemma 2.5**.: _Let \(\mathcal{E}\) be a exact category, and let \(E\in\mathcal{E}\). The following hold:_ 1. _If_ \(i\colon E\to I\) _is an injective envelope, then_ \(p\colon I\to\operatorname{coker}i\) _is in the radical of_ \(\mathcal{E}\)_._ 2. _Assume_ \(\mathcal{E}\) _is abelian with the natural exact structure. Furthermore, assume_ \(\mathcal{E}\) _has injective envelopes. Let_ \(g\colon J\to E\) _be a morphism in_ \(\mathcal{E}\)_. If_ \(J\) _is injective and_ \(E\) _has no nonzero injective summands, then the inclusion_ \(\ker g\to J\) _is an injective envelope. In particular,_ \(g\) _is in the radical of_ \(\mathcal{E}\)_._ Proof.: To prove statement (1) let \(k\colon\operatorname{coker}i\to I\) be an arbitrary morphism. 
Then the endomorphism \(k^{\prime}=1_{I}-k\circ p\) satisfies \(k^{\prime}\circ i=i\). Since \(i\) is an injective envelope, we must have that \(k^{\prime}\) is an isomorphism. Since \(k\) was arbitrary, it follows that \(p\) is in the radical of \(\mathcal{C}\). To prove (2) let \(u\colon\ker g\to E(\ker g)\) denote the injective envelope of \(\ker g\). Then the monomorphism \(i\colon\ker g\to J\) lifts to a split monomorphism \(E(\ker g)\to J\). Consider the commutative diagram where the lower row is a split exact sequence Since the left square commutes, we get an induced morphism \(\operatorname{coker}i\to J^{\prime}\). Since \(J^{\prime}\) is injective, we can lift this to a morphism \(E\to J^{\prime}\) such that the rightmost square in the diagram commutes. Since \(J\to J^{\prime}\) is a split epimorphism, the morphism \(E\to J^{\prime}\) is a split epimorphism. Therefore \(J^{\prime}\) must be a summand of both \(J\) and \(E\). But being a summand of \(J\) implies that \(J^{\prime}\) is injective, and \(E\) has no nonzero injective summands. Therefore \(J^{\prime}=0\). Hence \(i\) is an injective envelope. Since \(g\) factors through the cokernel of \(i\), it must be in the radical by part (1). We finish by proving a uniqueness result on the decomposition of an object by a maximal injective summand. It holds for abelian categories with injective envelopes by the previous lemma. It also holds for the monomorphism category by Lemma 6.5. **Lemma 2.6**.: _Let \(\mathcal{E}\) be an exact category, and let \(E_{1}\) and \(E_{2}\) be objects in \(\mathcal{E}\). Assume that any morphism from an injective object to \(E_{1}\) or \(E_{2}\) is in the radical of \(\mathcal{E}\). Let_ \[E_{1}\oplus I_{1}\xrightarrow{\cong}E_{2}\oplus I_{2}\] _be an isomorphism where \(I_{1}\) and \(I_{2}\) are injective. Then the restrictions \(E_{1}\to E_{2}\) and \(I_{1}\to I_{2}\) are isomorphisms._ Proof.: Let \[\phi=\begin{pmatrix}\phi_{1}&\phi_{2}\\ \phi_{3}&\phi_{4}\end{pmatrix}\colon E_{1}\oplus I_{1}\to E_{2}\oplus I_{2} \quad\text{and}\quad\psi=\begin{pmatrix}\psi_{1}&\psi_{2}\\ \psi_{3}&\psi_{4}\end{pmatrix}\colon E_{2}\oplus I_{2}\to E_{1}\oplus I_{1}\] denote the isomorphism and its inverse. Then we have that \(\psi_{3}\circ\phi_{2}+\psi_{4}\circ\phi_{4}=1_{I_{1}}\). By assumption, \(\phi_{2}\) is in the radical of \(\mathcal{E}\). Hence, by definition of the radical the composite \(\psi_{4}\circ\phi_{4}=1_{I_{1}}-\psi_{3}\circ\phi_{2}\) is an isomorphism. By a similar argument the composite \(\phi_{4}\circ\psi_{4}\) is also an isomorphism. Hence \(\phi_{4}\colon I_{1}\to I_{2}\) must itself be an isomorphism. Now consider the commutative diagram with split exact rows Since the two leftmost vertical maps are isomorphisms, the map \(\phi_{1}\colon E_{1}\to E_{2}\) must be an isomorphism. This proves the claim. ## 3. The monomorphism category of the free monad Fix a k-linear additive category \(\mathcal{C}\). Throughout the section \(X\colon\mathcal{C}\to\mathcal{C}\) is assumed to be a k-linear functor which is _locally nilpotent_, i.e. for any object \(M\in\mathcal{C}\) there exists an \(n\geq 0\) such that \(X^{n}(M)=0\). From Subsection 3.2 we assume \(\mathcal{C}\) is abelian and \(X\) is exact, and from Section 4 onwards we also assume that \(X\) preserves injective objects. The assumptions on \(X\) are introduced to capture the essential properties of monads arising from representations of finite acyclic quivers in additive and abelian categories, see Example 3.11. 
This is done using the free monad on \(X\), which we define. In addition to this we define the monomorphism category and the analogue of the top functor. **Remark 3.1**.: The assumption we make on \(X\) differs from the ones in [10]. Several of the results in [10] rely on the existence of a relative Nakayama functor on the Eilenberg-Moore category of the free monad on \(X\), and this is not necessary to assume in this paper. On the other hand, many of the proofs in [10] still go through. Another technical condition we use in [10] is a relative version of Nakayama's lemma, which says that if \(M\) is non-zero, then there are no epimorphisms \(X(M)\twoheadrightarrow M\), see [10, Lemma 6.25]. Under the assumption that \(X\) is locally nilpotent, this is automatically satisfied when \(X\) preserves epimorphisms, and in particular if \(X\) is an exact functor on an abelian category. ### The free monad The definition of the free monad mimics the construction of the path algebra of a quiver, and more generally the tensor algebra of a bimodule over an algebra. **Definition 3.2**.: The _free monad_ on \(X\) is the monad \((T(X),\eta,\mu)\) where \(T(X)\colon\mathcal{C}\to\mathcal{C}\) is given by \[T(X)(M)=\coprod_{i\geq 0}X^{i}(M)\] and where \(\eta\colon\operatorname{Id}\to T(X)\) is the canonical inclusion and \(\mu\colon T(X)\circ T(X)\to T(X)\) is given componentwise by the canonical identification \(X^{i}X^{j}(M)\xrightarrow{\cong}X^{i+j}(M)\). Since \(X\) is locally nilpotent, the coproduct \(\coprod_{i\geq 0}X^{i}(M)\) is finite for each object \(M\in\mathcal{C}\). We therefore identify it with the direct sum \(\bigoplus_{i\geq 0}X^{i}(M)\). **Remark 3.3**.: In [10] it is assumed that \(X\) preserves countable coproducts when defining the free monad. However this is only used to conclude that the canonical map \(X(\coprod_{i\geq 0}X^{i}(M))\to\coprod_{i\geq 1}X^{i}(M)\) is an isomorphism, which follows here from the fact that since \(X\) is locally nilpotent, the coproduct is finite and is therefore preserved by the additive functor \(X\). The Eilenberg-Moore category of a free monad \(T(X)\) has an alternative simpler description. Let \((X\Downarrow\operatorname{Id}_{\mathcal{C}})\) be the category whose objects are pairs \((M,h_{1})\) where \(M\in\mathcal{C}\) and \(h_{1}\colon X(M)\to M\) is a morphism, and where a morphism \(g\colon(M,h_{1})\to(M^{\prime},h_{1}^{\prime})\) in \((X\Downarrow\operatorname{Id}_{\mathcal{C}})\) is a morphism \(g\colon M\to M^{\prime}\) in \(\mathcal{C}\) satisfying \(g\circ h_{1}=h_{1}^{\prime}\circ X(g)\). Note that we have a functor \[\mathcal{C}^{T(X)}\to(X\Downarrow\operatorname{Id}_{\mathcal{C}})\quad( \bigoplus_{i\geq 0}X^{i}(M)\xrightarrow{h}M)\mapsto(M,h_{1})\] where \(h_{1}\colon X(M)\to M\) is the restriction of \(h\) to \(X(M)\). **Lemma 3.4**.: _The functor \(\mathcal{C}^{T(X)}\to(X\Downarrow\operatorname{Id}_{\mathcal{C}})\) above is an isomorphism of categories._ Proof.: This follows from the proof of Lemma 5.18 in [10]. We will identify the categories \(\mathcal{C}^{T(X)}\) and \((X\Downarrow\operatorname{Id}_{\mathcal{C}})\). We use sans serif typestyle \(\mathsf{M},\mathsf{N},\ldots\) to denote objects in \(\mathcal{C}^{T(X)}\), so that the same letter without the sans serif typestyle denotes the underlying object in \(\mathcal{C}\), i.e. \(M=f^{*}(\mathsf{M})\) and \(N=f^{*}(\mathsf{N})\). The induced morphism \(X(M)\to M\) is then denoted by \(h_{\mathsf{M}}\) and called the _structure map_ of \(\mathsf{M}\). 
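To make Definition 3.2 and the identification \(\mathcal{C}^{T(X)}\cong(X\Downarrow\operatorname{Id}_{\mathcal{C}})\) concrete, here is a small worked instance, anticipating Example 3.11 (the computation is ours and is not spelled out at this point in the text):
\[
\begin{aligned}
&Q=\mathbb{A}_{2}=(\mathtt{1}\to\mathtt{2}),\qquad \mathcal{C}=\mathcal{B}\times\mathcal{B},\qquad X(B_{\mathtt{1}},B_{\mathtt{2}})=(0,B_{\mathtt{1}}),\qquad X^{2}=0,\\
&T(X)(B_{\mathtt{1}},B_{\mathtt{2}})=(B_{\mathtt{1}},B_{\mathtt{2}})\oplus(0,B_{\mathtt{1}})=(B_{\mathtt{1}},\,B_{\mathtt{1}}\oplus B_{\mathtt{2}}).
\end{aligned}
\]
A \(T(X)\)-module structure on \(M=(M_{\mathtt{1}},M_{\mathtt{2}})\) amounts to a morphism \(h_{\mathsf{M}}\colon X(M)=(0,M_{\mathtt{1}})\to(M_{\mathtt{1}},M_{\mathtt{2}})\), i.e. to a single morphism \(M_{\mathtt{1}}\to M_{\mathtt{2}}\) in \(\mathcal{B}\), so that \(\mathcal{C}^{T(X)}\) recovers \(\operatorname{rep}(\mathbb{A}_{2},\mathcal{B})\); moreover \(f_{!}(M_{\mathtt{1}},M_{\mathtt{2}})=(M_{\mathtt{1}},\,M_{\mathtt{1}}\oplus M_{\mathtt{2}})\) with structure map the canonical inclusion.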
In Subsection 2.2 we saw that the forgetful functor \(f^{*}\colon\mathcal{C}^{T(X)}\to\mathcal{C}\) has a left adjoint \(f_{!}\colon\mathcal{C}\to\mathcal{C}^{T(X)}\). It is given by \(f_{!}(M)=(\bigoplus_{i\geq 0}X^{i}(M),\iota_{M})\) where the structure map \(\iota_{M}\) is the canonical inclusion \[\iota_{M}\colon\bigoplus_{i\geq 1}X^{i}(M)\to\bigoplus_{i\geq 0}X^{i}(M) \tag{3.4.1}\] Any summand of an object of the form \(f_{!}(M)\) is called _relative projective_. If \(\mathcal{C}=\operatorname{Mod}\Lambda\) for a semisimple Artin algebra \(\Lambda\), then the relative projectives coincide with the projectives in \(\mathcal{C}^{T(X)}\), cf. Proposition 4.3. The following lemma implies that the relative projectives behave as projectives for objectwise split epimorphisms in \(\mathcal{C}^{T(X)}\). **Lemma 3.5**.: _Let \(N\) be an object in \(\mathcal{C}\) and let \(g\colon\mathsf{M}\to\mathsf{M}^{\prime}\) be a morphism in \(\mathcal{C}^{T(X)}\). Assume \(f^{*}(g)\) is a split epimorphism. Then any morphism \(f_{!}(N)\to\mathsf{M}^{\prime}\) factors through \(g\)._ Proof.: By the adjunction \(f_{!}\dashv f^{*}\), the map \[\operatorname{Hom}_{\mathcal{C}^{T(X)}}(f_{!}(N),g)\colon\operatorname{Hom}_{ \mathcal{C}^{T(X)}}(f_{!}(N),\mathsf{M})\to\operatorname{Hom}_{\mathcal{C}^{ T(X)}}(f_{!}(N),\mathsf{M}^{\prime})\] is isomorphic to the map \[\operatorname{Hom}_{\mathcal{C}}(N,f^{*}(g))\colon\operatorname{Hom}_{ \mathcal{C}}(N,f^{*}(\mathsf{M}))\to\operatorname{Hom}_{\mathcal{C}}(N,f^{*}( \mathsf{M}^{\prime})).\] The latter must be an epimorphism since \(f^{*}(g)\) is a split epimorphism. This proves the claim. A sequence \(\mathsf{M}_{1}\to\mathsf{M}_{2}\to\mathsf{M}_{3}\) in \(\mathcal{C}^{T(X)}\) is called _objectwise split exact_ if the sequence \[f^{*}(\mathsf{M}_{1})\to f^{*}(\mathsf{M}_{2})\to f^{*}(\mathsf{M}_{3})\] is split exact in \(\mathcal{C}\). We show that such a sequence is an exact sequence in \(\mathcal{C}^{T(X)}\), i.e. a kernel-cokernel pair. Note that this holds even though \(\mathcal{C}^{T(X)}\) is not assumed to be abelian. We also show that the free monad \(T(X)\) has relative global dimension one, i.e. any object in \(\mathcal{C}^{T(X)}\) has an objectwise split resolution of length one by relative projective objects. Here the structure morphism of \(f_{!}(M)\) is considered as a morphism \(\iota_{M}\colon f_{!}X(M)\to f_{!}(M)\) in \(\mathcal{C}^{T(X)}\). **Lemma 3.6**.: _The following hold:_ 1. _Any objectwise split exact sequence is an exact sequence._ 2. _For each_ \(\mathsf{M}\in\mathcal{C}^{T(X)}\) _the sequence_ \[0\to f_{!}X(M)\xrightarrow{\iota_{M}-f_{!}(h_{\mathsf{M}})}f_{!}(M) \xrightarrow{\varepsilon_{\mathsf{M}}}\mathsf{M}\to 0\] _is objectwise split exact._ Proof.: Let \(\mathsf{M}_{1}\to\mathsf{M}_{2}\to\mathsf{M}_{3}\) be an objectwise split exact sequence. By [1, Proposition 4.3.1] the map \(\mathsf{M}_{1}\to\mathsf{M}_{2}\) is a kernel of \(\mathsf{M}_{2}\to\mathsf{M}_{3}\) since \(f^{*}(\mathsf{M}_{1})\to f^{*}(\mathsf{M}_{2})\) is a kernel of \(f^{*}(\mathsf{M}_{2})\to f^{*}(\mathsf{M}_{3})\). Since \(T(X)\) preserves split exact sequences, [1, Proposition 4.3.2] implies that \(\mathsf{M}_{2}\to\mathsf{M}_{3}\) is a cokernel of \(\mathsf{M}_{1}\to\mathsf{M}_{2}\) since \(f^{*}(\mathsf{M}_{1})\to f^{*}(\mathsf{M}_{2})\to f^{*}(\mathsf{M}_{3})\) is split exact. This proves (1). The fact that \[0\to f^{*}f_{!}X(M)\xrightarrow{f^{*}(\iota_{M}-f_{!}(h_{\mathsf{M}}))}f^{*}f_{!}(M)\xrightarrow{f^{*}(\varepsilon_{\mathsf{M}})}f^{*}(\mathsf{M})\to 0\] is a split exact sequence can be shown in a similar way as in the proof of Lemma 6.17 in [10]. Note that the infinite sums in that proof are finite in our case, since \(X\) is locally nilpotent. Next we show that the relative projectives are always isomorphic to objects of the form \(f_{!}(M)\). They can also be characterized by their structure morphism being a split monomorphism. **Proposition 3.7**.: _Let \(\mathsf{M}\in\mathcal{C}^{T(X)}\). The following are equivalent:_ 1. \(\mathsf{M}\) _is relative projective._ 2. \(h_{\mathsf{M}}\colon X(M)\to M\) _is a split monomorphism in_ \(\mathcal{C}\)_._ 3. \(\mathsf{M}\) _is isomorphic to an object of the form_ \(f_{!}(N^{\prime})\) _for_ \(N^{\prime}\in\mathcal{C}\)_._ Proof.: Clearly the class of objects \(\mathsf{M}\in\mathcal{C}^{T(X)}\) for which \(h_{\mathsf{M}}\) is a split monomorphism is closed under direct summands. Since \(\iota_{N}\) is a split monomorphism for all \(N\in\mathcal{C}\), this shows (1)\(\Rightarrow\)(2). For (2)\(\Rightarrow\)(3), consider the split exact sequence \[0\to X(M)\xrightarrow{h_{\mathsf{M}}}M\to N^{\prime}\to 0\] in \(\mathcal{C}\) and choose a section \(i\colon N^{\prime}\to M\). Then for each \(m\geq 0\) the morphism \(X^{m}(i)\colon X^{m}(N^{\prime})\to X^{m}(M)\) is a section to the split exact sequence \[0\to X^{m+1}(M)\xrightarrow{X^{m}(h_{\mathsf{M}})}X^{m}(M)\to X^{m}(N^{ \prime})\to 0.\] Using the \(X^{m}(i)\)'s, we get isomorphisms \[M\cong N^{\prime}\oplus X(M)\cong N^{\prime}\oplus X(N^{\prime})\oplus X^{2} (M)\cong\cdots\cong N^{\prime}\oplus X(N^{\prime})\oplus\cdots\oplus X^{n}(N^{ \prime})\] where \(n\) is some integer with \(X^{n+1}(M)=0\). Since these isomorphisms commute with the structure morphisms of \(\mathsf{M}\) and \(f_{!}(N^{\prime})\), we get an isomorphism \(\mathsf{M}\cong f_{!}(N^{\prime})\). The remaining direction (3)\(\Rightarrow\)(1) is obvious. **Corollary 3.8**.: _An object \(M\in\mathcal{C}\) is indecomposable if and only if \(f_{!}(M)\) is indecomposable._ Proof.: By Proposition 3.7 any summand of \(f_{!}(M)\) is of the form \(f_{!}(N)\). Taking the cokernel of their structure morphisms, we see that this is equivalent to \(N\) being a summand of \(M\). Hence \(f_{!}(M)\) is indecomposable if and only if \(M\) is indecomposable. **Example 3.9**.: Let \(Q=(Q_{0},Q_{1})\) be a quiver, where \(Q_{0}\) and \(Q_{1}\) denote the set of vertices and arrows of \(Q\), respectively. A \(\Bbbk\)_-modulation_\(\mathfrak{B}\) of \(Q\) is an assignment of a \(\Bbbk\)-linear additive category \(\mathcal{B}_{\mathfrak{i}}\) to each vertex \(\mathfrak{i}\in Q_{0}\) and a \(\Bbbk\)-linear functor \(F_{\alpha}\colon\mathcal{B}_{\mathfrak{i}}\to\mathcal{B}_{\mathfrak{j}}\) to each arrow \(\alpha\colon\mathfrak{i}\to\mathfrak{j}\). Associated to a modulation \(\mathfrak{B}\) we have the category \(\operatorname{rep}\mathfrak{B}\) of \(\mathfrak{B}\)_-representations_. Explicitly, its objects are collections \((B_{\mathfrak{i}},B_{\alpha})_{\mathfrak{i}\in Q_{0},\alpha\in Q_{1}}\) where \(B_{\mathfrak{i}}\) is an object of \(\mathcal{B}_{\mathfrak{i}}\) and \(B_{\alpha}\colon F_{\alpha}(B_{\mathfrak{i}})\to B_{\mathfrak{j}}\) is a morphism in \(\mathcal{B}_{\mathfrak{j}}\). 
A morphism of \(\mathfrak{B}\)-representations \((B_{\mathfrak{i}},B_{\alpha})\to(B_{\mathfrak{i}}^{\prime},B_{\alpha}^{\prime})\) is a collection of morphisms \((\varphi_{\mathfrak{i}}\colon B_{\mathfrak{i}}\to B_{\mathfrak{i}}^{\prime})_ {\mathfrak{i}\in Q_{0}}\) such that \(\varphi_{\mathfrak{j}}\circ B_{\alpha}=B_{\alpha}^{\prime}\circ F_{\alpha}(\varphi_{\mathfrak{i}})\) for every arrow \(\alpha\colon\mathfrak{i}\to\mathfrak{j}\) in \(Q_{1}\). Note that the category of representations can be identified with the sections of the Grothendieck construction of a certain functor obtained from the \(\Bbbk\)-modulation, see Remark 4.6 in [10]. Assume \(Q\) is finite and acyclic and set \(\mathcal{C}=\prod_{i\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\). Define the functor \[X\colon\mathcal{C}\to\mathcal{C},\qquad(B_{\mathfrak{i}})_{\mathfrak{i}\in Q _{0}}\mapsto(\bigoplus_{\begin{subarray}{c}\alpha\in Q_{1}\\ t(\alpha)=\mathfrak{i}\end{subarray}}F_{\alpha}(B_{s(\alpha)}))_{\mathfrak{i} \in Q_{0}}. \tag{3.9.1}\] By the assumptions on \(Q\) it follows that \(X\) is nilpotent, i.e. \(X^{n}=0\) for some \(n\gg 0\). The category \(\operatorname{rep}\mathfrak{B}\) of \(\mathfrak{B}\)-representations can be identified with the Eilenberg-Moore category \(\mathcal{C}^{T(X)}\) of the free monad \(T(X)\) on \(\mathcal{C}\). We describe the Eilenberg-Moore adjunction \(f_{!}\dashv f^{*}\) explicitly, following [10, Meta-Example 6.20]. The forgetful functor is given by \[f^{*}\colon\operatorname{rep}\mathfrak{B}\to\mathcal{C},\qquad f^{*}(B_{ \mathfrak{i}},B_{\alpha})_{\mathfrak{i}\in Q_{0},\alpha\in Q_{1}}=(B_{\mathfrak{ i}})_{\mathfrak{i}\in Q_{0}}.\] For \(f_{!}\), we need some notation. Let \(Q_{\geq 0}\) denote the collection of all paths in \(Q\), and for \(p\in Q_{\geq 0}\) let \(s(p)\) and \(t(p)\) denote its source and target, respectively. If \(p=\alpha_{n}\alpha_{n-1}\cdots\alpha_{1}\) set \[F_{p}\coloneqq F_{\alpha_{n}}\circ F_{\alpha_{n-1}}\circ\cdots\circ F_{\alpha_{ 1}}\colon\mathcal{B}_{s(p)}\to\mathcal{B}_{t(p)}.\] The functor \(f_{!}\) applied to \(B=(B_{\mathfrak{i}})_{\mathfrak{i}\in Q_{0}}\in\mathcal{C}\) is then given by \[f_{!}(B)=\left(\bigoplus_{\begin{subarray}{c}p\in Q_{\geq 0}\\ t(p)=\mathfrak{i}\end{subarray}}F_{p}(B_{s(p)}),f_{!}(B)_{\alpha}\right)_{ \begin{subarray}{c}\mathfrak{i}\in Q_{0}\\ \alpha\in Q_{1}\end{subarray}} \tag{3.9.2}\] Here \[f_{!}(B)_{\alpha}\colon\bigoplus_{\begin{subarray}{c}p\in Q_{\geq 0}\\ t(p)=s(\alpha)\end{subarray}}F_{\alpha}F_{p}(B_{s(p)})\to\bigoplus_{ \begin{subarray}{c}q\in Q_{\geq 0}\\ t(q)=t(\alpha)\end{subarray}}F_{q}(B_{s(q)})\] is induced by the identity map \(F_{\alpha}F_{p}(B_{s(p)})\xrightarrow{1}F_{q}(B_{s(q)})\) for \(q=\alpha p\). For more details on this construction see [10, Section 5]. **Example 3.10**.: Example 3.9 can be extended to a modulation \(\mathfrak{B}\) on any quiver \(Q\) (not necessarily finite or acyclic). To ensure that the functor \(X\) given by (3.9.1) is locally nilpotent, we need to restrict to the subcategory \(\mathcal{C}\) consisting of objects \((B_{\mathfrak{i}})_{\mathfrak{i}\in Q_{0}}\) of \(\prod_{\mathfrak{i}\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\) satisfying: * (A) There exists an integer \(n\geq 0\) (depending on \((B_{\mathfrak{i}})_{\mathfrak{i}\in Q_{0}}\)) such that \(B_{s(p)}=0\) for any path \(p\) of length greater than \(n\). Clearly, \(X\) restricts to a locally nilpotent endofunctor on \(\mathcal{C}\). Since \(\mathcal{C}=\prod_{\mathfrak{i}\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\) if \(Q\) is finite and acyclic, this generalizes Example 3.9. 
Note that \(\mathcal{C}\) is nonzero if and only if \(Q\) has a sink vertex. **Example 3.11**.: As a special case of the previous examples we can consider the modulation \(\mathfrak{B}\) given by \(\mathcal{B}_{\mathfrak{i}}=\mathcal{B}\) for all \(\mathfrak{i}\in Q_{0}\) and \(F_{\alpha}=\operatorname{Id}\) for all \(\alpha\in Q_{1}\). In this case, the category of representation of \(\mathfrak{B}\) is just the category \(\operatorname{rep}(Q,\mathcal{B})\) of representations of \(Q\) in \(\mathcal{B}\), whose objects are collections \((B_{\mathfrak{i}},B_{\alpha})_{\mathfrak{i}\in Q_{0},\alpha\in Q_{1}}\) of objects \(B_{\mathfrak{i}}\) and morphisms \(B_{\alpha}\colon B_{s(\alpha)}\to B_{\mathfrak{i}(\alpha)}\) in \(\mathcal{B}\). Note that \(\operatorname{rep}(Q,\mathcal{B})\) is equivalent to the category \(\mathcal{B}^{\operatorname{k}Q}\) of k-linear functors \(\Bbbk Q\to\mathcal{B}\), where \(Q\) is considered as a category with objects in \(Q_{0}\) and with morphisms given by \(Q_{\geq 0}\), where \(\Bbbk Q\) denotes the k-linearization of the category \(Q\). ### The free monad on an abelian category In this subsection \(\mathcal{C}\) is assumed to be abelian and \(X\colon\mathcal{C}\to\mathcal{C}\) is assumed to be exact. We have the following basic results for \(T(X)\) and its Eilenberg-Moore category. **Lemma 3.12**.: _The following hold:_ 1. _The functor_ \(T(X)\colon\mathcal{C}\to\mathcal{C}\) _is exact_ 2. _The category_ \(\mathcal{C}^{T(X)}\) _is abelian._ 3. _The functors_ \(f_{!}\colon\mathcal{C}\to\mathcal{C}^{T(X)}\) _and_ \(f^{*}\colon\mathcal{C}^{T(X)}\to\mathcal{C}\) _are exact._ 4. _If_ \(X\) _preserves injectives, then_ \(T(X)\) _preserves injectives._ Proof.: Since \(X\) is exact and taking coproducts is a right exact functor, \(T(X)\) must be right exact. Since \(X\) is locally nilpotent, we have an isomorphism \(T(X)\cong\prod_{\mathfrak{i}\geq 0}X^{i}\). Since taking products is a left exact functor, \(T(X)\) must also be left exact. Hence, \(T(X)\) is exact. The fact that \(\mathcal{C}^{T(X)}\) is abelian and \(f^{*}\) is exact follows from Proposition 2.4. Now the exactness of \(f_{!}\) follows from the exactness of \(T(X)\) and the description of exact sequences in \(\mathcal{C}^{T(X)}\) from Proposition 2.4. Finally, if \(I\in\mathcal{C}\) is injective and \(X\) preserves injectives, then \(X^{i}(I)\) is injective for all \(i\geq 0\). Since injective objects are closed under products, \(T(X)(I)\cong\prod_{\mathfrak{i}\geq 0}X^{i}(I)\) must be injective, so \(T(X)\) preserves injective objects. If \(X\) preserves injective objects, then \(X\) descends to an endofunctor on the stable injective category \(\overline{\mathcal{C}}\). By Lemma 3.12 (4) the functor \(T(X)\) also preserves injective objects, and therefore descends to a monad on \(\overline{\mathcal{C}}\), denoted in the same way. **Lemma 3.13**.: _Assume \(\mathcal{C}\) has enough injectives and \(X\) preserves injectives. Then \(T(X)\), considered as a monad on \(\overline{\mathcal{C}}\), is equal to the free monad of \(X\) on \(\overline{\mathcal{C}}\). In particular, we have an isomorphism \(\overline{\mathcal{C}}^{T(X)}\xrightarrow{\cong}(X\Downarrow\operatorname{Id} \overline{\mathcal{C}})\)._ Proof.: Since \(X\) is locally nilpotent, the sum \(\bigoplus_{i\geq 0}X^{i}\) is finite and therefore preserved by the functor \(\mathcal{C}\to\overline{\mathcal{C}}\). Hence, \(T(X)\) is the free monad of \(X\) on \(\overline{\mathcal{C}}\), which proves the claim. 
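Before the general statement in the next example, here is a concrete special case (our own illustration, using the description of \(\overline{\mathcal{B}}\) for \(\operatorname{rad}^{2}\)-zero Nakayama algebras recalled in the introduction): for \(\Lambda=\Bbbk[x]/(x^{2})\) one has
\[
\overline{\operatorname{mod}}\,\Lambda\simeq\operatorname{mod}\Bbbk,
\qquad\text{hence}\qquad
\overline{\mathcal{C}}^{\,T(X)}\;\cong\;\operatorname{rep}(Q,\overline{\operatorname{mod}}\,\Lambda)\;\simeq\;\operatorname{rep}(Q,\operatorname{mod}\Bbbk)
\]
for \(\mathcal{C}=\prod_{\mathtt{i}\in Q_{0}}\operatorname{mod}\Lambda\) and \(X\) as in (3.9.1) with all \(F_{\alpha}=\operatorname{Id}\). This is the reduction to ordinary \(\Bbbk\)-linear quiver representations that underlies Theorem C.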
**Example 3.14**.: Let \(Q\) be a finite and acyclic quiver and \(\mathfrak{B}\) a modulation on \(Q\) as in Example 3.9, and assume the categories \(\mathcal{B}_{\mathfrak{i}}\) are abelian. Then \(X\) is exact if and only if if the functors \(F_{\alpha}\colon\mathcal{B}_{i}\to\mathcal{B}_{j}\) are exact, and \(X\) preserves injectives if and only if the functors \(F_{\alpha}\) preserve injectives. Assume these two conditions hold and the categories \(\mathcal{B}_{\mathfrak{i}}\) have enough injectives for all vertices \(\mathfrak{i}\). Then we have another modulation \(\overline{\mathfrak{B}}\) on \(Q\), given by the injectively stable category \(\overline{\mathcal{B}_{\mathfrak{i}}}\) at vertex \(\mathfrak{i}\), and by the functor \(G_{\alpha}\colon\overline{\mathcal{B}_{\mathfrak{i}}}\to\overline{\mathcal{B} _{\mathfrak{j}}}\) induced from \(F_{\alpha}\) at an arrow \(\alpha\colon\mathfrak{i}\to\mathfrak{j}\). Setting \(\mathcal{C}=\prod_{i\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\), we get equivalences \(\overline{\mathcal{C}}\cong\prod_{\mathfrak{i}\in Q_{0}}\overline{\mathcal{B} _{\mathfrak{i}}}\) and \(\operatorname{rep}\overline{\mathfrak{B}}\cong\overline{\mathcal{C}}^{T(X)}\). If \(\mathcal{B}_{\mathfrak{i}}=\mathcal{B}\) for all \(\mathfrak{i}\in Q_{0}\) and \(F_{\alpha}=\operatorname{Id}\) for all \(\alpha\in Q_{1}\) as in Example 3.11, then this gives \(\operatorname{rep}(Q,\overline{\mathcal{B}})\cong\overline{\mathcal{C}}^{T(X)}\). **Example 3.15**.: Let \(Q\) be any quiver, not necessarily finite or acyclic. Assume the categories \(\mathcal{B}_{\mathfrak{i}}\) are abelian with enough injectives. Then the abelian category \(\prod_{i\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\) has enough injectives, and they are given componentwise, i.e. \((B_{\mathfrak{i}})_{\mathfrak{i}\in Q_{0}}\) is injective if and only if \(B_{\mathfrak{i}}\) is injective for all \(\mathfrak{i}\in Q_{0}\). Assume the functors \(F_{\alpha}\) are exact and preserve injectives. To ensure that the functor \(X\) given by (3.9.1) is exact and preserves injectives, we need to restrict to objects \((B_{\mathfrak{i}})_{\mathfrak{i}\in Q_{0}}\) of \(\prod_{\mathfrak{i}\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\) satisfying: * For each vertex \(\mathfrak{j}\in Q_{0}\) one of the following conditions hold: * There are only finitely many arrows \(\alpha\) with \(t(\alpha)=\mathfrak{j}\) and \(B_{s(\alpha)}\neq 0\), or * \(\mathcal{B}_{\mathfrak{j}}\) is a locally noetherian Grothendieck category. Indeed, \(X\) is exact and preserves injectives in the first case since finite direct sums are exact and preserve injectivity. It holds in the second case since infinite coproducts are exact in Grothendieck categories, and since infinite coproducts of injectives are injective in locally noetherian categories. Now let \(\mathcal{C}\) be the subcategory \(\prod_{\mathfrak{i}\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\) consisting of all objects satisfying (A) in Example 3.10 and (B). Since \(\mathcal{C}\) is closed under subobjects, extensions, and quotients, it is a Serre subcategory of \(\prod_{\mathfrak{i}\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\). In particular, it is abelian. It also has enough injectives, since for any \((B_{\mathfrak{i}})_{\mathfrak{i}\in Q_{0}}\) in \(\mathcal{C}\) we can find a monomorphism \((B_{\mathfrak{i}})_{\mathfrak{i}\in Q_{0}}\to(J_{\mathfrak{i}})_{\mathfrak{i} \in Q_{0}}\) with \(J_{\mathfrak{i}}\) injective for all \(\mathfrak{i}\in Q_{0}\) and \(J_{\mathfrak{i}}\neq 0\) if and only if \(B_{\mathfrak{i}}\neq 0\). 
Furthermore, \(X\) restricts to an exact, locally nilpotent endofunctor on \(\mathcal{C}\) which preserves injective objects. Hence, it satisfies the standing assumptions in this section. Again, \(\mathcal{C}\) is nonzero if and only if \(Q\) has a sink vertex, and \(\mathcal{C}=\prod_{\mathfrak{i}\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\) if \(Q\) is finite and acyclic. **Example 3.16**.: Let \(\mathfrak{B}\) be a modulation on a quiver \(Q\) such that \(\mathcal{B}_{s(\alpha)}=\operatorname{Mod}R\) and \(\mathcal{B}_{t(\alpha)}=\operatorname{Mod}S\) for two rings \(R\) and \(S\) and an arrow \(\alpha\). We investigate common situations in which the functor \(F_{\alpha}\colon\mathcal{B}_{s(\alpha)}\to\mathcal{B}_{t(\alpha)}\) is exact and preserves injectives, analogous to [1, Meta-Example 4.2]. Assume \(F_{\alpha}=\operatorname{Hom}_{R}(M,-)\) for an \(S\)-\(R\)-bimodule \(M\). We claim that \(F_{\alpha}\) is exact and preserves injectives if and only if \(M\) is projective as a right \(R\)-module and flat as a left \(S\)-module. Indeed, exactness of \(F_{\alpha}\) is clearly equivalent to \(M\) being projective as a right \(R\)-module. Now \(F_{\alpha}(I)=\operatorname{Hom}_{R}(M,I)\) is an injective \(S\)-module if and only if \[\operatorname{Hom}_{S}(-,\operatorname{Hom}_{R}(M,I))\cong\operatorname{Hom}_{ R}(-\otimes_{S}M,I)\] is an exact functor. This holds for every injective \(R\)-module if and only if \(-\otimes_{S}M\) is exact, i.e. \(M\) is flat as a left \(S\)-module. Now assume \(F_{\alpha}=-\otimes_{R}N\) for an \(R\)-\(S\)-bimodule \(N\), and that \(N\) is finitely presented as a left \(R\)-module. We claim that \(F_{\alpha}\) is exact and preserves injectives if and only if \(N\) is projective as a left \(R\)-module and \(\operatorname{Hom}_{R^{\operatorname{op}}}(N,R)\) is flat as a left \(S\)-module. Indeed, \(F_{\alpha}\) being exact is equivalent to \(N\) being flat as an \(R\)-module, and since \(N\) is finitely presented this is again equivalent to \(N\) being projective as an \(R\)-module. If \(N\) is finitely presented and projective, then \[F_{\alpha}=-\otimes_{R}N\cong\operatorname{Hom}_{R}(\operatorname{Hom}_{R^{ \operatorname{op}}}(N,R),-).\] Hence \(F_{\alpha}\) preserves injectives if and only if \(\operatorname{Hom}_{R^{\operatorname{op}}}(N,R)\) is flat as a left \(S\)-module by the argument above. **Example 3.17**.: As a special case of Example 3.16, let \(\mathfrak{B}\) be a modulation on \(Q=(\mathfrak{1}\to\mathfrak{2})\) given by a tensor product \(-\otimes_{R}N\colon\operatorname{Mod}R\to\operatorname{Mod}S\) by an \(R\)-\(S\)-bimodule \(N\). Then the category of \(\mathfrak{B}\)-representations is equivalent to \(\operatorname{Mod}\Lambda\), where \(\Lambda\) is the triangular matrix ring \[\Lambda\coloneqq\begin{pmatrix}R&0\\ N&S\end{pmatrix}\] Such rings have for example been studied in [10] and [1, Section III.2]. Their monomorphism categories (defined below, see Example 3.26) occur in [11, 12, 13] when describing Gorenstein projective \(\Lambda\)-modules. ### The top functor We consider the cokernel functor \(\operatorname{top}_{X}\) and its right adjoint \(S\): \[\operatorname{top}_{X}\colon\mathcal{C}^{T(X)}\to\mathcal{C}\quad\mathsf{M} \mapsto\operatorname{coker}h_{\mathsf{M}}\] \[S\colon\mathcal{C}\to\mathcal{C}^{T(X)}\quad M\mapsto(X(M)\xrightarrow{0}M).\] Applying \(\operatorname{top}_{X}\) is analogous to taking the top of a module, while \(S(M)\) can be thought of as the analogue of the semisimple module concentrated at \(M\). 
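In the quiver setting of Example 3.11 these two functors unwind as follows (a routine computation that we record for orientation; it is not spelled out verbatim here):
\[
\operatorname{top}_{X}(M_{\mathtt{i}},M_{\alpha})_{\mathtt{j}}
=\operatorname{coker}\Big(\bigoplus_{\alpha\in Q_{1},\,t(\alpha)=\mathtt{j}}M_{s(\alpha)}\xrightarrow{(M_{\alpha})_{\alpha}}M_{\mathtt{j}}\Big),
\qquad
S(N)=(N_{\mathtt{i}})_{\mathtt{i}\in Q_{0}}\ \text{with all structure maps }0,
\]
so \(\operatorname{top}_{X}\) takes the quotient of each \(M_{\mathtt{j}}\) by the images of all incoming arrows, and \(S(N)\) is the representation with components \(N_{\mathtt{i}}\) and zero maps.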
Here we collect their most important properties. Lemma 3.18 (4) can be considered as an analogue of Nakayama's Lemma. **Lemma 3.18**.: _The following hold:_ 1. _We have an adjunction_ \(\operatorname{top}_{X}\dashv S\)_._ 2. \(\operatorname{top}_{X}f_{!}(M)\cong M\) _naturally in_ \(M\in\mathcal{C}\)_._ 3. \(\operatorname{top}_{X}(\iota_{M})=0\) _for all_ \(M\in\mathcal{C}\)_._ 4. _If_ \(\operatorname{top}_{X}(\mathsf{M})=0\)_, then_ \(\mathsf{M}=0\)_._ Proof.: Part (1) follows from [1, Lemma 5.26]. Parts (2) and (3) are obvious. Part (4) follows since there are no epimorphisms \(X(M)\twoheadrightarrow M\) if \(M\neq 0\), see Remark 3.1. We also need a result on the existence of the left derived functors of \(\operatorname{top}_{X}\). **Lemma 3.19**.: _The left derived functors \(L_{j}\operatorname{top}_{X}\) exist for all \(j>0\). Furthermore, they vanish on relative projective objects._ Proof.: By [11, Proposition 3.1.4] it suffices to show that the functors \(f_{!}f^{*}\) and \(\operatorname{top}_{X}\circ f_{!}f^{*}\) are exact. But this follows immediately from the fact that \(f_{!}\) and \(f^{*}\) are exact and the isomorphism \(\operatorname{top}_{X}\circ f_{!}\cong\operatorname{Id}_{\mathcal{C}}\) in Lemma 3.18 (2). We finish with the following description of the left derived functors of \(\operatorname{top}_{X}\). **Lemma 3.20**.: _For \(\mathsf{M}\in\mathcal{C}^{T(X)}\) we have_ \[L_{1}\operatorname{top}_{X}(\mathsf{M})=\ker h_{\mathsf{M}}\quad\text{and}\quad L_{i}\operatorname{top}_{X}(\mathsf{M})=0\quad\text{for }i\geq 2.\] Proof.: This follows by the same proof as in [1, Lemma 6.24]. ### The monomorphism category **Definition 3.21**.: The _monomorphism category_ \(\operatorname{Mono}(X)\) is the full subcategory of \(\mathcal{C}^{T(X)}\) consisting of objects \(\mathsf{M}\) where the structure map \(h_{\mathsf{M}}\) is a monomorphism. Note that the objects of the monomorphism category can equivalently be characterized using the first left derived functor of \(\operatorname{top}_{X}\). **Lemma 3.22**.: _Let \(\mathsf{M}\in\mathcal{C}^{T(X)}\). Then \(\mathsf{M}\in\operatorname{Mono}(X)\) if and only if \(L_{1}\operatorname{top}_{X}(\mathsf{M})=0\)._ Proof.: This follows immediately from the description of \(L_{1}\operatorname{top}_{X}\) in Lemma 3.20. Recall that a subcategory of an abelian category is called _resolving_ if it is generating and closed under extensions, kernels of epimorphisms, and direct summands. **Lemma 3.23**.: _The category \(\operatorname{Mono}(X)\) is resolving and closed under subobjects in \(\mathcal{C}^{T(X)}\). In particular, it is an exact category whose deflations are the epimorphisms and whose inflations are the monomorphisms with cokernel in \(\operatorname{Mono}(X)\)._ Proof.: For any object \(\mathsf{M}\in\mathcal{C}^{T(X)}\) the counit \(\varepsilon_{\mathsf{M}}\colon f_{!}f^{*}(\mathsf{M})\to\mathsf{M}\) is an epimorphism. Since any object in the image of \(f_{!}\) is contained in \(\operatorname{Mono}(X)\), this shows that \(\operatorname{Mono}(X)\) is generating. The fact that \(\operatorname{Mono}(X)\) is closed under extensions and subobjects follows from the fact that monomorphisms are closed under extensions and subobjects. Since kernels of epimorphisms and direct summands are special cases of subobjects, this shows that \(\operatorname{Mono}(X)\) is resolving.
Finally, since \(\operatorname{Mono}(X)\) is closed under extensions, it inherits an exact structure from \(\mathcal{C}^{T(X)}\) where the inflations and the deflations are the monomorphism and the epimorphisms whose cokernel and kernel lies in \(\operatorname{Mono}(X)\), respectively. Since \(\operatorname{Mono}(X)\) is closed under kernels of epimorphisms, the deflations coincide with the epimorphisms in \(\operatorname{Mono}(X)\). Next we show how \(\operatorname{top}_{X}\) detects inflations and isomorphisms in \(\operatorname{Mono}(X)\). **Lemma 3.24**.: _Let \(g\colon\mathsf{M}\to\mathsf{N}\) be a morphism in \(\operatorname{Mono}(X)\). The following hold:_ 1. \(g\) _is an inflation in_ \(\operatorname{Mono}(X)\) _if and only if_ \(\operatorname{top}_{X}(g)\) _is a monomorphism._ 2. \(g\) _is an isomorphism if and only if_ \(\operatorname{top}_{X}(g)\) _is an isomorphism._ Proof.: To prove part (1) assume that \(g\) is a monomorphism with cokernel in \(\operatorname{Mono}(X)\). Applying \(\operatorname{top}_{X}\) to the exact sequence \[0\to\mathsf{M}\xrightarrow{g}\mathsf{N}\to\operatorname{coker}g\to 0\] and using that \(L_{1}\operatorname{top}_{X}(\operatorname{coker}g)=0\), we get that \(\operatorname{top}_{X}(g)\) is a monomorphism. Conversely, assume \(\operatorname{top}_{X}(g)\) is a monomorphism, and consider the exact sequence \[0\to\ker g\to\mathsf{M}\to\operatorname{Im}g\to 0.\] Since \(\operatorname{Im}g\) is a subobject of \(\mathsf{N}\), it is contained in \(\operatorname{Mono}(X)\). Therefore \(L_{1}\operatorname{top}_{X}(\operatorname{Im}g)=0\), so applying \(\operatorname{top}_{X}\) to the inclusion \(\ker g\to\mathsf{M}\) gives a monomorphism \(\operatorname{top}_{X}(\ker g)\to\operatorname{top}_{X}(\mathsf{M})\). Furthermore, since the composite \(\operatorname{top}_{X}(\ker g)\to\operatorname{top}_{X}(\mathsf{M}) \xrightarrow{\operatorname{top}_{X}(g)}\operatorname{top}_{X}(\mathsf{N})\) is \(0\) and \(\operatorname{top}_{X}(g)\) is a monomorphism, it follows that \(\operatorname{top}_{X}(\ker g)=0\). Hence \(\ker g=0\) by Lemma 3.18 (4), so \(g\) must be a monomorphism. Finally, applying \(\operatorname{top}_{X}\) to the short exact sequence \[0\to\mathsf{M}\xrightarrow{g}\mathsf{N}\to\operatorname{coker}g\to 0\] and using that \(\operatorname{top}_{X}(g)\) is a monomorphism and \(L_{1}\operatorname{top}_{X}(\mathsf{N})=0\), we get that \(L_{1}\operatorname{top}_{X}(\operatorname{coker}g)=0\). Hence \(\operatorname{coker}g\in\operatorname{Mono}(X)\) by Lemma 3.22. This proves (1). To prove part (2) observe that if \(g\) is an isomorphism, then \(\operatorname{top}_{X}(g)\) must be an isomorphism. Conversely, assume that \(\operatorname{top}_{X}(g)\) is an isomorphism. Then \(g\) must be a monomorphism by part (1). Hence we have a right exact sequence \[\operatorname{top}_{X}(\mathsf{M})\xrightarrow{\operatorname{top}_{X}(g)} \operatorname{top}_{X}(\mathsf{N})\to\operatorname{top}_{X}(\operatorname{ coker}g)\to 0\] Since \(\operatorname{top}_{X}(g)\) is an isomorphism, we must have that \(\operatorname{top}_{X}(\operatorname{coker}g)=0\). Therefore \(\operatorname{coker}g=0\) by Lemma 3.18 (4), so \(g\) must be an isomorphism. As a consequence of this we get a criterion for a morphism to be in the radical. **Lemma 3.25**.: _Let \(g\colon\mathsf{M}\to\mathsf{N}\) be a morphism in \(\operatorname{Mono}(X)\), and assume \(\operatorname{top}_{X}(g)\) is in the radical of \(\mathcal{C}\). 
Then \(g\) is in the radical of \(\operatorname{Mono}(X)\)._ Proof.: This follows immediately from the definition of the radical in Subsection 2.3 and the fact that \(\operatorname{top}_{X}\) reflects isomorphisms by Lemma 3.24 (2). **Example 3.26**.: Let \(Q\) be a finite and acyclic quiver and \(\mathfrak{B}\) a modulation on \(Q\) by abelian categories \(\mathcal{B}_{\mathtt{i}}\) and exact functors \(F_{\alpha}\) as in Example 3.9. Let \((B_{\mathtt{i}},B_{\alpha})_{\mathtt{i}\in Q_{0},\alpha\in Q_{1}}\) be a \(\mathfrak{B}\)-representation. For each vertex \(\mathtt{i}\in Q_{0}\) consider the map \[B_{\mathtt{i},\mathrm{in}}\colon\bigoplus_{\begin{subarray}{c}\alpha\in Q_{1}\\ t(\alpha)=\mathtt{i}\end{subarray}}F_{\alpha}(B_{s(\alpha)})\xrightarrow{(B_{\alpha})}B_{\mathtt{i}}.\] Then \(\operatorname{top}_{X}\) is given by \[\operatorname{top}_{X}(B_{\mathtt{i}},B_{\alpha})_{\mathtt{i}\in Q_{0},\alpha\in Q_{1}}=(\operatorname{coker}B_{\mathtt{i},\mathrm{in}})_{\mathtt{i}\in Q_{0}}.\] Furthermore, by Lemma 3.20 it follows that \[L_{1}\operatorname{top}_{X}(B_{\mathtt{i}},B_{\alpha})_{\mathtt{i}\in Q_{0},\alpha\in Q_{1}}=(\ker B_{\mathtt{i},\mathrm{in}})_{\mathtt{i}\in Q_{0}}.\] Hence, \((B_{\mathtt{i}},B_{\alpha})_{\mathtt{i}\in Q_{0},\alpha\in Q_{1}}\in\operatorname{Mono}(X)\) if and only if \(B_{\mathtt{i},\mathrm{in}}\) is a monomorphism for all \(\mathtt{i}\in Q_{0}\). ## 4. Injective objects Recall that \(\operatorname{Mono}(X)\) is an exact category by Lemma 3.23. In this section we investigate the injective objects in \(\operatorname{Mono}(X)\) under the exact structure. We show that up to isomorphism they are the objects of the form \(f_{!}(I)\), where \(I\) is injective in \(\mathcal{C}\), see Proposition 4.2 and Corollary 4.4. This improves on Lemma 6.5 in [1] for the free monad, since that result only implies that injective objects are summands of objects of the form \(f_{!}(I)\). Furthermore, the proof in [1] relies on the existence of a relative Nakayama functor, and this assumption is removed from the proofs below. We finish this section by characterizing when \(\operatorname{Mono}(X)\) has enough injectives and when it has injective envelopes. Throughout, \(\mathcal{C}\) is a \(\Bbbk\)-linear abelian category, and \(X\colon\mathcal{C}\to\mathcal{C}\) is an exact functor which is locally nilpotent and preserves injective objects. We first show that the functor \(\operatorname{top}_{X}\) induces a surjective map on morphism spaces when the codomain is of the form \(f_{!}(I)\) with \(I\) injective. **Lemma 4.1**.: _Let \(g\colon\operatorname{top}_{X}\mathsf{M}\to I\) be a morphism in \(\mathcal{C}\) with \(I\) injective and \(\mathsf{M}\in\operatorname{Mono}(X)\). Then there exists a morphism \(k\colon\mathsf{M}\to f_{!}(I)\) in \(\mathcal{C}^{T(X)}\) with \(\operatorname{top}_{X}(k)=g\)._ Proof.: Let \(k_{0}\) denote the composite \(M\to\operatorname{coker}h_{\mathsf{M}}\xrightarrow{g}I\). Applying \(X\), we get a morphism \(X(k_{0})\colon X(M)\to X(I)\), and since \(X(I)\) is injective and \(h_{\mathsf{M}}\colon X(M)\to M\) is a monomorphism, we can find a morphism \(k_{1}\colon M\to X(I)\) satisfying \(k_{1}\circ h_{\mathsf{M}}=X(k_{0})\). Repeating this procedure, we get morphisms \(k_{i}\colon M\to X^{i}(I)\) satisfying \(k_{i}\circ h_{\mathsf{M}}=X(k_{i-1})\) for each \(i\geq 1\).
These induce a morphism \(M\to\bigoplus_{i\geq 0}X^{i}(I)\), and since \(k_{0}\circ h_{\operatorname{\mathsf{M}}}=0\) this lifts to a morphism \(k\colon\operatorname{\mathsf{M}}\to f_{!}(I)\) in \(\mathcal{C}^{T(X)}\). By construction we have that \(\operatorname{top}_{X}(k)=g\), so we are done. We can now show that \(f_{!}(I)\) is injective in \(\operatorname{Mono}(X)\) when \(I\) is injective in \(\mathcal{C}\). **Proposition 4.2**.: _Let \(I\in\mathcal{C}\) be injective. Then_ \[\operatorname{Ext}_{\mathcal{C}^{T(X)}}^{i}(\operatorname{\mathsf{M}},f_{!}(I ))=0\quad\text{for all $i>0$}.\] Proof.: We prove the result using Yoneda-Ext. Let \(\xi\in\operatorname{Ext}_{\mathcal{C}^{T(X)}}^{i}(\operatorname{\mathsf{M}}, f_{!}(I))\). We want to find a representative of \(\xi\) whose leftmost map is a split monomorphism. To this end, note that since \(\operatorname{Mono}(X)\) is resolving, it satisfies the dual of condition (C2) in [12, Section 12]. Therefore, the dual of [12, Theorem 12.1] implies that the induced functor \[D^{-}(\operatorname{Mono}(X))\to D^{-}(\mathcal{C}^{T(X)})\] between the derived categories is fully faithful, where we consider \(\operatorname{Mono}(X)\) as an exact category. Hence, we can find a representative of \(\xi\) of the form \[0\to f_{!}(I)\to\operatorname{\mathsf{N}}_{1}\to\dots\to\operatorname{ \mathsf{N}}_{i}\to\operatorname{\mathsf{M}}\to 0\] where all the terms are in \(\operatorname{Mono}(X)\). Since \(\operatorname{Mono}(X)\) is closed under subobjects, also all the intermediate kernels are in \(\operatorname{Mono}(X)\). Let \(i\colon f_{!}(I)\to\operatorname{\mathsf{N}}_{1}\) denote the leftmost morphism. By Lemma 3.24 the morphism \(\operatorname{top}_{X}(i)\colon I\to\operatorname{top}_{X}\operatorname{ \mathsf{N}}_{1}\) is monic. Since \(I\) is injective, \(\operatorname{top}_{X}(i)\) is a split monomorphism, so we can choose a left inverse \(g\colon\operatorname{top}_{X}\operatorname{\mathsf{N}}_{1}\to I\) of it. By Lemma 4.1 we can find a morphism \(k\colon\operatorname{\mathsf{N}}_{1}\to f_{!}(I)\) satisfying \(\operatorname{top}_{X}(k)=g\). By construction, if we apply \(\operatorname{top}_{X}\) to the composite \(k\circ i\colon f_{!}(I)\to f_{!}(I)\) we get the identity morphism on \(I\). Hence, by Lemma 3.24 (2) the morphism \(k\circ i\) is an isomorphism, so \(i\) must be a split monomorphism. This proves the claim. To show the converse of Proposition 4.2 we use the following result. **Proposition 4.3**.: _Let \(\operatorname{\mathsf{M}}\in\operatorname{Mono}(X)\), and assume \(M\) is injective in \(\mathcal{C}\). Then \(\operatorname{\mathsf{M}}\cong f_{!}(J)\) for some injective object \(J\) in \(\mathcal{C}\)._ Proof.: Since \(\operatorname{\mathsf{M}}\in\operatorname{Mono}(X)\), the map \(h_{\operatorname{\mathsf{M}}}\colon X(M)\to M\) is a monomorphism. Since \(M\) is injective, \(X(M)\) must be injective, so \(h_{\operatorname{\mathsf{M}}}\) is a split monomorphism. By Proposition 3.7 the claim follows. **Corollary 4.4**.: _Let \(\mathsf{M}\) be an injective object in \(\operatorname{Mono}(X)\) considered as an exact category. Then \(\mathsf{M}\cong f_{!}(J)\) for some injective object \(J\) in \(\mathcal{C}\)._ Proof.: It follows from the assumptions that \(\operatorname{Ext}^{1}_{\mathcal{C}^{T(X)}}(f_{!}(N),\mathsf{M})=0\) for any \(N\in\mathcal{C}\). 
Now the adjunction \(f_{!}\dashv f^{*}\) induces an isomorphism \[\operatorname{Ext}^{1}_{\mathcal{C}^{T(X)}}(f_{!}(N),\mathsf{M})\cong\operatorname{Ext}^{1}_{\mathcal{C}}(N,f^{*}(\mathsf{M})),\] see [1, Lemma 3.2]. Therefore \(\operatorname{Ext}^{1}_{\mathcal{C}}(N,f^{*}(\mathsf{M}))=0\) for all \(N\in\mathcal{C}\). This implies that \(f^{*}(\mathsf{M})\) is injective in \(\mathcal{C}\). Hence by Proposition 4.3 we get that \(\mathsf{M}\cong f_{!}(J)\) for some injective object \(J\in\mathcal{C}\). We use our results to investigate when \(\operatorname{Mono}(X)\) has enough injectives. **Proposition 4.5**.: _Assume \(\mathcal{C}\) has enough injectives. Then \(\operatorname{Mono}(X)\) has enough injectives._ Proof.: Let \(\mathsf{M}\in\operatorname{Mono}(X)\) be arbitrary, and choose a monomorphism \(i\colon\operatorname{top}_{X}(\mathsf{M})\to I\) in \(\mathcal{C}\) with \(I\) injective. Note that \(f_{!}(I)\) is injective by Proposition 4.2. By Lemma 4.1 we can find a morphism \(j\colon\mathsf{M}\to f_{!}(I)\) satisfying \(\operatorname{top}_{X}(j)=i\). Now by Lemma 3.24 (1) we have that \(j\) is an inflation in \(\operatorname{Mono}(X)\). This proves the claim. We finish by showing the existence of injective envelopes in \(\operatorname{Mono}(X)\). **Proposition 4.6**.: _The following hold:_ 1. _Let_ \(g\colon\mathsf{M}\to f_{!}(I)\) _be a morphism in_ \(\operatorname{Mono}(X)\) _with_ \(I\in\mathcal{C}\) _injective. Then_ \(g\) _is an injective envelope in_ \(\operatorname{Mono}(X)\) _if and only if_ \(\operatorname{top}_{X}(g)\colon\operatorname{top}_{X}(\mathsf{M})\to I\) _is an injective envelope in_ \(\mathcal{C}\)_._ 2. _If_ \(\mathcal{C}\) _has injective envelopes, then_ \(\operatorname{Mono}(X)\) _has injective envelopes._ Proof.: To prove part (1) we first assume \(\operatorname{top}_{X}(g)\) is an injective envelope. Then \(g\) must be a monomorphism with cokernel in \(\operatorname{Mono}(X)\) by Lemma 3.24. It remains to show that \(g\) is left minimal. Assume \(k\colon f_{!}(I)\to f_{!}(I)\) is a morphism satisfying \(k\circ g=g\). Applying \(\operatorname{top}_{X}\) we get that \(\operatorname{top}_{X}(k)\circ\operatorname{top}_{X}(g)=\operatorname{top}_{X}(g)\). Since \(\operatorname{top}_{X}(g)\) is an injective envelope, it is left minimal, so \(\operatorname{top}_{X}(k)\) must be an isomorphism. Hence \(k\) must be an isomorphism by Lemma 3.24 (2), so \(g\) is left minimal. Conversely, assume \(g\) is an injective envelope. Since \(g\) is an inflation, \(\operatorname{top}_{X}(g)\colon\operatorname{top}_{X}(\mathsf{M})\to I\) must be a monomorphism. Hence, it only remains to show that \(\operatorname{top}_{X}(g)\) is left minimal, so let \(k\colon I\to I\) be a morphism satisfying \(k\circ\operatorname{top}_{X}(g)=\operatorname{top}_{X}(g)\). Consider \(k^{\prime}\coloneqq g-f_{!}(k)\circ g\colon\mathsf{M}\to f_{!}(I)\). Since \(\operatorname{top}_{X}(k^{\prime})=0\), there exists a morphism \(r\colon\mathsf{M}\to f_{!}X(I)\) satisfying \(\iota\circ r=k^{\prime}\), where \(\iota=\iota_{I}\colon f_{!}X(I)\to f_{!}(I)\). Since \(f_{!}X(I)\) is injective in \(\operatorname{Mono}(X)\) and \(g\colon\mathsf{M}\to f_{!}(I)\) is a monomorphism, the morphism \(r\) extends to a morphism \(s\colon f_{!}(I)\to f_{!}X(I)\) via \(g\). Then clearly \[g=(\iota\circ s+f_{!}(k))\circ g\] and hence \(\iota\circ s+f_{!}(k)\) is an isomorphism since \(g\) is an injective envelope. Finally, since \(\operatorname{top}_{X}(\iota)=0\), it follows that \(\operatorname{top}_{X}(\iota\circ s+f_{!}(k))=k\), which must therefore also be an isomorphism.
This shows that \(\operatorname{top}_{X}(g)\) is left minimal. To prove part (2) let \(\mathsf{M}\in\operatorname{Mono}(X)\) be arbitrary, and let \(i\colon\operatorname{top}_{X}\mathsf{M}\to I\) be an injective envelope in \(\mathcal{C}\). By Lemma 4.1 we can find a morphism \(j\colon\mathsf{M}\to f_{!}(I)\) satisfying \(\operatorname{top}_{X}(j)=i\). Then \(j\) must be an injective envelope by the first part of this lemma. ## 5. The epivalence Throughout this section we let \(\mathcal{C}\) be a \(\Bbbk\)-linear abelian category with enough injectives and \(X\colon\mathcal{C}\to\mathcal{C}\) an exact functor which is locally nilpotent and preserves injectives. Our goal is to show that the canonical functor \[\overline{\operatorname{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}\] is an epivalence, and an actual equivalence if \(\mathcal{C}\) is hereditary. The proof that the functor is dense uses a particular construction of right \(\operatorname{Mono}(X)\)-approximations, which we investigate first. ### Contravariantly finiteness Recall from Lemma 3.6 that we have an exact sequence \[0\to f_{!}X(M)\xrightarrow{\iota_{M}-f_{!}(h_{\mathsf{M}})}f_{!}(M)\xrightarrow{\varepsilon_{\mathsf{M}}}\mathsf{M}\to 0\] for all \(\mathsf{M}\in\mathcal{C}^{T(X)}\). Applying \(\operatorname{top}_{X}\) to this gives an exact sequence \[0\to L_{1}\operatorname{top}_{X}\mathsf{M}\to X(M)\xrightarrow{h_{\mathsf{M}}}M\to\operatorname{top}_{X}\mathsf{M}\to 0\] since \(L_{1}\operatorname{top}_{X}f_{!}(M)=0\). **Definition 5.1**.: Fix a monomorphism \(j\colon L_{1}\operatorname{top}_{X}\mathsf{M}\hookrightarrow J\) into an injective object \(J\), and fix a lift \(e\colon X(M)\to J\) of \(j\). Define the object \(\mathfrak{Q}\mathsf{M}\in\mathcal{C}^{T(X)}\) and the morphism \(p_{\mathsf{M}}\colon\mathfrak{Q}\mathsf{M}\to\mathsf{M}\) to be such that the lower row in the following commutative diagram with exact rows is the pushout of the upper row along \(f_{!}(e)\): \[\begin{array}{ccccccccc}0&\to&f_{!}X(M)&\xrightarrow{\iota_{M}-f_{!}(h_{\mathsf{M}})}&f_{!}(M)&\xrightarrow{\varepsilon_{\mathsf{M}}}&\mathsf{M}&\to&0\\ &&\downarrow{\scriptstyle f_{!}(e)}&&\downarrow{\scriptstyle r}&&\|&&\\ 0&\to&f_{!}(J)&\xrightarrow{s}&\mathfrak{Q}\mathsf{M}&\xrightarrow{p_{\mathsf{M}}}&\mathsf{M}&\to&0\end{array}\tag{5.1.1}\] Note that \(\mathfrak{Q}\mathsf{M}\) is not well-defined up to isomorphism, since it depends on the choice of \(J\), the monomorphism \(j\), and the lift \(e\). In Proposition 5.3 we show that it is well-defined in a quotient category of \(\mathcal{C}^{T(X)}\). We first show that \(\mathfrak{Q}\mathsf{M}\) gives a right \(\operatorname{Mono}(X)\)-approximation. **Theorem 5.2**.: _The morphism \(p_{\mathsf{M}}\colon\mathfrak{Q}\mathsf{M}\to\mathsf{M}\) is a right \(\operatorname{Mono}(X)\)-approximation for \(\mathsf{M}\in\mathcal{C}^{T(X)}\)._ Proof.: We first show that \(\mathfrak{Q}\mathsf{M}\) is contained in \(\operatorname{Mono}(X)\). To this end, we apply \(\operatorname{top}_{X}\) to the diagram (5.1.1). This yields a commutative diagram with exact rows. By commutativity of the leftmost square, the map \(L_{1}\operatorname{top}_{X}\mathsf{M}\to J\) is equal to \(j\), and hence is a monomorphism. Furthermore, the lower row can be extended to an exact sequence \[0\to L_{1}\operatorname{top}_{X}\mathfrak{Q}\mathsf{M}\to L_{1}\operatorname{top}_{X}\mathsf{M}\xrightarrow{j}J\] since \(L_{1}\operatorname{top}_{X}(f_{!}(J))=0\). Hence, it follows that \(L_{1}\operatorname{top}_{X}\mathfrak{Q}\mathsf{M}=0\). Thus, \(\mathfrak{Q}\mathsf{M}\in\operatorname{Mono}(X)\).
To see that \(p_{\mathsf{M}}\) is a right approximation, apply \(\operatorname{Hom}_{\mathcal{C}^{T(X)}}(\mathsf{N},-)\) with \(\mathsf{N}\in\operatorname{Mono}(X)\) to the exact sequence \[0\to f_{!}(J)\to\mathfrak{Q}\mathsf{M}\xrightarrow{p_{\mathsf{M}}}\mathsf{M}\to 0.\] This gives an epimorphism \[\operatorname{Hom}_{\mathcal{C}^{T(X)}}(\mathsf{N},p_{\mathsf{M}})\colon\operatorname{Hom}_{\mathcal{C}^{T(X)}}(\mathsf{N},\mathfrak{Q}\mathsf{M})\to\operatorname{Hom}_{\mathcal{C}^{T(X)}}(\mathsf{N},\mathsf{M})\] since \(\operatorname{Ext}^{1}_{\mathcal{C}^{T(X)}}(\mathsf{N},f_{!}(J))=0\) by Proposition 4.2. This proves the claim. Next we show that \(\mathfrak{Q}\mathsf{M}\) satisfies a universal property. Here \(\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C})}\) denotes the quotient of the category \(\mathcal{C}^{T(X)}\) by the ideal of morphisms factoring through an object of the form \(f_{!}(J)\) where \(J\in\operatorname{inj}\mathcal{C}\). **Proposition 5.3**.: _Let \(g\colon\mathsf{N}\to\mathsf{M}\) be a morphism in \(\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C})}\) with \(\mathsf{N}\in\operatorname{Mono}(X)\). Then there exists a unique morphism_ \[g^{\prime}\colon\mathsf{N}\to\mathfrak{Q}\mathsf{M}\] _in \(\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C})}\) such that the equality \(p_{\mathsf{M}}\circ g^{\prime}=g\) holds._ Proof.: The existence of \(g^{\prime}\) follows from \(p_{\mathsf{M}}\) being a right \(\operatorname{Mono}(X)\)-approximation. We show uniqueness: Assume \(g^{\prime}\colon\mathsf{N}\to\mathfrak{Q}\,\mathsf{M}\) and \(g^{\prime\prime}\colon\mathsf{N}\to\mathfrak{Q}\,\mathsf{M}\) are two morphisms in \(\mathcal{C}^{T(X)}\) such that \(p_{\mathsf{M}}\circ g^{\prime}\) and \(p_{\mathsf{M}}\circ g^{\prime\prime}\) are equal as morphisms in \(\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C})}\). Then there exist \(J^{\prime}\in\operatorname{inj}\mathcal{C}\) and morphisms \(u\colon\mathsf{N}\to f_{!}(J^{\prime})\) and \(v\colon f_{!}(J^{\prime})\to\mathsf{M}\) such that \[v\circ u=p_{\mathsf{M}}\circ(g^{\prime}-g^{\prime\prime})\] in \(\mathcal{C}^{T(X)}\). Furthermore, since \(p_{\mathsf{M}}\) is a right \(\operatorname{Mono}(X)\)-approximation and \(f_{!}(J^{\prime})\in\operatorname{Mono}(X)\), there exists a morphism \(w\colon f_{!}(J^{\prime})\to\mathfrak{Q}\,\mathsf{M}\) satisfying \[v=p_{\mathsf{M}}\circ w.\] Hence, \(p_{\mathsf{M}}\circ(g^{\prime}-g^{\prime\prime}-w\circ u)=0\) which implies that \(g^{\prime}-g^{\prime\prime}-w\circ u\) factors through \(\ker p_{\mathsf{M}}=f_{!}(J)\). Therefore, \(g^{\prime}-g^{\prime\prime}=(g^{\prime}-g^{\prime\prime}-w\circ u)+w\circ u\) factors through \(f_{!}(J)\oplus f_{!}(J^{\prime})\cong f_{!}(J\oplus J^{\prime})\). This shows that \(g^{\prime}\) and \(g^{\prime\prime}\) are equal in \(\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C})}\). It follows from Proposition 5.3 that \(\mathfrak{Q}\,\mathsf{M}\) and \(p_{\mathsf{M}}\) are unique up to isomorphism in \(\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C})}\), independently of the choice of \(j\colon L_{1}\operatorname{top}_{X}\mathsf{M}\to J\) and the lift \(e\colon X(M)\to J\). In fact, the universal property of \(\mathfrak{Q}\,\mathsf{M}\) implies that the assignment \(\mathsf{M}\mapsto\mathfrak{Q}\,\mathsf{M}\) can be made into a functor, as the following result shows.
Here we write \[\overline{\operatorname{Mono}}(X)=\tfrac{\operatorname{Mono}(X)}{f_{!}( \operatorname{inj}\mathcal{C})}\] since by Proposition 4.2 and Proposition 4.3 the subcategory of injectives in \(\operatorname{Mono}(X)\) is \(f_{!}(\operatorname{inj}\mathcal{C})\). **Corollary 5.4**.: _The assignment \(\mathsf{M}\mapsto\mathfrak{Q}\,\mathsf{M}\) induces a functor_ \[\mathfrak{Q}\colon\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C })}\to\overline{\operatorname{Mono}}(X)\] _which is right adjoint to the inclusion functor \(i\colon\overline{\operatorname{Mono}}(X)\to\frac{\mathcal{C}^{T(X)}}{f_{!}( \operatorname{inj}\mathcal{C})}\). Furthermore, the counit of the adjunction \(i\dashv\mathfrak{Q}\) at \(\mathsf{M}\) is \(p_{\mathsf{M}}\)._ Proof.: Composing with \(p_{\mathsf{M}}\) gives an isomorphism \[\operatorname{Hom}_{\overline{\operatorname{Mono}}(X)}(\mathsf{N},\mathfrak{Q }\,\mathsf{M})\xrightarrow{\cong}\operatorname{Hom}_{\frac{\mathcal{C}^{T(X) }}{f_{!}(\operatorname{inj}\mathcal{C})}}(\mathsf{N},\mathsf{M})\] for \(\mathsf{N}\in\operatorname{Mono}(X)\) by Proposition 5.3. Since \(\mathfrak{Q}\,\mathsf{M}\in\operatorname{Mono}(X)\), it follows from Yoneda's lemma that the assignment \(\mathsf{M}\mapsto\mathfrak{Q}\,\mathsf{M}\) defines a functor \(\mathfrak{Q}\colon\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj}\mathcal{C })}\to\overline{\operatorname{Mono}}(X)\) which makes the isomorphism \[\operatorname{Hom}_{\frac{\mathcal{C}^{T(X)}}{f_{!}(\operatorname{inj} \mathcal{C})}}(i(-),\mathsf{M})\cong\operatorname{Hom}_{\overline{ \operatorname{Mono}}(X)}(-,\mathfrak{Q}\,\mathsf{M})\] natural in \(\mathsf{M}\). If \(\mathsf{N}=\mathfrak{Q}\,\mathsf{M}\), then the image of the identity \(1_{\mathfrak{Q}\,\mathsf{M}}\) is \(p_{\mathsf{M}}\), which is therefore the counit at \(\mathsf{M}\). This proves the claim. ### The general case In this subsection, we show that the inclusion of the monomorphism category into the Eilenberg-Moore category of the stable category induces an epivalence. **Definition 5.5**.: A functor is called an _epivalence_ if it is full, dense and reflects isomorphisms. The remainder of the subsection is concerned with proving the following result. **Theorem 5.6**.: _The canonical functor \(\overline{\operatorname{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}\) is an epivalence._ **Remark 5.7**.: It was observed in [1, Chapter II] that equivalences preserve and reflect several important representation-theoretic concepts. In particular, Theorem 5.6 has the following consequences: 1. An object in \(\overline{\operatorname{Mono}}(X)\) is indecomposable if and only if its image in \(\overline{\mathcal{C}}^{T(X)}\) is indecomposable. 2. There is a bijection between isomorphism classes of objects in \(\overline{\operatorname{Mono}}(X)\) and isomorphism classes of objects in \(\overline{\mathcal{C}}^{T(X)}\), which restricts to a bijection between the indecomposables. 3. \(\overline{\operatorname{Mono}}(X)\) is Krull-Remak-Schmidt if and only if \(\overline{\mathcal{C}}^{T(X)}\) is Krull-Remak-Schmidt. Under mild assumptions on \(\mathcal{C}\) there is a bijection between the indecomposable non-injective objects in \(\operatorname{Mono}(X)\) and the indecomposable objects in \(\operatorname{\overline{Mono}}(X)\). Therefore, given an indecomposable object in \(\overline{\mathcal{C}}^{T(X)}\), we have an associated unique (up to isomorphism) indecomposable non-injective object in \(\operatorname{Mono}(X)\). 
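To illustrate Remark 5.7 in a concrete case, consider \(Q=(\mathtt{1}\to\mathtt{2})\) and \(\mathcal{B}=\operatorname{mod}\Bbbk[x]/(x^{2})\) as in Remark 5.11 below; the following is only a sketch, and the explicit matching of objects uses the Mimo-construction of Section 6. Here \(\overline{\mathcal{C}}^{T(X)}\simeq\operatorname{rep}(Q,\operatorname{mod}\Bbbk)\) has exactly three indecomposable objects, namely \[\Bbbk\to 0,\qquad 0\to\Bbbk,\qquad\Bbbk\xrightarrow{1}\Bbbk,\] so by (2) the category \(\overline{\operatorname{Mono}}(X)\) has exactly three indecomposable objects up to isomorphism. These correspond to the indecomposable non-injective objects of \(\operatorname{Mono}(X)\), i.e. of the category of monomorphisms in \(\operatorname{mod}\Lambda\) for \(\Lambda=\Bbbk[x]/(x^{2})\), namely \((\operatorname{soc}\Lambda\hookrightarrow\Lambda)\), \((0\hookrightarrow\Bbbk)\) and \((\Bbbk=\Bbbk)\), while the remaining indecomposables \((\Lambda=\Lambda)\cong f_{!}(\Lambda,0)\) and \((0\hookrightarrow\Lambda)\cong f_{!}(0,\Lambda)\) are the injective objects of \(\operatorname{Mono}(X)\).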
In Section 7 we show that the Mimo-construction, a refinement of \(\mathfrak{Q}\) as defined in the previous subsection, gives an explicit way to describe it. In particular, Theorem 7.9 reduces the study of indecomposable objects in \(\operatorname{Mono}(X)\) to the study of indecomposable objects in \(\operatorname{\overline{\mathcal{C}}}^{T(X)}\), which is often much simpler. **Lemma 5.8**.: _The canonical functor \(\operatorname{Mono}(X)\to\overline{\mathcal{C}}^{T(X)}\) is dense._ Proof.: An object \(\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\) is given by a morphism \(h_{\mathsf{M}}\colon X(M)\to M\) in \(\overline{\mathcal{C}}\) by Lemma 3.13. Choose a lift \(h^{\prime}\colon X(M)\to M\) to \(\mathcal{C}\) of \(h_{\mathsf{M}}\), and let \(\mathsf{M}^{\prime}\) be the object in \(\mathcal{C}^{T(X)}\) corresponding to it. By Theorem 5.2 the object \(\mathfrak{Q}\,\mathsf{M}^{\prime}\) is in \(\operatorname{Mono}(X)\), and by construction it must be isomorphic to \(\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\). This proves the claim. Next we show that the functor \(\operatorname{Mono}(X)\to\overline{\mathcal{C}}^{T(X)}\) is full. **Lemma 5.9**.: _Let \(\mathsf{M}\in\operatorname{Mono}(X)\) and \(\mathsf{N}\in\mathcal{C}^{T(X)}\). Then any morphism \(\mathsf{M}\to\mathsf{N}\) in \(\overline{\mathcal{C}}^{T(X)}\) can be lifted to a morphism in \(\mathcal{C}^{T(X)}\). In particular, the functor \(\operatorname{Mono}(X)\to\overline{\mathcal{C}}^{T(X)}\) is full._ Proof.: Let \(\overline{g}\colon\mathsf{M}\to\mathsf{N}\) be a morphism in \(\overline{\mathcal{C}}^{T(X)}\), and let \(g^{\prime}\colon M\to N\) be an arbitrary lift of \(f^{*}(\overline{g})\) to \(\mathcal{C}\). Since \(f^{*}(\overline{g})\circ h_{\mathsf{M}}\) and \(h_{\mathsf{N}}\circ X(f^{*}(\overline{g}))\) are equal in \(\overline{\mathcal{C}}\), the difference \[g^{\prime}\circ h_{\mathsf{M}}-h_{\mathsf{N}}\circ X(g^{\prime})\colon X(M)\to N\] is equal to a composite \(X(M)\xrightarrow{u^{\prime}}J\xrightarrow{v^{\prime}}N\) where \(J\) is injective. Furthermore, since \(h_{\mathsf{M}}\) is a monomorphism, the map \(u^{\prime}\) can be lifted to a morphism \(u_{0}\colon M\to J\). Similarly, since \(X(J)\) is injective, the map \(X(u_{0})\colon X(M)\to X(J)\) can be lifted to a morphism \(u_{1}\colon M\to X(J)\). Repeating this argument, we get maps \(u_{i}\colon M\to X^{i}(J)\) for each \(i\geq 0\) such that \(u_{i}\circ h_{\mathsf{M}}=X(u_{i-1})\) for all \(i>0\). Now let \(I\coloneqq\bigoplus_{i\geq 0}X^{i}(J)\), and note that \(I\) is injective since \(X\) is locally nilpotent and therefore \(\bigoplus_{i\geq 0}X^{i}(J)\) is a finite sum. Let \(v\colon I\to N\) be the unique map given on component \(X^{i}(J)\) as the composite \[X^{i}(J)\xrightarrow{X^{i}(v^{\prime})}X^{i}(N)\xrightarrow{X^{i-1}(h_{ \mathsf{N}})}X^{i-1}(N)\xrightarrow{X^{i-2}(h_{\mathsf{N}})}\cdots\xrightarrow {h_{\mathsf{N}}}N.\] Furthermore, let \(u\colon M\to I\) be the unique map given on component \(X^{i}(J)\) as \(u_{i}\colon M\to X^{i}(J)\). If we let \(g=g^{\prime}-v\circ u\), then a short computation shows that \(g\circ h_{\mathsf{M}}=h_{\mathsf{N}}\circ X(g)\), so \(g\) is a morphism \(\mathsf{M}\to\mathsf{N}\) in \(\mathcal{C}^{T(X)}\). Since \(g\) is equal to \(\overline{g}\) in \(\overline{\mathcal{C}}^{T(X)}\) and \(\overline{g}\) was arbitrary, this proves the claim. 
**Lemma 5.10**.: _The canonical functor \(\overline{\operatorname{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}\) reflects isomorphisms._ Proof.: Let \(g\colon\mathsf{M}\to\mathsf{N}\) be a morphism in \(\mathcal{C}^{T(X)}\) which becomes an isomorphism in \(\overline{\mathcal{C}}^{T(X)}\). Consider the commutative diagram with exact rows obtained from the sequences in Lemma 3.6 for \(\mathsf{M}\) and \(\mathsf{N}\), with vertical maps \(f_{!}Xf^{*}(g)\), \(f_{!}f^{*}(g)\) and \(g\). Since \(f^{*}(g)\colon M\to N\) is an isomorphism in \(\overline{\mathcal{C}}\), the maps \(f_{!}f^{*}(g)\) and \(f_{!}Xf^{*}(g)\) are isomorphisms in \(\overline{\operatorname{Mono}}(X)\). Now \(\mathsf{M}\) is isomorphic to the cone of \(\iota_{M}-f_{!}(h_{\mathsf{M}})\) and \(\mathsf{N}\) is isomorphic to the cone of \(\iota_{N}-f_{!}(h_{\mathsf{N}})\) when we consider \(\overline{\operatorname{Mono}}(X)\) as a right triangulated category as in [13], see also [10] and [1]. Since the two leftmost vertical maps \(f_{!}Xf^{*}(g)\) and \(f_{!}f^{*}(g)\) are isomorphisms in \(\overline{\operatorname{Mono}}(X)\), the map \(g\) between the cones must also be an isomorphism in \(\overline{\operatorname{Mono}}(X)\), see for example the proof of [1, Corollary 1.5]. This proves the claim. Proof of Theorem 5.6.: This follows from Lemma 5.8, Lemma 5.9 and Lemma 5.10. **Remark 5.11**.: Let \(\mathrm{k}\) be a field and \(Q\) a finite acyclic quiver. Consider \(\mathrm{rep}(Q,\mathrm{mod}\;\mathrm{k}[x]/(x^{2}))\) as in Example 3.11. Then the category \(\mathrm{Mono}(X)\) is equivalent to the category of perfect differential \(\mathrm{k}Q\)-modules considered in [10]. Moreover, since \(\overline{\mathrm{mod}}\;\mathrm{k}[x]/(x^{2})\cong\mathrm{mod}\;\mathrm{k}\), it follows that \[\mathrm{rep}(Q,\overline{\mathrm{mod}}\;\mathrm{k}[x]/(x^{2}))\cong\mathrm{rep}(Q,\mathrm{mod}\;\mathrm{k}).\] Hence we have an epivalence \(\overline{\operatorname{Mono}}(X)\to\mathrm{rep}(Q,\mathrm{mod}\;\mathrm{k})\). The composite \[\mathrm{Mono}(X)\to\overline{\operatorname{Mono}}(X)\to\mathrm{rep}(Q,\mathrm{mod}\;\mathrm{k})\] can be identified with the homology functor in [10, Theorem 1.1 b)]. ### The hereditary case Recall that an object \(M\in\mathcal{C}\) is called a _cosyzygy_ (of an object \(N\)) if there exists an exact sequence \(0\to N\to I\to M\to 0\) in \(\mathcal{C}\) with \(I\) injective. The category \(\mathcal{C}\) is called _hereditary_ if any cosyzygy in \(\mathcal{C}\) is injective. Our goal is to show that the functor in Theorem 5.6 becomes an actual equivalence when \(\mathcal{C}\) is hereditary. We start by proving that relative projective objects are closed under cosyzygies in \(\operatorname{Mono}(X)\). **Proposition 5.12**.: _Assume we have an exact sequence_ \[0\to f_{!}(M)\xrightarrow{g}f_{!}(J)\to\mathsf{N}\to 0\] _in \(\mathcal{C}^{T(X)}\) with \(\mathsf{N}\in\operatorname{Mono}(X)\) and \(J\in\mathcal{C}\) injective. Then \(\mathsf{N}\cong f_{!}(N^{\prime})\) for some \(N^{\prime}\in\mathcal{C}\)._ Proof.: Choose an exact sequence \[0\to M\xrightarrow{i}I\xrightarrow{q}M^{\prime}\to 0\] in \(\mathcal{C}\) with \(I\) injective. Applying \(f_{!}(-)\), we get an exact sequence \[0\to f_{!}(M)\xrightarrow{f_{!}(i)}f_{!}(I)\xrightarrow{f_{!}(q)}f_{!}(M^{\prime})\to 0\] in \(\mathcal{C}^{T(X)}\). This implies that both \(\mathsf{N}\) and \(f_{!}(M^{\prime})\) are cosyzygies of \(f_{!}(M)\) in \(\operatorname{Mono}(X)\). But the cosyzygy of an object is well-defined in \(\overline{\operatorname{Mono}}(X)\), see [11].
Hence, there exist injective objects \(J_{1},J_{2}\in\mathcal{C}\) and an isomorphism \[\mathsf{N}\oplus f_{!}(J_{1})\cong f_{!}(M^{\prime})\oplus f_{!}(J_{2}).\] In particular, \(\mathsf{N}\) is a direct summand of \(f_{!}(M^{\prime}\oplus J_{2})\), and therefore by Proposition 3.7 we have an isomorphism \(\mathsf{N}\cong f_{!}(N^{\prime})\) for some \(N^{\prime}\in\mathcal{C}\). Next we show that if a morphism in \(\mathrm{Mono}(X)\) factors componentwise through an injective, then it must factor through a relative projective. **Proposition 5.13**.: _Let \(g\colon\mathsf{M}\to\mathsf{N}\) be a morphism in \(\mathrm{Mono}(X)\), and assume \(f^{*}(g)\colon M\to N\) factors through an injective object in \(\mathcal{C}\). Then \(g\) factors through an object of the form \(f_{!}(K)\) where \(K\) is a cosyzygy of \(X(M)\)._ Proof.: Let \(r\colon M\to J\) be a monomorphism with \(J\) injective. Since \(f^{*}(g)\) factors through an injective object, it must also factor through \(r\), so we can write \(f^{*}(g)=s\circ r\) for some map \(s\colon J\to N\). Let \(\mathsf{K}^{\prime}\) be the pushout of \(\varepsilon_{\mathsf{M}}\colon f_{!}(M)\to\mathsf{M}\) along \(f_{!}(r)\colon f_{!}(M)\to f_{!}(J)\). We then get the following commutative diagram where the left hand square is the pushout square and the bottom right morphism is uniquely defined such that the right hand square commutative and the composite \(\mathsf{M}\to\mathsf{K}^{\prime}\to\mathsf{N}\) is equal to \(g\). Note that the left hand square is also a pullback square since \(f_{!}(r)\) is a monomorphism. Therefore, since \(\varepsilon_{\mathsf{M}}\) is an epimorphism with kernel \(f_{!}X(M)\), the same must hold for \(f_{!}(J)\to\mathsf{K}^{\prime}\), so we get an exact sequence \[0\to f_{!}X(M)\to f_{!}(J)\to\mathsf{K}^{\prime}\to 0.\] By Proposition 5.12 it follows that \(\mathsf{K}^{\prime}\cong f_{!}(K)\) for some object \(K\in\mathcal{C}\). Applying \(\operatorname{top}_{X}\) to the exact sequence, we get the exact sequence \[0\to X(M)\to J\to K\to 0.\] This shows that \(K\) is a coszyzygy of \(X(M)\), which proves the claim. We can now prove the main result of this subsection. **Theorem 5.14**.: _The following are equivalent:_ 1. _Any object in the image of_ \(X\) _has injective dimension at most_ \(1\)_._ 2. \(\overline{\operatorname{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}\) _is an equivalence._ _In particular, this holds if \(\mathcal{C}\) is hereditary._ Proof.: We already know that the functor in (2) is full and dense, so the statement is equivalent to the functor being faithful. By Proposition 5.13 any morphism \(g\) in \(\operatorname{Mono}(X)\) which is \(0\) in \(\overline{\mathcal{C}}^{T(X)}\) must factor through an object of the form \(f_{!}(K)\) where \(K\) is a coszyzygy in \(\mathcal{C}\) of an object of the form \(X(M)\). If (1) holds, then \(K\) must be injective, so \(g\) must be \(0\) in \(\overline{\operatorname{Mono}}(X)\). Next assume (2) holds, and let \(M\in\mathcal{C}\) be arbitrary. Choose an exact sequence \[0\to X(M)\to J\to K\to 0\] with \(J\) injective. Our goal is to show that \(K\) is injective. Applying \(f_{!}\) and taking the pushout along \(\iota_{M}\) gives a commutative diagram with exact rows. Note that \(\mathsf{E}\in\operatorname{Mono}(X)\) since it is an extension of two objects in \(\operatorname{Mono}(X)\). Since \(f^{*}(\iota_{M})\) is a split monomorphism, \(f^{*}(g)\) must be a split monomorphism and \(f^{*}(p)\) must factor through \(f^{*}f_{!}(J)\). 
Hence, \(p\) is \(0\) in \(\overline{\mathcal{C}}^{T(X)}\). It follows from assumption (2) that \(p\) factors through an object of the form \(f_{!}(I)\) with \(I\) injective in \(\mathcal{C}\). Hence \(\operatorname{top}_{X}(p)\colon\operatorname{top}_{X}\mathsf{E}\to K\) factors through \(I\). We claim that \(\operatorname{top}_{X}(p)\) is also a split epimorphism. Indeed, applying \(\operatorname{top}_{X}\) to the diagram above gives a commutative diagram with exact rows Since \(\operatorname{top}_{X}(\iota_{M})=0\) and the left hand square is a pushout, the map \(M\to\operatorname{top}_{X}\mathsf{E}\) is a split monomorphism. Therefore \(\operatorname{top}_{X}(p)\) must be a split epimorphism. Since \(\operatorname{top}_{X}(p)\) factors through \(I\), it follows that the induced map \(I\to K\) is also a split epimorphism. Hence \(K\) is a summand of an injective object, and must therefore be injective. **Remark 5.15**.: Consider the category \(\operatorname{rep}(Q,\mathcal{B})\) of representations of a finite acyclic quiver \(Q\) in \(\mathcal{B}\) as in Example 3.11, and assume \(\mathcal{B}\) has enough injectives. Let \(\operatorname{Mono}(Q,\mathcal{B})\) be the monomorphism category, and consider the functor \[\overline{\operatorname{Mono}}(Q,\mathcal{B})\to\operatorname{rep}(Q,\overline {\mathcal{B}})\] in Theorem 5.14. If \(Q\) has at least one arrow, then the image of \(X\) contains at least one copy of \(\mathcal{B}\). Then it follows from Theorem 5.14 that the functor above is an equivalence if and only if \(\mathcal{B}\) is hereditary. In contrast, if \(Q\) has no arrows, than \(X\) is the zero functor, so (1) is automatically satisfied and in (2), both categories coincide with \(\prod_{i\in Q_{0}}\overline{\mathcal{C}}\). We are particularly interested in situations where \(\overline{\mathcal{C}}\) is abelian. This implies that \(\overline{\mathcal{C}}^{T(X)}\) is abelian, and hence \(\overline{\operatorname{Mono}}(X)\) is abelian by Theorem 5.14. Using the Mimo-construction in Theorem 7.9, it follows that describing the indecomposable non-injective objects in \(\operatorname{Mono}(X)\) is equivalent to describing the indecomposable objects in an abelian category, namely \(\overline{\mathcal{C}}^{T(X)}\). **Remark 5.16**.: Assume \(\mathcal{C}\) is the category of left \(\Lambda\)-modules for an Artin algebra \(\Lambda\). Then \(\mathcal{C}\) is hereditary if and only if \(\Lambda\) is left hereditary. It was shown in [15, Theorem 9.5] that for such rings the stable category of \(\mathcal{C}\) modulo projectives, denoted \(\underline{\mathcal{C}}\), is abelian if and only if the injective envelope of \(\Lambda\) is projective. Since \(\overline{\mathcal{C}}\cong\underline{\mathcal{C}}\) by [11, Proposition 5.5], this is also equivalent to \(\overline{\mathcal{C}}\) being abelian. Now the injective envelope of \(\Lambda\) being projective holds precisely if \(\Lambda\) is isomorphic to a finite direct products of complete blocked triangular matrix algebras over division rings, see [15, Remark 7.6]. **Remark 5.17**.: Consider the category \(\operatorname{rep}(Q,\mathcal{B})\) of representations of a finite acyclic quiver \(Q\) in \(\mathcal{B}\) as in Example 3.11. Assume furthermore that \(\mathcal{B}=\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n}\) where \(\Bbbk\!\operatorname{A}_{n}\) is the path algebra of a linearly oriented \(\mathbb{A}_{n}\)-quiver over a field \(\Bbbk\). 
Since \(\overline{\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n}}\cong\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n-1}\) we get that \[\mathcal{C}=\prod_{\mathfrak{i}\in Q_{0}}\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n}\quad\text{and}\quad\overline{\mathcal{C}}\cong\prod_{\mathfrak{i}\in Q_{0}}\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n-1}.\] Hence by Theorem 5.14 we get an equivalence \[\overline{\operatorname{Mono}}(Q,\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n})\cong\operatorname{rep}(Q,\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n-1}).\] By letting \(Q\) be the linearly oriented \(\mathbb{A}_{m}\)-quiver, we recover (the dual of) [1, Theorem 1.5]. Note that \[\prod_{\mathfrak{i}\in Q_{0}}\operatorname{Mod}\,\Bbbk\!\operatorname{A}_{n}\cong\operatorname{Mod}\Lambda\] where \(\Lambda=\prod_{\mathfrak{i}\in Q_{0}}\,\Bbbk\!\operatorname{A}_{n}\) satisfies the condition in Remark 5.16. **Remark 5.18**.: There are other examples of hereditary categories \(\mathcal{C}\) such that \(\overline{\mathcal{C}}\) is abelian. Indeed, let \(\mathcal{C}=\operatorname{art}\Lambda\) be the category of artinian modules over a Dedekind domain \(\Lambda\). This is abelian, closed under injective envelopes [14, Proposition 2*], and hereditary. We claim that any artinian module must be a finite sum of indecomposable injective modules and modules of finite length. Indeed, since any indecomposable finite length module over \(\Lambda\) is of the form \(\Lambda/\mathfrak{m}^{n}\) where \(\mathfrak{m}\) is a maximal ideal, the finite length modules form a uniserial category in the sense of [11]. By [11, Proposition 2.4.20] any artinian module is a filtered colimit of finite length modules, so the claim follows from [11, Theorem 13.1.28]. Consider the stable category \(\overline{\mathcal{C}}\). Since any nonzero injective module over a Dedekind domain has infinite length [12, Corollary 2], the objects in \(\overline{\mathcal{C}}\) are up to isomorphism precisely the modules of finite length. Furthermore, since \(\Lambda\) is hereditary there are no nonzero morphisms from injective modules to modules of finite length, so \(\overline{\mathcal{C}}\) must be equivalent to the category \(\operatorname{fl}\Lambda\) of finite length modules over \(\Lambda\), which is abelian. ## 6. The Mimo-construction We fix a \(\Bbbk\)-linear abelian category \(\mathcal{C}\) with enough injectives, and an exact functor \(X\colon\mathcal{C}\to\mathcal{C}\) which is locally nilpotent and preserves injectives. The goal of this section is to extend the Mimo-construction of Ringel and Schmidmeier [13] to our setting. In particular, we show that it is a minimal right \(\operatorname{Mono}(X)\)-approximation. ### Definition and properties We start with the definition of the Mimo-construction. **Definition 6.1**.: Let \(\mathsf{M}\) be an object in \(\mathcal{C}^{T(X)}\), and consider the construction \(\mathfrak{Q}\,\mathsf{M}\) in Definition 5.1. If \(J\) is an injective envelope of \(L_{1}\operatorname{top}_{X}\mathsf{M}\), then we write \(\operatorname{Mimo}\,\mathsf{M}:=\mathfrak{Q}\,\mathsf{M}\) and call this the _Mimo-construction_ of \(\mathsf{M}\). Note that \(\operatorname{Mimo}\,\mathsf{M}\) might not exist for all \(\mathsf{M}\), unless \(\mathcal{C}\) has injective envelopes. We show that if \(\operatorname{Mimo}\,\mathsf{M}\) exists, then it is a minimal right approximation and therefore unique up to isomorphism. **Theorem 6.2**.: _Let \(\mathsf{M}\in\mathcal{C}^{T(X)}\), and assume \(\operatorname{Mimo}\,\mathsf{M}\) exists.
Then the canonical morphism \(p_{\mathsf{M}}\colon\operatorname{Mimo}\,\mathsf{M}\to\mathsf{M}\) is a minimal right \(\operatorname{Mono}(X)\)-approximation._ Proof.: By Theorem 5.2 we only need to show \(p_{\mathsf{M}}\) is right minimal. Let \(\varphi\colon\operatorname{Mimo}\mathsf{M}\to\operatorname{Mimo}\mathsf{M}\) be a morphism satisfying \(p_{\mathsf{M}}\circ\varphi=p_{\mathsf{M}}\). Since \(p_{\mathsf{M}}\circ\varphi=p_{\mathsf{M}}\), the morphism \(\varphi\) together with the identity on \(\mathsf{M}\) yields a commutative diagram with exact rows, both rows being \(0\to f_{!}(J)\xrightarrow{s}\operatorname{Mimo}\mathsf{M}\xrightarrow{p_{\mathsf{M}}}\mathsf{M}\to 0\), where \(\psi\colon f_{!}(J)\to f_{!}(J)\) is induced from the commutativity of the right hand square. Applying \(\operatorname{top}_{X}\) yields a commutative diagram in which \(\operatorname{top}_{X}(\psi)\circ j=j\). As \(j\) is a minimal left approximation, it follows that \(\operatorname{top}_{X}(\psi)\) is an isomorphism. Therefore \(\psi\) is an isomorphism by Lemma 3.24 part (2). The \(5\)-Lemma then implies that \(\varphi\) is an isomorphism. Next we show that the isomorphism class of \(\operatorname{Mimo}\mathsf{M}\) only depends on the isomorphism class of \(\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\). This plays an important role in Section 7. First we prove an analogue for \(\mathfrak{Q}\mathsf{M}\). **Lemma 6.3**.: _Let \(\mathsf{M},\mathsf{N}\in\mathcal{C}^{T(X)}\). If \(\mathsf{M}\cong\mathsf{N}\) in \(\overline{\mathcal{C}}^{T(X)}\), then \(\mathfrak{Q}\mathsf{M}\cong\mathfrak{Q}\mathsf{N}\) in \(\overline{\operatorname{Mono}}(X)\)._ Proof.: Consider the composite of the isomorphisms \(\mathfrak{Q}\mathsf{M}\cong\mathsf{M}\cong\mathsf{N}\cong\mathfrak{Q}\mathsf{N}\) in \(\overline{\mathcal{C}}^{T(X)}\). By Lemma 5.9 the functor \(\overline{\operatorname{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}\) is full, and hence this composite can be lifted to a morphism \(\mathfrak{Q}\mathsf{M}\to\mathfrak{Q}\mathsf{N}\) in \(\overline{\operatorname{Mono}}(X)\). By Lemma 5.10 the functor \(\overline{\operatorname{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}\) reflects isomorphisms, so the lift has to be an isomorphism in \(\overline{\operatorname{Mono}}(X)\). This proves the claim. **Remark 6.4**.: We are not claiming that \(\mathfrak{Q}\) is a functor on \(\overline{\mathcal{C}}^{T(X)}\) in Lemma 6.3. As one can see from the proof, the choice of the isomorphism \(\mathfrak{Q}\mathsf{M}\cong\mathfrak{Q}\mathsf{N}\) is not unique. To obtain a similar result for \(\operatorname{Mimo}\), we first need to prove a result on the non-existence of nonzero injective summands. **Lemma 6.5**.: _Assume \(\mathcal{C}\) has injective envelopes. Let \(\mathsf{M}\in\mathcal{C}^{T(X)}\), and assume \(M\) has no nonzero injective summands. Let \(g\colon f_{!}(I)\to\operatorname{Mimo}\mathsf{M}\) be a morphism in \(\operatorname{Mono}(X)\). If \(I\) is injective, then \(g\) must be in the radical of \(\operatorname{Mono}(X)\). In particular, \(\operatorname{Mimo}\mathsf{M}\) has no nonzero summands in \(\operatorname{add}f_{!}(\operatorname{inj}\mathcal{C})\)._ Proof.: By definition, \(g\) is in the radical if for any morphism \(g^{\prime}\colon\operatorname{Mimo}\mathsf{M}\to f_{!}(I)\) the difference \(1_{f_{!}(I)}-g^{\prime}\circ g\) is an isomorphism. For this, it suffices to show that \(g^{\prime}\circ g\) is in the radical, since then by definition \(1_{f_{!}(I)}-1_{f_{!}(I)}\circ(g^{\prime}\circ g)=1_{f_{!}(I)}-g^{\prime}\circ g\) is an isomorphism. Let \(r\colon f_{!}(M)\to\operatorname{Mimo}\mathsf{M}\) and \(s\colon f_{!}(J)\to\operatorname{Mimo}\mathsf{M}\) be as in diagram (5.1.1) (where \(\mathfrak{Q}\mathsf{M}\) is replaced by \(\operatorname{Mimo}\mathsf{M}\)).
Consider the morphisms \(g\colon f_{!}(I)\to\operatorname{Mimo}\mathsf{M}\) and \((s,r)\colon f_{!}(J)\oplus f_{!}(M)\to\operatorname{Mimo}\mathsf{M}\). We want to show the existence of a morphism \(k\colon f_{!}(I)\to f_{!}(J)\oplus f_{!}(M)\) satisfying \((s,r)\circ k=g\). To this end, note that the triangle identities for the adjunction \(f_{!}\dashv f^{*}\) imply that \(f^{*}(\varepsilon_{\mathsf{M}})\) is a split epimorphism. Thus, the sequence \[0\to f^{*}f_{!}X(M)\xrightarrow{f^{*}(\iota_{M})-f^{*}f_{!}(h_{\mathsf{M}})}f^{*}f_{!}(M)\xrightarrow{f^{*}(\varepsilon_{\mathsf{M}})}M\to 0\] is split exact. Hence, \(f^{*}(\iota_{M})-f^{*}f_{!}(h_{\mathsf{M}})\) is a split monomorphism. Therefore the short exact sequence \(0\to f_{!}X(M)\to f_{!}(M)\oplus f_{!}(J)\to\operatorname{Mimo}\mathsf{M}\to 0\) obtained from the definition of \(\operatorname{Mimo}\mathsf{M}\) as a pushout becomes split exact after applying \(f^{*}\). It follows that \((f^{*}(s),f^{*}(r))\) is a split epimorphism. By Lemma 3.5 this implies the existence of the morphism \(k\) above. Next we claim that the morphism \((\operatorname{top}_{X}(g^{\prime}\circ r),\operatorname{top}_{X}(g^{\prime}\circ s))\) is in the radical. Indeed, the morphism \(\operatorname{top}_{X}(g^{\prime}\circ r)\colon M\to I\) is in the radical by Lemma 2.5 (2) since \(M\) has no nonzero injective summands. Also, the composite \[L_{1}\operatorname{top}_{X}\mathsf{M}\xrightarrow{j}J\xrightarrow{\operatorname{top}_{X}(g^{\prime}\circ s)}I\] must be \(0\), by construction of \(\operatorname{Mimo}\). Hence \(\operatorname{top}_{X}(g^{\prime}\circ s)\) factors through the cokernel of \(j\), which is a radical morphism by Lemma 2.5 (1), since \(j\) is an injective envelope. Hence \(\operatorname{top}_{X}(g^{\prime}\circ s)\) must itself be a radical morphism. This shows that \((\operatorname{top}_{X}(g^{\prime}\circ r),\operatorname{top}_{X}(g^{\prime}\circ s))\) is in the radical. By Lemma 3.25 the morphism \((g^{\prime}\circ r,g^{\prime}\circ s)\) is in the radical of \(\operatorname{Mono}(X)\). Since \(g^{\prime}\circ g\) factors through \((g^{\prime}\circ r,g^{\prime}\circ s)\), this proves the claim. We can now prove a similar result to Lemma 6.3 for the \(\operatorname{Mimo}\)-construction. This is an analogue of [10, Proposition 4.1]. **Corollary 6.6**.: _Assume \(\mathcal{C}\) has injective envelopes. Let \(\mathsf{M}\) and \(\mathsf{N}\) be objects in \(\mathcal{C}^{T(X)}\). Assume that \(M\) and \(N\) have no nonzero injective summands, and that \(\mathsf{M}\cong\mathsf{N}\) in \(\overline{\mathcal{C}}^{T(X)}\). Then \(\operatorname{Mimo}\mathsf{M}\cong\operatorname{Mimo}\mathsf{N}\) in \(\operatorname{Mono}(X)\)._ Proof.: By Lemma 6.3 it follows that \(\operatorname{Mimo}\mathsf{M}\cong\operatorname{Mimo}\mathsf{N}\) in \(\overline{\operatorname{Mono}}(X)\). Hence, there exist injective objects \(I_{1},I_{2}\in\mathcal{C}\) such that \(\operatorname{Mimo}\mathsf{M}\oplus f_{!}(I_{1})\) and \(\operatorname{Mimo}\mathsf{N}\oplus f_{!}(I_{2})\) are isomorphic. The claim now follows from Lemma 6.5 and Lemma 2.6. ### Mimo for modulations To describe \(\operatorname{Mimo}\mathsf{M}\) for a modulation, we first give a description of it as an object in \((X\Downarrow\operatorname{Id}_{\mathcal{C}})\) and of \(r,s\) and \(p_{\mathsf{M}}\) in (5.1.1) as morphisms in \(\mathcal{C}\). Recall that \(f^{*}f_{!}(J)=\bigoplus_{i\geq 0}X^{i}(J)\) and \(f^{*}f_{!}(M)=\bigoplus_{i\geq 0}X^{i}(M)\). **Lemma 6.7**.: _Fix the notation as in Definition 5.1. Then there exists an isomorphism \(f^{*}(\mathfrak{Q}\mathsf{M})\cong M\oplus\bigoplus_{i\geq 0}X^{i}(J)\) such that:_ 1. \(f^{*}(s)=\begin{pmatrix}0\\ 1\end{pmatrix}\colon\bigoplus_{i\geq 0}X^{i}(J)\to M\oplus\bigoplus_{i\geq 0}X^{i}(J)\)_._ 2.
\(f^{*}(r)\colon\bigoplus_{i\geq 0}X^{i}(M)\to M\oplus\bigoplus_{i\geq 0}X^{i}(J)\) _is induced by the maps_ * \(X^{k}(M)\xrightarrow{h_{\mathsf{M}}\circ\cdots\circ X^{k-1}(h_{\mathsf{M}})}M\) _for all_ \(k\geq 0\) _(where_ \(k=0\) _gives the identity map)._ * \(X^{j+k+1}(M)\xrightarrow{X^{j}(e)\circ X^{j+1}(h_{\mathsf{M}})\circ\cdots\circ X^{j+k}(h_{\mathsf{M}})}X^{j}(J)\) _for all_ \(j,k\geq 0\)_._ 3. \(f^{*}(p_{\mathsf{M}})=\begin{pmatrix}1&0\end{pmatrix}\colon M\oplus\bigoplus_{i\geq 0}X^{i}(J)\to M\)_._ 4. \(h_{\mathfrak{Q}\mathsf{M}}\colon X(M)\oplus\bigoplus_{i\geq 0}X^{i+1}(J)\to M\oplus\bigoplus_{i\geq 0}X^{i}(J)\) _is induced by the maps_ * \(X^{i+1}(J)\xrightarrow{1}X^{i+1}(J)\) _for all_ \(i\geq 0\)_._ * \(X(M)\xrightarrow{h_{\mathsf{M}}}M\)_._ * \(X(M)\xrightarrow{e}J\)_._ Proof.: Assume we have an object \(\mathsf{M}^{\prime}\in(X\Downarrow\operatorname{Id}_{\mathcal{C}})\) defined by \(f^{*}(\mathsf{M}^{\prime})=M\oplus\bigoplus_{i\geq 0}X^{i}(J)\) and by a map \(h_{\mathsf{M}^{\prime}}\colon Xf^{*}(\mathsf{M}^{\prime})\to f^{*}(\mathsf{M}^{\prime})\) as in (4). Recall that \(h_{f_{!}(M)}\colon\bigoplus_{i\geq 0}X^{i+1}(M)\to\bigoplus_{i\geq 0}X^{i}(M)\) and \(h_{f_{!}(J)}\colon\bigoplus_{i\geq 0}X^{i+1}(J)\to\bigoplus_{i\geq 0}X^{i}(J)\) are induced by the maps \(X^{i+1}(M)\xrightarrow{1}X^{i+1}(M)\) and \(X^{i+1}(J)\xrightarrow{1}X^{i+1}(J)\) for \(i\geq 0\), respectively. A straightforward computation then gives that \[h_{\mathsf{M}^{\prime}}\circ X(r^{\prime})=r^{\prime}\circ h_{f_{!}(M)}\quad\text{and}\quad h_{\mathsf{M}^{\prime}}\circ X(s^{\prime})=s^{\prime}\circ h_{f_{!}(J)}\] where \(r^{\prime}\) and \(s^{\prime}\) are defined using the formulas in (2) and (1), respectively. Hence there exist well-defined morphisms \(r\colon f_{!}(M)\to\mathsf{M}^{\prime}\) and \(s\colon f_{!}(J)\to\mathsf{M}^{\prime}\) in \(\mathcal{C}^{T(X)}\cong(X\Downarrow\operatorname{Id}_{\mathcal{C}})\) such that \(f^{*}(r)=r^{\prime}\) and \(f^{*}(s)=s^{\prime}\). Now observe that the sequence \[0\to f_{!}X(M)\xrightarrow{\begin{pmatrix}\iota_{M}-f_{!}(h_{\mathsf{M}})\\ f_{!}(e)\end{pmatrix}}f_{!}(M)\oplus f_{!}(J)\xrightarrow{\begin{pmatrix}r&-s\end{pmatrix}}\mathsf{M}^{\prime}\to 0\] is exact, since it is exact when applying \(f^{*}\). This implies that the left square in a diagram of the form (5.1.1), with \(\mathsf{M}^{\prime}\) in place of \(\mathfrak{Q}\mathsf{M}\), is a pushout square. But since \(\mathfrak{Q}\mathsf{M}\) is also defined by this pushout square, we get an isomorphism \(\mathsf{M}^{\prime}\cong\mathfrak{Q}\mathsf{M}\). Finally, since \(f^{*}(p_{\mathsf{M}})\) as defined in (3) is the cokernel of \(f^{*}(s)\), it must lift to a map \(p_{\mathsf{M}}\colon\mathsf{M}^{\prime}\to\mathsf{M}\) such that the lower row \(0\to f_{!}(J)\xrightarrow{s}\mathsf{M}^{\prime}\xrightarrow{p_{\mathsf{M}}}\mathsf{M}\to 0\) is exact. Since \(f^{*}(p_{\mathsf{M}})\circ f^{*}(r)=f^{*}(\varepsilon_{\mathsf{M}})\), the right square must be commutative. This proves the claim. **Example 6.8**.: Let \(Q\) be a finite and acyclic quiver and \(\mathfrak{B}\) a modulation by abelian categories with enough injectives and exact functors preserving injective objects, see Example 3.9. Let \((B_{\mathtt{i}},B_{\alpha})_{\mathtt{i}\in Q_{0},\alpha\in Q_{1}}\) be a \(\mathfrak{B}\)-representation. We want to compute \(\mathfrak{Q}(B_{\mathtt{i}},B_{\alpha})\) as in Definition 5.1.
To this end, for each \(\mathtt{k}\in Q_{0}\) choose an injective object \(J_{\mathtt{k}}\) in \(\mathcal{B}_{\mathtt{k}}\) and a map \(e_{\mathtt{k}}\colon\bigoplus_{\alpha\in Q_{1},t(\alpha)=\mathtt{k}}F_{\alpha}(B_{s(\alpha)})\to J_{\mathtt{k}}\) whose restriction to \(L_{1}\operatorname{top}_{X}(B_{\mathtt{i}},B_{\alpha})_{\mathtt{k}}=\ker B_{\mathtt{k},\text{in}}\) is a monomorphism, see Example 3.26. From Lemma 6.7 it follows that \[\mathfrak{Q}(B_{\mathtt{i}},B_{\alpha})_{\mathtt{k}}=B_{\mathtt{k}}\oplus\bigoplus_{p\in Q_{\geq 0},t(p)=\mathtt{k}}F_{p}(J_{s(p)}).\] By Lemma 6.7 (4) the morphism \(\mathfrak{Q}(B_{\mathtt{i}},B_{\alpha})_{\beta}\colon\mathfrak{Q}(B_{\mathtt{i}},B_{\alpha})_{\mathtt{j}}\to\mathfrak{Q}(B_{\mathtt{i}},B_{\alpha})_{\mathtt{k}}\) associated to an arrow \(\beta\colon\mathtt{j}\to\mathtt{k}\) is the map \[F_{\beta}(B_{\mathtt{j}})\oplus\bigoplus_{p\in Q_{\geq 0},t(p)=\mathtt{j}}F_{\beta}F_{p}(J_{s(p)})\to B_{\mathtt{k}}\oplus\bigoplus_{q\in Q_{\geq 0},t(q)=\mathtt{k}}F_{q}(J_{s(q)})\] which is induced by the identity \(F_{\beta}F_{p}(J_{s(p)})\xrightarrow{1}F_{q}(J_{s(q)})\) for \(q=\beta p\), the map \(B_{\beta}\colon F_{\beta}(B_{\mathtt{j}})\to B_{\mathtt{k}}\), and the composite \(F_{\beta}(B_{\mathtt{j}})\to\bigoplus_{\alpha\in Q_{1},t(\alpha)=\mathtt{k}}F_{\alpha}(B_{s(\alpha)})\xrightarrow{e_{\mathtt{k}}}J_{\mathtt{k}}\) where the first map is the inclusion. If the restriction of \(e_{\mathtt{k}}\) to \(\ker B_{\mathtt{k},\text{in}}\) is an injective envelope for all \(\mathtt{k}\in Q_{0}\), then \(\mathfrak{Q}(B_{\mathtt{i}},B_{\alpha})=\operatorname{Mimo}(B_{\mathtt{i}},B_{\alpha})\) and we get a formula for the Mimo-construction. **Example 6.9**.: Consider the category of representations \(\operatorname{rep}(Q,\mathcal{B})\) as in Example 3.11. In this case the Mimo-construction \(\operatorname{Mimo}(B_{\mathtt{i}},B_{\alpha})=(B_{\mathtt{i}}^{\prime},B_{\alpha}^{\prime})\) of an object \((B_{\mathtt{i}},B_{\alpha})\) is given as follows: Choose an injective envelope \(j_{\mathtt{i}}\colon K_{\mathtt{i}}\to J_{\mathtt{i}}\) for each \(\mathtt{i}\in Q_{0}\), where \(K_{\mathtt{i}}\) is the kernel of the morphism \[B_{\mathtt{i},\text{in}}\colon\bigoplus_{\begin{subarray}{c}\alpha\in Q_{1}\\ t(\alpha)=\mathtt{i}\end{subarray}}B_{s(\alpha)}\xrightarrow{(B_{\alpha})_{\alpha}}B_{\mathtt{i}}.\] Let \(e_{\mathtt{i}}\colon\bigoplus_{\alpha\in Q_{1},t(\alpha)=\mathtt{i}}B_{s(\alpha)}\to J_{\mathtt{i}}\) be a lift of \(j_{\mathtt{i}}\) via the inclusion \(K_{\mathtt{i}}\to\bigoplus_{\alpha\in Q_{1},t(\alpha)=\mathtt{i}}B_{s(\alpha)}\). Then \[B_{\mathtt{i}}^{\prime}=B_{\mathtt{i}}\oplus\bigoplus_{p\in Q_{\geq 0},t(p)=\mathtt{i}}J_{s(p)}\] where \(Q_{\geq 0}\) is the set of paths in \(Q\), and \(s(p)\) and \(t(p)\) denote the source and target of \(p\), respectively. For an arrow \(\beta\colon\mathtt{i}\to\mathtt{k}\), the morphism \[B_{\beta}^{\prime}\colon B_{\mathtt{i}}\oplus\bigoplus_{p\in Q_{\geq 0},t(p)=\mathtt{i}}J_{s(p)}\to B_{\mathtt{k}}\oplus\bigoplus_{q\in Q_{\geq 0},t(q)=\mathtt{k}}J_{s(q)}\] is induced by the identity \(J_{s(p)}\xrightarrow{1}J_{s(q)}\) for \(q=\beta p\), the structure map \(B_{\beta}\colon B_{\mathtt{i}}\to B_{\mathtt{k}}\), and the composite \(B_{\mathtt{i}}\to\bigoplus_{\alpha\in Q_{1},t(\alpha)=\mathtt{k}}B_{s(\alpha)}\xrightarrow{e_{\mathtt{k}}}J_{\mathtt{k}}\) where the first map is the canonical inclusion. This formula has already been obtained in [13, Section 3a].
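To make the formula concrete, here is a minimal sketch of the special case \(Q=(\mathtt{1}\to\mathtt{2})\) of Example 3.11, so that \(\operatorname{Mono}(X)\) is the category of monomorphisms in \(\mathcal{B}\); the only choices involved are an injective envelope of \(\ker f\) and an extension \(e\) of it along \(\ker f\hookrightarrow B_{\mathtt{1}}\). The formula above specializes to \[\operatorname{Mimo}\bigl(B_{\mathtt{1}}\xrightarrow{f}B_{\mathtt{2}}\bigr)=\Bigl(B_{\mathtt{1}}\xrightarrow{\binom{f}{e}}B_{\mathtt{2}}\oplus E(\ker f)\Bigr),\qquad e\colon B_{\mathtt{1}}\to E(\ker f),\quad e|_{\ker f}=\bigl(\ker f\hookrightarrow E(\ker f)\bigr),\] in agreement with the construction in [13]. For instance, over \(\Lambda=\Bbbk[x]/(x^{2})\) the representation \(\Bbbk\to 0\), with \(\Bbbk\) the simple \(\Lambda\)-module, has \(\ker f=\Bbbk\) and \(E(\Bbbk)=\Lambda\), so \(\operatorname{Mimo}(\Bbbk\to 0)\cong(\operatorname{soc}\Lambda\hookrightarrow\Lambda)\).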
In [13, Lemma 3.2 and Proposition 3.3] they show that it gives a right \(\operatorname{Mono}(X)\)-approximation. In Theorem 5.2 we prove the same results. Note that the proofs are shorter and more transparent in our language. ## 7. A characterization of the indecomposable objects in \(\operatorname{Mono}(X)\) By Theorem 5.6 there is a bijection between isomorphism classes of indecomposable objects in \(\overline{\operatorname{Mono}}(X)\) and \(\overline{\mathcal{C}}^{T(X)}\), induced by the equivalence \[\overline{\operatorname{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}.\] Under some mild additional assumptions this is also in bijection with isomorphism classes of non-injective indecomposable objects in \(\operatorname{Mono}(X)\). The goal in this section is to provide an explicit formula for this latter bijection. More precisely, we show that the Mimo-construction extends to objects in \(\overline{\mathcal{C}}^{T(X)}\), and that it gives a bijection between indecomposable objects in \(\overline{\mathcal{C}}^{T(X)}\) and non-injective indecomposable objects in \(\operatorname{Mono}(X)\). We illustrate the usefulness of this result on examples in Section 8. For the result to hold \(\mathcal{C}\) must admit maximal injective summands, see Definition 7.1. We therefore start by showing that any noetherian, artinian, or locally noetherian category admits maximal injective summands. ### Maximal injective summand **Definition 7.1**.: Let \(\mathcal{C}\) be an abelian category with injective envelopes. We say that \(\mathcal{C}\)_admits maximal injective summands_ if for any \(M\in\mathcal{C}\) there exists an isomorphism \[M\cong M^{\prime}\oplus I\] where \(I\) is injective and \(M^{\prime}\) has no nonzero injective summands. Note that \(M^{\prime}\) and \(I\) are unique up to isomorphism by Lemma 2.5 (2) and Lemma 2.6. Recall that an object \(M\in\mathcal{C}\) is _artinian_ if any decreasing sequence \[\cdots\subseteq M_{1}\subseteq M_{0}\subseteq M\] of subobjects of \(M\) stabilizes, and _noetherian_ if any increasing sequence \[M_{0}\subseteq M_{1}\subseteq M_{2}\subseteq\cdots\subseteq M\] of subobjects of \(M\) stabilizes. The category \(\mathcal{C}\) is _artinian_ if each object in \(\mathcal{C}\) is artinian, and _noetherian_ if each object in \(\mathcal{C}\) is noetherian. Finally, \(\mathcal{C}\) is called _locally noetherian_ if it is a Grothendieck category with a generating set of noetherian objects. Note that any locally noetherian category has injective envelopes, see [14, Corollary 2.5.4]. **Proposition 7.2**.: _Let \(\mathcal{C}\) be an abelian category with injective envelopes. Then \(\mathcal{C}\) admits maximal injective summands if one of the following conditions hold:_ 1. _Any injective object in_ \(\mathcal{C}\) _can be written as a finite direct sum of indecomposables._ 2. \(\mathcal{C}\) _is artinian._ 3. \(\mathcal{C}\) _is noetherian._ 4. \(\mathcal{C}\) _is locally noetherian._ Proof.: Assume property (1). Since \(\mathcal{C}\) has injective envelopes, the endomorphism ring of any indecomposable injective object in \(\mathcal{C}\) has local endomorphism ring by the proof of [14, Lemma 2.5.7]. Hence, the injective objects in \(\mathcal{C}\) form a Krull-Remak-Schmidt category, so any injective object can be written uniquely as a sum of indecomposable objects up to permutation and isomorphism. Now let \(M\in\mathcal{C}\) be arbitrary, and let \(M\to E(M)\) be its injective envelope. 
Since any injective summand of \(M\) must be an injective summand of \(E(M)\), it follows that the number of indecomposable summands of \(M\) must be bounded by the number of indecomposable summands of \(E(M)\), which is finite by hypothesis. So we can choose an injective summand \(I\) of \(M\) with the maximal number of indecomposable summands. Writing \(M\cong I\oplus M^{\prime}\) we see that \(M^{\prime}\) has no nonzero injective summands, since otherwise there would exist an injective summand of \(M\) with more indecomposable summands than \(I\). This shows that \(\mathcal{C}\) admits maximal injective summands.

Next we show that \(\mathcal{C}\) being noetherian or artinian implies condition (1). Indeed, assume an injective object \(I\) cannot be written as a finite direct sum of indecomposable injectives. Then there exist nonzero injective objects \(I_{n},I^{\prime}_{n}\) with \(I=I^{\prime}_{0}\oplus I_{0}\) and \(I_{n}=I^{\prime}_{n+1}\oplus I_{n+1}\) for \(n\geq 0\). We then have a strictly decreasing and a strictly increasing sequence of subobjects of \(I\)

\[\cdots\subset I_{2}\subset I_{1}\subset I_{0}\subset I\quad\text{and}\quad I^{\prime}_{0}\subset I^{\prime}_{0}\oplus I^{\prime}_{1}\subset I^{\prime}_{0}\oplus I^{\prime}_{1}\oplus I^{\prime}_{2}\subset\cdots\subset I.\]

Hence, \(\mathcal{C}\) cannot be noetherian or artinian.

Finally, assume \(\mathcal{C}\) is a locally noetherian category. Then injective objects are closed under filtered colimits, see [11, Theorem 11.2.12]. Therefore, Zorn's lemma implies that \(M\) has a maximal injective subobject \(I\). Furthermore, the inclusion \(I\to M\) must be split, since \(I\) is injective. Hence, we have an isomorphism \(M\cong M^{\prime}\oplus I\) for some object \(M^{\prime}\). If \(M^{\prime}\) has a nonzero injective summand \(J\), then \(I\oplus J\) is an injective subobject of \(M\) which strictly contains \(I\). This contradicts the maximality of \(I\). Hence \(M^{\prime}\) has no nonzero injective summands.

**Example 7.3**.: If \(\mathcal{C}\) is the category of quasi-coherent sheaves over a noetherian scheme, or the category of all modules over a noetherian ring, then \(\mathcal{C}\) is locally noetherian. The existence of maximal injective summands in the latter case was first shown in [10].

**Example 7.4**.: Following [10] a ring \(\Lambda\) is called _right co-noetherian_ if injective envelopes of simple right \(\Lambda\)-modules are artinian. By [14, Proposition 2*] this is equivalent to injective envelopes of artinian right modules being artinian. Hence, the category \(\mathcal{C}=\operatorname{art}\Lambda\) of artinian right \(\Lambda\)-modules is an artinian abelian category with injective envelopes, and therefore admits maximal injective summands. Examples of co-noetherian rings are commutative noetherian rings [10, Proposition 3], Quasi-Frobenius rings [11, Proposition 1], Noetherian PI rings [10, Theorem 2], module finite algebras over commutative noetherian rings [12, Corollary 2.3], finite normalizing extensions of a right co-noetherian ring [12, Corollary 2.2], and the first Weyl algebra of a commutative ring finitely generated as an algebra over the integers [12, Corollary 2.7]. A commutative ring is co-noetherian if and only if its localization at every maximal ideal is noetherian [14, Theorem 2].

**Remark 7.5**.: There exist abelian categories with injective envelopes which do not satisfy Definition 7.1. Indeed, let \(\mathcal{C}\) be any Grothendieck category which is not locally noetherian, e.g.
the category of all modules over a non-noetherian ring. Since \(\mathcal{C}\) is Grothendieck, it has injective envelopes, see [11, Corollary 2.5.4]. Also, since \(\mathcal{C}\) is not locally noetherian, there exists a set \(\mathcal{J}\) of injective objects such that the sum \(M=\bigoplus_{J\in\mathcal{J}}J\) is not injective, see [11, Theorem 11.2.12]. We claim that \(M\) has no maximal injective summand. Assume otherwise, i.e. that \(M\cong M^{\prime}\oplus I\) where \(I\) is injective and \(M^{\prime}\) has no nonzero injective summands. Let \(\mathcal{J}^{\prime}\subset\mathcal{J}\) be a finite subset, and let \(I^{\prime}=\bigoplus_{J\in\mathcal{J}^{\prime}}J\) be the corresponding injective object. Choose a left inverse \(M\to I^{\prime}\) to the inclusion \(I^{\prime}\to M\). Via the isomorphism \(M\cong M^{\prime}\oplus I\) we get morphisms

\[\begin{pmatrix}g_{1}\\ g_{2}\end{pmatrix}\colon I^{\prime}\to M^{\prime}\oplus I\quad\text{and}\quad\begin{pmatrix}g^{\prime}_{1}&g^{\prime}_{2}\end{pmatrix}\colon M^{\prime}\oplus I\to I^{\prime}\]

such that \(g^{\prime}_{1}\circ g_{1}+g^{\prime}_{2}\circ g_{2}=1_{I^{\prime}}\). By Lemma 2.5 (2) the morphism \(g_{1}\) is in the radical of \(\mathcal{C}\), so \(g^{\prime}_{2}\circ g_{2}=1_{I^{\prime}}-g^{\prime}_{1}\circ g_{1}\) must be an isomorphism. In particular, \(g_{2}\colon I^{\prime}\to I\) is a (split) monomorphism. Now consider the morphism \(\bigoplus_{J\in\mathcal{J}}J=M\cong M^{\prime}\oplus I\to I\). We have shown that this is a monomorphism when restricted to the direct sum of any finite subset of \(\mathcal{J}\). Since \(\bigoplus_{J\in\mathcal{J}}J\) is the filtered colimit of such finite sums, and filtered colimits in Grothendieck categories are exact, the morphism \(\bigoplus_{J\in\mathcal{J}}J\to I\) must itself be a monomorphism. Since it is clearly an epimorphism, it must be an isomorphism. But this implies that \(M=\bigoplus_{J\in\mathcal{J}}J\) is injective, which is a contradiction.

### Construction and main result

Let \(\mathcal{C}\) be an abelian category with injective envelopes and maximal injective summands, and let \(X\colon\mathcal{C}\to\mathcal{C}\) be an exact functor which is locally nilpotent and preserves injective objects. Our goal is to define the Mimo-construction directly on objects in the Eilenberg-Moore category of the stable category \(\overline{\mathcal{C}}\). To do this, we need the following lemma.

**Lemma 7.6**.: _Let \(\mathsf{M}\in\overline{\mathcal{C}}^{T(X)}\). Then there exists an object \(\widehat{\mathsf{M}}\in\mathcal{C}^{T(X)}\) which is isomorphic to \(\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\) and for which \(f^{*}(\widehat{\mathsf{M}})=\widehat{M}\) has no nonzero injective summands._

Proof.: By Lemma 3.13 the data of an object \(\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\) is the same as the data of a morphism \(h_{\mathsf{M}}\colon X(M)\to M\) in \(\overline{\mathcal{C}}\). Choose a lift \(h^{\prime}\colon X(M)\to M\) to \(\mathcal{C}\) of \(h_{\mathsf{M}}\), and write \(M\cong\widehat{M}\oplus J\) where \(J\) is injective and \(\widehat{M}\) has no injective summands. Now \(h^{\prime}\) gives a morphism \(X(\widehat{M})\oplus X(J)\to\widehat{M}\oplus J\). Let \(h^{\prime\prime}\colon X(\widehat{M})\to\widehat{M}\) be the corresponding component of \(h^{\prime}\), and let \(\widehat{\mathsf{M}}=(\widehat{M},h^{\prime\prime})\) be the corresponding object in \(\mathcal{C}^{T(X)}\).
Clearly \(\widehat{\mathsf{M}}\) is isomorphic to \(\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\) and \(\widehat{M}\) has no nonzero injective summands by construction.

We can now extend the Mimo-construction to objects in \(\overline{\mathcal{C}}^{T(X)}\). Here the superscript \(\cong\) indicates that we are considering isomorphism classes of objects.

**Proposition 7.7**.: _For any \(\mathsf{M}\in\overline{\mathcal{C}}^{T(X)}\), choose an object \(\widehat{\mathsf{M}}\) in \(\mathcal{C}^{T(X)}\) which is isomorphic to \(\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\) and such that \(f^{*}(\widehat{\mathsf{M}})=\widehat{M}\) has no nonzero injective summands in \(\mathcal{C}\). Then the association \(\mathsf{M}\mapsto\operatorname{Mimo}\widehat{\mathsf{M}}\) induces a well-defined map_

\[\{\text{objects in }\overline{\mathcal{C}}^{T(X)}\}^{\cong}\longrightarrow\{\text{objects in }\operatorname{Mono}(X)\}^{\cong}. \tag{7.7.1}\]

_Furthermore, it is independent of the choice of \(\widehat{\mathsf{M}}\)._

Proof.: It is well-defined and independent of the choice by Corollary 6.6 and Lemma 7.6.

We want to show that the map in Proposition 7.7 restricts to a bijection between indecomposable objects in \(\overline{\mathcal{C}}^{T(X)}\) and non-injective indecomposable objects in \(\operatorname{Mono}(X)\). We first show that it can be used to give an inverse to the bijection

\[\{\text{objects in }\operatorname{\overline{Mono}}(X)\}^{\cong}\to\{\text{objects in }\overline{\mathcal{C}}^{T(X)}\}^{\cong} \tag{7.7.2}\]

coming from the equivalence \(\operatorname{\overline{Mono}}(X)\to\overline{\mathcal{C}}^{T(X)}\) in Theorem 5.6.

**Proposition 7.8**.: _Composing (7.7.1) with the functor \(\operatorname{Mono}(X)\to\operatorname{\overline{Mono}}(X)\) gives an inverse to the bijection (7.7.2). In particular, (7.7.1) is injective and preserves indecomposables._

Proof.: Let \(\mathsf{M}\in\overline{\mathcal{C}}^{T(X)}\). Since the canonical morphism \(\operatorname{Mimo}\widehat{\mathsf{M}}\to\widehat{\mathsf{M}}\) is an isomorphism in \(\overline{\mathcal{C}}^{T(X)}\), and there is a canonical isomorphism \(\widehat{\mathsf{M}}\cong\mathsf{M}\) in \(\overline{\mathcal{C}}^{T(X)}\), the composite of (7.7.1) with the functor \(\operatorname{Mono}(X)\to\operatorname{\overline{Mono}}(X)\) must be an inverse to (7.7.2). In particular, the composite is injective, so (7.7.1) must be injective. To see that (7.7.1) preserves indecomposables, note that its composite with the functor \(\operatorname{Mono}(X)\to\operatorname{\overline{Mono}}(X)\) preserves indecomposables, since it is an inverse to (7.7.2). Furthermore, \(\operatorname{Mimo}\widehat{\mathsf{M}}\) has no nonzero injective summands by Lemma 6.5, and must therefore be indecomposable in \(\operatorname{Mono}(X)\) if it is indecomposable in \(\operatorname{\overline{Mono}}(X)\). This proves the claim.

We now characterize the indecomposables in \(\operatorname{Mono}(X)\). This is a very useful result, in particular when \(\overline{\mathcal{C}}\) is abelian, so that the indecomposable objects in \(\overline{\mathcal{C}}^{T(X)}\) are easier to compute. We illustrate its power in the next section for quiver representations of different classes of Artin algebras.

**Theorem 7.9**.: _We have bijections_

\[\{\text{indecomposable injective objects in }\mathcal{C}\}^{\cong}\longrightarrow\{\text{indecomposable injective objects in }\operatorname{Mono}(X)\}^{\cong},\qquad J\mapsto f_{!}(J), \tag{7.9.1}\]

\[\{\text{indecomposable objects in }\overline{\mathcal{C}}^{T(X)}\}^{\cong}\longrightarrow\{\text{indecomposable non-injective objects in }\operatorname{Mono}(X)\}^{\cong},\qquad\mathsf{M}\mapsto\operatorname{Mimo}\widehat{\mathsf{M}}. \tag{7.9.2}\]

Proof.: By Proposition 4.2 and Corollary 4.4 the injective objects in \(\operatorname{Mono}(X)\) are precisely the objects of the form \(f_{!}(J)\) with \(J\) injective in \(\mathcal{C}\).
Furthermore, \(f_{!}(J)\) is indecomposable if and only if \(J\) is indecomposable by Corollary 3.8. This implies that the map (7.9.1) is well-defined and surjective. For injectivity, note that \(f_{!}(J)\cong f_{!}(J^{\prime})\) implies \(J\cong\operatorname{top}f_{!}(J)\cong\operatorname{top}f_{!}(J^{\prime})\cong J^{\prime}\). Next, consider the map in (7.9.2). It is well-defined and injective by Proposition 7.8. To see that it is surjective, let \(\mathsf{M}^{\prime}\in\operatorname{Mono}(X)\) be an arbitrary indecomposable non-injective object, and let \(\mathsf{M}\) be the image of \(\mathsf{M}^{\prime}\) in \(\overline{\mathcal{C}}^{T(X)}\). Since composing (7.7.1) with the functor \(\operatorname{Mono}(X)\to\overline{\operatorname{Mono}}(X)\) gives an inverse to (7.7.2) by Proposition 7.8, we get that \(\mathsf{M}^{\prime}\cong\operatorname{Mimo}\widehat{\mathsf{M}}\) in \(\overline{\operatorname{Mono}}(X)\). Hence, there exist inverse isomorphisms

\[\begin{pmatrix}\phi_{1}&\phi_{2}\\ \phi_{3}&\phi_{4}\end{pmatrix}\colon\mathsf{M}^{\prime}\oplus f_{!}(I)\to\operatorname{Mimo}\widehat{\mathsf{M}}\oplus f_{!}(J)\quad\text{and}\quad\begin{pmatrix}\psi_{1}&\psi_{2}\\ \psi_{3}&\psi_{4}\end{pmatrix}\colon\operatorname{Mimo}\widehat{\mathsf{M}}\oplus f_{!}(J)\to\mathsf{M}^{\prime}\oplus f_{!}(I)\]

in \(\operatorname{Mono}(X)\) for some injective objects \(I\) and \(J\) in \(\mathcal{C}\). In particular, we have that

\[1_{\operatorname{Mimo}\widehat{\mathsf{M}}}=\phi_{1}\circ\psi_{1}+\phi_{2}\circ\psi_{3}.\]

Now by Lemma 6.5 the morphism \(\phi_{2}\) is in the radical of \(\operatorname{Mono}(X)\), so \(\phi_{1}\circ\psi_{1}=1_{\operatorname{Mimo}\widehat{\mathsf{M}}}-\phi_{2}\circ\psi_{3}\) is an isomorphism. Hence \(\phi_{1}\colon\mathsf{M}^{\prime}\to\operatorname{Mimo}\widehat{\mathsf{M}}\) is a split epimorphism, and since \(\mathsf{M}^{\prime}\) is indecomposable it must be an isomorphism. This shows that (7.9.2) is surjective.

**Remark 7.10**.: Theorem 7.9 recovers [17, Theorem 2] in the setting of Remark 5.11.

**Example 7.11**.: Let \(Q\) be a finite and acyclic quiver and \(\mathfrak{B}\) a modulation by abelian categories \(\mathcal{B}_{\mathfrak{i}}\) and exact functors \(F_{\alpha}\) preserving injective objects, see Example 3.9. Set \(\mathcal{C}=\prod_{\mathfrak{i}\in Q_{0}}\mathcal{B}_{\mathfrak{i}}\). Then \(\mathcal{C}\) has injective envelopes (resp. maximal injective summands) if and only if \(\mathcal{B}_{\mathfrak{i}}\) has injective envelopes (resp. maximal injective summands) for all \(\mathfrak{i}\in Q_{0}\). The Mimo map in Proposition 7.7 goes from \(\operatorname{rep}\overline{\mathfrak{B}}\) to the monomorphism category of \(\mathfrak{B}\), where \(\overline{\mathfrak{B}}\) is the modulation by stable categories as in Example 3.14. Explicitly, it sends \((B_{\mathfrak{i}},B_{\alpha})\) to \(\operatorname{Mimo}(\widehat{B}_{\mathfrak{i}},B_{\alpha}^{\prime})\), where \(\widehat{B}_{\mathfrak{i}}\) is any object in \(\mathcal{B}_{\mathfrak{i}}\) with no nonzero injective summands and which is isomorphic to \(B_{\mathfrak{i}}\) in \(\overline{\mathcal{B}}_{\mathfrak{i}}\), where \(B_{\alpha}^{\prime}\colon F_{\alpha}(\widehat{B}_{s(\alpha)})\to\widehat{B}_{t(\alpha)}\) is any lift of \(B_{\alpha}\) to \(\mathcal{B}_{t(\alpha)}\), and where \(\operatorname{Mimo}(\widehat{B}_{\mathfrak{i}},B_{\alpha}^{\prime})\) is the Mimo-construction described in Example 6.8.

## 8. Applications to quiver representations over Artin algebras
The goal of this section is to study monomorphism categories of quivers over Artin algebras, using our results. Throughout the section \(Q\) denotes a finite acyclic quiver, \(\Bbbk\) is a commutative artinian ring, and \(\Lambda\) is an artinian \(\Bbbk\)-algebra. The categories of representations of \(Q\) are denoted by \(\operatorname{rep}(Q,\operatorname{Mod}\Lambda)\) and \(\operatorname{rep}(Q,\operatorname{mod}\Lambda)\), see Example 3.11. They can be identified with the module categories \(\operatorname{Mod}\Lambda Q^{\operatorname{op}}\) and \(\operatorname{mod}\Lambda Q^{\operatorname{op}}\), respectively. If we set \(\Lambda Q_{0}\coloneqq\prod_{\mathfrak{i}\in Q_{0}}\Lambda\) then we get

\[\operatorname{Mod}\Lambda Q_{0}=\prod_{\mathfrak{i}\in Q_{0}}\operatorname{Mod}\Lambda\quad\text{and}\quad\operatorname{mod}\Lambda Q_{0}=\prod_{\mathfrak{i}\in Q_{0}}\operatorname{mod}\Lambda.\]

We have adjoint functors

\[f_{!}\colon\operatorname{Mod}\Lambda Q_{0}\to\operatorname{rep}(Q,\operatorname{Mod}\Lambda)\quad\text{and}\quad f^{*}\colon\operatorname{rep}(Q,\operatorname{Mod}\Lambda)\to\operatorname{Mod}\Lambda Q_{0}\]

which restrict to

\[f_{!}\colon\operatorname{mod}\Lambda Q_{0}\to\operatorname{rep}(Q,\operatorname{mod}\Lambda)\quad\text{and}\quad f^{*}\colon\operatorname{rep}(Q,\operatorname{mod}\Lambda)\to\operatorname{mod}\Lambda Q_{0}.\]

The monomorphism subcategories of \(\operatorname{rep}(Q,\operatorname{Mod}\Lambda)\) and \(\operatorname{rep}(Q,\operatorname{mod}\Lambda)\) are denoted by \(\operatorname{Mono}_{Q}(\Lambda)\) and \(\operatorname{mono}_{Q}(\Lambda)\), respectively. They consist of representations \((M_{\mathfrak{i}},M_{\alpha})_{\mathfrak{i}\in Q_{0},\alpha\in Q_{1}}\) for which

\[M_{\mathfrak{i},\mathrm{in}}\colon\bigoplus_{\begin{subarray}{c}\alpha\in Q_{1}\\ t(\alpha)=\mathfrak{i}\end{subarray}}M_{s(\alpha)}\xrightarrow{(M_{\alpha})_{\alpha}}M_{\mathfrak{i}}\]

is a monomorphism for all \(\mathfrak{i}\in Q_{0}\), see Example 3.26. Next we recall the bijections in Theorem 7.9 in this context. For \(M\in\operatorname{mod}\Lambda\) we let \(M(\mathfrak{i})\in\operatorname{mod}\Lambda Q_{0}\) denote the object given by

\[M(\mathfrak{i})_{\mathfrak{j}}=\begin{cases}M,&\text{if }\mathfrak{j}=\mathfrak{i}\\ 0,&\text{if }\mathfrak{j}\neq\mathfrak{i}.\end{cases}\]

**Theorem 8.1**.: _We have bijections_

\[\{\text{Indecomposable objects in }\operatorname{rep}(Q,\overline{\operatorname{Mod}}\,\Lambda)\}^{\cong}\xrightarrow{\;\cong\;}\{\text{Indecomposable non-injective objects in }\operatorname{Mono}_{Q}(\Lambda)\}^{\cong},\qquad\mathsf{M}\mapsto\operatorname{Mimo}\mathsf{M},\]

\[\{\text{Indecomposable objects in }\operatorname{rep}(Q,\overline{\operatorname{mod}}\,\Lambda)\}^{\cong}\xrightarrow{\;\cong\;}\{\text{Indecomposable non-injective objects in }\operatorname{mono}_{Q}(\Lambda)\}^{\cong},\qquad\mathsf{M}\mapsto\operatorname{Mimo}\mathsf{M},\]

\[\{\text{Indecomposable injective right }\Lambda\text{-modules}\}^{\cong}\times Q_{0}\xrightarrow{\;\cong\;}\{\text{Indecomposable injective objects in }\operatorname{Mono}_{Q}(\Lambda)\}^{\cong},\qquad(J,\mathfrak{i})\mapsto f_{!}(J(\mathfrak{i})),\]

\[\{\text{Indecomposable injective modules in }\operatorname{mod}\Lambda\}^{\cong}\times Q_{0}\xrightarrow{\;\cong\;}\{\text{Indecomposable injective objects in }\operatorname{mono}_{Q}(\Lambda)\}^{\cong},\qquad(J,\mathfrak{i})\mapsto f_{!}(J(\mathfrak{i})).\]

**Corollary 8.6**.: _Let \(\Lambda\) and \(\Gamma\) be two connected selfinjective \(\Bbbk\)-algebras of finite representation type where \(\Bbbk\) is an algebraically closed field. Assume \(\Lambda\) and \(\Gamma\) are stably equivalent.
Then, there exists a bijection between isomorphism classes of indecomposable objects in \(\operatorname{mono}_{Q}(\Lambda)\) and \(\operatorname{mono}_{Q}(\Gamma)\)._ Proof.: By [1, Corollary 2.2] two selfinjective algebras of finite representation type are stably equivalent if and only if they are derived equivalent. The claim follows now from Theorem 8.4. Next we investigate when \(\phi_{Q}\) commutes with an induced map on the split Grothendieck groups of \(\operatorname{mod}\Lambda\) and \(\operatorname{mod}\Gamma\). For this we need some preliminary results. Recall that the _socle_ of a module \(M\), denoted \(\operatorname{soc}M\), is the sum of all its simple submodules. It induces a left exact functor \(\operatorname{soc}(-)\colon\operatorname{mod}\Lambda\to\operatorname{mod}\Lambda\) which sends a morphism \(g\colon M\to N\) to its restriction \(\operatorname{soc}(g)\colon\operatorname{soc}M\to\operatorname{soc}N\). Let \(\mathcal{S}_{\Lambda}\) and \(\mathcal{S}_{\Gamma}\) denote the subcategories of semisimple \(\Lambda\)- and \(\Gamma\)-modules with no nonzero injective summands, respectively. For an object \(M\in\overline{\operatorname{mod}}\,\Lambda\) we write \(\widehat{M}\) for a \(\Lambda\)-module which has no nonzero injective summand and which is isomorphic to \(M\) in \(\overline{\operatorname{mod}}\,\Lambda\). Note that \(\widehat{M}\) is unique up to isomorphism. **Lemma 8.7**.: _The associations_ \[M\mapsto\operatorname{soc}\widehat{M}\quad\text{and}\quad(M\xrightarrow{g}N) \mapsto(\operatorname{soc}\widehat{M}\xrightarrow{\operatorname{soc}( \widehat{g})}\operatorname{soc}\widehat{N})\] _induce a functor \(\operatorname{soc}\colon\overline{\operatorname{mod}}\,\Lambda\to\mathcal{S} _{\Lambda}\), where \(\widehat{g}\) is any choice of a lift of \(g\) to \(\operatorname{mod}\Lambda\). Furthermore, this functor is right adjoint to the inclusion functor \(\mathcal{S}_{\Lambda}\to\overline{\operatorname{mod}}\,\Lambda\)._ Proof.: Let \(M,N\in\overline{\operatorname{mod}}\,\Lambda\). To prove that we have a well-defined functor, it suffices to show that the map \[\operatorname{Hom}_{\Lambda}(\widehat{M},\widehat{N})\to\operatorname{Hom}_{ \Lambda}(\operatorname{soc}\widehat{M},\operatorname{soc}\widehat{N})\qquad g \mapsto\operatorname{soc}(g)\] vanishes on any morphism factoring through an injective object. So assume \(g\colon\widehat{M}\to\widehat{N}\) can be written as a composite \(\widehat{M}\xrightarrow{g_{1}}I\xrightarrow{g_{2}}\widehat{N}\) where \(I\) is an injective \(\Lambda\)-module. Now by Lemma 2.5 (2) we know that the inclusion \(\ker g_{2}\to I\) is an injective envelope. Since the socle of an injective envelope is an isomorphism, it follows that the map \(\operatorname{soc}\ker g_{2}\to\operatorname{soc}I\) is an isomorphism. Since the socle is left exact, we get that \(\operatorname{soc}(g_{2})=0\), and hence \(\operatorname{soc}(g)=0\). To prove that \(\operatorname{soc}\colon\overline{\operatorname{mod}}\,\Lambda\to\mathcal{S} _{\Lambda}\) is left adjoint to the inclusion functor, it suffices to show that there is a natural isomorphism \[\overline{\operatorname{Hom}}_{\Lambda}(S,M)\xrightarrow{\cong}\operatorname {Hom}_{\Lambda}(S,\operatorname{soc}\widehat{M})\] when \(S\) is simple non-injective and \(M\in\overline{\operatorname{mod}}\,\Lambda\). 
First note that there is an isomorphism \[\overline{\operatorname{Hom}}_{\Lambda}(S,M)\xrightarrow{\cong}\overline{ \operatorname{Hom}}_{\Lambda}(S,\widehat{M})\] since \(M\) and \(\widehat{M}\) are isomorphic in \(\overline{\operatorname{mod}}\,\Lambda\). Let \(g\colon S\to\widehat{M}\) be a morphism in \(\operatorname{mod}\Lambda\) which factors through an injective module. Then it factors through the injective envelope \(I\) of \(S\) via a morphism \(I\to\widehat{M}\). If \(g\) is nonzero, then it must be a monomorphism, and hence the induced morphism \(I\to\widehat{M}\) must be a monomorphism. Since \(I\) is injective, the monomorphism must split, so \(I\) must be a summand of \(\widehat{M}\). This contradicts the definition of \(\widehat{M}\). Hence there are no nonzero morphisms \(S\to\widehat{M}\) which factor through an injective object. It follows that the canonical map \[\operatorname{Hom}_{\Lambda}(S,\widehat{M})\xrightarrow{\cong}\overline{ \operatorname{Hom}}_{\Lambda}(S,\widehat{M})\] is an isomorphism. Finally, any morphism from a simple module to \(\widehat{M}\) must factor through the socle of \(\widehat{M}\). Hence, we have an isomorphism \(\operatorname{Hom}_{\Lambda}(S,\widehat{M})\xrightarrow{\cong}\operatorname {Hom}_{\Lambda}(S,\operatorname{soc}\widehat{M})\). Combining these isomorphisms, we get the result. In the following let \(K_{0}(\operatorname{mod}\Lambda)\) and \(K_{0}(\operatorname{mod}\Gamma)\) denote the split Grothendieck group of the categories \(\operatorname{mod}\Lambda\) and \(\operatorname{mod}\Gamma\), respectively. The image of a module \(M\) in the split Grothendieck group is denoted by \([M]\). Let \(G_{\Lambda}\) and \(G_{\Gamma}\) be the free subgroups of \(K_{0}(\operatorname{mod}\Lambda)\) and \(K_{0}(\operatorname{mod}\Gamma)\), respectively, which are generated by all elements \([M]\) where \(M\) is indecomposable and not simple injective. **Lemma 8.8**.: _Assume we are given an equivalence \(\Phi\colon\overline{\operatorname{mod}}\,\Lambda\cong\overline{\operatorname{mod}}\,\Gamma\) which restricts to an equivalence between the simple non-injective \(\Lambda\)- and \(\Gamma\)-modules. Then there exists an isomorphism_ \[\phi\colon G_{\Lambda}\xrightarrow{\cong}G_{\Gamma}\] _uniquely defined by:_ * _If_ \([M]\in G_{\Lambda}\) _is indecomposable non-injective, then_ \(\phi([M])=[N]\) _where_ \(N\) _is the indecomposable non-injective_ \(\Gamma\)_-module whose image in_ \(\overline{\operatorname{mod}}\,\Gamma\) _is isomorphic to_ \(\Phi(M)\)_._ * _If_ \([I]\in G_{\Lambda}\) _where_ \(I\) _is indecomposable, injective, and not simple, then_ \(\phi([I])=[J]\) _where_ \(J\) _is the indecomposable injective_ \(\Gamma\)_-module whose socle is isomorphic to_ \(\Phi(\operatorname{soc}I)\) _in_ \(\overline{\operatorname{mod}}\,\Gamma\)_._ _If in addition there exists a bijection \(\psi\) between the isomorphism classes of simple injective \(\Lambda\)- and \(\Gamma\)-modules, then \(\phi\) can be extended uniquely to an isomorphism_ \[\phi\colon K_{0}(\operatorname{mod}\Lambda)\xrightarrow{\cong}K_{0}( \operatorname{mod}\Gamma).\] _by setting \(\phi([S])=[\psi(S)]\) for any simple injective \(\Lambda\)-module \(S\)._ Proof.: It is clear by construction that \(\phi\) gives a bijection between the sets of isomorphism classes of indecomposable \(\Lambda\)- and \(\Gamma\)-modules which are not simple injective. 
Since \(G_{\Lambda}\) and \(G_{\Gamma}\) are the free groups on these sets, this shows that \(\phi\colon G_{\Lambda}\to G_{\Gamma}\) is an isomorphism. Under the assumption that \(\psi\) exists, we see that \(\phi\) restricts to a bijection between all isomorphism classes of indecomposable \(\Lambda\)- and \(\Gamma\)-modules. Hence, \(\phi\colon K_{0}(\operatorname{mod}\Lambda)\xrightarrow{\cong}K_{0}( \operatorname{mod}\Gamma)\) must be an isomorphism. **Theorem 8.9**.: _Assume we are given an equivalence \(\Phi\colon\overline{\operatorname{mod}}\,\Lambda\xrightarrow{\cong}\overline {\operatorname{mod}}\,\Gamma\) which restricts to an equivalence between the simple non-injective \(\Lambda\)- and \(\Gamma\)-modules. Let \(\phi_{Q}\) denote the bijection in Theorem 8.2 and let \(\phi\) denote the isomorphism in Lemma 8.8. The following hold:_ 1. \(\phi_{Q}\) _commutes pointwise with_ \(\phi\)_, i.e._ \[[\phi_{Q}(\mathsf{M})_{\mathsf{i}}]=\phi([\mathsf{M}_{\mathsf{i}}]).\] 2. _Assume the existence of a bijection between the simple injective_ \(\Lambda\)_- and_ \(\Gamma\)_-modules, and let_ \(\phi\colon K_{0}(\operatorname{mod}\Lambda)\xrightarrow{\cong}K_{0}( \operatorname{mod}\Gamma)\) _be the extension given in Lemma_ 8.8_. Then_ \(\phi_{Q}\) _can be extended uniquely to a bijection between the isomorphism classes of all indecomposable objects in_ \(\operatorname{mono}_{Q}(\Lambda)\) _and_ \(\operatorname{mono}_{Q}(\Gamma)\) _such that_ \[[\phi_{Q}(\mathsf{M})_{\mathsf{i}}]=\phi([\mathsf{M}_{\mathsf{i}}]).\] Proof.: Consider the functors \(\operatorname{soc}\colon\overline{\operatorname{mod}}\,\Lambda\to\mathcal{S} _{\Lambda}\) and \(\operatorname{soc}\colon\overline{\operatorname{mod}}\,\Gamma\to\mathcal{S} _{\Gamma}\) from Lemma 8.7. We claim that the following square (8.9.1) commutes up to natural isomorphism, where the vertical functors are given by \(\Phi\) and its restriction to the simple non-injective modules. Indeed, this follows from the horizontal functors being left adjoint to the inclusion functors \(\mathcal{S}_{\Lambda}\to\overline{\operatorname{mod}}\,\Lambda\) and \(\mathcal{S}_{\Gamma}\to\overline{\operatorname{mod}}\,\Gamma\) by Lemma 8.7, and the fact that the vertical functors commute with the inclusion functors. Hence, postcomposing with the functors in (8.9.1), we get a diagram (8.9.2) of functors which commutes up to natural isomorphism. Now let \(\mathsf{M}\) be an indecomposable object in \(\operatorname{rep}(Q,\overline{\operatorname{mod}}\,\Lambda)\), and let \(\mathsf{N}\) be the corresponding indecomposable object in \(\operatorname{rep}(Q,\overline{\operatorname{mod}}\,\Gamma)\) under the left vertical equivalence in (8.9.2). Let \(\widehat{\mathsf{M}}\) and \(\widehat{\mathsf{N}}\) be objects as in Lemma 7.6, so that \(\operatorname{Mimo}\mathsf{M}=\operatorname{Mimo}\widehat{\mathsf{M}}\) and \(\operatorname{Mimo}\mathsf{N}=\operatorname{Mimo}\widehat{\mathsf{N}}\). By construction we have that \(\phi_{Q}(\operatorname{Mimo}\mathsf{M})=\operatorname{Mimo}\mathsf{N}\). Also \[(\operatorname{Mimo}\mathsf{M})_{\Bbbk}\cong\widehat{\mathsf{M}}_{\Bbbk} \oplus I_{\Bbbk}\quad\text{and}\quad(\operatorname{Mimo}\mathsf{N})_{ \Bbbk}\cong\widehat{N}_{\Bbbk}\oplus I^{\prime}_{\Bbbk}\] for all \(\Bbbk\in Q_{0}\) where \(I_{\Bbbk}\) and \(J_{\Bbbk}\) are injective \(\Lambda\)- and \(\Gamma\)-modules, respectively. 
By construction of \(\phi\) it follows that \(\phi([\widehat{\mathsf{M}}_{\Bbbk}])=[\widehat{\mathsf{N}}_{\Bbbk}]\), so we only need to show that \(\phi([I_{\Bbbk}])=[I^{\prime}_{\Bbbk}]\). Now by the Mimo-construction in Example 6.8 we get

\[I_{\Bbbk}\cong\bigoplus_{\begin{subarray}{c}p\in Q_{\geq 0}\\ t(p)=\Bbbk\end{subarray}}J_{s(p)}\quad\text{and}\quad I^{\prime}_{\Bbbk}\cong\bigoplus_{\begin{subarray}{c}p\in Q_{\geq 0}\\ t(p)=\Bbbk\end{subarray}}J^{\prime}_{s(p)}\]

where \(J_{\mathfrak{i}}\) and \(J^{\prime}_{\mathfrak{i}}\) are the injective envelopes of \(\ker\widehat{M}_{\mathfrak{i},\mathrm{in}}\) and \(\ker\widehat{N}_{\mathfrak{i},\mathrm{in}}\), respectively. Since \(\phi\) is additive, it suffices to show that \(\phi([J_{\mathfrak{i}}])=[J^{\prime}_{\mathfrak{i}}]\) for each \(\mathfrak{i}\in Q_{0}\). By definition of \(\phi\), this is equivalent to requiring \(\phi([\operatorname{soc}J_{\mathfrak{i}}])=[\operatorname{soc}J^{\prime}_{\mathfrak{i}}]\). Since the socle of a module and the socle of its injective envelope are isomorphic, it follows that \([\operatorname{soc}J_{\mathfrak{i}}]=[\operatorname{soc}\ker\widehat{M}_{\mathfrak{i},\mathrm{in}}]\) and \([\operatorname{soc}J^{\prime}_{\mathfrak{i}}]=[\operatorname{soc}\ker\widehat{N}_{\mathfrak{i},\mathrm{in}}]\). Hence, we need to show that \(\phi([\operatorname{soc}\ker\widehat{M}_{\mathfrak{i},\mathrm{in}}])=[\operatorname{soc}\ker\widehat{N}_{\mathfrak{i},\mathrm{in}}]\). But this follows immediately from the commutativity of (8.9.2), which proves (1).

Now assume we have a bijection between the simple injective \(\Lambda\)- and \(\Gamma\)-modules as in (2). We want to extend \(\phi_{Q}\) to a bijection between all indecomposable objects in \(\operatorname{mono}_{Q}(\Lambda)\) and \(\operatorname{mono}_{Q}(\Gamma)\). To do this we need to define it on the injective objects. Assume \(\mathsf{M}\in\operatorname{mono}_{Q}(\Lambda)\) is indecomposable injective. Then it is of the form \(f_{!}(J(\mathfrak{i}))\) for an indecomposable injective \(\Lambda\)-module \(J\) and a vertex \(\mathfrak{i}\) in \(Q\), see Theorem 8.1. We define \(\phi_{Q}(f_{!}(J(\mathfrak{i})))=f_{!}(J^{\prime}(\mathfrak{i}))\), where \(J^{\prime}\) is the unique (up to isomorphism) \(\Gamma\)-module satisfying \([J^{\prime}]=\phi([J])\). Clearly this gives a bijection between the indecomposable injective objects in \(\operatorname{mono}_{Q}(\Lambda)\) and \(\operatorname{mono}_{Q}(\Gamma)\), and hence between all indecomposable objects. Finally, by the formula (3.9.2) in Example 3.9 we have

\[f_{!}(M(\mathfrak{i}))_{\Bbbk}=\bigoplus_{\begin{subarray}{c}p\in Q_{\geq 0}\\ s(p)=\mathfrak{i},\,t(p)=\Bbbk\end{subarray}}M\]

for any module \(M\). Hence \([\phi_{Q}(f_{!}(J(\mathfrak{i})))_{\Bbbk}]=\phi([f_{!}(J(\mathfrak{i}))_{\Bbbk}])\) for all \(\Bbbk\in Q_{0}\), which proves (2).

**Remark 8.10**.: The assumptions in Theorem 8.9 are quite restrictive: Assume \(\Bbbk\) is a perfect field and \(\Lambda\) and \(\Gamma\) are non-semisimple connected and selfinjective \(\Bbbk\)-algebras. Let \(\Phi\colon\overline{\operatorname{mod}}\,\Lambda\xrightarrow{\cong}\overline{\operatorname{mod}}\,\Gamma\) be an equivalence, and assume it restricts to an equivalence between the simple non-injective \(\Lambda\)- and \(\Gamma\)-modules.
We claim that if \(\Phi\) is induced from an exact functor \(F\colon\operatorname{mod}\,\Lambda\to\operatorname{mod}\,\Gamma\) which preserves projectives, then \(\Lambda\) and \(\Gamma\) must be Morita equivalent. Indeed, since \(F\) is right exact, \(F\cong-\otimes_{\Lambda}M\) where \(M\coloneqq F(\Lambda)\) is a \(\Lambda\)-\(\Gamma\)-bimodule. Since \(F\) is exact and preserves projectives, \(M\) must be projective both as a left \(\Lambda\)-module and as a right \(\Gamma\)-module. By the same argument as for [11, Proposition 2.4] we have an isomorphism \[M\cong M^{\prime}\oplus M^{\prime\prime}\] of \(\Lambda\)-\(\Gamma\)-bimodules where \(M^{\prime}\) is indecomposable non-projective and \(M^{\prime\prime}\) is projective. Since \(M^{\prime\prime}\) is projective as a bimodule, \(-\otimes_{\Lambda}M^{\prime\prime}\) sends any \(\Lambda\)-module to a projective \(\Gamma\)-module, and therefore induces the zero functor on the stable categories. Hence \(-\otimes_{\Lambda}M^{\prime}\) induces the same functor as \(-\otimes_{\Lambda}M\) on the stable categories, i.e. the functor \(\Phi\). Now if \(S\) is a simple \(\Lambda\)-module, then \(S\otimes_{\Lambda}M^{\prime}\) has no nonzero projective summands by [11, Proposition 2.3]. Since \(S\otimes_{\Lambda}M^{\prime}\) is isomorphic to a simple \(\Gamma\)-module in the stable category \(\operatorname{mod}\,\Gamma\), it must itself be a simple \(\Gamma\)-module. Therefore, by the proof of [11, Proposition 2.5] the functor \(-\otimes_{\Lambda}M^{\prime}\colon\operatorname{mod}\,\Lambda\to\operatorname{ mod}\,\Gamma\) must be an equivalence. This shows that \(\Lambda\) and \(\Gamma\) are Morita equivalent. Similarly, assume we are given a derived equivalence \(-\otimes_{\Lambda}^{\perp}T\colon D^{b}(\Lambda)\xrightarrow{\cong}D^{b}(\Gamma)\) by a tilting complex \(T\). Then by [10, Corollary 5.5] the induced equivalence \(\overline{\operatorname{mod}}\,\Lambda\xrightarrow{\cong}\overline{ \operatorname{mod}}\,\Gamma\) between the stable categories is given by an exact functor \(\operatorname{mod}\,\Lambda\to\operatorname{mod}\,\Gamma\). Hence, if the stable equivalence induced from the derived equivalence gives a bijection between the simple objects, then \(\Lambda\) and \(\Gamma\) must be Morita equivalent. It follows from this observation that most of the interesting examples of stable equivalence in Theorem 8.4 and Corollary 8.6 do not satisfy Theorem 8.9. Our main example that satisfy Theorem 8.9 are local uniserial algebras of Loewy length \(3\), which we discuss in the next subsection. ### Local uniserial rings of Loewy length \(3\) Let \(\Lambda\) and \(\Gamma\) be commutative local rings which are uniserial, i.e. they have a unique compositition series. Assume furthermore that they have Loewy length \(3\) and the same residue field. In particular, \(\Lambda\) and \(\Gamma\) are commutative artinian rings. There are three indecomposable \(\Lambda\)- and \(\Gamma\)-modules, and they are uniquely determined by their length. This can be seen for example by using [1, Theorem VI.2.1 a]. We denote the indecomposable \(\Lambda\)- and \(\Gamma\)-modules by \(M_{1},M_{2},M_{3}\) and \(N_{1},N_{2},N_{3}\), respectively, so that \(M_{i}\) and \(N_{i}\) have length \(i\). Our goal is to construct a bijection between the indecomposable objects in \(\operatorname{mono}_{\operatorname{Q}}(\Lambda)\) and \(\operatorname{mono}_{\operatorname{Q}}(\Gamma)\) as in Theorem 8.9. 
This is particularly useful when \(\Lambda=\mathbb{Z}/(p^{3})\) and \(\Gamma=\Bbbk[x]/(x^{3})\) where \(\Bbbk=\mathbb{Z}/(p)\), since the indecomposables in \(\operatorname{mono}_{Q}(\Gamma)\) are in general easier to compute than the ones in \(\operatorname{mono}_{Q}(\Lambda)\). For example, for \(\operatorname{mono}_{Q}(\Gamma)\) one can use covering theory (e.g. see [10] and [11]). We are not aware of such methods for \(\operatorname{mono}_{Q}(\Lambda)\). **Proposition 8.11**.: _Let \(\Lambda\) and \(\Gamma\) be commutative local uniserial rings of Loewy length smaller than or equal to \(3\) with residue field \(\Bbbk\). Then there exists an equivalence \(\overline{\operatorname{mod}}\,\Lambda\cong\overline{\operatorname{mod}}\,\Gamma\) which preserves the simple object._ Proof.: We only prove the case of Loewy length \(3\). The cases of Loewy length \(2\) and \(1\) follow similarly. Note first that \(\Lambda\) and \(\Gamma\) are selfinjective with a unique injective module given by \(M_{3}\) and \(N_{3}\), respectively. Therefore the indecomposables in \(\overline{\operatorname{mod}}\,\Lambda\) and \(\overline{\operatorname{mod}}\,\Gamma\) are \(M_{1},M_{2}\), and \(N_{1},N_{2}\), respectively. Furthermore, their hom-spaces are \[\overline{\operatorname{Hom}}_{\Lambda}(M_{i},M_{j})\cong\Bbbk\quad\text{ and}\quad\overline{\operatorname{Hom}}_{\Gamma}(N_{i},N_{j})\cong\Bbbk \tag{8.11.1}\] for \(1\leq i,j\leq 2\). Let \(f_{i,j}\) and \(g_{i,j}\) be the basis vector of \(\overline{\operatorname{Hom}}_{\Lambda}(M_{i},M_{j})\) and \(\overline{\operatorname{Hom}}_{\Gamma}(N_{i},N_{j})\), respectively, so that \(f_{i,i}=\operatorname{id}_{M_{i}}\) and \(g_{i,i}=\operatorname{id}_{N_{i}}\) for \(i=1,2\). Then we have the relations \[f_{2,1}\circ f_{1,2} =0\quad\text{and}\quad f_{1,2}\circ f_{2,1}=0\] \[g_{2,1}\circ g_{1,2} =0\quad\text{and}\quad g_{1,2}\circ g_{2,1}=0\] and hence the associations \(M_{i}\mapsto N_{i}\) and \(f_{i,j}\mapsto g_{i,j}\) extends to an equivalence \(\overline{\operatorname{mod}}\,\Lambda\xrightarrow{\infty}\overline{ \operatorname{mod}}\,\Gamma\). Since this equivalence preserves the simple object, we are done. **Remark 8.12**.: Proposition 8.11 does not hold when the Loewy length is greater than \(3\). For example, consider \(\Lambda=\mathbb{Z}/(p^{n})\) and \(\Gamma=\Bbbk[x]/(x^{n})\) with \(\Bbbk=\mathbb{Z}/(p)\) and \(n\geq 4\). Then \[\overline{\operatorname{Hom}}_{\Lambda}(\mathbb{Z}/(p^{2}),\mathbb{Z}/(p^{2}) )\cong\mathbb{Z}/(p^{2})\] and there is no object with that endomorphism ring in \(\overline{\operatorname{mod}}\,\Gamma\). Let \(\Lambda\) be a local uniserial ring of Loewy length \(3\). Then there is a bijection between finitely generated \(\Lambda\)-modules and partitions \((\alpha_{1}\geq\alpha_{2}\geq\cdots\geq\alpha_{s})\) with \(3\geq\alpha_{1}\). Explicitly, it sends the partition \(\alpha=(\alpha_{1}\geq\alpha_{2}\geq\cdots\geq\alpha_{s})\) to the module \[M(\alpha)=\bigoplus_{i=1}^{s}M_{\alpha_{i}}\] where \(M_{\alpha_{i}}\) is the indecomposable \(\Lambda\)-module of Loewy length \(\alpha_{i}\). Given a representation \(\mathsf{M}\in\operatorname{rep}(Q,\operatorname{mod}\Lambda)\), the _partition vector_ of \(\mathsf{M}\) is the tuple \((\alpha^{i})_{\Bbbk\in Q_{0}}\) where \(\alpha^{i}\) is the unique partition for which \(M(\alpha^{i})\cong\mathsf{M}_{\mathfrak{i}}\). Note that this is called the type of \(\mathsf{M}\) in [10]. 
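To illustrate this notation with a routine instance of the definitions above: for \(\Lambda=\mathbb{Z}/(p^{3})\) the partition \(\alpha=(2\geq 2\geq 1)\) corresponds to the module \(M(\alpha)=M_{2}\oplus M_{2}\oplus M_{1}=\mathbb{Z}/(p^{2})\oplus\mathbb{Z}/(p^{2})\oplus\mathbb{Z}/(p)\), and a representation \(\mathsf{M}\) of \(Q=\mathtt{1}\to\mathtt{2}\) with \(\mathsf{M}_{\mathtt{1}}=\mathbb{Z}/(p^{2})\) and \(\mathsf{M}_{\mathtt{2}}=M(\alpha)\) has partition vector \(((2),(2\geq 2\geq 1))\).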
**Theorem 8.13**.: _Let \(\Lambda\) and \(\Gamma\) be commutative local uniserial rings of Loewy length smaller than or equal to \(3\) with same residue field \(\Bbbk\). Then there exists a bijection which preserves partition vectors between indecomposable objects in \(\operatorname{mono}_{\operatorname{Q}}(\Lambda)\) and in \(\operatorname{mono}_{\operatorname{Q}}(\Gamma)\)._ Proof.: This follows from Proposition 8.11 and Theorem 8.9. The _length vector_ of a representation \(\mathsf{M}\in\operatorname{rep}(Q,\Lambda)\), is defined to be the tuple \(\ell(\mathsf{M})=(\ell(\mathsf{M}_{\mathsf{i}}))_{\mathsf{i}\in Q_{0}}\) where \(\ell(\mathsf{M}_{\mathsf{i}})\) denotes the length of the \(\Lambda\)-module \(\mathsf{M}_{\mathsf{i}}\). **Example 8.14**.: Let \(\mathbb{F}_{p}=\mathbb{Z}/(p)\) be the finite field with \(p\) elements. We compare \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) and \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\) for different choices of quivers \(Q\), using Theorem 8.13. Note that by [10, Theorem 4.6 (iii)] the category \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) is representation-finite if and only if the underlying diagram of \(Q\) is of type \(\mathbb{A}_{n}\) with \(n\leq 4\). Hence, the same statement holds for \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\). For \(Q=\mathsf{1}\to\mathsf{2}\) the indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) have been classified in [11, Section 6.3]. They are the unique representations of the form \[\mathbb{F}_{p}[x]/(x^{i})\xrightarrow{f_{j,i}}\mathbb{F}_{p}[x]/(x^{j})\] where \(f_{j,i}\) is a monomorphism and \(0\leq i\leq j\leq 3\) with \(j\neq 0\), and where \(\pi\) is the canonical projection and \(\iota\) is the canonical inclusion. It follows that the indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\) are the representations of the form \[\mathbb{Z}/p^{i}\mathbb{Z}\xrightarrow{g_{j,i}}\mathbb{Z}/p^{j}\mathbb{Z}\] where \(g_{j,i}\) is a monomorphism and \(0\leq i\leq j\leq 3\) with \(j\neq 0\), and \[\mathbb{Z}/(p^{2})\xrightarrow{\begin{pmatrix}\pi^{\prime}\\ \iota^{\prime}\end{pmatrix}}\mathbb{Z}/(p)\oplus\mathbb{Z}/(p^{3}).\] where \(\pi^{\prime}\) and \(\iota^{\prime}\) are the canonical projection and inclusion, respectively. Now [13, Theorem 1.2] implies that the indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{n}))\) are in bijection with the indecomposable valuated \(p\)-groups (in the sense of [12]) whose value is bounded by \(n\). Using this and the fact that the group at vertex \(\mathsf{1}\) in the classification above is always cyclic, we recover [13, Corollary 4.3]. For \(Q=\mathsf{1}\to\mathsf{2}\to\mathsf{3}\), the isomorphism classes of indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) are uniquely determined by their dimension vector [10, Theorem 3.2]. There are \(23\) indecomposable up to isomorphism, and the different dimension vectors that occur are \[\{001,002,011,012,111,112,122,222,003,013,023\\ 113,123,223,333,024,124,224,234,244,135,245,246\}.\] Since the bijection in Theorem 8.13 preserves partition vectors, it also preserves length vectors. So we can conclude that the indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\) are uniquely determined by their length vectors, and the different length vectors that occur are given in the list above. 
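As a concrete instance of these invariants (a routine check against the classification above): the representation \(\mathbb{Z}/(p^{2})\to\mathbb{Z}/(p)\oplus\mathbb{Z}/(p^{3})\) listed for \(Q=\mathtt{1}\to\mathtt{2}\) has partition vector \(((2),(3\geq 1))\) and length vector \((2,4)\); by Theorem 8.13 its counterpart in \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) has the same partition vector, and hence the same length vector.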
The list of partition vectors for the indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) (and hence in \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\)), are given in Figure 1 in Section 6 of [10, Example 5.1]. For \(Q_{1}=\mathsf{1}\to\mathsf{2}\to\mathsf{3}\) and \(Q_{2}=\mathsf{1}\gets\mathsf{2}\to\mathsf{3}\) we have equivalences \[\overline{\operatorname{mono}}_{Q_{1}}(\mathbb{F}_{p}[x]/(x^{3}))\cong \overline{\operatorname{mono}}_{Q}(\Bbbk[x]/(x^{3}))\cong\overline{ \operatorname{mono}}_{Q_{2}}(\mathbb{F}_{p}[x]/(x^{3}))\] by [10, Theorem 1], where \(Q\) is the quiver \(\mathsf{1}\to\mathsf{2}\to\mathsf{3}\) considered in the previous paragraph. In particular, \(\operatorname{mono}_{Q_{1}}(\mathbb{F}_{p}[x]/(x^{3}))\) and \(\operatorname{mono}_{Q}(\Bbbk[x]/(x^{3}))\) and \(\operatorname{mono}_{Q_{2}}(\mathbb{F}_{p}[x]/(x^{3}))\) have the same number of indecomposable objects. The indecomposables in \(\operatorname{mono}_{Q_{1}}(\mathbb{F}_{p}[x]/(x^{3}))\) and in \(\operatorname{mono}_{Q_{2}}(\mathbb{F}_{p}[x]/(x^{3}))\) are classified in [10, Figure 7] and [10, Figure 8], respectively, in terms of their partition vector. Hence, we get the indecomposables for \(\operatorname{mono}_{Q_{1}}(\mathbb{Z}/(p^{3}))\) and \(\operatorname{mono}_{Q_{2}}(\mathbb{Z}/(p^{3}))\). For \(Q=\mathsf{1}\to\mathsf{2}\to\mathsf{3}\to\mathsf{4}\), there are \(84\) isomorphism classes of indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\), and a list of them is given in Figure 2 in Section 6 of [10, see also [10, Example 5.2]. The indecomposables are described in terms of their restriction to each vertex, which is equivalent to giving their partition vector. Hence, we get a list of the indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\) in terms of their partition vector. Studying the list carefully one can see that each indecomposable is uniquely determined by its length vector. If \(Q=\mathtt{1}\to\mathtt{2}\to\mathtt{3}\to\mathtt{4}\to\mathtt{5}\), then it follows from [15, Theorem 1.3] that the categories \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) and \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\) have infinitely many indecomposable objects. Furthermore, if \(\,\Bbbk\) is an algebraically closed field, then \(\operatorname{mono}_{Q}(\Bbbk[x]/(x^{3}))\) is of tame representation type by [15, Theorem 1.5]. Hence, one could try to determine the indecomposable objects in \(\operatorname{mono}_{Q}(\mathbb{F}_{p}[x]/(x^{3}))\) using a similar approach as in [11] and [10]. Then one could use Theorem 8.13 to transfer the results to the indecomposables in \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{3}))\). ### The monomorphism category for \(\operatorname{rad}^{2}\)-zero Nakayama algebras Recall that an Artin algebra \(\Lambda\) is a **Nakayama algebra** if all indecomposable left and right \(\Lambda\)-modules are uniserial, i.e. have a unique composition series. In this section, we will describe the indecomposable objects of \(\operatorname{mono}_{Q}(\Lambda)\) when \(\Lambda\) is a \(\operatorname{rad}^{2}\)-zero-Nakayama algebra. We start with the following well-known lemma, a proof of which we provide for convenience. **Lemma 8.15**.: _Let \(\Lambda\) be a \(\operatorname{rad}^{2}\)-zero-Nakayama algebra. Then \(\overline{\operatorname{mod}}\,\Lambda\) is semisimple. 
More precisely, it is equivalent to the module category of a product of skew fields - as many as there are isomorphism classes of non-injective simple modules._ Proof.: Since \(\Lambda\) is a Nakayama algebra, it is well-known, see e.g. [1, Lemma IV.2.5], that every indecomposable \(\Lambda\)-module \(M\) is of the form \(P/\operatorname{rad}^{m}P\) for an indecomposable projective module \(P\). Since \(\operatorname{rad}^{2}\Lambda=0\), it follows that \(m\leq 1\). Suppose that \(M\) is not simple. Then \(m=0\) and \(M\cong P\) for some indecomposable projective. However, in this case [1, Lemma IV.2.15] implies that \(M\) is also injective. Therefore, \(M=0\) in \(\overline{\operatorname{mod}}\,\Lambda\). It follows that every object in \(\overline{\operatorname{mod}}\,\Lambda\) is isomorphic to a direct sum of simple modules. Thus, \(\overline{\operatorname{mod}}\,\Lambda\) is a semisimple category. The claim follows. We need the following variant of Gabriel's theorem for path algebras over skew fields, which is a special case of the main result of [1]. **Theorem 8.16**.: _Let \(D\) be a skew field and let \(Q\) be a finite quiver. Then \(\operatorname{mod}DQ\) is of finite representation type if and only if \(Q\) is Dynkin. In this case, there is a bijection between the set of isomorphism classes of indecomposable \(DQ\)-modules and the set of positive roots \(\Phi^{+}\) for the corresponding Dynkin diagram._ Combining these we obtain the classification of indecomposables in \(\operatorname{mono}_{Q}(\Lambda)\). If \(\Lambda=\Bbbk[x]/(x^{2})\) then the first part is equivalent to [14, Theorem 4.6 (i)]. **Theorem 8.17**.: _Let \(Q\) be a finite connected acyclic quiver and let \(\Lambda\) be a non-semisimple \(\operatorname{rad}^{2}\)-zero Nakayama algebra. Let \(m\) be the number of simple \(\Lambda\)-modules and let \(t\) be the number of non-injective simple \(\Lambda\)-modules. The following hold._ 1. _The category_ \(\operatorname{mono}_{Q}(\Lambda)\) _is of finite representation type if and only if_ \(Q\) _is Dynkin._ 2. _The number of indecomposable injective objects in_ \(\operatorname{mono}_{Q}(\Lambda)\) _is_ \(m\cdot|Q_{0}|\)_._ 3. _The number of indecomposable non-injective objects is_ \(t\cdot|\Phi^{+}|\)_, where_ \(\Phi^{+}\) _is the set of positive roots of the (corresponding) Dynkin diagram._ Proof.: Let \(\mathbb{L}\) be the endomorphism ring of the direct sum of the simple non-injective \(\Lambda\)-modules (one for each isomorphism class). Combining Theorem 7.9 with Lemma 8.15, there is a bijection between the indecomposable non-injective objects in \(\operatorname{mono}_{Q}(\Lambda)\) and the indecomposable objects in \(\operatorname{mod}\mathbb{L}Q\). As there are only finitely many indecomposable injective objects in \(\operatorname{mono}_{Q}(\Lambda)\), \(\operatorname{mono}_{Q}(\Lambda)\) is of finite representation type if and only if \(\operatorname{mod}\mathbb{L}Q\) is of finite representation type. In this case, we count indecomposables. There are exactly \(m\cdot|Q_{0}|\) indecomposable injective modules in \(\operatorname{mono}_{Q}(\Lambda)\), as each of them is of the form \(f_{!}(J)\) for some indecomposable injective module in \(\operatorname{mod}\Lambda Q_{0}\). According to Theorem 8.16, there are exactly \(t\cdot|\Phi^{+}|\) indecomposable non-injective modules in \(\operatorname{mono}_{Q}(\Lambda)\), so the claim follows. **Remark 8.18**.: It is well-known that the number of positive roots of \(\mathbb{A}_{n}\) is \(\binom{n+1}{2}\). 
By Theorem 8.17 this is also the number of indecomposable non-injective object in \(\operatorname{Mono}_{Q}(\Bbbk[x]/(x^{2}))\) when the underlying diagram of \(Q\) is \(\mathbb{A}_{n}\). This recovers an observation in [13, 6.1 (i)]. **Remark 8.19**.: An obvious question given the preceding theorem is whether this is something special about \(\operatorname{mono}_{Q}(\Lambda)\) or whether the same result holds even for \(\operatorname{mod}\Lambda Q\). This is however not the case. For this remark we restrict to the case of an algebraically closed ground field. Furthermore assume that \(\Lambda\) and \(Q\) are connected. There are three cases to distinguish: \(\Lambda\cong\Bbbk\), \(\Lambda\cong\left(\begin{smallmatrix}\Bbbk&\Bbbk\\ 0&\Bbbk\end{smallmatrix}\right)\), and the rest. If \(\Lambda\cong\Bbbk\), then indeed Gabriel's theorem states that \(\Lambda Q\) is of finite representation type if and only if \(Q\) is Dynkin. If \(\Lambda\cong\left(\begin{smallmatrix}\Bbbk&\Bbbk\\ 0&\Bbbk\end{smallmatrix}\right)\), then \(\Lambda Q\cong T_{2}(\Bbbk Q)\). The representation type of \(T_{2}(A)\) for a finite-dimensional algebra \(A\) was studied in [11] and from Theorem 4 therein it follows that \(\Lambda Q\) is of finite representation type if and only if \(Q\) is of type \(A_{n}\) for \(n\leq 4\), with arbitrary orientation. For \(n\leq 3\), a proof of representation-finiteness can already be found in [1, Proposition 1.2, Theorem 1.3]. Lastly, if \(\Lambda\) is any other \(\operatorname{rad}^{2}\)-zero Nakayama algebra, then \(\Lambda Q\) is of finite representation type if and only if \(Q\) is of type \(A_{n}\) for \(n\leq 3\). For \(\Lambda\cong\Bbbk[x]/(x^{2})\), this follows from [10, Proposition 13.1]. For general \(\Lambda\) an analogous proof using covering theory and the knitting algorithm yields the result. (The cases of \(Q=A_{2}\) or linearly oriented \(A_{3}\) can also be found in [1, Proposition 1.2] and [11, Theorem 2.4(b)]. In case \(Q=A_{4}\), one can use the Happel-Vossieck list [12] to find a subquiver with relations of the form to conclude that the algebra is representation-infinite.) **Example 8.20**.: Let \(\Bbbk\) be a field and let \(\Lambda=\Bbbk[x]/(x^{2})\). Let \(Q\) be a linearly oriented quiver of type \(\mathbb{A}_{n}\). In this case, there is only one simple \(\Lambda\)-module, which is non-injective. Therefore, according to the preceding theorem and its proof, there are \(n\) indecomposable injective modules given by \(f_{!}(\Lambda(\mathfrak{i}))\) where \(\Lambda(\mathfrak{i})\) denotes the representation of \(\mathbb{A}_{n}\) given by \(\Lambda\) at vertex \(\mathfrak{i}\), and zero elsewhere. As a representation, these look as follows: \[0\to\dots\to 0\to\Lambda\stackrel{{\operatorname{id}}}{{\to}} \Lambda\stackrel{{\operatorname{id}}}{{\to}}\dots\stackrel{{ \operatorname{id}}}{{\to}}\Lambda\] On the other hand, the indecomposable non-injectives are given by applying the Mimo-construction to objects in \(\operatorname{mod}\Bbbk\mathbb{A}_{n}\), since \(\operatorname{mod}\Lambda\cong\operatorname{mod}\Bbbk\). The indecomposable objects of \(\operatorname{mod}\Bbbk\mathbb{A}_{n}\) are given by the interval modules \[0\to\dots\to 0\to\Bbbk\stackrel{{\operatorname{id}}}{{\to}} \dots\stackrel{{\operatorname{id}}}{{\to}}\Bbbk\to 0\to\dots\to 0.\] where the first \(\Bbbk\) is in position \(\mathfrak{i}\) and the last \(\Bbbk\) is in position \(\mathfrak{j}\). 
It is easy to check that for such an interval module \(M\) we have \(\ker M_{\Bbbk,\operatorname{in}}=0\) unless \(\Bbbk=\mathfrak{j}+1\), in which case \(\ker M_{\Bbbk,\operatorname{in}}=\Bbbk\), which has injective envelope \(\Lambda\). Therefore, following the explicit description of the Mimo-construction in Example 6.8, we obtain that the indecomposable non-injective objects in \(\operatorname{Mono}_{\mathbb{A}_{n}}(\Lambda)\) are given by \[0\to\dots\to 0\to\Bbbk\stackrel{{\operatorname{id}}}{{\to}} \dots\stackrel{{\operatorname{id}}}{{\to}}\Bbbk\stackrel{{ \iota}}{{\to}}\Lambda\stackrel{{\operatorname{id}}}{{\to}} \dots\stackrel{{\operatorname{id}}}{{\to}}\Lambda,\] where \(\iota\) is a chosen embedding of \(\Bbbk\) into \(\Lambda\). This recovers [13, Theorem 3.1] in the case \(n=3\). For arbitrary \(n\), counting the number of indecomposable non-injective, we see that it is equal to \(\binom{n+1}{2}\) as noted in Remark 8.18. It follows from Theorem 8.13 that \(\operatorname{mono}_{Q}(\Bbbk[x]/(x^{2}))\) and \(\operatorname{mono}_{Q}(\mathbb{Z}/(p^{2}))\) have the same number of indecomposable objects of the same form. In fact, this could also be seen by following the same steps as above when \(\Bbbk=\mathbb{Z}/(p)\) and \(\Lambda=\mathbb{Z}/(p^{2})\). More generally, the same arguments yield that for \(\Lambda\) an arbitrary \(\operatorname{rad}^{2}\)-zero Nakayama algebra and \(Q\) a linearly oriented type \(\mathbb{A}_{n}\) quiver, the indecomposable injective objects of \(\operatorname{mono}_{Q}(\Lambda)\) are given by \[0\to\dots\to 0\to I\stackrel{{\mathrm{id}}}{{\to}}\dots\stackrel{{ \mathrm{id}}}{{\to}}I,\] where \(I\) runs through the indecomposable injective \(\Lambda\)-modules, while the indecomposable non-injective objects of \(\mathrm{mono}_{Q}(\Lambda)\) are given by \[0\to\dots\to 0\to L\stackrel{{\mathrm{id}}}{{\to}}\dots\stackrel{{ \mathrm{id}}}{{\to}}L\stackrel{{\iota}}{{\to}}I(L)\stackrel{{ \mathrm{id}}}{{\to}}\dots\stackrel{{\mathrm{id}}}{{\to}}I(L)\] where \(L\) runs through the simple non-injective \(\Lambda\)-modules up to isomorphism, \(I(L)\) denotes an injective envelope of \(L\), with \(\iota\colon L\to I(L)\) a chosen embedding. If we set \(n=3\) and let \(\Lambda\) be the algebra with relations \(\alpha\circ\beta=0=\beta\circ\alpha\), then this recovers the description of the indecomposable objects obtained in [24, Section 6.2]. **Example 8.21**.: Let \(Q\) be the quiver \(\mathtt{1}\to\mathtt{2}\leftarrow\mathtt{3}\to\mathtt{4}\). Let \(\Lambda=\Bbbk[x]/(x^{2})\). 
Similarly to the preceding example, the \(4\) indecomposable injective objects are given by \[f_{!}(\Lambda(\mathtt{1}))=(\Lambda\stackrel{{ 1}}{{\to}}\Lambda\gets 0\to 0), \qquad f_{!}(\Lambda(\mathtt{2}))=(0\to\Lambda\gets 0\to 0),\] \[f_{!}(\Lambda(\mathtt{3}))=(0\to\Lambda\stackrel{{ 1}}{{\leftarrow}}\Lambda\stackrel{{ 1}}{{\to}}\Lambda), \qquad f_{!}(\Lambda(\mathtt{4}))=(0\to 0\gets 0\to\Lambda).\] On the other hand, the indecomposable non-injective objects are given by applying the Mimoconstr to objects in \(\mathrm{mod}\,\Bbbk Q\), since \(\overline{\mathrm{mod}}\,\Lambda\cong\mathrm{mod}\,\Bbbk\): \[0\to\Bbbk\gets 0\to 0, 0\gets 0\to\Bbbk, \Bbbk\stackrel{{ 1}}{{\to}}\Bbbk\gets 0\to 0, 0\to\Bbbk\stackrel{{ 1}}{{\leftarrow}}\Bbbk\stackrel{{ 1}}{{\to}}\Bbbk,\] \[\Bbbk\stackrel{{ 1}}{{\to}}\Bbbk\stackrel{{ 1}}{{\leftarrow}}\Bbbk\stackrel{{ 1}}{{\to}}\Bbbk, 0\to\Bbbk\stackrel{{ 1}}{{\leftarrow}}\Bbbk\to 0, 0\to 0\leftarrow\Bbbk\stackrel{{ 1}}{{\to}}\Bbbk,\] \[\Bbbk\stackrel{{ 1}}{{\to}}\Bbbk\stackrel{{ 1}}{{\leftarrow}}\Bbbk\to 0, 0\to 0\leftarrow\Bbbk\to 0, \Bbbk\to 0\gets 0\to 0.\] This yields the following indecomposable objects in \(\mathrm{mono}_{Q}(\Lambda)\): \[0\to\Bbbk\gets 0\to 0, 0\gets 0\to\Bbbk, \Bbbk\stackrel{{ 1}}{{\to}}\Bbbk\gets 0\to 0, 0\to\Bbbk\stackrel{{ 1}}{{\leftarrow}}\Bbbk\stackrel{{ 1}}{{\to}}\Bbbk,\] \[\Bbbk\stackrel{{\binom{1}{2}}}{{\to}}\Bbbk\oplus \Lambda\stackrel{{\binom{1}{2}}}{{\leftarrow}}\Bbbk\stackrel{{ 1}}{{\to}}\Bbbk, 0\to\Bbbk\stackrel{{ 1}}{{\leftarrow}}\Bbbk\to 0, 0\to\Lambda\stackrel{{ 1}}{{\leftarrow}}\Bbbk\to 0, 0\to\Lambda\stackrel{{ 1}}{{\leftarrow}}\Bbbk\to \Bbbk,\] \[\Bbbk\stackrel{{\binom{1}{2}}}{{\to}}\Bbbk\oplus \Lambda\stackrel{{\binom{1}{2}}}{{\leftarrow}}\Bbbk\stackrel{{ 1}}{{\to}}\Lambda, 0\to\Lambda\stackrel{{ 1}}{{\leftarrow}}\Bbbk\stackrel{{ 1}}{{\to}}\Lambda, \qquad\Bbbk\stackrel{{ 1}}{{\to}}\Lambda\gets 0\to 0.\] From this and the previous example we see that the dimension vectors of \(\mathrm{mono}_{Q}(\Lambda)\) depend on the orientation of \(Q\). Replacing \(\Lambda\) by a general \(\mathrm{rad}^{2}\)-square Nakayama algebra is done similarly as in the preceding example. ### The Kronecker quiver Assume \(\Bbbk\) is algebraically closed. Let \(Q=\ 1\rTo 2\) be the Kronecker quiver and let \(\Lambda=\Bbbk[x]/(x^{2})\). An object of \(\mathrm{mono}_{Q}(\Lambda)\) consists of a triple \((U,V,T)\) where \(V\) is a finite-dimensional \(\Bbbk\)-vector space, \(U\oplus U\) is a \(\Bbbk\)-subspace of \(V\), and \(T\) is a linear operator on \(V\) satisfying \(T^{2}=0\), and which restricts to a linear operator of the form \[\begin{pmatrix}T^{\prime}&0\\ 0&T^{\prime}\end{pmatrix}\colon U\oplus U\to U\oplus U\] on \(U\oplus U\). We use Theorem 8.1 to describe the indecomposables in \(\mathrm{mono}_{Q}(\Lambda)\). Indeed, the indecomposable injective objects are given by \[f_{!}(\Lambda(\mathtt{1}))=\ \Lambda\rTo 2\] where \(i_{1}\) and \(i_{2}\) are the inclusions of the first and second summand of \(\Lambda^{2}\), respectively. By Theorem 8.1 we can obtain the indecomposable non-injective objects in \(\mathrm{mono}_{Q}(\Lambda)\) from the indecomposable objects in \(\mathrm{rep}(Q,\overline{\mathrm{mod}}\Lambda)\cong\mathrm{rep}(Q,\mathrm{mod }\,\Bbbk)\). The latter is just the category of representations of the Kronecker quiver over \(\Bbbk\). This has tame representation type and its indecomposable finite-dimensional representations are well-known, e.g. see [1, Section VIII.7]. 
To describe them we let \(V_{n}\) denote the vector space of homogenous polynomials of degree \(n\) in variables \(y\) and \(z\), and we let \(V_{n}^{*}\) denote its \(\,\Bbbk\)-dual. Then the indecomposables of \(\operatorname{rep}(Q,\operatorname{mod}\,\Bbbk)\) are the preprojective and preinjective representations \[P_{n}=\ V_{n-1}\xrightarrow[z]{y}V_{n}\quad\text{ and }\quad I_{n}=\ V_{n}^{*} \xrightarrow[\frac{(y)^{*}}{(z)^{*}}]{(z)^{*}}V_{n-1}^{*}\qquad n\geq 0\] and the regular representations \[R_{p^{n}}=\ V_{n-1}\xrightarrow[z]{y}V_{n}/\Bbbk p^{n}\quad\text{ where }n\geq 1\text{ and }p=ay+bz\neq 0.\] Here, \(R_{p^{n}}\cong R_{q^{n}}\) if and only if \(q=cy+dz\) and \((a\colon b)=(c\colon d)\) in \(\mathbb{P}^{1}\). Hence, the regular representations are indexed by the projective line. Since there are exact sequences \[0\to V_{n-2}\xrightarrow[z]{\begin{pmatrix}-z\\ y\end{pmatrix}}V_{n-1}\oplus V_{n-1}\xrightarrow[\begin{matrix}y&z\\ \end{matrix}V_{n}\to 0\quad n\geq 1 \tag{8.21.2}\] \[0\to V_{n+1}^{*}\xrightarrow[(y)^{*}]{\begin{pmatrix}-(z)^{*} \\ y^{*}\end{pmatrix}}V_{n}^{*}\oplus V_{n}^{*}\xrightarrow[\begin{matrix}(y)^{*}&( z)^{*}\\ \end{matrix}V_{n-1}^{*}\to 0\quad n\geq 1\] (8.21.3) \[0\to V_{n-2}\oplus\Bbbk p^{n-1}\xrightarrow[\begin{matrix}-z&a \\ y&b\end{matrix}]V_{n-1}\oplus V_{n-1}\xrightarrow[\begin{matrix}y&z\\ \end{matrix}]V_{n}/\Bbbk p^{n}\to 0\quad n\geq 1 \tag{8.21.1}\] we get \[L_{1}\operatorname{top}_{X}(P_{n})=(0,V_{n-2})\qquad L_{1}\operatorname{top}_{ X}(I_{n})=(0,V_{n+1}^{*})\qquad L_{1}\operatorname{top}_{X}(R_{\mathfrak{p}^{n}})=(0,V_{n-2}\oplus\Bbbk p^{n-1}).\] Using the formula in Example 6.8 we can calculate the Mimo of \(P_{n}\), \(I_{n}\) and \(R_{\mathfrak{p}^{n}}\). For simplicity we write \(V[x]/(x^{2})=V\otimes_{\Bbbk}\Bbbk[x]/(x^{2})\) for a \(\,\Bbbk\)-vector space \(V\). Then \[\operatorname{Mimo}P_{n}\cong\ (V_{n-1}\xrightarrow[\frac{g_{1}}{g_{2}}]{g_{2}}V_{n} \oplus V_{n-2}[x]/(x^{2}))\quad\text{ with }\quad g_{1}=\begin{pmatrix}y\\ g_{1}^{\prime}\end{pmatrix}\quad\text{and}\quad g_{2}=\begin{pmatrix}z\\ g_{2}^{\prime}\end{pmatrix}\] where \(V_{n-1}\) and \(V_{n}\) are considered as \(\Bbbk[x]/(x^{2})\)-modules with trivial action by \(x\), and where \(g_{1}^{\prime}=xg_{1}^{\prime\prime}\) and \(g_{2}^{\prime}=xg_{2}^{\prime\prime}\) and \(\begin{pmatrix}g_{1}^{\prime\prime}&g_{2}^{\prime\prime}\end{pmatrix}:V_{n-1} \oplus V_{n-1}\to V_{n-2}\) is a choice of a retraction to the leftmost map in the exact sequence (8.21.1). Similarly, \[\operatorname{Mimo}I_{n}\cong\ (V_{n}^{*}\xrightarrow[\frac{h_{1}}{h_{2}}]{h_{2}}V_{n -1}^{*}\oplus V_{n+1}^{*}[x]/(x^{2}))\quad\text{ with }\quad h_{1}=\begin{pmatrix}(y)^{*}\\ h_{1}^{\prime}\end{pmatrix}\quad\text{and}\quad h_{2}=\begin{pmatrix}(z)^{*}\\ h_{2}^{\prime}\end{pmatrix}\] where \(h_{1}^{\prime}=xh_{1}^{\prime\prime}\) and \(h_{2}^{\prime}=xh_{2}^{\prime\prime}\) and \(\begin{pmatrix}h_{1}^{\prime\prime}&h_{2}^{\prime\prime}\end{pmatrix}:V_{n}^{* }\oplus V_{n}^{*}\to V_{n+1}^{*}\) is a choice of a retraction to the leftmost map in the exact sequence (8.21.2). 
Finally, \(\operatorname{Mimo}R_{\mathfrak{p}^{n}}\) is given by \[\operatorname{Mimo}R_{\mathfrak{p}^{n}}\cong\ (V_{n-1}\xrightarrow[\frac{k_{1}}{k_{2}}]{k_{ 2}}V_{n}\oplus(V_{n-2}\oplus\Bbbk p^{n-1})[x]/(x^{2}))\quad\text{ with }\quad k_{1}=\begin{pmatrix}y\\ k_{1}^{\prime}\end{pmatrix}\quad\text{and}\quad k_{2}=\begin{pmatrix}z\\ k_{2}^{\prime}\end{pmatrix}\] where \(k_{1}^{\prime}=xk_{1}^{\prime\prime}\) and \(k_{2}^{\prime}=xk_{2}^{\prime\prime}\) and \(\begin{pmatrix}k_{1}^{\prime\prime}&k_{2}^{\prime\prime}\end{pmatrix}:V_{n-1} \oplus V_{n-1}\to V_{n-2}\oplus\Bbbk p^{n-1}\) is a choice of a retraction to the leftmost map in the exact sequence (8.21.3). It follows from Theorem 8.1 that \[f_{!}(\Lambda(1)),\ f_{!}(\Lambda(2)),\ \operatorname{Mimo}P_{n},\ \operatorname{Mimo}I_{n},\text{ and } \operatorname{Mimo}R_{p^{n}}\] for all possible \(n\) and \(p\) are up to isomorphism all the indecomposable objects in \(\operatorname{mono}_{Q}(\Lambda)\). We can also consider the category \(\operatorname{Mono}_{Q}(\Lambda)\) of all monic representations, consisting of triples \((U,V,T)\) as above, but where \(U\) and \(V\) are not necessarily finite-dimensional. By Theorem 8.1 the indecomposable non-injective objects in \(\operatorname{Mono}_{Q}(\Lambda)\) can be obtained from the indecomposable objects in \(\operatorname{rep}(Q,\operatorname{\overline{Mod}}\Lambda)\cong\operatorname{ rep}(Q,\operatorname{Mod}\Bbbk)\) using the Mimo-construction. For example, for the generic module \[G=\ \Bbbk(t)\xrightarrow[1]{t}\Bbbk(t)\] we have an exact sequence \[0\to\Bbbk(t)\xrightarrow{\begin{pmatrix}-1\\ t\end{pmatrix}}\Bbbk(t)\oplus\Bbbk(t)\xrightarrow{\begin{pmatrix}t&1\end{pmatrix}} \Bbbk(t)\to 0\] and hence we get an indecomposable object in \(\operatorname{Mono}_{Q}(\Lambda)\) \[\operatorname{Mimo}G=\begin{array}{c}(\Bbbk(t)\xrightarrow{l_{1}}\Bbbk(t) \oplus k(t)[x]/(x^{2}))\quad\text{with}\quad l_{1}=\begin{pmatrix}t\\ -x\end{pmatrix}\quad\text{and}\quad l_{2}=\begin{pmatrix}1\\ 0\end{pmatrix}.\end{array}\] In general, classifying all indecomposables in \(\operatorname{Mono}_{Q}(\Lambda)\) is difficult, since it is difficult for the category \(\operatorname{rep}(Q,\operatorname{Mod}\Bbbk)\). For example, there is an exact embedding of the category of representations of the \(3\)-Kronecker quiver into \(\operatorname{rep}(Q,\operatorname{Mod}\Bbbk)\), see [10]. ## 9. Applications to modulations In this section we apply our results to representations of modulations over \(\operatorname{rad}^{2}\)-zero selfinjective Nakayama algebras over \(\Bbbk\). In particular, we recover results in [11], and give a characterization for when the GLS algebras introduced in [10] are of finite Cohen-Macaulay type, assuming the entries in the symmetrizers are \(\leq 2\). ### Prospecies of \(\operatorname{rad}^{2}\)-zero cyclic Nakayama algebras Fix a field \(\Bbbk\), a finite acyclic quiver \(Q\), and a _prospecies_ on \(Q\) in the sense of [12], i.e. for each vertex \(\mathfrak{i}\in Q_{0}\) a finite-dimensional \(\Bbbk\)-algebra \(\Lambda_{\mathfrak{i}}\) and for each arrow \(\alpha\colon\mathfrak{i}\to\mathfrak{j}\) in \(Q\) a \(\Lambda_{j}\)-\(\Lambda_{\mathfrak{i}}\)-bimodule \(M_{\alpha}\) which is projective as a left \(\Lambda_{\mathfrak{j}}\)-module and right \(\Lambda_{\mathfrak{i}}\)-module. 
Associated to this we have a modulation \(\mathfrak{B}\) on \(Q\) where \(\mathcal{B}_{\mathfrak{i}}=\operatorname{mod}\Lambda_{\mathfrak{i}}^{\operatorname {op}}\) is the category of finite-dimensional left \(\Lambda_{\mathfrak{i}}\)-modules and \(F_{\alpha}=M_{\alpha}\otimes_{\Lambda_{\mathfrak{i}}}-\colon\operatorname{ mod}\Lambda_{\mathfrak{i}}^{\operatorname{op}}\to\operatorname{mod}\Lambda_{ \mathfrak{j}}^{\operatorname{op}}\) is given by the tensor product, see Example 3.9. We also have the tensor algebra \[T(M)\coloneqq\Lambda\oplus M\oplus(M\otimes_{\Lambda}M)\oplus\cdots\] where \(\Lambda=\prod_{\mathfrak{i}\in Q_{0}}\Lambda_{\mathfrak{i}}\) and \(M=\bigoplus_{\alpha\in Q_{1}}M_{\alpha}\) is a \(\Lambda\)-bimodule in the natural way. The category \(\operatorname{rep}\mathfrak{B}\) of \(\mathfrak{B}\)-representations is equivalent to the category \(\operatorname{mod}T(M)^{\operatorname{op}}\) of finitely generated left \(T(M)\)-modules, see [1, Lemma 2.3.4]. If the \(\Lambda_{\mathfrak{i}}\)'s are products of the field \(\Bbbk\), then \(T(M)\) is path algebra of a quiver, which we describe. **Definition 9.1**.: Let \(\mathcal{M}=(t_{\mathfrak{i}},m_{\mathfrak{k},l}^{\alpha})\) be a tuple consisting of a non-negative integer \(t_{\mathfrak{i}}\) for each vertex \(\mathfrak{i}\) in \(Q\) and a non-negative integer \(m_{\mathfrak{k},l}^{\alpha}\) for each arrow \(\alpha\colon\mathfrak{i}\to\mathfrak{j}\) in \(Q\) and each pair of integers \(1\leq k\leq t_{\mathfrak{i}}\) and \(1\leq l\leq t_{\mathfrak{j}}\). The quiver \(Q(\mathcal{M})\) is defined as follows: * \(Q(\mathcal{M})_{0}=\{(\mathfrak{i},m)\mid\mathfrak{i}\in Q_{0}\text{ and }1\leq m\leq t_{\mathfrak{i}}\}\). * The number of arrows from \((\mathfrak{i},k)\) to \((\mathfrak{j},l)\) is \(\sum m_{k,l}^{\alpha}\) where the sums runs over all arrows \(\alpha\) with source \(\mathfrak{i}\) and target \(\mathfrak{j}\). **Lemma 9.2**.: _Assume \(\Lambda_{\mathfrak{i}}\cong\Bbbk\times\cdots\times\Bbbk\) is a product of \(t_{\mathfrak{i}}\)-copies of \(\Bbbk\) for each \(\mathfrak{i}\in Q_{0}\). Let \(e_{m}^{\mathfrak{i}}\) be the idempotent corresponding to the \(m\)'th copy of \(\Bbbk\) in \(\Lambda_{\mathfrak{i}}\). For an arrow \(\alpha\colon\mathfrak{i}\to\mathfrak{j}\) in \(Q\) and integers \(1\leq k\leq t_{\mathfrak{i}}\) and \(1\leq l\leq t_{\mathfrak{j}}\) let \(m_{k,l}^{\alpha}\coloneqq\dim_{\Bbbk}e_{l}^{\mathfrak{j}}M_{\alpha}e_{k}^{ \mathfrak{i}}\). Then \(T(M)\cong\Bbbk(Q(\mathcal{M}))\)._ Proof.: Since \(\Lambda=\prod_{\mathfrak{i}\in Q_{0}}\Lambda_{\mathfrak{i}}\) and \(\Lambda_{\mathfrak{i}}\) is a product of \(t_{\mathfrak{i}}\) copies of \(\Bbbk\), the algebra \(\Lambda\) is a product of \(\Bbbk\)'s indexed over the vertex set of \(Q(\mathcal{M})\). By [1, Proposition III.1.3] the claim follows. By assumption, the functors \(F_{\alpha}=M_{\alpha}\otimes_{\Lambda_{\mathfrak{i}}}-\) are exact and preserve projective modules. If the algebras \(\Lambda_{\mathfrak{i}}\) are selfinjective, then the functors also preserve injective modules. Therefore the endofunctor \(X\) defined from the modulation satisfies the standing assumptions in this paper. Furthermore, in this case the monomorphism category \(\operatorname{Mono}(X)\) of \(\mathfrak{B}\) coincides with the category \(\operatorname{Gproj}T(M)\) of finitely generated Gorenstein projective right \(T(M)\)-modules, see [12, Proposition 3.8]. 
In particular, by Theorem 5.6 we have an equivalence \(\overline{\operatorname{Gproj}}\,T(M)^{\operatorname{op}}\to\operatorname{rep} \overline{\mathfrak{B}}\) where \(\overline{\mathfrak{B}}\) denotes the modulation in Example 3.14. Since \(T(M)\) is \(1\)-Gorenstein, see [12, Proposition 3.5], the stable category \(\overline{\operatorname{Gproj}}\,T(M)\) is equivalent to the singularity category of \(T(M)\)[13]. We now consider modulations of \(\operatorname{rad}^{2}\)-zero cyclic Nakayama algebras. This covers both the modulations in [1] and in [14]. Here by a \(\operatorname{rad}^{2}\)-zero cyclic Nakayama algebra we mean the path algebra of the cyclic quiver for some integer \(n\), modulo the ideal making the composite of any two arrows zero. **Theorem 9.3**.: _Assume \(\Lambda_{\mathtt{i}}\) is either a \(\operatorname{rad}^{2}\)-zero cyclic Nakayama algebras or \(\mathtt{k}\) for \(\mathtt{i}\in Q_{0}\). Set_ \[t_{\mathtt{i}}=\begin{cases}0,&\text{if }\Lambda_{\mathtt{i}}=\mathtt{k}\\ \text{number of simples of }\Lambda_{\mathtt{i}},&\text{otherwise}.\end{cases}\] _For \(1\leq k\leq t_{\mathtt{i}}\) let \(e^{\mathtt{i}}_{k}\) be the idempotent at vertex \(k\) of \(\Lambda_{\mathtt{i}}\). For each arrow \(\alpha\colon\mathtt{i}\to\mathtt{j}\) write \(M_{\alpha}\cong M^{\prime}_{\alpha}\oplus M^{\prime\prime}_{\alpha}\) where \(M^{\prime\prime}_{\alpha}\) is a maximal projective summand of \(M_{\alpha}\) as a \(\Lambda_{\mathtt{j}}\)-\(\Lambda_{\mathtt{i}}\)-bimodule. For \(1\leq k\leq t_{\mathtt{i}}\) and \(1\leq l\leq t_{\mathtt{j}}\) let \(m^{\alpha}_{k,l}\) be the corank of the linear transformation \(e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}e^{\mathtt{i}}_{k+1}\to e^{\mathtt{j}}_{ l}M^{\prime}_{\alpha}e^{\mathtt{i}}_{k}\) induced from the arrow \(k\to(k+1)\) in \(\Lambda_{\mathtt{i}}\) (where \(t_{\mathtt{i}}+1\) is identified with \(1\)). Let \(\mathcal{M}=(t_{\mathtt{i}},m^{\alpha}_{k,l})\) be the tuple formed by these integers. The following hold:_ 1. _We have an equivalence_ \(\operatorname{rep}\overline{\mathfrak{B}}\cong\operatorname{mod}\,\mathtt{k} (Q(\mathcal{M}))^{\operatorname{op}}\)__ 2. _We have an equivalence_ \[\overline{\operatorname{Gproj}}\,T(M)^{\operatorname{op}}\to\operatorname{ mod}\,\mathtt{k}(Q(\mathcal{M}))^{\operatorname{op}}.\] 3. \(T(M)\) _is Cohen-Macaulay finite if and only if_ \(Q(\mathcal{M})\) _is Dynkin._ 4. _We obtain a bijection_ \[\left\{\begin{aligned} \text{Indecomposable objects} \end{aligned}\right\}^{\cong}\quad\xrightarrow{\cong}\quad\left\{ \begin{aligned} \text{Indecomposable non-injective}\\ \text{objects in }\operatorname{Gproj}\,T(M)\end{aligned}\right\}^{\cong}\] _by composing the equivalence in_ (1) _with the Mimo-construction described in Example_ 7.11_._ Proof.: Since Gorenstein projective modules are the same as monomorphic representations, parts (2) and (4) follow from part (1) and Theorems 5.6 and 7.9. Furthermore, since equivalences induce bijections between the indecomposable objects, part (3) follows from part (2). Hence, we only need to show part (1). Since \(\Lambda_{\mathtt{i}}\) is a \(\operatorname{rad}^{2}\)-zero Nakayama algebras, \(\operatorname{mod}\Lambda^{\operatorname{op}}_{\mathtt{i}}\) is equivalent to the module category of a product of copies of \(\mathtt{k}\). Hence, we can apply Lemma 9.2 to \(\overline{\mathfrak{B}}\), so it suffices to show that the integers \(t_{\mathtt{i}}\) and \(m^{\alpha}_{\underline{k,l}}\) defined in the theorem are equal to the ones in Lemma 9.2. 
This is clear for the \(t_{\mathtt{i}}\)'s, since \(\operatorname{mod}\Lambda^{\operatorname{op}}_{\mathtt{i}}\cong\operatorname{ mod}k^{t_{\mathtt{i}}}\) where \(t_{\mathtt{i}}=0\) gives the zero category. Let \(S^{\mathtt{i}}_{k}\) be the simple \(\Lambda_{\mathtt{i}}\)-module concentrated at vertex \(k\) of \(\Lambda_{\mathtt{i}}\). Note that the integer \(m^{\alpha}_{k,l}\) for \(\overline{\mathfrak{B}}\) in Lemma 9.2 is equal to the number of summands of \(S^{\mathtt{j}}_{l}\) in \(M_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{\mathtt{j}}_{k}\). Since \(M^{\prime\prime}_{\alpha}\) is projective as a bimodule, \(M^{\prime\prime}_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{\mathtt{j}}_{k}\) must be a projective \(\Lambda_{\mathtt{j}}\)-module, and hence has no summands of the form \(S^{\mathtt{j}}_{l}\). Therefore the integer \(m^{\alpha}_{k,l}\) for \(\overline{\mathfrak{B}}\) in Lemma 9.2 is equal to the number of summands of \(S^{\mathtt{j}}_{l}\) in \(M^{\prime}_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{\mathtt{j}}_{k}\). Since \(M^{\prime}_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{\mathtt{j}}_{k}\) has no projective summands by [15, Proposition 2.3], the number must be equal to the dimension of \(e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{ \mathtt{j}}_{k}\). Tensoring \(e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}\) with the exact sequence \(\Lambda e^{\mathtt{j}}_{k+1}\to\Lambda e^{\mathtt{i}}_{k}\to S^{\mathtt{j}}_{k}\to 0\) gives an exact sequence \[e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}e^{\mathtt{i}}_{k+1}\to e^{\mathtt{j}}_{l} M^{\prime}_{\alpha}e^{\mathtt{i}}_{k}\to e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}\otimes_{ \Lambda_{\mathtt{i}}}S^{\mathtt{j}}_{k}\to 0.\] Since the corank of the leftmost map is equal to the dimension of its cokernel, which is \(e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{ \mathtt{j}}_{k}\), this proves the claim. **Remark 9.4**.: As noted in the proof, the integers \(m^{\alpha}_{k,l}\) in Theorem 9.3 could equivalently be defined as the number of summands of \(S^{\mathtt{j}}_{l}\) in \(M_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{\mathtt{j}}_{k}\), or as the dimension of \(e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}\otimes_{\Lambda_{\mathtt{i}}}S^{ \mathtt{j}}_{k}\). They are also equal to the nullity of the linear transformations \(e^{\mathtt{j}}_{l}M^{\prime}_{\alpha}e^{\mathtt{i}}_{k}\to e^{\mathtt{j}}_{l}M^{ \prime}_{\alpha}e^{\mathtt{i}}_{k-1}\) associated to the arrow \((k-1)\to k\) (where \(0\) is identified with \(t_{\mathtt{i}}\)). ### \(\iota\)**Quiver algebras and algebras associated to symmetrizable Cartan matrices.** Let \(Q\) be a finite acyclic quiver with an involutive automorphism \(\tau\) respecting the arrows. In [10] they call such a pair \((Q,\tau)\) an \(\iota\)_quiver_, and associate an algebra \(\Lambda^{\iota}\) to it. Their goal is to extend the work of Bridgeland on the realization of quantum groups via Hall algebras to \(\iota\)quantum groups. In particular, semi-derived Hall algebras of algebras of the form \(\Lambda^{\iota}\) are isomorphic to universal quasi-split \(\iota\)quantum groups of finite type [11, Theorems G and I]. It turns out that the category of finitely generated \(\Lambda^{\iota}\)-modules is equivalent to the category of representations of a prospecies satisfying the conditions in Theorem 9.3, see [11, Section 2.4]. 
Furthermore, the monomorphism category of the corresponding modulation is equal to the category of Gorenstein projective \(\Lambda^{\iota}\)-modules, and therefore plays an important role when computing the semi-derived Hall algebra, see [11, Theorem C]. For Dynkin quivers the monomorphism category is in addition equivalent to the category of finitely generated projectives over the regular Nakajima-Keller-Scherotzke categories considered in [11]. Explicitly, the prospecies they consider is as follows: Choose a representative for each \(\tau\)-orbit of a vertex in \(Q\), and let \(\mathbb{I}_{\tau}\) be the set of these representatives. The quiver \(Q^{\prime}\) has as vertices the set \(\mathbb{I}_{\tau}\), it has no double arrows, and there is an arrow from \(\mathtt{i}\) to \(\mathtt{j}\) in \(Q^{\prime}\) if and only if there is an arrow from a vertex in the \(\tau\)-orbit of \(\mathtt{i}\) to the \(\tau\)-orbit of \(\mathtt{j}\) in \(Q\). The prospecies on \(Q^{\prime}\) is given by a tuple \((\mathbb{H}_{\mathtt{i}},\mathtt{j}\mathbb{H}_{\mathtt{i}})\) where \[\mathbb{H}_{\mathtt{i}}=\begin{cases}\mathtt{k}[x]/(x^{2}),&\text{if }\tau( \mathtt{i})=\mathtt{i}\\ \mathtt{k}(\ 1\xleftrightarrow{\raisebox{-1.0pt}{\includegraphics[height=14.0pt]{./.png}}}\ 2\ )/(xy,yx)&\text{if }\tau(\mathtt{i})\neq\mathtt{i}.\end{cases}\] For the description of the bimodules \(\mathtt{j}\mathbb{H}_{\mathtt{i}}\) see page 16 in [11]. **Proposition 9.5**.: _Let \((Q,\tau)\) be an \(\iota\)quiver, let \((\mathbb{H}_{\mathtt{i}},\mathtt{j}\mathbb{H}_{\mathtt{i}})\) be the associated prospecies on the quiver \(Q^{\prime}\) as above, and let \(\mathcal{M}=(t_{\mathtt{i}},m^{\alpha}_{k,l})\) be the associated tuple of integers in Theorem 9.3. Then \(Q^{\prime}(\mathcal{M})=Q\)._ Proof.: First note that the association \[(\mathtt{i},1)\mapsto\mathtt{i}\quad\text{and}\quad(\mathtt{i},2)\mapsto\tau (\mathtt{i})\] gives a bijection between \(Q^{\prime}(\mathcal{M})_{0}\) and \(Q_{0}\). Also, all nonzero \(\mathtt{j}\mathbb{H}_{\mathtt{i}}\) are of Loewy length \(1\) as \(\mathbb{H}_{\mathtt{j}}\)-\(\mathbb{H}_{\mathtt{i}}\)-bimodules, since they vanish when multiplying with any combination of two of the nilpotent elements \(\varepsilon_{\mathtt{i}}\) and \(\varepsilon_{\mathtt{j}}\) of \(\mathbb{H}_{\mathtt{i}}\) and \(\mathbb{H}_{\mathtt{j}}\), see the description on page 16 in [11]. Since all nonzero projective \(\mathbb{H}_{\mathtt{j}}\)-\(\mathbb{H}_{\mathtt{i}}\)-bimodules have Loewy length \(2\), it follows that \(\mathtt{j}\mathbb{H}_{\mathtt{i}}\) has no nonzero summands which are projective. Therefore, by Theorem 9.3 the number of arrows from \((\mathtt{i},k)\) to \((\mathtt{j},l)\) in \(Q^{\prime}(\mathcal{M})\) is equal to the corank of the \(\mathtt{k}\)-morphism \(e^{j}_{\mathtt{j}}\mathbb{H}_{\mathtt{i}}e^{k}_{k+1}\to e^{j}_{\mathtt{j}} \mathbb{H}_{\mathtt{i}}e^{k}_{\mathtt{i}}\) as in Theorem 9.3. Using the \(\mathtt{k}\)-linear basis of the bimodules \(\mathtt{j}\mathbb{H}_{\mathtt{i}}\) on page 16 in [11], we see that this is equal to the number of arrows in \(Q\) from the vertex \(\tau^{k-1}(\mathtt{i})\) to \(\tau^{l-1}(\mathtt{j})\). The claim follows. 
**Remark 9.6**.: By Theorem 9.3 we have an equivalence \[\overline{\mathrm{Gproj}}\,(\Lambda^{\iota})^{\mathrm{op}}\to\mathrm{mod}( \mathtt{k}Q)^{\mathrm{op}}.\] In particular, it induces a bijection from the indecomposable non-projective Gorenstein projective \(\Lambda^{\iota}\)-modules to the indecomposable \(\mathtt{k}Q\)-modules, which recovers [11, Corollary 3.21]. Furthermore, by Theorem 9.3 (4) we have an explicit description of the inverse to this bijection. It would be interesting to investigate how these results can be used to study \(\Lambda^{\iota}\) and its Hall-algebra. Given a symmetric Cartan matrix \(C=(c_{\mathtt{i},\mathtt{j}})_{\mathtt{i},\mathtt{j}\in I}\) with (acyclic) orientation \(\Omega\subset I\times I\), one can associate a path algebra \(\mathtt{k}Q\) whose quiver \(Q\) has vertex set \(I\) and has \(|c_{\mathtt{i},\mathtt{j}}|\) arrows from \(i\) to \(j\) if \((j,i)\in\Omega\). This was extended in [18], where they associate an algebra \(H=H(C,D,\Omega)\) to the data of a symmetrizable Cartan matrix \(C=(c_{\mathtt{i},\mathtt{j}})_{\mathtt{i},\mathtt{j}\in I}\) with symmetrizer \(D=\mathrm{diag}(d_{\mathtt{i}}\ |\ \mathtt{i}\in I)\) and (acyclic) orientation \(\Omega\subset I\times I\). The category of finitely generated left modules over \(H(C,D,\Omega)\) is equivalent to representations of a propsecies \((H_{\mathtt{i}},\mathtt{j}_{H_{\mathtt{i}}})\) over a quiver \(Q^{\prime}\). Explicitly, the quiver \(Q^{\prime}\) has vertex set \(I\), it has no double arrows, and there is an arrow from \(\mathtt{i}\) to \(\mathtt{j}\) if \((j,i)\in\Omega\). The algebra \(H_{\mathtt{i}}\) is equal to \(\mathtt{k}[x]/(x^{d_{\mathtt{i}}})\), and the bimodule \(\mathtt{j}H_{\mathtt{i}}\) is described in [18, Section 5]. If \(d_{\mathtt{i}}\leq 2\) for all \(\mathtt{i}\in I\), then we can apply Theorem 9.3 to this prospecies. The following proposition gives a description of \(Q(\mathcal{M})\) in this case. We use it to deduce Theorem E in the introduction. **Proposition 9.7**.: _Let \(C=(c_{\mathtt{i},\mathtt{j}})_{\mathtt{i},\mathtt{j}\in I}\) be a symmetrizable Cartan matrix with symmetrizer \(D=\operatorname{diag}(d_{\mathtt{i}}\mid\mathtt{i}\in I)\), and let \(\Omega\subset I\times I\) be an orientation of \(C\). Assume \(d_{\mathtt{i}}\leq 2\) for all \(\mathtt{i}\in I\). Let \(I^{\prime}\subset I\) be the subset consisting of all \(\mathtt{i}\) for which \(d_{\mathtt{i}}=2\), and let \(\mathcal{M}=(t_{\mathtt{i}},m_{k,l}^{\alpha})\) be the tuple defined from the prospecies \((H_{\mathtt{i}},\mathtt{j},H_{\mathtt{i}})\) as in Theorem 9.3. Then \(Q^{\prime}(\mathcal{M})\) is equal to the quiver defined by the symmetric Cartan matrix \(C|_{I^{\prime}\times I^{\prime}}\) with orientation \(\Omega|_{I^{\prime}\times I^{\prime}}\)._ Proof.: By definition, \(t_{\mathtt{i}}=1\) if \(\mathtt{i}\in I^{\prime}\), and \(t_{\mathtt{i}}=0\) otherwise. Hence, the vertex set \(Q^{\prime}(\mathcal{M})_{0}\) can be identified with \(I^{\prime}\). Now assume \(t_{\mathtt{i}}=1=t_{\mathtt{j}}\). Note that all nonzero \(\mathtt{j}H_{\mathtt{i}}\) are of Loewy length \(1\) as \(H_{\mathtt{j}}\)-\(H_{\mathtt{i}}\)-bimodules, since they vanish when multiplying by any combination of two of the nilpotent elements \(\varepsilon_{\mathtt{i}}\) and \(\varepsilon_{\mathtt{j}}\) of \(H_{\mathtt{i}}\) and \(H_{\mathtt{j}}\), see the description in [1, Section 5]. 
Since all nonzero projective \(H_{\mathtt{j}}\)-\(H_{\mathtt{i}}\)-bimodules have Loewy length \(2\), it follows that \(\mathtt{j}H_{\mathtt{i}}\) have no nonzero projective summands. Therefore by Theorem 9.3 the number of arrows from \(\mathtt{i}\) to \(\mathtt{j}\) in \(Q^{\prime}(\mathcal{M})_{0}\) is equal to the corank of the map \(\mathtt{j}H_{\mathtt{i}}\to\mathtt{j}H_{\mathtt{i}}\) given by multiplication with \(x\) on the right. Now \(\mathtt{j}H_{\mathtt{i}}\cong H_{\mathtt{i}}^{[c_{\mathtt{i},\mathtt{j}}]}\) as a right \(H_{\mathtt{i}}\)-module, see Section 5 in [1]. Hence, the corank is equal to \(|c_{\mathtt{i},\mathtt{j}}|\), which proves the claim. Proof of Theorem E.: Since the algebra \(H=H(C,D,\Omega)\) is \(1\)-Gorenstein by [1, Theorem 1.2] (see also the discussion above), the singularity category of \(H\) is equivalent to the stable category \(\overline{\operatorname{Gproj}}\,H\) by Buchweitz' theorem [1]. By Theorem 9.3 there is an equivalence from \(\overline{\operatorname{Gproj}}\,H\) to \(\operatorname{mod}\mathtt{k}(Q^{\prime}(\mathcal{M}))^{\operatorname{op}}\), and hence there is a bijection between their indecomposable objects. By Proposition 9.7 the quiver \(Q^{\prime}(\mathcal{M})\) is obtained from the the symmetric Cartan matrix \(C|_{I^{\prime}\times I^{\prime}}\) with orientation \(\Omega|_{I^{\prime}\times I^{\prime}}\). Hence, by Gabriel's theorem there are finitely many isomorphism classes of indecomposable \(\mathtt{k}Q^{\prime}(\mathcal{M})\)-modules if and only if \(C|_{I^{\prime}\times I^{\prime}}\) is Dynkin, and in that case they are in bijection with the positive roots of \(C|_{I^{\prime}\times I^{\prime}}\). This proves the claim. ## Acknowledgements We would like to thank Karin M. Jacobsen for asking the question which lead to Theorem 5.14. We would also like to thank Henning Krause for mentioning how to apply [1, Theorem 13.1.28] to Dedekind domains in Remark 5.18.
2307.04975
Exoplanets Around Red Giants: Distribution and Habitability
As the search for exoplanets continues, more are being discovered orbiting Red Giant stars. We use current data from the NASA Exoplanet Archive to investigate planet distribution around Red Giant stars and their presence in the host's habitable zone. As well, we update the power law relation between planet mass and stellar radius found in previous studies and provide more detailed investigations on this topic. Ten Red Giant-hosted exoplanets are found to be in the optimistically calculated habitable zone, five of which are in a more conservatively calculated habitable zone. We believe additional exoplanets can be found in habitable zones around Red Giants using the direct imaging and other methods, along with more powerful detection instrumentation.
Ruixuan E. Chen, Jonathan H. Jiang, Philip E. Rosen, Kristen A. Fahy, Yanbei Chen
2023-07-11T02:29:43Z
http://arxiv.org/abs/2307.04975v1
# Exoplanets Around Red Giants: Distribution and Habitability ###### Abstract As the search for exoplanets continues, more are being discovered orbiting Red Giant stars. We use current data from the NASA Exoplanet Archive to investigate planet distribution around Red Giant stars and their presence in the host's habitable zone. As well, we update the power law relation between planet mass and stellar radius found in previous studies and provide more detailed investigations on this topic. Ten Red Giant-hosted exoplanets are found to be in the optimistically calculated habitable zone, five of which are in a more conservatively calculated habitable zone. We believe additional exoplanets can be found in habitable zones around Red Giants using the direct imaging and other methods, along with more powerful detection instrumentation. **1. Introduction** In the distant future when our Sun becomes a Red Giant, the habitable zone (HZ) in the Solar System may move towards the outer planets where the moons of Jupiter and Saturn might be candidates for our future generations to live (Sparrman 2022). Near-term considerations also prompt interest in exoplanet and exomoon systems of Red Giant hosts as some of these worlds may presently be in the HZ of their parent star. In this paper we examine data from the NASA Exoplanet Archive, focusing on exoplanets around Red Giant (or Red Subgiant) stars. When a star leaves the Main Sequence and begins its evolution into the Red Giant Branch (RGB), it undergoes a series of changes. As the fusion of hydrogen progresses in the core of a Main Sequence star, its effective temperature and luminosity increase slowly over time. At the end of a star's Main Sequence stage its core is composed of helium while hydrogen begins to burn in the shell surrounding the core. The star then moves along the Red Giant Branch of the Hertzsprung-Russell (H-R) diagram, with its temperature moderately decreasing, and its radius and luminosity significantly increasing. As the host star evolves beyond Main Sequence, the orbits of its planets will also evolve. Due to the host's mass loss, its surrounding planets will move outwards (Zahn 1977). On the other hand, tidal interactions tend to shrink the orbital radius of the planets (Villaver et al. 2014). In particular, Villaver et al. 2014 predicted that tidal interaction would cause planets to plunge into the star, and get _engulfed_, before \(a/R_{\rm s}\)\(<\)3, where \(a\) is the orbital semi-major axis of the planet and \(R_{\rm s}\) is the stellar radius. The first aim of our paper is to study the distribution of planets around Red Giants. Previous research (Jiang and Zhu 2018) found a power law relation between planet mass and stellar radius. Using the data of newfound exoplanets, we update these results with more extensive analysis. The associated distribution of three variables is focused upon: the mass of the planet (\(M_{p}\)), the radius of the star (\(R_{\rm s}\)), and the orbital semi-major axis (\(a\)) -- with the aim of gaining additional insight into the evolution of planets as the host star evolves post-Main Sequence. To scientists, and the general public alike, habitability and the existence of extraterrestrial life is a topic of high interest (Kaltenegger, 2017). A habitable zone is an annular region around a given star where any hosted planets have a relatively high likelihood of moderate average surface temperature, allowing for biological life (as we know it) to possibly exist. 
The HZ is usually determined primarily by the stellar energy flux from the host. More specifically, however, for a planet to be habitable it must not only be in its host's HZ, but also possess the appropriate atmospheric and geological conditions needed to maintain surface liquid water. It has been predicted by many authors that as the Sun enters the RGB, Earth will no longer be in the Solar System's HZ. As investigated by many studies (Danchi et al., 2005; Lopez et al., 2005; Cuntz et al., 2012; Ramirez and Kaltenegger, 2016; Gallet et al., 2019; Sparrman, 2022), post-Main Sequence evolution of the Sun will alter its HZ, possibly rendering some of the outer planets' moons habitable to life such as that found on Earth. For a grid of stars with various mass and metallicity, Ramirez and Kaltenegger (2016) explored the evolution models of planets with their stars, and subsequent durations of planets in the HZ in detail. Their findings suggest three candidate systems that will become habitable once the host star becomes a Red Giant. In this paper we apply the criterion used by two previous studies (Ramirez and Kaltenegger, 2016; Sparrman, 2022), initially proposed by Kopparapu and colleagues (Kopparapu et al., 2013), to current data in the NASA Exoplanet Archive (NEA), identifying those exoplanets in the HZ and discussing further parameterized regions not yet observed which may also contain habitable planets.

Figure 1: H-R diagram of host stars from the 5063 confirmed planets in the NASA Exoplanet Archive. Separated out are the 215 Red Giants via the star's location on the H-R diagram. More specifically, K-giants (with absolute magnitude less than 2.5) are shown in red dots, while K-subgiants (with absolute magnitude between 2.5 and 4) are shown in pink dots. Giants (subgiants) having optimistically calculated habitable planets are labeled by green (light green) plus signs (see Sec. 3 for details).

## 2 Data Collection and Distribution of Planets Around Red Giants

In this section, we briefly introduce our data collection and then discuss the distribution of RG planets in the (\(M_{\rm p}\), \(a\), \(R_{\rm s}\)) parameter space. In Figure 1, we plot an H-R diagram of host stars using luminosity relative to the Sun (\(L/L_{\sun}\)) and stellar surface effective temperature (\(T_{\rm eff}\)) values from the NEA, identifying 215 Red Giants (and sub-Giants) - plotted in red (and pink) dots. More specifically, we identified these giants according to their location in the H-R diagram in Figure 1: the giants here are K-giants with absolute magnitudes below 2.5, while the subgiants are K-giants with absolute magnitudes between 2.5 and 4. In the Appendix, in Tables 3 and 4, we list the identifiers, stellar mass, stellar radius, orbital semi-major axis and planet mass. We note that some of these planets, including 42 Dra, \(\gamma\) Dra (Dollinger and Hartmann 2021, henceforth referred to as D&H), and \(\alpha\) Tau (Reicher et al. 2019), have been questioned as false positives. D&H further speculated that a substantial fraction of planets around K-giants with radii greater than \(21R_{\sun}\) can be false positives, based on the clustering of their orbital periods, the lack of a planet-metallicity correlation, as well as the excess number of planets around K-giants compared with main-sequence stars. We shall make comparisons with D&H in Section 2.3 below. 
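The giant/subgiant selection just described can be reproduced approximately from the archive's bulk download. The sketch below is a minimal illustration, assuming the standard Planetary Systems column names (`sy_vmag`, `sy_dist`, `st_teff`, `hostname`) and a rough K-type temperature window; the exact cuts and any extinction correction are not spelled out in the text, so everything other than the absolute-magnitude limits should be treated as an assumption.

```python
# Minimal sketch of the giant/subgiant selection, assuming a CSV export of the
# NASA Exoplanet Archive "Planetary Systems" table with its standard column names.
import numpy as np
import pandas as pd

nea = pd.read_csv("PS_confirmed_planets.csv")                # one row per planet
nea = nea.dropna(subset=["sy_vmag", "sy_dist", "st_teff"])

# Absolute V magnitude from apparent magnitude and distance (pc), no extinction.
abs_mag = nea["sy_vmag"] - 5.0 * np.log10(nea["sy_dist"] / 10.0)

# Rough K-type temperature window (assumption; the paper does not state one).
is_k_type = nea["st_teff"].between(3900.0, 5300.0)

giants    = nea[is_k_type & (abs_mag < 2.5)]
subgiants = nea[is_k_type & (abs_mag >= 2.5) & (abs_mag < 4.0)]

print(f"{giants['hostname'].nunique()} giant hosts, "
      f"{subgiants['hostname'].nunique()} subgiant hosts")
```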
### Planet-Mass-Stellar-Radius Relation

A previous study (Jiang and Zhu 2018) derived a planet mass-stellar radius relation for 150 exoplanets orbiting Red Giants: \[M_{\rm p}/M_{\earth}=a\big{(}R/R_{\sun}\big{)}^{b} \tag{1}\] with best-fit parameters \(a\)=150 and \(b\)=0.88. They further argued that Equation 1 is not due to observational bias from the radial-velocity detection method. Folding in the Archive's new data as well, we updated the relation and found a broadly similar result: \(a=87^{+20}_{-16}\) and \(b=1.04\pm 0.09\), as shown in Figure 2. The adjustment to a lower \(a\) value and higher \(b\) value obtained here can most likely be attributed to the post-2018 data points, which have lower values for both planetary mass and stellar radius. When restricted to planets discovered by the radial velocity (RV) method (205 out of the 215 planets), we obtain an alternative fit of \(a=181^{+33}_{-28}\) and \(b=0.78\pm 0.07\).

Figure 2: \(M_{\rm p}\) vs. \(R_{\rm s}\) plot for MS (blue) and RG hosted planets discovered by radial velocity (red) and transit (purple +) methods. The black dashed line indicates the updated power-law \(M_{\rm p}\) vs. \(R_{\rm s}\) relation for all 215; the gray dashed line corresponds to the power-law fit for planets discovered by radial velocity. Orange dots represent minimum \(M_{\rm p}\) for each red giant that can lead to RV amplitude greater than the stellar intrinsic level obtained by Hekker et al. 2008 [Cf. Eq. (3)].

### Observed Evolution of Exoplanet Population as the Host Star Evolves

Further investigation of the origin of the \(M_{\rm p}\) vs. \(R_{\rm s}\) relation notes that the stellar radius tracks with the post-Main-Sequence evolution stage of the host star. The fact that \(M_{\rm p}\) increases with \(R_{\rm s}\) corresponds to a relative lack of less massive planets around more evolved stars. We shall use Figures 3 and 4, in addition to Figure 2, to further investigate the evolution of the population of exoplanets around stars as they evolve. In Figures 3a and 3b, we split \(R_{\rm s}\) into three different intervals and plot Main Sequence (silver dots) and Red Giant planets in each interval separately as planet mass \(M_{\rm p}\) (in Earth masses) vs. orbital semi-major axis \(a\) (in astronomical units). In particular, we separate RG planets into three categories according to \(R_{\rm s}\): \(R_{\rm s}/R_{\odot}<5\) (blue dots), \(5<R_{\rm s}/R_{\odot}<25\) (green dots), and \(R_{\rm s}/R_{\odot}>25\) (red dots). The \((a,M_{\rm p})\) region occupied by RG planets shrinks as \(R_{\rm s}\) increases -- from its left side, with small \(a\), from the bottom side, with low \(M_{\rm p}\), and from the right side, with large \(a\). This shrinkage is best viewed from the right (b) panel of Figure 3, which focuses on the specific region of RG planets and adds contours generated via Kernel Density Estimate (KDE) for clarity. In Figure 4, we plot the orbital semi-major axis vs. stellar radius ratioed to solar radii, illustrating that as \(R_{\rm s}\) increases, the distribution of \(a\) narrows. In particular, \(a/R_{\rm s}\)=3 is seen as a cutoff at lower values of \(a\). At this stage, it is useful to point out that Figure 2 and Figure 4 each separately illustrates the continuous evolutions of \(M_{\rm p}\) and \(a\), respectively, as \(R_{\rm s}\) increases, while in Figure 3b we separate the evolution of \(R_{\rm s}\) into three bins and illustrate the lumped (\(a\), \(M_{\rm p}\)) distribution in each bin.

Figure 3a and 3b: Left panel (a): \(M_{\rm p}\) vs. \(a\) plot for main-sequence (silver), Red Giant planets (blue for \(R_{\rm s}/R_{\odot}<5\), green for \(5<R_{\rm s}/R_{\odot}<25\), and red for \(R_{\rm s}/R_{\odot}>25\)), as well as Solar System planets (black). Right panel (b): zoomed-in version for Red Giant planets, with Kernel Density Estimate contours also shown.

The shrinkage of the (\(a\),\(M_{\rm p}\)) distribution as we progress from blue to green and to red is more continuously represented as marginal distributions in Figure 4 for \(a\) and in Figure 2 for \(M_{\rm p}\). With Figures 2, 3 and 4, let us examine the evolution of (\(a\), \(M_{\rm p}\)) in more detail. Regarding the disappearance of low-mass planets with increasing \(R_{\rm s}\), we can see from Figure 3a (left panel) that for stars with a radius less than \(25R_{\odot}\) many planets with masses 200 to \(1000M_{\oplus}\) exist at distances 2 to 3 AU. Yet, these planets are not seen orbiting stars with \(R_{\rm s}>25R_{\odot}\) - even though much more massive planets are seen at the same distance. This disappearance of low-mass planets with increasing \(R_{\rm s}\) follows directly from the \(M_{\rm p}\) vs. \(R_{\rm s}\) power-law fit in Figure 2. Note that Solar System planets lie on the lower part of the plot; only Jupiter is near the reach of current detection methods. However, Jovian-mass exoplanets at comparable orbital distances (\(\sim\)5 AU) are not seen around Red Giants with \(R_{\rm s}/R_{\odot}>25\). For the disappearance of high semi-major axis planets as the star evolves, as can also be seen from Figure 4, one apparent explanation would be the inward migration of hosted planets. Taken a step further, if low-mass planets migrate inward more efficiently, migration may explain their disappearance as well. However, it remains unclear whether migration can be sufficiently substantial within the lifetimes of these stars. Another possibility may be that more evolved host stars in our data tend to have lower metallicity and are older, and therefore were apt to have differently characterized populations of planets formed around them. However, such differences will likely have to be very substantial to be influential in this respect. For the disappearance of planets with low semi-major axes, it is straightforward to anticipate that planets with small orbital distances will be engulfed and consumed as their host evolves and expands. According to Villaver et al. 2014, tidal interactions tend to speed up the engulfment of planets, and no planets should survive once \(a/R_{\rm s}<3\). In Figure 4, we plot the orbital semi-major axis vs. stellar radius ratioed to solar radii, clearly illustrating that \(a/R_{\rm s}\)=3 is a cutoff and providing empirical evidence for tidally-accelerated engulfment.

Figure 4: \(a\) vs. \(R_{\rm s}\) plot for exoplanets around Red Giants (red dots), with Main Sequence planets shown in silver. Solid line indicates \(a\)=\(3R_{\rm s}\), while dashed line indicates \(a\)=\(R_{\rm s}\).

### Further Interpretations of the Evolutions in Population

D&H pointed out a concentration of orbital periods (between 300 days and 800 days) for exoplanets around red giants with \(R_{\rm s}>21R_{\sun}\). 
This is consistent with our Figure 4 and Figure 3b (the horizontal spread of red dots), since orbital period is highly correlated with orbital semi-major axis \(a\) and because the host stars have very similar masses. They argued that, since this range of periods falls within that of the intrinsic variations of such stars (as modeled by Saio et al. 2015), a fraction of these may not be actual planets. On the other hand, they provided plausible reasons for planets outside of this period range not to be discovered. For longer periods (corresponding to larger \(a\)), this could be due to the smaller RV variation being hidden under intrinsic fluctuations of the surface of the host star, while for shorter periods (corresponding to smaller \(a\)), this could be due to the engulfment of planets by their host stars. In this way, one might speculate that the population of exoplanets with the concentrated distribution of orbital periods can still exist, but some data points are potentially contaminated with intrinsic stellar oscillations. We shall contrast our results with those of D&H, and extend the discussion. D&H argued that planets with periods shorter than 300 days could be absent due to engulfment, similar to what we propose in the previous section. We note that these planets are the closest to the star, tend to cause the highest RV variations, and are hence _the least_ prone to be missed in RV measurements. Figure 4, where we plot semi-major axis \(a\) against \(R_{\rm s}\), when combined with the results of Villaver et al. 2014, provides direct evidence that short-period exoplanets are missing due to engulfment. For periods longer than 800 days, D&H argued that they may be missing due to the fact that they cause lower variation in RV and hence are more prone to be hidden below intrinsic variations of the host stars. We note that the RV amplitude is a combined effect of planet mass and semi-major axis, with the amplitude \(K_{1}\) of the RV signal given by: \[K_{1}=\left(\frac{2\pi G}{P}\right)^{1/3}\frac{M_{\rm p}\,\sin i}{\sqrt{1-e^{2}}\,M_{\rm s}^{2/3}}\propto M_{\rm p}a^{-1/2}, \tag{2}\] with \(P\) the orbital period, \(i\) the orbital inclination angle and \(e\) the orbital eccentricity. Here we have highlighted its dependence on \(M_{\rm p}\) and \(a\). In this way, the disappearance of low-mass and large-\(a\) planets as \(R_{\rm s}\) increases could both be due to an observational cutoff in \(K_{1}\). Under this assumption, we can first make a re-interpretation of the \(M_{\rm p}\)-\(R_{\rm s}\) relation seen in Figure 2. More specifically, we use results from Hekker et al. 2008, who noticed that for stars with lower surface gravity \(g\) (i.e., larger radii), the amplitudes of their RV variations tend to increase. They found a baseline value, given approximately by \[K_{1}^{\rm int}=2\times 10^{3}\ [g/({\rm cm/s^{2}})]^{-0.6}\ {\rm m/s}, \tag{3}\] which they _interpret_ as arising from intrinsic fluctuations of the star. Here, \(g\) is the surface gravitational acceleration of the star. For each red giant, combining Eqs. (2) and (3), assuming \(e\)=0, and using \(a=3R_{\rm s}\), we obtain the minimum planet mass \(M_{\rm p}^{\rm min}\) the star can host in order for the \(K_{1}\) due to the planet to be greater than the intrinsic \(K_{1}^{\rm int}\): \[M_{\rm p}^{\rm min}=K_{1}^{\rm int}\sqrt{\frac{3M_{\rm s}R_{\rm s}}{G}}. \tag{4}\] \(M_{\rm p}^{\rm min}\) is plotted as orange dots in Figure 2, indeed providing an excellent lower bound for planets around substantially evolved stars. 
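To make the detection-threshold argument concrete, the following sketch evaluates Eqs. (2)-(4) for a single star, assuming a circular orbit (\(e=0\)), \(\sin i=1\) and \(a=3R_{\rm s}\) exactly as above. The physical constants and the example star (1.5 \(M_{\sun}\), 20 \(R_{\sun}\), \(\log g\approx 2\) in cgs) are illustrative values only, not entries taken from the archive; applying the same function row by row to the red-giant sample would give the analogue of the orange dots in Figure 2.

```python
# Minimal sketch of the minimum detectable planet mass from Eqs. (2)-(4),
# assuming e = 0, sin i = 1 and a = 3 R_s, with K1 = M_p * sqrt(G / (M_s a)).
import numpy as np

G       = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN   = 1.989e30         # kg
R_SUN   = 6.957e8          # m
M_EARTH = 5.972e24         # kg

def k1_intrinsic(log_g_cgs):
    """Eq. (3): intrinsic RV jitter (m/s) from the stellar surface gravity g in cgs."""
    g = 10.0 ** log_g_cgs
    return 2.0e3 * g ** (-0.6)

def min_detectable_mass(m_star_msun, r_star_rsun, log_g_cgs):
    """Eq. (4): smallest planet mass (Earth masses) whose RV amplitude at a = 3 R_s
    exceeds the intrinsic level of Eq. (3)."""
    m_s = m_star_msun * M_SUN
    a   = 3.0 * r_star_rsun * R_SUN
    k1  = k1_intrinsic(log_g_cgs)
    return k1 * np.sqrt(a * m_s / G) / M_EARTH

# Illustrative example: a 1.5 M_sun giant with R_s = 20 R_sun and log g ~ 2.0 (cgs).
print(f"M_p_min ~ {min_detectable_mass(1.5, 20.0, 2.0):.0f} Earth masses")
```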
We also note that in Figures 2 and 3b the upper limit in \(M_{\rm p}\) for higher radii stars is the same as for lower radii stars. Together, the varying lower bound and roughly constant upper bound in \(M_{\rm p}\) for increasing \(R_{\rm s}\) is consistent with the notion that the mass distribution of exoplanets stays the same as the star evolves, yet the difficulty in observation gradually eliminates low-mass planets from the observed distribution. Further independent studies into the amplitude of stellar oscillations and practical limitations of the RV method are necessary to confirm this picture.

## 3 Habitable Planets Around Red Giants

In this section we discuss the habitability of planets around Red Giants, briefly reviewing habitability criteria in Section 3.1 and presenting our findings in Section 3.2.

### Criteria for Habitability

There exist multiple habitability conditions for a given exoplanet (or exomoon), most of which rely on the existence of water in liquid form on at least a portion of that world's surface. The simplest criterion uses equilibrium temperature, namely, the black-body radiation from the planet has to balance the radiation it absorbs from the star. If we define \(S\) as the flux of radiation from the host, given by \[S=\frac{L_{\rm s}}{4\pi a^{2}} \tag{5}\] where \(L_{\rm s}\) is the star's luminosity and \(a\) is the orbital semi-major axis of the star's exoplanet, the equilibrium temperature of the exoplanet is then given by \[T_{\rm eq}=k\left[\frac{S(1-A)}{4\sigma}\right]^{1/4} \tag{6}\] where \(A\) is the planetary albedo and \(\sigma\) is the Stefan-Boltzmann constant. The simplest habitability condition is \(273\,{\rm K}<T_{\rm eq}<373\,{\rm K}\), with the low \(T_{\rm eq}\) defining the Outer boundary of the Habitable Zone (OHZ) and the high \(T_{\rm eq}\) defining the Inner boundary of the Habitable Zone (IHZ). The scalar quantity \(k\) is a correction factor that can be used to approximately incorporate the greenhouse effect of an assumed planetary atmosphere. More realistic criteria exist in the literature. In this paper, we shall adopt two criteria obtained by a previous study (Kopparapu et al., 2013) in which an _effective_ solar flux is expressed in terms of \[S_{\rm eff}\equiv S/S_{\oplus} \tag{7}\] where \(S_{\oplus}\) is the current solar energy flux at the location of the Earth, as well as the temperature \(T\) of the host star. Note that \(S_{\rm eff}\) is dimensionless. In this paper, we shall adopt two different ways to define HZ boundaries, one conservative, the other optimistic. The conservative HZ accounts for greenhouse effects in the atmosphere of the planet, taking the inner boundary to be defined by the moist greenhouse effect where \(S_{\rm eff}\) allows sufficient water vapor to exist in the stratosphere. The outer boundary is defined by the maximum heat retained by the planet while still providing habitable conditions. This is also known as the maximum greenhouse effect. The boundaries are summarized by the following fitting formula (Kopparapu et al., 2013) for the host star temperature range of \(2600\ {\rm K}<T<7200\ {\rm K}\): \[S_{\rm limit}(T)=S_{0}+aT_{*}+bT_{*}^{2}+cT_{*}^{3}+dT_{*}^{4},\qquad T_{*}=T-5780\ {\rm K}, \tag{8}\] where values of \(a\), \(b\), \(c\) and \(d\) for conservative/optimistic, inner/outer boundaries are reproduced in Table 1. 
As noted in Table 1, a more optimistic approach uses the (theorized) history of Solar System planets Venus and Mars to determine the inner and outer bounds of the HZ. Here, the inner boundary of the HZ is based on the assertion that Venus has not had liquid water on its surface for only the past billion years - i.e., a billion years ago (recent) Venus might have had surface conditions suitable for water to exist. On the other hand, there is mounting evidence that (early) Mars had liquid water flowing on its surface 3.8 billion years ago. For these reasons, they define the inner boundary using the \(S_{\rm eff}\) of recent Venus and the outer boundary using the \(S_{\rm eff}\) of early Mars. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \(S_{0}\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline **Recent Venus** & 1.7753 & 1.4316*10\({}^{-4}\) & 2.9875*10\({}^{-9}\) & -7.5702*10\({}^{-12}\) & -1.1635*10\({}^{-15}\) \\ (optimistic inner boundary) & & & & & & \\ \hline **Moist Green House** & 1.0140 & 8.1774*10\({}^{-5}\) & 1.7063*10\({}^{-9}\) & -4.3241 *10\({}^{-12}\) & -6.6462*10\({}^{-16}\) \\ (conservative inner boundary) & & & & & & \\ \hline **Maximum Green House** & 0.3438 & 5.8942*10\({}^{-5}\) & 1.6558*10\({}^{-9}\) & -3.0045*10\({}^{-12}\) & -5.2983*10\({}^{-16}\) \\ (conservative outer boundary) & & & & & & \\ \hline **Early Mars** & 0.3179 & 5.4513*10\({}^{-5}\) & 1.5313*10\({}^{-9}\) & -2.7786*10\({}^{-12}\) & -4.8997*10\({}^{-16}\) \\ (optimistic outer boundary) & & & & & & \\ \hline \end{tabular} \end{table} Table 1: Fitting parameters \(S_{0}\), \(a\), \(b\), \(c\) and \(d\) adapted from previous study (Kopparapu et al., 2013) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline **Planet Name** & **Discovery Paper** & \begin{tabular}{c} **Spectral** \\ **Type** \\ **(NEA)** \\ \end{tabular} & \begin{tabular}{c} **Abs** \\ **Mass** \\ **(M)** \\ \end{tabular} & \begin{tabular}{c} **Host** \\ **Radius** \\ **(R)** \\ \end{tabular} & \begin{tabular}{c} **Orbital** \\ **(days)** \\ \end{tabular} & \begin{tabular}{c} **S\({}_{\rm eff}\) \\ **Mass** \\ **(M\({}_{\rm J}\))** \\ \end{tabular} \\ \hline HD 1605 c & Hirakawa et al. 2015 & K1 IV & 2.8 & 1.33 & 3.49 & 233 & 0.50 & 3.62 \\ \hline HD 219415 b & Gettel et al. 2012 & K0 III & 2.8 & 1.0 & 2.90 & 207 & 0.41 & 1.0 \\ \hline HD 4732 c & Sato et al. 2013 & K0 IV & 2.2 & 1.74 & 5.40 & 118 & 0.73 & 2.37 \\ \hline HD 73534 b & Valenti et al. 2009 & G5 (IV) & 3.6 & 1.16 & 2.58 & 168 & 0.37 & 1.11 \\ HIP 56640 b & Jones et al. 2021 & K1 III & 2.5 & 1.04 & 4.93 & 157 & 0.81 & 3.67 \\ \hline HD 125390 b & Luhn et al. 2019 & G7 V (III) & 2.3 & 1.36 & 6.47 & 342 & 1.33 & 22.2 \\ HD 145934 b & Feng et al. 2015 & K0 & 1.7 & 1.75 & 5.38 & 215 & 1.07 & 2.28 \\ \hline HD 94834 b & Luhn et al. 2019 & K0 & 2.6 & 1.11 & 4.20 & 38 & 1.31 & 1.26 \\ HD 95089 c & Bryan et al. 2016 & G8/K0 IV & 2.23 & 1.54 & 5.08 & 66 & 1.20 & 3.45 \\ \hline HIP 67851 c & Jones et al. 2015 & K0 III & 2.14 & 1.63 & 5.92 & 167 & 1.20 & 6.3 \\ \hline \end{tabular} \end{table} Table 2: Conservative (shaded) and optimistically (unshaded) habitable planets using the Kopparapu et al. 2013 criterion. Note that most HZ planets are orbiting subgiants (with magnitude between 2.5 and 4). In Table 2, we list conservative and optimistic habitable zone RG-hosted planets. All planets are gas giants with masses ranging from 1 to 22 Jupiter masses (\(M_{\rm J}\)). 
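The habitable-zone test underlying Table 2 can be sketched as follows: evaluate the Eq. (8) fit with the Table 1 coefficients and compare the result with \(S_{\rm eff}\) from Eqs. (5) and (7). The helper names and the example inputs below are placeholders rather than values for any particular planet in Table 2; in practice the stellar luminosity (in \(L_{\sun}\)), \(T_{\rm eff}\) and semi-major axis (in AU) would come from the archive.

```python
# Minimal sketch of the Kopparapu et al. (2013) HZ classification, Eqs. (5)-(8),
# using the coefficients reproduced in Table 1. Valid for 2600 K < T_eff < 7200 K.
COEFFS = {  # S_0, a, b, c, d from Table 1
    "recent_venus":     (1.7753, 1.4316e-4, 2.9875e-9, -7.5702e-12, -1.1635e-15),
    "moist_greenhouse": (1.0140, 8.1774e-5, 1.7063e-9, -4.3241e-12, -6.6462e-16),
    "max_greenhouse":   (0.3438, 5.8942e-5, 1.6558e-9, -3.0045e-12, -5.2983e-16),
    "early_mars":       (0.3179, 5.4513e-5, 1.5313e-9, -2.7786e-12, -4.8997e-16),
}

def s_limit(boundary, t_eff):
    """Eq. (8): effective-flux boundary for a host with effective temperature t_eff [K]."""
    s0, a, b, c, d = COEFFS[boundary]
    t = t_eff - 5780.0
    return s0 + a*t + b*t**2 + c*t**3 + d*t**4

def s_eff(luminosity_lsun, a_au):
    """Eqs. (5) and (7): incident flux in units of the present-day flux at Earth."""
    return luminosity_lsun / a_au**2

def hz_class(luminosity_lsun, t_eff, a_au):
    s = s_eff(luminosity_lsun, a_au)
    if s_limit("moist_greenhouse", t_eff) >= s >= s_limit("max_greenhouse", t_eff):
        return "conservative HZ"
    if s_limit("recent_venus", t_eff) >= s >= s_limit("early_mars", t_eff):
        return "optimistic HZ"
    return "outside HZ"

# Placeholder example: a 5 L_sun, 4900 K subgiant with a planet at 2.5 AU.
print(hz_class(5.0, 4900.0, 2.5))
```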
It should be noted for HD 73534 the luminosity class was missing from the archive and is backfilled here from other sourcing as IV (indicating subgiant). As well for HD 125390, its luminosity was mistyped as V (dwarf) but has been correctly listed in Table 2 as III (giant). On the fourth column, we list the absolute magnitude computed from the V-magnitude and distance data from the Archive, noting that the classification from absolute magnitude does not always agree with the third column. The Red Giant hosts of planets in Table 2 are also shown as green dots in the H-R diagram of Figure 1. As can be readily perceived from Table 2 and Figure 1, these host stars are all in their early stages of evolution on the RGB. ### Red Giant Planets in the Habitable Zones From the NASA Exoplanet Archive, we collected values for stellar luminosity and orbital semi-major axis to calculate \(S_{\rm eff}\). In Figure 5, we show the Red Giant planets on the \(T_{\rm eff}\) vs. \(S_{\rm eff}\) plot with lines indicating HZ boundaries. From the plot, it can be seen that there is a substantial difference between boundaries for the \(T_{\rm eq}\) HZ and Kopparapu HZ. We highlighted Kopparapu et al. 2013 optimistically habitable planets in green. Figure 6 shows Red Giant and Main Sequence planets on a semi-major axis vs. stellar radius plot with habitable planets indicated (light green dots for habitable planets around MS, and dark green dots for habitable planets around subgiants and giants). We also indicate, with purple line segments, the optimistic HZ of the host stars of all planets around subgiants and giants. As illustrated, habitable planets -- and indeed habitable zones -- follow a track with \(a\) increasing as \(R_{\rm s}\) increases, this attributable to stars with larger radii - and thus greater luminosity - having HZs farther out. We also see that the track stops at \(R_{\rm s}\)\(\sim\) 8\(R_{\odot}\), far below the maximum \(R_{\rm s}\) of Red Giants, clearly suggesting there is a missing population of planets around Red Giants with \(a\) above \(\sim\)4 AU. Figure 5: \(T_{\rm eff}\) vs. \(S_{\rm eff}\) plot for Red Giant planets (red dots) with optimistic habitable planets denoted in green. Boundaries for \(T_{\rm eq}\), conservative and optimistic HZ are shown in solid, dotted and dashed lines, respectively. This also explains why most of the conservative habitable planets so far discovered are orbiting subgiants, or giants at their early stages of evolution. If there are more Red Giant planets in this undetected region, it is likely there are undiscovered habitable zone planets as well. Here, we also recall that even though outer planets of our Solar System can become habitable as the Sun evolves, these planets are far from the detectable zone, as we can see from Figure 3a. As with Main Sequence hosts, these large semi-major axis planets were all discovered by direct imaging. However, using the same detection method to find similar planets around Red Giants may be difficult due to the direct imaging method disfavoring systems with large radii hosts. ## 4 Conclusions and Discussions In this paper we take new data from NASA's Exoplanet Archive to update and further investigate trends regarding Red Giant systems. First, we revisit the Planet Mass-Stellar Radius relation previously found (Jiang and Zhu 2018), observing that a similar power-law relationship is bolstered with the addition of more than 50 new data points. 
However, in our results we noted a steeper power-law relation due to the additional data points with lower mass and stellar radius values. When we focused on planets discovered by the radial velocity technique (205 out of 215 planets), the steepness of the power law was attenuated. To further explore this trend, we separate Red Giant hosted exoplanets according to the radii of their hosts and plot planet mass against semi-major axis (Figure 3). As stellar radius increases, the region occupied by planets in the graph shrinks, and for planets with smaller orbital semi-major axes, we found their disappearance to be consistent with tidal engulfment of planets where \(a/R_{s}<3\) (Figure 4). For the disappearance of planets with lower masses and those with larger orbital semi-major axes, we did not find compelling astrophysical reasons; this disappearance could be due to observational selection effects of the radial velocity method used to discover the vast majority of planets in these regions. Since lower mass and larger orbital semi-major axis correspond to lower amplitudes of radial velocity, the disappearance can be attributed to a higher detection threshold for the amplitude of radial velocity oscillations among more evolved Red Giants. We showed that in order for this selection effect to be the origin of such disappearance, the level of intrinsic RV fluctuation of Red Giants should depend on surface gravity following equation (3), which was proposed by Hekker et al. 2018. However, selection effects may also arise due to eccentricity and stellar mass. As this possibility is beyond the scope of this paper, further investigations of the origin of such selection effects are left to future studies.

Figure 6: Semi-major axis \(a\) vs. stellar radius \(R_{s}\) plot of Red Giant (red) and Main Sequence planets (blue) with optimistically habitable planets in green (light green for Main Sequence and darker green for Red Giant planets). Planets discovered by direct imaging are shown in orange. With purple vertical line segments, we indicate the optimistic HZ of each giant.

Next, we examine the habitability of Red Giant exoplanets. To determine the habitable zone, we adopt criteria proposed by Kopparapu et al. 2013 and with this method found ten planets in the optimistic HZ, five of which are in the conservatively calculated HZ. However, all of these planets are gas giants and, therefore, very likely uninhabitable by life as we presently know it. Nevertheless, these planets may themselves host habitable exomoons. Finally, with habitable zone exoplanets identified, we look at possible detection bias. We see that their orbital semi-major axis increases with stellar radius until \(R_{s}/R_{\odot}\sim 7\). However, this does not necessarily rule out further habitable zone exoplanets, and it is very likely that there are more HZ Red Giant exoplanets with a semi-major axis greater than \(\sim\)4 AU. Even though some such planets can be seen around Main Sequence stars via direct imaging, similar planets around Red Giant stars have not yet been found. While the limitations of current imaging methods may preclude detecting planets around Red Giant stars, more advanced instrumentation coming online in the near term may enable this technique to be used for at least some Red Giant hosted exoplanetary systems.
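The selection-effect argument above rests on the radial-velocity semi-amplitude shrinking with decreasing planet mass and increasing orbital distance. As a rough illustration only (this is the textbook circular-orbit estimate, not the detection-threshold model of equation (3) or the Hekker et al. 2018 relation), \(K\approx 28.4\ {\rm m\,s^{-1}}\,(M_{p}\sin i/M_{\rm J})\,(M_{*}/M_{\odot})^{-1/2}\,(a/{\rm AU})^{-1/2}\):

```python
import numpy as np

def rv_semi_amplitude(m_p_mjup, a_au, m_star_msun, sin_i=1.0, ecc=0.0):
    """Approximate RV semi-amplitude K [m/s] for a (near-)circular orbit.
    Standard estimate: K ~ 28.4 m/s (Mp sin i / MJ) (M*/Msun)^(-1/2) (a/AU)^(-1/2),
    divided by sqrt(1 - e^2); planet mass << stellar mass is assumed."""
    return 28.4 * m_p_mjup * sin_i / np.sqrt(m_star_msun * a_au) / np.sqrt(1.0 - ecc**2)

# Illustrative case: a Jupiter-mass planet moved outward around a 1.5 Msun evolved host.
for a in (1.0, 4.0, 10.0):
    print(f"a = {a:4.1f} AU  ->  K = {rv_semi_amplitude(1.0, a, 1.5):5.1f} m/s")
```

For a Jupiter-mass planet around a 1.5 \(M_{\odot}\) evolved host, \(K\) drops from roughly 23 m/s at 1 AU to about 12 m/s at 4 AU and 7 m/s at 10 AU, comparable to or below the intrinsic RV scatter expected for more evolved Red Giants.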
## Acknowledgements

This research was conducted at the NASA-sponsored Jet Propulsion Laboratory, California Institute of Technology (Caltech), and has made use of the NASA Exoplanet Archive, which is operated by Caltech under contract with NASA under the Exoplanet Exploration Program.

## Data Statement:

The data underlying this article can be downloaded from the NASA Exoplanet Archive at [https://exoplanetarchive.ipac.caltech.edu](https://exoplanetarchive.ipac.caltech.edu). The methods of data calculation and analysis are fully described in the article.

## Author Contributions:

Conceptualization, J.H.J.; methodology, J.H.J., R.E.C.; software, R.E.C. and J.H.J.; validation, J.H.J. and Y.C.; formal analysis, R.E.C. and J.H.J.; investigation, R.E.C., J.H.J. and P.E.R.; resources, J.H.J.; data curation, R.E.C. and J.H.J.; writing--original draft preparation, R.E.C.; writing--review and editing, J.H.J., P.E.R., and K.A.F.; visualization, R.E.C. and J.H.J.; supervision, J.H.J. and Y.C.; project administration, J.H.J.; funding acquisition, J.H.J. All authors have read and agreed to the published version of the manuscript.

## Competing Interest:

The authors declare no competing interest.
2310.04362
Einsteinian gravitational concepts throughout secondary school
Einstein's theory of relativity is largely thought of as one of the most important discoveries of the 20$^{th}$ century and continues to pass observational tests over 100 years later. Yet, it is Newtonian gravity, a 350-year-old formalism proven to be less accurate than relativity, which is taught in schools. It has been shown that Einsteinian gravitational concepts can be well understood by students in both primary and secondary education. In this paper, a cross-section of students from Yr 7-13 enrolled in an English secondary school took part in an intervention designed to introduce the idea of gravity from spacetime curvature. The overall aim of this work is to assess the viability of including relativity in the secondary curriculum and to ascertain which year this material would be best placed in. We determine that all year groups were able to appreciate the effects of curvature to some extent. Visual demonstrations aided conceptual understanding at Yr 7-8 level, but this does not have a strong effect on their ideas around the source of the gravitational force. Participants in Yr 9-13 were able to understand concepts beyond those introduced in the demonstrations. However, a deeper understanding of curvature as the source of the gravitational force is not seen until years 12 & 13. We find that those in Yr 13 have the best overall understanding of the concepts introduced during our intervention.
Corey McInerney, Phil Sutton
2023-10-06T16:36:04Z
http://arxiv.org/abs/2310.04362v2
# Einsteinian gravitational concepts throughout secondary school

###### Abstract

Einstein's theory of relativity is largely thought of as one of the most important discoveries of the 20\({}^{\rm th}\) century and continues to pass observational tests over 100 years later. Yet, it is Newtonian gravity, a 350-year-old formalism proven to be less accurate than relativity, which is taught in schools. It has been shown that Einsteinian gravitational concepts can be well understood by students in both primary and secondary education. In this paper, a cross-section of students from Yr 7-13 enrolled in an English secondary school took part in an intervention designed to introduce the idea of gravity from spacetime curvature. The overall aim of this work is to assess the viability of including relativity in the secondary curriculum and to ascertain which year this material would be best placed in. We determine that all year groups were able to appreciate the effects of curvature to some extent. Visual demonstrations aided conceptual understanding at Yr 7-8 level, but this does not have a strong effect on their ideas around the source of the gravitational force. Participants in Yr 9-13 were able to understand concepts beyond those introduced in the demonstrations. However, a deeper understanding of curvature as the source of the gravitational force is not seen until years 12 & 13. We find that those in Yr 13 have the best overall understanding of the concepts introduced during our intervention.

_Keywords_: Einsteinian gravity, relativity, secondary school, conceptual understanding

## 1 Introduction

In 1687, Isaac Newton published his law of universal gravitation\({}^{(\it 1)}\); it is commonly taught in schools because it combines the concepts of mass and weight with everyday phenomena like falling objects\({}^{(\it 2)}\). Gravity is regarded as one of the key threshold concepts in education, used as an indicator of a student's understanding of wider physics topics\({}^{(\it 3,\,4)}\). However, research suggests that a large number of misconceptions about gravity are held by students. These include issues surrounding the direction of the gravitational force, gravity outside of the Earth and the role of the Sun in celestial dynamics\({}^{(\it 5-7)}\). These issues are tied to students not being able to fully grasp where the gravitational force comes from, with some students believing that objects only have a gravitational pull if they are heavy and spherical, or that air is needed in order for gravity to act between objects\({}^{(\it 8,\,9)}\). Misconceptions can stay with students throughout their schooling, impacting confidence, ability and thus students' overall enjoyment of science\({}^{(\it 10)}\). A new approach pioneered by the Einstein-First project could eliminate some of the conceptual issues faced by students. The idea is to introduce principles from modern physics, such as Einstein's Theory of General Relativity (GR), into the Australian curriculum at primary and secondary level. A similar project, ReleQuant, in Norway also pioneers the teaching of relativity and quantum mechanics at secondary school\({}^{(\it 11,\,12)}\). Einstein's Theory of General Relativity provides a mathematical description of gravity and offers a qualitative description of how gravity works\({}^{(\it 13)}\). A key feature of GR is the non-Euclidean geometry of space and the interplay between time and space. The sophisticated mathematics involved in GR is seen as a barrier to students below degree level.
The ReleQuant project bypasses this mathematical rigor by placing conceptual language at the forefront of the learning process, exploring concepts via thought experiments and visual demonstrations. This study explores the impact of a one-off intervention on students' understanding of Einsteinian gravitational concepts. Similar studies have been performed in Lebanon\({}^{(\it 14)}\), Australia\({}^{(\it 15,\,16)}\), Indonesia\({}^{(\it 17)}\) and Italy\({}^{(\it 18,\,19)}\). Each of those works reports that students have a better understanding of gravity from a GR standpoint after being introduced to concepts such as curvature, time dilation and spacetime diagrams. The intervention used for this study was designed to fit into a standard 50-minute lesson period at a single English state-funded secondary school. Within that period, students were taught about how spacetime curvature produces the gravitational force with the help of two hands-on activities. The host school is a fully government-funded, selective, mixed grammar school located in the town of Gainsborough, Lincolnshire, England. This study took place during February and March 2023, and at that time a total of 1237 students were enrolled at the school. An identical intervention was delivered to all age groups (years 7-13) at this school. Students were issued with questionnaires before and after the intervention, and by comparing the results across year groups, we hope to identify the optimum place for GR to be introduced into the secondary science curriculum. In the following section we describe the research methodology in comparison with other works, as well as the content and application method of our intervention. In Section 3, we present the results of our research, which are also discussed and analysed.

## 2 Method

### The Host School

The school where this research was performed is a fully government-funded, selective, mixed grammar school located in England. At the time of this study, a total of 1237 students were enrolled at the school. The UK government website ([https://www.find-school-performance-data.service.gov.uk](https://www.find-school-performance-data.service.gov.uk)) shows that in 2022, the school was ranked in the top 25% in the UK for academic performance at GCSE level and that 82% of pupils at the school achieved A-Level grades between A* and C.

### Comparison With Other Works

In other works, interventions range from short, one-off workshops\({}^{(20)}\), to full 20-session programs\({}^{(21,\,22)}\). Offering a longer syllabus means more content and concepts can be covered, students can learn at their own pace, and there are opportunities for deeper learning experiences. Nonetheless, it has been shown that one-off interventions have a significant impact on students' engagement with science topics and, consequently, their future career choices\({}^{(23,\,24)}\). The intervention used in this paper was designed as a single, one-off session to introduce students to the subject of GR and to explore gravitational phenomena in the realm of GR. While other works have used participants from a single class, a single year group or a particular Key Stage (KS), we offered this intervention to all physics students in the school. A total of 183 students from years 7-13 chose to take part in this study; the breakdown is shown in Table 1.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
**Key Stage** & **Year Group** & **Number of Students** \\
\hline
\multirow{3}{*}{KS3} & 7 & 59 \\
\cline{2-3} & 8 & 41 \\
\cline{2-3} & 9 & 24 \\
\hline
\multirow{2}{*}{KS4} & 10 & 20 \\
\cline{2-3} & 11 & 13 \\
\hline
\multirow{2}{*}{A-Level} & 12 & 12 \\
\cline{2-3} & 13 & 14 \\
\hline
\multicolumn{2}{|c|}{**Total**} & 183 \\
\hline
\end{tabular}
\end{table} Table 1: Number of participants in this study by year group.

Surveying students from one single school allows us to see how students' understanding of gravity changes throughout the educational years via this cross-sectional study, and offers the opportunity for longitudinal studies in the future following specific cohort(s). Gravity is taught in England at KS3 (11-14 years old), KS4 (14-16 years old) and A-Level (16-18 years old); therefore, any misconceptions picked up at KS3 have the potential to carry forward and impact a student's understanding at A-Level.

### The Intervention

The intervention implemented in this work consisted of 33 identical 50-minute workshops, each delivered to 8-30 students from a single year group. The same researcher conducted all sessions, covering the same material and activities. The intervention began with students being asked to describe gravity in their own words before GR was introduced. Table 2 shows the order in which topics were introduced, the material covered, and where the hands-on activities tie in (highlighted in blue).

\begin{table}
\begin{tabular}{|l|l|}
\hline
**Topic** & **Details** \\
\hline
Newton's Law of Universal Gravitation & Every particle attracts every other particle in the universe. \\
\hline
Gravitational field lines & Field lines on the surface of and around celestial bodies. \\
\hline
\(E=mc^{2}\) & Relation between mass and energy in Special Relativity, leading to relation between mass and gravity in GR. \\
\hline
General Relativity & Space, time and spacetime curvature. \\
\hline
\multicolumn{2}{|c|}{**Spacetime simulator**} \\
\hline
Gravitational force in GR & Force increases with curvature. \\
\hline
\multicolumn{2}{|c|}{**Triangles on balloons**} \\
\hline
Geometry in GR & GPS corrections for Earth's curvature. Positive and negative curvature. The curvature of the universe. \\
\hline
\end{tabular}
\end{table} Table 2: An outline of the different topics and concepts covered during our intervention.

#### 2.3.1 The Spacetime Simulator

A common analogy for the curvature of spacetime uses a sheet of stretchy, elastic material such as Lycra, and some objects of different mass. This particular analogy is commonly referred to as 'the spacetime simulator'\({}^{(25)}\) and has proven to be effective at dispelling misconceptions around gravity\({}^{(26)}\). For our demonstration, a large Lycra sheet was stretched flat across a plastic frame as seen in Figure 1. Three students were then invited forward to hold the sheet in the air and it was explained to the group that the sheet represents spacetime in the flat Euclidean plane (as per the Newtonian model). Students were then invited forward to roll tennis, squash, and golf balls across it, perceiving that they travel in straight lines and that mass does not affect motion in flat space. A mass heavy enough to deform the sheet was then placed on it and students observed that the balls now roll towards the central mass much like a ball rolling towards the bottom of a hill.
By adding more mass to the centre of the sheet and increasing the curvature, students observe that the balls roll towards the central mass much quicker than before. This opens up a discussion around mass, curvature and gravitational attraction. Additionally, while the sheet is deformed, squash balls were given some sideways velocity. Here, students observe the ball circling around the central mass with motion analogous to an orbit. It was important here to draw attention to the fact that the orbital motion decays due to the loss of energy from friction between the ball and the sheet, whereas orbits in the universe conserve energy in general and will be continuous. To explore this further, students were shown a plot akin to Figure 1b from Kaur _et al._ (2017) which illustrates that increasing curvature (adding mass to the sheet) changes the distance between two points in spacetime, thus influencing the geometry of the surrounding spacetime.

#### 2.3.2 The Geometry of General Relativity

Following the introduction to curvature, the intervention moved to exploring the geometry of spacetime. Pairs of students were issued with different-sized balloons. Each balloon had a triangle drawn on it and the students were tasked with carefully measuring the internal angles of the triangles by placing protractors on the balloons' surface. Students then compared their results with the \(180^{\circ}\) angle sum of a triangle in Euclidean space, observing how increased curvature changes the internal angles. These results were then related to applications of non-Euclidean geometry such as the spherical curvature of Earth, its effect on lines of longitude (such as their convergence at the poles), GR corrections to GPS and the potential geometries of the universe (hyperbolic, flat, parabolic).

Figure 1: The stretched sheet of Lycra used as a spacetime simulator. Three 1 kg masses placed in the centre of the sheet provide curvature.

### Pre/Post Intervention Questionnaire

Two sets of questionnaires were used in this research: one disseminated two weeks before the intervention (pre-), and one given to students two weeks after the intervention (post-). Following the format of related studies[15, 20, 21], the open-ended questions in the questionnaires examined conceptual understandings and attitudes towards GR. The pre-questionnaire comprised seven questions exploring students' understanding of gravity, its origins and workings, their perspectives on the shape of space, and opinions on physics teaching methods. The post-questionnaire was nearly identical to the pre-questionnaire, allowing direct comparison. It included three additional questions to gain insight into students' views on the intervention and Einsteinian gravity itself. The questions themselves are open-ended and can be found in Table 3.

\begin{table}
\begin{tabular}{|c|c|l|}
\hline
\multicolumn{3}{|c|}{**Questions**} \\
\hline
\multirow{7}{*}{**Pre**} & **Q1** & Can parallel lines meet? \\
\cline{2-3} & **Q2** & Can the sum of the angles in a triangle be different from \(180^{\circ}\)? \\
\cline{2-3} & **Q3** & What is gravity? \\
\cline{2-3} & **Q4** & How do objects move in gravitational fields? \\
\cline{2-3} & **Q5** & Isaac Newton is famous for his laws of motion and his law of gravity. What is Albert Einstein famous for? \\
\cline{2-3} & **Q6** & Does space have a shape? What about the space around heavy objects like stars and planets? \\
\cline{2-3} & **Q7** & Do you prefer to learn about physics by listening to your teacher, watching demonstrations or doing practical work? \\
\hline
\multirow{3}{*}{**Post**} & **Q8** & Did you enjoy learning about this topic? What did you like/not like? \\
\cline{2-3} & **Q9** & Are you interested in learning more about gravity and general relativity? \\
\cline{2-3} & **Q10** & Should Einsteinian physics, like general relativity be included in the curriculum? \\
\hline
\end{tabular}
\end{table} Table 3: Questions from the pre/post intervention questionnaire

Figure 2: An example of some of the balloons used by students to demonstrate the effects of curvature on geometry

## 3 Results & Discussion

Figure 3 and Table 4 show the responses to Q1. The majority of participants (93.8%) said that parallel lines cannot meet. Of the ten that said they can (three from Yr 7, four from Yr 10 and one each from Yr 11, 12 & 13), five of these responses (three Yr 10, one Yr 11 and one Yr 13) used the phrase '_non-Euclidean_' or '_not on flat space_', thus showing good knowledge before the intervention. It is likely that these participants have had previous exposure to curved geometries.

Figure 3: Responses to Q1. Can parallel lines meet?

\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Pre} & \multicolumn{2}{c|}{Post} \\
\cline{2-5} \multicolumn{1}{c|}{} & No & Yes & No & Yes \\
\hline Yr 7 & 93.62\% & 8.51\% & 79.66\% & 20.34\% \\
\hline Yr 8 & 100.00\% & 0.00\% & 75.76\% & 21.21\% \\
\hline Yr 9 & 100.00\% & 0.00\% & 33.33\% & 61.11\% \\
\hline Yr 10 & 78.95\% & 21.05\% & 6.25\% & 75.00\% \\
\hline Yr 11 & 91.67\% & 8.33\% & 20.00\% & 80.00\% \\
\hline Yr 12 & 91.67\% & 8.33\% & 0.00\% & 100.00\% \\
\hline Yr 13 & 90.91\% & 9.09\% & 11.11\% & 88.89\% \\
\hline
\end{tabular}
\end{table} Table 4: Difference in pre- and post-questionnaire responses to Q1 by year group.

The total number of '_yes_' responses moves from 6.2% to 44.2% after the intervention. This is not as much as the 11.3% to 83.6% rise in correct responses to Q2, shown in Figure 4 and Table 5. The intervention design contributed to this, as Q2 relates to the balloon activity, whereas the information about parallel lines also being influenced by curved geometry was presented after this activity as part of the slideshow presentation. As such, participants had to pay more attention to grasp this information. Nonetheless, all year groups showed an increase in '_yes_' responses for both Q1 and Q2, with Yr 9, 11, and 12 having the highest increase in percentage. This is perhaps expected, as these cohorts have studied more geometry than younger students and so are better prepared for discussions around non-Euclidean geometry.

Figure 4: Responses to Q2. Can the sum of the angles in a triangle be different from 180\({}^{\circ}\)?

\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Pre} & \multicolumn{2}{c|}{Post} \\
\cline{2-5} \multicolumn{1}{c|}{} & No & Yes & No & Yes \\
\hline Yr 7 & 93.48\% & 6.52\% & 13.79\% & 86.21\% \\
\hline Yr 8 & 88.89\% & 11.11\% & 33.33\% & 63.64\% \\
\hline Yr 9 & 87.50\% & 12.50\% & 16.67\% & 83.33\% \\
\hline Yr 10 & 84.21\% & 15.79\% & 6.25\% & 75.00\% \\
\hline Yr 11 & 83.33\% & 8.33\% & 0.00\% & 100.00\% \\
\hline Yr 12 & 83.33\% & 16.67\% & 0.00\% & 100.00\% \\
\hline Yr 13 & 90.91\% & 9.09\% & 11.11\% & 88.89\% \\
\hline
\end{tabular}
\end{table} Table 5: Difference in pre- and post-questionnaire responses to Q2 by year group.
These results indicate that participants comprehend information better through visual and hands-on activities. Well-designed practical activities are reported to increase students' levels of understanding\({}^{(\it 28)}\), with practical work seen by students as more interesting and engaging than listening to their teacher or watching demonstrations\({}^{(\it 29)}\).

Q3 is more open to predisposed misconceptions than Q1 or Q2. As seen in Figure 5, there were many different responses to this question. We have categorised responses into those which described gravity as a force versus those which did not. It was expected that, post-intervention, participants would shift their answers towards describing gravity as the '_bending/warping of spacetime_'.

Figure 5: Pre- (a) and post-questionnaire (b) responses to Q3. What is gravity? Responses in which gravity is described as a force are shown in the right-hand charts, with other opinions of gravity being shown in the left-hand charts.

82% of the pre-questionnaire responses described gravity as a force, with responses ranging from '_the force that keeps us on Earth_', '_the force that stops us from floating/keeps us on the ground_' and '_the force between objects/masses_'. Only 1.8% of participants attributed gravity to spacetime curvature pre-intervention, increasing to only 8.9% post-intervention. Similar studies\({}^{(\ref{eq:p1})}\) also struggled to fully influence descriptions of gravity using the spacetime simulator. However, Table 6, which shows the breakdown of Q3 by year group, reveals that responses related to spacetime curvature increased notably in years 10 (up 17.52%), 12 (up 20.24%), and 13 (up 35.35%). One positive result from Table 6 is that the proportion of participants associating gravity with an Earth-bound force decreased across all year groups (except Yr 10), particularly in Yr 8 and 11. The data indicates that the KS4 and A-Level groups best understood the concept of gravity arising from spacetime curvature. However, the majority of participants either did not grasp the concepts introduced (although this is not apparent from the results for Q1 and Q2), or the intervention's influence was limited by their pre-existing classroom-based learning and opinions.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{Pre} & \multicolumn{4}{c|}{Post} \\
\cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Force} & \multicolumn{1}{c|}{Spacetime} & \multicolumn{1}{c|}{Earth} & \multicolumn{2}{c|}{Force} & \multicolumn{1}{c|}{Spacetime} & \multicolumn{1}{c|}{Earth} \\
\cline{2-9} \multicolumn{1}{c|}{} & Earth & Other & Curvature & & Earth & Other & Curvature & \\
\hline Yr 7 & 25.53\% & 48.94\% & 0.00\% & 6.38\% & 17.24\% & 48.28\% & 1.72\% & 6.90\% \\
\hline Yr 8 & 35.14\% & 54.05\% & 0.00\% & 2.70\% & 25.81\% & 51.61\% & 3.23\% & 6.45\% \\
\hline Yr 9 & 29.17\% & 87.50\% & 0.00\% & 0.00\% & 38.89\% & 38.89\% & 5.56\% & 0.00\% \\
\hline Yr 10 & 5.56\% & 72.22\% & 5.56\% & 0.00\% & 15.38\% & 53.85\% & 23.08\% & 0.00\% \\
\hline Yr 11 & 58.33\% & 41.67\% & 0.00\% & 0.00\% & 30.00\% & 40.00\% & 10.00\% & 0.00\% \\
\hline Yr 12 & 16.67\% & 75.00\% & 8.33\% & 0.00\% & 0.00\% & 42.86\% & 28.57\% & 0.00\% \\
\hline Yr 13 & 0.00\% & 81.82\% & 9.09\% & 0.00\% & 0.00\% & 44.44\% & 44.44\% & 0.00\% \\
\hline
\end{tabular}
\end{table} Table 6: Difference in pre- and post-questionnaire responses to Q3 by year group.
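For completeness, percentage tables such as Tables 4-6 are simple to regenerate once the free-text answers have been coded into categories. The sketch below is not the authors' analysis pipeline; it only illustrates the tabulation step, assuming a hypothetical file `q3_coded_responses.csv` with columns `year_group`, `phase` (pre/post) and `category` for the coded Q3 responses.

```python
import pandas as pd

# Hypothetical input: one row per coded response, e.g.
#   year_group,phase,category
#   Yr 7,pre,force_earth
#   Yr 7,post,spacetime_curvature
responses = pd.read_csv("q3_coded_responses.csv")

# Percentage of each category within every (year group, phase) cell,
# mirroring the layout of Table 6.
table = (
    responses
    .groupby(["year_group", "phase"])["category"]
    .value_counts(normalize=True)
    .mul(100)
    .round(2)
    .unstack(["phase", "category"])   # columns become (pre/post, category)
    .fillna(0.0)
)
print(table)
```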
Q4 assesses participants' understanding of how gravity influences the motion of objects. Presented in Figure 6, the responses to this question are extremely varied, ranging from high-level answers like '_conic sections_' and '_along geodesics_' to less descriptive responses such as '_they float_' or '_fast_'. Many responses show an understanding of the attractive powers of gravity, as well as knowledge of other aspects of physics such as kinetic and gravitational potential energy. While many of these descriptions are not wholly incorrect, they fail to properly describe the motion of objects in gravitational fields. This is likely due to participants' lack of formal education on gravitational orbits, a topic not covered in detail until Yr 13. It is possible that responses such as '_around_' and '_in circles_' are consequences of this, as those in lower year groups attempt to describe orbits without knowledge of the correct scientific terminology. This is supported by the fact that we see a slight increase in these descriptors post-intervention, as participants watched objects circle around masses on the spacetime simulator sheet. Positively, phrases such as '_they fall_', '_towards ground/source_' and '_pushed/pulled_' decrease post-intervention, whereas the number of participants describing orbital motion doubles. Those describing motion in gravitational fields as attractive also see a significant increase post-intervention. This again is likely due to the use of the spacetime simulator demonstration, where it was observed that objects on the sheet moved towards the source of curvature. This is also evidenced by the added uses of terms such as '_oval_' and '_spirals_', which appear in post-questionnaire responses only.

Pre-intervention, 58.2% of participants thought that space had no shape, often saying it is infinite or expanding ('_no because it is infinite/expanding_') to justify their response. This number decreases to 42.4% post-intervention, indicating that the intervention had limited success in conveying the geometry of space around celestial objects. Nonetheless, Figure 7 reveals an increase in responses other than '_yes_' or '_no_' post-intervention, with the addition of more conceptualised answers like '_the shape changes with gravity_' or '_it stretches_' demonstrating an understanding of the ideas presented using the spacetime simulator. 9.2% of post-intervention participants showed deeper understanding by correctly noting that space is curved around masses but flat otherwise. Coupling these responses with the other '_yes_' responses gives a total percentage of conceptually correct answers of 58.9%.

Figure 7: Responses to Q6. Does space have a shape? What about the space around heavy objects like stars and planets?

Figure 8 shows responses to Q9, where 77.1% of participants said they would like to learn more about gravity and general relativity. The last question in our questionnaires addresses the main research question of this work and asked 'Should Einsteinian physics, like general relativity be included in the curriculum?' A resounding 82.5% of participants said that it should be included. Of those, 10% indicated that, if added to the curriculum, it should be included at a specific level. 41% of those respondents said that this content should not be taught before A-Level. The rest thought that it should not be covered before KS4.

Figure 8: Responses to Q9. Are you interested in learning more about gravity and general relativity?
## 4 Conclusion

In this work, 183 students from a state-funded English secondary school participated in a one-off intervention introducing them to General Relativity and the concept that the gravitational force is caused by the curvature of spacetime. This theory was introduced visually using the spacetime simulator demonstration. Once this idea was established, participants performed an experiment using balloons and triangles to investigate what happens to these shapes in curved spaces. The aim was to gauge students' reception to GR and to investigate where these concepts would be best placed into a school curriculum. Results showed that 77.1% of our participants overall would like to delve further into the subject. This is consistent with the opinions of teachers and the public, who also encourage the addition of Einsteinian physics into the curriculum[30, 31]. Noting that the largest percentage of our participants come from Yr 7 shows how eager young students are for opportunities to learn about cosmology. KS4 and A-Level students demonstrated better understanding of the presented ideas through their responses to Q1-3 in the questionnaires. Years 11, 12 and 13 had the best improvement from pre- to post-questionnaire for Q1 and Q2, with Yr 10 also showing a good increase in understanding for Q3. While many participants from the other year groups liked the material covered, their responses to Q3 showed little grasp of spacetime curvature. Years 7, 8 and 9 all showed good awareness of the deformation of triangles in curved spaces after the intervention, but only Yr 9 participants showed a grasp of the curving of initially parallel lines. While Einsteinian gravitational concepts are taught completely separately from Newtonian gravity, it has been shown by other studies that students can understand the ideas of GR even at primary level[2, 15, 16, 18]. While the results to Q1 and Q2 show that participants were able to appreciate the effects of curvature, the results of Q3 demonstrate that it would take more than a one-off intervention to imbue a deeper understanding of GR. This result aligns with that of other works[32]. Additionally, our intervention was not successful at dispelling misconceptions around the gravitational force. A multi-stage intervention may prove more beneficial for this[33]. Regarding GR and its associated geometry, our results show that this material is better suited to the A-Level curriculum than to lower year groups. Therefore, we conclude that introducing GR in the English secondary curriculum at Yr 13 aligns with students' ability to recognize curvature as the source of gravitational force and matches the current placement of gravity study in the curriculum. Further insights into the effects of teaching GR could be obtained through a longitudinal study that follows a group or groups of students through late primary/early secondary school to KS4 and A-Level.

Figure 9: Responses to Q10. Should Einsteinian physics, like general relativity be included in the curriculum?

**Ethical Statement:** This study has been reviewed and given a favourable opinion by a University of Lincoln Research Ethics Committee. Reference Number: 12146.

## Acknowledgments

The authors would like to thank the staff and students of the science department at Queen Elizabeth's High School for their support with and participation in this work.
2301.12241
Convergence and Near-optimal Sampling for Multivariate Function Approximations in Irregular Domains via Vandermonde with Arnoldi
Vandermonde matrices are usually exponentially ill-conditioned and often result in unstable approximations. In this paper, we introduce and analyze the \textit{multivariate Vandermonde with Arnoldi (V+A) method}, which is based on least-squares approximation together with a Stieltjes orthogonalization process, for approximating continuous, multivariate functions on $d$-dimensional irregular domains. The V+A method addresses the ill-conditioning of the Vandermonde approximation by creating a set of discrete orthogonal basis with respect to a discrete measure. The V+A method is simple and general. It relies only on the sample points from the domain and requires no prior knowledge of the domain. In this paper, we first analyze the sample complexity of the V+A approximation. In particular, we show that, for a large class of domains, the V+A method gives a well-conditioned and near-optimal $N$-dimensional least-squares approximation using $M=\mathcal{O}(N^2)$ equispaced sample points or $M=\mathcal{O}(N^2\log N)$ random sample points, independently of $d$. We also give a comprehensive analysis of the error estimates and rate of convergence of the V+A approximation. Based on the multivariate V+A approximation, we propose a new variant of the weighted V+A least-squares algorithm that uses only $M=\mathcal{O}(N\log N)$ sample points to give a near-optimal approximation. Our numerical results confirm that the (weighted) V+A method gives a more accurate approximation than the standard orthogonalization method for high-degree approximation using the Vandermonde matrix.
Wenqi Zhu, Yuji Nakatsukasa
2023-01-28T16:14:06Z
http://arxiv.org/abs/2301.12241v1
Convergence and Near-optimal Sampling for Multivariate Function Approximations in Irregular Domains via Vandermonde with Arnoldi ###### Abstract Vandermonde matrices are usually exponentially ill-conditioned and often result in unstable approximations. In this paper, we introduce and analyze the _multivariate Vandermonde with Arnoldi (V+A) method_, which is based on least-squares approximation together with a Stieltjes orthogonalization process, for approximating continuous, multivariate functions on \(d\)-dimensional irregular domains. The V+A method addresses the ill-conditioning of the Vandermonde approximation by creating a set of discrete orthogonal basis with respect to a discrete measure. The V+A method is simple and general. It relies only on the sample points from the domain and requires no prior knowledge of the domain. In this paper, we first analyze the sample complexity of the V+A approximation. In particular, we show that, for a large class of domains, the V+A method gives a well-conditioned and near-optimal \(N\)-dimensional least-squares approximation using \(M=\mathcal{O}(N^{2})\) equispaced sample points or \(M=\mathcal{O}(N^{2}\log N)\) random sample points, independently of \(d\). We also give a comprehensive analysis of the error estimates and rate of convergence of the V+A approximation. Based on the multivariate V+A approximation, we propose a new variant of the weighted V+A least-squares algorithm that uses only \(M=\mathcal{O}(N\log N)\) sample points to give a near-optimal approximation. Our numerical results confirm that the (weighted) V+A method gives a more accurate approximation than the standard orthogonalization method for high-degree approximation using the Vandermonde matrix. **Keywords:** least-squares, Vandermonde matrix, Arnoldi, polyval, polyfit, ill-conditioning, sample complexity, near-optimal sampling ## 1 Introduction and Overview of the Paper Many problems in computational science call for the approximation of smooth, multivariate functions. In this paper, we consider the problem of approximating a multivariate continuous function \(f:\Omega\to\mathbb{R}\) of \(d\geq 1\) variables whose domain \(\Omega\in\mathbb{R}^{d}\) may be irregular. Using Vandermonde matrices to fit polynomials is one of the most straightforward approaches. However, the Vandermonde matrix is usually exponentially ill-conditioned, even on standard domains such as an interval, unless the sample points are very carefully chosen [6, 21, 25]. Recently, in [8], the authors develop an orthogonalization framework that couples the Vandermonde matrices with Arnoldi orthogonalization for univariate function approximations. This approach, known as the univariate Vandermonde with Arnoldi method (V+A), addresses the ill-conditioning of the Vandermonde approximation by creating a set of discrete orthogonal basis and a well-conditioned least-squares system using only sample points from the domain. In this paper, we extend the univariate V+A method to a multivariate version that can be used for \(d\)-dimensional function approximations (\(d\geq 2\)). There is extensive literature on \(d\)-dimensional polynomial approximations algorithms which assume that the function \(f\) is defined over a hypercube domain containing the irregular domain \(\Omega\)[2, 3, 11]. These algorithms, commonly known as polynomial frame approximations, create an orthogonal basis in the hypercube domain. 
On the other hand, the V+A algorithm creates a discrete orthogonal basis directly in \(\Omega\) and effectively constructs a well-conditioned basis for the irregular domain \(\Omega\). In general, even if we have a well-conditioned basis, it is well-known that the least-squares approximations can still become inaccurate when the number of sample points \(M\) (from a sub-optimal distribution, e.g., equispaced points) is insufficient, e.g., \(M\) is close to the dimension of the approximation space \(N\). Poorly distributed sample points can also affect the quality of the solution. In some domains, the polynomial frame approximations have provable bounds on the sample complexity; namely, the scaling between the dimension of the approximation space \(N\) and the number of samples \(M\), which is sufficient to guarantee a well-conditioned and accurate approximation [1, 3, 11]. However, to our best knowledge, there appears to be no literature on the sample complexity of the V+A procedure. A key theme of this paper is to investigate how \(M\) behaves as a function of \(N\) such that the V+A least-squares approximant \(\mathcal{L}(f)\to f\) as \(N\to\infty\). We show that, in a large number of domains (i.e., real intervals, convex domains, or finite unions of convex domains), the V+A method gives a well-conditioned and accurate \(N\)-dimensional least-squares approximation using \(M=\mathcal{O}(N^{2})\) equispaced sample points or \(M=\mathcal{O}(N^{2}\log N)\) random sample points. The sample complexity of the V+A procedure is comparable size with the sample complexity of the polynomial frame approximation [2, 1, 11]. However, since the V+A constructs the discrete orthogonal basis using only sample points from the domain and does not require sample points from the bounding hypercube, V+A gives an approximation of similar accuracy using fewer sample points. Using results on sample complexity, we further prove that, under suitable sample distributions and suitable domains, the multivariate V+A approximation is near-optimal. That is, the V+A approximant (with \(N\)th degrees of freedom ) converges to \(f\) at a spectral rate with respect to \(N\), depending on the smoothness of \(f\) in \(\Omega\). In several papers [1, 11, 24], the authors proved the remarkable result that an effective weighting can lead to the near-optimal scaling of \(M=\mathcal{O}(N\log(N))\). However, in past approaches [1], the QR factorization was used to orthogonalize the least-squares system. We propose a variant of the weighted least-squares algorithm that uses the multivariate V+A as the orthogonalization method (VA+Weight). This algorithm is stable with high probability and only takes \(M=\mathcal{O}(N\log N)\) sample points to give a near-optimal approximation. Our numerical results confirm that this method gives a more accurate approximation than some orthogonalization methods for high-degree approximation using the Vandermonde matrix. Due to the reduced sample density, VA+Weight also gives a lower online computational cost than the unweighted least-squares V+A method. Last but not least, finding the optimal distribution of sample points in a high-dimensional irregular domain is an open question in the literature [10]. We highlight that VA+Weight acts as a practical tool for selecting the near-optimal distribution of sample points adaptively in irregular domains. The paper is arranged as follows. 
In Section 2, we explain the univariant V+A algorithm and establish the link between the discrete orthogonal polynomials and the continuous orthogonal polynomials in real intervals. We also give a sketch of proof of the convergence of the univariant V+A algorithm using the Lebesgue constant. In Section 3, we extend the univariate V+A algorithm to higher dimensions and compare the numerical results of V+A approximations with polynomial frame approximations in irregular domains. Section 4 gives our main theoretical result on sample complexity and convergence rate of V+A algorithm on general \(d\)-dimensional domains. In Section 5, we give the weighted V+A least-squares algorithm, VA+Weight, which takes only \(M=\mathcal{O}(N\log N)\) sample points to give a near-optimal approximation. ### Related Work The idea of orthogonalizing monomials and combining Vandermonde with Arnoldi is not completely new. Past emphasis has been on constructing continuous orthogonal polynomials on a continuous domain. The purpose of the orthogonalization is to obtain the orthogonal polynomial itself [17, 18]. However, V+A, also known as Stieltjes orthogonalization [14, 29], is a versatile method that can dramatically improve the stability of polynomial approximation [8]. The purpose of the orthogonalization is to improve the numerical stability of the least-squares system. The multivariate version of V+A was first discussed by Hokanson [20], who applies V+A to the Sanathanan-Koerner iteration in rational approximation problems. See also [4]. In these studies, the multivariate V+A is used for rational approximations on standard domains. Applying the V+A algorithm to approximate multivariate functions on irregular domains appears not to have been considered in the literature. It is worth noting that V+A also has numerous applications beyond polynomial approximation. For instance, it can be used in the 'lightning' solver [16], a state-of-the-art PDE solver which solves Laplace's equation on an irregular domain using rational functions with fixed poles. We also combine V+A with Lawson's algorithm [22] to improve the accuracy of the approximation. We refer to this algorithm as VA+Lawson. VA+Lawson attempts to find the best minimax approximation in difficult domains (for instance, disjoint complex domains). VA+Lawson is also a powerful tool for finding the minimal polynomial for the GMRES algorithms and power iterations in irregular complex domains. ### Notation * For any bounded domain \(\Omega\), we define _the infinity \(\Omega\)-norm_ of a bounded function \(g:\Omega\to\mathbb{R}\) as \(\|g\|_{\Omega}=\sup_{\mathbf{x}\in\Omega}|g(\mathbf{x})|.\) * For a finite set of points \(\mathbf{X}=\{x_{i}\}_{1\leq i\leq M}\) and bounded functions \(f,g:\mathbf{X}\to\mathbb{R}\), we define _the infinity \(\mathbf{X}\)-norm_ of \(g:\mathbf{X}\to\mathbb{R}\) as \(\|g\|_{\mathbf{X}}=\max_{\mathbf{x}\in\mathbf{X}}|g(\mathbf{x})|\) and \(\langle f,g\rangle_{M}=\frac{1}{M}\sum_{i=1}^{M}f(x_{i})g(x_{i}).\) * _The best minimax approximation_\(p^{*}\) of degree \(N\) minimizes the error of the polynomial approximation in the infinity \(\Omega\)-norm, such that \(\|f-p^{*}\|_{\Omega}\leq\|f-p\|_{\Omega}\) for all \(d\) dimensional degree-\(n\) polynomials \(p\in\mathcal{P}_{n}^{d}\). The best minimax approximation \(p^{*}\) is proven to exist and is unique. The error of the best minimax approximation \(f-p^{*}\) is characterized by an equioscillating error curve [26, Ch. 10]. 
* In this paper, we say that a least-squares approximation, \(\mathcal{L}(f)\), is _near-optimal_ if its error is within a polynomial factor of that of the best minimax approximation, for instance, \(\|f-\mathcal{L}(f)\|_{\Omega}=\mathcal{O}(N)\|f-p^{*}\|_{\Omega}.\) Note that \(\mathcal{L}\) is a linear operator, and furthermore, it is a projection \(\mathcal{L}^{2}=\mathcal{L}\). * _The least-squares approximation converges_ if \(\mathcal{L}(f)\to f\) as \(N\to\infty\) for a given a discrete measure \(\mathbf{X}=\{x_{i}\}_{1\leq i\leq M}\). * Recall that the _matrix \(\infty\)-norm_ is defined as \(\|\mathbf{A}\|_{\infty}:=\max_{1\leq i\leq M}\sum_{j=1}^{N}|\mathbf{A}_{i,j}|\) where \(\mathbf{A}_{i,j}\) denotes the \((i,j)\)th entry of \(\mathbf{A}\). ## 2 Univariate Vandermonde with Arnoldi Polynomial approximation of an unknown function by fitting a polynomial to a set of sample points from the domain is a classic problem. The interpolation and least-squares methods are two methods to solve this type of problem [24, 26]. We start with the simplest 1D polynomial approximation problem in this section. Let \(\Omega\subset\mathbb{R}\) be a bounded domain, \(\mathbf{X}:=\{x_{i}\}_{1\leq i\leq M}\in\Omega\) a set of \(M\) distinct sample points and \(f:\Omega\to\mathbb{R}\) a continuous function which gives a value at each sample point. We aim to find a degree-\(n\) polynomial approximation, \(\mathcal{L}(f)\), such that \[\mathcal{L}(f)=\operatorname*{argmin}_{p\in\mathcal{P}_{n}^{1}}\sum_{i=1}^{M}| f(\mathbf{x}_{i})-p(\mathbf{x}_{i})|^{2}. \tag{2.1}\] Here \(\mathcal{P}^{1}_{n}\) denotes the space of degree-\(n\) univariate (\(d=1\)) polynomials. We denote by \(N:=n+1\) the total degrees of freedom and write \(\mathcal{L}(f)(x)=\sum_{j=1}^{N}c_{j}x^{j-1}\) where \(\{c_{i}\}_{1\leq i\leq N}\) are the monomial coefficients to be determined. The equation (2.1) can be formulated as a Vandermonde least-squares problem \[\mathbf{c}=\operatorname*{argmin}_{\mathbf{c}\in\mathbb{R}^{N}}\|\mathbf{A}\mathbf{c}-\mathbf{\tilde {f}}\|_{2}. \tag{2.2}\] Using the pseudoinverse, we can write the solution as \(\mathbf{c}=\mathbf{A}^{\dagger}\mathbf{\tilde{f}}=(\mathbf{A}^{*}\mathbf{A})^{-1}\mathbf{A}^{*}\tilde{f}\) where \(\mathbf{A}\) is an \(M\times N\) Vandermonde matrix with the \((i,j)\)th entry \(x_{i}^{j-1}\) for \(1\leq i\leq M\), \(1\leq j\leq N\), \(\mathbf{c}:=[c_{1},\ldots,c_{N}]^{T}\), and \(\mathbf{\tilde{f}}:=[f(x_{1}),\ldots,f(x_{M})]^{T}\). Since the sample points are distinct, \(\mathbf{A}\) is full rank and thus the solution \(\mathbf{c}\) exists and is unique. If \(N=M\), we have an interpolation problem, and \(\mathbf{A}\) is a square matrix. The solution is given by \(\mathbf{c}=\mathbf{A}^{-1}\mathbf{\tilde{f}}\). If \(N<M\), we have a least-squares problem, and \(\mathbf{A}\) is a tall rectangular matrix. The least-squares problem (2.2) has the same solution as the normal equation \[\mathbf{A}^{*}\mathbf{A}\mathbf{c}=\mathbf{A}^{*}\mathbf{\tilde{f}}. \tag{2.3}\] However, solving (2.3) is not recommended, as it is not stable, whereas (2.2) can be solved stably [19, Ch. 20]. In this paper, unless otherwise stated, we focus on the least-squares problem (2.2) and assume that \(N<M\). Ideally, we can find the coefficients of the polynomial approximation by solving the least-squares problem (2.2). However, the Vandermonde matrices are well known to be exponentially ill-conditioned [21] (unless the nodes are uniformly distributed on the unit circle). 
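This exponential ill-conditioning is easy to observe numerically. The short check below is an illustration only (it is not taken from [8]); equispaced points on \([-1,1]\) and \(M=N^{2}\) are assumed purely for demonstration, and the 2-norm condition number of the \(M\times N\) Vandermonde matrix is printed as the degree grows:

```python
import numpy as np

# Condition number of the Vandermonde matrix A with entries x_i^(j-1),
# built on M = N^2 equispaced sample points in [-1, 1].
for n in (5, 10, 20, 40):
    N = n + 1                    # degrees of freedom for a degree-n polynomial
    M = N * N                    # oversampling: M = N^2 sample points
    x = np.linspace(-1.0, 1.0, M)
    A = np.vander(x, N, increasing=True)   # columns 1, x, x^2, ..., x^n
    print(f"n = {n:2d}:  cond(A) = {np.linalg.cond(A):.2e}")
```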
The ill-conditioning of the Vandermonde matrix is due to the non-orthogonal nature of the monomial basis. A potential solution to the ill-conditioning issue is to treat the monomial basis as a Krylov subspace sequence, such that \[\operatorname{span}\{1,\mathbf{z},\mathbf{z}^{2},\ldots,\mathbf{z}^{n}\}=\operatorname{ span}\{\mathbf{q}_{1},\mathbf{Z}\mathbf{q}_{1},\mathbf{Z}^{2}\mathbf{q}_{1},\ldots,\mathbf{Z}^{n}\mathbf{q}_{ 1}\}=\mathcal{K}_{N}(\mathbf{Z},\mathbf{q}_{1}), \tag{2.4}\] where \(\mathbf{z}=[x_{1},\ldots,x_{M}]^{T}\in\mathbb{R}^{M}\), \(\mathbf{Z}=\operatorname{diag}(x_{1},\ldots,x_{m})\in\mathbb{R}^{M\times M}\), \(\mathbf{Z}^{j}\) denotes the pointwise power for \(0\leq j\leq n\) and \(\mathbf{q}_{1}=[1,\ldots,1]^{T}\in\mathbb{R}^{M}\). Based on this observation, in V+A we apply Arnoldi orthogonalization to the Krylov space \(\mathcal{K}_{N}(\mathbf{Z},\mathbf{q}_{1})\). By the Arnoldi process, we transform the ill-conditioned Vandermonde system in (2.3) into an optimally conditioned system, \[\mathbf{Q}^{*}\mathbf{Q}\mathbf{d}=\mathbf{Q}\mathbf{\tilde{f}},\qquad\text{where}\qquad\mathbf{Q}= \begin{bmatrix}\phi_{1}(x_{1})&\ldots&\phi_{N}(x_{1})\\ \vdots&\ddots&\vdots\\ \phi_{1}(x_{M})&\ldots&\phi_{N}(x_{M})\end{bmatrix}\quad\text{is orthonormal}\quad\mathbf{Q}^{*}\mathbf{Q}=M\mathbf{I}_{N} \tag{2.5}\] and \(\mathbf{\tilde{f}}\) is defined as before. \[\mathbf{\phi}:=\{\phi_{1},\phi_{2},\ldots,\phi_{N}\} \tag{2.6}\] is known as a set of **discrete orthogonal polynomials** or a discrete orthogonal basis. \(\mathbf{d}:=[d_{1},\ldots,d_{N}]^{T}\) denotes the coefficient vector related to the discrete orthogonal polynomials, such that \(\mathcal{L}(f)(x)=\sum_{j=1}^{N}d_{j}\phi_{j}(x)\). By construction, the discrete orthogonal basis \(\mathbf{\phi}\) spans the polynomial space \(\mathcal{P}^{1}_{n}\). We say that \(\phi_{j},\phi_{k}\in\mathcal{P}^{1}_{n}\) satisfies **discrete orthogonality** w.r.t. \(\mathbf{X}=\{x_{i}\}_{1\leq i\leq M}\), if \[\frac{1}{M}\sum_{i=1}^{M}\phi_{j}(x_{i})\phi_{k}(x_{i})=\delta_{j,k},\qquad 1 \leq j,k\leq N \tag{2.7}\] where \(\delta_{j,k}\) denotes the Kronecker delta. On real intervals, the discrete orthogonal polynomials are known to satisfy many algebraic properties of the continuous orthogonal polynomials. See Xu [35], and for results in the univariate case, we refer to [27]. ### Algorithm for Vandermonde with Arnoldi The Arnoldi algorithm, originally applied to finding eigenvalues, uses the modified Gram-Schmidt process to produce a sequence of orthogonal vectors. The orthogonal columns \(\{\mathbf{q}_{1},\mathbf{q}_{2},\,\ldots,\mathbf{q}_{N}\}\) are obtained by the following recurrence formula \[\mathbf{H}_{1,1}:=\frac{\mathbf{q}_{1}^{*}\mathbf{Z}\mathbf{q}_{1}}{M}=\frac{1}{M}\sum_{i=1}^{M }x_{i},\qquad\mathbf{H}_{k+1,k}\mathbf{q}_{k+1}:=\mathbf{Z}\mathbf{q}_{k}-\sum_{j=1}^{k}\mathbf{H}_ {j,k}\mathbf{q}_{j} \tag{2.8}\] where \(\|\mathbf{q}_{k+1}\|_{2}=\sqrt{M}\), \(k=1,\ldots,N-1\), \(\mathbf{Z}\) and \(\mathbf{q}_{1}\) are defined as in (2.4). \(\mathbf{H}_{j,k}\) are the coefficients of the recurrence formula and also denote the \((j,k)\)th entries of the matrix \(\mathbf{H}\). The Krylov space \(\mathcal{K}_{N}(\mathbf{Z},\mathbf{q}_{1})\) is orthogonalized by the decomposition \(ZQ_{-}=QH\) where \(Q\) is the matrix with columns \([q_{0},\ldots,q_{N}]\) and \(Q_{-}\) is the same matrix without the final column. By orthogonality, \(\mathbf{H}\) is a \(N\times(N-1)\) lower Hessenberg matrix. 
For real sample points \(\{x_{i}\}_{1\leq i\leq M}\), \(\mathbf{H}\) is tridiagonal and the Arnoldi algorithm is equivalent to the Lanczos algorithm. Unlike the standard Arnoldi process, \(\mathbf{q}_{k}\) in V+A satisfies the column scaling \(\|\mathbf{q}_{k}\|_{2}=\sqrt{M}\) for all \(k=1,\ldots,N\). The scaling ensures that the Euclidean norm of the V+A solution, \(\mathbf{d}\), is relatively constant as the number of sample points \(M\) increases. In the Vandermonde least-squares system (2.2), we form \(\mathbf{A}=[1,\mathbf{z},\ldots,\mathbf{z}^{n}]\) in one go and solve the badly-conditioned linear system. In V+A, however, we orthogonalize each column as soon as possible using the Arnoldi algorithm. Thus, the Arnoldi process gives us an optimally conditioned least-squares problem \(\min_{\mathbf{d}\in\mathbb{R}^{N}}\|\mathbf{Q}\mathbf{d}-\mathbf{\tilde{f}}\|_{2}\). The solution for the least-squares system exists and is unique,

\[\mathbf{d}=\frac{1}{M}\mathbf{Q}^{*}\mathbf{\tilde{f}} \tag{2.9}\]

where the \(\frac{1}{M}\) factor comes from the column scaling of \(\mathbf{Q}\) such that \(\mathbf{Q}^{*}\mathbf{Q}=M\mathbf{I}_{N}\). The solution of the Vandermonde system, \(\mathbf{c}\), and the solution of the V+A system, \(\mathbf{d}\), are related by \(\mathbf{d}=\frac{1}{M}\mathbf{Q}^{*}\mathbf{A}\mathbf{c}\). In MATLAB, \(\mathbf{d}\) can be obtained either by (2.9) or by the backslash command. The backslash command invokes the QR factorization for the least-squares problem and Gaussian elimination for the interpolation problem. For improved accuracy, we choose to obtain \(\mathbf{d}\) using the backslash command for the extra round of orthogonalization [8].

Once the vector of coefficients \(\mathbf{d}\) is obtained, the least-squares approximant \(p\) can be evaluated at a different set of points \(\mathbf{Y}=\{y_{i}\}_{1\leq i\leq K}\). Notice that the entries of the \(k\)th column of \(\mathbf{H}\) are the coefficients used in the recursion formula of the \((k+1)\)th discrete orthogonal polynomial, such that

\[\mathbf{H}_{k+1,k}\phi_{k+1}(x)=x\phi_{k}(x)-\sum_{j=1}^{k}\mathbf{H}_{j,k}\phi_{j}(x),\qquad 1\leq k\leq N-1. \tag{2.10}\]

In the polynomial evaluation process, we use the same recursion formula as in (2.10) but apply it on a different set of points, \(\mathbf{S}=\text{diag}(y_{1},\ldots,y_{K})\), such that

\[\mathbf{U}_{k+1}:=\frac{1}{\mathbf{H}_{k+1,k}}\left(\mathbf{S}\mathbf{U}_{k}-\sum_{j=1}^{k}\mathbf{H}_{j,k}\mathbf{U}_{j}\right),\quad 1\leq k\leq N-1, \tag{2.11}\]

with \(\mathbf{U}_{1}:=[1,\ldots,1]^{T}\in\mathbb{R}^{K}\) and \(\mathbf{H}\) given a priori by the Arnoldi process. The polynomials are evaluated at \(\mathbf{Y}\) by \(\mathbf{p}:=\mathbf{U}\mathbf{d}\), where \(\mathbf{U}:=[\mathbf{U}_{1},\ldots,\mathbf{U}_{N}]\in\mathbb{R}^{K\times N}\) and \(\mathbf{d}\in\mathbb{R}^{N}\) is obtained a priori by the Arnoldi process. The \(i\)th entry of \(\mathbf{p}\) represents \(\mathcal{L}f(y_{i})=\sum_{j=1}^{N}d_{j}\phi_{j}(y_{i})\) for \(1\leq i\leq K\). Note that the columns of \(\mathbf{U}\) are in general only approximately orthogonal, not orthogonal. To test the validity of the least-squares approximant, we usually evaluate the polynomial approximant on a much finer mesh with \(K\gg M\) and compare the evaluated values with \(f\). Throughout the paper, we measure the error of the approximant using this method, such that \(\|f-\mathcal{L}(f)\|_{\Omega}\approx\|f(y_{i})-\mathcal{L}f(y_{i})\|_{\mathbf{Y}}\).
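For concreteness, the fitting step (2.8)-(2.9) and the evaluation step (2.10)-(2.11) can be sketched in a few lines of NumPy. This is an illustrative reimplementation under the conventions above (column scaling \(\|\mathbf{q}_{k}\|_{2}=\sqrt{M}\)), not the MATLAB Algorithms 2 and 3 of Appendix A; classical Gram-Schmidt is applied twice, as discussed below.

```python
import numpy as np

def va_fit(x, fx, N):
    """Vandermonde-with-Arnoldi fit: returns (d, H) for a degree-(N-1) approximant.
    Columns of Q are scaled so that ||q_k||_2 = sqrt(M), i.e. Q^T Q = M*I."""
    M = len(x)
    Q = np.zeros((M, N))
    H = np.zeros((N, N - 1))
    Q[:, 0] = 1.0
    for k in range(N - 1):
        q = x * Q[:, k]                      # multiply by Z = diag(x), cf. (2.8)
        for _ in range(2):                   # classical Gram-Schmidt, repeated twice
            h = Q[:, :k + 1].T @ q / M       # projection coefficients onto q_1..q_{k+1}
            q = q - Q[:, :k + 1] @ h
            H[:k + 1, k] += h                # accumulate into the kth column of H
        H[k + 1, k] = np.linalg.norm(q) / np.sqrt(M)
        Q[:, k + 1] = q / H[k + 1, k]        # enforce ||q_{k+1}||_2 = sqrt(M)
    d = np.linalg.lstsq(Q, fx, rcond=None)[0]  # least-squares solve, cf. (2.9)
    return d, H

def va_eval(d, H, y):
    """Evaluate the fitted approximant at new points y via the recurrence (2.11)."""
    N = len(d)
    U = np.zeros((len(y), N))
    U[:, 0] = 1.0
    for k in range(N - 1):
        u = y * U[:, k] - U[:, :k + 1] @ H[:k + 1, k]
        U[:, k + 1] = u / H[k + 1, k]
    return U @ d

# Example: fit f(x) = x*cos(10x) on equispaced points of [-1, 1] and check the error.
f = lambda t: t * np.cos(10 * t)
N = 40
x = np.linspace(-1, 1, N * N)
d, H = va_fit(x, f(x), N)
y = np.linspace(-1, 1, 5000)
print(np.max(np.abs(f(y) - va_eval(d, H, y))))
```

Because the columns of \(\mathbf{Q}\) are numerically orthogonal, the least-squares solve in `va_fit` remains well conditioned even when the plain Vandermonde matrix for the same sample points is numerically singular.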
**Algorithm and Costs:** The 1-dimensional V+A fitting and evaluation is first implemented in [8] using less than 15 lines of MATLAB code. We made slight variations to the code to improve its efficiency (Algorithms 2 and 3 in Appendix A). Instead of using modified Gram-Schmidt (MGS), we use classical Gram-Schmidt (CGS) repeated twice, as it gives excellent orthogonality without loss in speed in practice, as done also by Hokanson [20]. Doing CGS twice creates a discrete orthogonal basis that satisfies \(\|\mathbf{Q}^{*}\mathbf{Q}-\mathbf{I}\|_{F}=\mathcal{O}(MN^{3/2}\mathbf{u})\), where \(\mathbf{u}\) is the unit roundoff. This is a good enough bound for polynomial approximation problems, which usually have dimensions of \(N\lesssim 10^{3}\) and \(M\lesssim 10^{6}\).

### Numerical Examples for Univariate V+A

In addition to the four applications provided in the first paper on V+A [8], we consider three further examples of V+A least-squares polynomial approximations.

_Example 1 (Disjoint Domain)_ Approximating \(f(x)=x\cos(10x)\) using \(M=N^{2}\) equispaced sample points in a disjoint domain \([-3,-1]\cup[3,4]\). This example challenges the algorithm's ability to handle a disjoint domain. We compare the V+A method to the Vandermonde method in the left plot of Figure 1. Initially, the two approximations give the same error. However, the Vandermonde system has an error stagnating at \(10^{-4}\) for \(N>30\) due to ill-conditioning of \(\mathbf{A}\), while the V+A method gives an error reduction down to \(10^{-10}\) as \(N\) increases.

_Example 2 (Non-Smooth Function)_ Approximating \(f(x)=|x|\) in \([-1,1]\) using \(M=N^{2}\log N\) random sample points. This example tests the algorithm on approximating a non-smooth function with random uniform sample points. Similar to _Example 1_, V+A also yields a much better approximation than the Vandermonde approximation (the middle plot of Figure 1). The error for V+A is also in line with the error of the best polynomial approximation (\(\sim\frac{0.28}{n}\)) [28].

_Example 3 (Infinite Domain)_ Approximating \(f(x)=e^{x}\) in \([-1000,-0.001]\) using \(M=N^{2}\) logarithmically equispaced points. This example focuses on sample points generated by a different measure over a wide interval. The Vandermonde method fails for this problem, but the V+A algorithm gives a stable error reduction for all degrees (the right plot of Figure 1). Unlike some methods used in [30, Sec. 4], which involve a transplantation of the domain, the V+A approximation is carried out directly on \([-1000,-0.001]\). This example illustrates that the V+A algorithm is able to adapt to different domains and different discrete measures.

Figure 1: The V+A approximation is computed using the univariate V+A method (Algorithm 2 in Appendix A). The Vandermonde approximation is computed using polyfit/polyval provided in MATLAB.

### Stability and Accuracy of Univariate Vandermonde with Arnoldi

In V+A, the columns of \(\mathbf{Q}\) form a discrete orthogonal basis with respect to a discrete measure \(\mathbf{X}=\{x_{i}\}_{1\leq i\leq M}\). With enough sample points, there is a link between the discrete orthogonal polynomials and the continuous orthogonal polynomials. We give a sketch of proof for the convergence of the univariate V+A algorithm in this subsection.

#### 2.3.1 Discrete Orthogonal Polynomial

It is proven in [36] that the discrete orthogonal polynomials generated by equispaced sample points in any real interval are well approximated by the scaled Legendre polynomials with \(M=\mathcal{O}(N^{2})\) sample points.
In other words, for any real interval \(\Omega=[a,b]\), \[\phi_{j}(x)\approx\sqrt{2j-1}\,L_{j}(\eta(x)),\qquad\forall x\in\Omega,\quad \forall 1\leq j\leq N, \tag{2.12}\] where \(\eta(x):=\frac{2(x-a)}{b-a}-1\) is the linear map from \([a,b]\) to \([-1,1]\). \(L_{j}\) is the standard \(j\)th Legendre polynomial, \[L_{1}(\check{x})=1,\quad L_{2}(\check{x})=\check{x},\quad L_{3}(\check{x})= \frac{1}{2}(3\check{x}^{2}-1),\quad L_{4}(\check{x})=\frac{1}{2}(5\check{x}^{ 3}-3\check{x}),\ldots\] for \(\check{x}\in[-1,1]\). The Legendre polynomials attain their suprema at \(\check{x}=1\) with \(L_{j}(1)=1\) and satisfy continuous orthogonality, such that \[\int_{[-1,1]}L_{j}(\check{x})L_{k}(\check{x})d\check{x}=\frac{2}{\gamma_{j} \gamma_{k}}\delta_{j,k}\text{ with }\gamma_{j}:=\sqrt{2j-1},\qquad\forall j,k=1,2,\ldots. \tag{2.13}\] Similar to the three-term recurrence relation of the Legendre polynomials, there is also a three-term recurrence relation for the discrete orthogonal polynomials on real intervals. More details can be found in [36]. To illustrate the relationship between the discrete orthogonal polynomials and the scaled Legendre polynomials, we plot both in Figure 2. In the left plot of Figure 2, the discrete orthogonal polynomial \(\phi_{30}\) is generated using \(M=50\) equispaced sample points. Notice that \(\phi_{30}\) takes large values inside the interval, especially near the endpoints, and does not approximate the scaled Legendre polynomial well at all. This behaviour is consistent with the theory, since we only used \(M=50\) sample points and did not meet the \(N^{2}\) sample complexity requirement. However, if we increase the number of sample points to \(M=30^{2}\), the difference between \(\phi_{30}\) and the \(30\)th scaled Legendre polynomial stays well bounded throughout the interval (as illustrated by the right plot of Figure 2). 

#### 2.3.2 Lebesgue Constant and Near-Optimal Approximation

The result that discrete orthogonality resembles continuous orthogonality builds the foundation for the convergence of the V+A least-squares approximation. In fact, we can prove that the error of the V+A least-squares approximation is closely related to the suprema of the discrete orthogonal polynomials. We use the Lebesgue constant to build this proof in Theorem 1. 

**Theorem 1**.: _Let \(f:\Omega\to\mathbb{R}\) be a continuous univariate function on \(\Omega\subset\mathbb{R}\), and let \(\mathcal{L}\) denote the V+A least-squares operator with \(N\) degrees of freedom using sample points \(\mathbf{X}=\{x_{i}\}_{1\leq i\leq M}\subset\Omega\). Let \(\Lambda_{\mathbf{X}}\) be defined as in (2.15). Then,_ \[\|f-\mathcal{L}(f)\|_{\Omega}\leq\left(1+N\big{\|}\sup_{1\leq j\leq N}|\phi_{ j}(x)|\big{\|}_{\Omega}\right)\|f-p^{*}\|_{\Omega}, \tag{2.14}\] _where \(p^{*}\in\mathcal{P}^{1}_{n}\) is the best minimax polynomial approximation on \(\Omega\), and \(\phi_{j}\) is the \(j\)th discrete orthogonal polynomial of the domain._ 

Figure 2: The discrete orthogonal polynomials and the scaled Legendre polynomials generated by \(M=50\) (left) or \(M=900\) (right) equispaced points in \([-5,10]\). 

Proof.: 
The Lebesgue constant \(\Lambda_{\mathbf{X}}\) is defined as the norm of the linear operator \(\mathcal{L}\) [31, 32], \[\Lambda_{\mathbf{X}}:=\min\{c>0:\|\mathcal{L}(f)\|_{\Omega}\leq c\|f\|_{\Omega},\ \forall f\in\mathcal{C}(\Omega)\}=\max_{\|f\|_{\Omega}=1}\|\mathcal{L}(f)\|_{ \Omega}. \tag{2.15}\] Since \(\mathcal{L}\) is the V+A least-squares operator, we have \[\mathcal{L}(f)(\mathbf{x})=\sum_{i=1}^{N}d_{i}\phi_{i}(\mathbf{x})=\frac{1}{M}\mathbf{\phi }(\mathbf{x})\mathbf{Q}^{*}\mathbf{\tilde{f}} \tag{2.16}\] where \(\mathbf{\phi}(\mathbf{x})=[\phi_{1}(\mathbf{x}),\dots,\phi_{N}(\mathbf{x})]\) is a \(1\times N\) row vector of basis functions and the last equality comes from (2.9). In view of (2.15), the Lebesgue constant measures the infinity \(\Omega\)-norm of the worst least-squares approximant over all normalized continuous functions on \(\Omega\) with \(\|f\|_{\Omega}=1\). We define the Lebesgue function \(\lambda_{\mathbf{X}}(\mathbf{x}):=\sum_{i=1}^{M}|l_{i}(\mathbf{x})|\), where \(l_{i}(\mathbf{x})\in\mathcal{P}_{n}^{1}\) is the Lagrange basis such that \(l_{i}(\mathbf{x}_{k})=\delta_{i,k}\) for all sample points with \(1\leq i,k\leq M\). The Lebesgue function can be interpreted as the worst function in \(\mathcal{C}(\Omega)\) (with infinity \(\Omega\)-norm equal to \(1\)) for the least-squares problem with respect to the given set of sample points. Thus, it follows from the definition that \(\Lambda_{\mathbf{X}}=\left\|\lambda_{\mathbf{X}}(\mathbf{x})\right\|_{\Omega}.\) Using \(l_{i}\in\mathcal{P}_{n}^{1}\) and therefore \(l_{i}=\mathcal{L}(l_{i})\) for all \(1\leq i\leq M\), we write \(\lambda_{\mathbf{X}}\) and \(\Lambda_{\mathbf{X}}\) as \[\lambda_{\mathbf{X}}(\mathbf{x})=\sum_{i=1}^{M}\left|\mathcal{L}(l_{i})(\mathbf{x})\right| \stackrel{{(2.16)}}{{=}}\frac{1}{M}\sum_{i=1}^{M}\left| \mathbf{\phi}(\mathbf{x})\mathbf{Q}^{*}\tilde{e}_{i}\right|=\frac{1}{M}\left\|\mathbf{\phi}( \mathbf{x})\mathbf{Q}^{*}\right\|_{\infty},\qquad\Lambda_{\mathbf{X}}=\frac{1}{M}\sup_{ \mathbf{x}\in\Omega}\left\|\mathbf{\phi}(\mathbf{x})\mathbf{Q}^{*}\right\|_{\infty} \tag{2.17}\] where \(\tilde{e}_{i}\in\mathbb{R}^{M}\) is the vector with \(1\) in the \(i\)th entry and \(0\) elsewhere. \(\mathbf{\phi}(\mathbf{x})\mathbf{Q}^{*}\) is a \(1\times M\) matrix and \(\left\|\mathbf{\phi}(\mathbf{x})\mathbf{Q}^{*}\right\|_{\infty}\) denotes the sum of the absolute values of its entries. By a standard argument, the best polynomial approximant \(p^{*}\) satisfies \(\|f-\mathcal{L}(f)\|_{\Omega}\leq\|f-p^{*}\|_{\Omega}+\|p^{*}-\mathcal{L}(f)\|_ {\Omega}\), and since \(p^{*}=\mathcal{L}(p^{*})\) and \(\mathcal{L}\) is linear, we have \(\|p^{*}-\mathcal{L}(f)\|_{\Omega}=\|\mathcal{L}(p^{*}-f)\|_{\Omega}\leq \Lambda_{\mathbf{X}}\|p^{*}-f\|_{\Omega}\), hence \[\|f-\mathcal{L}(f)\|_{\Omega}\leq(1+\Lambda_{\mathbf{X}})\|f-p^{*}\|_{\Omega}. \tag{2.18}\] Combining (2.17) and (2.18), we have \[\|f-\mathcal{L}(f)\|_{\Omega} \leq \left(1+\frac{1}{M}\sup_{\mathbf{x}\in\Omega}\|\mathbf{\phi}(\mathbf{x})\mathbf{Q }^{*}\|_{\infty}\right)\|f-p^{*}\|_{\Omega}. 
\tag{2.19}\] Using the triangle inequality and \[\|\mathbf{Q}^{*}\|_{\infty}\leq\sqrt{M}\|\mathbf{Q}^{*}\|_{2}\leq M,\] we deduce that \[\frac{1}{M}\sup_{\mathbf{x}\in\Omega}\|\mathbf{\phi}(\mathbf{x})\mathbf{Q}^{*}\|_{\infty}\leq \frac{1}{M}\|\mathbf{\phi}(\mathbf{x})\|_{\Omega}\|\mathbf{Q}^{*}\|_{\infty}\leq\left\|\sum_ {j=1}^{N}|\phi_{j}(\mathbf{x})|\right\|_{\Omega}\leq N\left\|\sup_{1\leq j\leq N} |\phi_{j}(\mathbf{x})|\right\|_{\Omega}.\] Substituting into (2.19) gives the required result. 

Theorem 1 gives us some insight into the convergence mechanism of V+A. Firstly, Theorem 1 emphasizes the importance of choosing a good basis in the domain. If we use a bad basis, for instance the Vandermonde system, (2.19) becomes \(\|f-\mathcal{L}(f)\|_{\Omega}\leq\left(1+\sup_{\mathbf{x}\in\Omega}\|\mathbf{\hat{\psi} }(\mathbf{x})\mathbf{A}^{\dagger}\|_{\infty}\right)\|f-p^{*}\|_{\Omega}\), where \(\mathbf{\hat{\psi}}=[\hat{\psi}_{1}(\mathbf{x}),\dots,\hat{\psi}_{N}(\mathbf{x})]^{T}\) is the Vandermonde (monomial) basis. The condition number \(\kappa_{2}(\mathbf{A})\) (and hence \(\|\mathbf{A}^{\dagger}\|\)) quickly grows beyond the inverse of the machine epsilon as \(N\) increases, yielding a large upper bound for \(\|f-\mathcal{L}(f)\|_{\Omega}\). Secondly, provided with a good basis, the error of the least-squares approximation can take large values only when the discrete orthogonal polynomials take large values in \(\Omega\). If we use \(M=\mathcal{O}(N^{2})\) equispaced sample points in real intervals, from the earlier result (2.12) we know that \[\sup_{1\leq j\leq N}|\phi_{j}(\mathbf{x})|\approx\sup_{1\leq j\leq N}|\sqrt{2j-1}L_ {j}(\eta(x))|=\sup_{1\leq j\leq N}\sqrt{2j-1}=\sqrt{2N-1}.\] Using (2.14), we deduce that the error of the V+A least-squares approximation scales polynomially with the error of the best minimax approximation, \(\|f-\mathcal{L}(f)\|_{\Omega}\leq\big{(}1+\mathcal{O}(N\sqrt{N})\big{)}\|f-p^{ *}\|_{\Omega}\). Namely, the V+A least-squares approximation is near-optimal. The results in this subsection can be extended to general domains and multivariate approximations. In Section 4, we give a full proof of the sample complexity and convergence of V+A in general domains. 

## 3 Multivariate Vandermonde with Arnoldi

The V+A method can be readily extended to higher dimensions (\(d>1\)). For \(\mathbf{x}=(x_{(1)},\ldots,x_{(d)})\in\Omega\subset\mathbb{R}^{d}\), we use the standard multi-index notation to write the multivariate monomials \[\mathbf{x}^{\mathbf{\alpha}}=x_{(1)}^{\alpha_{1}}x_{(2)}^{\alpha_{2}}\ldots x_{(d)}^{ \alpha_{d}},\qquad\deg(\mathbf{x}^{\mathbf{\alpha}})=\alpha_{1}+\ldots+\alpha_{d}=:| \mathbf{\alpha}|. \tag{3.1}\] There are two common types of multivariate polynomial spaces: the total degree polynomial space \(\mathcal{P}_{n}^{d,(tot)}=\operatorname{span}\left\{\prod_{r=1}^{d}x_{(r)}^{ \alpha_{r}}\right\}_{|\mathbf{\alpha}|\leq n}\) and the maximum degree polynomial space \(\mathcal{P}_{n}^{d,(max)}=\operatorname{span}\left\{\prod_{r=1}^{d}x_{(r)}^{ \alpha_{r}}\right\}_{\alpha_{r}\leq n}\). The total number of degrees of freedom \(N\) is defined as the dimension of the multivariate polynomial space, \(\dim(\mathcal{P}_{n}^{d,(tot)})=\binom{n+d}{n}\) and \(\dim(\mathcal{P}_{n}^{d,(max)})=(n+1)^{d}.\) For bivariate polynomials, \(N:=\frac{(n+1)(n+2)}{2}\) for the total degree polynomial space. In this paper, we focus mainly on the case \(d=2\) and consider \(\mathcal{P}_{n}^{2}\) as the total degree polynomial space with \(N:=\dim(\mathcal{P}_{n}^{2})=\mathcal{O}(n^{2})\). 
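As a small illustration of these dimension counts, the following MATLAB snippet (our own, not part of the paper's appendices) enumerates the bivariate total-degree exponents \((\alpha_{1},\alpha_{2})\) with \(|\mathbf{\alpha}|\leq n\), grouped by total degree, and checks that their number equals \(\binom{n+2}{2}\); the within-degree order shown (decreasing power of \(x_{(1)}\)) coincides, for \(d=2\), with the grevlex order described in the next paragraph.

```matlab
% Enumerate bivariate total-degree exponents and verify dim(P_n^2) = nchoosek(n+2,2).
n = 3; d = 2;
alphas = zeros(0,2);
for deg = 0:n
    for a1 = deg:-1:0
        alphas(end+1,:) = [a1, deg-a1];  % row = exponents of x_(1)^a1 * x_(2)^a2
    end
end
N = size(alphas,1);
assert(N == nchoosek(n+d,d));            % N = (n+1)(n+2)/2 = 10 for n = 3
disp(alphas)
```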
Unlike in the 1D Arnoldi algorithm, the multivariate monomial basis does not correspond to a Krylov subspace. Also, the multivariate monomial basis has no canonical ordering, i.e., no universal order in which to list the columns of a multivariate Vandermonde matrix. Thus, we define two specific ordering strategies to form the columns of a Vandermonde matrix. We use the _lexicographic ordering_ for maximum degree polynomial spaces and the _grevlex ordering_ for total degree polynomial spaces. The _lexicographic ordering_ first compares exponents of \(x_{(1)}\) in the monomials, and in the case of equality then compares exponents of \(x_{(2)}\), and so forth. The _grevlex ordering_ compares the total degree \(|\mathbf{\alpha}|\) first, then uses a reverse lexicographic order for monomials of the same total degree. Using such orderings, we form an \(M\times N\) least-squares problem \(\mathbf{c}=\operatorname{argmin}_{\mathbf{c}\in\mathbb{R}^{N}}\|\mathbf{Ac}-\mathbf{\tilde{f}} \|_{2}\), where the \((i,j)\)th entry of \(\mathbf{A}\) is the \(j\)th multivariate monomial evaluated at \(\mathbf{x}_{i}\), the \(i\)th entry of \(\mathbf{\tilde{f}}\in\mathbb{R}^{M}\) is \(f(\mathbf{x}_{i})\), and \(\mathbf{c}\in\mathbb{R}^{N}\) is the vector of coefficients of the multivariate approximation with respect to the multivariate basis. In the multivariate V+A algorithm, a new column is formed by carefully selecting one coordinate of the sample points \(\{x_{(r)_{i}}\}_{i=1}^{M}\) to form a diagonal matrix and multiplying it with a particular preceding column. We then orthogonalize this new column against the previous columns using the Gram-Schmidt process. The full algorithm for multivariate V+A is provided in [20] and also in Appendix B for completeness. 

### Numerical Examples for Multivariate V + A

We give a few numerical examples of V+A approximation of bivariate functions. The approximation errors are measured on an equispaced mesh \(\mathbf{Y}=\{\mathbf{y}_{i}\}_{1\leq i\leq K}\) with \(K=10M\), that is, finer than the sample points. In Figure 3, we plot the results for approximating the function \(f(x_{(1)},x_{(2)})=\sin(\frac{x_{(1)}^{2}+x_{(2)}^{2}+x_{(1)}x_{(2)}}{5})\) in a tensor-product domain using \(M=N^{2}\) equispaced points. The Vandermonde method and the V+A method give the same error reductions for the first two iterations, as illustrated in the right plot of Figure 3. However, the Vandermonde least-squares system quickly becomes highly ill-conditioned at higher degrees, causing the error of the Vandermonde method to stagnate at \(\mathcal{O}(1)\). On the other hand, the multivariate V+A method gives a stable error reduction down to \(10^{-10}\) for \(N=500\). Also, as shown in the middle plot of Figure 3, the error obtained from the multivariate V+A is small throughout the domain, with no spikes near the boundary. This is because the discrete orthogonal basis generated by the Arnoldi orthogonalization is well approximated by the continuous orthogonal basis with \(M=\mathcal{O}(N^{2})\) points. 

In the second example, we approximate the same function on an elliptical domain. We sample the points using the rejection sampling method. Namely, we enclose the domain \(\Omega\) in a hypercube domain \(\Omega_{cube}=[0,4]\times[0,6]\) and draw independent and identically distributed (i.i.d.) random samples from the uniform probability measure on \(\Omega_{cube}\). The acceptance rate for our domain is \(\text{Area}(\Omega)/\text{Area}(\Omega_{cube})\approx 59.59\%\). 
We end the rejection sampler once we have a total of \(M=N^{2}\log N\) sample points. More details on the random sampling are discussed in Section 4.1. We plot the bivariate approximation result for the irregular domain in Figure 4. Again, the Vandermonde method fails for any \(N>100\), but the V+A method gives a stable error reduction down to \(10^{-13}\) for \(N=400\). Since the Arnoldi orthogonalization generates a basis on the domain itself, the error of the least-squares approximation does not worsen when we switch from the tensor-product domain to the irregular domain. This result shows that V+A is highly adaptive for approximating functions on general domains. 

In the left plot of Figure 5, we compare the V+A approximation with the Vandermonde approximation on a bounding tensor domain (V+bounding tensor). In the V+bounding tensor approximation, we enclose the irregular domain in a hypercube domain, \(\Omega\subset\Omega_{cube}\), and create an orthonormal Vandermonde basis on the bounding tensor-product domain. This is a common technique used in polynomial frame approximations, and the V+bounding tensor approximation is one of the simplest examples of frame approximations. Using the same number of sample points \(M=N^{2}\log N\), the V+bounding tensor approximation is less accurate than the V+A approximation at all degrees. This is because, in the V+bounding tensor approximation, the sample points used are distributed over the whole tensor-product domain \(\Omega_{cube}\). Only \(\sim 59.59\%\) of the sample points are inside the domain \(\Omega\), while in the V+A approximation, all sample points used are within \(\Omega\) and contribute to the approximation. Even if we increase the number of sample points for V+bounding tensor to \(M=\frac{1}{0.5959}N^{2}\log N\), such that we have approximately the same number of sample points within \(\Omega\), the V+bounding tensor approximation with more sample points still gives a worse approximation than the V+A method. This is because the orthonormal basis generated in the V+bounding tensor approximation is orthonormal for the bounding tensor-product domain; when restricted to an irregular subdomain, an orthonormal basis is no longer orthogonal. Numerical computations then face potential difficulties due to the near-linear dependence of the truncated approximation system. This difficulty is also discussed in polynomial frame approximation, which likewise uses the orthogonal polynomials of a bounding tensor-product domain. There is an extensive literature in the field of frame theory on working around this difficulty; for instance, a well-conditioned approximation can be obtained via regularization [1]. More details on frame approximation are in [2, 3, 1]. 

Last but not least, we test the V+A approximation on a non-differentiable function \(f(x_{(1)},x_{(2)})=|x_{(1)}-2||x_{(2)}-3|\) on the irregular domain. We found that the error reduction for the non-smooth function is much slower than the error reduction for the smooth function (Figure 5). The slow convergence in the error reduction is not caused by the V+A method; it is because polynomials are poorly suited to approximating non-smooth functions. In other words, no polynomial in the space \(\mathcal{P}_{n}^{d}\) can give a high rate of error reduction when approximating non-differentiable functions as \(N\) increases. Additionally, the smoothness requirement on the function is stricter for higher dimensional domains. 
As we observe from the right plot of Figure 5, the 2D least-squares approximation of a non-smooth function exhibits a slower error reduction than the 1D least-squares approximation of a non-smooth function. The theoretical results for convergence and explanations of these phenomena are discussed in the next section. ## 4 Sample Complexity and Convergence of Multivariate V+A Recall that in Section 2.3, we showed that the deviation from the minimax approximant in the V+A least-squares approximation (i.e. the error in the V+A least-squares approximation) can be controlled by the size of the discrete orthogonal polynomials. We also showed that the size of the discrete orthogonal polynomials can be controlled by the size of the continuous orthogonal polynomials, given enough sample complexity. In simple domains (e.g., real intervals or tensor product domains), we know that the continuous orthogonal polynomials with respect to uniform measure are the Legendre polynomials and the continuous orthogonal polynomials with respect to Chebyshev points are the Chebyshev polynomials. Therefore, we can use the properties of these polynomials to find the sample complexity requirement and induce an error bound for V+A approximation. However, for a general \(d\)-dimensional domain, the continuous orthogonal basis is in general unknown. In this section, we introduce a general approach known as admissible mesh [10, 34] to develop two near-optimal sampling methods (i.e., deterministic equispaced mesh and randomized mesh). We prove that, in many domains, using \(M=\mathcal{O}(N^{2})\) equispaced points or \(M=\mathcal{O}(N^{2}\log N)\) random sample points, the V+A approximant with \(N\)-degrees of freedom is near-optimal. Moreover, we prove that, when the sample complexity condition is satisfied, the error of the V+A approximation converges at a spectral rate with the maximum polynomial degree \(n\). More specifically, \[\|f-\mathcal{L}(f)\|_{\Omega}=o(n^{d-k}) \tag{4.1}\] where \(\|f-\mathcal{L}(f)\|_{\Omega}\) is the error of the V+A approximation, \(d\) is the dimension of the variables (i.e., \(x\in\mathbb{R}^{d}\)), and \(k\) is the highest order of derivative of \(f\) that exists and is continuous (i.e., smoothness). ### Choice of sample points **Deterministic sample points:** Let \(\Omega\in\mathbb{R}^{d}\) be a compact set. Let \(\mathbf{X}=\{\mathbf{x}_{i}\}_{1\leq i\leq M}\) be a set of \(M\) distinct sample points, and \(f(\mathbf{x}_{i})\) be bounded for \(1\leq i\leq M\), then there exists a unique polynomial \(\mathcal{L}(f)\in\mathcal{P}^{d}_{n}\) that satisfies the least-squares system in (2.1) [10, Thm 1]. We define a linear map \(\mathbb{P}_{d}\): \(p\to p|_{\mathbf{X}}\) where \(p|_{\mathbf{X}}\) denotes the restriction of \(p\) to \(\mathbf{X}\). \(\mathbb{P}_{d}\) is one-to-one, bounded and continuous [10]. Thus, for each \(\mathbf{X}\) and \(\mathcal{P}^{d}_{n}\), there exists a (smallest) constant \(C(\mathbf{X},\Omega)\) such that \[\|p\|_{\Omega}\leq C(\mathbf{X},\Omega)\|p\|_{\mathbf{X}},\qquad\forall p\in\mathcal{P }^{d}_{n}. \tag{4.2}\] \(C(\mathbf{X},\Omega)\) depends on the distribution and the number of sample points, the degree of the polynomial \(n\) and the polynomial space \(\mathcal{P}^{d}_{n}\). \(\mathbf{X}\) is called an _admissible mesh_ if \(C(\mathbf{X},\Omega)\) is bounded as \(N\) grows. The full definition of the admissible mesh is in Appendix C. We use Theorem 2 to control the size of \(C(\mathbf{X},\Omega)\) (i.e., construct an admissible mesh). 
The proof of this theorem can be found in [10, Thm 5]. **Theorem 2**.: _Let \(p(x)\in\mathcal{P}^{d}_{n}\) be a \(d\)-dimensional and degree-\(n\) multivariate polynomial. Let the domain \(\Omega\in\mathbb{R}^{d}\) be a compact set that admits the Markov inequality,_ \[\|D^{\mathbf{\alpha}}p\|_{\Omega}\leq C_{M}{}^{|\mathbf{\alpha}|}[\text{deg}(p)]^{r| \mathbf{\alpha}|}\|p\|_{\Omega}, \tag{4.3}\] _where \(C_{M},r\) are positive constants, \(D^{\mathbf{\alpha}}p:=\frac{\partial^{\alpha_{1}}}{\partial x_{(1)}}\ldots\frac{ \partial^{\alpha_{d}}}{\partial x_{(d)}}p\) is the total derivative and \(\mathbf{\alpha}:=(\alpha_{1},\ldots,\alpha_{d})\) is the multi-index with \(|\alpha|=\alpha_{1}+\ldots+\alpha_{d}\). Let \(\mathbf{X}=\{x_{i}\}_{1\leq i\leq M}\) be a mesh that satisfies the property_ \[\text{for every $z\in\Omega$, there exists an $x_{i}\in\mathbf{X}$ such that $\|z-x_{i}\|_{\infty}\leq\frac{2c_{1}}{C_{M}n^{r}}$} \tag{4.4}\] _where \(c_{1}\) is a constant chosen to be small enough such that \(2Nc_{1}e^{Nc_{1}}<1\). Then, \(\mathbf{X}\) is an admissible mesh. The cardinality of \(\mathbf{X}\) is \(\mathcal{O}(n^{dr})=\mathcal{O}(N^{r})\)._ **Remark 1**.: _Theorem 2 provides us with a way to discretize the domain and choose sample points that can control the size of the discrete orthogonal polynomial. For \(\Omega\in\mathbb{R}^{d}\), we enclose \(\Omega\) in a \(d\)-dimensional hypercube \(\Omega_{cube}\in\mathbb{R}^{d}\). We then cover \(\Omega_{cube}\) with equispaced grids dense enough such that the condition in (4.4) is satisfied. This gives us \(\mathcal{O}(n^{dr})=\mathcal{O}(N^{r})\) number of nodes for the admissible mesh. Note that the deterministic sampling method can be further improved in two ways. Firstly, instead of sampling in the hypercube domain \(\Omega_{cube}\). One could sample directly on \(\Omega\) using rejection sampling. Details on the randomized sampling method are discussed in Theorem 3. Secondly, instead of taking equispaced samples, we can choose a set of sample points with a 'near-optimal weight'. Details on near-optimal sampling strategy are discussed in Section 5._ There are plenty of real domains that satisfy the Markov inequality (4.3). We give a few examples following [10]. * Intervals in \(\mathbb{R}\) satisfy the Markov property with \(r=2\). We can form an admissible mesh of cardinality \(M=\mathcal{O}(n^{2})\) with equispaced sample points. * A convex body or unions of finite convex bodies in \(\mathbb{R}^{d}\) satisfy the Markov property with \(r=2\). We can form an admissible mesh of cardinality \(M=\mathcal{O}(n^{2d})=\mathcal{O}(N^{2})\) with equispaced sample grids. **Random sample points:** In a high-dimensional domain of which we have no explicit knowledge, it is sometimes easier to generate sample points randomly from the domain by rejection sampling. We prove that with \(\mathcal{O}(N^{2}\log N)\) random sample points generated i.i.d. from a uniform measure, we can also form the admissible mesh with high probability. **Theorem 3**.: _Suppose that \(\Omega\) satisfies the condition (4.3) in Theorem 2 and \(\mathbf{X}\) satisfies (4.4). Then, \(\mathbf{X}\) is a deterministic admissible mesh with cardinality \(M\). Let \(\mathbf{\tilde{X}}=\{\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\ldots,\mathbf{\tilde{x}}_ {\tilde{M}}\}\) be a set of points independently and uniformly sampled from \(\Omega\). 
If \(\tilde{M}\geq(c_{r}+1)M\log(M)\), where \(c_{r}\) is some positive constant, then \(\mathbf{\tilde{X}}\) is an admissible mesh with probability at least \(1-M^{-c_{r}}\)._ 

Proof.: We prove this theorem using a similar approach to [34, Thm 4.1]. We aim to show that, with sufficiently large \(\tilde{M}\), \(\mathbf{\tilde{X}}\) satisfies the condition in (4.4) with high probability. For notational simplicity, we denote \(E:=\frac{c_{1}}{C_{M}n^{r}}\) and rewrite the condition as \(\mathbf{\tilde{X}}\cap\bar{B}_{E}(\mathbf{x})\neq\varnothing\) for all \(\mathbf{x}\in\mathbf{X}\), where \(\bar{B}_{E}(\mathbf{x})\) denotes the closed ball with center \(\mathbf{x}\) and radius \(E\); indeed, if every point of the deterministic mesh \(\mathbf{X}\) has a point of \(\mathbf{\tilde{X}}\) within distance \(E\), then by the triangle inequality every \(z\in\Omega\) has a point of \(\mathbf{\tilde{X}}\) within distance \(\frac{2c_{1}}{C_{M}n^{r}}\), so \(\mathbf{\tilde{X}}\) satisfies (4.4). Since \(\{\mathbf{\tilde{x}}_{i}\}_{1\leq i\leq\tilde{M}}\) are i.i.d. samples from the uniform distribution, we have, for each \(\mathbf{x}\in\mathbf{X}\), \[\mathbb{P}\left[\mathbf{\tilde{X}}\cap\bar{B}_{E}(\mathbf{x})\neq\varnothing\right]=1 -\left(\mathbb{P}\left[\mathbf{\tilde{x}}_{1}\notin\bar{B}_{E}(\mathbf{x})\right] \right)^{\tilde{M}}\geq 1-\left(1-M^{-1}\right)^{\tilde{M}}\geq 1-e^{- \tilde{M}/M}.\] By taking \(\tilde{M}=(c_{r}+1)M\log(M)\) with \(c_{r}>0\) and applying a union bound over the \(M\) points of \(\mathbf{X}\), we achieve \(\mathbf{\tilde{X}}\cap\bar{B}_{E}(\mathbf{x})\neq\varnothing\) for all \(\mathbf{x}\in\mathbf{X}\) with probability at least \(1-M^{-c_{r}}\). Applying Theorem 2 completes the proof. 

Assuming \(\Omega\subset\mathbb{R}^{d}\) satisfies the Markov inequality in (4.3) with constant \(C_{M}\) and \(r=2\), by Theorem 2 and Theorem 3 there exists a randomized admissible mesh with cardinality \(M=\mathcal{O}(N^{2}\log N)\). This explains the sample complexity we use in Figure 4. 

#### 4.1.1 Related Work on Sample Complexity

In this subsection, we discuss the difference in construction between the sample complexity of the V+A approximation and the sample complexity of polynomial frame approximation. There are many existing proofs for the sample complexity of polynomial approximations [2, 3, 11, 12, 24]. For instance, in [11, 12], the proof by Cohen et al. first constructs an \(\mathcal{L}^{2}\)-continuous orthogonal basis \(\{J_{1},J_{2},\ldots,J_{N}\}\), such that \(\int_{\Omega}J_{i}(x)J_{j}(x)dx=\delta_{i,j}.\) Then, the solution of their least-squares problem can be computed by solving the \(N\times N\) Gram matrix system \(\mathbf{Q}_{1}^{*}\mathbf{Q}_{1}d_{1}=\mathbf{Q}_{1}^{*}f\), where the \((i,j)\)th entry of \(\mathbf{Q}_{1}\) is \(J_{j}(x_{i})\) for \(1\leq j\leq N\) and \(1\leq i\leq M\). The purpose of their analysis is to find how many sample points are needed so that the discrete measure inherits the orthogonality of \(\{J_{i}\}_{1\leq i\leq N}\). Namely, how to choose \(M\) such that \[\sum_{i=1}^{M}J_{j}(x_{i})J_{k}(x_{i})\approx\delta_{j,k}\qquad\text{and} \qquad\mathbb{E}(\mathbf{Q}_{1}^{*}\mathbf{Q}_{1})\approx\mathbf{I}_{n}.\] Using the exponentially decreasing bounds on the tail distribution \(\mathbb{P}(\|\mathbf{Q}_{1}^{*}\mathbf{Q}_{1}-I\|\geq\frac{1}{2})\), Cohen et al. proved that \(M=\mathcal{O}(N^{2}\log N)\) random sample points are enough to obtain a stable least-squares approximation. In [3], Adcock and Huybrechs give a different proof for the sample complexity of frame approximation on irregular domains. As opposed to the admissible mesh approach, the key step of their proof uses the Nikolskii inequality [23] on the bounding tensor-product domain, \[\|p\|_{\Omega_{cube}}\leq\mathcal{N}\|p\|_{\mathcal{L}^{2}(\Omega_{cube})} \tag{4.5}\] where \(\mathcal{N}\) is a constant that depends on the domain and the sample complexity. They proved that the least squares are near-optimal if the domain satisfies the \(\lambda\)-rectangle property. 
Namely, the domain can be written as a (possibly overlapping and uncountable) union of hyperrectangles, where \(\lambda\) is the value of \(\text{Volume}(\Omega)/\text{Volume}(\Omega_{cube})\). The sample complexity required scales as \(N^{2}\lambda^{-1}\), where \(N\) is the total degree of freedom. Although the sample complexity results obtained for polynomial frame approximation and for V+A approximation are similar, the ideas and the constructions of the proofs are different. Firstly, in polynomial frame approximation, the proof starts with a continuous orthogonal basis and explores the distribution of sample points such that the discrete least-squares matrix is approximately orthogonal (i.e., \(\mathbf{Q}_{1}^{*}\mathbf{Q}_{1}\approx I_{n}\)). By construction, in V+A we start with a discrete orthogonal basis (i.e., \(\mathbf{Q}^{*}\mathbf{Q}=M\mathbf{I}_{N}\)) and examine the behaviour of these discrete orthogonal polynomials in the domain. Another difference is that the two approaches build their bounds using different norms on different domains. In V+A, we use the suprema of the discrete orthogonal polynomials, \(\|p\|_{\mathbf{X}}\), while in the polynomial frame approach, the Nikolskii inequality uses the \(\mathcal{L}_{2}\) norm, \(\|p\|_{\mathcal{L}^{2}(\Omega_{cube})}\). Lastly, there is also a difference in the sample complexity's dependence on the domain. The sample points used by V+A do not require information on the bounding domain and, as such, are independent of the ratio \(\text{Volume}(\Omega)/\text{Volume}(\Omega_{cube})\). 

### Stability and Accuracy of Multivariate Vandermonde with Arnoldi

Theorem 4 shows that, under a suitable sample complexity, the error of the V+A least-squares approximant scales polynomially with the error of the best polynomial approximant. Along with multiple numerical examples, we also analyze the convergence rate and error bound for the V+A least-squares approximations. 

**Theorem 4**.: _Let \(f:\Omega\to\mathbb{R}\) be a continuous multivariate function. The domain \(\Omega\subset\mathbb{R}^{d}\) is compact and admits the Markov inequality with \(r=2\). We choose \(M=\mathcal{O}(N^{2})\) equispaced sample points \(\mathbf{X}\) from \(\Omega\) using the sampling method in Remark 1. \(\mathcal{L}\) denotes the V+A least-squares operator with \(N\) degrees of freedom using the sample points \(\mathbf{X}\) from the domain \(\Omega\). Then,_ \[\|f-\mathcal{L}(f)\|_{\Omega}\leq\left(1+\frac{1}{M}C(\mathbf{X},\Omega)\|\mathbf{QQ }^{*}\|_{\infty}\right)\|f-p^{*}\|_{\Omega}\leq(1+\mathcal{O}(N))\|f-p^{*}\|_ {\Omega} \tag{4.6}\] _where \(p^{*}\in\mathcal{P}_{n}^{d}\) is the best polynomial approximation of \(f\)._ 

Proof.: Denote \(g:=f-p^{*}\). By the triangle inequality, we have \[\|f-\mathcal{L}(f)\|_{\Omega}\leq\|f-p^{*}\|_{\Omega}+\|\mathcal{L}(p^{*}-f) \|_{\Omega}=\|g\|_{\Omega}+\|\mathcal{L}(g)\|_{\Omega}. \tag{4.7}\] Since \(\mathcal{L}(g)\in\mathcal{P}_{n}^{d}\), we write it as a linear combination of discrete orthogonal polynomials, \(\mathcal{L}(g):=\sum_{j=1}^{N}\beta_{j}\phi_{j}=\mathbf{Q}\mathbf{\beta}\) where \(\mathbf{\beta}:=[\beta_{1},\dots,\beta_{N}]^{T}\in\mathbb{R}^{N}.\) According to (2.9), \(\mathbf{\beta}=\frac{1}{M}\mathbf{Q}^{*}\tilde{g}\). 
Using the definition of \(C(\mathbf{X},\Omega)\), we have \[\|\mathcal{L}(g)\|_{\Omega}\leq C(\mathbf{X},\Omega)\|\mathcal{L}(g)\|_{\mathbf{X}}=C (\mathbf{X},\Omega)\|\mathbf{Q}\mathbf{\beta}\|_{\infty}=\frac{1}{M}C(\mathbf{X},\Omega)\| \mathbf{QQ}^{*}\tilde{g}\|_{\infty}. \tag{4.8}\] Substituting (4.8) into (4.7) gives the first inequality in (4.6). Using the matrix-norm inequality \(\frac{1}{M}\|\mathbf{QQ}^{*}\|_{\infty}<\frac{1}{\sqrt{M}}\|\mathbf{QQ}^{*}\|_{2}= \sqrt{M}\) and the sample complexity assumption \(M=\mathcal{O}(N^{2})\), we arrive at the second inequality in (4.6). 

**Remark 2**.: _A similar bound was derived by Calvi and Levenberg in [10, Thm 2] using the Hermitian inner product norm,_ \[\|f-\mathcal{L}(f)\|_{\Omega}\leq\left(1+(1+\sqrt{M})C(\mathbf{X},\Omega)\right)\| f-p^{*}\|_{\Omega}. \tag{4.9}\] _Since \(\frac{1}{M}\|\mathbf{QQ}^{*}\|_{\infty}<\frac{1}{\sqrt{M}}\|\mathbf{QQ}^{*}\|_{2}= \sqrt{M}\), it is straightforward to verify that our bound in Theorem 4 is sharper than the bound in (4.9) by Calvi and Levenberg._ 

In fact, due to the special properties of the V+A basis (i.e., \(\mathbf{Q}\)), our numerical results suggest that the V+A approximant converges at an even faster rate than the bound in Theorem 4. The observed rate of convergence is \[\|f-\mathcal{L}(f)\|_{\Omega}\approx(1+\sqrt{2N})\|f-p^{*}\|_{\Omega}. \tag{4.10}\] Namely, with \(M=N^{2}\) equispaced sample points, \(\frac{1}{M}\|\mathbf{QQ^{*}}\|_{\infty}\) actually scales like \(\sqrt{2N}\) in many domains (Figure 6). Let us give a brief justification of why this should be the case. First, we rewrite \(\frac{1}{M}\|\mathbf{QQ^{*}}\|_{\infty}\) in expanded form, \[\frac{1}{M}\max_{1\leq k\leq M}\sum_{i=1}^{M}\left|\sum_{j=1}^{N} \phi_{j}(\mathbf{x}_{k})\phi_{j}(\mathbf{x}_{i})\right|\leq\underbrace{\left(\max_{1 \leq j\leq N,1\leq k\leq M}|\phi_{j}(\mathbf{x}_{k})|\right)}_{:=Q_{\max}} \underbrace{\left(\frac{1}{M}\sum_{i=1}^{M}\left|\sum_{j=1}^{N}\phi_{j}(\mathbf{x }_{i})\right|\right)}_{:=S_{N}}.\] On real intervals, the discrete orthogonal polynomials are well approximated by the scaled Legendre polynomials, as illustrated by (2.12), so \(Q_{\max}\approx\sqrt{2N-1}\) with \(M=\mathcal{O}(N^{2})\) equispaced points. On the other hand, \(\sum_{j=1}^{N}\phi_{j}(\mathbf{x}_{i})\) is small in the majority of the interval and only takes large values near the endpoints (Figure 14 in Appendix D). The supremum of \(\sum_{j=1}^{N}\phi_{j}(\mathbf{x}_{i})\) has a scaling of \(\sum_{j=1}^{N}\sqrt{2j-1}\approx\frac{(2N-1)^{3/2}}{3}\) in an interval of length \(M^{-1}\). The mean value of the absolute sum of the discrete orthogonal polynomials, \(S_{N}\), is relatively constant as \(N\) grows. Our numerical experiments show that \(S_{N}\in[1.2,1.3]\) when \(N\) increases from \(10\) to \(300\) (Appendix D). Therefore, the product \(Q_{\max}S_{N}\) gives an upper bound for \(\frac{1}{M}\|\mathbf{QQ^{*}}\|_{\infty}\) of order \(\mathcal{O}(\sqrt{2N-1})\). Similar arguments follow for 2D domains, and the bound in 2D is obtained using \(\max_{j,k}|\phi_{j}(\mathbf{x}_{k})|\approx\sqrt{2N}\). 

**Convergence Rate and Error Bound**: Using the bound in (4.6), we can deduce the rate of convergence of the V+A approximation, given enough sample points, as the degree increases. Let \(\Omega\subset\mathbb{R}^{d}\) be a compact set and \(f\in\mathcal{C}^{k}(\mathbb{R}^{d})\). 
Jackson's Theorem [5] gives a bound for the best minimax approximation error \[\|f-p^{*}\|_{\Omega}=o(n^{-k}),\qquad n\to\infty \tag{4.11}\] where \(p^{*}\) is the best polynomial approximation in \(\mathcal{P}^{d}_{n}\). Assume that \(\Omega\in\mathbb{R}^{d}\) satisfies the Markov inequality (4.3) with \(r=2\) and the admissible mesh has \(M=\mathcal{O}(n^{2d})=\mathcal{O}(N^{2})\) equispaced sample points, then using the estimate in (4.6), we have \[\|f-\mathcal{L}(f)\|_{\Omega}\leq(1+\mathcal{O}(N))o(n^{-k})=o(n^{d-k})=o(N^{1- \frac{k}{2}}) \tag{4.12}\] where \(n\) is the maximum degree of the polynomial and \(N\) is the total degree of freedom. The bound in (4.12) asserts that the convergence is at a spectral rate depending on the smoothness of \(f\) in \(\Omega\). It also exhibits the familiar _curse of dimensionality_. Namely, for functions with the same smoothness \(k\), the least-squares approximations converge slower for higher dimensions \(d\). Figure 6: Size of \(\frac{1}{M}\|\mathbf{QQ^{*}}\|_{\infty}\) in different domains. The left two plots are generated with \(M=N^{2}\) equispaced sample points in \([-1,4]\) and \([-1,4]\times[-1,6]\), respectively. The right plot is generated with \(M=N^{2}\log N\) random sample points on an elliptical domain (Domain 4 in Figure 12). Numerically, we test the V+A algorithm for functions of specific smoothness in 1D and in 2D. For smoothness \(k=0,1,2,\infty\) and dimension \(d=1,2\), let \[f_{k}(\mathbf{x}):=\sum_{r=1}^{d}|x_{(r)}|^{2k+1}\text{ where }\mathbf{x}=(x_{(1)}, \ldots,x_{(d)})\in[-1,1]^{d}. \tag{4.13}\] We also set \(f_{\infty}(\mathbf{x}):=\sum_{r=1}^{d}\sin[\exp(x_{(r)})\cos(x_{(r)})]\). Clearly, \(f_{k}\in\mathcal{C}^{k}([-1,1]^{d})\) for \(k=0,1,2,\infty\). For fixed \(n\), \(d\), and \(\mathbf{X}\), the term \(C(\mathbf{X},\Omega)\frac{1}{M}\|\mathbf{QQ}^{T}\|\) is the same for all functions. Thus, the improvement in convergence rates of the smoother functions comes from the best minimax approximation error terms \(\|f-p^{*}\|_{\Omega}\). Since \(\frac{1}{M}\|\mathbf{QQ}^{T}\|_{\infty}\sim\mathcal{O}(\sqrt{N})\) for tensor-product domains, the least-squares approximations of \(f_{1}\) and \(f_{2}\) converge like \(o(n^{-3/2})\) and \(o(n^{-7/2})\) in 1D and \(o(n^{-1})\) and \(o(n^{-3})\) in 2D, respectively. The convergence rates of the four bivariate functions are approximately the square root of the convergence rates of the univariate counterparts. This is expected as we only have \(\sim\sqrt{N}\) degrees of freedom in the \(x_{(1)}\) or the \(x_{(2)}\) directions for the bivariate approximation. As a result, the best minimax approximation error can only decrease at a rate of \(o(N^{-\frac{k}{2}})=o(n^{-k})\). ### Relationship between Lebesgue Constant and Admissible Mesh The bound in Theorem 4 involving \(C(\mathbf{X},\Omega)\) is analogous to the bound in Theorem 1 involving \(\Lambda_{\mathbf{X}}\). Van Barel and Humet [31] proved that \(C(\mathbf{X},\Omega)\leq\Lambda_{\mathbf{X}}\) for \(M\geq N\), with equality in the case of interpolation (\(M=N\)). That said, the relationship between the bound in Theorem 4 and the bound in Theorem 1 appears to be unknown in the literature. Computing an upper bound for \(C(\mathbf{X},\Omega)\) in a general domain is analytically difficult. This is because the size of \(C(\mathbf{X},\Omega)\) depends on the constants \(C_{M}\) and \(r\) in the Markov inequality and, thus, is domain-dependent. 
In this subsection, we compute upper bounds on \(C(\mathbf{X},\Omega)\) for tensor-product domains and use these as examples to illustrate that the bound in Theorem 4 and the bound in Theorem 1 are comparable in size. 

**Lemma 5**.: _Let \(\Omega=[-1,1]\) and fix any \(N\in\mathbb{N}\). If we sample \(M=N^{2}\) equispaced sample points \(\mathbf{X}\) from \(\Omega\), then \(\|p\|_{\Omega}\leq C(\mathbf{X},\Omega)\|p\|_{\mathbf{X}}\) with \(C(\mathbf{X},\Omega)\leq 2\)._ 

Proof.: The Markov inequality on \([-1,1]\) reads \(\|p^{\prime}\|_{\Omega}\leq\deg(p)^{2}\|p\|_{\Omega}\) for all \(p\in\mathcal{P}_{n}^{1}\) [10, 33]. By the mean value theorem, for all \(i=1,\ldots,M-1\) and \(x\in(x_{i},x_{i+1})\), \[\exists\xi\in(x_{i},x_{i+1})=:I_{i},\text{ such that }p(x)-p(x_{i})=(x-x_{i})p^{ \prime}(\xi). \tag{4.14}\] Taking the supremum over \(\Omega\) is equivalent to taking the maximum of the suprema over the subintervals \(I_{i}\). Thus we have \[\max_{1\leq i\leq M-1}\|p(x)-p(x_{i})\|_{I_{i}}\leq\max_{1\leq i\leq M-1}\|x- x_{i}\|_{I_{i}}\|p^{\prime}(\xi)\|_{\Omega}\leq(2M)^{-1}N^{2}\|p\|_{\Omega}, \tag{4.15}\] where the last inequality uses the Markov inequality. It also uses the property of the equispaced mesh that \(|x-x_{i}|\leq(2M)^{-1}\) for all \(x\in I_{i}\) and all \(i=1,\ldots,M-1\). Applying the triangle inequality, we have \[\|p\|_{\Omega}=\max_{1\leq i\leq M}\|p(x)-p(x_{i})+p(x_{i})\|_{I_{i}}\leq\max _{1\leq i\leq M}\|p(x)-p(x_{i})\|_{I_{i}}+\|p\|_{\mathbf{X}} \tag{4.16}\] 

Figure 7: The set of sample points \(\mathbf{X}=\{x_{i}\}_{1\leq i\leq N^{2}}\) and evaluation points \(\mathbf{Y}=\{y_{i}\}_{1\leq i\leq 10N^{2}}\) are chosen as equispaced points. The domain for approximation is \([-1,1]\) in 1D and \([-1,1]^{2}\) in 2D. 

and consequently \[\|p\|_{\Omega}\leq\frac{1}{1-N^{2}(2M)^{-1}}\|p\|_{\mathbf{X}}=2\|p\|_{\mathbf{X}},\qquad \forall N=1,2,\ldots\text{ with }M=N^{2}. \tag{4.17}\] Thus, \(C(\mathbf{X_{1D}},[-1,1])\in(1,2]\), where \(\mathbf{X_{1D}}\) denotes the equispaced points in \([-1,1]\). 

The results in (4.15)-(4.17) can be extended to higher dimensional tensor-product domains. Let \(\mathbf{X_{2D}}\) be an equispaced mesh in \([-1,1]^{2}\) with \(M=N^{2}\) sample points and let \(\{\mathbf{I}_{i}\}_{1\leq i\leq M}\) denote the sub-grids of the mesh. In the square equispaced mesh, we have \(\max_{1\leq i\leq M}\|\mathbf{x}-\mathbf{x}_{i}\|_{\mathbf{I}_{i}}\leq(\sqrt{2}M)^{-1}\), and thus \(C(\mathbf{X_{2D}},[-1,1]^{2})\in(1,2+\sqrt{2}]\). We plot the upper and lower bounds of \(\frac{1}{M}C(\mathbf{X},\Omega)\|\mathbf{QQ^{*}}\|_{\infty}\) as the dotted lines in Figure 8. We find that \(\frac{1}{M}C(\mathbf{X},\Omega)\|\mathbf{QQ^{*}}\|_{\infty}\) is of the same order of magnitude as the Lebesgue constant. This numerical result illustrates that, even though we used two different approaches to deduce the bound in Theorem 4 and the bound in Theorem 1, these two bounds are of the same order of magnitude in many domains. 

## 5 Near-Optimal Sampling Strategy for V+A

In this section, we propose a new variant of the weighted least-squares algorithm that uses the multivariate V+A to create the discrete orthogonal basis. We refer to our V+A weighted least-squares algorithm as VA+Weight. In [1, 24, 12], the authors gave comprehensive analyses of this weighted sampling strategy and proved that only \(M=\mathcal{O}(N\log N)\) sample points are needed for a well-conditioned and accurate approximation. In VA+Weight, we use the same weighting measure as in [1]. 
But instead of creating the discrete orthogonal basis with a QR factorization, we use V+A as the orthogonalization strategy for the Vandermonde basis. The Vandermonde matrix \(\mathbf{A}\) is usually highly ill-conditioned, and computing its \(Q\) factor incurs errors proportional to \(\kappa_{2}(\mathbf{A})\) [19, Ch. 19]. Thus, even though the QR factorization is a stable orthogonalization technique, the discrete orthogonal basis generated from \(\mathbf{A}\) could still be inaccurate. We refer to the weighted least-squares approximation using the QR factorization as QR+Weight. Along with multiple numerical examples, we illustrate that VA+Weight gives more accurate approximations than QR+Weight for high-degree polynomial approximations. Due to the reduced sample density, VA+Weight also gives a lower online computational cost than the unweighted V+A least-squares method. Moreover, VA+Weight acts as a practical tool for selecting a near-optimal distribution of sample points in a high-dimensional irregular domain. In Section 5.1, we explain the algorithm and the numerical setup for the weighted least-squares approximation, and we provide a proof of the stability of the weighting measure for the sample points. In Section 5.2, we give numerical examples to compare the VA+Weight and QR+Weight algorithms. 

Figure 8: \(\Lambda_{\mathbf{X}}\) is computed for the four functions in (4.13). The Lebesgue constants for the four functions are the same at all degrees. This confirms the theoretical result that \(\Lambda_{\mathbf{X}}\) is independent of the function that we are approximating. 

### Weighted Least-Squares Approximation

The VA+Weight algorithm is presented in Algorithm 1 and remarks on the algorithm are given in Remark 3. Algorithm 1 is a V+A variant of Method 1 in [1]. 

**Input:** Domain \(\Omega\) and a probability measure \(\rho\) over \(\Omega\); bounded and continuous \(f\in\mathcal{C}(\Omega)\); \(N\)-dimensional polynomial space \(\mathcal{P}_{n}^{d}\); numbers of sample points \(M\) and \(\hat{M}\), with \(M\geq\hat{M}\geq N\); basis \(\hat{\psi}=[\hat{\psi}_{1}(\mathbf{x}),\ldots,\hat{\psi}_{N}(\mathbf{x})]^{T}\) for \(\mathcal{P}_{n}^{d}\) (it need not be orthogonal; for instance, the Vandermonde basis is enough); 

**Output**: The coefficients \(\hat{d}\in\mathbb{R}^{N}\) of the polynomial approximant. 

**Step 1:** Draw \(M\) random sample points \(\mathbf{X}=\{\mathbf{x}_{i}\}_{1\leq i\leq M}\stackrel{{ i.i.d.}}{{\sim}}\rho\). Compute the function values at \(\mathbf{X}\), i.e., \(\mathbf{\tilde{f}}\in\mathbb{R}^{M}\). 

**Step 2:** Construct the \(M\times N\) Vandermonde matrix \(\mathbf{A}\) with \((i,j)\)th entry \(\hat{\psi}_{j}(\mathbf{x}_{i})\). If \(\text{rank}(\mathbf{A})=N\), go to Step 3, else go back to Step 1. 

**Step 3:** Apply the multivariate V+A algorithm (Algorithm 4 in Appendix B) to \(\mathbf{A}\) to generate \(\mathbf{Q}\in\mathbb{R}^{M\times N},\mathbf{H}\in\mathbb{R}^{N\times N}\) such that \(\text{diag}(\mathbf{X})\mathbf{Q}=\mathbf{Q}\mathbf{H}+h_{N+1,N}q_{N+1}e_{N}^{T}\). 

**Step 4:** Define a probability distribution \(\pi=\{\pi_{i}\}_{1\leq i\leq M}\) on \(\{1,\ldots,M\}\), such that \(\pi_{i}=\frac{1}{\|\mathbf{Q}\|_{F}^{2}}\sum_{j=1}^{N}|\mathbf{Q}_{i,j}|^{2}\), for \(i=1,\ldots,M\), where \(\mathbf{Q}_{i,j}\) is the \((i,j)\)th entry of \(\mathbf{Q}\). 

**Step 5:** Draw \(\hat{M}\) integers \(\{k_{1},\ldots,k_{\hat{M}}\}\) independently from \(\pi\). 
Define \(\mathbf{\hat{Q}}\) and \(\mathbf{\hat{f}}\) as the correspondingly scaled rows of \(\mathbf{Q}\) and \(\mathbf{\tilde{f}}\), such that the entries are \[\mathbf{\hat{Q}}_{i,j}:=\frac{\mathbf{Q}_{k_{i},j}}{\sqrt{\hat{M}M\pi_{k_{i}}}}, \qquad\mathbf{\hat{f}}_{i}:=\frac{f(\mathbf{x}_{k_{i}})}{\sqrt{\hat{M}M\pi_{k_{i}}} },\qquad i=1,\ldots,\hat{M},\quad j=1,\ldots,N.\] 

**Step 6:**\(\mathbf{\hat{d}}=\mathbf{\hat{Q}}\backslash\mathbf{\hat{f}}\). Solve the least-squares problem by the MATLAB backslash command. Approximate the value of \(f\) using the evaluation algorithm in Algorithm 5 and give the output. 

**Remark 3**.: _1): We assume that it is possible to draw samples from the measure \(\rho\) in Step 1. We use the uniform rejection sampling method to draw samples. We end the rejection sampler once we have enough sample points._ 

_2): To ensure that \(\text{span}\{\phi_{1},\ldots,\phi_{N}\}=\mathcal{P}_{n}^{d}\) in Step 2 of the algorithm, if \(\mathbf{A}\) is rank deficient, we add additional sample points until \(\text{rank}(\mathbf{A})=N\)._ 

_3): The construction of \(\mathbf{Q}\) uses the multivariate V+A algorithm (Algorithm 4 in Appendix B). Numerically, the orthogonalization algorithm is subject to a loss of orthogonality due to numerical cancellation. The numerically constructed discrete orthogonal basis is said to be \(\epsilon_{m}\)-orthonormal for \(\epsilon_{m}>0\), namely, \(\|\mathbf{Q}^{*}\mathbf{Q}-\mathbf{I}\|_{F}^{2}=\sum_{j,k=1}^{N}|\langle\phi_{j},\phi_{k} \rangle_{M}-\delta_{j,k}|^{2}\leq\epsilon_{m}^{2}.\) By [19, Thm 19.4] and [15, Thm 2], executing CGS twice gives us a good bound \(\epsilon_{m}\sim\mathcal{O}(MN^{3/2})\mathbf{u}\), where \(\mathbf{u}\) is the unit roundoff._ 

In Algorithm 1, \(\sum_{j=1}^{N}|\mathbf{Q}_{i,j}|^{2}\) represents the sum of the squared values of the discrete orthogonal polynomials at the \(i\)th sample point. The weighting measure \(\pi\) can therefore be interpreted as favouring the sample points at which the discrete orthogonal polynomials take large absolute values. Heuristically, this weighting measure makes sense, as the suprema usually occur near the boundaries and corners of the domain. Many sampling measures, such as Chebyshev points in real intervals and Padua points [7, 9] in higher dimensional tensor-product domains, have highlighted the importance of sample points near the boundary and corners. \(\hat{M}\) is the number of points we select from a total of \(M\) sample points. The main question that we analyze in this section is how large \(\hat{M}\) needs to be chosen in relation to \(N\) to ensure a near-optimal approximation. As we will prove later, with the probability distribution defined in Step 4, a log-linear scaling of \(\hat{M}=\mathcal{O}(N\log N)\) is enough for a well-conditioned, near-optimal weighted least-squares approximation (provided that \(M\) is large enough). The reduced sample complexity in Algorithm 1 gives a reduced online computational cost. The computational cost of Algorithm 1 is dominated by the cost of Step 3 and Step 6, which are \(\mathcal{O}(MN^{2})\) flops and \(\mathcal{O}(\hat{M}N^{2})\) flops, respectively. Since we need \(M=\mathcal{O}(N^{2}\log N)\) random sample points to generate a randomized admissible mesh for convex domains or unions of convex domains in \(\mathbb{R}^{d}\), Algorithm 1 gives an online computational cost of \(\mathcal{O}(N^{3}\log N)\), while the unweighted least-squares method described in Section 3 requires \(\mathcal{O}(N^{4}\log N)\). 
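To make Steps 1–6 concrete, here is a minimal 1D sketch of Algorithm 1 in MATLAB; it reuses the `VAfit`/`VAeval` helpers sketched in Section 2 (these helper names and the 1D setting are ours — the multivariate version would call Algorithm 4 of Appendix B in Step 3).

```matlab
% Minimal 1D sketch of Algorithm 1 (VA+Weight); helper names are ours.
rng(0);
f = @(t) sin(exp(t).*cos(t));          % smooth test function on [-1,1]
n = 50; N = n+1;
M    = ceil(N^2*log(N));               % Step 1: full sample set, uniform on [-1,1]
Mhat = ceil(4*N*log(N));               % log-linear number of weighted samples
x = 2*rand(M,1) - 1;

[~,Q,H] = VAfit(x,f(x),n);             % Steps 2-3: V+A orthogonalization (1D)
pi_w = sum(Q.^2,2)/norm(Q,'fro')^2;    % Step 4: sampling weights pi_i

cdf = cumsum(pi_w)/sum(pi_w);          % Step 5: draw Mhat indices from pi
k = arrayfun(@(u) find(u <= cdf,1,'first'), rand(Mhat,1));
scale = 1./sqrt(Mhat*M*pi_w(k));
Qhat = Q(k,:).*scale;                  % scaled rows of Q
fhat = f(x(k)).*scale;                 % correspondingly scaled function values

dhat = Qhat\fhat;                      % Step 6: weighted least-squares solve
y = linspace(-1,1,2000)';              % validate on a finer mesh
err = norm(f(y) - VAeval(y,dhat,H), inf);
```

After the weighted subsampling, the per-fit cost is governed by the \(\hat{M}\times N\) solve, which is the source of the reduced online cost discussed above.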
Algorithm 1 can be interpreted as a weighted least-squares system with V+A orthogonalization. We lay out the notation of the weighted least-squares system as follows. Given the set \(\boldsymbol{X}=\{\boldsymbol{x}_{i}\}_{1\leq i\leq M}\stackrel{{ i.i.d.}}{{\sim}}\rho\), we define \(\tilde{\rho}\) as the discrete uniform measure on \(\boldsymbol{X}\), such that \(\tilde{\rho}(\boldsymbol{x}_{i})=\frac{1}{M}\) for \(1\leq i\leq M\). For all \(\boldsymbol{x}_{i}\in\boldsymbol{X}\), we write the weighted sampling method in Step 5 as a probability measure \(\sigma\) on \(\boldsymbol{X}\), \[d\sigma(\boldsymbol{x}_{i}):=w(\boldsymbol{x}_{i})^{-1}d\tilde{\rho}( \boldsymbol{x}_{i})=\frac{1}{\upsilon}\sum_{j=1}^{N}\phi_{j}(\boldsymbol{x}_ {i})^{2}d\tilde{\rho}(\boldsymbol{x}_{i}),\qquad\text{where }\upsilon:=\frac{\| \boldsymbol{Q}\|_{F}^{2}}{M}. \tag{5.1}\] Clearly, \(\int_{\Omega}d\sigma=\int_{\Omega}w^{-1}d\tilde{\rho}=1\) and \(N(1-\epsilon_{m})\leq\upsilon\leq N(1+\epsilon_{m})\) due to the \(\epsilon_{m}\)-orthogonality. For simplicity, we use \(\boldsymbol{\hat{X}}=\{\boldsymbol{\hat{x}}_{i}\}_{1\leq i\leq\hat{M}} \stackrel{{ i.i.d.}}{{\sim}}\sigma\) to denote \(\left\{\boldsymbol{x}_{k_{i}}\right\}_{1\leq i\leq\hat{M}}\), where the indices \(\left\{k_{i}\right\}_{1\leq i\leq\hat{M}}\) are drawn as described in Step 5. Then, the weighted least-squares estimator is \[\boldsymbol{\hat{d}}=\operatorname*{argmin}_{\boldsymbol{d}\in\mathbb{R}^{N}} \|\boldsymbol{\hat{Q}}\boldsymbol{d}-\boldsymbol{\hat{f}}\|_{2}\text{ or equivalently, }\mathcal{L}(f)_{W}:=\operatorname*{argmin}_{p\in\mathcal{P}_{n}^{d}}\|p-f\|_{ \hat{M}} \tag{5.2}\] where the notation \(\langle\cdot,\cdot\rangle_{\hat{M}}\) denotes \(\langle u,v\rangle_{\hat{M}}:=\frac{1}{\hat{M}}\sum_{i=1}^{\hat{M}}w(\boldsymbol{ \hat{x}}_{i})u(\boldsymbol{\hat{x}}_{i})v(\boldsymbol{\hat{x}}_{i})\), and \(\|u\|_{\hat{M}}=\sqrt{\langle u,u\rangle_{\hat{M}}}\). Using the weight matrix \(\boldsymbol{\hat{W}}\), the weighted least-squares problem can be written as \[\boldsymbol{\hat{d}}=\operatorname*{argmin}_{\boldsymbol{d}\in\mathbb{R}^{N}} \|\boldsymbol{\hat{W}}(\boldsymbol{Q_{S}}\boldsymbol{d}-\boldsymbol{f_{S}})\|_{2} \tag{5.3}\] where \[\boldsymbol{\hat{W}}:=\frac{1}{\sqrt{\hat{M}}}\operatorname{diag}(\sqrt{w( \boldsymbol{\hat{x}}_{i})})_{1\leq i\leq\hat{M}}=\frac{1}{\sqrt{\hat{M}}} \operatorname{diag}\left(\frac{1}{\sqrt{M\pi_{k_{i}}}}\right)_{1\leq i\leq \hat{M}}. \tag{5.4}\] \(\boldsymbol{Q_{S}}\in\mathbb{R}^{\hat{M}\times N}\) and \(\boldsymbol{f_{S}}\in\mathbb{R}^{\hat{M}}\) are the matrix and the vector formed from the \(\left\{k_{1},\ldots,k_{\hat{M}}\right\}\) selected rows of \(\boldsymbol{Q}\) and \(\boldsymbol{\tilde{f}}\) in the unweighted system. \(\boldsymbol{\hat{d}}:=[\hat{d}_{1},\ldots,\hat{d}_{N}]^{T}\) is the vector of coefficients of the weighted least-squares estimator, such that \(\mathcal{L}(f)_{W}:=\sum_{i=1}^{N}\hat{d}_{i}\phi_{i}\). Note that \(\phi_{i}\) is defined as before, namely the discrete orthogonal polynomials generated by the \(M\) sample points. The estimator \(\mathcal{L}(f)_{W}\) is obtained by solving the normal equations, \[\boldsymbol{\hat{d}}=\boldsymbol{G}^{-1}(\boldsymbol{\hat{Q}^{*}\hat{f}}). \tag{5.5}\] \(\boldsymbol{G}:=\boldsymbol{\hat{Q}^{*}\hat{Q}}\in\mathbb{R}^{N\times N}\) is the Gram matrix, which has \((j,k)\)th entry \(\boldsymbol{G}_{j,k}=\left\langle\phi_{j},\phi_{k}\right\rangle_{\hat{M}}\). 
The vector \(\boldsymbol{\hat{Q}^{*}\hat{f}}\in\mathbb{R}^{N}\) has \(j\)th entry \(\left\langle\phi_{j},f\right\rangle_{\hat{M}}\). Well-Conditioning of Reduced Gram Matrix:Since the weighted least-squares estimator is found by solving system (5.5) and by taking the inverse of the matrix \(\boldsymbol{G}\), for stability and convergence we need to ensure that the Gram matrix \(\boldsymbol{G}\) is well-conditioned. Also, we investigate how much \(\boldsymbol{G}=\boldsymbol{\hat{Q}^{*}\hat{Q}}\) deviates from \(\boldsymbol{Q^{*}Q}\). In other words, we want to understand whether the discrete orthogonal basis at the selected sample points \(\boldsymbol{\hat{X}}\) acts as a good approximation to the discrete orthogonal basis at the full sample points \(\boldsymbol{X}\). The following theorem adapted from [24, Thm 3] establishes these links. **Theorem 6**.: _For a compact, bounded domain \(\Omega\in\mathbb{R}^{d}\) with the measure \(\rho\), consider finding a weighted least-squares approximation in the polynomial space \(\mathcal{P}_{n}^{d}\) using Algorithm 1. Assume that \(\boldsymbol{X}=\{\boldsymbol{x}_{i}\}_{1\leq i\leq M}\stackrel{{ i.i.d.}}{{\sim}}\rho\) and the \(\epsilon_{m}\)-orthogonal basis are generated as described in Step 3 of Algorithm 1. For any \(\hat{\alpha}\in(0,\frac{1}{2}),\epsilon_{m}\in(0,1),\hat{\delta}\in(0,1-\epsilon_ {m})\), and \(n\geq 1\), if the following conditions hold,_ _i)_ \(\hat{M}\geq\frac{4N(1+\epsilon_{m})}{\hat{\delta}^{2}}\log(\frac{2N}{\hat{ \alpha}}),\)__ _ii)_ \(\boldsymbol{\hat{X}}=\left\{\boldsymbol{\hat{x}}_{i}\right\}_{1\leq i\leq\hat{M}} \stackrel{{ i.i.d.}}{{\sim}}\sigma\) _where_ \(\sigma\) _is defined as in (_5.1_),_ _then, the matrix \(\mathbf{G}\) satisfies \(\mathbb{P}(\|\mathbf{G}-\mathbf{I}\|_{2}\geq\hat{\delta}+\epsilon_{m})\leq\hat{\alpha}\) where \(\mathbf{I}\) is the \(N\times N\) identity matrix._ We give a sketch of the proof for Theorem 6. We write \(\mathbb{P}(\|\mathbf{G}-\mathbf{I}\|_{2}\geq\hat{\delta}+\epsilon_{m})\) as \[\mathbb{P}(\|\mathbf{G}-\mathbf{I}\|_{2}<\hat{\delta}+\epsilon_{m})\geq \underbrace{\mathbb{P}(\{\|\mathbb{E}(\mathbf{G})-\mathbf{I}\|_{2}<\epsilon_{m}\})} _{:=P_{1}}\underbrace{\mathbb{P}(\{\|\mathbf{G}-\mathbb{E}(\mathbf{G})\|_{2}<\hat{ \delta}\})}_{:=P_{2}}.\] The first probability term \(P_{1}\) is bounded using condition \((ii)\), the equality \(\mathbb{E}(\mathbf{G}_{j,k})=\langle\phi_{j},\phi_{k}\rangle_{M}\) and the \(\epsilon_{m}\)-orthogonality. The second probability term \(P_{2}\) is bounded using the Bernstein Inequality and condition \((i)\) which gives a tail bound for sums of random matrices. A full proof of Theorem 6 can be found in [24, Thm 3]. Theorem 6 not only ensures that \(\mathbf{G}\) is well-conditioned with high probability but also guarantees that the weighted least-squares problem (5.5) is stable with high probability. Under the conditions of Theorem 6, we have \(1-\hat{\delta}-\epsilon_{m}\leq\|\mathbf{G}\|_{2}\leq 1+\hat{\delta}+\epsilon_{m}\). Using that \(\mathbf{v}^{T}\mathbf{G}\mathbf{v}=(\mathbf{\hat{Q}}\mathbf{v})^{T}(\mathbf{\hat{Q}}\mathbf{v})\) for all \(\mathbf{v}\in\mathbb{R}^{N}\), it follows that \[\|\mathbf{\hat{Q}}\|_{2}=\|\mathbf{G}\|_{2}^{1/2},\text{ and }\|\mathbf{G}^{-1}\|_{2}\|\mathbf{ \hat{Q}}^{T}\|_{2}\leq\frac{\sqrt{1+\hat{\delta}+\epsilon_{m}}}{1-\hat{\delta }-\epsilon_{m}}=:C_{\hat{\delta},\epsilon_{m}}. 
\tag{5.6}\] Thus, we arrive at the stability result \(\|\mathbf{\hat{d}}\|_{2}=\|\mathbf{G}^{-1}\mathbf{\hat{Q}}\mathbf{\hat{f}}\|_{2}\leq C_{\hat{ \delta},\epsilon_{m}}\|\mathbf{\hat{f}}\|_{2}.\) Numerical example on the growth of \(\kappa_{2}(\mathbf{G})\) with respect to \(N\) for different sampling densities \(\hat{M}\) can be found in Appendix E. To ensure the convergence of Algorithm 1, in addition to conditions \((i)\) and \((ii)\) in Theorem 6, we also require a sample density of \(M=\mathcal{O}(N^{2}\log N)\) random sample points. This result is expected as we do need \(M=\mathcal{O}(N^{2}\log N)\) random sample points to form an admissible mesh and to form discrete orthogonal polynomials which are well-bounded in the domain. There are a few papers that discuss the convergence for Algorithm 1, namely [1, Thm 3.1-3.5], [3, Thm 6.6] and [24, Thm 2]. ### Numerical Examples for Weighted V+A Algorithm In this subsection, we compare the VA+Weight algorithm with the QR+Weight algorithm. The two algorithms use the same weights, but the QR+Weight algorithm uses the QR factorization of the ill-conditioned Vandermonde matrix to create the discrete orthogonal basis. In Figure 9, we plot the numerical results of approximating a smooth function using VA+Weight and QR+Weight in a real interval. As shown in the left plot, both algorithms converge with \(\hat{M}=\mathcal{O}(N\log N)\) number of weighted sample points. Before the ill-conditioning of \(\mathbf{A}\) builds in, the two algorithms performed similarly as expected. Note that the error from the two approximations will not be exactly the same as we selected the weighted sample points in a non-deterministic fashion (i.e., following the probability measure \(\sigma\)). However, when \(\kappa_{2}(\mathbf{A})\) grows beyond the inverse of the machine epsilon for \(N>60\), the QR+Weight approximation has an error stagnating at \(10^{-5}\) as illustrated in the left plot of Figure 9. Although the QR factorization and the weighted sampling method generate a well-conditioned \(\mathbf{\hat{Q}}\) (Figure 9), the columns of \(\mathbf{\hat{Q}}\) do not approximate the orthogonal basis in the domain (Figure 10). Thus, accuracy is lost for large \(N\). On the other hand, as shown in the right plot of Figure 9, VA+Weight approximation gives a stable error reduction down to \(10^{-13}\). The error reduction in VA+Weight also matches with the error reduction in the unweighted least-squares approximations. This is because the discrete orthogonal polynomials generated by V+A are unaffected by the ill-conditioning of \(\mathbf{A}\). The value of the 30th discrete orthogonal polynomial overlaps with the value of the 30th scaled Legendre polynomial in the domain (Figure 10). The same patterns of convergence are found while approximating bivariate functions. As plotted in Figure 11, VA+Weight gives an approximation with higher accuracy than QR+Weight in both domains. The difference in the two approximations is less in the 2D domain than in the 1D domain. This is because the multivariate Vandermonde matrix \(\mathbf{A}\) is, in general, less ill-conditioned in 2D domains than in 1D domains. That said, the conditioning of the multivariate Vandermonde matrix \(\mathbf{A}\) varies greatly with the shape of the domain. VA+Weight provides a stable and generalized method for multivariate approximations in irregular domains. Figure 11: Comparison of VA+Weight and QR+Weight in 2D domains. The sample points are chosen as \(M=N^{2}\log N\) random points. 
We use Domain 2 in Figure 12. Figure 10: The discrete orthogonal polynomials and the scaled Legendre polynomials generated by VA+ Weight and QR+ Weight using \(M=900\) equi-spaced points in \([-5,10]\). Figure 9: Comparison of VA+Weight and QR+Weight in a 1D domain. The sample points are chosen as \(M=N^{2}\log N\) random points. The unweighted least-squares approximation is computed by the multivariate V+A Algorithm (i.e., Algorithm 4 in Appendix B). Finding the best distribution of sample points in a high-dimensional irregular domain is an open question in the literature. We highlight that the weighting method in Algorithm 1 is a practical tool for finding the near-optimal distribution of the sample points. We plot the \(\hat{M}=N\log N\) sample points selected by VA+Weight for different domains in Figure 12. Notice that the selected points are clustered near corners and boundaries. This distribution pattern matches the pattern in Padua points in tensor-product domains. Note that we did not instruct the VA+Weight algorithm to place the sample points near the boundary. The algorithm naturally selected those sample points because at these points the discrete orthogonal polynomials take larger absolute values. VA+Weight shows strong adaptability in selecting near-optimal sample points in irregular domains. Finally, we note that different weighting measures can be used to improve different aspects of the approximation algorithm. As illustrated in the left two plots of Figure 13, the VA+Weight algorithm gives a similar accuracy as the unweighted V+A least-squares approximation but with improved efficiency. That said, the approximants obtained from VA+Weight are only near-optimal, but not the best polynomial approximation. We propose combining the multivariate V+A with Lawson's algorithm (VA+Lawson) to obtain the best multivariate polynomial approximation. VA+Lawson is based on an iterative re-weighted least-squares process and can be used to improve the approximation accuracy. Our numerical experiments showed that through the VA+Lawson algorithm, we improved the approximation error from \(1.4\times 10^{-6}\) to \(3.2\times 10^{-7}\). Moreover, we found an equioscillating error curve in 2D which is a characteristic for the best polynomial approximation [26, Thm. 24.1]. ## 6 Conclusions and Perspectives In this paper, we analyzed the multivariate Vandermonde with the Arnoldi method to approximate \(d\)-dimensional functions on irregular domains. V+A technique resolves the ill-conditioning issue of the Vandermonde matrix and builds the discrete orthogonal polynomials with respect to the domain. Our main theoretical result is the convergence of the multivariate V+A least-squares Figure 12: \(M=N^{2}\log N\) random sample points are drawn from the domain with \(N=200\), plotted in the yellow dots. The weighted sample points selected are plotted in blue dots. Figure 13: Approximating \(f=\sin(x_{(1)}x_{(2)})\) with total degrees of freedom \(N=66\) with \(M=N^{2}\log N\) random sample points. We use the same domain as the middle plot of Figure 12. The error curve of the approximant using V+A, VA+Weight, VA+Lawson are plotted in the left, middle, and right plots, respectively. For VA+Lawson, we carried out 10 Lawson’s iterations. approximation for a large class of domains. The sample complexity required for the convergence is quadratic in the dimension of the approximation space, up to a log factor. Using a suitable weighting measure, we showed that the sample complexity can be further improved. 
The weighted V+A least-squares method requires only log-linear sample complexity \(\hat{M}=\mathcal{O}(N\log N)\). Our numerical results confirm that the (weighted) V+A method gives a more accurate approximation than the standard orthogonalization method for high-degree approximation using the Vandermonde matrix. V+A has many applications beyond least-squares fitting. Our numerical experiments showed that the VA+Lawson algorithm improves the approximation accuracy and generates an equioscillating error curve. The convergence behaviour of VA+Lawson, however, remains unknown and could be an objective of future work. Another extension under consideration involves using multivariate V+A in vector- and matrix-valued rational approximations [13, Subsec. 2.4]. Generally speaking, in any application that involves matrix operations on a Vandermonde matrix (or a related form), the V+A procedure is likely to be an effective tool.
2303.13726
Topology-Based MPC for Automatic Footstep Placement and Contact Surface Selection
State-of-the-art approaches to footstep planning assume reduced-order dynamics when solving the combinatorial problem of selecting contact surfaces in real time. However, in exchange for computational efficiency, these approaches ignore joint torque limits and limb dynamics. In this work, we address these limitations by presenting a topology-based approach that enables model predictive control (MPC) to simultaneously plan full-body motions, torque commands, footstep placements, and contact surfaces in real time. To determine if a robot's foot is inside a contact surface, we borrow the winding number concept from topology. We then use this winding number and potential field to create a contact-surface penalty function. By using this penalty function, MPC can select a contact surface from all candidate surfaces in the vicinity and determine footstep placements within it. We demonstrate the benefits of our approach by showing the impact of considering full-body dynamics, which includes joint torque limits and limb dynamics, on the selection of footstep placements and contact surfaces. Furthermore, we validate the feasibility of deploying our topology-based approach in an MPC scheme and explore its potential capabilities through a series of experimental and simulation trials.
Jaehyun Shim, Carlos Mastalli, Thomas Corbères, Steve Tonneau, Vladimir Ivan, Sethu Vijayakumar
2023-03-24T00:38:45Z
http://arxiv.org/abs/2303.13726v2
# Topology-Based MPC for Automatic Footstep Placement and Contact Surface Selection ###### Abstract State-of-the-art approaches to footstep planning assume reduced-order dynamics when solving the combinatorial problem of selecting contact surfaces in real time. However, in exchange for computational efficiency, these approaches ignore joint torque limits and limb dynamics. In this work, we address these limitations by presenting a topology-based approach that enables model predictive control (MPC) to simultaneously plan full-body motions, torque commands, footstep placements, and contact surfaces in real time. To determine if a robot's foot is inside a contact surface, we borrow the winding number concept from topology. We then use this winding number and potential field to create a contact-surface penalty function. By using this penalty function, MPC can select a contact surface from all candidate surfaces in the vicinity and determine footstep placements within it. We demonstrate the benefits of our approach by showing the impact of considering full-body dynamics, which includes joint torque limits and limb dynamics, on the selection of footstep placements and contact surfaces. Furthermore, we validate the feasibility of deploying our topology-based approach in an MPC scheme and explore its potential capabilities through a series of experimental and simulation trials. ## I Introduction To _traverse discrete terrains_ such as stepping stones, legged robots need to carefully plan their footsteps and motions [1, 2, 3]. In previous works, footsteps and motions were computed separately [4, 5, 6] to reduce the combinatorial complexity of these nonlinear problems [7, 8]. However, in doing so, assumptions need to be introduced into the gait pattern, kinematics, and dynamics model. Alternatively, to focus on the footstep planning (i.e., combinatorial problem), other approaches neglect limb dynamics, thereby allowing for formulating this problem as a mixed-integer convex problem [9, 10]. However, for robots with heavy limbs or limited actuation torque, this assumption does not hold, nor can the full reachability (i.e., kinematics) of the robot be exploited. These limitations also lead to errors in footstep tracking, which are caused by improper tracking of angular momentum [11]. The above-mentioned limitations can be addressed by considering the robot's full-body dynamics, as it also takes into account limb dynamics and joint torque limits, and computing motions and footsteps together. However, this results in a large optimization problem that is challenging to solve within a control loop of a few milliseconds. In our recent works, we demonstrated an MPC that takes into account the full-body dynamics of the robot [11, 12]. A key advantage of our previous approaches is the ability of our MPC to generate agile and complex maneuvers through the use of feasibility-driven search [13] and optimal policy tracking. However, it requires a predefined sequence of footstep placements and contact surfaces. To address these limitations, this paper borrows concepts from topology and classical electrostatics to enable the automatic selection of footstep placements and contact surfaces in real time. Compared to other state-of-the-art approaches, our footstep plans ensure joint torque limits, friction-cone constraints, and full-body kinematics and dynamics (Fig. 1). 
To the best of our knowledge, our work is the first to _introduce full-body dynamics MPC that optimizes footstep placement and contact surface_ as well. Fig. 1: Visualization of the planned motions and footsteps, taking into account the robot’s full-body dynamics (including joint torque limits and limb dynamics), friction cones, and all potential contact surfaces. Swing-foot trajectories are represented in different colors, while candidate contact surfaces are indicated by dark gray squares. Our approach can be used to plan footstep placements and contact surfaces for both quadruped and humanoid robots. A video demonstrating our approach is available at [https://youtu.be/uweesAj5_x0](https://youtu.be/uweesAj5_x0). ### _Related Work_ Recent methods for footstep planning can optimize footstep placement and gait pattern [9, 14, 15, 16]. These approaches can handle discrete terrains by smoothing their geometry [7, 14, 15] or using discrete variables in mixed-integer optimization [9, 10, 16, 17]. However, their computational complexity makes it infeasible to deploy them online. This is because they result in a combinatorial explosion of hybrid modes, which cannot be resolved by employing simplified models to cast the problem as mixed-integer convex optimization [9, 17]. Alternatively, it can be formulated as a continuous problem with complementary constraints. However, the ill-posed nature of the complementary constraints increases computation time, making it difficult to deploy this approach online [7, 8, 14]. To avoid combinatorial complexity or ill-conditioning, our work _focuses on optimizing footstep placement without optimizing gait pattern_. The narrow focus on optimizing footstep placement enables online re-planning. For instance, as shown in [18], MPC can generate a walking motion with automatic footstep placement. However, despite the impressive achievement, this approach assumes that the robot behaves as a linear inverted pendulum, which cannot account for its kinematic feasibility, joint torque limits, orientation, or the effect of non-coplanar contact conditions and limb and vertical motions. Later, Di Carlo et al. [19] proposed a convex relaxation of the single rigid body dynamics (SRBD) that can accommodate the robot's orientation. As in [18], this boils down to a linear MPC that can be solved with a general-purpose quadratic programming solver. More recently, other MPC approaches employ SRBD or centroidal dynamics (CD), using direct transcription and general-purpose nonlinear programming solvers [20, 21]. Although these approaches can address non-coplanar contacts and vertical motions, SRBD still cannot account for the robot's kinematic limits and the effects of limb dynamics. Moreover, neither SRBD nor CD can account for joint torque limits. One may think that these limitations are not critical for robots with lightweight legs; however, a recent study shows that they still have a substantial impact on the control [22]. This justifies why, in our work, we _compute motions that ensure the robot's full-body dynamics_. Real-time handling of both body and leg kinematics and dynamics in an MPC manner can be achieved by taking advantage of the temporal structure of the optimal control problem. Differential dynamic programming (DDP) [23] takes advantage of this structure by factorizing a sequence of smaller matrices, rather than employing sparse linear solvers [24] commonly done in nonlinear programming [25]. 
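As an illustration of the structure that DDP-type solvers exploit, the following sketch shows the backward Riccati recursion for the linear-quadratic case: the value function is propagated stage by stage with small per-stage factorizations instead of one large sparse KKT system. Box-FDDP adds regularization, a feasibility-driven line search, and control bounds on top of this recursion; the double-integrator dynamics below are purely illustrative and not the robot's full-body model.

```python
# Minimal sketch of the per-stage factorizations in a DDP/LQR backward pass.
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf, horizon):
    """Riccati recursion: returns the time-varying feedback gains K_t."""
    P = Qf
    gains = []
    for _ in range(horizon):
        # Only small (m x m) and (n x n) matrices are factorized per stage.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Illustrative double-integrator dynamics with time step dt.
dt = 0.02
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
gains = lqr_backward_pass(A, B, np.eye(2), 0.1 * np.eye(1), 10 * np.eye(2), 50)
print(gains[0])   # feedback gain applied at the first stage
```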
This reduction in computational complexity makes it feasible to use MPC with full-body dynamics, as shown in simulation results presented in [26]. Inspired by these results, recent research has shown the application of MPC with SRBD and full kinematics [27, 28, 29] and full-body dynamics [11, 12, 30, 31, 32]. However, these approaches do not address the issue of selecting footstep placement and contact surface with full-body dynamics. While algorithms based on DDP, such as iLQR [33], Box-FDDP [13], and ALTRO [34], have been also developed, they have not yet been applied to address the aforementioned problems. In contrast to other MPC approaches, our topology-based approach selects optimal footstep placement and contact surface within the framework of full-body dynamics MPC by _creating a continuous cost function for terrain_. ### _Contribution_ The main contribution of our work is an MPC that simultaneously plans full-body motions, torque commands, feedback policies, footstep placements, and contact surfaces in real time. Specifically, we identify three technical contributions as follows: 1. A novel topology-based MPC that uses potential field and winding number to plan footstep placements and contact surfaces in real time; 2. Demonstration of the advantages of our topology-based MPC that takes into account full-body dynamics; 3. Experimental validation of our topology-based MPC on the ANYmal robot and simulations that showcase its potential capabilities. In the next section, we introduce the concepts of potential field and winding number that we use to _create a contact-surface penalty function_. ## II Potential Field and Winding Number in Contact Surfaces In this section, we explain in detail our novel approach to footstep placement and contact surface selection. It borrows the concepts of electric potential (Section II-A and II-B) and winding number (Section II-C and II-D) from classical electrostatics and topology, respectively. These concepts enable us to design a contact-surface penalty function, allowing our MPC to choose optimal footstep placement and contact surface (Section II-E). ### _Potential Field_ We begin with an introduction to the electric potential that arises from a charged particle. This is also called a Coulomb potential and is defined as \[V_{E}(\mathbf{r})=\frac{Q}{4\pi\varepsilon_{0}}\frac{1}{|\mathbf{r}-\mathbf{p }|}, \tag{1}\] where \(Q\) is the charge of the particle, \(\varepsilon_{0}\) is the permittivity of vacuum, \(\mathbf{r}\) is the point at which the potential is evaluated, and \(\mathbf{p}\) is the point at which the charged particle is located. Since the physical meaning of the electric potential is not relevant to the definition of our cost function, we drop the scaling factor \(\frac{Q}{4\pi\varepsilon_{0}}\) and make the potential unitless. Moreover, the potential measured across a closed curve \(\boldsymbol{\gamma}(s)\) can be obtained by integrating the potential field: \[V(\boldsymbol{\gamma})=\int\frac{1}{|\boldsymbol{\gamma}(s)-\mathbf{p}|}ds. \tag{2}\] ### _Potential Field in Contact Surfaces_ We describe the contact surfaces as polygons, whose boundaries can be represented by a sequence of linear segments. Similar to Eq. 
(2), we calculate the potential measured across the \(i\)-th segment of the polygon as follows: \[p_{i}=\int_{0}^{1}\frac{1}{|\mathbf{a}_{i}+(\mathbf{b}_{i}-\mathbf{a}_{i})s- \mathbf{\hat{p}}|}ds, \tag{3}\] where \(\mathbf{a}_{i},\mathbf{b}_{i}\in\mathbb{R}^{2}\) are the endpoints of the segment and \(\mathbf{\hat{p}}\in\mathbb{R}^{2}\) is the position of the robot's foot in the _horizontal plane_. It has an analytical solution, and the resulting potential produced by each potential segment \(p_{i}\) can be calculated as follows: \[V_{\boldsymbol{\gamma}}=-\sum_{i}^{\boldsymbol{\gamma}}\frac{\mathrm{atan}( \frac{e_{i}}{e_{i}})-\mathrm{atan}(\frac{d_{i}}{e_{i}})}{e_{i}}, \tag{4}\] with \[\begin{array}{ll}c_{i}=(\mathbf{a}_{i}\cdot\mathbf{b}_{i})-(\mathbf{a}_{i} \cdot\mathbf{\hat{p}})+(\mathbf{b}_{i}\cdot\mathbf{\hat{p}})-(\mathbf{b}_{i} \cdot\mathbf{b}_{i}),\\ d_{i}=-(\mathbf{a}_{i}\cdot\mathbf{b}_{i})-(\mathbf{a}_{i}\cdot\mathbf{\hat{p }})+(\mathbf{b}_{i}\cdot\mathbf{\hat{p}})+(\mathbf{a}_{i}\cdot\mathbf{a}_{i}),\\ e_{i}=[\mathbf{a}_{i}\times\mathbf{b}_{i}]_{z}-[\mathbf{a}_{i}\times\mathbf{ \hat{p}}]_{z}+[\mathbf{b}_{i}\times\mathbf{\hat{p}}]_{z},\end{array}\] where \([\mathbf{a}\times\mathbf{b}]_{z}\) returns the \(z\)-coordinate of the cross product of the two vectors \(\mathbf{a}=[a_{x},a_{y}]\) and \(\mathbf{b}=[b_{x},b_{y}]\) in the XY plane (i.e., \(a_{x}b_{y}-a_{y}b_{x}\)). ### _Winding Number_ To represent discrete terrain as a topological space, we leverage the concept of _winding number_. The winding number is a measure of how many times a curve is wound around a point in the 2D plane. There are different ways to define the winding number. For instance, from the perspective of differential geometry, we can associate this number with the polar coordinates for a point at the origin (\(\mathbf{\hat{p}}=\mathbf{0}\)), i.e., \[\text{wind}(\boldsymbol{\gamma},\mathbf{\hat{p}}=\mathbf{0})=\frac{1}{\pi} \oint_{\boldsymbol{\gamma}}\left(\frac{x}{r^{2}}dy+\frac{y}{r^{2}}dx\right), \tag{5}\] where \(x,y\) defines the parametric equation of a continuous closed curve \(\boldsymbol{\gamma}\), with \(r^{2}=x^{2}+y^{2}\). ### _Winding Number in Contact Surfaces_ As in Section II-B and using the formula derived by [35], we calculate the winding number of the contact surface by summing over the \(i\)-th linear segments, as follows: \[\text{wind}(\boldsymbol{\gamma},\mathbf{\hat{p}})=\sum_{i}^{\boldsymbol{ \gamma}}\frac{\mathrm{atan}2(c_{i},d_{i})}{2\pi}, \tag{6}\] with \[\begin{array}{ll}c_{i}=[(\mathbf{a}_{i}-\mathbf{\hat{p}})\times(\mathbf{b}_{ i}-\mathbf{\hat{p}})]_{z},\\ d_{i}=(\mathbf{a}_{i}-\mathbf{\hat{p}})\cdot(\mathbf{b}_{i}-\mathbf{\hat{p}}).\end{array}\] Then, we can determine if a point \(\mathbf{\hat{p}}\) is inside the contact surface by checking if \(\text{wind}(\boldsymbol{\gamma},\mathbf{\hat{p}})\geq\frac{1}{2}\). ### _Contact-Surface Penalty Function_ To enable the robot to select a placement \(\mathbf{p}_{\mathcal{C}_{k}}\) (with \(\mathcal{C}_{k}\) as each foot) within a contact surface, we assign a zero cost when the placement is inside the surface. 
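The two quantities just introduced can be computed directly from the polygon vertices. The sketch below evaluates the segment potential of Eq. (3) by numerical quadrature (rather than the closed form in Eq. (4)) and the winding number of Eq. (6), and applies the containment test \(\text{wind}(\boldsymbol{\gamma},\mathbf{\hat{p}})\geq\frac{1}{2}\); the square contact surface and query points are illustrative.

```python
# Minimal 2D sketch of the segment potential (Eq. 3) and winding number (Eq. 6).
import numpy as np

def segment_potential(a, b, p, samples=200):
    """Approximate Eq. (3): integral of 1/|a + (b - a)s - p| for s in [0, 1]."""
    s = (np.arange(samples) + 0.5) / samples          # midpoint rule
    pts = a[None, :] + s[:, None] * (b - a)[None, :]
    return np.mean(1.0 / np.linalg.norm(pts - p, axis=1))

def polygon_potential(polygon, p):
    """Sum of segment potentials over the polygon boundary."""
    edges = zip(polygon, np.roll(polygon, -1, axis=0))
    return sum(segment_potential(a, b, p) for a, b in edges)

def winding_number(polygon, p):
    """Eq. (6): sum of signed angles atan2(c_i, d_i)/(2*pi) over all edges."""
    total = 0.0
    for a, b in zip(polygon, np.roll(polygon, -1, axis=0)):
        u, v = a - p, b - p
        c = u[0] * v[1] - u[1] * v[0]                 # [(a - p) x (b - p)]_z
        d = u @ v                                     # (a - p) . (b - p)
        total += np.arctan2(c, d) / (2.0 * np.pi)
    return total

# A unit-square contact surface (illustrative vertices, counter-clockwise).
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
inside, outside = np.array([0.5, 0.5]), np.array([2.0, 0.5])
print(winding_number(square, inside) >= 0.5)    # True  -> foot is on the surface
print(winding_number(square, outside) >= 0.5)   # False -> foot is off the surface
print(polygon_potential(square, outside))       # decays with distance to the surface
```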
To achieve this, we use an indicator function based on the winding number to compute the _contact-surface penalty_ cost \(\ell_{\mathbf{p}_{\mathcal{C}_{k}}}\) as follows: \[\ell_{\mathbf{p}_{\mathcal{C}_{k}}}=\begin{cases}0&\text{if }\sum_{j}V_{ \boldsymbol{\gamma}_{j}}=0,\\ \sqrt{\frac{\sum_{j}N_{j}}{\sum_{j}\boldsymbol{\gamma}_{j}}}&\text{if }\sum_{j}\text{wind}( \boldsymbol{\gamma}_{j},\mathbf{\hat{p}}_{\mathcal{C}_{k}})<\frac{1}{2},\\ 0&\text{if }\sum_{j}\text{wind}(\boldsymbol{\gamma}_{j},\mathbf{\hat{p}}_{ \mathcal{C}_{k}})\geq\frac{1}{2},\end{cases} \tag{7}\] where \(V_{\boldsymbol{\gamma}_{j}}\) is the potential, \(N_{j}\) is the number of linear segments, and \(\text{wind}(\boldsymbol{\gamma}_{j},\mathbf{\hat{p}}_{\mathcal{C}_{k}})\) is the winding number obtained for the polygon \(j\). The potential and winding number are computed using Eq. (4) and Eq. (6). The analytical derivatives of this penalty function can be computed using the chain rule. As shown in Eq. (7), we define the penalty function by using a square root (second line). However, to avoid division by zero, we set the penalty function to zero when the potential is zero (first line). This happens when \(\mathbf{p}_{\mathcal{C}_{k}}\) lies on the boundary of the polygon. Fig. 2 illustrates our penalty function when we have two contact surfaces. Our penalty function is based on electric potential, which results in a harmonic field that remains harmonic for convex polygons, non-convex polygons, collections of polygons, self-intersecting curves, and overlapping polygons. This means that it contains the fewest possible number of local minima and saddle points. Our topology-based representation allows us to enforce all of these properties using the winding number. Furthermore, inspired by [36], our penalty function exploits invariances within a homology class defined by the winding number. In other words, this function captures containment, which makes it an ideal candidate for numerical optimization. Fig. 2: Contact-surface penalty function for two contact surfaces. The cost is zero inside the two candidate contact surfaces outlined in black and positive elsewhere. To determine if the robot’s feet are inside the contact surfaces, we compute the winding number. ## III MPC and Pipeline In this section, we describe our MPC formulation for selecting footstep placements and contact surfaces (Section III-A). Then, we present the control pipeline and setup used in our experiments with the ANYmal robot (Section III-B). ### _Topology-Based MPC_ Building upon our previous work [11], our MPC solves a hybrid optimal control problem at each control time step. The different modes of hybrid dynamics define different contact conditions along the optimization horizon. These rigid contact conditions are subject to the robot's full-body dynamics (i.e., _contact dynamics_). We also model the contact-gain transitions between these modes using the _impulse dynamics_. Our Box-FDDP solver [13] then computes full-body motions, torque commands, and feedback policies while keeping within the robot's joint torque limits, given a predefined set of footstep placements. Here, our contact-surface penalty function extends the capabilities of our previous MPC by enabling it to automatically plan footstep placements and contact surfaces, given a reference velocity and a set of candidate contact surfaces. In Eq. (8), the modifications introduced in this work are highlighted in blue. 
\[\min_{\mathbf{x}_{i},\mathbf{u}_{k}} \sum_{k=0}^{N-1}\left(\ell_{k}^{reg}+w_{r}\|\mathbf{\hat{r}}_{k}- \mathbf{\hat{r}}^{ref}\|^{2}+w_{f}\sum_{\mathcal{C}_{k}}\ell_{\mathbf{P}_{ \mathcal{C}_{k}}}\right)\] s.t. if \[k\] is a contact-gain transition: \[\mathbf{q}_{k+1}=\mathbf{q}_{k},\] \[\begin{bmatrix}\mathbf{v}_{k+1}\\ -\mathbf{\Lambda}_{\mathcal{C}_{k}}\end{bmatrix}=\begin{bmatrix}\mathbf{M}_{k} &\mathbf{J}_{\mathcal{C}_{k}}^{\top}\\ \mathbf{J}_{\mathcal{C}_{k}}&\mathbf{0}\end{bmatrix}^{-1}\begin{bmatrix} \boldsymbol{\tau}_{b_{k}}^{\mathcal{C}}\\ -\mathbf{a}_{\mathcal{C}_{k}}^{\top}\end{bmatrix},\] (impulse dyn.) else: \[\mathbf{q}_{k+1}=\mathbf{q}_{k}\oplus\int_{t_{k}}^{t_{k}+\Delta t_ {k}}\mathbf{v}_{k+1}\,dt,\] \[\mathbf{v}_{k+1}=\mathbf{v}_{k}+\int_{t_{k}}^{t_{k}+\Delta t_{k}} \mathbf{\hat{v}}_{k}\,dt,\] (integrator) \[\begin{bmatrix}\hat{\mathbf{v}}_{k}\\ -\mathbf{\Lambda}_{\mathcal{C}_{k}}\end{bmatrix}=\begin{bmatrix}\mathbf{M}_{k} &\mathbf{J}_{\mathcal{C}_{k}}^{\top}\\ \mathbf{J}_{\mathcal{C}_{k}}&\mathbf{0}\end{bmatrix}^{-1}\begin{bmatrix} \boldsymbol{\tau}_{b_{k}}^{\mathcal{C}}\\ -\mathbf{a}_{\mathcal{C}_{k}}^{\top}\end{bmatrix},\] (contact dyn.) \[\log(\mathscr{V}_{\mathbf{P}_{\mathcal{Q}_{k},k}^{\top}}^{-1} \mathscr{V}_{\mathbf{P}_{\mathcal{Q}_{k},k}^{ref}})=\mathbf{0},\] (vertical foot pos.) \[\mathscr{V}_{\mathbf{P}_{\mathcal{Q}_{k},k}^{\top}}^{-1} \mathscr{V}_{\mathbf{P}_{\mathcal{Q}_{k},k}^{\top}}^{ref}=\mathbf{0},\] (vertical foot vel.) \[\mathbf{A}^{\mathscr{V}_{\mathbf{P}_{\mathcal{C}_{k}}^{ref}}}= \mathbf{a}_{\mathcal{C}_{k}},\] (contact surface height) \[\mathbf{C}\mathbf{\Lambda}_{\mathcal{C}_{k}}\geq\mathbf{c},\] (friction-cone) \[\underline{\mathbf{x}}\leq\mathbf{x}_{k}\leq\overline{\mathbf{x}},\] (state bounds) \[\underline{\mathbf{u}}\leq\mathbf{u}_{k}\leq\overline{\mathbf{u}},\] (control bounds) (8) where \(\mathbf{x}=(\mathbf{q},\mathbf{v})\) is the state of the system, \(\mathbf{q}\in\mathbb{SE}(3)\times\mathbb{R}^{n_{j}}\) (with \(n_{j}\) as the number of joints) is the joint configuration, \(\mathbf{v}\in\mathbb{R}^{n_{v}}\) (with \(n_{v}=6+n_{j}\)) is the generalized velocity, \(\mathbf{u}\in\mathbb{R}^{n_{j}}\) is the joint torque input, \(\boldsymbol{\lambda}_{\mathcal{C}}\in\mathbb{R}^{n_{e}}\) (with \(n_{c}\) as the dimension of the contact forces) is the contact force, \(\mathbf{p}_{\mathcal{C},\mathcal{G}}\in\mathbb{R}^{n_{e}}\) (with \(\mathcal{C}\) and \(\mathcal{G}\) as active and inactive contacts, respectively) is the position of the foot, \(\mathbf{A}\) and \(\mathbf{a}\) describes the contact surface, \(\oplus\) and \(\ominus\) denote _integration_ and _difference operations_ needed to optimize over manifolds [37]--notations introduced in Crocoddyl[5]. Furthermore, \(\ell_{k}^{reg}\) regularizes the robot's configuration around a nominal posture \(\mathbf{q}^{ref}\), the generalized velocity, joint torques, and contact forces as follows: \[\ell_{k}^{reg}=\|\mathbf{q}_{k}\ominus\mathbf{q}^{ref}\|_{\mathbf{Q}}^{2}+\| \mathbf{v}_{k}\|_{\mathbf{N}}^{2}+\|\mathbf{u}_{k}\|_{\mathbf{R}}^{2}+\| \boldsymbol{\lambda}_{\mathcal{C}_{k}}\|_{\mathbf{K}}^{2},\] where \(\mathbf{Q},\mathbf{N}\in\mathbb{R}^{n_{v}\times n_{v}}\), \(\mathbf{R}\in\mathbb{R}^{n_{j}\times n_{j}}\) and \(\mathbf{K}\in\mathbb{R}^{n_{e}\times n_{e}}\) are a set of (positive definite) diagonal weighting matrices. 
To enable the automatic footstep placement, we first define a quadratic cost \(w_{r}\|\mathbf{\hat{r}}_{k}-\mathbf{\hat{r}}^{ref}\|^{2}\) (with \(w_{r}\) as its weight) that tracks the reference base velocity \(\mathbf{\hat{r}}^{ref}\) in the horizontal plane. Next, for all active contacts, we include our contact-surface penalty function \(\sum_{\mathcal{C}_{k}}\ell_{\mathbf{P}_{\mathcal{C}_{k}}}\) (with \(w_{f}\) as its weight). We also add a constraint that specifies the contact surface height. Finally, we impose constraints that define the vertical motion of the swing foot. Note that we define the \(\log(\cdot)\) operator in the vertical foot velocity constraint as the footstep placement may lie on a \(\mathbb{SE}(3)\) manifold. \(\mathscr{V}_{\mathbf{P}_{\mathcal{G}}^{-1}}\cdot\mathscr{V}_{\mathbf{P}_{ \mathcal{G}}^{\mathscr{G}}}^{ref}\) describes the inverse composition between the reference and current contact placements [38]. Again, we employ the robot's full-body dynamics (contact and impulse dynamics) in Eq. (8), where \(\mathbf{M}\in\mathbb{R}^{n_{v}\times n_{v}}\) is the joint-space inertia matrix, \(\mathbf{J}_{\mathcal{C}}\in\mathbb{R}^{nc\times nv}\) is the active contact Jacobian, \(\boldsymbol{\tau}_{b}^{\mathcal{C},\mathcal{I}}\in\mathbb{R}^{n_{v}}\) is the force-bias vector, \(\mathbf{a}_{\mathcal{C}}\in\mathbb{R}^{n_{c}}\) is the desired acceleration, and \(\mathbf{C}\) and \(\mathbf{c}\) describes the linearized friction cone (or wrench cone for humanoids). Note that the definition of the force-bias term (\(\boldsymbol{\tau}_{b}^{\mathcal{C}}\) or \(\boldsymbol{\tau}_{b}^{\mathcal{I}}\)) changes between the contact and impulse dynamics. For more details on the regularization cost, dynamics, friction cone, and implementation aspects of our MPC, we encourage readers to refer to [11]. Fig. 3: Overview of our locomotion pipeline. ### _Locomotion Pipeline and Experimental Setup_ Fig. 3 depicts our locomotion pipeline. Reference velocities are sent using a joystick that runs at \(30\,\mathrm{Hz}\). Candidate footstep surfaces in the predefined environment are extracted from mesh files at \(10\,\mathrm{Hz}\). Our topology-based MPC operates at \(50\,\mathrm{Hz}\) with an optimization horizon of \(0.85\,\mathrm{s}\). Our MPC, teleoperation, and surface extractor run on two external PCs. The joystick and surface extractor run on an Intel Core i9-9980 PC (8 core, 2.40 GHz), while the MPC runs on an Intel Core i9-9900 PC (8 core, 3.60 GHz). Lastly, the state feedback controller and state estimator operate at \(400\,\mathrm{Hz}\) on two separate onboard PCs equipped with an Intel Core i7-7500 (2 core, 2.70 GHz). ## IV Results In this section, we first show the impact of considering the robot's full-body dynamics when selecting footstep placements and contact surfaces (Section IV-A). This demonstrates the benefits of our approach compared to other state-of-the-art methods that use simplified models. We then validate our approach in an MPC scheme on the ANYmal robot (Section IV-B). Finally, we showcase the potential highly-dynamic maneuvers that our approach can generate by exploiting limb dynamics in simulations (Section IV-C). ### _Footstep Planning with the Robot's Full-Body Dynamics_ As explained in Section I-A, neither SRBD nor CD can enforce joint torque limits. Furthermore, SRBD cannot account for limb dynamics and kinematics. 
Here, we present results that justify our full-body dynamics MPC that is able to consider the robot's joint torque limits and limb dynamics when planning footsteps placements and contact surfaces. To strictly focus on the independent variables (i.e., joint torque limits and friction coefficients), we set up the simulation as shown in Fig. 4. #### Iv-A1 Joint Torque Limits We investigated the ability of the robot to adjust to changing torque limits. The result will allow us to consider having the robot carry a payload (e.g., an arm), or perform more dynamic movements (e.g., crossing large gaps), without exceeding the robot's torque limit. First, we used the ANYmal's default torque limits of \(40\,\mathrm{N}\,\mathrm{m}\) to have the robot walk with a trotting gait on the footstep surface of two pallets with a gap of \(30\,\mathrm{cm}\). Next, we reduced the torque limits by half. Fig. 4a and 4b show the selection of footstep placements and contact surfaces for the default and reduced torque limits, respectively. These demonstrate that our MPC with the reduced torque limit positions the robot's legs closer together. As a result, the joint torque commands are reduced and kept within the torque limit, as shown in Fig. 5. #### Iv-A2 Limb Dynamics We varied the friction coefficient to investigate how the robot leverages limb dynamics when determining footstep placements and contact surfaces. We used the ANYmal's default torque limits and the same reference velocity used in Section IV-A1. As shown in Fig. 4c, our approach stretches the robot's leg perpendicular to the ground, allowing it to maintain the contact forces within the friction cone (with a friction coefficient of \(0.14\)). Fig. 4: Visualization of footstep placements and contact surfaces planned by our MPC when changing the joint torque limit and friction coefficient. The figures in the left and right columns of each row show the start and end of the crossing of the second foot, respectively. (a) ANYmal’s default torque limits of \(40\,\mathrm{N}\,\mathrm{m}\) and high friction coefficient of \(1.0\). (b) Reduced torque limits of \(20\,\mathrm{N}\,\mathrm{m}\). (c) Low friction coefficient of \(0.14\). When the torque limit is reduced in (b), our approach moves the footstep closer to the robot’s hip and selects contact surfaces that require lower torque commands. Conversely, when the friction coefficient is low, our approach stretches the robot’s leg perpendicular to the ground to maintain the reaction force within the small friction cone. Fig. 5: Torque commands on the knee joints when the robot crosses the patches. The red line represents the torque commands computed with the ANYmal’s default torque limit of \(40\,\mathrm{N}\,\mathrm{m}\), while the blue line represents the torque commands computed with the reduced torque limit of \(20\,\mathrm{N}\,\mathrm{m}\). The black dashed line indicates the reduced torque limit. With the default torque limit, the torque command for the right front knee computed by our MPC reaches its peak at \(2.7\,\mathrm{s}\). When the torque limit is lowered, our MPC changes the patch crossing moment, as shown in Fig. 4b. The command torque for the left front knee reaches the limit at \(2.9\,\mathrm{s}\), but it is maintained within it during that gait. ### _Validation of MPC on the ANYmal Robot_ We validated our MPC approach on the ANYmal robot. The robot did not perceive its surroundings, but it received the candidate contact surface information extracted from the predefined environment, as shown in Fig. 
3. The robot moved with a walking gait. It selected the next contact surface and planned its footstep placement within it, considering the full-body dynamics and kinematics, as shown in Fig. 6a. With our topology-based MPC, the ANYmal robot successfully traversed a discrete terrain with a gap of \(10\,\mathrm{cm}\). This demonstrates that our approach can operate fast enough in an MPC scheme on real robots, with an optimization horizon of \(0.85\,\mathrm{s}\) and a control frequency of \(50\,\mathrm{Hz}\). ### _Achieving Dynamic Locomotion_ We conducted simulations to further evaluate our MPC scheme in achieving highly dynamic quadrupedal locomotion by utilizing limb dynamics. To showcase the potential of our approach in generating highly dynamic maneuvers with robots equipped with high-torque actuators in the future, we disabled the robot's torque limits. #### Iv-C1 Stair Climbing We extracted the contact surfaces of a staircase with \(30\,\mathrm{cm}\) depth and \(10\,\mathrm{cm}\) height. Our MPC approach selected footstep placements and contact surfaces to climb stairs in real time. The approach demonstrated the ability to handle both trotting and pacing dynamics, as shown in Fig. 6b and 6c. #### Iv-C2 Dynamic Jumping We tested an even more dynamic jumping gait on terrain with a \(40\,\mathrm{cm}\) gap. Fig. 6d shows that our MPC can achieve highly dynamic motions and adjust the footstep placements and contact surfaces to land reliably on the next contact surface. ## V Conclusion In this work, we introduced a novel topology-based approach that enables full-body dynamics MPC to automatically select footstep placements and contact surfaces for locomotion over discrete terrains. Specifically, we proposed a contact-surface penalty function that uses potential field and winding number to optimize both footstep placement and contact surface. With our method, we first justified the importance of considering full-body dynamics, which includes joint torque limits and limb dynamics, in footstep planning. We evaluated the planned footsteps and showed that our full-body dynamics MPC can effectively adapt to variations in joint torque limits and friction coefficients. Second, to demonstrate the practical implementation of our method in an MPC scheme on real robots, we conducted hardware experiments on discrete terrain using the ANYmal quadruped robot. Finally, we showcased the potential capabilities of our approach in various dynamic locomotion maneuvers with robots equipped with high-torque actuators through stair-climbing and gap-crossing simulations. For future work, we plan to implement a perceptive locomotion pipeline using our MPC to demonstrate the practical usefulness of our approach in solving real-world problems with changing environments. Furthermore, we will conduct empirical comparisons between our full-body dynamics MPC approach and reduced-order dynamics MPC to highlight the significance of our proposed approach. Lastly, as demonstrated in Fig. 1, we showed the potential of our method for humanoid locomotion. Although the control complexity is higher, our method can be considered a future research direction for humanoid footstep planning. Fig. 6: Snapshots of various locomotion maneuvers computed by our topology-based MPC with automatic footstep placement and contact surface selection. All experimental and simulation trials were conducted using an MPC with an optimization horizon of \(0.85\,\mathrm{s}\) and a control frequency of \(50\,\mathrm{Hz}\). 
To further explore the potential for more dynamic maneuvers that could be achieved with high-torque actuators, we increased the torque limit for simulations. (a) Experimental validation of a gap-crossing maneuver with a gap of \(10\,\mathrm{cm}\). (b) Climbing up a staircase of \(30\,\mathrm{cm}\) depth and \(10\,\mathrm{cm}\) height with a trotting gait. (c) Climbing up the same staircase with a pacing gait. (d) Jumping over a gap of \(40\,\mathrm{cm}\).
2302.04173
A Survey of Feature detection methods for localisation of plain sections of Axial Brain Magnetic Resonance Imaging
Matching MRI brain images between patients or mapping patients' MRI slices to the simulated atlas of a brain is key to the automatic registration of MRI of a brain. The ability to match MRI images would also enable such applications as indexing and searching MRI images among multiple patients or selecting images from the region of interest. In this work, we have introduced robustness, accuracy and cumulative distance metrics and methodology that allows us to compare different techniques and approaches in matching brain MRI of different patients or matching MRI brain slice to a position in the brain atlas. To that end, we have used feature detection methods AGAST, AKAZE, BRISK, GFTT, HardNet, and ORB, which are established methods in image processing, and compared them on their resistance to image degradation and their ability to match the same brain MRI slice of different patients. We have demonstrated that some of these techniques can correctly match most of the brain MRI slices of different patients. When matching is performed with the atlas of the human brain, their performance is significantly lower. The best performing feature detection method was a combination of SIFT detector and HardNet descriptor that achieved 93% accuracy in matching images with other patients and only 52% accurately matched images when compared to atlas.
Jiří Martinů, Jan Novotný, Karel Adámek, Petr Čermák, Jiří Kozel, David Školoudík
2023-02-08T16:24:09Z
http://arxiv.org/abs/2302.04173v1
A Survey of Feature detection methods for localisation of plain sections of Axial Brain Magnetic Resonance Imaging ###### Abstract Matching MRI brain images between patients or mapping patients' MRI slices to the simulated atlas of a brain is key to the automatic registration of MRI of a brain. The ability to match MRI images would also enable such applications as indexing and searching MRI images among multiple patients or selecting images from the region of interest. In this work, we have introduced robustness, accuracy and cumulative distance metrics and methodology that allows us to compare different techniques and approaches in matching brain MRI of different patients or matching MRI brain slice to a position in the brain atlas. To that end, we have used feature detection methods AGAST, AKAZE, BRISK, GFTT, HardNet, and ORB, which are established methods in image processing, and compared them on their resistance to image degradation and their ability to match the same brain MRI slice of different patients. We have demonstrated that some of these techniques can correctly match most of the brain MRI slices of different patients. When matching is performed with the atlas of the human brain, their performance is significantly lower. The best performing feature detection method was a combination of SIFT detector and HardNet descriptor that achieved 93% accuracy in matching images with other patients and only 52% accurately matched images when compared to atlas. ## 1 Introduction Image matching is a feature extraction technique, which seeks to establish a correspondence between two or more images using keypoints and descriptors. It is an active area of research used in many fields such as robotics (Sankowski and Nowakowski, 2014; Nakashima, Morio, and Mu, 2019), object detection, object tracking, security (Deng, Xuan, Wang, Li, Yao, and Wang, 2020; Sun, Bebis, and Miller, 2004; Moore, Bihl, Kenneth W Bauer, and Dube, 2017), and medicine (Chowdhary and Acharjya, 2020; Ker, Wang, Rao, and Lim, 2018; Joshi and Phadke, 2012). However, not all methods applicable to images of everyday environments (rooms, landscapes, etc.) are suitable to be used in medicine. Specifically for use on images from various modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, thermograph, electroencephalography, nuclear medicine functional imaging, etc. It was found that although the currently used open-source methods for keypoint search and image similarity matching work well for general images, they are often unsatisfactory for medical images from modalities such as CT, MRI and ultrasound [Noborio, Uchibori, Koeda, and Watanabe, 2019]. Diagnosis of patients is currently labour intensive process limited to only a few experts, who are often overloaded. Even with recent advantages in the development of diagnostic tools [Ker et al., 2018, Deng et al., 2020, Litjens, Kooi, Bejnordi, Setio, Ciompi, Ghafoorian, van der Laak, van Ginneken, and Sanchez, 2017], this trend may not improve in the future. Training personnel, although vital, is a slow and challenging process as the skills required are complex. Among those is the spatial orientation needed to diagnose patients based on the MRI images. A visual aid that would locate MRI images within the atlas of a human brain would help experts diagnose patients faster by pre-selecting in MRI images only regions of interest (ROI). It could also help in training new medical personnel. 
Such a tool can also register images according to the correct shift in MRI images automatically and can serve as a filter for further processing. To enable this, we need to find corresponding brain slices from an atlas (or another patient) with a brain slice of interest and rank them based on the position within the atlas. However, there is a lack of suitable methodology that could be used to evaluate such tools. In Noborio et al. [2019] the authors addressed the comparison of brain images that changed due to gravitational forces during brain surgery and compared the suitability of using SIFT, KAZE, AKAZE and ORB methods. They found that these methods were not suitable for detecting key points inside the skull and proposed an image pre-processing method using Sobel filter and Canny edge operator to improve their performance, with which they achieved significantly better results. According to our own research, this approach is satisfactory in the case of the same patient and the same data-set (created at the same time on the same device), but not to compare images (slices) between a reference (atlas, phantom) and different patients. Tareen, Saleem, Koeda, and Watanabe [2018] performed a comparison of the SIFT, SURF, KAZE, AKAZE, ORB and BRISK methods. However, this comparison was performed on common types of images (mountains, graffiti, terrain, buildings, bricks) containing a large number of edges and corners, which are used by these methods for keypoint detection. Bojanic, Bartol, Pribanic, Petkovic, Donoso, and Mas [2019] conducted a wide-ranging study of different combinations of classical detectors and descriptors. Based on this study, we have selected a detector-descriptor combination of AGAST + SIFT (referred in the text as AGAST) and GFTT + SIFT (GFTT). Besides hand-crafted keypoint detectors and descriptors, alternative deep-learning detectors and descriptors are also being actively researched. A large survey of deep-learning-based methods was performed by Jin, Mishkin, Mishchuk, Matas, Fua, Moo Yi, and Trulls [2020]. Jin et al. [2020] concluded that the performance of the deep-learning methods is on par with the hand-crafted method. Still, when the parameters of classical methods are tuned appropriately, these methods outperform deep-learning approaches. Among the best deep-learning models from this survey were HardNet by Mishchuk, Mishkin, Radenovic, and Matas [2017] and SOSNet by Tian, Yu, Fan, Wu, Heijnen, and Balntas [2019]. In our comparison, we have used HardNet as it is a part of the end-to-end pipeline which is ready to use. A comprehensive survey and detailed description of available feature detection (FD), both handcrafted and deep-learning methods and techniques were performed by Ma, Jiang, Fan, Jiang, and Yan [2021]. For further details on methods used in this work we refer inter ested readers to this survey. This survey however does not consider medical images in any detail. In practice, MRI images from different patients or the same patient but taken at different times or on different equipment can be affected by noise, different scaling, and rotation. Therefore, to compare them and find the most similar slice, is necessary to optimise methods to be more robust and invariant to these transformations and noise or to use appropriate preprocessing steps. 
The novelty of this work and its contribution is the definition of a methodology for performance comparison of the different image processing techniques in searching, registration, and selection of MRI brain scans in the region of interest. Furthermore, using our methodology, we have demonstrated that it is possible to automatically register MRI brain scans and select images from the region of interest using state-of-the-art FD techniques (AGAST, AKAZE, BRISK, GFTT, HardNet, and ORB). We evaluate their effectiveness and sensitivity to the noise and geometrical deformations of MRI brain scans. In section 2 we defined three metrics: robustness \(R\), accuracy \(A\), and cumulative distance \(C\) and described used methods, datasets, and experimental setup. In section 3.2 we test the FD methods invariability on a set of images from the same patient affected by selected image degradations (rotation, noise, scaling). In section 3.3 we investigate the ability of FD methods to match MRI images of different patients and how different image preprocessing steps can improve the results. In section 3.4 and determine how well the FD methods can locate specific MRI slices within the simulated atlas. Lastly, we conclude obtained results in section 4. ## 2 Experimental setup, datasets and methods The image preprocessing for medical images is an important and challenging task [21], especially when a diagnosis is in consideration (e.g. tumour segmentation) [14]. Contrast and Image quality are in general the major problems in medical imagery. Also, when we want to interpret images of the human brain with different data sets or atlases often an initial alignment (also called registration) of the images is needed [11]. The various image processing applications are for example provided by The McConnell Brain Imaging Centre (e.g. BEAST - Brain Extraction based on non-local Segmentation Technique [15], INSECT - method to separate a structural MRI). Part of our investigation is to study and define the effects of an image enhancement on the ability of FD methods to match MRI images of healthy patients. For our tests we downloaded a dataset from open-science neuroinformatics database Open-Neuro (formerly known as OpenfMRI) maintained by a research group around Russell Poldrack [16]. We used a dataset CEREBRuM: a 3T MRI segmentation tool [21] used in the design and testing of an optimised end-to-end convolutional neural network (CNN) architecture [16]. The dataset consists of 947 off-scanner MR images (3 Tesla T1-weighted 1 mm isotropic MP-RAGE 3D sequences scanned at the Centre for Cognitive Neuroimaging at the Institute of Neuroscience and Psychology, University of Glasgow), which of 7 are publicly available as Neuroimaging Informatics Technology Initiative file format (NIFTI). For simplicity this work is limited to axial MRI slices which were exported into Portable Network Graphics (PNG) files by using nii2png software1 from original NIfTI files. Also, the FD methods are not well suited to work with NIFTI formats. Following this process we obtain a series of 170 PNG MRI brain axial slices for each patient. Footnote 1: [https://github.com/alexlaurence/NIFTI-Image-Converter](https://github.com/alexlaurence/NIFTI-Image-Converter) Comparing different FD methods is not trivial. These methods tend to produce a different number of keypoints per image and to complicate the comparison further, each FD method may associate different weights to the matched keypoints. 
Therefore, to rank different methods, we need to depart from absolute number of keypoints and base our comparison on a more general metric. For this purpose, we have devised four metrics, each describing a different aspect of the FD method's ability to match MRI slices. Those are signal-to-noise ratio (SNR), which conveys confidence in matching the two MRI slices. An _accuracy_, which expresses how many MRI slices are correctly matched within a given error, and a _cumulative distance_, which is a total error measure in the number of slices for both correctly matched and mismatched slices. Furthermore, we have defined _robustness_ to evaluate the sensitivity of different FD methods to the amount of noise, change in scale and rotation. The signal-to-noise ratio (SNR) is a standard metric used in many areas of signal processing. In this work, we have used it because it allows us to compare FD methods that produce a different number of keypoints. The SNR is calculated as follows \[\text{SNR}_{ij}=\frac{x_{ij}-\mu(k)}{\sigma(k)}\,, \tag{1}\] where \(x_{ij}\) is the number of matched keypoints between reference image \(j\) and image \(i\). The mean \(\mu(k)\) and the standard deviation \(\sigma(k)\) are determined based on the number of matched keypoints between image \(j\) with a set of all images from patient \(k\), which also contains image \(i\) used in the comparison. Image \(j\) could belong to a different patient or the same patient. First, we have investigated the sensitivity of the selected methods (AGAST, AKAZE, BRISK, GFTT, HardNet, ORB) to the degradation imposed on the MRI images, such as the scaling factor, different noise levels, rotation, alignment and changes in contrast. To measure the invariability of a method to MRI image degradation, we used a set of MRI images from the same patient. We have calculated robustness \(R\) as \[R_{x}=\frac{\text{SNR}_{ii,x}}{\text{SNR}_{ii}}\,, \tag{2}\] where the \(\text{SNR}_{ii}\) corresponds to the SNR of the match between non-degraded image \(i\) with itself, and \(\text{SNR}_{ii,x}\) is the SNR of the match between degraded image \(i\) and non-degraded image \(i\). The type of degradation \(x\) used might be rotation (r), noise (N) and scaling (s). For simplicity, we choose the rotation of \(5^{\circ}\), upscaling by 5%, Gaussian noise with 10, 20 and 30 standard deviations (STD). The \(5^{\circ}\) angle was chosen based on recent study by Prabhu, Karunakar, Sinha, Mariyappa, Bhargava, Vellurugan, and Anitha (2021) which shows that observed in the T1- and T2-weighted MR images obtained is in range from \(2^{\circ}\)-\(6^{\circ}\). The choice of upscaling of 5% is selected as a potential accepted difference for different age ranges and gender. In MRI images, the typical types of noise are Rician noise, Gaussian Noise and Rayleigh noise (Goyal, Dogra, Agrawal, and Sohi, 2018). However, in most of the clinical applications prevails the Gaussian noise (Biswal, Zerrin Yetkin, Haughton, and Hyde, 1995). To extend our study of FD methods to the comparison of two different patients or a patient to the atlas, we have introduced two more metrics: accuracy, and cumulative distance. In such a case the correct alignment (rotation, scaling) and contrast (noise) are essential, especially when the selected FD method is sensitive to one or more of these factors. This process is commonly known as "registering data", e.g., transforming the medical data to the Talairach coordinate system (Collins et al., 1994). 
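A minimal sketch of these ingredients is given below: the three image degradations (a \(5^{\circ}\) rotation, a 5% upscaling, and additive Gaussian noise), the SNR of Eq. (1) evaluated for the self-match, and the robustness ratio of Eq. (2). The input array and match-count series are placeholders; in practice the slices are the exported PNG images and the counts come from the keypoint matching described later.

```python
# Degradations of Section 2, SNR of Eq. (1) and robustness of Eq. (2);
# the image and match-count series below are placeholders.
import cv2
import numpy as np

def rotate(img, angle_deg=5.0):
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))

def upscale(img, factor=1.05):
    return cv2.resize(img, None, fx=factor, fy=factor,
                      interpolation=cv2.INTER_LINEAR)

def add_gaussian_noise(img, std=30.0):
    noisy = img.astype(np.float64) + np.random.normal(0.0, std, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def snr(match_counts, i):
    """Eq. (1): SNR of the match with image i against the whole series."""
    x = np.asarray(match_counts, dtype=float)
    return (x[i] - x.mean()) / x.std()

def robustness(counts_degraded, counts_clean, i):
    """Eq. (2): R_x = SNR_{ii,x} / SNR_{ii} for input image i."""
    return snr(counts_degraded, i) / snr(counts_clean, i)

slice_img = np.zeros((256, 256), dtype=np.uint8)     # stand-in for a PNG slice
degraded = {"rotation": rotate(slice_img),
            "upscaling": upscale(slice_img),
            "noise_std30": add_gaussian_noise(slice_img)}

# Illustrative match-count series of slice i = 100 against 170 slices of the
# same subject, without and with degradation.
rng = np.random.default_rng(1)
clean = rng.poisson(20, 170)
clean[100] += 200
noisy = rng.poisson(20, 170)
noisy[100] += 120
print("R_noise =", robustness(noisy, clean, 100))
```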
Accuracy is an aggregated metric and it tells us how many MRI slices were correctly matched, that is when normalised, it tells us a ratio of correctly matched images in percentage. Accuracy is calculated as follows \[A_{d,c}=\sum_{i=1}^{n}x_{i}/n\,,\quad x_{i}=\begin{cases}1\,\text{if},|y_{e}-y_ {b}|\leq d\\ 0,\,\text{otherwise}\end{cases}\,, \tag{3}\] where \(y_{e}\) is the correct index of the image which should be selected by the FD method, \(y_{b}\) is the index of the image which is the best match of the source image \(i\), and \(n\) is the total number of images we took into account. Since MRI slices that are next to each other are similar (depending on resolution and MRI slice step) the interval parameter \(d\) is the neighbourhood of image indices where the image match is considered correct. Lastly, \(c\) indicates preprocessing step(s) applied to the data. For the general overview of how compact or spread out the predictions of the selected FD method and preprocessing is, we also introduce the cumulative distance as \[C_{c}=\sum_{i=1}^{n}\left|y_{e}-y_{b}\right|. \tag{4}\] When combined the three metrics allow us to describe the behaviour of the FD method. High accuracy together with high cumulative distance means that images are mostly correctly matched with some mismatched images with a large distance from the correct index. A typical mismatched image has a low or high image index. On the other hand low accuracy with a small cumulative distance means that results are centred around correct values but have a larger spread. In this case, increasing \(d\) would increase the accuracy. The SNR role is to indicate how the other two metrics are reliable as it expresses confidence in the match. A high SNR value indicates that matching two images is probably not due to noise while low SNR indicates that selecting which images match was affected by the noise thus accuracy and cumulative distance may be affected. To find a match for an image \(i\) of a patient, we have to compare the image using one of FD methods with all images from the reference (another patient or the atlas). This involves calculating the number of matched keypoints for all images, respectively calculating SNR for each one and then selecting the highest SNR to get the image slice that matches the source image the most. In the end of this process, we have a series of SNR values, which could be affected by noise. To reduce the effect of the noise on the selection of the best matching image, we use a moving average with a window size of 7 samples. The optimal size of the window depends on the slice step. For larger slice steps shorter window is more appropriate. An example of the SNR series for input image \(i=100\) with the computed moving average is shown in Fig. 1. For the all mentioned FD methods and feature matching methods (AGAST, AKAZE, BRICK, GFTT, HardNet, ORB), we wrote a few simple python scripts (version 3.9) using the Open Source Computer Vision Library (OpenCV - version 4.5.1.48) [Bradski, 2000] and Kornia library (version - 0.6.4) [Riba, Mishkin, Ponsa, Rublee, and Bradski, 2020]. For the data analysis, we used the R programming language (version 3.6.1). All scripts are available at [https://github.com/jan2nov/FDMinM](https://github.com/jan2nov/FDMinM). 
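The sketch below outlines this selection and scoring step, assuming an ORB detector/descriptor with a brute-force matcher for brevity (the study also pairs the AGAST and GFTT detectors with SIFT descriptors and uses HardNet through Kornia): Lowe-ratio matches are counted per reference slice, converted to the SNR series of Eq. (1), smoothed with the 7-sample moving average, and the results are scored with the accuracy of Eq. (3) and the cumulative distance of Eq. (4).

```python
# Minimal matching-and-scoring sketch; detector choice and the commented
# usage lines are illustrative, not the exact pipeline of the released scripts.
import cv2
import numpy as np

def count_matches(img_a, img_b, ratio=0.75):
    """Number of Lowe-ratio matches between two slices (ORB + Hamming BF)."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    knn = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2)
    return sum(1 for pair in knn
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

def best_match_index(match_counts, window=7):
    """Standardize to the SNR series of Eq. (1), smooth, and take the argmax."""
    x = np.asarray(match_counts, dtype=float)
    snr = (x - x.mean()) / x.std()
    smoothed = np.convolve(snr, np.ones(window) / window, mode="same")
    return int(np.argmax(smoothed))

def accuracy_and_cumulative_distance(expected_ids, best_ids, d=5):
    """Eqs. (3) and (4) over all evaluated input slices."""
    diffs = np.abs(np.asarray(expected_ids) - np.asarray(best_ids))
    return float(np.mean(diffs <= d)), int(diffs.sum())

# Illustrative usage (reference_slices would hold the 170 atlas/patient slices):
# counts = [count_matches(input_slice, ref) for ref in reference_slices]
# best_id = best_match_index(counts)
# acc, cdist = accuracy_and_cumulative_distance(expected_ids, best_ids, d=5)
```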
The criteria and parameters used for matching keypoints are for the simple Low ratio test, we used the limit around 0.75, knnMatch with the out-of-best matches found per each query descriptor equal to 2 or as in the case of HardNet mutual nearest neighbour function with threshold distance 0.95. To illustrate the ability of different FD methods to match different MRI slices, we have chosen representative samples that are shown in Fig. 2. In Figure 3 we illustrate a flowchart of the process of selecting the most similar MRI Figure 1: Example of computed series of SNR computed by using Eq. (1) on the sets of same subject. The blue line with points represents SNR in the case of input image \(i=100\), the red line corresponds to the moving average. image in the atlas (or in some of our tests a patient) relative to the input ID image. In the preprocessing part, it is then possible to apply various corrections to the image (rotation, scaling, contrast), including possible other corrections. Everything is then passed through the selected FD method and computed a SNR distribution function for each atlas image is created. The one with the largest SNR value then indicates the position of the most similar input image from the atlas. ## 3 Results and Discussion ### Relation between the signal to noise ratio with image degradation To determine how each FD method is behaving with each image degradation we first present the case of SNR values (Eq. (1)) computed on the same set of non-enhanced, non-degraded images as the input images ("identity"). The results of this test for a few selected ID images (25, 50, 100, 150) are shown in Fig. 4 (higher value is better). As we can see, most of the FD methods are able to find the right axial image with high certainty, respectively on average the SNR values for all input images from best to worst FD methods are AGAST (\(\sim\)10.6), GFTT (\(\sim\)9.7), ORB (\(\sim\)8.9), BRISK (\(\sim\)8.1), AKAZE (\(\sim\)7.7), and HardNet (\(\sim\)7.4). Only HardNet in a few cases (for example the input image ID=150) noted the neighbouring images as better matches than the input image itself. In Figures 5 and 6, similarly as in the previous case in Figure 4 for image IDs 25, 50, 100 and 150, we show the behaviour of the SNR values when the image degradation to the input images are applied. Namely, in the first columns a clock-wise rotation of 5\({}^{\circ}\) is used, in the second columns images were up-scaled by 5%, and in the third columns a Gaussian noise with STD=30 is used. Please note, that we also tested the Gaussian noise with 10 and 20 STD and found out, that in general it slowly decreases the SNR values overall. For this reason we included only the results with STD 30 as the extreme case. In general, the scaling factor is the one where the SNR values are affected the most, followed by rotation, and lastly the image noise (see the values of SNR). From the perspective of the FD methods the GFTT and AKAZE are the most invariant in all image degradation factors, especially in the case of Gaussian noise, where the loss in the values of SNR is almost negligible. AGAST is the most invariant in the case of rotation degradation. The ORB and BRISK methods are affected by the changes in the input images and we can see a \(\sim\)40% loss in the SNR values. HardNet is benefiting from the image degradations of the neighbouring images, respectively the decreasing number of matching points, and thus improving the SNR. 
In the Table 1 we summarise the average of obtained SNR for all input images when related degradation is used. Figure 4: Behaviour of the SNR (y-axis) defined by Eq. (1) for selected input images (25, 50, 100, 150) in case of used FD methods: AGAST (first row left), AKAZE (first row center), BRISK (first row right), GFTT (second row left), HardNet (second row center) and ORB (second row right). The solid lines correspond to an average computed from a series of SNR of each subject, whilst the coloured area to the standard error. Figure 3: Schematic diagram of the process of selecting the most similar MRI image relative in the atlas in relative to the input MRI image. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Feature detection & \multicolumn{4}{c}{Image degradation factor:} \\ method & None & Rotation & Upscaling & Noise \\ \hline AGAST & 10.60 & 10.31 & 4.70 & 9.15 \\ AKAZE & 7.70 & 7.02 & 5.78 & 6.92 \\ BRISK & 8.06 & 4.53 & 3.87 & 4.36 \\ GFTT & 9.66 & 9.22 & 8.28 & 8.47 \\ HardNet & 7.41 & 9.43 & 9.39 & 9.19 \\ ORB & 8.93 & 6.47 & 5.24 & 6.87 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the mean SNR for AGAST, AKAZE, BRISK, GFTT, HardNet, and ORB methods for cases when the non-degenerated and degenerated input images are taken into account. Figure 5: The values of SNR (y-axis) computed by using Eq. (1) (higher values is better) for selected input images (25, 50, 100, 150) for the first three used FD methods: AGAST (first row), AKAZE (second row) and BRISK (third row). The first column corresponds to an input image degradation with rotation by \(5^{\circ}\), the second column to up-scaling the image by 5%, and the third column to the case when Gaussian noise of standard deviation 30 is added. The colour lines and the coloured region are following the same description as in Fig.4. Figure 6: Continuation of Figure 5 showing the value of SNR (y-axis) computed by Eq. 1 with the next three FD methods (in rows): GFTT, HardNet, and ORB, for each image degradation (columns: rotation, scale, noise). keypoints (also called interest points) in most input slices of the MRI images, followed by GFTT (\(\sim\)600), and then by BRISK (\(\sim\)450). When we applied the rotation the number of keypoints dropped or remained the same for all used methods. All other image degradations (upscaling and Gaussian noise) behave slightly differently for each feature detector tested. For AKAZE, ORB, HardNet (SIFT) and GFTT, the upscaling factor show the highest increase in the number of interest points, whilst for BRISK and AGAST it is the Gaussian noise. Please note, that from image ID=150 the number of keypoints rapidly decreases. This can be expected as input images approach the top of the head, where there is less and less usable information not only from the medical point of view, but also from the computer vision point of view. However, for the completeness of our survey, we present the results in these cases as well. The average number of found keypoints for each used method and image degradation are shown in Fig. 7. From the point of computational performance, the order is from the fastest to the slowest GFTT, AGAST, HardNet, AKAZE, ORB and BRISK (0.2, 0.25, 1, 5, 8 and 15 minutes) to compare 1 input image with 170 others. ### Robustness These tests serve especially for a better estimation of the problems with the selected FD methods and to compare their invariability to typical image degradation in medical science, respectively in our case for MRI. 
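For illustration, the three degradations used in these robustness tests (a clockwise rotation of 5\({}^{\circ}\), up-scaling by 5%, and additive Gaussian noise with STD = 30) could be generated with OpenCV/NumPy as in the following sketch; this is a re-implementation for the reader's orientation rather than the published script, and cropping the up-scaled image back to the original size is our assumption.

```python
import cv2
import numpy as np

def rotate(img, angle_deg=-5.0):
    """Rotate around the image centre; a negative angle is clockwise
    in OpenCV's convention."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def upscale(img, factor=1.05):
    """Up-scale by 5% and centre-crop back to the original size."""
    h, w = img.shape[:2]
    big = cv2.resize(img, None, fx=factor, fy=factor, interpolation=cv2.INTER_LINEAR)
    y0, x0 = (big.shape[0] - h) // 2, (big.shape[1] - w) // 2
    return big[y0:y0 + h, x0:x0 + w]

def add_gaussian_noise(img, std=30.0):
    """Additive zero-mean Gaussian noise with the given standard deviation."""
    noise = np.random.normal(0.0, std, img.shape)
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
```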
In Figure 8 we show the robustness \(R_{x}\), calculated by Eq. 2 for each method and image degradation, as a function of the input image IDs. Since the robustness \(R_{x}\) corresponds to the ratio between the SNR affected by the image degradation and the unaffected SNR, a higher value of \(R_{x}\) means that the FD method is more invariant to the selected image degradation. The most problematic image degradation type for the selected FD methods is the upscale factor, where AGAST, ORB and BRISK show a loss of SNR of around 50 %. The rotation and the Gaussian noise cause similar changes in robustness for all FD methods, with the greatest influence in the case of BRISK and the least in the case of AGAST and AKAZE. On the other hand, HardNet shows an increase in robustness for all image degradations, i.e., values higher than 1 (see the second row, middle column of Fig. 8). This is due to the fact that the SNR of the neighbouring image IDs ("the noise") decreases (see Figure 5), and thus the SNR of the input image ID increases. The most invariant FD methods are GFTT and AKAZE, followed by AGAST, ORB and BRISK. The GFTT and AKAZE methods show in general a loss of 10 and 14 % of SNR compared with their original SNR (\(R_{x}\)\(\sim\)0.86-0.90), AGAST has, except for the rotation degradation where it is the most invariant of all methods, around 20 % (\(R_{x}\)\(\sim\)0.60), ORB has around 32 % (\(R_{x}\)\(\sim\)0.68), and BRISK 47 % (\(R_{x}\)\(\sim\)0.53). Instead, HardNet shows a gain of 10 % (\(R_{x}\)\(\sim\)1.1). ### Accuracy As mentioned in the previous section, the accuracy \(A_{d,c}\) (defined by Eq. 3) gives us a general overview of how the selected FD methods behave when a patient is compared to another patient or to an atlas. For our test we selected patient number 7 to serve as the reference (atlas). In this section we compare data taken from the same machine, i.e., with similar image properties. In the next section we provide the comparison with a selected atlas provided by the International Consortium for Brain Mapping (ICBM). Figures 9 and 10 show the resulting accuracy \(A_{5,c}\) (noted in each figure, corresponding to the colour of the dots) and the distance between the best matched ID slice and the expected one (x-axis). We show the accuracy \(A_{5,c}\) for each FD method used in columns, for different preprocessing steps in rows; namely AGAST, AKAZE and BRISK in Fig. 9, and GFTT, HardNet and ORB in Fig. 10. For the preprocessing steps we applied, individually or in combination, rotation (r), scaling (s), skull extraction (b) and equalisation (e). In each case we also provide the value of the cumulative distance \(C_{c}\) (see Eq. 4), where a lower value is better as it reflects the spread of the ID differences across all input images. Finally, Fig. 11 visualises whether a patient-6 MRI slice was correctly matched (blue bar) or not (red bar) under the selected preprocessing steps, i.e., it shows in which sections of the axial plane the FD methods have problems. The condition \(x_{i}\) (here with \(d\) equal to 5) is used for the calculation of the accuracy \(A_{d,c}\) defined in Eq. 3 and corresponds to the allowed range of correctness. In general we see that the FD methods (except HardNet) have problems with the MRI slices from the top and the bottom of the head. The overall summary with their mean \(\text{SNR}_{ij}\) (Eq. 1), accuracy \(A_{d,c}\) (Eq. 3) and cumulative distance \(C_{c}\) (Eq.
4) for each featured detection and image preprocessing is stated in the Table 2 and in Fig. 12. The results from the subsection 3.2 show that, when correcting the input images by preprocessing, we could expect improvements in SNR and accuracy \(A_{d,c}\). However, only in the case of AKAZE do we see a significant increase (from the accuracy of 67% up to 87%), whilst AGAST, ORB, BRISK and GFTT show rather worse results. This decrease is probably due to the fact that methods are too sensitive to such image degradation/improvements or to the fact that their methods identify key regions with insufficiently unique features to compare MRI images from a different patient. However, the SNR in the case of AGAST and GFTT when a rotation, a skull extraction or both is applied slightly increases. From the tested methods HardNet shows the most consistent behaviour when a preprocessing is applied or not. Although the HardNet shows quite good results in accuracy \(A_{5,c}\) and cumulative distance \(C_{c}\), the SNR value is around 4 at best (AGAST \(\sim\)4.5, AKAZE \(\sim\)4, GFTT \(\sim\)4.4, BRISK \(\sim\)2.5, ORB \(\sim\)2.4), which is relatively low. Figure 7: The average number of keypoints for each input image found by FD methods AGAST (the first row left), AKAZE (first row middle), BRISK (first row right), GFTT (the second row left), HardNet (second row middle), and ORB (second row right) with and without image degradation. The solid line represents the mean value computed from the 7 subjects and the coloured area is the standard deviation. \begin{table} \begin{tabular}{c r r r r r r r} \hline & \multicolumn{6}{c}{Image preprocessing} \\ \cline{2-7} & none & rotation & rotation & rotation & \multicolumn{1}{c}{rotation} & \multicolumn{1}{c}{equalisation} & \multicolumn{1}{c}{rotation} \\ & & & skull ex. & scaling & skull ex. & & \multicolumn{1}{c}{equalisation} \\ AGAST & & & & & & & \multicolumn{1}{c}{skull ex.} \\ & & & & & & & \\ \hline Accuracy [\%] & 72 & 78 & 65 & 68 & 65 & 78 & 74 \\ Cumulative distance & 3396 & 4939 & 9077 & 4013 & 7099 & 2854 & 4538 \\ mean SNR & 2.93 & 2.64 & 3.19 & 2.78 & 2.94 & 3.12 & 3.2 \\ \hline AKAZE & & & & & & \\ \hline Accuracy [\%] & 67 & 70 & 72 & 67 & 73 & 87 & 82 \\ Cumulative distance & 9083 & 8639 & 6888 & 10233 & 4315 & 3149 & 3272 \\ mean SNR & 2.26 & 2.23 & 2.2 & 2.09 & 2.27 & 2.74 & 2.76 \\ \hline BRISK & & & & & & \\ \hline Accuracy [\%] & 58 & 49 & 57 & 45 & 11 & 63 & 7 \\ Cumulative distance & 9925 & 15063 & 9276 & 13299 & 23708 & 8383 & 25774 \\ mean SNR & 1.35 & 1.34 & 1.57 & 1.64 & 1.94 & 1.49 & 1.61 \\ \hline GFTT & & & & & & \\ \hline Accuracy [\%] & 81 & 83 & 87 & 68 & 70 & 53 & 73 \\ Cumulative distance & 2616 & 2373 & 2075 & 9727 & 4128 & 17027 & 5760 \\ mean SNR & 2.93 & 2.64 & 3.19 & 2.78 & 2.94 & 3.12 & 3.2 \\ \hline HardNet & & & & & & \\ \hline Accuracy [\%] & 90 & 91 & 89 & 88 & 89 & 93 \\ Cumulative distance & 2000 & 1918 & 1934 & 2045 & 1923 & 2062 & 1648 \\ mean SNR & 2.24 & 2.32 & 2.02 & 2.56 & 2.25 & 3.11 & 3.53 \\ \hline ORB & & & & & & \\ \hline Accuracy & 57 & 41 & 43 & 45 & 8 & 58 & 33 \\ Cumulative distance & 7610 & 12133 & 12046 & 13299 & 25910 & 10748 & 20342 \\ mean SNR & 1.2 & 1.16 & 1.58 & 1.64 & 1.92 & 1.18 & 1.44 \\ \hline \end{tabular} \end{table} Table 2: Summary of the FD methods compared with images from different subjects. In the columns we show the effect of the selected combination of image preprocessing techniques and their influence on the Accuracy \(A_{5,c}\) (see Eq. 
3), cumulative distance \(C_{c}\) (see Eq. 4) and the average of the signal to noise ratio. ### Comparing with Atlas Nowadays, as the amount of medical image data is growing, such atlases are mostly used as a gold standard validation for quantitative analysis or testing new techniques. In addition, as the atlases are mostly simulated models, they are also independent of the technological aspects of MRI devices, or in general of the used device. Moreover, comparing a set of MRI images of one patient with a simulated or "ideal" atlas by using FD methods can bring us a basic awareness of how to create atlases, or what additional information (metadata) is good to keep, which preprocessing steps are required and how to work with them in general. Here, we follow the same methodology of testing as in the previous subsection 3.3 but for the reference we have used a simulated atlas. Respectively, as the ground truth for our analysis procedure, we downloaded the atlas/phantom generated from BrainWeb: Simulated MRI Volumes for Normal Brain [Cocosco, Kollokian, Kwan, and Evans, 1997, Kwan, Evans, and Pike, 1999, 1996, Collins, Zijdenbos, Kollokian, Sled, Kabani, Holmes, and Evans, 1998]. Currently, the simulated brain database (SBD) offers 3 modalities, 6 slice thicknesses, 6 levels of noise, and 3 levels of intensity non-uniformity, i.e., we chose the one that most closely matches our downloaded data set (1 mm slice thickness, T1 modality, 0 % noise level, and 0 % intensity non-uniformity). For brevity we show only results for HardNet and GFTT, as they demonstrate the best results in general, however, we provide all results of tested FD methods in Table 3. Similarly, we show only the results for no image preprocessing enhancements and selected ones, specifically the combination of rotation, scaling, equalisation, and skull extraction. The results are summarised in Fig. 13. Overall, when we compare MRI slices with the atlas all the FD methods show poor results. The best-performing method is HardNet. HardNet accuracy without preprocessing Figure 8: The robustness \(R_{x}\) (y-axis) computed by the Eq. 2 for each input image for AGAST (first row left), AKAZE (first row center), BRISK (first row right), GFTT (second row left), HardNet (second row center), and ORB (second row right). The solid line represents the mean value computed from the 7 subjects, their colour to the appropriate image degradation (rotation, upscaling, noising) and the coloured area to the standard deviation. Figure 9: Accuracy \(A_{d,c}\) of each used FD method (from the left column to right column: AGAST, AKAZE, BRISK) and the applied image preprocessing (from the first row to last: none; rotation; rotation and skull extraction; rotation and scaling; rotation, skull extraction and scaling; equalisation; rotation, equalisation, skull extraction and scaling. The blue dots correspond to the input images satisfying the condition in Eq. 3. The y-axis corresponds to the running mean SNR and the x-axis to the distance from the expected. The vertical dashed line represents the expected image ID. The accuracy \(A_{d,c}\) and cumulative distance \(C_{d}\) computed by Eq. 4 are stated in each figures. Figure 10: Accuracy \(A_{d,c}\) of each used FD method (from left column to right column: GFTT, HardNet, ORB) and the applied image preprocessing (from first row to last: none; rotation; rotation and skull extraction; rotation and scaling; rotation, skull extraction and scaling; equalisation; rotation, equalisation, skull extraction and scaling. 
The blue dots correspond to the input images satisfying the condition in Eq. 3. The y-axis corresponds to the running mean SNR and the x-axis to the distance from the expected. The vertical dashed line represents the expected image ID. The accuracy \(A_{d,c}\) and cumulative distance \(C_{d}\) computed by Eq. 4 are stated in each figures. Figure 11: Relation between the condition \(x_{i}\) of accuracy \(A_{5,c}\) (Eq. 3) and the image ID (from image index 20 up to index 150) with respect to image corrections (rows in the plot) and FD methods. The blue bars indicate that the condition \(x_{i}\) is met, i.e., the found position of the best matching image ID is not more than \(c=5\) mm from both sides, whilst the red bars indicate the opposite. Figure 12: Summary of the FD methods relation between the cumulative distance \(C_{c}\) (Eq. 4) and accuracy \(A_{5,c}\) (Eq. 3) with respect to each image preprocessing (labels). The point colours correspond to the FD methods while the point size to the running mean SNR. achieved \(A_{5,\mathrm{none}}\)\(\sim\)53 % with running mean SNR \(\sim\)1.5 and cumulative distance \(\sim\)5400. Apply the preprocessing of rotation, brain extraction, scaling and equalization slightly improved only the running SNR to \(\sim\)1.57. In terms of accuracy and cumulative distance, it remained similar or slightly worse. AGAST and GFTT also show good results in accuracy (\(A_{5,\mathrm{none}}\)\(\sim\)30 %, especially when the image preprocessing is applied the accuracy increase significantly (for GFTT \(\sim\)53 % and AGAST \(\sim\)31 %. However, the cumulative distance \(C_{c}\) for both methods is high \(\sim\)10000-20000, i.e., showing certain uncertainty. AKAZE accuracy without preprocessing is 20 % with a running mean SNR \(\sim\)1.97 and cumulative distance \(\sim\)26000. The best result is achieved with all preprocessing steps where \(A_{5,\mathrm{rebs}}=29\)%, \(C_{\mathrm{rebs}}=19888\), and running mean SNR \(\sim\)1.22. This does not represent a significant improvement. The ORB and BRISK methods achieve similar results with accuracy \(A_{5,\mathrm{none}}=12\) % and \(A_{5,\mathrm{none}}=16\) %, cumulative distance \(C_{\mathrm{none}}=26804\) and \(C_{\mathrm{none}}=24196\), running SNR 1.12 and 0.97. The low value of SNR (\(<3\)) suggests that matched slices are found due to random occurrence or are found with low confidence liable to noise fluctuations. We suppose, that these results are caused by the different interpretations of the grayscale tissues between the atlas and MRI slices. Therefore, additional data information on grayscale ranges for each component of the brain could be useful for later analysis. #### 3.4.1 Other anatomical planes In the previous sections, we have analysed in detail the behaviour of different FD methods for the axial plane case. In general practice, however, other anatomical planes, the sagittal and the coronal planes are used as well. Therefore, here we provide a brief analysis of the accuracy \(A_{5,c}\) for the two best-performing FD methods HardNet and GFTT. We present the comparison of all three anatomical planes with the atlas. The results for all anatomical planes are summarised in Fig. 13. Similarly to the axial plane, the HardNet FD method achieved better results compared to the GFTT method and achieves at best \(A_{5,be}\)\(\sim\)53 % accuracy for the sagittal plane and \(A_{5,be}\)\(\sim\)75 % for the coronal plane. 
Whilst GFTT accuracy for the sagittal plane is at best \(A_{5,be}\)\(\sim\)36 % and coronal plane \(A_{5,be}\)\(\sim\)60 %. The cumulative distance \(C_{c}\) in the case of the \begin{table} \begin{tabular}{c l c c c} \hline \hline \multicolumn{5}{c}{Comparing FD method with atlas (BRAINWEB)} \\ \hline FD method & Im. processing & Accuracy \(A_{5,c}\) & Cum. distance \(C_{c}\) & run. mean SNR \\ \hline \multirow{2}{*}{AGAST} & none & 18 & 18823 & 2.64 \\ & e & 32 & 21153 & 2.50 \\ \hline \multirow{2}{*}{AKAZE} & none & 20 & 26849 & 1.97 \\ & rebs & 29 & 19888 & 1.22 \\ \hline \multirow{2}{*}{BRISK} & none & 16 & 24196 & 0.93 \\ & e & 15 & 22960 & 1.31 \\ \hline \multirow{2}{*}{GFTT} & none & 27 & 10361 & 1.83 \\ & rb & 52 & 8313 & 2.17 \\ \hline \multirow{2}{*}{HardNet} & none & 53 & 5438 & 1.51 \\ & rebs & 52 & 6000 & 1.57 \\ \hline \multirow{2}{*}{ORB} & none & 11 & 26804 & 1.12 \\ & e & 17 & 19560 & 1.29 \\ \hline \hline \end{tabular} \end{table} Table 3: Achieved performance of Accuracy \(A_{5,c}\) (Eq. 3), Cumulative distance \(C_{c}\) (Eq. 4) and running mean SNR of all tested FD methods without and with applied their best image processing with respect of comparison with the atlas. sagittal plane for both FD methods is similar \(\sim\)10000-30000. This is due to the fact that the FD method generally fails to distinguish well between the left and the right hemispheres. Although, in general, the two hemispheres are not necessarily equal, the FD methods used gave similar SNR values, and without additional information, we were not able to select the correct image. However, as part of most standards, the orientation and location information is included in the metadata. Therefore we can limit the search for matching images only to the left or right hemispheres, thus avoiding the problem of symmetry. If we use the additional information to select the correct hemisphere, we see that the cumulative distance \(C_{c}\) significantly improves to a value \(\sim\)2500-8000, as well as the accuracy \(A_{5,c}\), improves up to 85% for HardNet and 66% for GFTT. These results are summarised in Fig. 14, where the blue/green dots represent the images satisfying the accuracy condition \(x_{i}\) the correct/opposite hemisphere. In the case of the coronal plane, both FD methods with the atlas show better accuracy than in the case of the axial plane. Specifically, \(A_{5,c}\)\(\sim\)75% and a cumulative distance \(C_{c}\)\(\sim\)5000 for HardNet, and \(A_{5,c}\)\(\sim\)60% and \(C_{c}\)\(\sim\)15000 for GFTT. ## 4 Conclusion and Future work We have used six freely available FD methods (FD methods), namely AGAST, AKAZE, BRISK, GFTT, HardNet, and ORB, and evaluated their ability to match MRI brain slices. To that end, we have tested how are selected FD methods invariant to various image degradations (rotation, up-scaling, added Gaussian noise). Then we compared methods based on their capability of matching MRI brain slices of different patients but using the same MRI scanner. Lastly, we have evaluated how well different methods can match MRI scans of a patient with a simulated brain atlas. Methods for localisation of plain sections of the axial brain by magnetic resonance are one of the basic prerequisites for testing programs for digital image analysis that would detect pathological changes in the image. In order to test these programs for changes in a given brain structure, it is necessary to have methods that localise the given region-of-interest and displays it. 
Manual selection of these regions for further marking of such structures would be time-consuming, especially in the case of longitudinal studies, where it is necessary to obtain similar information during the same type of examination on a different machine. In clinical practice, these methods of imaging brain structures could be used in combination with programs for digital image analysis, especially in patients with neurodegenerative diseases, where changes in the brain parenchyma occur gradually and a visual assessment by a doctor alone is not sufficient to recognize pathological processes. The invariance of the methods to image degradation was tested on MRI scans of the same patient. The task was to match the selected MRI slice to the same but degraded MRI slice among the rest of the MRI slices of the same patient. The invariance itself was measured as the decrease in the signal-to-noise ratio (SNR) when the MRI slice was matched with the unmodified and with the degraded image. Of the tested methods, the most invariant was GFTT, with decreases of 5% for rotation, 15% for upscaling, and 12% for Gaussian noise, followed by AKAZE (10%, 25%, 10%) and AGAST (3%, 55%, 14%). HardNet showed a gain in SNR (+27%, +27%, +24%): although there is a decrease in the number of matched points, the level of noise decreased even further, which makes HardNet one of the best methods regarding invariance to image degradation. It is worth noting, however, that during our testing of HardNet we found that the number of matching points changed by 1-2 points when the test was run again on the same sets. The BRISK method demonstrated an average 50% decrease for all applied degradations, making it the most sensitive of the selected methods to image degradation. ORB exhibited on average a 35% reduction in SNR, with a 30% decrease for rotation and Gaussian noise and a 40% decrease for up-scaling. This shows that BRISK and ORB are not well suited for MRI slices. Figure 13: Comparison of the achieved accuracy \(A_{5,c}\) for the HardNet (first three rows) and GFTT (second three rows) methods with the Atlas and selected image preprocessing for different anatomical planes: axial, sagittal and coronal (first, second and third column). In the case of the sagittal plane, we also included the green colour representing images with similar, but lower, SNR that were mismatched due to the symmetry of hemispheres. These images, if selected, would have fulfilled the accuracy condition. Figure 14: The accuracy \(A_{5,c}\) for the HardNet (first row) and GFTT (second row) methods without preprocessing (left) and with preprocessing (right) for the sagittal plane. Different hemispheres are distinguished by colour. Blue dots for the left and green for the right hemisphere satisfy the accuracy condition \(x_{i}\) (\(d=5\)) by Eq. 3. When the FD methods were compared based on their ability to match all MRI slices from a patient to their counterparts from a different patient acquired on the same machine, the best method was HardNet, which managed to correctly match 93% of the images, followed by AKAZE and GFTT, which both matched 87% of the images, and then AGAST with 78%. BRISK and ORB managed to match 63% and 58% of images correctly. However, the confidence in these matches is arguably still low: for the leading FD methods, the SNR for most of the matches was below 4, whilst for ORB and BRISK most matches had an SNR of around 2.5. Comparing MRI slices with the simulated brain atlas failed for all tested methods.
The best result was achieved with HardNet where only 53% of images were correctly matched with low confidence. The reason for this mismatch is probably that the grayscale values of interest are too different and may prevent the FD method to select "unique" points of interest, thus prevent us from matching the correct images. As one solution a dynamical histogram renormalization may be needed based on the raw (e.g. DICOM) data format, or in the case of atlases a defined map of grayscale values ranges. As future work, testing the potential FD method would be useful to increase the tests of robustness with a mapping of grayscale ranges as well as include other free available FD methods or, at best, to develop a specific FD method suited for medical images from the beginning. From the tests we provided, such FD method needs to locate points of interest similarly like AKAZE, which features are generally detected in the form of blobs instead as corners (for example BRISK, ORB). It could even lead to the interpretation of the medical images as a whole or a larger part of it. Another possibility would be to tune internally the available FD methods. However, as would be helpful in the future to compare or search slices from different modalities. The tuning could help only specific modalities, and for other it will need to be tuned again. Also would be interesting the creation of a custom atlas suited for image indexing using FD methods should be considered. Such atlas would need to add more information relevant to FD methods and could be an indicator for other types of atlases in the future. Also, it would be interesting to run a similar summary on different images than the brain. ## 5 Acknowledgement The authors would like to express their gratitude to the Research Centre for Theoretical Physics and Astrophysics, Institute of Physics, Silesian University in Opava for institutional support. This work was supported by the Ministry of Health of the Czech Republic grants number NV-19-04-00270, NV-19-04-00362 and NU21-09-00357, and Palacky University grant number JG_2019_004.
2306.16080
A serial dual-channel library occupancy detection system based on Faster RCNN
The phenomenon of seat occupancy in university libraries is a prevalent issue. However, existing solutions, such as software-based seat reservations and sensors-based occupancy detection, have proven to be inadequate in effectively addressing this problem. In this study, we propose a novel approach: a serial dual-channel object detection model based on Faster RCNN. This model is designed to discern all instances of occupied seats within the library and continuously update real-time information regarding seat occupancy status. To train the neural network, a distinctive dataset is utilized, which blends virtual images generated using Unreal Engine 5 (UE5) with real-world images. Notably, our test results underscore the remarkable performance uplift attained through the application of self-generated virtual datasets in training Convolutional Neural Networks (CNNs), particularly within specialized scenarios. Furthermore, this study introduces a pioneering detection model that seamlessly amalgamates the Faster R-CNN-based object detection framework with a transfer learning-based object classification algorithm. This amalgamation not only significantly curtails the computational resources and time investments needed for neural network training but also considerably heightens the efficiency of single-frame detection rates. Additionally, a user-friendly web interface and a mobile application have been meticulously developed, constituting a computer vision-driven platform for detecting seat occupancy within library premises. Noteworthy is the substantial enhancement in seat occupancy recognition accuracy, coupled with a reduction in computational resources required for neural network training, collectively contributing to a considerable amplification in the overall efficiency of library seat management.
Guoqiang Yang, Xiaowen Chang, Zitong Wang, Min Yang
2023-06-28T10:27:17Z
http://arxiv.org/abs/2306.16080v2
# A serial dual-channel library occupancy detection system based on Faster RCNN ###### Abstract The phenomenon of seat occupancy in university libraries is a prevalent issue. However, existing solutions, such as software-based seat reservations and sensors-based occupancy detection, have proven to be inadequate in effectively addressing this problem. In this study, we propose a novel approach: a serial dual-channel object detection model based on Faster RCNN. This model is designed to discern all instances of occupied seats within the library and continuously update real-time information regarding seat occupancy status. To train the neural network, a distinctive dataset is utilized, which blends virtual images generated using Unreal Engine 5 (UE5) with real-world images. Notably, our test results underscore the remarkable performance uplift attained through the application of self-generated virtual datasets in training Convolutional Neural Networks (CNNs), particularly within specialized scenarios. Furthermore, this study introduces a pioneering detection model that seamlessly amalgamates the Faster R-CNN-based object detection framework with a transfer learning-based object classification algorithm. This amalgamation not only significantly curtails the computational resources and time investments needed for neural network training but also considerably heightens the efficiency of single-frame detection rates. Additionally, a user-friendly web interface and a mobile application have been meticulously developed, constituting a computer vision-driven platform for detecting seat occupancy within library premises. This research effectively addresses the persisting issue of seat occupancy management within library systems. Noteworthy is the substantial enhancement in seat occupancy recognition accuracy, coupled with a reduction in computational resources required for neural network training, collectively contributing to a considerable amplification in the overall efficiency of library seat management. 
## I Introduction
## I Introduction A [wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide,, wide, wide, wide, wide, wide,, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide, wide,, wide, wide, wide,, wide, wide, wide, wide,, wide, wide, wide,, wide, wide,, wide, wide,, wide, wide, wide,, wide,, wide, wide,, wide,, wide,, wide,, wide, wide,, wide,, wide,, wide,, wide,, wide,, wide,, wide,, wide,, wide,, wide,,, wide,,, wide,,, wide images taken by UAVs, which greatly improved the search performance of UAVs. Chen et al. (2021) [11] used YOLOv5 algorithm in face recognition to improve the accuracy of face recognition. Liang et al. (2022) [12] deployed YOLOv3 algorithm on NVIDIA Jetson drive platform and proposed a new solution for obstacle avoidance of unmanned driving. YOLO algorithm is a one-stage algorithm with extremely fast detection speed, which is suitable for real-time dynamic detection. Faster RCNN algorithm is a two-stage algorithm, whose detection speed is slower than YOLO [13], but its network complexity makes its accuracy higher than YOLO, which is suitable for single frame image detection. In general, Faster RCNN is more accurate than YOLO in detecting small objects due to its ability to detect more candidate frames and handle the size and position variations of objects. On the other hand, YOLO may be more accurate than Faster R-CNN in detecting large objects because it considers the whole image in the prediction process instead of processing each region individually. This enables YOLO to capture the contextual information of large objects more effectively. Therefore, Faster RCNN algorithm is more suitable for this model. The COCO dataset built by Lin et al. (2014) [14] has greatly reduced the time for researchers to tag photos, making migration learning in complex scenes faster and more accurate. But at present, no scholar has applied target detection to library occupancy detection. Therefore, this paper uses the deep learning method to carry out research on the detection of library occupancy. However, deep learning network training requires a large number of training sets, especially for a complex scene. It is difficult to obtain library scenes. Virtual reality technology can build various scenes and personalized characters through computers, and can provide massive training sets for deep network models. 
After decades of development, virtual reality technology has become highly realistic. Liu (2022) [15] used virtual reality technology to build a virtual fitting room and virtual characters, making it more convenient to buy clothes online. Similarly, virtual reality technology is also widely used in the medical field. The use of virtual characters for remote surgery and diagnosis reduces the time cost of treatment. The construction of human organs and the simulation of physical activities are of great significance to medical research [16, 17, 18]. Virtual reality technology also makes it possible to teach in virtual environments that are unimaginable in the physical classroom, such as virtual laboratories, virtual classrooms and virtual conference rooms [19, 20]. The huge possibilities of accessible virtual technology will make it possible to break the boundaries of physical reality. On the basis of the above research, as shown in Fig. 1, this paper proposes a dual-channel detection model based on the Faster RCNN algorithm for serial target detection and object recognition, and integrates image acquisition, image detection and seat management into a library seat intelligent management platform. The system consists of an image input terminal, a dual-channel detection model and a user terminal. After image segmentation and pre-processing, each sub-image first undergoes target detection, and sub-images in which no human is detected are passed to object recognition. If a book is recognized, the seat is flagged as a suspected occupied seat. The innovation points of this paper are summarized as follows: (1) A dual-channel detection model is proposed, in which target detection and object recognition are carried out serially to reduce the labeling of data sets. Only people and books are trained online, which greatly reduces the cost of computing resources and time. (2) Unreal Engine 5 is used to build virtual scenes and create characters, so that data sets of various scenes, characters and perspectives can be generated at will. The rest of this article is organized as follows: Section 2 shows how the virtual engine builds the dataset. Section 3 shows the selection and training of the network for object detection. Section 4 shows the selection and training of the network for object classification. Section 5 shows the Web Interface and APP of the system, and conducts a live test of the system. Section 6 summarizes the full text and puts forward prospects. ## II Virtual engine data set construction ### _Virtual scene construction_ The process of creating a virtual scene using Unreal Engine 5 and open source materials can be broken down into several steps. Firstly, all the open source materials required are collected, including classrooms, seats, chairs, books, boxes, schoolbags, and people sitting and standing. Then, all the collected materials are imported into the same project. Using the rendering feature of UE5, the classroom, table, and chair are added to the viewport and their relative positions and size parameters are adjusted. To enhance the realism of the virtual scene, further materials are added to the viewport, such as white plastic boxes and curtains. Afterwards, the material of different items is standardized using the material editor. Finally, the brightness of the scene is adjusted and the construction of the virtual scene is completed. Fig.
1: Flow chart of library intelligent seat management system \begin{table} \begin{tabular}{c c c c} \hline \hline **Algorithm** & **Type** & **Advantage** & **Application** \\ \hline Faster & double-stage & Higher & Traffic Detection [7], Agriculture [8], Medical Science [9] \\ RCNN & algorithm & precision & Agriculture [8], Medical Science [9] \\ & & & UAV Search [10], \\ YOLO & One-stage & Faster & Face Detection [11], Autonomous Cars [12] \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of two target detection algorithms ### _Data collection under different viewing angles and illumination_ In this section, we simulate the phenomenon of occupied and non-occupied seats in a library by adding different materials and adjusting different viewpoints and lighting to obtain a virtual dataset. The simulation of the non-occupancy phenomenon is done as follows: randomly select different items from the item network diagram to add to the viewport and drag them to the desk. Then randomly select a character from the sit-stand character material set and add it to the seat. The simulation of seat occupation is as follows: randomly select items from the item material set and add them to different desks without other processing. Finally, simulate the viewpoint detected by the camera in a real library scene. Adjust the camera position to the top view and fix it. Repeat the above steps, and change the camera position to change the view angle, change the light intensity, and finally a large virtual reality dataset can be obtained. Fig. 3 shows some examples of the virtual dataset we have constructed. Fig. 3(a) and Fig. 3(b) have fixed camera views at the top of the back of the classroom, and the pictures contain 5 and 3 people respectively. The fixed camera view in Fig. 3(c) Fig. 3(d) is located right above the classroom and contains eight desks. In Fig. 3(c), four seats are occupied, and in Fig. 3(d), two seats are occupied. Adjust the light intensity in Fig. 3(e) and Fig. 3(f), where 4 seats are occupied in Fig. 3(e), and 5 seats are occupied in Fig. 3(f). ### _Data pre-processing_ We collected a total of 366 images, including 266 images actually taken and 100 constructed based on virtual reality. The actual shooting location is the library of Xidian University (Xi'an, Shaanxi, China), Shandong University (Jinan, Shandong, China) and South China University of Technology (Guangzhou, Guangdong, China). In order to further expand the dataset and enhance the robustness of the model in processing different data, image pre-processing is required for the dataset. In actual library image acquisition, the camera image acquisition process can easily degrade the image quality due to some uncontrollable factors. One of the most influential factors for image recognition is illumination. When the viewing angle is fixed, three different lighting conditions including morning, evening and night can have an impact on person recognition. In camera imaging, mis-exposed photos can be divided into two categories as follows: (1) When the light intensity is too low, the overall brightness of the captured image is low due to underexposure, and the detailed information in the dark areas of the image cannot be clearly presented. (2) When the light intensity is too high, the overall brightness of the captured image is high due to overexposure, and the detail information of the bright area of the image cannot be clearly presented. 
Since the identification of library seats does not require too fine a brightness adjustment for local areas, this section selects a histogram equalization-based method for overall image pre-processing. Histogram equalization is the most basic and stable image processing method, which is also the most simple, intuitive and effective. The main purpose of this method is to redistribute the gray values of pixels with different luminance in a low-light image. By expanding the range of grayscale values in the image, the pixel values are evenly distributed in the image. This improves the brightness and contrast of the original input image, reveals the details hidden in the dark areas of the image, and effectively improves the visual effect of the image. For color images, histogram equalization can be performed for R, G, and B components respectively, but this may lead to color distortion of the resulting image. Therefore, this paper converts RGB space into HSV (hue, saturation and brightness), and equalizes the histogram of V component to ensure that the image color is not Fig. 3: These images are some examples of the virtual datasets we have constructed Fig. 2: U5 Interface distorted. This part calls the Matlab function histeq(), but uses the self-coding method for processing. The specific effect is shown in Fig. 4. Through the repair of overexposure and underexposure, the recognition rate of people and objects can be effectively distinguished. Fig. 5(a) shows that people and books cannot be accurately identified due to insufficient exposure, and the accuracy rate of identifying people is only 75%. However, Fig. 5(b) shows that 100% recognition can be achieved after correction. Therefore, in this paper, before the person detection, the histogram equalization of the image captured by the camera is carried out, and the image brightness is automatically adjusted, and then the part of occupancy detection is entered. ## III Object Detection Based on Faster RCNN ### _Faster R-CNN core architecture_ Faster R-CNN is an object detection algorithm based on classification, which belongs to monocular two-stage detection algorithm. It inherits the idea of traditional object detection and treats object detection as a classification problem. Faster RCNN goes further on the basis of Fast RCNN, and integrates the candidate box generation part into the CNN network, so that the four parts of candidate box generation, feature extraction, candidate box classification, and candidate box boundary regression are all combined in a CNN network. This avoids step by step training and truly realizes end-to-end object detection. The Region Proposal Network (RPN) is utilized to generate candidate bounding boxes, while the ROI Pooling layer maps these generated boxes onto a fixed-size feature map and extracts features. The predicted coordinates and category scores of the bounding boxes are ultimately produced by two fully connected layers. The main network structure of Faster RCNN is shown in Fig. 6, including the following four parts: feature extraction network, region generation network, ROI pooling layer, and classification regression layer. ### _Standard accuracy and loss value of evaluation index_ Single object detection belongs to the binary classification problem. The result which is detected is called Positive, the undetected is called Negative. The correct pre-diction of the classifier is marked as True, and the wrong prediction is marked as False. 
These four basic terms are combined to form four basic elements of evaluation object detection, as shown in TABLE II. TP and TN are both accurate predictions. We call their proportion in all prediction results Accuracy. The formula is: \[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}} \tag{1}\] L1 Loss is also called Mean Absolute Error, or MAE, which measures the average error amplitude of the distance between the Fig. 4: Comparison before and after image preprocessing Fig. 5: Character recognition before and after repair Fig. 6: Faster R-CNN Network Architecture predicted value and the true value. The formula is: \[Loss=\frac{1}{n}\sum_{i=1}^{n}\mid y_{i}-f(x_{i})\mid \tag{2}\] where n is the number of samples, \(y_{i}\) is real border data, \(f(x_{i})\) is forecast box data. The training results in this paper use accuracy and loss as evaluation indicators. ### _Pre-selection of characteristic network_ For Faster RCNN algorithm, the selection of feature extraction network directly affects the performance of the network. Through a series of convolution and pooling operations, the feature extraction network can extract features from input images and generate feature maps. More and more convolutional neural networks (CNN) with superior performance, such as AlexNet [21], GooleNet [22], SqueezeNet [23], VGGNet [24] and ResNet [25], have been proposed by scholars. This section uses AlexNet, GooleNet, SqueezeNet, VGG19, VGG16 and ResNet18 as the pre-training network. The training set includes different scenes, such as individual portrait Fig. 7(a) [26], field scene Fig. 7(b) and library scene Fig. 7(c), to ensure the diversity of dataset. Through pre-training, the network structure with the best performance and the strongest generalization ability is evaluated. Fig. 8 shows the changes in the accuracy of the six networks during the training iteration. The accuracy changes curves of ResNet18 and VGG19 networks are relatively stable, indicating that the convergence speed of both networks is fast. Fig. 9 shows the changes of the loss of the six networks during the training iteration. VGG16 and VGG19 networks decline fastest, indicating that they converge fastest. The maximum accuracy, average accuracy, minimum loss and average loss in the six network iteration processes are calculated. The results are shown in TABLE III and TABLE IV. TABLE III shows the maximum and average accuracy rates in different network training processes. The maximum accuracy that VGG19 can achieve is the maximum, and that of AlexNet is the minimum. This is because VGG19 has more network layers and uses smaller convolutional kernels to better preserve the image characteristics. On the whole, the average accuracy of ResNet18 is the highest, while that of AlexNet is still the lowest. This is because ResNet18 uses a deeper network and has a deeper classification accuracy. TABLE IV shows the minimum and average loss in different network training processes. On the whole, the average loss of VGG19 is the smallest and that of SqueezeNet is the largest. This may be because there are many full connection layers in VGG19 network, which makes the output score at the classification stage higher and the fitting degree with the real samples higher. Fig. 8: Accuracy of six kinds of network training Fig. 7: Examples of training set Fig. 9: Six network training loss According to the comprehensive evaluation of accuracy, loss and convergence speed, VGG19 and ResNet18 networks have the best performance. 
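For illustration, the two evaluation metrics used above translate directly into code; the following NumPy sketch mirrors Eqs. (1) and (2), with purely illustrative variable names.

```python
import numpy as np

def detection_accuracy(tp, tn, fp, fn):
    """Eq. (1): share of correct predictions among all predictions."""
    return (tp + tn) / float(tp + tn + fp + fn)

def l1_loss(pred_boxes, true_boxes):
    """Eq. (2): mean absolute error between predicted and ground-truth
    bounding-box coordinates (arrays of shape [n, 4])."""
    pred = np.asarray(pred_boxes, dtype=float)
    true = np.asarray(true_boxes, dtype=float)
    return float(np.mean(np.abs(true - pred)))

# example: detection_accuracy(90, 85, 10, 15) -> 0.875
```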
To assess the actual detection capability of the networks, the trained networks are used to detect objects on the test set and the results are compared; they are shown in Fig. 10. Fig. 10(a) and Fig. 10(b) show that, in the case of single-person detection, the recognition confidence of VGG19 is far greater than that of ResNet. Fig. 10(c) and Fig. 10(d) show that when the number of people to be detected is increased, the recognition rate of ResNet is lower than that of VGG19, and its recognition confidence for the same person is still lower than that of VGG19. The images in Fig. 10(e) and Fig. 10(f) further increase the number of people to be detected, and at this point the clarity of each person in the picture is greatly reduced. There are 22 people in the picture: 11 people are detected in Fig. 10(e), a recognition rate of 50%, and 17 people are detected in Fig. 10(f), a recognition rate of 77%. To sum up, the actual detection performance of VGG is better than that of the ResNet network, so VGG is selected as the feature extraction network. ### _Network training_ Section 3.3 uses a small sample size to select a suitable convolutional neural network. This section aims to improve the adaptability of the network to complex scenarios such as libraries, and to verify that a network trained on a training set containing both virtual and real data outperforms a network trained on real data only. The number of images is increased, especially photos of students studying in the library. Two datasets are used this time, a and b. Dataset a consists of 103 virtual-reality-constructed images and 103 realistically captured images. Dataset b consists of 206 realistically captured images, of which 103 images are shared with dataset a. Datasets a and b both have a training-set-to-test-set allocation ratio of 8:2. The whole training process is carried out in Matlab 2022 (b), using the single GPU mode. Fig. 11(a) and Fig. 11(c) show the change of accuracy with the number of iterations during training. Since transfer learning is conducted on the basis of pre-trained networks, whose original dataset is the COCO set [13], the accuracy of the network is extremely high. However, there are many jitters in the curve, which indicates a local-optimisation problem in this training. Fig. 11(b) and Fig. 11(d) show the change of the loss with the number of iterations during training. The network trained using dataset a starts to converge at about 2000 iterations, so its convergence speed is slow, whereas the network trained using dataset b converges within a few dozen iterations, which is extremely fast. Fig. 11(e) and 11(f) display the PR and mAP curves, respectively, of the training process using the different training sets. These curves demonstrate that the network trained with both virtual and real data outperforms the network trained solely with real data. TABLE V reports the highest accuracy, average accuracy, minimum loss and average loss of training. Compared with the small- and medium-sized-sample training results of the previous section, the network performance improves as the dataset grows. In terms of training accuracy, the accuracies of the networks trained with dataset a and dataset b are both above 91% and almost the same. However, in terms of training loss, the average loss of the network trained with dataset a is only 0.6165, but the average loss of the network trained with dataset b is as high as 1.2659.
This shows that the performance of the network can be improved by training the network with both virtual and real datasets. Fig. 10: Test results of ResNet 18 and VGG19 object is a single person, the recognition confidence is 99%. Fig. 12 (b) increases the number of people tested to 4, the number of people identified is 4, and the recognition rate is 100%. Fig. 12(c) continues to increase the number of people tested to 6, the number of people identified is 6, and the recognition rate is 100%. Fig. 12(d) continues to increase the number of people tested to 18, and the number of people identified is 18, with a recognition rate of 100%. From the above results, the recognition rate of this network can reach 100%. Fig. 12(e)-12(h) simulates the impact of different perspectives and lighting conditions on network testing. The results show that the viewing angle and lighting conditions will not affect the performance of the network. ## IV Object Classification Based on the person image information that has been identified in Section 3, the acquired images are firstly aliquoted based on the distribution of study room tables and chairs. Based on the result of object detection, the pixel blocks of images without people next to the tables and chairs in the photos are filtered. And classify them into two categories, objects and no-objects, to conclude whether the seat is suspected to be occupied or not. Fig. 11: VGG19 Training Process Curve based on different type of training set training set Fig. 12: Network Test Results ### _Image selection and processing_ The training data are stored in certain subfolders according to the number of categories, and the name of each subfolder is the label of the corresponding category: objects and no-objects. The selected dataset has better accuracy when the selected dataset has 100 images per category and 200 images in total. The dataset we selected in the item classification are both collected from the Internet and include dataset built based on section 2. In order to ensure that the dataset has high application value and practical significance for reality, this paper still focuses on the real scene data in the construction of the migration learning dataset, with the virtual reality constructed images as the auxiliary. The ratio of image data from the two sources is 7:3 for the following experimental design and extension application. ### _Selection of neural network structure and parameter setting_ Looking at the effect of classical deep networks and network layer construction, in order to ensure the suitability of network applications, this paper selects a total of five networks, AlexNet, GoogLeNet, VGG16, ResNet50 and SqueezeNet, for testing and analysis based on a comprehensive consideration of both accuracy and execution efficiency. Finally, the network with the highest fitness and accuracy is selected from these five networks as the main framework architecture for our model. The classical network comparison diagram is shown in Fig. 13. In the initial network selection and testing, the five selected networks AlexNet, GoogLeNet, VGG16, ResNet50 and SqueezeNet are trained and evaluated in this section. To address the differences in their network recognition results, this section obtains the training time and predicted classification accuracy of the five networks under different parameter conditions by fixing a series of parameter values, so as to deter-mine which network is the best network for object classification. 
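For orientation, the same two-class (objects vs. no-objects) transfer-learning setup can be written down compactly. The experiments in this paper were run in MATLAB, so the PyTorch/torchvision sketch below is only an illustrative analogue; the folder name is an assumption, while the optimizer, mini-batch size, learning rate and maximum number of epochs mirror TABLE VI.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# images stored as train_data/objects/*.jpg and train_data/no-objects/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("train_data", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)      # mini-batch size from TABLE VI

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 2)                         # replace the last FC layer for 2 classes

optimizer = torch.optim.SGD(model.parameters(), lr=5e-4, momentum=0.9)  # SGDM, lr = 0.0005
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(40):                                          # maximum epochs from TABLE VI
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```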
The training options are pre-trained so as to select the most stable values for most of the networks' recognition rates, and are set well before training the model. The parameters and their values are shown in TABLE VI. These parameter settings were trained with six different networks. Only the image set classification ratio was considered as the basis for comparing the network performance, and the specific data is shown in TABLE VII. Compared with AlexNet, SqueezeNet has a shorter overall training time, but its stability in object classification is inferior to AlexNet. Although both networks have an accuracy of over 90% in recognition accuracy, the highest recognition accuracy is still achieved by AlexNet, at 93.6%. Although the accuracy of the GoogLeNet network is also high, it is prone to overfitting due to its significantly longer training time than AlexNet and the limitations of the training set. Specifically, AlexNet has been shown to be one of the simplest networks that can be trained in detection applications and perform with high accuracy. From the perspective of library occupancy detection system application, its most valuable factors and benchmarks for recognition judgment are recognition accuracy and network stability, so AlexNet is chosen as the network structure in this section. As shown in Fig. 14, AlexNet is a pre-trained eight-layer CNN architecture consisting of five convolutional layers and three fully connected layers. A nonlinear and nonsaturated ReLU function is selected for the activation function, and local response normalization (LRN) is used to suppress other neurons with smaller feedback, which enhances the generalization ability of the model. The AlexNet uses overlapping pooling, which makes each pooling have overlapping parts and can avoid the overfit-ting phenomenon. ### _Model training_ Since the parameter initial learning rate has a large impact on the accuracy of the training model, its value is too large to cause the model to not converge, and too small to cause the model to converge particularly slowly or fail to learn. After determining the initial learning rate boundary value, three values of 0.0001, 0.0005 and 0.001 are selected as the base learning rate in this paper. Since the Fig. 14: AlexNet Network Structure diagram Fig. 13: Classical network performance comparison \begin{table} \begin{tabular}{c c c} \hline \hline **Optimizing Algorithm** & **Stochastic Gradient Descent with Momentum** \\ \hline Mini Batch Size & 32 \\ Maximum Epochs & 40 \\ Learning rate & 0.0005 \\ Classification rate & 7/3 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Training parameters multi-GPU collaborative feature of AlexNet can greatly improve the training speed and use dropout to effectively reduce overfitting, the number of training iterations is increased appropriately to avoid overfitting and maximize the accuracy of object classification. Finally, 40 is chosen as the maximum number of iterations. TABLE VIII shows the training results. By increasing the number of iterations or changing the ratio of training and testing images and the base learning rate, different evaluation criteria such as runtime, mini-batch accuracy, mini-batch loss, and final accuracy can be obtained for the dataset under different conditions. 
By varying the base learning rate, the maximum number of iterations, and the training/prediction classification ratio, this section obtains the training time, mini-batch accuracy, mini-batch loss, and final accuracy of image classification under the different combinations. TABLE VIII shows that as the classification ratio decreases, the required training time gradually increases, while the accuracy rate shows different trends at different learning rates. Taking the base learning rate of 0.001 as an example, its recognition rate gradually decreases as the classification ratio decreases, indicating the occurrence of overfitting; when the base learning rate is 0.0001, however, the opposite trend appears, indicating that training is still not saturated. The final classification accuracy is highest, at 0.936, in two settings: a base learning rate of 0.001 with a classification ratio of 6/4, and a base learning rate of 0.0005 with a classification ratio of 7/3. However, the former setting requires less training time and has a lower mini-batch loss, making it the optimal setting for image classification. To visually assess the accuracy of object classification, the concept of the confusion matrix is introduced here. The confusion matrix is a situation analysis table for summarizing the prediction results of classification models in deep learning. In library occupancy detection, the confusion matrix is divided into four quadrants, read from top to bottom and left to right. As shown in Fig. 15, the numbers of correct predictions are represented in the blue quadrants: the first quadrant is the number of images with objects that are correctly identified, and the fourth quadrant is the number of images without objects that are correctly identified. The second and third quadrants represent misclassification: the second quadrant element is the number of images with objects incorrectly identified as having no objects, and the third quadrant element is the opposite. The matrix clearly reflects the accuracy of image classification and the problems that occur. A larger number in the blue quadrants means that the model has a higher recognition rate under the corresponding condition. For example, with a learning rate of 0.001 and a classification ratio of 6/4, the number of correctly classified images is 52 (26 + 26), while 5 images are incorrectly identified as having objects on the table and 6 are incorrectly identified as having no objects. Fig. 15 shows the confusion matrix for each of the 12 parameter conditions. After continuous testing on the dataset, the success rate was 0.936 once the optimal parameters were selected. The trained network was used to classify the images, and the results are shown in Fig. 16. The test classification results show that the trained network achieves a satisfactory classification rate for the presence or absence of objects, and the confidence level is high enough to undertake the task of classifying whether objects are present on the seats, so it can reliably determine whether students are suspected of occupying seats.

## V Experiment and Result Display

### _System Hardware Module Construction_

In order to verify the validity of the model, the model is tested in this section. As shown in Fig. 17, the camera used is the Logitech C270 HD Webcam [27]. Its maximum resolution is 720p / 30 fps, and photos can be transferred to a computer via WiFi, Bluetooth, etc.
The test platform is MATLAB R2022b, and the test site is the study room on the 2nd floor of the library of Xidian University (Xi'an, Shaanxi, China).

Fig. 15: Confusion matrix of different parameters

Fig. 16: Test results of object classification

### _System Software Module_

For the presentation of the system output, a Web interface for librarians and an APP for students are designed, so that the system achieves end-to-end functionality. Fig. 18 shows the Web interface designed using MATLAB App Designer, which can be used by the librarian. Images can be imported from the camera and saved to the database on the computer, or any image can be imported manually. The image box on the left shows the imported photos, and the image box on the right shows the segmented sub-images. **v** means there is no occupancy at the seat, and **x** means there is occupancy at the seat. A green button means the detection is completed, and a red button means the detection is in progress. Fig. 19 shows the app designed for students, written in JavaScript and running on Android 6.0.1 and above. Fig. 19(b) indicates that occupied seats are shown in red, blue means that the seat is not occupied, and gray means that the seat is not in use. If a student is occupying a seat, the APP sends a message alert to the student's cell phone. In addition, the APP provides an announcement function, which can release school-related information such as recruitment notices and notifications.

### _Experiments Under Different Lighting and Sparseness of Figures_

The test site is the study room on the 2nd floor of the library at the South campus of Xidian University (Xi'an, Shaanxi, China). As shown in Fig. 20, the test results are displayed on the Web interface. In Fig. 20(a), the test time is in the morning with strong light, and in Fig. 20(b), the test time is in the evening with weak light. Fig. 20(c) shows a sparse distribution of people compared to Fig. 20(b). In Fig. 20(a), there are 16 seats and 2 seats are occupied; the test results show that seats 6 and 12 are occupied. In Fig. 20(b), there are 16 seats in total and 3 seats are occupied; the test results show that seats 7, 14, and 16 are occupied. In Fig. 20(c), there are 16 seats in total and 8 seats are occupied; the test results show that seats 1, 3, 5, 8, 9, 11, 12, and 14 are occupied. The results of all three tests are consistent with reality.

Fig. 17: System hardware block diagram

Fig. 18: Web interface

Fig. 19: APP interface

## VI Conclusion

In this paper, we propose a serial dual-channel detection model based on the Faster RCNN algorithm for object detection and object classification, and develop a computer vision-based library occupancy detection system. The improved detection model only requires labeling of people and network training, which greatly reduces computational resources and time costs. The Faster RCNN algorithm increases the accuracy of detection. Virtual reality technology provides a massive training set for network training and reduces the cost of human photography. The final results are displayed in a specially designed Web interface and APP, which truly realizes the end-to-end functionality of the system. The system has been tested in a school library, and the final test results show that the system is fully operational and detects occupancy in the library. At this stage, the model can only cut and detect specific images, which must contain neatly placed tables and chairs in a top view.
The segmentation and detection of images of arbitrary tables and chairs in different views will be a future research direction. ## Acknowledgment This work was supported by the Fundamental Research Funds for the Central Universities under S202210701123.
2301.02052
Relaxing Instrument Exogeneity with Common Confounders
Instruments can be used to identify causal effects in the presence of unobserved confounding, under the famous relevance and exogeneity (unconfoundedness and exclusion) assumptions. As exogeneity is difficult to justify and to some degree untestable, it often invites criticism in applications. Hoping to alleviate this problem, we propose a novel identification approach, which relaxes traditional IV exogeneity to exogeneity conditional on some unobserved common confounders. We assume there exist some relevant proxies for the unobserved common confounders. Unlike typical proxies, our proxies can have a direct effect on the endogenous regressor and the outcome. We provide point identification results with a linearly separable outcome model in the disturbance, and alternatively with strict monotonicity in the first stage. General doubly robust and Neyman orthogonal moments are derived consecutively to enable the straightforward root-n estimation of low-dimensional parameters despite the high-dimensionality of nuisances, themselves non-uniquely defined by Fredholm integral equations. Using this novel method with NLS97 data, we separate ability bias from general selection bias in the economic returns to education problem.
Christian Tien
2023-01-05T12:59:57Z
http://arxiv.org/abs/2301.02052v3
# Relaxing Instrument Exclusion with Common Confounders ###### Abstract Instruments can be used to identify causal effects in the presence of unobserved confounding, under the famous relevance and exclusion assumptions. As exclusion is difficult to justify and to some degree untestable, it often invites criticism in applications. Hoping to alleviate this problem, we propose a novel identification approach, which relaxes traditional IV exclusion to exclusion conditional on some unobserved common confounders. We assume there exist some relevant proxies for the unobserved common confounders. Unlike typical proxies, our proxies can have a direct effect on the endogenous regressor and the outcome. We provide point identification results with a linearly separable outcome model in the disturbance, and alternatively with strict monotonicity in the first stage. Using this novel method with NLS97 data, we demonstrate the insignificant role of ability bias compared to general selection bias in the economic returns to education problem. Beyond economics, the approach is just as relevant in health treatment evaluation with an unobserved underlying health status, or a psychological study where character traits are unobserved common confounders. **Keywords:** Causal Inference, Unobserved Confounding, Instrumental Variables, Control Function, Proximal Learning ## 1 Introduction Unobserved confounding complicates the identification of a causal effect of a regressor of interest on an outcome. Despite the endogeneity of a regressor of interest, instrumental variable (IV) approaches can identify their causal effect if the famous relevance and exclusion assumptions hold for the instruments. These assumptions are strong and often invite criticism of IV estimates in practice. We propose a novel approach to relax exclusion, in favour of exclusion conditional on an unobserved common confounder, for which some relevant variables are observed. Relaxing exclusion (or exogeneity) is only possible when it is replaced by other strong assumptions. One way to identify causal effects without instrument exclusion is from residual distributions, not variation in the explanatory variables (Heckman, 1979, Millimet and Tchernis, 2013). Very specific forms of heteroskedasticity across the first stage and outcome model can also be used to establish identification without an exclusion restriction (Klein and Vella, 2010, Lewbel, 2012). Others have suggested to use irrelevant variation in instruments to test for the exclusion of the relevant variation in the instruments (D'Haultfoeuille et al., 2021). A similar idea is followed when integrated conditional moments use nonlinear mean-dependence of endogenous variables on instruments, such that the instruments may violate the exclusion restrictions in pre-specified parametric ways. Despite recent advances in estimation with integrated conditional moments (Tsyawo, 2021), the strong identifying assumptions render all these approaches difficult to justify in applications. Our solution differs significantly from these approaches as it only uses a relaxed exclusion restriction and variation in explanatory variables to identify the causal effect of interest. From the perspective of IV, we allow for some endogeneity in the instruments. That endogeneity originates from unobserved common confounders. We call these unobserved confounders _common_, because there are some observed variables, which are relevant for them. These observed variables are called proxies. 
In other words, we assume there are some unobserved variables that explain all correlation (association) between the instruments and these proxies. This assumption is testable. Then, we need to argue for the exclusion of the instruments, but only conditional on the unobserved common confounders (and observable variables). In general, this is a strong relaxation of instrument exclusion conditional on observed variables only. Another way to understand our proposal is as a solution to measurement error in observed confounders for IV. Residual bias is a well-known problem when confounders are measured with error. Proximal learning (Cui et al., 2020) is a solution to the problem, where observed variables measure all unobserved confounders with some error. In proximal learning, the proxies for the unobserved confounder may either be causes of the treatment or outcome variable. These proxies must be sufficiently relevant (e.g. complete) for all unobserved confounders. Separately developed from the proximal learning approach, a control function solution exists with identical conditional independence assumptions and mismeasured confounders (Nagasawa, 2018). Our solution is different, as we do not assume the existence of measurements for all confounders. Instead, we use instruments and assume that measurements exist for all confounders, conditional on which the instruments would be exogenous. In this sense, our solution can be understood as IV with mismeasured confounders. As is standard in the control function literature, our approach will identify average causal (structural) quantities of interest. Unless the outcome model is fully linearly separable in the treatment and disturbance (Newey et al., 1999), where in our case the disturbance includes the effect of the common confounder, we identify those average causal (structural) quantities of interest that integrate out the unobservables without dependence on the treatment using a control function (Imbens and Newey, 2009). Our identification approach is most similar to recent advances in nonlinear panels (Liu et al., 2021). In panel data, unobserved fixed effects are common to the same variables across time, in a similar way as the unobserved common confounders are common to the instruments and proxies in our setup. In Liu et al. (2021), identification stems from a parametric dimension reduction of the effect of observed variables on the outcome, and an index sufficiency assumption that renders the observed variables independent from the fixed effects conditional on an index of the observed variables. In our approach, identification stems from the existence of more instruments than treatments, and an index sufficiency assumption that renders the instruments independent from the unobserved common confounders conditional on an index of the instruments. Just like Blundell and Powell (2004), Liu et al. (2021) do not explain how to derive this crucial index function. One of our main contributions is the derivation of the index function, which arises naturally in the common confounding setup. A motivating example for our proposal is the returns to college education identification problem. It features various biases, ability and selection, and we motivate pre-college test scores as instruments exogenous to selection, while clearly endogenous to ability. The proxies are pre-college risky behaviour dummies, which appear to correlate negatively with ability. 
With NLS97 data, we show that selection bias is the much more economically relevant bias compared to ability bias in this problem. ## 2 Setup The treatment (action) \(A\in\mathcal{A}\) is discrete or continuous with base measure \(\mu_{A}\) of \(\mathcal{A}\subseteq\mathbb{R}^{d_{A}}\). \(Y\in\mathcal{Y}\subseteq\mathbb{R}\) is the one-dimensional outcome variable. Other important variables are the instruments \(Z\in\mathcal{Z}\subseteq\mathbb{R}^{d_{Z}}\), the proxies \(W\in\mathcal{W}\subseteq\mathbb{R}^{d_{W}}\), and the common confounders \(U\in\mathcal{U}\subseteq\mathbb{R}^{d_{U}}\). **Assumption 2.1** (Common Confounding IV Model).: 1. _SUTVA:_ \(Y=Y(A,Z)\)_._ 2. _Instruments_ 1. _Exclusion:_ \(Y(a,z)=Y(a)\perp\!\!\!\perp(A,Z)\ |\ U.\)__ 2. _Index sufficiency: For some_ \(\tau\in L_{2}(Z)\)_, where_ \(T\coloneqq\tau(Z)\)_,_ \(U\perp\!\!\!\perp Z\ |\ T\)_._ 3. _Relevance (completeness): For any_ \(g(A,T)\in L_{2}(A,T)\)_,_ \[\mathbb{E}\left[g(A,T)|Z\right]=0\ \text{only when}\ g(A,T)=0.\] (2.1) 3. _Proxies_ 1. _Exclusion:_ \(W\perp\!\!\!\perp Z\ |\ U.\)__ 2. _Relevance (completeness): For any_ \(g(U)\in L_{2}(U)\)_,_ \[\mathbb{E}\left[g(U)|W\right]=0\ \text{only when}\ g(U)=0.\] (2.2) Assumption 2.1.1 is the standard stable unit treatment value assumption (SUTVA), which implies no interference across units. In assumption 2.1.2a we capture the key relaxation of this model compared to standard IV. It states that the instruments are excluded, yet this exclusion may be conditional on an unobserved (vector-valued) random variable \(U\). This is a significant relaxation of the standard exclusion restriction, which is possible only with assumptions 2.1.2b, 2.1.2c, and 2.1.3. In assumption 2.1.2b, we introduce a control function \(\tau\) and a control variable \(T=\tau(Z)\). Conditional on the control variable \(T\), the instruments \(Z\) are independent from the common confounders \(U\). This assumption describing the existence of the control function \(\tau\) is often called index sufficiency, where \(T\) is a (multiple) index of \(Z\). In assumption 2.1.2c, we require that conditional on the control variable \(T\), the instruments \(Z\) are complete for treatment \(A\). This is a standard completeness condition. It simply means that keeping the variation of \(Z\) described by \(T\) fixed, the instruments must remain sufficiently relevant for \(A\). In slightly different words, after conditioning on \(T\), enough variation must be left in the instruments \(Z\) to infer the effect of treatment \(A\) on outcome \(Y\). As in standard IV with observed confounders, this relevance requirement is typically testable. Assumption 2.1.3a states that the proxies \(W\) are independent from instruments \(Z\) conditional on the common confounders \(U\). The proxies \(W\) must also be complete for the unobserved common confounders \(U\), as stated in 2.1.3b. Again, completeness means the proxies \(W\) are sufficiently relevant for \(U\). A different way to understand these assumptions is that the unobserved variable \(U\), which explains all association (correlation) between the proxies \(W\) and instruments \(Z\), renders the instruments exogenous when observed. In this sense, \(W\) can be (possibly quite poor) proxies for what we consider the unobserved common confounder \(U\), as long as they are sufficiently relevant. Conditional on \(W\), the instruments \(Z\) are still endogenous. Common confounders \(U\) are never observed, and \(W\) could be quite poor proxies for them. 
Yet, we prove that conditioning on a control function \(T\), which makes the instruments \(Z\) and proxies \(W\) independent, restores the exclusion of instruments \(Z\) which holds conditional on the unobserved \(U\). ## 3 Learning the Confounding Structure In this section, we describe the main idea of the paper. Using only observable information, we find a control variable \(T\), conditional on which the instruments \(Z\) are independent from the unobserved common confounders \(U\). We then explain what may be considered the optimal control variable \(T\). ### Learning a Control Function The control function \(\tau\in L_{2}(Z)\), described in lemma 3.0.1, generates the control variable \(T\). This control variable renders the instruments \(Z\) independent from the unobserved common confounders \(U\). Logically, if the instruments \(Z\) and the proxies \(W\) are independent conditional on \(U\), it follows that any such control variable \(T\) also renders \(Z\) and \(W\) independent conditional on \(T\). **Lemma 3.0.1**.: _Assume \(W\perp\!\!\!\perp Z\ |\ U\) (2.1.3a). Take any \(\tau\in L_{2}(Z)\), where \(T\coloneqq\tau(Z)\), such that \(U\perp\!\!\!\perp Z\ |\ T\). Then, also \(W\perp\!\!\!\perp Z\ |\ T\)._ One possible such control variable is \(T=Z\), yet this would leave no remaining variation in \(Z\) to instrument for \(A\) conditional on \(T\). Also, lemma 3.0.1 does not provide a way to identify any control function \(\tau\) apart from a function which captures the same information as \(Z\) itself. For this purpose, we need lemma 3.0.2. In this lemma, we establish that any \(T=\tau(Z)\), conditional on which the instruments \(Z\) and proxies \(W\) are independent, also renders \(Z\) conditionally independent from the unobserved common confounders \(U\). **Lemma 3.0.2**.: _Assume \(W\perp\!\!\!\perp Z\mid U\) (2.1.3a), and for any \(g(U)\in L_{2}(U)\), \(\mathbb{E}\left[g(U)|W\right]=0\) only when \(g(U)=0\) (2.1.3b). Take any \(\tau\in L_{2}(Z)\), where \(T\coloneqq\tau(Z)\), such that \(W\perp\!\!\!\perp Z\mid T\). Then, also \(U\perp\!\!\!\perp Z\mid T\)._ Unlike lemma 3.0.1, the conclusion of lemma 3.0.2 is not obvious and requires the completeness of proxies \(W\) for the unobserved common confounders \(U\). Again, completeness means that the proxies \(W\) must be sufficiently relevant for \(U\). If this were not the case, it would be impossible to keep all variation in \(Z\) that is associated with \(U\) fixed, using a control variable \(T\) derived only using information about the association of \(Z\) and \(W\). We can interpret \(U\) as all unobserved confounders that associate (correlate) \(Z\) and \(W\). Lemma 3.0.2 is an important result, because it allows the identification of a control function that does not capture all variation in instruments \(Z\). For any \(T\) that we identify conditional on which instruments \(Z\) and proxies \(W\) are independent, the instrument exclusion assumption can be relaxed to exclusion conditional on all unobservables \(U\) that associate (correlate) \(Z\) and \(W\) (assumption 2.1.2a). The parallels to standard IV are quite clear: The conditional exclusion assumption 2.1.2a is untestable, yet relaxed compared to standard IV. The relevance requirement of \(Z\) for \(A\) conditional on \(T\) (assumption 2.1.2c) is testable, yet stricter compared to standard IV. 
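Before turning to which control functions leave enough instrument variation, the logic of Lemmas 3.0.1 and 3.0.2 can be illustrated in a linear special case: regress the proxies \(W\) on the instruments \(Z\), keep a \(d_U\)-dimensional summary of the fitted values as \(T=\tau(Z)\), and check that \(Z\) and \(W\) are (linearly) unrelated once \(T\) is partialled out. The simulation sketch below is only an illustration under assumed linearity and a known \(d_U\); it is not the paper's estimator or its formal test.

```python
# Simulation sketch of "learning a control function": with linear structure,
# T = tau(Z) can be taken as a rank-d_U summary of the linear projection of W on Z;
# conditional on T, the partial correlations between Z and W should be close to zero.
import numpy as np

rng = np.random.default_rng(0)
n, d_Z, d_W, d_U = 20_000, 4, 3, 1          # assumed dimensions for illustration

U = rng.normal(size=(n, d_U))               # unobserved common confounder
Z = U @ rng.normal(size=(d_U, d_Z)) + rng.normal(size=(n, d_Z))   # instruments
W = U @ rng.normal(size=(d_U, d_W)) + rng.normal(size=(n, d_W))   # proxies

# Linear projection of W on Z: its fitted values have (population) rank d_U,
# so a d_U-dimensional summary suffices as T = tau(Z).
gamma_hat, *_ = np.linalg.lstsq(Z, W, rcond=None)
fitted = Z @ gamma_hat
_, _, Vt = np.linalg.svd(fitted - fitted.mean(0), full_matrices=False)
T = fitted @ Vt[:d_U].T                     # leading d_U directions of the fitted values

def residualise(M, C):
    """Residuals of each column of M after linear projection on C (plus a constant)."""
    C1 = np.column_stack([np.ones(len(C)), C])
    beta, *_ = np.linalg.lstsq(C1, M, rcond=None)
    return M - C1 @ beta

# Partial correlations of Z and W given T: approximately zero if W is (linearly)
# independent of Z conditional on T.
Zr, Wr = residualise(Z, T), residualise(W, T)
print(np.round(np.corrcoef(Zr.T, Wr.T)[:d_Z, d_Z:], 3))
```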
The requirement for relevance of \(Z\) for \(A\) conditional on \(T\) implies that only a subset of control functions \(\tau\in L_{2}(Z)\), which leave enough relevant variation in \(Z\) conditional on \(T\), enable model identification under assumption 2.1: \[\mathcal{T}^{\rm valid}\coloneqq\left\{\tau\in L_{2}(Z):\ (W\perp\!\!\!\perp Z \mid\tau(Z))\,\text{ and }\left(\mathbb{E}\left[g(A,\tau(Z))|Z\right]=0\text{ only when }g(A,\tau(Z))=0\right)\right\} \tag{3.1}\] As both defining relevance conditions of this set \(\mathcal{T}^{\rm valid}\) are testable, its non-emptiness is testable as well. ### Optimal Control Function Under assumption 2.1, the optimal control function \(\tau^{*}\in\mathcal{T}^{\rm valid}\) out of the set of valid control functions captures the minimum feasible information in \(Z\) in a sense of minimising the variance of the asymptotically unbiased estimator \(\hat{J}\) of some causal estimand \(J\). Figure 1 illustrates schematically how the bias and variance of the IV estimator conditional on the control variable \(T\) depend on the complexity of \(\tau\).

Figure 1: Implied typical estimator properties with control functions \(\tau\) of varying complexity

In figure 1, the complexity of \(T=\tau(Z)\) on the x-axis increases from \(T=0\) on the extreme left to \(T=Z\) on the extreme right. Moving further to the right on the x-axis means that the control function captures more information in \(Z\), starting with variation in \(Z\) which correlates with the unobserved common confounders \(U\). In the left rectangle, the complexity of \(\tau\) is low. \(T\) does not capture all information in \(Z\) that correlates with \(U\), so even conditional on \(T\) the instruments \(Z\) remain endogenous, and the estimator \(\hat{J}\) is inconsistent. However, as all information in \(Z\) is used for inference, the asymptotic variance of the estimator \(\hat{J}\) will be relatively small. As the complexity of \(\tau\) increases towards the right in the left quadrant, more information about the elements of \(Z\) which correlate with \(U\) is captured in \(T\). Increasing the complexity of \(\tau\) increases the asymptotic variance of \(\hat{J}\) as less information in \(Z\) is used. Importantly, as this information corresponds to variation in \(Z\) that correlates with \(U\), inconsistency is being reduced. In the central rectangle of figure 1, the complexity of \(\tau\) is sufficient for \(Z\) to be exogenous conditional on \(T\). Hence, the estimator \(\hat{J}\) is consistent. However, as \(\tau\) increases in complexity, we use less information in \(Z\) to infer the causal estimand \(J\). Hence, inevitably the asymptotic variance of the estimator \(\hat{J}\) increases. Consequently, the optimal \(\tau\) would be that of minimal complexity, such that \(Z\) is excluded conditional on \(T\). In practice, we do not know but can only estimate \(\mathcal{T}^{\text{valid}}\), so the exact minimum complexity valid \(\tau\) is unknown and estimated with sampling error. However, even if a \(\tau\) is chosen with slightly too little complexity, the resulting inconsistency may still be small. The margin of sufficient complexity is at the border of the left and central rectangle. When a \(\tau\) with slightly too little complexity is chosen, a small degree of inconsistency is incurred, but depending on the sample size possibly outweighed in terms of mean squared error contribution by the associated standard deviation reduction. 
This is an example of a small in-sample bias-variance tradeoff, while we otherwise focus on identification to enable the construction of consistent estimators. As the complexity of \(\tau\) increases, at some point the instruments \(Z\) are no longer relevant for treatment \(A\) conditional on \(T\). Asymptotically, the estimator \(\hat{J}\) no longer exists. This is the case in the right rectangle of figure 1. In the extreme, \(\tau\) is simply an identity function and \(T=Z\). No variation in \(Z\) remains to infer the effect of \(A\) on \(Y\). However, even in less extreme cases where there is some variation left in \(Z\) conditional on \(T\), it may simply be insufficient variation to be relevant for \(A\). ### Specification Test A straightforward way to ensure the sufficient complexity of some \(\tau\) is to test \(W\perp\!\!\!\!\perp Z\mid T\). An alternative to this is a specification test, similar in spirit to specification testing in overidentified IV models. Consider the two control functions \(\tau_{1}\) and \(\tau_{2}\), such that \(T_{1}=\tau_{1}(Z)\) and \(T_{2}=\tau_{2}(Z)\). Without loss of generality, let \(\tau_{1}\) be _less_ complex than \(\tau_{2}\). The null hypothesis is the conditional exogeneity of \(Z\) given \(T_{1}\), \[H_{0}:Z\perp\!\!\!\!\perp Y(a)\mid T_{1},\text{ with alternative }H_{a}:Z\not \perp\!\!\!\!\perp Y(a)\mid T_{1}.\] Let \(\hat{J}_{1}\) and \(\hat{J}_{2}\) be the two causal estimators of estimand \(J\) based on \(\tau_{1}\) and \(\tau_{2}\). Suppose that under both control functions, the instruments \(Z\) remain conditionally relevant for treatment \(A\), so that both estimators \(\hat{J}_{1}\) and \(\hat{J}_{2}\) have some probability limit. We also still assume that conditional on \(U\), the instruments \(Z\) are exogenous. The conditional exogeneity of \(Z\) given \(U\) is assumed, because here we only test for the sufficient complexity of \(\tau\), i.e. \(Z\perp\!\!\!\perp U\mid T\).1 Footnote 1: To test whether some instruments \(Z\) are exogenous conditional on \(U\), we can use a standard specification test for different \(Z\), if \(J\) is overidentified conditional on \(T\). Under the null hypothesis \(H_{0}\), both estimators \(\hat{J}_{1}\) and \(\hat{J}_{2}\) converge to the true causal effect \(J\). However, the asymptotic variance of \(\hat{J}_{2}\) with the more complex control function \(\tau_{2}\) will be larger than that of \(\hat{J}_{1}\), as \(\hat{J}_{2}\) uses less variation in \(Z\) than \(\hat{J}_{1}\). Under the alternative \(H_{a}\), the estimators do not have the same probability limit. If \(\tau_{2}\) still captures enough variation in \(Z\) for the instruments \(Z\) to be conditionally exogenous, \(\hat{J}_{2}\) still converges to \(J\). \(\hat{J}_{1}\) on the other hand will no longer converge to \(J\). Generally, there is no guarantee that \(T_{2}\) still renders \(Z\) conditionally exogenous. In this case, \(\hat{J}_{2}\) converges to some value other than \(J\). However, unless the additional variation that we condition on in \(T_{2}\) compared to \(T_{1}\) is exogenous due to some particularly poor construction of \(\tau_{2}\), \(\hat{J}_{1}\) and \(\hat{J}_{2}\) still have different probability limits. A specification test using this logic is generally possible for the sufficient complexity of a control function \(\tau\). ## 4 Point Identification Without further parametric restrictions on the outcome or first stage model, at most set identification is possible. 
When the outcome model is linearly separable in the observables and unobservables, we show how to point-identify the model part relating to the observables (Newey and Powell, 2003). If instead the first stage is monotone, a control function approach can be used to point-identify average structural functions and thus causal effects with a common support assumption (instead of completeness) (Imbens and Newey, 2009). We construct a control function for the endogenous variation in \(A\) while already keeping the endogenous variation in \(Z\) fixed. ### Linearly separable outcome model An outcome model with linear separability in the treatment and a disturbance is one special case where point identification is possible. With a linearly separated disturbance \(\varepsilon\), it is straightforward to represent the exclusion of instrument \(Z\) as mean-independence conditional on common confounders \(U\). Assumption 4.1 fully describes this setting. **Assumption 4.1** (Linearly separable outcome model).: _There exists some function \(k_{0}\in L_{2}(A)\) such that_ \[Y=Y(A)=k_{0}(A)+\varepsilon, \mathbb{E}\left[\varepsilon|Z,U\right]=\mathbb{E}\left[\varepsilon |U\right]. \tag{4.1}\] The conditional moment describes the mean-independence of instruments \(Z\) conditional on the unobserved common confounders \(U\). **Theorem 4.1** (Identification in linearly separable model).: _Let assumptions 2.1.(1/2b/2c/3) and 4.1 hold. There is a unique \(h\in L_{2}(A,T)\) for which \(\mathbb{E}\left[Y|Z\right]=\mathbb{E}\left[h(A,T)|Z\right]\), and it satisfies \(h(A,T)=k_{0}(A)+\delta_{T}(T)\), where \(\delta_{T}(T)=\mathbb{E}\left[\varepsilon|T\right]\)._ Theorem 4.1 establishes point identification of the function \(k_{0}\) of the effect of the observable treatment \(A\) on outcome \(Y\). While we do not make this explicit, \(k_{0}\) may also be a function of the proxies \(W\) or other observed covariates \(X\). The linear separability in combination with the completeness assumption leads to a straightforward identification in the linearly separable model. Unlike in Tien (2022), identification of an average structural function when there are interactions of the observables and unobservable \(U\) is much more difficult in this model where the proxies \(W\) may also have a direct effect on treatment \(A\). We could have considered other model specifications or versions of completeness to establish identification (D'Haultfoeuille, 2011). For now, we leave this exercise for future work. ### First stage monotonicity If the outcome model is not linearly separable in treatment and disturbance, monotonicity in the first stage reduced form is an alternative assumption to identify average causal (structural) effects (Imbens and Newey, 2009). If the common confounders \(U\) were observed, there would be a simple control function for the endogenous variation in \(A\) due to monotonicity. Assumption 4.2 describes the necessary first stage reduced form monotonicity. **Assumption 4.2** (Monotonicity).: \[A=h(Z,\eta)\] (4.2) 1. \(h(Z,\eta)\) _is strictly monotonic in_ \(\eta\) _with probability 1._ 2. \(\eta\) _is a continuously distributed scalar with a strictly increasing conditional CDF_ \(F_{\eta|U}\) _on the conditional support of_ \(\eta\)_._ 3. \(Z\perp\!\!\!\perp\eta\ |\ U\)_._ Assumption 4.2.1 describes the strict monotonicity of \(A\) in the disturbance \(\eta\). 
This disturbance \(\eta\) is scalar and continuously distributed conditional on the unobservable \(U\), with a strictly increasing conditional CDF according to assumption 4.2.2. Jointly, these two assumptions ensure that for any given \(Z\), any \(A\) is associated with a unique \(\eta\). The unobserved confounders \(U\) may affect \(A\), but only through their effect on \(\eta\). This restriction keeps the model monotonous in the unobservables to ensure point identification. Finally, assumption 4.2.3 requires full independence of instruments \(Z\) and \(\eta\) conditional on the common confounders \(U\). The above setup does not immediately help with identification, because the common confounders \(U\) are always unobserved. In lemma 4.1.1, we establish a few useful facts about the conditional distribution of the scalar disturbance \(\eta\) given \(T\). Notably, this conditional distribution \(F_{\eta|T}\) is also strictly increasing on the conditional support of \(\eta\), and unsurprisingly the instruments \(Z\) are independent from \(\eta\) conditional on \(T\). **Lemma 4.1.1**.: \[F_{\eta|T}\coloneqq\int_{\mathcal{U}}F_{A|Z,U}(A,Z,u)f_{U|T}(u,T)\,\mathrm{d}\mu _{U}(u)\] _is a strictly increasing CDF on the conditional support of \(\eta\), and \(Z\perp\!\!\!\perp\eta\ |\ T\)._ The above lemma 4.1.1 implies that \(F_{\eta|T}(\eta)\) is a one-to-one function of \(\eta\) conditional on \(T\), just like \(F_{\eta|U}(\eta)\) conditional on \(U\). This fact is useful, because it is no longer necessary to condition on the unobservable \(U\) to identify the endogenous variation in \(A\), which is \(\eta\), exactly. Instead, if we can identify \(F_{\eta|T}(\eta)\), \(\eta\) is held fixed as long as \((F_{\eta|T}(\eta),T)\) is held fixed. The remaining difficulty is to identify \(F_{\eta|T}(\eta)\). In this regard, theorem 4.2 states that \(F_{\eta|T}(\eta)\) is equal to the conditional CDF of \(A\) given \(Z\). This conditional CDF is defined as \(V_{T}\) in equation 4.3. **Theorem 4.2**.: _Let_ \[V_{T}\coloneqq F_{A|Z;T}(A,Z). \tag{4.3}\] _Under assumption 4.2, \(V_{T}=F_{\eta|T}(\eta)\), and_ \[A\perp\!\!\!\perp Y(a)\ |\ (V_{T},T),\ \text{for all}\ a\in\mathcal{A}. \tag{4.4}\] Theorem 4.2 states that despite the unobservable common confounder \(U\), there exist the observable control functions \(V_{T}\) and \(T\) conditional on which we retrieve unconfoundedness. Specifically, we retrieve unconfoundedness because all variation in treatment \(A\) stems from instruments \(Z\) once we condition on \(V_{T}\) and \(T\). Fortunately, conditional on \(T\), the instruments \(Z\) are fully exogenous. So far, our arguments have only been with respect to exogeneity, not yet relevance. To describe relevance, we use a common support assumption 4.3, with focus on a causal effect of interest \[J\coloneqq\int_{\mathcal{A}}Y(a)\pi(a)\,\mathrm{d}\mu_{A}(a).\] This common support assumption requires the sufficient relevance of instruments \(Z\) for treatment \(A\), and sufficient variation in \(Z\), both conditional on \(T\). In slightly different words, after holding all variation in \(Z\) associated with \(U\) fixed through \(T\), the variation in \(Z\) must still be sufficiently rich and relevant for \(A\). 
**Assumption 4.3** (Common Support).: _For all \(a\in\mathcal{A}\), where the contrast function is non-zero (\(\pi(a)\neq 0\)), the support of \((V_{T},T)\) equals the support of \((V_{T},T)\) conditional on \(A\)._ With the common support assumption 4.3, average causal quantities \(J\) in our model are identified under monotonicity (assumption 4.2). In theorem 4.3, we explicitly replace the completeness assumption in assumption 2.1.2c by the common support assumption 4.3, which is the correct relevance requirement with a control function. **Theorem 4.3** (Average causal quantity identification).: _Suppose assumption 2.1.(1/2a/2b/3) [relaxed IV model], 4.2 [monotonicity], and 4.3 [common support] hold. Then, any \(J\coloneqq\int_{\mathcal{A}}Y(a)\pi(a)\,\mathrm{d}\mu_{A}(a)\) is identified as_ \[J=\int_{\mathcal{V}_{T},\mathcal{T}}\int_{\mathcal{A}}\mathbb{E}\left[Y|A=a,( V_{T},T)=(v_{T},t)\right]\pi(a)\,\mathrm{d}\mu_{A}(a)\,\mathrm{d}F_{V_{T},T}(v_{T},t).\] We simply integrate out the control functions \((V_{T},T)\) without dependence on treatment \(A\) to obtain the causal quantity of interest \(J\). Typically, \(J\) will be some form of average treatment effect. Other functions of interest than the above (weighted) averages of potential outcomes, e.g. quantile structural functions, are also identified as a consequence of theorem 4.2, but require a corresponding common support assumption which will differ from assumption 4.3. ## 5 Linear Model In this section, we explain identification in the common confounding model in linear terms. Apart from the illustrative purpose of linear models, their tractability and ease of use make them attractive. For the common confounding IV approach, the linear model provides useful intuition regarding the relevance and exclusion assumptions. First, we describe the model assumptions in linear form. The outcome variable \(Y\in\mathbb{R}\) is one-dimensional. For ease of notation, we let \(A\in\mathbb{R}\) be one-dimensional too. All other variables \(X\) are of some general dimensions \(d_{X}\), i.e. \(Z\in\mathbb{R}^{d_{Z}}\), \(W\in\mathbb{R}^{d_{W}}\), and \(U\in\mathbb{R}^{d_{U}}\). As previously, instruments are called \(Z\), proxies \(W\), and the unobserved common confounders \(U\). \[Y =A\beta+U\gamma_{Y}+W\upsilon_{Y}+\varepsilon_{Y}, \mathbb{E}\left[\varepsilon_{Y}|Z\right] =0 \tag{5.1}\] \[A =Z\zeta+U\gamma_{A}+W\upsilon_{A}+\varepsilon_{A}, \mathbb{E}\left[\varepsilon_{A}|Z\right] =0 \tag{5.2}\] Equation 5.1 simply states that \(Y\) is a linear function of \(A\), \(U\), \(W\) and a disturbance \(\varepsilon_{Y}\). The disturbances \(\varepsilon_{Y}\) are mean-independent from the instruments \(Z\). With the conditional moment equation, the model parameters would be identified under a conditional relevance requirement of \(Z\) for \(A\) if \(U\) could be observed. The parameter of interest in this model is \(\beta\), the effect of treatment \(A\) on \(Y\). In equation 5.2, \(A\) is a linear function of \(Z\), \(U\), \(W\), and a disturbance \(\varepsilon_{A}\). The \(d_{Z}\)-dimensional vector of parameters \(\zeta\) describes the marginal effect of \(Z\) on \(A\). Equation 5.2 for \(A\) describes the model's first stage. The conditional relevance requirement of \(Z\) for \(A\) would simply be \(\zeta\neq\mathbf{0}\), if \(U\) were observed. With observable \(U\), the model would be sufficiently described at this point to point identify \(\beta\). As the common confounders \(U\) are never observed, the model requires further assumptions. 
\[Z =U\gamma_{Z}+\varepsilon_{Z}, \mathbb{E}\left[\varepsilon_{Z}|U,W\right] =0 \tag{5.3}\] \[W =U\gamma_{W}+\varepsilon_{W}, \mathbb{E}\left[\varepsilon_{W}|U,Z\right] =0 \tag{5.4}\] Equations 5.3 and 5.4 imply that all correlation between \(Z\) and \(W\) stems from the unobserved common confounders \(U\). There is no direct effect from either on the other. If there were, we could model this by increasing the dimension of \(U\) by the corresponding element of \(Z\) or \(W\) until \(Z\) and \(W\) are uncorrelated conditional on \(U\). Assumption 5.1 describes the rank condition for the richness of \(W\) with respect to \(U\). **Assumption 5.1** (Rank conditions for \(\gamma_{W}\)).: \[\text{rank}(\gamma_{W})\geq d_{U}\] (5.5) This first rank condition 5.5 implies \(d_{W}\geq d_{U}\). For simplicity, suppose that \(\text{rank}(\gamma_{W})=d_{W}=d_{U}\). Then, we can simply invert \(\gamma_{W}\) to write \[\mathbb{E}\left[U|Z\right]=\mathbb{E}\left[W|Z\right]\gamma_{W}^{-1}.\] The expected value of \(U\) given \(Z\) is proportional to the expected value of \(W\) given \(Z\), as long as the rank condition 5.5 for \(\gamma_{W}\) holds. With this result, we can write both \(\mathbb{E}\left[Y|Z\right]\) and \(\mathbb{E}\left[A|Z\right]\) as functions of the random variables \(Z\), \(\mathbb{E}\left[W|Z\right]\), and model parameters: \[\mathbb{E}\left[Y|Z\right] =Z\zeta\beta+\mathbb{E}\left[W|Z\right]\left(\gamma_{W}^{-1}\gamma_{A}\beta+\gamma_{W}^{-1}\gamma_{Y}+\upsilon_{A}\beta+\upsilon_{Y}\right),\] \[\mathbb{E}\left[A|Z\right] =Z\zeta+\mathbb{E}\left[W|Z\right]\left(\gamma_{W}^{-1}\gamma_{A}+\upsilon_{A}\right).\] From the above derivations, it follows that the parameter of interest \(\beta\) can be written in the following ratio form: \[\beta=\frac{\mathbb{E}\left[\left(Z\zeta\right)Y\middle|\mathbb{E}\left[W|Z\right]\right]}{\mathbb{E}\left[\left(Z\zeta\right)A\middle|\mathbb{E}\left[W|Z\right]\right]}. \tag{5.6}\] As we found previously, the instruments \(Z\) are endogenous only due to the common confounders \(U\). In the linear model, all endogeneity in \(Z\) stems from changes in \(\mathbb{E}\left[U|Z\right]\), as \(Z\) varies. As discussed previously, \(\mathbb{E}\left[U|Z\right]\) is held fixed with \(\mathbb{E}\left[W|Z\right]\) as long as \(W\) is sufficiently relevant for \(U\) as defined via the rank condition 5.5 in assumption 5.1. With the conditional exclusion of \(Z\) established, the focus shifts to the conditional relevance of \(Z\) for \(A\). \(\mathbb{E}\left[U|Z\right]\) is \(d_{U}\)-dimensional, and thus so is \(\mathbb{E}\left[W|Z\right]\). Given that the variation in \(\mathbb{E}\left[W|Z\right]\) has dimension \(d_{U}\), there is spare variation in \(Z\) to infer the causal effect \(J\) of \(A\) on \(Y\), as long as \(d_{Z}>d_{U}\). After keeping the \(d_{U}\)-dimensional variation in \(\mathbb{E}\left[W|Z\right]\) fixed, the expected predicted values of treatment \(A\) given instruments \(Z\), \(\mathbb{E}\left[Z\zeta\,\middle|\,\mathbb{E}\left[W|Z\right]\right]\), must be non-degenerate. The rank condition for any \(d_{A}\) is described in assumption 5.2. **Assumption 5.2** (Rank conditions for \(Z\zeta\)).: \[\text{rank}\left(\mathbb{E}\left[\left(Z\zeta\right)A\middle|\mathbb{E}\left[W|Z\right]\right]\right)=d_{A}\] (5.7) Usually, this slightly involved rank condition can be understood more simply as \[\text{rank}(\zeta)-d_{U}\geq d_{A}.\] \(d_{U}\) dimensions of variation in \(Z\) are lost by conditioning on \(\mathbb{E}\left[W|Z\right]\). 
The remaining variation in \(Z\) after this conditioning step must be sufficiently relevant for \(A\). With its transparency, the linear model sheds light on the assumptions in IV with common confounders. The same intuition for relevance and exclusion assumptions in the linear model carries on to nonparametric models - specifically the idea of a pre-IV control function. In the linear model, instrument variation is used conditional on the \(d_{U}\)-dimensional control function \(\mathbb{E}\left[W|Z\right]\), while in nonlinear settings the control function is a general \(\tau\in L_{2}(Z)\), which serves the same purpose: To render the instruments \(Z\) conditionally independent from the proxies \(W\) and hence from the unobserved common confounders \(U\). Exclusion of the instruments \(Z\) conditional on this control function is the desired consequence. ## 6 Practical Guide Straightforward testing and discussion of model assumptions is key in any application. In this section, we provide a practical guide to identification with this approach. Each of the four steps we describe has its own subsection. As in standard IV, the relevance assumptions generally remain testable. The conditional exclusion assumption is not testable up to a specification test. ### Find \(T\) and test relevance of \(Z,W\) for \(U\) In this step, we test the relevance of \(W\) for \(U\) (assumption 2.1.3), as well as the relevance of \(Z\) for \(U\). The latter is a necessary condition for the relevance of \(Z\) for \(A\) conditional on \(T\) (assumption 2.1.2c), which is tested explicitly in subsection 6.2. First, we find some \(T=\tau(Z)\) such that \(Z\perp\!\!\!\perp W\mid T\) is satisfied. As long as some \(T=\tau(Z)\) leads to the conditional independence of \(Z\) and \(W\), this control function \(\tau\in L_{2}(Z)\) also renders \(Z\) and \(U\) conditionally independent. To test for the sufficient relevance of \(Z\) and \(W\) with respect to \(U\), we can use \(T\). A sufficient condition for relevance of \(Z\) and \(W\) for \(U\) is that both \(Z\) and \(W\) contain spare information conditional on \(T\). This motivates the choice of a valid \(\tau\in{\cal T}^{\rm valid}\), which captures the least information about \(Z\) while ensuring the conditional exclusion of \(Z\) conditional on \(T\), as discussed in section 3. To simplify the argument, suppose that our model is linear and the dimensions for \((Z,T,W,U)\) are \((d_{Z},d_{T},d_{W},d_{U})\). Often, relevance of \(Z\) and \(W\) for \(U\) implies \(\min\{d_{Z},d_{W}\}\geq d_{U}\). The minimum dimension of \(T\) to ensure conditional independence of \(Z\) and \(W\) is \(d_{U}\), so we know that any \(T=\tau(Z)\) such that \(Z\perp\!\!\!\perp W\mid T\) must satisfy \(d_{T}\geq d_{U}\). If we can reject a test with the null hypothesis \[H_{0}:\min\{d_{Z},d_{W}\}\leq d_{T},\mbox{ and alternative }H_{a}:\min\{d_{Z},d_{W}\}>d_{T},\] this implies \(\min\{d_{Z},d_{W}\}>d_{U}\). Thus, there is a test whether \(Z\) and \(W\) are relevant for \(U\). However, \(\min\{d_{Z},d_{W}\}>d_{T}\) is a sufficient, not a necessary condition for the relevance of \(Z\) and \(W\) for \(U\). \(Z\) and \(W\) are still relevant for \(U\) when \(\min\{d_{Z},d_{W}\}=d_{U}\). Unfortunately, this hypothesis is not testable with unobserved \(U\). So, how should an applied researcher proceed when \(\min\{d_{Z},d_{W}\}=d_{T}\) (which could mean \(\min\{d_{Z},d_{W}\}=d_{U}\))? Here, we need to distinguish \(d_{Z}=d_{T}\) from \(d_{W}=d_{T}\). 
\(d_{Z}=d_{T}\): When \(T\) contains as much information as \(Z\), there is no point in moving to step 2. The instruments \(Z\) contain no variation conditional on \(T\), so \(Z\) cannot be relevant for treatment \(A\) conditional on \(T\). \(d_{W}=d_{T}\): We know for sure that \(d_{W}\leq d_{U}\). If \(d_{W}=d_{U}\), the proxies \(W\) are exactly relevant for \(U\) without spare information. If we are willing to assume \(d_{W}=d_{U}\), we could move forward to step 2. The variation in \(U\) associated with \(Z\) would still be held fixed with \(T\) in this case. Yet, \(d_{W}=d_{U}\) is not testable. It may well be that \(d_{W}<d_{U}\). Then, the variation in \(U\) associated with \(Z\) is _not_ held fixed with \(T\). We can never test the completeness of \(W\) for \(U\) when \(d_{W}=d_{T}\). Accordingly, we do not generally suggest to move to step 2 by relying on the assumption \(d_{W}=d_{U}\), when \(d_{W}=d_{T}\) is observed. ### Test relevance of \(Z\) for \(A\) given \(T\) In step 1, the relevance of \(Z,W\) for \(U\) was confirmed. Now, we test the relevance of the instruments \(Z\) for treatment \(A\) conditional on \(T\) (assumption 2.1.2c). Depending on the additional model assumptions, this is either a test of completeness (2.1.2c), or common support (4.3). As any \(T=\tau(Z)\) is simply the control function \(\tau\in L_{2}(Z)\) applied to instruments \(Z\), the test of relevance of \(Z\) for \(A\) given \(T\) is straightforward for any given \(\tau\). It is as simple as a test of instrument relevance with observed confounders. Let us return to a linear model. If all components of the instrument vector \(Z\) are correlated with both \(U\) and \(A\), the conditional relevance requirement simplifies to \((d_{Z}-d_{T})\geq d_{A}\). After conditioning on all variation in \(Z\) which correlates with \(U\) by holding \(T\) fixed, \(d_{Z}-d_{T}\) dimensions of instruments \(Z\) remain to infer the causal effect of treatment \(A\) on outcome \(Y\). The remaining instrument variation of dimension \(d_{Z}-d_{T}\) is relevant for treatment \(A\) only if the treatment's dimension \(d_{A}\) is smaller than or equal to \(d_{Z}-d_{T}\). If \(Z\) is found to be relevant for \(A\) given \(T\), it implies that \(Z\) is relevant for both \(U\) and \(A\). Interpreting \(U\) as any source of unobserved variation which associates instruments \(Z\) and proxies \(W\), \(Z\) would have no variation conditional on \(T\) if they were not sufficiently relevant for \(U\) (\(d_{Z}<d_{U}\)). So, if \(Z\) is relevant for \(A\) given \(T\), it implies that \(Z\) already had to be relevant for \(U\). ### Exclusion of \(Z\) conditional on \(U\) In step 1 and 2 we tested all relevance assumptions in this model. The conditional exclusion assumption for \(Z\) conditional on \(U\) remains untestable. To be precise, \(Y\perp\!\!\!\perp Z\mid(A,U)\) (assumption 2.1.2a) can only be justified on theoretic grounds, not observed data. In order to justify conditional exclusion theoretically, \(T\) can be used to understand the unobserved common confounders \(U\). As \(T\) captures all variation in \(Z\) associated with \(U\), \(T\) immediately explains \(U\) in terms of its association with \(Z\) and \(W\). From the association of \(T\) and \(W\), we can interpret \(U\) even better. For example: If subject-specific pre-college GPA measures are used as instruments \(Z\), and \(T\) turns out to capture average GPA, then \(U\) could be interpreted as general ability. 
Suppose \(W\) contains dummies capturing whether someone has engaged in risky behaviour, including drugs and illegal activity, while in high school. From theory and empirical evidence we would expect high ability to lead to less risky behaviour. Thus, if \(T\) is an average GPA, it would be expected to be negatively correlated with \(W\). Once we have used \(T\) to understand the variation reflected by the unobserved confounders \(U\), we can construct a theoretical argument with respect to the conditional exclusion of instruments \(Z\). In our example, the common confounder \(U\) reflected general ability. The conditional exclusion assumption reduces to whether conditional on general ability \(U\), the subject-specific pre-college GPA measures \(Z\) are excluded. This argument clearly depends on the respective choice of treatment \(A\) and outcome \(Y\). As in standard IV, specification tests, which can be revealing about the exclusion of \(Z\), are possible if the model is overidentified. If a specification test suggests that different subsets of instruments \(Z\) conditional on \(T\) result in estimators with different probability limits, we reject that all instruments \(Z\) are excluded conditional on \(T\) (unless the estimand is the local average treatment effect which can vary across subpopulations). Necessary for any such test is model overidentification for the causal effect \(J\) conditional on \(T\). In a simple linear model, overidentification would e.g. mean \((d_{Z}-d_{T})>d_{A}\). After keeping \(d_{T}\) dimensions of \(Z\) fixed, the instruments must still contain overidentifying information. Ultimately, just as in standard IV, the conditional exclusion assumption for \(Z\) remains largely untestable. Therefore, it is crucial to better understand \(U\) from the control variable \(T\). ### Estimation In the final, fourth step, use \(Z\) to instrument for treatment \(A\) conditional on control variable \(T\), to identify (and estimate) the structural function or average causal effect \(J\) of \(A\) on \(Y\). Having established that all necessary relevance and exclusion requirements hold, an estimator \(\hat{J}\) can be formulated for the causal effect of interest \(J\). The form of this estimator depends on the type of parametric model assumptions made. ## 7 Example: Linear Returns to Education Interested in the returns to education, we use data from the National Longitudinal Survey of Youth 1997 (Bureau of Labor Statistics, 2019). The variables of interest are introduced below. * Household net worth at 35: continuous variable, in USD. * BA degree: 1 if individual \(i\) obtained a BA degree, 0 otherwise. * Pre-college test results: subject-specific and overall GPA; ASVAB percentile. * Risky behaviour dummies: whether \(i\) drank, smoked, or engaged in other behaviours considered risky by the age of 17. * Ability: Unmeasured intellectual capacity. * Other biases: Selection on unobservables into obtaining a BA degree (at least in part a result of optimising individuals). * Covariates: sex, college GPA, parental education/net worth, siblings, region, etc. A review of the vast literature on returns to education is far beyond the scope of this paper (Psacharopoulos and Patrinos, 2018). Instead, we focus on estimation of a very specific return to education: The causal effect of obtaining a bachelor's degree \(A\) on household net worth at 35. 
Even in a simple linear model like \[Y=\alpha_{Y}+A\beta+U\gamma_{Y}+W\upsilon_{Y}+X\eta_{Y}+\varepsilon_{Y}, \tag{7.1}\] two distinct potential sources of confounding are easily identified via the unobservable components of 7.1: * Ability \(U\) likely has a positive effect on household net worth \(Y\), by means of salary and non-salary based net worth accumulation (Griliches, 1977). The vector-valued linear parameter \(\gamma_{Y}\) captures this positive linear effect of ability on net worth. * The disturbance \(\varepsilon_{Y}\) captures all variation in \(Y\), which is jointly unexplained by \((A,U,W,X)\). This can be understood as individual-specific, heterogeneous characteristics, and chance. Any correlation of \(A\) with either of these terms leads to biased estimates of \(\beta\). How does obtaining a BA degree \(A\) correlate with ability \(U\) and a general disturbance \(\varepsilon_{Y}\)? In this identification problem, selection bias is inherent. At least to some degree, individuals choose whether to obtain a BA degree as a result of an optimisation problem of expected utility subject to an information set \(\mathcal{I}\): \[A=\operatorname*{arg\,max}_{a\in\{0,1\}}\,\left(\mathbb{E}\left[u(Y(a))-c(a)|A=a,\mathcal{I}\right]\right), \tag{7.2}\] where \(u:\mathcal{Y}\to\mathbb{R}\) is a utility function for net worth with diminishing returns, and \(c:\{0,1\}\to\mathbb{R}\) a cost function for obtaining a BA degree \(A\). Both utility and cost function can be individual-specific. For ease of illustration, suppose individuals are perfectly informed with \(\mathcal{I}=(A,U,W,X,\varepsilon_{Y})\). Then, each individual chooses \(a\in\{0,1\}\) to maximise the utility associated with potential outcome \(Y(a)\) minus cost \(c(a)\). In this case, there is an easy decision rule to determine optimal \(A\): \[A =\operatorname*{arg\,max}_{a\in\{0,1\}}u(Y(a))-c(a),\] \[=\mathds{1}\left[u(Y(1))-u(Y(0))\ >\ c(1)-c(0)\right].\] * Ceteris paribus, an increase in ability \(U\) equally increases \(Y(0)\) and \(Y(1)\) according to model 7.1. Due to diminishing returns in the utility function \(u\), \(u(Y(1))-u(Y(0))\) decreases. However, also \(c(1)\) decreases as higher-ability individuals experience a lower utility cost of obtaining a BA degree. The overall effect on the choice of \(A\) is ambiguous and depends on the utility functions of the individual. * The effect of \(\varepsilon_{Y}\) on the choice of \(A\), on the other hand, is unambiguous. An increase in \(\varepsilon_{Y}\) reduces \(u(Y(1))-u(Y(0))\) due to the diminishing returns of \(u\). Cost \(c\), however, is unaffected by \(\varepsilon_{Y}\). Hence, \(A\) inevitably negatively correlates with \(\varepsilon_{Y}\). This logic regarding negative selection bias when treatment is chosen by utility-maximising individuals is by no means novel (Heckman et al., 2006), or unique to the returns to education identification problem. Negative selection bias is inherent to the treatment variable when it is at least in part the result of optimising behaviour by utility-maximising heterogeneous individuals. Novel in our approach is the ability to explicitly account for certain biases, in this case ability bias, when proxies for them exist. Finding excluded instruments can be much more straightforward when pertinent biases, like ability bias, have already been taken care of. In our identification approach, instruments \(Z\) are pre-college test results. These results are strongly correlated with ability \(U\). 
Yet, conditional on ability, and some other covariates, pre-college test results contain random variation, which is excluded with respect to household net worth \(Y\) (at age 35). Concurrently, even random variation in pre-college test results is a strong predictor of obtaining a BA degree. Hence, instrument relevance likely holds. The proxies \(W\) are dummies for whether an individual engaged in risky behaviours at high school age. Among others, the risky behaviour dummies include drinking, smoking (marijuana), selling drugs and stealing. Theory and empirical evidence suggest the correlation of low intelligence and risky behaviour (Loeber et al., 2012). Therefore, ability \(U\) both causes instruments \(Z\) and proxies \(W\) in our data. Ability \(U\) is the common confounder in this causal question. Clearly, additional covariates are necessary to justify instrument exclusion. These include sex, college GPA, parental education and net worth, the number of siblings, region of residence, etc. ### Assumptions The linear equivalent to the general common confounding IV model in assumption 2.1 is described as assumption 7.1. Again, for ease of notation assume \(d_{A}=1\), just as in this returns to education identification problem. **Assumption 7.1** (Linear IV Model with Common Confounding).: 1. _Linear outcome model projection:_ \[Y=\alpha_{Y}+A\beta+U\gamma_{Y}+W\upsilon_{Y}+X\eta_{Y}+\varepsilon_{Y}\] (7.3) 2. _Instruments_ 1. _Exclusion:_ \(\mathbb{E}\left[\varepsilon_{Y}(Z,U,W,X)\right]=\mathbf{0}\)_._ 2. _Relevance: For the linear projection of_ \(A\) _on_ \((Z,U,W,X)\)_,_ \[A=\alpha_{A}+Z\zeta+U\gamma_{A}+W\upsilon_{A}+X\eta_{A}+ \varepsilon_{A},\ \mathbb{E}\left[\varepsilon_{A}(Z,U,W,Z)\right]=\mathbf{0}\] (7.4) \[\text{rank}\left(\mathbb{E}\left[\left(Z\zeta\right)A\ \right|\ T,X\right]\right)=d_{A}.\] (7.5) 3. _Proxies_ 1. _Exclusion: For the linear projection of_ \(W\) _on_ \((Z,U,X)\) _and_ \((Z,X)\)_,_ \[W=\alpha_{W}+U\gamma_{W}+X\eta_{W}+\varepsilon_{W}, \mathbb{E}\left[\varepsilon_{W}(Z,U,X)\right]=\mathbf{0},\] (7.6) \[W=\tilde{\alpha}_{W}+Z\tilde{\gamma}_{W}+X\tilde{\eta}_{W}+ \tilde{\varepsilon}_{W}, \mathbb{E}\left[\tilde{\varepsilon}_{W}(Z,X)\right]=\mathbf{0}.\] (7.7) _with_ \(T\coloneqq Z\tilde{\gamma}_{W}+X\tilde{\eta}_{W}\)_._ _._ 2. _Relevance:_ \(\text{rank}(\gamma_{W})\geq d_{U}\)__ \[.\] (7.8) To simplify notation, let \(Z_{|X}\) be the true residual of a projection of \(Z\) onto \(X\). The linearity of the outcome model implies that the covariance \(\text{Cov}\left[(Z_{|X}\zeta),Y\right]\) is \[\text{Cov}\left[(Z_{|X}\zeta),Y\right]=\text{Cov}\left[(Z_{|X}\zeta),A\right] \beta+\text{Cov}\left[(Z_{|X}\zeta),U\right]\gamma_{Y}+\text{Cov}\left[(Z_{|X} \zeta),W\right]\upsilon_{Y}.\] The above expression uses the uncorrelatedness of \(Z\) and \(\varepsilon_{Y}\) in assumption 7.1.2a. If it were not for the linear confounding from the unobserved common confounders \(U\) and proxies \(W\), \(Z\) would be excluded. Next, we demonstrate how to use the proxies \(W\) to keep \(\text{Cov}\left[(Z\zeta)U\right]\) fixed. \[\text{Cov}\left[(Z_{|X}\zeta),U\right]=\text{Cov}\left[(Z_{|X}\zeta),W\right] \gamma_{W}^{\intercal}\left(\gamma_{W}\gamma_{W}^{\intercal}\right)^{-1}\] The inverse \(\left(\gamma_{W}\gamma_{W}^{\intercal}\right)^{-1}\) exists under assumption 7.1.3 that the rank of \(\gamma_{W}\) is at least \(d_{U}\). 
Then, slightly rewriting \(\text{Cov}\left[(Z\zeta)Y\right]\) as \[\text{Cov}\left[(Z_{|X}\zeta),Y\right] =\text{Var}\left[Z_{|X}\zeta\right]\beta+\text{Cov}\left[(Z_{|X} \zeta),W\right]\tilde{\upsilon}_{W},\] \[\tilde{\upsilon}_{Y} \coloneqq\upsilon_{Y}+\upsilon_{A}\beta+\gamma_{W}^{\intercal} \left(\gamma_{W}\gamma_{W}^{\intercal}\right)^{-1}\left(\gamma_{Y}+\gamma_{A} \beta\right),\] implies that any endogeneity of residualised instruments \(Z_{|X}\) is controlled for by conditioning on \(Z\tilde{\gamma}_{W}\) from the linear projection 7.7. To be precise, \[\text{Cov}\left[(Z_{|X}\zeta),Y|Z\tilde{\gamma}_{W}\right]=\text{Var}\left[Z_{ |X}\zeta|Z\tilde{\gamma}_{W}\right]\beta+\underbrace{\text{Cov}\left[(Z_{|X} \zeta)W|Z\tilde{\gamma}_{W}\right]}_{=0}\tilde{\upsilon}_{Y}.\] The covariance of the first stage can be rewritten as \[\text{Cov}\left[(Z_{|X}\zeta),A|Z\tilde{\gamma}_{W}\right]=\text{Var}\left[Z _{|X}\zeta|Z\tilde{\gamma}_{W}\right]+\underbrace{\text{Cov}\left[(Z_{|X} \zeta)W|Z\tilde{\gamma}_{W}\right]}_{=0}\tilde{\upsilon}_{A}.\] Using both of these results, and one-dimensional treatment \(A\) to simplify notation, a simple ratio form for the linear effect of \(A\) on outcome \(Y\) is \[\beta=\frac{\mbox{Cov}\left[\left(Z_{|X}\zeta\right)Y\mid Z\tilde{\gamma}_{W} \right]}{\mbox{Cov}\left[\left(Z_{|X}\zeta\right)A\mid Z\tilde{\gamma}_{W} \right]}=\frac{\mbox{Cov}\left[\left(Z\zeta\right)Y\mid T,X\right]}{\mbox{Cov} \left[\left(Z\zeta\right)A\mid T,X\right]},\mbox{ with }T=Z\tilde{\gamma}_{W}. \tag{7.9}\] Hence, the estimator differs from standard IV based estimation only by also holding a linear prediction \(T\) of \(W\) fixed as the partial predicted values \(Z\zeta\) for \(A\) change. Thus, the relevance requirement 7.1.2b for the instruments \(Z\) is conditional on \(T\) and \(X\). \(T\) can be represented by a \(d_{U}\)-dimensional linear function of \(Z\), \(\mathbb{E}\left[U|Z,X\right]\), multiplied by \(\gamma_{W}\). Hence, a simpler way to understand the relevance requirement 7.1.2b is as \[\mbox{rank}(\zeta)\geq(d_{U}+d_{A}) \tag{7.10}\] A total of \(d_{U}\) dimensions of variation in \(Z\) are typically needed to account for the \(d_{U}\)-dimensional confounding effect of \(U\) via \(\mathbb{E}\left[U|Z,X\right]\), while the remaining variation in \(Z\) still needs to be relevant for \(A\). Other than in trivial cases2, equation 7.10 describes this relevance requirement satisfactorily as a rank condition on \(\zeta\), the partial linear projection effect of \(Z\) on \(A\) conditional on \((U,W,X)\). Footnote 2: e.g. when \(Z\) contains perfectly collinear variation conditional on \((U,W,X)\). ### Find \(T\) and test relevance of \(Z\), \(W\) for \(U\) A valid control function is the linear prediction \(T=Z\tilde{\gamma}_{W}\) under assumption 7.1.3, meaning that conditional on \((T,X)\), instruments \(Z\) are still relevant for \(A\). However, its OLS estimate \(T=Z\hat{\tilde{\gamma}}_{W}\) generally is not a valid control function, because \(T\) and \(Z\) are perfectly correlated due to sampling variation, unless \(d_{Z}>d_{W}\). However, even when \(d_{Z}>d_{W}\), the true \(\tilde{\gamma}_{W}\) will have rank \(d_{U}\leq d_{W}\), while its OLS estimate \(\hat{\tilde{\gamma}}_{W}\) always has possibly larger than necessary rank \(d_{W}\) due to sampling variation. Ultimately, the estimate \(\hat{\tilde{\gamma}}_{W}\) should at best have exactly rank \(d_{U}\). A test is needed for the rank \(r_{0}\) of matrix \(\tilde{\gamma}_{W}\). 
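To make the ratio form 7.9 concrete before turning to the rank test, the following self-contained simulation is a minimal sketch (ours, not the authors' code; all data-generating values, dimensions and helper names are hypothetical) of a linear model with a one-dimensional common confounder. It compares plain IV, which is biased because \(Z\) correlates with \(U\), with the conditional ratio that additionally holds \(T\) fixed; the rank-\(d_{U}\) truncation used to build \(T\) anticipates equation 7.14 below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_Z, d_W, d_U, beta = 100_000, 5, 4, 1, 2.0      # beta: true effect of A on Y

# One-dimensional common confounder U drives both instruments Z and proxies W.
U = rng.normal(size=(n, d_U))
Z = U @ rng.normal(size=(d_U, d_Z)) + rng.normal(size=(n, d_Z))
W = U @ rng.normal(size=(d_U, d_W)) + rng.normal(size=(n, d_W))
A = Z @ rng.normal(size=d_Z) + 1.5 * U[:, 0] + rng.normal(size=n)
Y = beta * A + 2.0 * U[:, 0] + W @ rng.normal(size=d_W) + rng.normal(size=n)

def coef(X, y):
    """OLS coefficients of y on X (data are demeaned below, so no intercept needed)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z, W, A, Y = (v - v.mean(axis=0) for v in (Z, W, A, Y))

# Projection coefficient of W on Z and its rank-d_U truncation (cf. equation 7.14).
gamma_tilde_W = coef(Z, W)                            # d_Z x d_W
P, S, Qt = np.linalg.svd(gamma_tilde_W)
T = (Z @ P[:, :d_U]) * S[:d_U]                        # control variable, n x d_U

zeta = coef(Z, A)                                     # first-stage coefficients
index = Z @ zeta                                      # the instrument index Z * zeta

def conditional_ratio(controls):
    """Empirical version of (7.9): Cov[(Z zeta), Y | controls] / Cov[(Z zeta), A | controls]."""
    resid = (lambda v: v - controls @ coef(controls, v)) if controls.size else (lambda v: v)
    i, y, a = resid(index), resid(Y), resid(A)
    return (i @ y) / (i @ a)

print("plain IV (no control):", conditional_ratio(np.empty((n, 0))))  # biased: Z correlates with U
print("ICC ratio, eq. (7.9) :", conditional_ratio(T))                 # approximately beta = 2.0
```

With these hypothetical parameters the gap between the two printed numbers illustrates the ability-type bias that conditioning on \(T\) is meant to remove.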
If \(\mathbb{E}\left[U|Z,X\right]=Z\gamma_{Z}+X\gamma_{X}\), then \(\tilde{\gamma}_{W}=\gamma_{Z}\gamma_{W}\). Sufficient for the rank condition in assumption 7.1.3 is \(r_{0}<\min\{d_{Z},d_{W}\}\). This condition means that an unobservable variable of smaller dimension than both \(W\) and \(Z\) can explain all correlation between \(W\) and \(Z\) conditional on \(X\). This unobserved variable is the common confounder \(U\). By the definition of \(U\) as the (minimum information) unobserved variable which renders \(W\) and \(Z\) mean-independent, \(\gamma_{Z}\) has \(d_{U}\leq d_{Z}\) linearly independent rows (\(\text{rank}(\gamma_{Z})=d_{U}\)). As \(\gamma_{W}\) has dimensions \(d_{U}\times d_{W}\) and \(d_{U}\leq d_{W}\), \(\text{rank}(\gamma_{W})\leq d_{U}\) and thus \(r_{0}=\text{rank}(\gamma_{Z}\gamma_{W})=\text{rank}(\gamma_{W})\). While \(r_{0}<d_{W}\) suffices to confirm the relevance of \(W\) for \(U\) in assumption 7.1.3, \(r_{0}<d_{Z}\) is necessary for \(Z\) to be relevant for treatment \(A\) in assumption 7.1.2b. A suitable test for some \(r<\min\{d_{Z},d_{W}\}\) has null hypothesis \[H_{0}:r_{0}\leq r,\text{ and alternative }H_{a}:r_{0}>r. \tag{7.11}\] With the OLS estimator \(\hat{\tilde{\gamma}}_{W}\), we apply a bootstrap-based test for its rank. First, write the singular value decomposition as \[\tilde{\gamma}_{W}=\underset{d_{Z}\times d_{Z}}{P_{0}}\;\underset{d_{Z}\times d_{W}}{\Pi_{0}}\;\underset{d_{W}\times d_{W}}{Q_{0}^{\intercal}}. \tag{7.12}\] Then, let \(\phi_{r}(A)\coloneqq\sum_{j=r+1}^{m_{A}}\pi_{j}^{2}(A)\) be the sum of squared singular values of \(A\) from the \((r+1)\)-th largest to the smallest singular value, which is the \(m_{A}\)-th singular value, where \(m_{A}\) is the minimum across \(A\)'s number of rows and columns. Then, an equivalent test to 7.11 is a test with null hypothesis \[H_{0}:\phi_{r}\left(\tilde{\gamma}_{W}\right)=0,\text{ and alternative }H_{a}:\phi_{r}\left(\tilde{\gamma}_{W}\right)>0. \tag{7.13}\] The bootstrap procedure is as follows: 1. For each binary proxy \(W_{j}\in W\), calculate the probability \(p_{j}\coloneqq\Pr\left(W_{j}=1\right)\) under \(H_{0}\) as \(p_{j,0}=\text{Logit}\left(\left(ZP_{0,r}\Pi_{0,r}Q_{0,r,j}^{\intercal}+X\tilde{\eta}_{W,j}\right)\beta_{0}+\alpha_{0}\right)\), where 1. \(P_{0,r}\) corresponds to the first \(r\) columns of \(P_{0}\), 2. \(Q_{0,r,j}\) corresponds to the first \(r\) entries of the \(j\)-th row of \(Q_{0}\), 3. \(\Pi_{0,r}\) corresponds to the \(r\times r\) matrix of the first \(r\) rows and columns of \(\Pi_{0}\), 4. \(\tilde{\eta}_{W,j}\) corresponds to the \(j\)-th column of \(\tilde{\eta}_{W}\), 5. \(\beta_{0}\) and \(\alpha_{0}\) are univariate coefficients, which need to be estimated. 2. Draw 1000 new bootstrap samples \(b\in\mathcal{B}\) of binary proxies as \(W_{0}^{b}\) using the \(n\times d_{W}\) probability matrix \((p_{1,0},p_{2,0},\ldots,p_{d_{W},0})\). 3. For each bootstrap sample \(b\in\mathcal{B}\): Calculate the sample projection coefficient \(\hat{\tilde{\gamma}}_{W,0}^{b}\) by projecting \(W_{0}^{b}\) onto \((Z,X)\) (all demeaned), and the sum of its smallest squared singular values starting at the \((r+1)\)-th largest as \(\phi_{r,0,b}\coloneqq\phi_{r}\left(\hat{\tilde{\gamma}}_{W,0}^{b}\right)\). 4. Obtain the \(p\)-value as \(1-\frac{1}{|\mathcal{B}|}\sum_{b\in\mathcal{B}}\mathds{1}\left(\phi_{r,0,b}<\phi_{r}(\hat{\tilde{\gamma}}_{W})\right)\). 
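A condensed sketch of this procedure is given below; it is our illustration rather than the authors' implementation, the helper names are invented, and details such as the unpenalised logit fit (via statsmodels) and the treatment of covariates are simplifying assumptions.

```python
import numpy as np
import statsmodels.api as sm

def ols_coef(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def phi(M, r):
    """phi_r(M): sum of squared singular values beyond the r largest, cf. (7.13)."""
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s[r:] ** 2))

def bootstrap_rank_test(W, Z, X, r, n_boot=1000, seed=0):
    """Bootstrap p-value for H0: rank(gamma_tilde_W) <= r, for binary proxies W."""
    rng = np.random.default_rng(seed)
    n, d_W = W.shape
    Zc, Xc = Z - Z.mean(0), X - X.mean(0)
    ZX = np.column_stack([Zc, Xc])

    full = ols_coef(ZX, W - W.mean(0))             # OLS projection of W on (Z, X)
    g_hat, eta_hat = full[: Z.shape[1]], full[Z.shape[1]:]
    stat = phi(g_hat, r)                           # observed phi_r of the estimated coefficient

    # Rank-r SVD truncation, used to build the null index for each proxy (step 1).
    P, S, Qt = np.linalg.svd(g_hat)
    g_null = P[:, :r] @ np.diag(S[:r]) @ Qt[:r, :] if r > 0 else np.zeros_like(g_hat)

    probs = np.empty((n, d_W))
    for j in range(d_W):                           # logistic fit of W_j on its H0 index
        index_j = Zc @ g_null[:, j] + Xc @ eta_hat[:, j]
        logit_fit = sm.Logit(W[:, j], sm.add_constant(index_j)).fit(disp=0)
        probs[:, j] = logit_fit.predict(sm.add_constant(index_j))

    boot = np.empty(n_boot)
    for b in range(n_boot):                        # steps 2-3: redraw W under H0, recompute phi_r
        W0 = rng.binomial(1, probs).astype(float)
        g_b = ols_coef(ZX, W0 - W0.mean(0))[: Z.shape[1]]
        boot[b] = phi(g_b, r)
    return stat, float(np.mean(boot >= stat))      # step 4: bootstrap p-value
```

In practice the test would be run for increasing \(r\) until \(H_{0}\) is no longer rejected, which is how the distributions in figure 2 are obtained.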
In figure 2, the bootstrapped distributions of the test statistic \(n\phi_{r}\left(\hat{\bar{\gamma}}_{W}\right)\) are depicted under two different null hypotheses: \(r_{0}=0\) and \(r_{0}\leq 1\). Non-rejection of the test is evidence in favour of the low rank \(r_{0}\) of \(\tilde{\gamma}_{W}\). In the left diagram of figure 2, where the test concerns \(r_{0}=0\), the \(p\)-value is at zero. The test provides strong evidence against \(r_{0}=0\), which indicates some correlation between \(W\) and \(Z\) conditional on \(X\). The right diagram of figure 2 depicts the test statistic bootstrap distribution for \(H_{0}:r_{0}\leq 1\), and provides strong evidence against rejection. The associated \(p\)-value is \(93.3\%\). Thus, we can conclude that the rank of \(\gamma_{W}\) is at most one. In the NLS97 data, pre-college test results \(Z\) and risky behaviour dummies have dimensions \(d_{Z}=7\) and \(d_{W}=9\). Thus, \(r_{0}\leq 1\) allows the conclusion that the common confounder dimension is small: \(d_{U}\leq 1\). Conditional on covariates \(X\), all covariance between \(Z\) and \(W\) is explained by a one-dimensional unobserved \(U\). Successfully, the proximal assumption 7.1.3 was tested. In addition, the necessary \(d_{Z}>d_{U}\) condition for conditional instrument relevance (assumption 7.1.2b) was confirmed. ### Test relevance of \(Z\) for \(A\) given \(T\) Despite satisfying the necessary \(d_{Z}>d_{U}\) condition for IV relevance (assumption 7.1.2b), a proper test for the conditional relevance of \(Z\) for \(A\) given the control function \(T\) is still missing. In this step, we first explain how to construct the here one-dimensional control variable \(T\) after having conducted the tests in section 7.3. Then, we test for the conditional relevance of instrument \(Z\) for treatment \(A\) given this control function \(T\). Given the statistical evidence in favour of \(d_{U}\leq 1\), we construct the variable \[\underset{N\times 1}{T}\coloneqq\underset{N\times d_{Z}}{Z}\hat{P}_{0,1}\hat{ \Pi}_{0,1}, \tag{7.14}\] with the singular value decomposition of the OLS estimator \(\hat{\bar{\gamma}}_{W}=\hat{P}_{0}\hat{\Pi}_{0}\hat{Q}_{0}^{\intercal}\). \(\hat{P}_{0,1}\) is the first column of \(\hat{P}_{0}\), and \(\hat{\Pi}_{0,1}\) is the top-left entry of \(\hat{\Pi}_{0}\). Aside from sampling error, proxies \(W\) are mean-independent from instruments \(Z\) conditional on \((T,X)\). With the control \(T\) now defined, we can use a bootstrap based test to confirm the relevance of instruments \(Z\) for \(A\) conditional on \((T,X)\). The null hypothesis can be formulated as \[H_{0}:\text{rank}\left(\mathbb{E}\left[(Z\zeta)A|T,X\right] \right)<d_{A},\text{ with alternative }H_{a}:\text{rank}\left(\mathbb{E}\left[(Z\zeta)A|T,X\right] \right)=d_{A}. \tag{7.15}\] Importantly, under \(H_{0}\) the effect of \(Z\) on \(A\) (given \(X\)) would be fully described by a one-dimensional \(T\), as the dimension of \(U\) was found to be \(r_{0}\leq 1\) in section 7.2. When treatment \(A\) is one-dimensional, a simple test for this null hypothesis compares the \(R^{2}\) of an unrestricted (7.16) and restricted regression (7.17). \[A =\tilde{\alpha}_{A,ur}+Z\tilde{\zeta}+X\tilde{\eta}_{A,ur}+ \tilde{\varepsilon}_{A,ur}, \mathbb{E}\left[\tilde{\varepsilon}_{A,ur}|Z,X\right] =0, \tag{7.16}\] \[A =\tilde{\alpha}_{A,r}+T\tilde{\gamma}_{A}+X\tilde{\eta}_{A,r}+ \tilde{\varepsilon}_{A,r}, \mathbb{E}\left[\tilde{\varepsilon}_{A,r}|T,X\right] =0. 
\tag{7.17}\] Under \(H_{0}\), both regressions would predict \(A\) equally well, despite the dimension reduction on \(Z\) in the second regression, 7.17. With the uncertainty in estimated \(T\), we use a simple bootstrap-based test. With 1000 bootstrap samples \(b_{t}\in\mathcal{B}_{t}\), we obtain a bootstrap distribution of \(R_{r}^{2}\). Under \(H_{0}\), \(R_{ur}^{2}\) is (asymptotically) distributed as \(R_{r}^{2}\). _Notes_: This figure illustrates the bootstrap distribution of the restricted \(R_{r}^{2}\) in regression 7.17. The dimension of \(T\),\(d_{T}=1\), is based on the test in section 7.3. Figure 3 depicts the bootstrap distribution of \(R_{r}^{2}\) based on the restricted regression 7.17. The control variable \(T\) is constructed for each bootstrap sample as described in 7.14. The unrestricted \(R_{ur}^{2}\) based on the unrestricted regression 7.16 fits the data significantly better, which indicates rejection of \(H_{0}\). The \(p\)-value is 0.023. There is predictive information in \(Z\) for \(A\), beyond that controlled for in \((T,X)\). In other words, \(Z\) satisfies the conditional instrument relevance requirement 7.1.2b. ### Exclusion of \(Z\) conditional on \(U\) While both relevance assumptions 7.1.3 and 7.1.2b could be tested successfully, the exclusion of instrument \(Z\) conditional on \(U\) in assumption 7.1.2a remains generally untestable. To argue whether \(Z\) is exogenous conditional on \(U\), it is worth asking: Which information is being held fixed in \(T\), and what does this imply about \(U\)? The linear construction of \(T\) from \(Z\) is illustrated in table 1, where the instruments have been normalised to standard deviation one. \(Z\) is mean-independent from \(W\) conditional on \((T,X)\). \(T\) mostly consists of an average of subject-specific pre-college GPA measures. In this sense, \(T\) closely measures academic ability, as captured by pre-college GPA measures. Without the transcript GPA and ASVAB percentile, the subject-specific GPA measures describe 94.5% of variation in \(T\). Despite the negative dependence of \(T\) on ASVAB percentile in its construction, \(T\) positively correlates with ASVAB percentile unconditional on the GPA measures with a 0.31 correlation coefficient. The interpretation of \(T\) and consequently \(U\) is pretty straightforward: It positively reflects (academic) ability. As \(U\) reflects (academic) ability, an increase in \(T\) is expected to result in a reduction of risky behaviour (Loeber et al., 2012). Indeed, a one standard deviation increase in \(T\) reduces the probability of having engaged in risky behaviour by the age of 17 between 3% and 9%, as illustrated in table 2. All effects have strong statistical and economic significance. Compared to the average probability of engaging in risky behaviour, the estimated effect of a one standard deviation change in \(T\) is largest for some of the riskiest behaviour we considered: selling drugs (-54%), running away (-46%), and attacking someone (-41%). \(T\) captures the information we expected based on our suspicion about the unobserved confounder ability. \(T\) closely reflects (academic) ability as measured by high-school GPA measures, which reduces the probability of engaging in risky behaviours during high-school. Thus, we can conclude that the common confounder \(U\) contains the unobserved variable ability. 
Now, an argument is required for the conditional exogeneity of instruments \(Z\) given unobserved ability \(U\) and observed covariates \(X\): \[Y =\alpha_{Y}+A\beta+U\gamma_{Y}+W\upsilon_{Y}+X\eta_{Y}+\varepsilon_{ Y}, \mathbb{E}\left[\,\varepsilon_{Y}\!\mid\!Z,X\right] =0.\] While ability is the obvious confounder of the effect of pre-college GPA measures on net worth later in life, there are other possible confounders. Among everyone who goes to college, those with higher pre-college GPA are likely to also have a higher college GPA. Even conditional on whether someone obtained a BA degree, a higher college GPA likely leads to higher earnings later in life. Thus, college GPA is an important observed confounder. Family and individual net worth at young age can affect pre-college GPA measures as more learning resources are available. Their effect on net worth later in life is undeniable. Apart from net worth, other family background characteristics likely affect both pre-college test scores and net worth later in life. We include parental education, maternal age at first birth and the individual's birth, as well as the number of siblings to capture family background characteristics. Individual-specific characteristics are other important confounders. We include sex and citizenship status based on birth as further covariates. Conditional on this rich set of covariates \(X\), and the unobserved variable ability \(U\), there is no reason to believe that pre-college test scores \(Z\) would affect or be correlated with post-college earnings through any other channel than obtaining a BA degree \(A\). Despite our best efforts in explaining \(U\), and the provided arguments in favour of assumption 7.1.2a, a test or conditional instrument exclusion is not possible. A specification test is not feasible, because in this example the model is not overidentified. \begin{table} \begin{tabular}{c|c c c c c c c c c} & & try & run & attack & sell & destroy & steal & steal \\ & drink & smoke & marijuana & away & someone & drugs & property & \(<50\$\) & \(>50\$\) \\ \hline Pr & 65.5\% & 47.1\% & 29.8\% & 10.9\% & 19.0\% & 9.1\% & 32.0\% & 38.1\% & 8.0\% \\ \(T\) & -7.9\% & -9.5\% & -9.6\% & -5.0\% & -7.8\% & -4.9\% & -3.5\% & -4.7\% & -3.2\% \\ \end{tabular} _Notes_: The table contains sample probabilities for engaging in risky behaviour by the age of 17 in the Pr row. The estimated decrease in the probability of engaging in risky behaviour from a linear probability model for a one standard deviation increase in \(T\) is noted in the row corresponding to \(T\). \end{table} Table 2: Effect of \(T\) on \(W\) ### Estimation Estimation of the fully linear model is now straightforward. As in Tien (2022), we call the estimator an _instrumented common confounding_ (ICC) estimator. \[\hat{\beta}_{ICC}=\left(A^{\intercal}P_{Z}M_{T,X}A\right)^{-1}\left(A^{\intercal}P _{Z}M_{T,X}Y\right). \tag{7.18}\] Here, \(P_{Z}=Z\left(Z^{\intercal}Z\right)^{-1}Z^{\intercal}\) is the projection matrix of \(Z\), and \(M_{T,X}=I_{n}-P_{T,X}\) is the annihilator matrix of \((T,X)\). In table 3, the estimates of four major methods are compared: ordinary least squares (OLS), instrumental variables (IV), proximal learning (PL), and the here suggested ICC estimator. The row corresponding to \(T\) describes the estimated partial effect of \(T\) (normalised to standard deviation one) on net worth \(Y\) (at 35) in the respective regressions. 
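For reference, the following minimal NumPy sketch shows how the control variable from 7.14 and the estimator 7.18 fit together; the function names are ours, treatment and outcome are one-dimensional, no standard errors are computed, and this is not the code behind table 3. The projection matrix \(P_{Z}\) and annihilator \(M_{T,X}\) are applied through fitted values and residuals rather than as explicit \(n\times n\) matrices, using \(A^{\intercal}P_{Z}M_{T,X}=(P_{Z}A)^{\intercal}M_{T,X}\).

```python
import numpy as np

def fit(X, y):
    """OLS coefficients of y on X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def icc_estimate(Y, A, Z, X, W, d_U=1):
    """Sketch of (7.18): beta_hat = (A' P_Z M_{T,X} A)^{-1} (A' P_Z M_{T,X} Y),
    with T the rank-d_U truncation of the projection coefficient of W on (Z, X), as in (7.14)."""
    Y, A, Z, X, W = (np.asarray(v, dtype=float) for v in (Y, A, Z, X, W))
    Y, A, Z, X, W = (v - v.mean(axis=0) for v in (Y, A, Z, X, W))   # demean everything

    gamma = fit(np.column_stack([Z, X]), W)[: Z.shape[1]]           # estimated gamma_tilde_W
    P, S, _ = np.linalg.svd(gamma)
    T = (Z @ P[:, :d_U]) * S[:d_U]                                  # control variable (7.14)

    TX = np.column_stack([T, X])
    P_Z_A = Z @ fit(Z, A)                                           # P_Z A, the first-stage fit
    resid = lambda v: v - TX @ fit(TX, v)                           # M_{T,X} v
    beta_hat = (P_Z_A @ resid(Y)) / (P_Z_A @ resid(A))
    return beta_hat, T
```

A call such as `beta_hat, T = icc_estimate(Y, A, Z, X, W, d_U=1)` then returns the point estimate together with the constructed control variable; inference would additionally require standard errors that account for the estimation of \(T\).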
\(T\) is only used in proximal learning and ICC; in proximal learning (also called negative control; Cui et al., 2020) it is derived from the covariation of \((Z,A)\) and \(W\), as opposed to \(Z\) and \(W\) in our approach. The row corresponding to \(A\) contains estimates for \(\beta\), the causal effect of obtaining a BA degree \(A\) on net worth \(Y\) (at 35). Their unit is thousands of US dollars. OLS estimates that obtaining a BA degree increases net worth at 35 by 59k$. The proximal learning estimator conditions on its own \(T\), and so implicitly holds fixed anything that covaries with both \((Z,A)\) and the proxies \(W\). The proximal learning estimate, at 31k$, is indeed economically significantly smaller than the OLS estimate. As hypothesised, this might indicate that unobserved ability, which induces correlation between \((Z,A)\) and \(W\), is a confounder which biases the estimated effect of education on net worth upwards. In contrast, the IV estimate is much larger at 223k$. The inherent negative selection bias may thus be quite large. \begin{table} \begin{tabular}{l|c c c c} \hline \hline & OLS & PL & IV & ICC \\ \hline \(A\) & 59.18*** & 30.90*** & 222.97*** & 125.15** \\ & (9.12) & (10.40) & (34.74) & (52.93) \\ \hline \(T\) & & 27.76*** & & 16.05** \\ & & (4.82) & & (7.37) \\ \hline \hline \end{tabular} _Notes_: The table contains estimates and their standard errors (in parentheses) for \(\beta\) in the \(A\) row, and the linear parameter on \(T\) where the method uses it, from four estimators: Ordinary Least Squares (OLS), Proximal Learning (PL), Instrumental Variables (IV), and Instrumented Common Confounding (ICC). Asterisks indicate significance at the 1% (***), 5% (**) and 10% (*) level. \end{table} Table 3: Estimates with different estimators (\(Y\) in thousands (k)) However, the IV estimator ignores the strong correlation of the pre-college test score instruments \(Z\) with ability \(U\), which may lead to an accentuated ability bias compared to that in OLS. To obtain an estimator robust to ability bias, we condition on \(T\) and obtain the ICC estimate of 125k$. Indeed, conditioning on \(T\) attenuates the estimate by the expected ability bias. As relevance is not strongly satisfied for the instruments \(Z\) conditional on \(T\) in the ICC estimator, the standard error is expectedly large for this method. Still, both general selection bias and ability bias appear to be strong confounders in this difficult identification problem. Quantitatively separating ability and general selection bias helps add the necessary credibility to IV, which is missing under the original IV exclusion assumption. ## 8 Conclusion In this work, we relax instrument exclusion in the presence of mismeasured confounders. Other observed variables, the proxies, must be relevant for the unobserved confounders, which cause endogeneity in the instruments. The mild parametric index sufficiency assumption is also required. Importantly, the proxies can be economically meaningful variables, with their own effects on treatment and outcome. This method can be useful in various causal identification problems with observational data, where the unobserved confounders cause otherwise unrestricted observed variables. The linear returns to education identification problem illustrates how this method can identify causal effects when instrument exclusion, as is often the case in practice, is a strong and hardly testable assumption. This paper established two point identification results. 
When point identification is impossible, this approach can still identify informative bounds on causal effects. This set identification exercise is left to future work. Further, we have not demonstrated how to construct estimators other than in the linear example. Uncertainty in the control function estimation will be reflected in the performance of any estimator using this identification approach. The integration of this approach, which at best identifies averages of causal effects across unobservables, with marginal treatment effects, is another remaining task.
2303.14049
Weakly Markov categories and weakly affine monads
Introduced in the 1990s in the context of the algebraic approach to graph rewriting, gs-monoidal categories are symmetric monoidal categories where each object is equipped with the structure of a commutative comonoid. They arise for example as Kleisli categories of commutative monads on cartesian categories, and as such they provide a general framework for effectful computation. Recently proposed in the context of categorical probability, Markov categories are gs-monoidal categories where the monoidal unit is also terminal, and they arise for example as Kleisli categories of commutative affine monads, where affine means that the monad preserves the monoidal unit. The aim of this paper is to study a new condition on the gs-monoidal structure, resulting in the concept of weakly Markov categories, which is intermediate between gs-monoidal categories and Markov ones. In a weakly Markov category, the morphisms to the monoidal unit are not necessarily unique, but form a group. As we show, these categories exhibit a rich theory of conditional independence for morphisms, generalising the known theory for Markov categories. We also introduce the corresponding notion for commutative monads, which we call weakly affine, and for which we give two equivalent characterisations. The paper argues that these monads are relevant to the study of categorical probability. A case at hand is the monad of finite non-zero measures, which is weakly affine but not affine. Such structures allow to investigate probability without normalisation within an elegant categorical framework.
Tobias Fritz, Fabio Gadducci, Paolo Perrone, Davide Trotta
2023-03-24T15:01:05Z
http://arxiv.org/abs/2303.14049v2
# Weakly affine monads ###### Abstract Introduced in the 1990s in the context of the algebraic approach to graph rewriting, gs-monoidal categories are symmetric monoidal categories where each object is equipped with the structure of a commutative comonoid. They arise for example as Kleisli categories of commutative monads on cartesian categories, and as such they provide a general framework for effectful computation. Recently proposed in the context of categorical probability, Markov categories are gs-monoidal categories where the monoidal unit is also terminal, and they arise for example as Kleisli categories of commutative _affine_ monads, where affine means that the monad preserves the monoidal unit. The aim of this paper is to study a new condition on the gs-monoidal structure, resulting in the concept of _weakly Markov categories_, which is intermediate between gs-monoidal categories and Markov ones. In a weakly Markov category, the morphisms to the monoidal unit are not necessarily unique, but form a group. As we show, these categories exhibit a rich theory of conditional independence for morphisms, generalising the known theory for Markov categories. We also introduce the corresponding notion for commutative monads, which we call weakly affine, and for which we give two equivalent characterisations. The paper argues that these monads are relevant to the study of categorical probability. A case at hand is the monad of finite non-zero measures, which is weakly affine but not affine. Such structures allow to investigate probability without normalisation within an elegant categorical framework. String diagrams, gs-monoidal and Markov categories, categorical probability, affine monads 10.4230/LIPIcs... 10.4230/LIPIcs. A canonical way of obtaining a gs-monoidal category is as the Kleisli category of a commutative monad on a cartesian monoidal category. As argued in [17], commutative monads can be seen as generalising theories of distributions of some kind, and the fact that their Kleisli categories are gs-monoidal can be seen as the correspondence between distributions and (possibly unnormalised) probability theory. In particular, when the monad is affine (i.e. it preserves the monoidal unit [16, 14]), the Kleisli category is Markov - this can be seen as the correspondence between normalised distributions and probability theory. In this work we introduce and study an intermediate notion between gs-monoidal and Markov categories, which we call _weakly Markov categories_. These are defined as gs-monoidal categories where for every object its morphisms to the monoidal unit form a group (Definition 3.2). Weakly Markov categories can be interpreted intuitively as gs-monoidal categories where each morphism is discardable up to an invertible normalisation (see Proposition 3.4 for the precise mathematical statement). The choice of the name is due to the fact that every Markov category is (trivially) weakly Markov. In parallel to weakly Markov categories we also introduce _weakly affine monads_, which are commutative monads on cartesian monoidal categories preserving the (internal) group structure of the terminal object (Definition 3.5). As a particular concrete example of relevance to probability and measure theory, we consider the monad of finite non-zero measures on **Set** (Example 3.7), and we use it as a running example in the rest of the work. 
As we show (see Proposition 3.6), a commutative monad on a cartesian monoidal category is weakly affine if and only if its Kleisli category is weakly Markov, analogously to what happens with affine monads and Markov categories. Markov categories come equipped with a notion of _conditional independence_, which has been one of the main motivations for their use in categorical probability and statistics [1, 9, 12]. It is noteworthy that a notion of conditional independence can also be given for any gs-monoidal category. As we show, for weakly Markov categories it has convenient properties which can be considered "up-to-normalisation" versions of their corresponding Markov-categorical counterpart. These concepts allow us to provide an equivalent condition for weak affinity of a monad, namely a pullback condition on the associativity diagram of the structural morphisms \(c_{X,Y}:TX\times TY\to T(X\times Y)\) (Theorem 4.7), widely generalising the elementary statement that a monoid is a group if and only if its associativity diagram is a pullback (Proposition 2.1). As such, we believe that weakly affine monads are relevant to the study of categorical probability, as they allow one to investigate probability without normalisation within an elegant categorical framework. ### Outline In Section 2 we review the main structures used in this work, in particular group and monoid objects, gs-monoidal and Markov categories, and their interaction with commutative monads. In Section 3 we define the main original concepts, namely weakly Markov categories and weakly affine monads. We study their relationship and we prove that a commutative monad on a cartesian monoidal category is weakly affine if and only if its Kleisli category is weakly Markov (Proposition 3.6). We then turn to concrete examples using finite measures and group actions (Section 3.3). In Section 4 we extend the concept of conditional independence from Markov categories to general gs-monoidal categories. We specialise to the weakly Markov case and show that the situation is then similar to what happens in Markov categories, but in a certain precise sense only up to normalisation. We use this formalism to equivalently reformulate weak affinity in terms of a pullback condition (Theorem 4.7). Together with the newly introduced concepts, this result can be considered the main outcome of our work. Finally, in the concluding Section 5, we pose further questions, such as when we can iterate the construction of weakly Markov categories by means of weakly affine monads, and the relation to strongly affine monads in the sense of Jacobs [15]. ## 2 Background In this section, we develop some relevant background material for later reference. To begin, the following categorical characterisation of groups will be useful to keep in mind. **Proposition 2.1**.: _A monoid \((M,m,e)\) in \(\mathbf{Set}\) is a group if and only if the associativity square_ \[\begin{CD}M\times M\times M@>{\mathrm{id}\times m}>{}>M\times M\\ @V{m\times\mathrm{id}}V{}V@V{m}V{}V\\ M\times M@>{m}>{}>M\end{CD}\] (1) _is a pullback._ Proof.: The square (1) is a pullback of sets if and only if given \(a,g,h,c\in M\) such that \(ag=hc\), there exists a unique \(b\in M\) such that \(g=bc\) and \(h=ab\). First, suppose that \(M\) is a group. Then the only possible choice of \(b\) is \[b=a^{-1}h=gc^{-1}\] which is unique by uniqueness of inverses. Conversely, suppose that (1) is a pullback. We can set \(g,h=e\) and \(c=a\) so that \(ae=ea=a\). Instantiating the pullback property on these elements gives \(b\) such that \(ab=e\) and \(ba=e\), that is, \(b=a^{-1}\). 
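Since the proof is element-wise, the pullback property can be checked mechanically for small examples. The following brute-force sketch (the function and variable names are ours, purely for illustration) confirms it for a finite group and refutes it for a finite monoid that is not a group.

```python
from itertools import product

def associativity_square_is_pullback(elements, mult):
    """Check, for a finite monoid, the pullback property of Proposition 2.1:
    for all a, g, h, c with a*g == h*c there is a unique b with h == a*b and g == b*c."""
    for a, g, h, c in product(elements, repeat=4):
        if mult(a, g) == mult(h, c):
            witnesses = [b for b in elements if mult(a, b) == h and mult(b, c) == g]
            if len(witnesses) != 1:
                return False
    return True

# The cyclic group Z/3 under addition: a group, so the square is a pullback.
print(associativity_square_is_pullback(range(3), lambda x, y: (x + y) % 3))   # True

# The multiplicative monoid {0, 1}: not a group (0 has no inverse), so the square is not a pullback.
print(associativity_square_is_pullback([0, 1], lambda x, y: x * y))           # False
```

For \(\{0,1\}\) under multiplication, the quadruple \(a=0\), \(g=1\), \(h=1\), \(c=0\) satisfies \(ag=hc\) but admits no \(b\) with \(ab=h\), so the unique-witness condition fails, matching the fact that \(0\) has no inverse.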
Proposition 2.1 holds generally for a monoid object in a cartesian monoidal category, where the element-wise proof still applies thanks to the following standard observation. **Remark 2.2**.: Given an object \(M\) in a cartesian monoidal category \(\mathcal{D}\), there is a bijection between internal monoid structures on \(M\) and monoid structures on every hom-set \(\mathcal{D}(X,M)\) such that pre-composition with any \(f:X\to Y\) defines a monoid homomorphism \[\mathcal{D}(Y,M)\longrightarrow\mathcal{D}(X,M).\] The proof is straightforward by the Yoneda lemma. It follows that Proposition 2.1 holds for internal monoids in cartesian monoidal categories in general. For the consideration of categorical probability, we now recall the simplest version of a commutative monad of measures. It works with measures taking values in any semiring instead of \([0,\infty)\) (see e.g. [7, Section 5.1]), but we restrict to the case of \([0,\infty)\) for simplicity. **Definition 2.3**.: Let \(X\) be a set. Denote by \(MX\) the set of _finitely supported measures on \(X\)_, i.e. the functions \(m:X\to[0,\infty)\) that are zero for all but a finite number of \(x\in X\). Given a function \(f:X\to Y\), denote by \(Mf:MX\to MY\) the function sending \(m\in MX\) to the assignment \[(Mf)(m)\,:\,y\longmapsto\sum_{x\in f^{-1}(y)}m(x).\] This makes \(M\) into a functor, and even a monad with unit \(X\to MX\), \(x\mapsto\delta_{x}\), and multiplication \(MMX\to MX\), \(\xi\mapsto E\xi\), where \[\delta_{x}(x^{\prime})=\begin{cases}1&x=x^{\prime},\\ 0&x\neq x^{\prime},\end{cases}\qquad\qquad(E\xi)(x)=\sum_{m\in MX}\xi(m)\,m(x).\] Call \(M\) the _measure monad_ on \(\mathbf{Set}\). Denote also by \(DX\subseteq MX\) the subset of _probability measures_, i.e. those finitely supported \(p:X\to[0,\infty)\) such that \[\sum_{x\in X}p(x)=1.\] \(D\) forms a sub-monad of \(M\) called the _distribution monad_. It is known that \(M\) is a commutative monad [7]. The corresponding lax monoidal structure \[MX\times MY\overset{c}{\longrightarrow}M(X\times Y)\] is exactly the formation of product measures given by \(c(m,m^{\prime})(x,y)=m(x)m^{\prime}(y)\). Also \(D\) is a commutative monad with the induced lax monoidal structure, since the product of probability measures is again a probability measure. ### GS-monoidal and Markov categories We recall here the basic definitions adopting the graphical formalism of string diagrams, referring to [18] for some background on various notions of monoidal categories and their associated diagrammatic calculus. 
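Before turning to the string-diagrammatic definitions, the following small sketch spells out the measure monad \(M\) of Definition 2.3 computationally; the dictionary encoding and the function names are ours and purely illustrative.

```python
from collections import defaultdict

# Finitely supported measures on a set, encoded as dicts {x: weight}.

def unit(x):
    """eta: the Dirac measure delta_x."""
    return {x: 1.0}

def pushforward(f, m):
    """Mf: push a measure m on X forward along f: X -> Y."""
    out = defaultdict(float)
    for x, w in m.items():
        out[f(x)] += w
    return dict(out)

def flatten(xi):
    """E: multiplication MMX -> MX, (E xi)(x) = sum_m xi(m) * m(x).
    Here xi (a measure on measures) is encoded as a list of (weight, measure) pairs."""
    out = defaultdict(float)
    for weight, m in xi:
        for x, w in m.items():
            out[x] += weight * w
    return dict(out)

def product_measure(m, mp):
    """c: MX x MY -> M(X x Y), the lax monoidal structure c(m, m')(x, y) = m(x) * m'(y)."""
    return {(x, y): w * wp for x, w in m.items() for y, wp in mp.items()}

m = {"a": 0.5, "b": 1.5}     # a measure on X = {a, b} (total mass 2, not normalised)
p = {0: 0.25, 1: 0.75}       # a probability measure on Y = {0, 1} (lands in the submonad D)
print(product_measure(m, p))
print(pushforward(lambda xy: xy[0], product_measure(m, p)) == m)   # True: p has total mass 1
```

The last check pushes a product measure forward along the first projection and recovers the first factor scaled by the total mass of the second; since that mass is 1 only for probability measures, this is exactly the normalisation issue separating \(M\) from its affine submonad \(D\).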
**Definition 2.4**.: A **gs-monoidal category** is a symmetric monoidal category \((\mathcal{C},\otimes,I)\) with a commutative comonoid structure on each object \(X\), consisting of a comultiplication \(\mathrm{copy}_{X}:X\to X\otimes X\) and a counit \(\mathrm{del}_{X}:X\to I\) which satisfy the commutative comonoid equations (coassociativity, cocommutativity and counitality). These comonoid structures must be multiplicative with respect to the monoidal structure. **Definition 2.5**.: A morphism \(f:X\to Y\) in a gs-monoidal category is called **copyable** or **functional** if \[\mathrm{copy}_{Y}\circ f=(f\otimes f)\circ\mathrm{copy}_{X}.\] It is called **discardable** or **full** if \[\mathrm{del}_{Y}\circ f=\mathrm{del}_{X}.\] **Example 2.6**.: The category \(\mathbf{Rel}\) of sets and relations with the monoidal operation given by the direct product of sets is a gs-monoidal category [6]. In this gs-monoidal category, the copyable arrows are precisely the partial functions, and the discardable arrows are the total relations. **Remark 2.7**.: It is well-known that if every morphism is copyable and discardable, or equivalently if the copy and discard maps are natural, then the monoidal product is the categorical product, and thus the category is cartesian monoidal [8]. In other words, the following conditions are equivalent for a gs-monoidal category \(\mathcal{C}\): * \(\mathcal{C}\) is cartesian monoidal; * every morphism is copyable and discardable; * the copy and discard maps are natural. In recent work [10] it has been shown that gs-monoidal categories naturally arise in several ways, such as Kleisli categories of commutative monads or span categories. In the following proposition, we recall the result regarding Kleisli categories. **Proposition 2.8**.: _Let \(T\) be a commutative monad on a cartesian monoidal category \(\mathcal{D}\). Then its Kleisli category \(\mathrm{Kl}_{T}\) is canonically a gs-monoidal category with the copy and discard structure induced by that of \(\mathcal{D}\)._ **Example 2.9**.: The Kleisli categories of the monads \(M\) and \(D\) of Definition 2.3 are gs-monoidal. We can write their Kleisli categories concretely as follows: * a morphism \(k:X\to Y\) of \(\mathrm{Kl}_{M}\) is a _matrix_ with rows indexed by \(Y\) and columns indexed by \(X\), and non-negative entries \(k(y|x)\) such that for each \(x\in X\), the number \(k(y|x)\) is non-zero only for finitely many \(y\); * a morphism \(k:X\to Y\) of \(\mathrm{Kl}_{D}\) is a morphism of \(\mathrm{Kl}_{M}\) such that moreover, for all \(x\in X\), the sum of each column satisfies \[\sum_{y\in Y}k(y|x)=1.\] If \(X\) and \(Y\) are finite, such a matrix is called a _stochastic matrix_. In both categories, identities are identity matrices, composition is matrix composition, the monoidal structure is the cartesian product on objects and the Kronecker product on matrices, and the copy and discard maps are the images of the standard copy and discard maps on \(\mathbf{Set}\) under the Kleisli inclusion functor. Nowadays, _Markov categories_ [9] represent one of the more interesting specialisations of the notion of gs-monoidal category. Based on the interpretation of their arrows as generalised Markov kernels, Markov categories are considered the foundation for a categorical approach to probability theory. 
A gs-monoidal category is said to be a **Markov category** if any (hence all) of the following equivalent conditions are satisfied: * the monoidal unit is terminal; * the discard maps are natural; * every morphism is discardable. We recall from [16, 14] the notion of _affine monad_. A monad \(T\) on a cartesian monoidal category is called **affine** if \(T1\cong 1\). It was observed in [9, Corollary 3.2] that if the monad preserves the terminal object, then every arrow of the Kleisli category is discardable, and this makes the Kleisli category into a Markov category. Since the converse is easy to see, we have the following addendum to Proposition 2.8. Let \(T\) be a commutative monad on a cartesian monoidal category \(\mathcal{D}\). Then \(\mathrm{Kl}_{T}\) is Markov if and only if \(T\) is affine. The distribution monad \(D\) of Definition 2.3 is affine, and so its Kleisli category (Example 2.9) is a Markov category. It is one of the simplest examples of categories of relevance for categorical probability. The measure monad \(M\) is not affine, as it is easy to see that \(M1\cong[0,\infty)\), and so its Kleisli category is not Markov. ## 3 Weakly Markov categories and weakly affine monads In this section, we introduce an intermediate level between gs-monoidal and Markov called _weakly Markov_, and its corresponding notion for monads, which we call _weakly affine_. ### The monoid of effects In a gs-monoidal category \(\mathcal{C}\) we call a _state_ a morphism from the monoidal unit \(p:I\to X\), and an _effect_ a morphism to the monoidal unit \(a:X\to I\). As is standard convention, we represent such morphisms as triangles in string diagrams. Effects, i.e. elements of the set \(\mathcal{C}(X,I)\), form canonically a commutative monoid as follows: the monoidal unit is the discard map \(X\to I\), and given \(a,b:X\to I\), their product \(ab\coloneqq(a\otimes b)\circ\mathrm{copy}_{X}\) is given by copying.1 Footnote 1: See also e.g. the \(\odot\) product in [2, Proposition 3.10]. If a morphism \(f:X\to Y\) is copyable and discardable, the pre-composition with \(f\) induces a morphism of monoids \(\mathcal{C}(Y,I)\to\mathcal{C}(X,I)\). **Remark 3.1**.: The monoidal unit \(I\) of a monoidal category is canonically a monoid object via the coherence isomorphisms \(I\otimes I\cong I\) and \(I\cong I\). However, in a general (i.e. not necessarily cartesian) gs-monoidal category \(\mathcal{C}\), the monoid structure on \(\mathcal{C}(X,I)\) is not, as in Remark 2.2, coming from considering the presheaf represented by \(I\). Indeed, in order for Remark 2.2 to hold, we would need that every pre-composition is a morphism of monoids. As remarked above, this fails in general unless all morphisms are copyable and discardable (i.e. if \(\mathcal{C}\) is not cartesian monoidal). Let us now consider the case where the gs-monoidal structure comes from a commutative monad on a cartesian monoidal category \(\mathcal{D}\). In this case, the monoid structure on Kleisli morphisms \(X\to 1\) does come from the canonical internal monoid structure on \(T1\) (and from the one on \(1\)) in \(\mathcal{D}\). Indeed, \(T1\) is a monoid object with the following unit and multiplication [17, Section 10] \[\begin{CD}1@>{\eta}>{}>T1,\hskip 28.452756ptT1\times T1@>{c_{1,1}}>{}>T(1\times 1)@>{\cong}>{}>T1.\end{CD}\] For example, for the monad of measures \(M\), we obtain \(M1=[0,\infty)\) with its usual multiplication. The resulting monoid structure on Kleisli morphisms \(X\to 1\) is now given as follows. 
The unit is given by \[\begin{CD}X@>{\mathrm{del}_{X}}>{}>1@>{\eta}>{}>T1,\end{CD}\] and the multiplication of Kleisli morphisms \(f,g:X\to 1\) represented by \(f^{\sharp},g^{\sharp}:X\to T1\) is the Kleisli morphism represented by \[\begin{CD}X@>{\mathrm{copy}_{X}}>{}>X\times X@>{f^{\sharp}\times g^{\sharp}_{ \star}}>{}>T1\times T1@>{c_{1,1}}>{}>T(1\times 1)@>{\cong}>{}>T1.\end{CD}\] For the monad of measures \(M\), Kleisli morphisms \(X\to 1\) are represented by functions \(X\to[0,\infty)\), and this description shows that their product is the point-wise product. For a general \(\mathcal{C}\), the commutative monoid \(\mathcal{C}(X,I)\) acts on the set \(\mathcal{C}(X,Y)\): given \(a:X\to I\) and \(f:X\to Y\), the resulting \(a\cdot f\) is given as follows **XX:8**: **Weakly affine monads** It is straightforward to see that this indeed amounts to an action of the monoid \(\mathcal{C}(X,I)\) on the set \(\mathcal{C}(X,Y)\). For the monad of measures \(M\), this action is given by point-wise rescaling. Moreover, for a general \(\mathcal{C}\) the operation \[\mathcal{C}(X,Y)\times\mathcal{C}(X,Z) \longrightarrow\mathcal{C}(X,Y\otimes Z)\] \[(f,g) \longmapsto f\cdot g\coloneqq(f\otimes g)\circ\operatorname{copy}_ {X}\] commutes with this action in each variable (separately). ### Main definitions **Definition 3.2**.: A gs-monoidal category \(\mathcal{C}\) is called **weakly Markov** if for every object \(X\), the monoid \(\mathcal{C}(X,I)\) is a group. Every Markov category is weakly Markov: for every object \(X\), the monoid \(\mathcal{C}(X,I)\) is the trivial group. **Definition 3.3**.: Given two parallel morphisms \(f,g:X\to Y\) in a weakly Markov category \(\mathcal{C}\), we say that \(f\) and \(g\) are called **equivalent**, denoted \(f\sim g\), if they lie in the same orbit for the action of \(\mathcal{C}(X,I)\), i.e. if there is \(a\in\mathcal{C}(X,I)\) such that \(a\cdot f=g\). Note that if \(a\cdot f=g\) for some \(a\), then \(a\) is unique. This can be seen by discarding \(Y\) in the following diagram In other words, the action of \(\mathcal{C}(X,I)\) on \(\mathcal{C}(X,Y)\) is free, i.e. it has trivial stabilisers. For the next statement, let us first call the _mass_ of a morphism \(f:X\to Y\) in a gs-monoidal category \(\mathcal{C}\) the morphism \(m_{f}\coloneqq\operatorname{dely}\circ f:X\to I\). Note that \(f\) is discardable if and only if \(m_{f}=\operatorname{dely}_{X}\), i.e. if its mass is the unit of the monoid \(\mathcal{C}(X,I)\). **Proposition 3.4**.: _Every morphism \(f:X\to Y\) in a weakly Markov category is equivalent to a unique discardable morphism._ We call the discardable morphism the _normalisation_ of \(f\) and denote it by \(n_{f}:X\to Y\). Proof.: Consider the mass \(m_{f}\), and denote its group inverse by \(m_{f}^{-1}\). The morphism \(n_{f}\coloneqq m_{f}^{-1}\cdot f\) is discardable and equivalent to \(f\). Suppose now that \(d:X\to Y\) is discardable and equivalent to \(f\), i.e. there exists \(a:X\to I\) such that \(d=a\cdot f\). Since \(d\) is discardable which means that \(a=m_{f}^{-1}\), i.e. \(d=n_{f}\). In other words, every morphism \(f\) can be written as its mass times its normalisation. Let us now look at the Kleisli case. A commutative monad \(T\) on a cartesian monoidal category is called **weakly affine** if \(T1\) with its canonical internal commutative monoid structure is a group. This choice of terminology is motivated by the following proposition, which can be seen as a "weakly" version of Proposition 2. 
Let \(\mathcal{D}\) be a cartesian monoidal category and \(T\) a commutative monad on \(\mathcal{D}\). Then the Kleisli category of \(T\) is weakly Markov if and only if \(T\) is weakly affine. Proof.: First, suppose that \(T1\) is an internal group, and denote by \(\iota:T1\to T1\) its inversion map. The inverse of a Kleisli morphism \(a:X\to 1\) in \(\mathrm{Kl}_{T}(X,1)\) represented by \(a^{\sharp}:X\to T1\) is represented by \(\iota\circ a^{\sharp}\): indeed, the following diagram in \(\mathcal{D}\) commutes where the bottom rectangle commutes since \(\iota\) is the inversion map for \(T1\). The analogous diagram with \(\iota\times\mathrm{id}\) in place of \(\mathrm{id}\times\iota\) similarly commutes. Conversely, suppose that for every \(X\), the monoid structure on \(\mathrm{Kl}_{T}(X,1)\) has inverses. Then in particular we can take \(X=T1\), and the inverse of the Kleisli morphism \(\mathrm{id}:T1\to T1\) is an inversion map for \(T1\). This result can also be thought of in terms of the Yoneda embedding, via Remark 2: since the Yoneda embedding preserves and reflects pullbacks (and all limits), the associativity square for \(T1\) is a pullback in \(\mathcal{D}\) if and only if the associativity squares of all the monoids \(\mathcal{D}(X,T1)\) are pullbacks. Note that Remark 2 applies since we are assuming that \(\mathcal{D}\) is _cartesian_ monoidal. In the proof of Proposition 3, this is reflected by the fact in the main diagram, the morphism \(a^{\sharp}\) commutes with the copy maps. ### Examples of weakly affine monads Every affine monad is a weakly affine monad. Below you find a few less trivial examples. Let \(M^{*}:\mathbf{Set}\to\mathbf{Set}\) be the monad assigning to every set the set of finitely supported discrete _non-zero_ measures on \(M^{*}\), or equivalently let \(M^{*}(X)\) for any set \(X\) be the set of non-zero finitely supported functions \(X\to[0,\infty)\). It is a sub-monad \(M^{*}\subseteq M\), meaning that the monad structure is defined in terms of the same formulas as for the monad of measures \(M\) (Definition 2). Similarly, the lax structure components \[c_{X,Y}\::\:M^{*}X\times M^{*}Y\longrightarrow M^{*}(X\times Y)\] are also given by the formation of product measures, or equivalently point-wise products of functions \(X\to[0,\infty)\). Since \(M^{*}1\cong(0,\infty)\not\cong 1\), this monad is not affine. However the monoid structure of \((0,\infty)\) induced by \(M^{*}\) is the usual multiplication of positive real numbers, which form a group. Therefore \(M^{*}\) is weakly affine, and its Kleisli category is weakly Markov. On the other hand, if the zero measure is included, we have \(M1\cong[0,\infty)\) which is not a group under multiplication, so \(M\) is not weakly affine. Let \(A\) be a commutative monoid. Then the functor \(T_{A}\coloneqq A\times-\) on \(\mathbf{Set}\) has a canonical structure of commutative monad, where the lax structure components \(c_{X,Y}\) are given by multiplying elements in \(A\) while carrying the elements of \(X\) and \(Y\) along. Since \(T_{A}1\cong A\), the monad \(T_{A}\) is weakly affine if and only if \(A\) is a group, and affine if and only if \(A\cong 1\). As for negative examples, consider the free abelian group monad \(F\) on \(\mathbf{Set}\). Its functor takes a set \(X\) and forms the set \(FX\) of finite multisets (with repetition, where order does not matter) of elements of \(X\) and their formal inverses. We have that \(F1\cong\mathbb{Z}\), which is an abelian group under addition. 
However, the monoid structure on \(F1\) induced by the monoidal structure of the monad corresponds to the _multiplication_ on \(\mathbb{Z}\), which does not have inverses. Therefore \(F\) is not weakly affine. ## 4 Conditional independence in weakly Markov categories Markov categories have a rich theory of conditional independence in the sense of probability theory [12]. It is noteworthy that some of those ideas can be translated and generalised to the setting of weakly Markov categories. **Definition 4.1**.: A morphism \(f:A\to X_{1}\otimes\cdots\otimes X_{n}\) in a gs-monoidal category \(\mathcal{C}\) is said to exhibit **conditional independence of the \(X_{i}\) given \(A\)** if and only if it can be expressed as a product of the following form \[f=(g_{1}\otimes\cdots\otimes g_{n})\circ\mathrm{copy}^{(n)}_{A}\] for some morphisms \(g_{i}:A\to X_{i}\), where \(\mathrm{copy}^{(n)}_{A}:A\to A^{\otimes n}\) denotes the \(n\)-fold copy map. Note that this formulation is a bit different from the earlier definitions given in [1, Definition 6.6] and [9, Definition 12.12], which were formulated for morphisms in Markov categories and state that \(f\) exhibits conditional independence if the above holds with the \(g_{i}\) being the _marginals_ of \(f\), which are \[f_{i}\;\coloneqq\;(\mathrm{del}_{X_{1}}\otimes\cdots\otimes\mathrm{id}_{X_{i}}\otimes\cdots\otimes\mathrm{del}_{X_{n}})\circ f.\] **Proposition 4.3**.: _Let \(\mathcal{D}\) be a cartesian monoidal category and \(T\) a commutative monad on \(\mathcal{D}\). 
Then a Kleisli morphism represented by \(f^{\sharp}:A\to T(X_{1}\times\cdots\times X_{n})\) exhibits conditional independence of the \(X_{i}\) given \(A\) if and only if it factors as_ \[A\xlongrightarrow{\rm copy}A\times\cdots\times A\xrightarrow{g_{1}^{\sharp}\times\cdots\times g_{n}^{\sharp}}TX_{1}\times\cdots\times TX_{n}\xrightarrow{c}T(X_{1}\times\cdots\times X_{n})\] _for some Kleisli maps \(g_{i}^{\sharp}:A\to TX_{i}\), where the map \(c\) above is the one obtained by iterating the lax monoidal structure (which is unique by associativity)._ Proof.: In terms of the base category \(\mathcal{D}\), a Kleisli morphism in the form of Definition 4.1 reads as follows \[A\xlongrightarrow{\rm copy}A\times\cdots\times A\xrightarrow{g_{1}^{\sharp}\times\cdots\times g_{n}^{\sharp}}TX_{1}\times\cdots\times TX_{n}\xrightarrow{c}T(X_{1}\times\cdots\times X_{n}).\] Therefore \(f^{\sharp}:A\to T(X_{1}\times\cdots\times X_{n})\) exhibits conditional independence if and only if it is of the form above. **Example 4.4**.: In the Kleisli category of the measure monad \(M\), and for any objects, the morphism \(A\to X_{1}\otimes\cdots\otimes X_{n}\) given by the zero measure on every \(a\in A\) exhibits conditional independence of its outputs given its input. For example, for \(A=1\), the zero measure on \(X\times Y\) is the product of the zero measure on \(X\) and the zero (or any other) measure on \(Y\). Notice that both marginals of the zero measure are zero measures - therefore, the factors appearing in the product are not necessarily related to the marginals. In a weakly Markov category, the situation is similar to the Markov case discussed above, but up to equivalence: an arrow exhibits conditional independence if and only if it is _equivalent to_ the product of its marginals. **Proposition 4.5**.: _Let \(f:A\to X_{1}\otimes\cdots\otimes X_{n}\) be a morphism in a weakly Markov category \(\mathcal{C}\). Then \(f\) exhibits conditional independence of the \(X_{i}\) given \(A\) if and only if it is equivalent to the product of all its marginals._ Proof.: Denote the marginals of \(f\) by \(f_{1},\ldots,f_{n}\). Suppose that \(f\) is a product as in Definition 4.1. By marginalising, for each \(i=1,\ldots,n\) we get \[f_{i}=\Big(\prod_{j\neq i}m_{g_{j}}\Big)\cdot g_{i},\] where \(m_{g_{j}}=\mathrm{del}_{X_{j}}\circ g_{j}\) is the mass of \(g_{j}\). Therefore for each \(i\) we have that \(f_{i}\sim g_{i}\). Conversely, suppose that \(f\) is equivalent to the product of its marginals, i.e. that there exists \(a:A\to I\) such that \(f=a\cdot\big((f_{1}\otimes\cdots\otimes f_{n})\circ\mathrm{copy}^{(n)}_{A}\big)\). One can then choose \(g_{i}=f_{i}\) for all \(i<n\), and \(g_{n}=a\cdot f_{n}\), so that \(f\) is in the form of Definition 4.1. For \(n=2\), a morphism \(f:A\to X\otimes Y\) in a weakly Markov category \(\mathcal{C}\) exhibits conditional independence of \(X\) and \(Y\) given \(A\) if and only if the following equation holds: \[f\;\sim\;(f_{X}\otimes f_{Y})\circ\mathrm{copy}_{A},\] where \(f_{X}\) and \(f_{Y}\) are the two marginals of \(f\). ### Main result The concept of conditional independence for general weakly Markov categories allows us to give an equivalent characterisation of weakly affine monads. The condition is a pullback condition on the associativity diagram, and it recovers Proposition 2.1 when applied to the monads of the form \(A\times-\) for \(A\) a commutative monoid. **Theorem 4.7**.: _Let \(\mathcal{D}\) be a cartesian monoidal category and \(T\) a commutative monad on \(\mathcal{D}\). Then the following conditions are equivalent:_ 1. \(T\) is weakly affine; 2. the Kleisli category \(\mathrm{Kl}_{T}\) is weakly Markov; 3. 
for all objects \(X\), \(Y\), and \(Z\), the following associativity diagram is a pullback \[\begin{CD}T(X)\times T(Y)\times T(Z)@>{\mathrm{id}\times c_{Y,Z}}>{}>T(X) \times T(Y\times Z)\\ @V{c_{X,Y}\times\mathrm{id}}V{}V@V{c_{X,Y\times Z}}V{}V\\ T(X\times Y)\times T(Z)@>{c_{X\times Y,Z}}>{}>T(X\times Y\times Z)\end{CD}\] (2) We prove the theorem by means of the following property of weakly Markov categories. **Lemma 4.8** (localised independence property).: Let \(\mathcal{C}\) be a weakly Markov category. Whenever a morphism \(f:A\to X\otimes Y\otimes Z\) exhibits conditional independence of \(X\otimes Y\) (jointly) and \(Z\) given \(A\), as well as conditional independence of \(X\) and \(Y\otimes Z\) given \(A\), then it exhibits conditional independence of \(X\), \(Y\), and \(Z\) given \(A\). Proof of Lemma 4.8. Suppose \(f:A\to X\otimes Y\otimes Z\) exhibits conditional independence of \(X\otimes Y\) (jointly) and \(Z\) given \(A\), as well as conditional independence of \(X\) and \(Y\otimes Z\) given \(A\). By marginalising out \(X\), we have that \(f_{YZ}\) exhibits conditional independence of \(Y\) and \(Z\) given \(A\). Since by hypothesis \(f\) exhibits conditional independence of \(X\) and \(Y\otimes Z\) given \(A\), by Proposition 4.5 we have that \(f\) is equivalent to the product of \(f_{X}\) and \(f_{YZ}\). But, again by Proposition 4.5, \(f_{YZ}\) is equivalent to the product of \(f_{Y}\) and \(f_{Z}\), so we have that \(f\) is equivalent to the product of all its marginals. Using Proposition 4.5 in the other direction, this means that \(f\) exhibits conditional independence of \(X\), \(Y\) and \(Z\) given \(A\). We are now ready to prove the theorem. Proof of Theorem 4.7. \(1\Leftrightarrow 2\): see Proposition 3.6. \(1\Rightarrow 3\): By the universal property of products, a cone over the cospan in (2) consists of maps \(g_{1}^{\sharp}:A\to TX\), \(g_{23}^{\sharp}:A\to T(Y\times Z)\), \(g_{12}^{\sharp}:A\to T(X\times Y)\) and \(g_{3}^{\sharp}:A\to TZ\) such that the two composites into \(T(X\times Y\times Z)\) agree, i.e. \(c_{X,Y\times Z}\circ\langle g_{1}^{\sharp},g_{23}^{\sharp}\rangle=c_{X\times Y,Z}\circ\langle g_{12}^{\sharp},g_{3}^{\sharp}\rangle\). By Proposition 4.3, this amounts to a Kleisli morphism \(f^{\sharp}:A\to T(X\times Y\times Z)\) exhibiting conditional independence of \(X\) and \(Y\otimes Z\) given \(A\), as well as of \(X\otimes Y\) and \(Z\) given \(A\). By the localised independence property (Lemma 4.8), we then have that \(f\) exhibits conditional independence of all \(X\), \(Y\) and \(Z\) given \(A\), and so, again by Proposition 4.3, \(f^{\sharp}\) factors through the product \(TX\times TY\times TZ\). More specifically, by marginalising over \(Z\), we have that \(g_{12}^{\sharp}\) factors through \(TX\times TY\), i.e. \(g_{12}=h_{1}\cdot h_{2}\) for some \(h_{1}^{\sharp}:A\to TX\) and \(h_{2}^{\sharp}:A\to TY\), and similarly, by marginalising over \(X\), \(g_{23}=\ell_{2}\cdot\ell_{3}\) for some \(\ell_{2}^{\sharp}:A\to TY\) and \(\ell_{3}^{\sharp}:A\to TZ\). By marginalising over \(Y\) and \(Z\), and by weak affinity of \(T\), there exists a unique \(a^{\sharp}:A\to T1\) such that \(h_{1}=a\cdot g_{1}\). Therefore \[g_{12}=h_{1}\cdot h_{2}=(a\cdot g_{1})\cdot h_{2}=g_{1}\cdot(a\cdot h_{2}),\] and so we can equivalently replace \(h_{1}\) and \(h_{2}\) with \(g_{1}\) and \(a\cdot h_{2}\). 
Similarly, by marginalising over \(X\) and \(Y\), there exists a unique \(c^{\sharp}:A\to T1\) such that \(\ell_{3}=c\cdot g_{3}\), so that \[g_{23}=\ell_{2}\cdot\ell_{3}=\ell_{2}\cdot(c\cdot g_{3})=(c\cdot\ell_{2})\cdot g_{3}\] and we may equivalently take the factorisation of \(g_{23}\) to be through \(c\cdot\ell_{2}\) and \(g_{3}\). Now, marginalising over \(X\) and \(Z\), we see that necessarily \(a\cdot h_{2}=c\cdot\ell_{2}\). Therefore there is a unique map \(A\to TX\times TY\times TZ\), given by the triple \((g_{1},a\cdot h_{2},g_{3})\), making everything commute, which means that (2) is a pullback.

\(3\Rightarrow 1\): Suppose that the square (2) is a pullback. Taking \(X=Y=Z=1\) shows that the commutative monoid \(T1\) must be an abelian group: the cone given by \(\langle\mathrm{id}_{T1},e\rangle\) and \(\langle e,\mathrm{id}_{T1}\rangle\colon T1\to T1\times T1\), where \(e\) denotes the constant unit, induces a unique mediating map into \(T1\times T1\times T1\), and the commutativity conditions show that its middle component \(\iota\colon T1\to T1\) satisfies the equations making it the inversion map for a group structure.

In the Kleisli category \(\operatorname{Kl}_{M}\) of the measure monad \(M\) (which is not weakly affine), consider the associativity square (2). In the top-right corner \(MX\times M(Y\times Z)\), take the pair \((0,p)\) where \(p\) is any non-zero measure on \(Y\times Z\), and similarly, in the bottom-left corner take the pair \((q,0)\) where \(q\) is any non-zero measure on \(X\times Y\). Following the diagram, both pairs are mapped to the zero measure in the bottom-right corner. If the diagram were a pullback, we would be able to express the top-right and bottom-left corners as coming from the same triple in \(MX\times MY\times MZ\), that is, there would exist a measure \(m\) on \(Y\) such that \(m\cdot 0=p\) and \(0\cdot m=q\). Since \(p\) and \(q\) are non-zero, this is not possible.

It is worth noting that the pullback condition on the associativity square is not equivalent to the localised independence property of Lemma 4.8: recall that a zero measure always exhibits conditional independence of all its outputs (Example 4.2). Therefore, for zero measures, the localised independence property is always trivially valid, and hence the Kleisli category of the measure monad \(M\) satisfies it in general. However, the example above shows explicitly that the pullback property fails. For now it is an open question whether the localised independence property for a Kleisli category is reflected by an equivalent condition on the monad.

## 5 Conclusions and future work

Our paper introduces weakly Markov categories and weakly affine monads and explores their relationship. More explicitly, our main result (Theorem 4.7) establishes a tight correspondence between the algebraic properties of \(T1\) and the universal properties of certain commutative squares given by the structural arrows of \(T\) for a commutative monad \(T\) on a cartesian category. We believe that this theorem suggests at least two potential directions for future research, namely

* generalising the statement to weakly affine monads on weakly Markov categories;
* generalising other Markov-categorical notions, such as the positivity axiom, to weakly Markov or even general gs-monoidal categories.

We will provide further details on these potential directions in the following.

#### Regarding possible generalisations

In Theorem 4.7, we provide a characterisation of weakly affine monads on cartesian monoidal categories. Taking inspiration from the case of affine monads on Markov categories [9, Corollary 3.2], it seems natural to consider whether our main result can be extended to commutative monads on _weakly Markov categories_.
However, solving this problem is non-trivial and requires clever adjustments to the main definitions. The crucial point is that, in general, the structure of the internal group of \(T1\) and the structure of the group \(\mathcal{D}(X,T1)\) are not necessarily related in the current definitions. One approach could be to introduce a form of _compatibility_ for \(T1\) and \(\mathcal{D}(X,T1)\) by defining a weakly affine monad on a weakly Markov category as a commutative monad such that \(T1\) is an internal group and \(\mathcal{D}(X,T1)\) is a group with the composition and units induced by those of \(T1\). With this change, for example, Proposition 3.6 would work for any weakly Markov category, but Theorem 4.7 would likely fail as its proof involves the universal property of products.

#### On the positivity axiom

Recall that a strong monad \(T\) on a cartesian monoidal category is _strongly affine_ [15] if for every pair of objects \(X\) and \(Y\) the following diagram is a pullback \[\begin{CD}X\times TY@>{s_{X,Y}}>{}>T(X\times Y)\\ @V{\pi_{X}}V{}V@V{T\pi_{X}}V{}V\\ X@>{\eta_{X}}>{}>TX\end{CD}\] where \(s\) denotes the strength, \(\eta\) denotes the unit of the monad, and \(\pi_{X}\) the projection onto \(X\). Every strongly affine monad is affine. The corresponding condition on the Markov category \(\mathrm{Kl}_{T}\) has recently been characterised as an information flow axiom called _positivity_ [11, Section 2]. For a generic commutative monad, the diagram above may even fail to commute (take for example the measure monad \(M\), and start with \((x,0)\) in the top left corner). One can however consider the following diagram, which reduces to the one above (up to isomorphism) in the affine case and which always commutes by naturality of the strength, \[\begin{CD}X\times TY@>{s_{X,Y}}>{}>T(X\times Y)\\ @V{\mathrm{id}_{X}\times T!_{Y}}V{}V@V{T(\mathrm{id}_{X}\times!_{Y})}V{}V\\ X\times T1@>{s_{X,1}}>{}>T(X\times 1)\end{CD}\] where \(!_{Y}\colon Y\to 1\) denotes the unique map. One can then call the monad \(T\) _positive_ if this second diagram is a pullback. Upon defining _positive gs-monoidal categories_ analogously to positive Markov categories, one may conjecture that \(T\) is positive if and only if \(\operatorname{Kl}_{T}\) is positive. This would generalise the existing result for Markov categories.
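To make the pullback characterisation of Theorem 4.7 concrete, here is a minimal Haskell sketch, not taken from the paper, for the motivating family of monads \(T=A\times-\) (the writer-style monad over a commutative monoid \(A\)); the `Group` class and the function `mediate` are our own illustrative names, not standard library API. It shows that the mediating map for the associativity square exists precisely because inverses are available in \(A\).

```haskell
-- Writer-style monad T X = (A, X) over a commutative monoid A; T 1 is A itself.
-- The lax monoidal structure c of the associativity square (2):
c :: Monoid a => (a, x) -> (a, y) -> (a, (x, y))
c (p, x) (q, y) = (p <> q, (x, y))

-- Hypothetical Group class (not in base Haskell): weak affinity asks T 1 = A
-- to be a group.
class Monoid a => Group a where
  inv :: a -> a

-- Mediating map for a cone over the square (2): given
--   (t1, t23) :: (T X, T (Y, Z))  and  (t12, t3) :: (T (X, Y), T Z)
-- that become equal in T (X, Y, Z), produce a triple in T X x T Y x T Z.
mediate :: Group a
        => (a, x) -> (a, (y, z))      -- top-right corner of (2)
        -> (a, (x, y)) -> (a, z)      -- bottom-left corner of (2)
        -> ((a, x), (a, y), (a, z))
mediate (p, x) (_q, (y, _z)) (p', _xy) (q', z) =
  ((p, x), (inv p <> p', y), (q', z))
-- If p <> q = p' <> q', then p <> (inv p <> p') = p' and
-- (inv p <> p') <> q' = q, so the triple recombines to both corners;
-- the inverse is exactly what makes (2) a pullback.
```

For a monoid without inverses the mediating triple need not exist: with \(A=(\mathbb{N},+)\), the cone with \(p=2\), \(q=0\), \(p'=1\), \(q'=1\) agrees in the corner (both sums give \(2\)), yet there is no \(m\) with \(2+m=1\).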
2309.01379
MLGuard: Defend Your Machine Learning Model!
Machine Learning (ML) is used in critical highly regulated and high-stakes fields such as finance, medicine, and transportation. The correctness of these ML applications is important for human safety and economic benefit. Progress has been made on improving ML testing and monitoring of ML. However, these approaches do not provide i) pre/post conditions to handle uncertainty, ii) defining corrective actions based on probabilistic outcomes, or iii) continual verification during system operation. In this paper, we propose MLGuard, a new approach to specify contracts for ML applications. Our approach consists of a) an ML contract specification defining pre/post conditions, invariants, and altering behaviours, b) generated validation models to determine the probability of contract violation, and c) an ML wrapper generator to enforce the contract and respond to violations. Our work is intended to provide the overarching framework required for building ML applications and monitoring their safety.
Sheng Wong, Scott Barnett, Jessica Rivera-Villicana, Anj Simmons, Hala Abdelkader, Jean-Guy Schneider, Rajesh Vasa
2023-09-04T06:08:11Z
http://arxiv.org/abs/2309.01379v1
# MLGuard: Defend Your Machine Learning Model!

###### Abstract.

Machine Learning (ML) is used in critical highly regulated and high-stakes fields such as finance, medicine, and transportation. The correctness of these ML applications is important for human safety and economic benefit. Progress has been made on improving ML testing and monitoring of ML. However, these approaches do not provide i) pre/post conditions to handle uncertainty, ii) defining corrective actions based on probabilistic outcomes, or iii) continual verification during system operation. In this paper, we propose MLGuard, a new approach to specify contracts for ML applications. Our approach consists of a) an ML contract specification defining pre/post conditions, invariants, and altering behaviours, b) generated validation models to determine the probability of contract violation, and c) an ML wrapper generator to enforce the contract and respond to violations. Our work is intended to provide the overarching framework required for building ML applications and monitoring their safety.

Keywords: design by contract, error handling, system validation, ML validation

## 1. Introduction

However, how to specify the conditions to monitor and the actions to take when a condition fails is left to the developer. ML toolsuites for validation (e.g., Tensorflow Data Validation) provide a set of validators and abstractions for their specification. However, these tools only offer partial support for ML failure modes and automation. Specifically, Tensorflow supports data schema and constraint suggestions, but leaves more sophisticated validation checks such as out of distribution detection to the developer to implement and configure. Inspired by concepts from design by contract (Kang et al., 2018), we propose a new approach, MLGuard, for specifying and validating ML contracts. Our approach takes an ML Contract Specification and generates an ML wrapper with _both the code and trained ML models for validating ML contracts_. We propose an ML contract specification language with i) ML specific concepts (e.g., 'uncertain') and ii) actions to take when the ML contract fails (e.g., log warning, throw exception, propagate uncertainty). To the best of our knowledge, this is the first proposed approach for specifying and validating probabilistic contracts for ML. Although no formal guarantees can be made of the absolute safety of the ML system, our approach provides a structured semi-automated way to help developers work towards improving the safety of the ML applications they develop by automatically detecting and responding to contract violations. MLGuard is designed to provide the scaffolding for specifying and validating contracts that, optionally, include additional validation algorithms such as a data drift detector.

## 2. Motivation

To motivate our work, consider the case of software for automated epileptic seizure detection--classifying a segment of electroencephalogram (EEG) data as seizure or non-seizure. While a large number of papers have developed ML models for this task (Kang et al., 2018), these models come with (often undocumented) conditions that must be satisfied for the output produced by the model to be reliable.
This poses a safety concern: over-interpretation of EEG data (false positives) might lead to incorrect diagnosis and treatment, causing numerous medication side-effects, driving restrictions, an increased chance of mental illness, and discrimination in job opportunities, while under-diagnosis (false negatives) causes delayed treatment, which might increase the risk of mortality and other physical injuries. We elaborate on these challenges below, and propose a vision in Section 3 to address them, thereby improving the safety of ML models when deployed in real-world applications.

**No machine-checkable ML specification language**. Well-designed software components document inputs, outputs, types, pre/post conditions to be satisfied, and exceptions that may be raised. However, ML models lack a full machine-checkable specification. For example, even though an ML model accepts a vector with the same type and dimensions as the EEG data, this does not necessarily mean that the model is a suitable choice. To determine if the model is compatible with the application, one also needs to consider whether the statistical characteristics of the training data and modelling assumptions match those of the application domain. To assist in assessing the suitability of an ML model, proposals have been made for standardised documentation templates (i.e. data sheets (Bowman et al., 2017) and model cards (Kang et al., 2018)). However, documentation templates require users of the ML model to manually read and interpret the ML model documentation (if any) without providing any machine-verifiable rules for safe use.

**No mechanism to express uncertainty in validation rules.** Different electrode placements, sampling frequencies, and filtering are possible. If these do not match those of the data the ML model was trained on, the ML model will still produce a result, but it cannot be trusted. The patient demographics may also affect the accuracy of the model. For example, it should not be assumed that a model trained on EEG from adults will work as well on EEG from children. To ensure that the EEG data is compatible with the data on which the model was trained, one can make use of an out of distribution detection model to determine whether the assumptions of the model have been violated, i.e. the serving data in production should be distributed similarly to the data the model was trained on. However, this violation can only be detected probabilistically rather than with absolute certainty.

Figure 1. Our approach, consisting of 1) an ML contract specification, 2) an ML contract model trainer to generate validation ML models that determine the probability of contract violations, and 3) a generated wrapper to defend models from contract violations and trigger contract violation handling logic if violations occur.

The conditions for these contracts operate in high-dimensional latent spaces. For ML, the input and learned latent spaces are where pre/post conditions are required to verify system behaviour. Currently we lack the mathematical constructs for guaranteeing behaviour across a high-dimensional latent space.

**It is unclear how the application should respond to probabilistic violations.** Unlike traditional software, ML behaviour is dependent on training data--demonstrating correctness offline is no guarantee of the system's operating behaviour online. Thus violations of contracts need to be detected and responded to during operation rather than at design time.
Best practice recommends setting up existing alerting and monitoring infrastructure (Bradley et al., 2017). However, what is not specified is 1) how to configure alarms and alerts, and 2) how the system should respond.

## 3. A Vision for ML Contracts

We propose MLGuard for automatically validating whether incoming data conforms to an ML contract and handling violations. MLGuard is a practical approach for dealing with the limitations outlined in the Motivation. An overview of our proposed approach is presented in Fig. 1, and more detailed descriptions of each MLGuard component are provided below.

### ML Contract Meta-model and Specification

The ML Contract Meta-model provides the abstractions needed to specify ML contracts, serving as the basis for a **machine-checkable ML specification language**. For example, in addition to validating the data schema, one can specify that the model requires input data to be distributed similarly to the training data. The meta-model also provides the abstractions to define strategies for detecting probabilistic violations and for responding to probabilistic violations (the components to support this are elaborated on in the following sections). We borrow the concept of declarative definition of constraints developed by Deequ (Deequ, 2017; Deequ, 2018), and extend these ideas to allow for probabilistic conditions and to specify actions to take. This approach will enable specifications to be made regarding i) probabilistic conditions on inputs, ii) what methods will be used to detect probabilistic violations, and iii) approaches for dealing with probabilistic outcomes, for example, what should happen when the ML system reports that a condition is violated with a confidence of 55%. Software engineers write an ML Contract Specification tailored to their application needs that instantiates concepts in the meta-model. A sample contract is provided in Listing 1 using a YAML-based syntax, but in future we will explore the use of domain specific languages and fluent APIs.

### ML Contract Model Trainer

Validating compliance with the ML Contract Specification requires Validation ML Models to detect probabilistic violations of contracts. The type of Validation ML Model to use may be specified as part of the ML Contract Specification, along with configurable thresholds at which to trigger contract violation handling logic, which together form a **mechanism to express uncertainty in validation rules**. For example, to validate that an instance of the input data is from the same distribution as the data the ML model was trained on, one may make use of an out of distribution detector. In the case of the Likelihood Ratios for Out-of-Distribution Detection method (Deequ, 2018), this requires training deep-generative models to determine the probability that the data is out of distribution and correct for background statistics. The role of the ML Contract Model Trainer is to automatically train the Validation ML Models (not to be confused with the ML Model that they are guarding) according to the configuration provided in the ML Contract Specification. To support software engineers uncertain about which type of ML Validation Model to use and how to configure it, we intend to explore approaches based on AutoML (Bradley et al., 2017) to automatically select and train appropriate Validation ML Models to enforce the constraints in the ML Contract Specification when the Validation ML Model to use is left unspecified.
Our approach will also allow for threshold conditions at which to trigger a violation to be automatically learned from data and refined based on user feedback in the case the the software engineer is uncertain about which threshold value to specify. ### ML Contract Wrapper A standard approach to addressing the issue of robustness is to introduce a wrapper (Deequ, 2018) to guard against invalid behaviour. The ML Contract Code Generator selects and instantiates Contract Code Templates with information in the ML Contract Specification. The generated wrapper code includes the i) trained ML model, ii) Validation ML Models for use in pre/post conditions, and iii) code for checking pre/post conditions (using the Validation ML Models) to guard the trained model and trigger contract violation handling logic to **respond to probabilistic violations**. The wrapper can be configured (via the ML Contract Specification) to respond to contract violations in a manner appropriate to the application and nature of violation. For example, should an exception be thrown, error messages logged, or uncertainty be propagated through the system? ## 4. Research Questions The work proposed and discussed in the previous sections led us to pose the following research questions. Our plan to answering them is further discussed in section 5. * **RQ1** What are the abstractions required for specifying ML contracts? * **RQ2** What software architecture is required to enable the generation of a wrapper for enforcing ML contracts? * **RQ3** How effective is an ML contract in practice? ## 5. Future Plans Our research will progress in three phases. Phase 1 will focus on extracting concepts for the ML contract meta-model from the literature and defining the ML contract specification language. Phase 2 will involve experimental evaluation of the the ML Contract. Finally, Phase 3 will investigate the effectiveness of MLguard in an industry context. _Phase 1: ML Contract meta-model and specification language:_ To answer research question _RQ1: What are the abstractions required for specifying ML contracts?_ we will expand an existing specification language. We plan to follow an iterative approach inspired by Grounded theory (Krishnan et al., 2017) to mine concepts from the literature (both academic and grey literature). The goal of this phase is to identify the concepts for specifying ML Contracts and defining a validation plan. The expected outcome from this phase will be 1) the ML Contract meta-model, 2) an ML Contract specification language, and 3) a set of ML specific conditions for validation. _Phase 2: Experimental evaluation of ML Contracts:_ The next phase of research will involve developing a prototype of our solution to answer _RQ2: What software architecture is required to enable the generation of a wrapper for enforcing ML contracts?_ Concepts borrowed from Model Driven Engineering (MDE) will be applied to design the generators (code and trained models). Our experiment will evaluate ML Contracts against other automated and manual approaches to specifying validation logic for ML. The expected outcomes from this phase will be 1) a prototype tool MLGuard, 2) code templates for ML Contract wrappers, 3) configurations for training Validation ML models, and 4) a set of ML Wrappers generated for existing models. _Phase 3: Industry case study:_ To address _RQ3: How effective is an ML contract in practice?_ the final phase of the study will evaluate how our approach can be integrated into existing software engineering workflows. 
We plan to run a series of industry case studies where practitioners evaluate MLGuard on ML projects. The focus of the case studies will be to identify i) user-acceptance of the approach, ii) barriers to adoption, and iii) ongoing maintenance implications. ###### Acknowledgements. The research was supported by a Deakin University Postgraduate Research Scholarship (DUPR) and a National Intelligence Postdoctoral Grant (NIPG-2021-006).
2307.09289
Paranatural Category Theory
We establish and advocate for a novel branch of category theory, centered around strong dinatural transformations (herein known as "paranatural transformations"). Paranatural transformations generalize natural transformations to mixed-variant difunctors, but, unlike other such generalizations, are composable and exceptionally well-behaved. We define the category of difunctors and paranatural transformations, prove a novel "diYoneda Lemma" for this category, and explore some of the category-theoretic implications. We also develop three compelling uses for paranatural category theory: parametric polymorphism, impredicative encodings of (co)inductive types, and difunctor models of type theory. Paranatural transformations capture the essence of parametricity, with their "paranaturality condition" coinciding exactly with the "free theorem" of the corresponding polymorphic type; the paranatural analogue of the (co)end calculus provides an elegant and general framework for reasoning about initial algebras, terminal coalgebras, bisimulations, and representation independence; and "diYoneda reasoning" facilitates the lifting of Grothendieck universes into difunctor models of type theory. We develop these topics and propose further avenues of research.
Jacob Neumann
2023-07-18T14:32:27Z
http://arxiv.org/abs/2307.09289v1
# Paranatural Category Theory

###### Abstract.

We establish and advocate for a novel branch of category theory, centered around strong dinatural transformations (herein known as "paranatural transformations"). Paranatural transformations generalize natural transformations to mixed-variant difunctors, but, unlike other such generalizations, are composable and exceptionally well-behaved. We define the category of difunctors and paranatural transformations, prove a novel "diYoneda Lemma" for this category, and explore some of the category-theoretic implications. We also develop three compelling uses for paranatural category theory: parametric polymorphism, impredicative encodings of (co)inductive types, and difunctor models of type theory. Paranatural transformations capture the essence of parametricity, with their "paranaturality condition" coinciding exactly with the "free theorem" of the corresponding polymorphic type; the paranatural analogue of the (co)end calculus provides an elegant and general framework for reasoning about initial algebras, terminal coalgebras, bisimulations, and representation independence; and "diYoneda reasoning" facilitates the lifting of Grothendieck universes into difunctor models of type theory. We develop these topics and propose further avenues of research.

Key Words and Phrases: category theory, Yoneda Lemma, parametricity, inductive types, coinductive types, semantics of type theory

## 1. Introduction

The way out of this impasse, for Reynolds, "is to generalize homomorphisms from functions to relations." In the study of parametricity, it is structure-preserving _relations_ which play the central role. To understand the impasse Reynolds refers to, it is worth reiterating the summary by [Hermida et al. 2014] of the intuitive affinity between parametric polymorphism and the category-theoretic notion of **naturality**. The goal of parametric polymorphism is to have polymorphic terms in our programming language, i.e. ones which can assume different types, but to do so in a principled, regular way.
As Reynolds explains, "a parametric polymorphic function is one that behaves the same way for all types". A polymorphic function which behaves differently based on the type it assumes is _ad hoc_ polymorphic, _not_ parametric [Strachey 2000]. The basic task of parametricity is making precise what "behaving the same way" means. In the world of category theory, a natural transformation between two functors is defined by a class of maps, which must be "given simultaneously" for every object of the relevant category. More precisely, these maps are subject to a _naturality condition_ ensuring that the choice of maps is appropriately uniform, and not the result of "arbitrary choices". As Hermida et al. observe, "the pre-theoretic intuitions expressed by the pioneers of category theory and those of programming language theory match up essentially word for word: 'artificial choices' corresponds to 'ad hoc polymorphism;' 'given simultaneously' corresponds 'work on all types' and 'behave the same way;' and 'natural' corresponds to 'parametric polymorphism.' So, one might expect to have a single mathematical theory that captures the intuitions expressed in both the contexts." Why isn't there such a mathematical theory? Indeed, we have the beginnings of one: for instance, a polymorphic term of type \(\operatorname{List}\,\,\alpha\to\operatorname{List}\,\,\alpha\) (where \(\alpha\) is the "type parameter") is precisely a natural transformation from the \(\operatorname{List}\) functor to itself: naturality is exactly the "free theorem", a la [Wadler 1989]. So, in some cases, _parametricity is naturality_. The issue arises when dealing with mixed-variant operators like \(\to\), which is contravariant in one argument and covariant in the other. It is not a functor, but a **difunctor**.1 So the notion of natural transformation (which is defined only between two covariant functors or between two contravariant functors) must be generalized to difunctors. At this point, several works [Bainbridge et al. 1990; Hermida et al. 2014; Scott 2000] all rehearse the same story: the most obvious notion of a _dinatural transformation_[Dubuc and Street 1970] is not closed under composition, and therefore not suitable as a solution. Bainbridge et al. do carve out a class of composable dinatural transformations--realizable dinaturals--but otherwise the failure of dinatural transformations to compose is taken as justification to abandon the category-theoretic pursuit. Consequently, Reynolds's approach, based on _relations_ rather than functions, is followed. Footnote 1: This notion is often called a _profunctor_, but here we reserve that term for functors of the form \(\mathbb{C}^{\mathbb{P}}\times\mathbb{D}\to\operatorname{Set}\), i.e. difunctors whose positive and negative arguments can come from different categories. The present author views this as a false impasse. Certainly, the above-mentioned notion of dinatural transformation is not suited for the task, in part because of its lack of composability but, more seriously, because dinaturality doesn't match parametricity anyways [Pare and Roman 1998, 2.2]. However, this doesn't at all indicate that there isn't _some_ generalization of natural transformations to difunctors which _is_ composable and which _does_ appropriately capture parametricity. Indeed, our claim is that such a notion has indeed been identified: **strong dinatural transformations**. 
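To make the "naturality is the free theorem" point above concrete, here is a small Haskell sketch of ours, not taken from the paper; the names `r` and `freeTheoremInstance` are ours, and `reverse` merely stands in for an arbitrary parametric inhabitant of the type.

```haskell
-- A sketch (not from the paper): a polymorphic list rearranger.
r :: [a] -> [a]
r = reverse   -- any parametric definition will do

-- Free theorem / naturality: for every g :: a -> b,  map g . r == r . map g.
-- One concrete instance of the equation:
freeTheoremInstance :: Bool
freeTheoremInstance = map (+ 1) (r [1, 2, 3 :: Int]) == r (map (+ 1) [1, 2, 3])
```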
Strong dinatural transformations first appeared in [Mulry 1992], though they appear to have been independently invented as "Barr dinatural transformations" in [Pare and Roman 1998]. The latter work represents the most systematic development of strong dinaturality, including a development of the natural numbers as strong dinaturals (partially repeated here as Example 4.2) and some generalization to abstract categories. A connection between strong dinaturality and parametricity was first suggested in [Eppendahl 1999] and [Vene 2006]. Some more recent work in this vein includes (Hackett and Hutton, 2015), though no definitive statement on the relationship between parametricity and strong dinaturality is made there. Another noteworthy aspect of the strong dinaturality literature are the rich connections to initial algebras and fixpoint operators, such as in the work of (Uustalu, 2000, 2010). ### Contribution However, development of the category theory surrounding strong dinatural transformations has been lacking. For instance, there has not been (to the author's knowledge) a systematic account of the category of difuntors and strong dinatural transformations, despite it being known for thirty years now that strong dinaturals compose. The primary aim of the present work is to remedy this situation. To emphasize their correlation to parametricity and their status as a fundamental category-theoretic concept in their own right (and not a mere variation on dinatural transformations),2 we will throughout be referring to strong dinatural transformations as **paranatural transformations**, and the study centered around them as **paranatural category theory**. With the present work, we aim to establish some core results of paranatural category theory, which hopefully will serve as a base for future exploration. Footnote 2: And because the double-adjective name occasionally proves annoying Additionally, our goal is to present some ideas of what paranatural category theory is _for_. We outline three applications. First, as indicated above, parametric polymorphism is a key motivation. Though we refrain (for now) from pronouncing that _parametricity is paranaturality_, the examples we explore here will indicate that, at least in the most important cases, these ideas do seem to coincide. We take one step towards a systematic connection between parametricity and paranaturality, by outlining a paranaturality-based procedure for calculating Wadler's free theorems. Our second application of paranatural category theory will be the development of a calculus for Church encodings of (co)inductive types, combining the work of (Awodey et al., 2018) and (Uustalu, 2010). And, finally, we apply the tools of paranatural category theory towards defining a new class of categorical models of type theory, leveraging a deep analogy between presheaf categories and our new category of difuntors and paranatural transformations. ### Overview In Section2 the basics of paranatural category theory are laid out, including a Yoneda Lemma for difuntors which, to the author's knowledge, is entirely novel. In Section3, the connection between paranaturality and relational parametricity is made more explicit, and the method for calculating free theorems is described. In Section4, the connection between paranaturality and encodings of (co)inductive types is detailed; a special case of this construction forms the backbone of (Awodey et al., 2018), but otherwise this theory has only appeared in talks (Neumann, 2022; Uustalu, 2010). 
Finally, in Section5, we develop a different application of the tools of paranaturality: defining a difunctor model of dependent type theory analogous to the presheaf model (Hofmann, 1997), complete with a universe type obtained via our difunctor Yoneda Lemma (analogously to (Hofmann and Streicher, 1999)). The hope is that this model will prove useful for defining directed/parametric variants of homotopy type theory, though its precise properties are still the topic of ongoing investigation. We conclude by enumerating some possible avenues of further research. ## 2. Paranaturality We begin by establishing paranatural category theory. Throughout, we adopt an informal dependent type theory as our metatheory, e.g. writing \(x:X\) instead of \(x\in X\) to indicate that \(x\) is an element of the set \(X\). Our notation will generally match that of (Univalent Foundations Program, 2013), though we don't make use of the more elaborate features of that text, like univalence or higher inductive types. Our main motivation for using a type-theoretic metatheory is to facilitate a future formalization of this work in a typed computer proof assistant. Throughout, we use \(\equiv\) to denote _judgmental_ or _definitional_ equality and \(=\) to denote identity types, that is, _propositional equality_. We use the notations \[(x:A)\to B(x)\qquad\text{and}\qquad\sum_{x:A}B(x)\] for the dependent product and dependent sum types, and assume function extensionality. We'll occasionally use lambda-expressions to define functions, e.g. \((\lambda(x:X)\to x)\colon X\to X\) for the identity on \(X\). When defining dependent functions of many arguments, we will often omit implicit arguments which are clear from context. Additionally, for dependent sums \(\sum_{x:A}B(x)\) where \(B(x)\) is a proposition (e.g. an equality of elements of a set), we will often omit the second component when writing elements of the type, e.g. \(\operatorname{id}_{X}\colon\sum_{g:X\to X}g\circ g=g\) where \(X\) is some set. We will not make any systematic distinction between _curried_ and _uncurried_ multivariable functions, writing the arguments separated by spaces or as a tuple, whichever is more convenient. Our presentation of category theory is relatively standard. A **category**\(\mathbb{C}\) will consist of a type of objects, denoted \(|\mathbb{C}|\); _hom-sets_, denoted \(\operatorname{Hom}_{\mathbb{C}}(I,J)\) or just \(\operatorname{Hom}(I,J)\), for every \(I,J\colon\ |\mathbb{C}|\); identity morphisms denoted \(\operatorname{id}_{I}\); and composition denoted \(\circ\) (written in the traditional order, i.e. \(g\circ f\) for \(f\) followed by \(g\)), with the usual associativity and unit laws. The category \(\operatorname{Set}\) of sets and functions will be our canonical example of a category, which we will complement with further examples. ### Difunctors and Dinaturality An essential notion for us will be that of a _difunctor_. 
**Definition 2.1**.: For a given category \(\mathbb{C}\), a **difunctor**\(\Gamma\) on \(\mathbb{C}\), denoted \(\Gamma\colon\mathbb{C}^{\operatorname{op}}\times\mathbb{C}\to\operatorname{Set}\), consists of a function, the "object part": \[\Gamma\colon\ |\mathbb{C}|\to|\mathbb{C}|\to|\operatorname{Set}|\] along with two "morphism parts": \[\operatorname{map}_{\Gamma}^{-}\colon\operatorname{Hom}(I_{0},I_{1})\to\Gamma(I_{1},J)\to\Gamma(I_{0},J)\] \[\operatorname{map}_{\Gamma}^{+}\colon\operatorname{Hom}(J_{0},J_{1})\to\Gamma(I,J_{0})\to\Gamma(I,J_{1})\] subject to functoriality laws: \[(\operatorname{map}_{\Gamma}^{-}\ i_{2})\circ(\operatorname{map}_{\Gamma}^{+}\ j_{2})=(\operatorname{map}_{\Gamma}^{+}\ j_{2})\circ(\operatorname{map}_{\Gamma}^{-}\ i_{2})\] \[\operatorname{map}_{\Gamma}^{-}\ \operatorname{id}=\operatorname{id}\qquad\qquad\operatorname{map}_{\Gamma}^{+}\ \operatorname{id}=\operatorname{id}\] \[\operatorname{map}_{\Gamma}^{-}(k\circ j)=(\operatorname{map}_{\Gamma}^{-}\ j)\circ(\operatorname{map}_{\Gamma}^{-}\ k)\qquad\operatorname{map}_{\Gamma}^{+}(i\circ h)=(\operatorname{map}_{\Gamma}^{+}\ i)\circ(\operatorname{map}_{\Gamma}^{+}\ h).\]

Throughout, we'll refer to the first argument to a difunctor as its _negative_ or _contravariant_ argument, and the second argument as _positive_ or _covariant_. Difunctors can, of course, be defined to have a codomain category besides \(\operatorname{Set}\). They can also be generalized to the fruitful notion of a _profunctor_, whose positive and negative argument can come from different categories. But, for us, this definition of (set-valued) difunctors will be the correct level of generality. The canonical example of such a difunctor is the hom-set operation itself:

**Example 2.2**.: The **hom** difunctor sends objects \(I,J\) to the hom-set \(\operatorname{Hom}(I,J)\) and has morphism parts given by pre- and post-composition: for \(i_{2}\colon\operatorname{Hom}(I_{0},I_{1})\) and \(j_{2}\colon\operatorname{Hom}(J_{0},J_{1})\), \[\operatorname{map}_{\operatorname{Hom}}^{-}\ i_{2}\ h\coloneqq h\circ i_{2}\qquad\qquad\operatorname{map}_{\operatorname{Hom}}^{+}\ j_{2}\ h\coloneqq j_{2}\circ h.\]
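For readers who think in Haskell, the data of Definition 2.1 (taking \(\mathbb{C}\) to be the category of Haskell types and functions) is the familiar profunctor interface. The following is a minimal sketch of ours, not from the paper: the class name `Difunctor` is ours, the two morphism parts are fused into a single `dimap` as in the standard `Profunctor` class, and Example 2.2 becomes the instance for the function type.

```haskell
-- A sketch (not from the paper) of Definition 2.1 for the category of Haskell
-- types.  map⁻ and map⁺ are packaged as one `dimap`; the functoriality laws
-- become the identity and composition laws for dimap.
class Difunctor p where
  dimap :: (i0 -> i1) -> (j0 -> j1) -> p i1 j0 -> p i0 j1

-- Example 2.2, the hom difunctor: map⁻ is precomposition, map⁺ is postcomposition.
instance Difunctor (->) where
  dimap i2 j2 h = j2 . h . i2
```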
A difunctor which completely ignores its negative argument is known as a _covariant set-valued functor_\(\mathbb{C}\to\operatorname{Set}\), and a difunctor which ignores its positive argument is a _contravariant set-valued functor_\(\mathbb{C}^{\operatorname{op}}\to\operatorname{Set}\), commonly known as a **presheaf**. As discussed in the introduction, our first concern is generalizing the well-known notion of a **natural transformation**, which can be defined between two covariant functors or between two presheaves, to difunctors. The correct such notion is that of a paranatural transformation.

**Definition 2.3**.: Given difunctors \(\Delta,\Gamma\colon\mathbb{C}^{\operatorname{op}}\times\mathbb{C}\to\operatorname{Set}\), a family of maps \[\Delta(I,I)\xrightarrow{\phi_{I}}\Gamma(I,I)\] indexed over objects \(I\colon|\mathbb{C}|\) is a **paranatural transformation**, denoted \(\phi\colon\Delta\xrightarrow{\circ}\Gamma\), if, for all \(i_{2}\colon\operatorname{\mathsf{Hom}}_{\mathbb{C}}(I_{0},I_{1})\) and all \(d_{0}\colon\Delta(I_{0},I_{0})\), \(d_{1}\colon\Delta(I_{1},I_{1})\), \[\operatorname{map}_{\Delta}^{+}i_{2}\ d_{0}=\operatorname{map}_{\Delta}^{-}i_{2}\ d_{1}\qquad\text{implies}\qquad\operatorname{map}_{\Gamma}^{+}i_{2}\ (\phi_{I_{0}}\ d_{0})=\operatorname{map}_{\Gamma}^{-}i_{2}\ (\phi_{I_{1}}\ d_{1}).\] Diagrammatically: if the diamond commutes, so too does the hexagon. (1)

As desired, when we instantiate this definition with \(\Delta,\Gamma\) both covariant functors or both presheaves, we get the standard notion of a natural transformation. For instance, if \(\Gamma,\Delta:\mathbb{C}^{\operatorname{op}}\to\operatorname{Set}\) then the \(\operatorname{map}^{+}\) edges of Equation 1 are just the respective identity functions, and we get this: if the triangle commutes, then so too does the outer pentagon. (2) This is the same as saying that \((\operatorname{map}_{\Gamma}^{-}i_{2})\circ\phi_{I_{1}}=\phi_{I_{0}}\circ(\operatorname{map}_{\Delta}^{-}i_{2})\) for every \(i_{2}\colon\operatorname{\mathsf{Hom}}(I_{0},I_{1})\), which is the usual condition that defines \(\phi\) as a natural transformation from \(\Delta\) to \(\Gamma\). Likewise if \(\Delta\) and \(\Gamma\) are just covariant. So this is indeed a generalization of natural transformation to difunctors. Moreover, the paranaturality condition (Equation 1) is the _correct_ generalization for parametricity. The following instance, originally formulated in the relational parametricity setting by [20], will serve as our paradigm example.
**Example 2.4**: The presheaf of **boolean-valued comparison functions** has an object part that sends a set \(X\) to the set of functions \(X\times X\to 2\), and a morphism part given by precomposition. The difunctor of **list rearrangement** sends a set \(X\) to the set \(\operatorname{List}X\to\operatorname{List}X\), with the morphism parts given by pre- and post-composition using the morphism part of the List functor.3 A paranatural transformation from the boolean-valued comparison presheaf to the difunctor of list rearrangements is a family of operations \[s_{X}\colon(X\times X\to 2)\to\operatorname{\mathsf{List}}X\to\operatorname{ \mathsf{List}}X\] for each set \(X\), which is _parametrically polymorphic_ in \(X\): if \(<_{X}\colon X\times X\to 2\) and \(<_{Y}\colon Y\times Y\to 2\) are comparison functions and \(f\colon X\to Y\) is _monotone_ in the sense that \[(x\prec_{X}x^{\prime})\quad=\quad(f(x)\prec_{Y}f(x^{\prime}))\qquad\qquad\qquad \qquad\text{for all }x,x^{\prime}:X\] then, given some \(L\colon\operatorname{\mathsf{List}}X\), it makes no difference whether one applies \(s_{X}\) to \(L\) and then list-maps \(f\) over the result, or does the list-map first and then applies \(s_{Y}\): \[(\operatorname{\mathsf{map}}_{\operatorname{\mathsf{List}}}f)\circ s_{X}=s_{Y} \circ(\operatorname{\mathsf{map}}_{\operatorname{\mathsf{List}}}f).\] The main example of such a family of functions \(s\) is a **sorting function** which, for any type whatsoever, takes a comparison function on that type and a list of values of that type, and sorts the list according to the comparison function. This example still does not take advantage of the full generality of paranaturality, as the domain difunctor is just a presheaf. But it does show how paranaturality manages to encode the essential idea of parametric polymorphism: making the parametric type variable a "black box" which the polymorphic function is not able to "inspect". The only thing a polymorphic function \[s\colon(\alpha\times\alpha\to 2)\to\operatorname{\mathsf{List}}\alpha\to \operatorname{\mathsf{List}}\alpha\] should "know" about the type is the provided comparison function. Therefore, it should not be able to "notice" if we pre-map by a monotone function. This is what it means for a polymorphic function of this type to "do the same thing for every type \(\alpha\)", i.e. to be parametrically polymorphic; paranaturality gets it right. ### Paranatural Category Theory Famously, the _dinatural transformations_ of [5] do not compose: given a dinatural transformation from \(\Delta\) to \(\Gamma\) and one from \(\Gamma\) to \(\Theta\), there is not, in general, a way to define one from \(\Delta\) to \(\Theta\) that serves as their "composite". As mentioned, this fact is the generally-cited reason to abandon purely function-based approaches to parametricity and stick to logical relations. From a category-theoretic perspective, not being composable severely limits the usefulness of dinatural transformations, in particular making it impossible to form a category of difunctors and dinatural transformations, let alone try and replicate some of the amazing structure of the category of presheaves. So, if paranatural transformations are to be worth much, they must at least compose. Thankfully, they clear this bar easily. 
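Before verifying this, here is a Haskell rendering of ours (not from the paper) of Example 2.4, with `Ordering`-valued comparisons standing in for the paper's two-valued ones; the names `s`, `paranaturalInstance`, `cmpX` and `cmpY` are ours, while `sortBy` and `comparing` are standard library functions.

```haskell
import Data.List (sortBy)
import Data.Ord  (comparing)

-- A sketch (not from the paper) of Example 2.4.  The polymorphic family
--   s :: (a -> a -> Ordering) -> [a] -> [a]
-- is paranatural: whenever f is monotone, i.e.
--   cmpX x x' == cmpY (f x) (f x')   for all x, x',
-- sorting commutes with mapping f:  map f . s cmpX == s cmpY . map f.
s :: (a -> a -> Ordering) -> [a] -> [a]
s = sortBy

-- One concrete instance of the paranaturality equation:
paranaturalInstance :: Bool
paranaturalInstance = map f (s cmpX xs) == s cmpY (map f xs)
  where
    xs   = [3, 1, 2] :: [Int]
    f    = show                               -- monotone for the orders below
    cmpX = compare                            -- usual order on Int
    cmpY = comparing (read :: String -> Int)  -- order transported along f
```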
Proposition 2.5 ().: _If \(Y:\Delta\xrightarrow{\circ}\Gamma\) and \(\theta\colon\Gamma\xrightarrow{\circ}\Theta\), then their pointwise-defined composite_ \[(\theta\gamma)_{I}\coloneqq\theta_{I}\circ\gamma_{I}\quad:\Delta(I,I)\to \Theta(I,I)\] _is a paranatural transformation \(\Delta\xrightarrow{\circ}\Theta\)._ Intuitively, the "naturality _squares_" of a natural transformation are replaced by _paranaturality chevrons_. This fragment of Equation 1 (3) can be called a _commutative chevron_, where here "commutative" means that the condition from Definition 2.3 is satisfied: for any span \(\Delta(I_{0},I_{0})\gets 1\to\Delta(I_{1},I_{1})\) on the left for which the resulting diamond commutes, the resulting hexagon commutes. So, similarly to how naturality squares can be combined side-by-side (giving us the composability of natural transformations), paranaturality chevrons can be snapped together, giving us the composability of paranatural transformations. The existing literature provides several alternative characterizations of strong dinaturality, allowing us to use spans with vertices besides \(1\) in the description of commutative chevrons given in the previous paragraph. **Proposition 2.6**: _For dinutors \(\Delta,\Gamma\) and a class of maps \(\phi_{I}:\Delta(I,I)\to\Gamma(I,I)\), the following are equivalent:_ 1. \(\phi\) _is a paranatural transform: if_ \(d_{0},d_{1},i_{2}\) _are such that_ \(\text{map}_{\Delta}^{+}\ i_{2}\ d_{0}=\text{map}_{\Gamma}^{-}\ i_{2}\ d_{1}\)_, then_ \(\text{map}_{\Gamma}^{+}\ i_{2}\ (\phi_{I_{0}}\ d_{0})=\text{map}_{\Gamma}^{-}\ i_{2}\ (\phi_{I_{1}}\ d_{1})\)_._ 2. _For any set_ \(W\)_, functions_ \(w_{0}\colon W\to\Delta(I_{0},I_{0})\) _and_ \(w_{1}\colon W\to\Delta(I_{1},I_{1})\)_, and_ \(i_{2}\colon\text{Hom}_{\mathbb{C}}(I_{0},I_{1})\) _such that_ \((\text{map}_{\Lambda}^{+}\ i_{2})\circ w_{0}=(\text{map}_{\Delta}^{-}\ i_{2}) \circ w_{1}\)_, it is the case that_ \((\text{map}_{\Gamma}^{+}\ i_{2})\circ\phi_{I_{0}}\circ w_{0}=(\text{map}_{ \Gamma}^{-}\ i_{2})\circ\phi_{I_{1}}\circ w_{1}\)_._ 3. _For every_ \(i_{2}\colon\text{Hom}_{\mathbb{C}}(I_{0},I_{1})\)_, it is the case that_ \[(\text{map}_{\Gamma}^{+}\ i_{2})\circ\phi_{I_{0}}\circ p_{0}=(\text{map}_{ \Gamma}^{-}\ i_{2})\circ\phi_{I_{1}}\circ p_{1}\] _where_ \(p_{0},p_{1}\) _are the projection maps of the pullback_ \[\Delta(I_{0},I_{1})\times_{\Delta(I_{0},I_{1})}\Delta(I_{1},I_{1})\xrightarrow{ p_{0}}\Delta(I_{0},I_{0})\] \[\Delta(I_{1},I_{1})\xrightarrow{\text{map}_{\Lambda}^{-}\ i_{2}} \Delta(I_{0},I_{1}).\] The latter formulation is the definition that appears in the older references[11, 10], though more recent works often use the second (e.g. [10, 10]). Regardless, the difference is superficial (at least when Set is the codomain of the dilutors): (1) and (3) are equivalent due to the standard explicit description of pullbacks in Set. Similarly, (2) is just (1), repeated for each element of the set \(W\). However, the more category-theoretic nature of (2) and (3) provide at least some promise that this notion of paranaturality will generalize to difunctors with codomains besides Set. We leave exploration of this point to future work. A rephrasing of paranaturality which _will_ see more use in the present work--and which sheds more light on the composition of paranaturals4--involves a diagonal analogue of the _category of elements_. 
**Definition 2.7**.: For a difunctor \(\Gamma:\mathbb{C}^{\mathrm{op}}\times\mathbb{C}\to\mathrm{Set}\), define the **category of \(\Gamma\)-structures** (or **category of diagonal elements of \(\Gamma\)**), denoted \(\Gamma\)-Struct, by \[\left|\Gamma\text{-Struct}\right| \coloneqq\sum_{I:\Gamma}\Gamma(I,I)\] \[\left(\Gamma\text{-Struct}\right)\ \left(I_{0},g_{0}\right)\ \left(I_{1},g_{1}\right) \coloneqq\sum_{i_{2}:\mathrm{Hom}\left(I_{0},I_{1}\right)}\text{ map}_{\Gamma}^{+}\ i_{2}\ g_{0}=\text{map}_{\Gamma}^{-}\ i_{2}\ g_{1}\] We'll sometimes write \(i_{2}\colon(I_{0},g_{0})\to(I_{1},g_{1})\) to indicate that \(i_{2}\) is a \(\Gamma\)-Struct-homomorphism. An essential aspect of paranaturals is that they only operate on the diagonal (hence their earlier name of "strong dinatural transformations", that is, "strong _diagonal_ natural transformations"). Paranatural transformations manage a careful balance of being chiefly concerned with the diagonal, but not discarding off-diagonal data (e.g. elements of \(\Delta(I,J)\) and how the morphism parts of \(\Delta\) affect them). Ultimately, the structures we'll be interested in will live in the diagonal--hence the above definition of diagonal structures--but homomorphisms between them will be dictated by the morphism parts, which take us away from the diagonal. This tension is exhibited in the following example. **Example 2.8**.: The **wild group data difunctor**\(\mathsf{WldGrpData}\colon\mathrm{Set}^{\mathrm{op}}\times\mathrm{Set}\to \mathrm{Set}\) has object part \[\mathsf{WldGrpData}(X^{-},X^{+})\quad\coloneqq(X^{-}\times X^{-}\to X^{+}) \times X^{+}\times(X^{-}\to X^{+}) \tag{4}\] with morphism parts given by the appropriate pre- and post-compositions. The category \(\mathsf{WldGrp}\) of **wild groups** is defined as \(\mathsf{WldGrpData}\)-Struct: objects are pairs \((X,\mu_{X},e_{X},i_{X})\) with \(\mu_{X}\colon X\times X\to X\), \(e_{X}\colon X\) and \(i_{X}\colon X\to X\). Spelling out the morphism part of 2.7, we get that a wild group homomorphism \((X,\mu_{X},e_{X},i_{X})\to(Y,\mu_{Y},e_{Y},i_{Y})\) is a function \(f\colon X\to Y\) such that \[f\circ\mu_{X}=\mu_{Y}\circ(f\times f)\qquad\text{and}\qquad f(e_{X})=e_{Y} \qquad\text{and}\qquad f\circ i_{X}=i_{Y}\circ f.\] In other words, a function that preserves all the relevant structure. The structures in this example-wild groups--come from the diagonal \(\mathsf{WldGrpData}(X,X)\), but defining homomorphisms involves the off-diagonal \(\mathsf{WldGrpData}(X,Y)\), a set which otherwise is not really of interest. The fact that these are _wild groups_ as opposed to just _groups_ refers to the fact that we have not imposed the group laws of associativity, unit, and inverse. We cannot impose these as parts of the difunctor (i.e. in Equation 4), since the laws only make sense on the diagonal. Instead, we must later "carve" the category of groups out as a subcategory of the category of wild groups: the latter encodes the actual "stuff" of the construction, to which we must impose laws. We also leave to future work the question of whether these steps can be unified. The equality required for \(i_{2}\) to be a \(\Gamma\)-structure homomorphism is, of course, the same equality which appears in the definition of paranatural transformation. This observation leads to the following alternative characterization of paranaturality (a special case of which appears in [11, 2.8]). 
**Proposition 2.9**.: _For difunctors \(\Delta,\Gamma\) and a class of maps \(\phi_{I}:\Delta(I,I)\to\Gamma(I,I)\), the following are equivalent:_ 1. \(\phi\) _is a paranatural transform: if_ \(d_{0},d_{1},i_{2}\) _are such that_ \(\text{map}_{\Delta}^{+}\ i_{2}\ d_{0}=\text{map}_{\Delta}^{-}\ i_{2}\ d_{1}\)_, then_ \(\text{map}_{\Gamma}^{+}\ i_{2}\ (\phi_{I_{0}}\ d_{0})=\text{map}_{\Gamma}^{-}\ i_{2}\ (\phi_{I_{1}}\ d_{1})\)_._ 2. \(\phi\) _preserves structure homomorphisms: if_ \(i_{2}\) _is a_ \(\Delta\)_-Struct-homomorphism from_ \((I_{0},d_{0})\) _to_ \((I_{1},d_{1})\)_, then_ \(i_{2}\) _is a_ \(\Gamma\)_-Struct-homomorphism from_ \((I_{0},\phi_{I_{0}}\ d_{0})\) _to_ \((I_{1},\phi_{I_{1}}\ d_{1})\) So, for any \(\gamma\colon\Delta\xrightarrow{\circ}\Gamma\), we obtain a corresponding functor \(\underline{\gamma}\colon\Delta\text{-Struct}\to\Gamma\text{-Struct}\), which is the identity on underlying objects/morphisms and uses the components of \(\gamma\) to transform the attached structures. As the proposition above indicates, the fact that morphisms get sent to morphisms (which is certainly required to define a functor) is the content of paranaturality. This operation of "taking the corresponding functor" is itself functorial, because \(\underline{\theta}\underline{\gamma}=\underline{\theta}\circ\gamma\), where the composition on the left is the composition of paranaturals and the composition on the right is composition of functors. Viewing a paranatural transformation as a functor of structures will come in handy at several points, particularly in Section5. As mentioned in the introduction, we wish to consider the category whose objects are difunctors and whose morphisms are paranatural transformations. By analogy to the use of the notation \(\hat{\mathbb{C}}\) to denote the category of presheaves on \(\mathbb{C}\) and natural transformations, we'll adopt the following notation. **Definition 2.10**: _Write \(\hat{\mathbb{C}}\) for the category whose objects are difunctors \(\mathbb{C}^{\text{op}}\times\mathbb{C}\to\operatorname{Set}\) and whose morphisms are paranatural transformations._ For full formality, we would of course be obliged to verify that the composition operations on paranaturals is associative, that the identity functions \(\Delta(I,I)\to\Delta(I,I)\) constitute paranatural transformations, and so on. But this is all quite routine. Perhaps the most exciting aspect of \(\hat{\mathbb{C}}\) from a category-theoretic perspective is that it possesses much of the elegant structure of \(\hat{\mathbb{C}}\). Indeed, the structure on the former generalizes that of the latter to simultaneously deal with both covariance and contravariance. To exemplify this, we rehearse the paranatural analogue of a topic which plays a central role in category theory: Yoneda-style reasoning. 
**Definition 2.11**: _The **diYoneda embedding**\(\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y 
}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y }\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y}\mathbf{y} \mathbf{y}\ Observe it would be the same to define \(I/\mathbb{C}/J\) as \(\mathbf{yy}(I,J)\)-Struct. If \(\mathbb{C}\) has an initial object \(\mathbf{0}\) or a terminal object \(\mathbf{1}\), the splice category construction generalizes the construction of slice and coslice categories (respectively) from usual category theory: \[\mathbb{C}/J\cong\mathbf{0}/\mathbb{C}/J\qquad\text{and}\qquad I/\mathbb{C} \cong I/\mathbb{C}/\mathbf{1}.\] But, more relevant for our current purposes, this helps us say what it means to define paranatural transformations involving diprerementable difunctors. To define a paranatural transformation \(\alpha\colon\mathbf{yy}(I,J)\xrightarrow{\circ}\Gamma\) requires us to assign, for each \((I,J)\)-splice with vertex \(K\), an element of \(\Gamma(K,K)\), in a functorial fashion: if \(\ell\colon\operatorname{Hom}(K,L)\) is a morphism of splices from \((K,\operatorname{from}_{K},\operatorname{into}_{K})\) to \((L,\operatorname{from}_{L},\operatorname{into}_{L})\), then it needs to be a morphism of \(\Gamma\)-structures from \(\alpha_{K}(\operatorname{from}_{K},\operatorname{into}_{K})\) to \(\alpha_{L}(\operatorname{from}_{L},\operatorname{into}_{L})\). We're now able to state our key result: a new Yoneda Lemma for difunctors. 
**Lemma 2.13** (diYoneda Lemma).: _For any \(\Gamma\colon\mathbb{C}^{\mathrm{op}}\times\mathbb{C}\to\operatorname{Set}\), there is a paranatural isomorphism5_ Footnote 5: That is, a pair of paranatural transformations in either direction, whose composites are the identity transformations; or, equivalently, a paranatural transformation all of whose components are bijections. \[\Gamma(I,J)\cong(\mathbf{yy}(J,I)\xrightarrow{\circ}\Gamma).\]

Note that, on the right-hand side, the arguments to \(\mathbf{yy}\) are flipped: this is because \(\mathbf{yy}\) is itself in negative position, so the arguments must be flipped so that both sides are still contravariant in \(I\) and covariant in \(J\). More explicitly: the morphism parts of the right-hand side are given as follows. \[(\operatorname{map}^{-}_{\text{RHS}}\ (i_{2}\colon\operatorname{Hom}(I_{0},I_{1}))\ (\phi\colon\mathbf{yy}(J,I_{1})\xrightarrow{\circ}\Gamma))_{K} :\operatorname{Hom}(J,K)\times\operatorname{Hom}(K,I_{0})\to\Gamma(K,K)\] \[(\operatorname{map}^{-}_{\text{RHS}}\ i_{2}\ \phi)_{K}\ (\operatorname{into}_{K},\operatorname{from}_{K}) :\equiv\phi_{K}(\operatorname{into}_{K},\,i_{2}\circ\operatorname{from}_{K})\] \[(\operatorname{map}^{+}_{\text{RHS}}\ (j_{2}\colon\operatorname{Hom}(J_{0},J_{1}))\ (\psi\colon\mathbf{yy}(J_{0},I)\xrightarrow{\circ}\Gamma))_{K} :\operatorname{Hom}(J_{1},K)\times\operatorname{Hom}(K,I)\to\Gamma(K,K)\] \[(\operatorname{map}^{+}_{\text{RHS}}\ j_{2}\ \psi)_{K}\ (\operatorname{into}_{K},\operatorname{from}_{K}) :\equiv\psi_{K}(\operatorname{into}_{K}\circ j_{2},\,\operatorname{from}_{K})\] So the right-hand side indeed is difunctorial in \((I,J)\), hence it makes sense to speak of paranatural transformations between it and the difunctor \(\Gamma\). With this established, we can prove the diYoneda Lemma. Thankfully, we don't need anything too novel: the proof is the same as the proof of the original Yoneda Lemma, _mutatis mutandis_.

Proof.: For a given \(g:\Gamma(I,I)\), the corresponding paranatural transformation \(\alpha_{g}\colon\mathbf{yy}(I,I)\xrightarrow{\circ}\Gamma\) is given by \[(\alpha_{g})_{K} :\operatorname{Hom}(I,K)\times\operatorname{Hom}(K,I)\to\Gamma(K,K)\] \[(\alpha_{g})_{K}\ (\operatorname{into}_{K},\operatorname{from}_{K}) :\equiv\operatorname{map}^{+}_{\Gamma}\ \operatorname{into}_{K}\ (\operatorname{map}^{-}_{\Gamma}\ \operatorname{from}_{K}\ g).\] Conversely, given a \(\phi\colon\mathbf{yy}(I,I)\xrightarrow{\circ}\Gamma\), obtain an element of \(\Gamma(I,I)\) by applying \(\phi_{I}\) to the pair of identity morphisms \((\operatorname{id}_{I},\operatorname{id}_{I})\).

To indicate how this "diYoneda Lemma" is used, we can show that \(\overleftarrow{\mathbb{C}}\) is automatically a cartesian closed category, irrespective of whatever structure \(\mathbb{C}\) possesses. The analogous result for \(\hat{\mathbb{C}}\) is a standard result of basic category theory, and the author suspects (though has not verified yet) that \(\overleftarrow{\mathbb{C}}\) can also be shown to be a (co)complete elementary topos, and moreover that diYoneda preserves whatever limits exist in \(\mathbb{C}\), just as is the case with the Yoneda embedding into \(\hat{\mathbb{C}}\). The terminal object and binary products in \(\overleftarrow{\mathbb{C}}\) are precisely what one might expect: the constant-\(1\) difunctor and the pointwise binary product, respectively. 
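As a quick sanity check, the diagonal component of the diYoneda Lemma can be transcribed into Haskell, where a universally quantified type variable plays the role of the splice vertex and parametricity supplies the paranaturality needed for the round trip. This is only a sketch; the class and function names below are ours, not the paper's.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The two morphism parts of a difunctor (repeated so this block stands alone).
class Difunctor p where
  mapNeg :: (i0 -> i1) -> p i1 j -> p i0 j
  mapPos :: (j0 -> j1) -> p i j0 -> p i j1

-- Diagonal instance of Lemma 2.13: an element of p i i is the same data as a
-- family sending each (i,i)-splice with vertex k (maps into :: i -> k and
-- from :: k -> i) to an element of p k k.
toSplices :: Difunctor p => p i i -> (forall k. (i -> k) -> (k -> i) -> p k k)
toSplices g into from = mapPos into (mapNeg from g)   -- this is alpha_g

fromSplices :: Difunctor p => (forall k. (i -> k) -> (k -> i) -> p k k) -> p i i
fromSplices phi = phi id id                           -- apply at the identity splice

-- fromSplices (toSplices g) == g by the difunctor laws; the other round trip
-- is precisely the paranaturality of phi, which parametricity provides.
```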
Where diYoneda sees interesting application is in determining what exponential objects must be (just like the Yoneda Lemma tells us what exponentials in \(\hat{\mathbb{C}}\) must be (MacLane and Moerdijk, 2012, I.6)). Yoneda-style reasoning proceeds in the following manner: assume the thing exists with its desired universal property; use the Yoneda Lemma and said property to transform it, eliminating reference to the purported thing; then use the result as a definition and verify it indeed has the property. So let's do this. Given \(\Delta,\Gamma\), suppose their exponential \(\Gamma^{\Delta}\) exists. Then, \[\Gamma^{\Delta}(I,J) \cong\mathbf{y}\mathbf{y}(J,I)\xrightarrow{\diamond}\Gamma^{\Delta}\] (diYoneda Lemma) \[\cong\mathbf{y}\mathbf{y}(J,I)\times\Delta\xrightarrow{\diamond}\Gamma\] (desired property) So now we turn this into a definition: \(\Gamma^{\Delta}(I,J)\) is _defined_ as the set of paranatural transformations from \(\mathbf{y}\mathbf{y}(J,I)\times\Delta\) to \(\Gamma\). We can expand this definition into a more explicit description6 if desired. Footnote 6: An element of \(\Gamma^{\Delta}(I,J)\) is a family of maps \(\psi\), where for each \(K\colon\ |C|\), \(\psi_{K}\) accepts "\(\Delta\)-tagged \((J,I)\) splices with vertex \(K^{*}\)(that is, triples \((\text{into}_{K},\text{from}_{K},d)\) where \(d\colon\ \Delta(K,K)\)) and sends them to \(\Gamma\)-structures on \(K\) (i.e. \(\psi_{K}(\text{into}_{K},\text{from}_{K},d)\colon\Gamma(K,K)\)) which is paranatural: if \(\ell\colon\text{Hom}(K,L)\) is a \((J,I)\)-splice homomorphism from \((K,\text{into}_{K},\text{from}_{K})\) to \((L,\text{into}_{L},\text{from}_{L})\) and _also_ a \(\Delta\)-structure homomorphism from \((K,d)\) to \((L,d^{\prime})\), then \[\text{map}_{\Gamma}^{*}\ \ell\ (\psi_{K}(\text{into}_{K},\text{from}_{K},d))= \text{map}_{\Gamma}^{-}\ \ell\ (\psi_{L}(\text{into}_{L},\text{from}_{L},d^{\prime}))\] i.e. \(\ell\) is a \(\Gamma\)-structure homomorphism from \(\psi_{K}(\text{into}_{K},\text{from}_{K},d)\) to \(\psi_{L}(\text{into}_{L},\text{from}_{L},d^{\prime})\). One can confirm this satisfies the universal property of exponentials (we omit this for space). The (di)Yoneda trick works yet again! ## 3. Parametric Naturality Now we turn our attention to _using_ paranatural category theory. As described in the introduction, a key motivation for this theory will be the establishment of a function-based articulation of the abstraction/"data hiding" involved in parametric polymorphism. Example 2.4 already gave us a flavor of how paranatural transformations capture this idea, and how Wadler's free theorems are, at heart, just instances of the paranaturality condition. But in this section we'll aim to be more systematic. Let us repeat the famous opening of (Wadler, 1989), with only a few minor adjustments: _write down the definition of a polymorphic type (of at most one polymorphic variable) on a piece of paper. Tell me its type, but be careful to not let me see the function's definition. I will tell you a theorem that the function satisfies. The purpose of this section is to explain this trick._ Like Wadler, we'll develop our 'trick' to work on Girard/Reynolds-style System F (Girard, 1972, 1986; Reynolds, 1974, 1983). To distinguish from our metatheory, we'll use teletype font to denote the syntax of System F, e.g. writing unit and bool for the singleton type and boolean type, respectively, in System F, and writing T1 -> T2 for the type of functions from T1 to T2. 
The trick will be to convert type expressions in a single polymorphic type variable into difunctors, and then obtain the free theorem as the paranaturality condition of a paranatural transformation into said difunctor. This is the reason for the restriction to only a single polymorphic type variable: a difunctor only encodes positive and negative dependence on a single argument. As we'll indicate at the end of this section, this restriction can be lifted by developing appropriate notions of multivariable difunctors and multi-paranatural transformations, but we avoid doing so here for simplicity. Let's first develop the case that we're mainly interested in: polymorphic types of the form \[\forall\alpha.\ \texttt{T1 -> T2}\] where T1 and T2 are type expressions, possibly containing \(\alpha\). If we have a type of the form \(\forall\alpha.\texttt{T}\) where the topmost connective of T is _not_ an arrow, we can equivalently replace T with unit -> T and carry on. So the only real constraint is the above-mentioned restriction to single-variable polymorphism. To obtain our free theorem, we translate T1 and T2 into difunctors \(\llbracket\texttt{T1}\rrbracket,\llbracket\texttt{T2}\rrbracket:\mathsf{Set}^{\text{op}}\times\mathsf{Set}\to\mathsf{Set}\) according to their dependence on \(\alpha\). The following examples are sufficiently illustrative of the procedure for doing this for an arbitrary System F type expression T which may contain \(\alpha\).

* If T does not, in fact, contain \(\alpha\) (i.e. it is a closed type, like unit or list(nat)), then \(\llbracket\texttt{T}\rrbracket\) is just the constant difunctor returning the set of terms of T.
* \(\llbracket\alpha\rrbracket\) is the identity functor, taken as a difunctor which ignores its contravariant argument.
* \(\llbracket\alpha\ \texttt{->}\ \texttt{bool}\rrbracket\) is the presheaf whose object part sends \(X\) to the set of functions \(X\to 2\) and whose morphism part is given by precomposition.
* \(\llbracket(\alpha\ \texttt{->}\ \alpha)\ \texttt{->}\ \alpha\rrbracket\) is the difunctor whose object part sends \((I,J)\) to the set of functions \((J\to I)\to J\), with morphism parts by composition, e.g. \((\mathsf{map}^{+}\ j_{2})\colon((J_{0}\to I)\to J_{0})\to(J_{1}\to I)\to J_{1}\) sends \(w:(J_{0}\to I)\to J_{0}\) and \(x\colon J_{1}\to I\) to \(j_{2}(w\ (x\circ j_{2}))\colon J_{1}\).
* \(\llbracket\texttt{list}\ \alpha\ \texttt{->}\ \texttt{list}\ \alpha\rrbracket\) is the _list rearrangement_ difunctor of Example 2.4.

And so on. The general pattern is that anything appearing to the left of an arrow has its variances flipped, and morphism parts are ultimately given by composition of some form (perhaps run through an endofunctor like List). With that, here's _our_ notion of parametricity.

**Principle 3.1**: _Every term of type \(\forall\alpha.\ \texttt{T1 -> T2}\) constitutes a paranatural transformation from \(\llbracket\texttt{T1}\rrbracket\) to \(\llbracket\texttt{T2}\rrbracket\)._

This is stated as an informal principle rather than a theorem because we can only determine its validity or falsity with respect to a given model of System F, and because we are, at this point, only conjecturing it as a way of stating parametricity. As our examples show, the conditions we obtain from this principle end up being exactly the free theorems which can be obtained from the relational formulation of parametricity, lending credence to the belief that this notion of parametricity applies in any model where relational parametricity holds. 
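To make the translation table concrete, here is a hypothetical Haskell rendering of the fourth bullet's difunctor, together with an executable instance of the free theorem that Principle 3.1 yields for the simplest list-rearranging type. The names are illustrative inventions, and the snippet is a sketch rather than part of the formal development.

```haskell
-- The fourth bullet, [[ (alpha -> alpha) -> alpha ]]: object part
-- (i, j) |-> ((j -> i) -> j), morphism parts by composition.
newtype FixLike i j = FixLike ((j -> i) -> j)

mapPosFix :: (j0 -> j1) -> FixLike i j0 -> FixLike i j1
mapPosFix j2 (FixLike w) = FixLike (\x -> j2 (w (x . j2)))

mapNegFix :: (i0 -> i1) -> FixLike i1 j -> FixLike i0 j
mapNegFix i2 (FixLike w) = FixLike (\x -> w (i2 . x))

-- Principle 3.1 for the type  forall a. list a -> list a : any such term is a
-- paranatural transformation from List to List, and its paranaturality
-- condition is Wadler's free theorem  map f . r == r . map f.
rearrange :: [a] -> [a]
rearrange = reverse          -- any parametric definition would do

freeTheoremInstance :: Bool  -- evaluates to True
freeTheoremInstance =
  let f  = show :: Int -> String
      xs = [1, 2, 3] :: [Int]
  in map f (rearrange xs) == rearrange (map f xs)
```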
It remains to future work to fully sketch the connection between both notions of parametricity, and to determine whether they are indeed the same. Let us demonstrate the use of this principle. We've had our first example already, Example 2.4, which gave the free theorem for the type \(\forall\alpha.\ (\alpha*\alpha\ \texttt{->}\ \texttt{bool})\ \texttt{->}\ \texttt{list}(\alpha)\ \texttt{->}\ \texttt{list}(\alpha)\). Here are two more standard examples, which further demonstrate the method.

**Example 3.1**: _For the type_ \[\forall\alpha.\ \texttt{list}\ \alpha\ \texttt{->}\ \texttt{list}\ \alpha\ \texttt{->}\ \texttt{list}\ \alpha,\] _our T1 is \(\texttt{list}(\alpha)\) and our T2 is \(\texttt{list}\ \alpha\ \texttt{->}\ \texttt{list}\ \alpha\), so \(\llbracket\texttt{T1}\rrbracket\) is just the List functor itself and \(\llbracket\texttt{T2}\rrbracket\) is the list rearrangement difunctor. So the paranaturality condition of \(\mathit{app}\colon\llbracket\texttt{T1}\rrbracket\stackrel{{\circ}}{{\to}}\llbracket\texttt{T2}\rrbracket\) says that if \(f\colon X\to Y\) and \(\mathsf{map}_{\text{List}}\ f\ xs=ys\), then_ \[(\mathsf{map}_{\text{List}}\ f)\circ(\mathit{app}_{X}\ xs)=(\mathit{app}_{Y}\ ys)\circ(\mathsf{map}_{\text{List}}\ f)\] _as functions of type \(\text{List}(X)\to\text{List}(Y)\). This is just a slightly different phrasing from Wadler's._

**Example 3.2**: _For the type \(\forall\alpha.\alpha\), i.e. \(\forall\alpha.\ \texttt{unit ->}\ \alpha\), Principle 3.1 says that if \(s:\forall\alpha.\alpha\), then, for any types \(X\) and \(Y\) and any function \(f\colon X\to Y\),_ \[f(s_{X})=s_{Y}.\] _But taking \(f\) to be the "swapping" function \(2\to 2\) (which has no fixed point) shows this to be absurd, hence this free theorem tells us \(\forall\alpha.\alpha\) must be the empty type._

So we can see that, for these standard examples of polymorphic types, relational parametricity and strong dinaturality coincide. As mentioned, further work is needed to clarify whether this is true in general. But, regardless of the outcome of this investigation, it is clear that the theory of paranaturality outlined in Section 2 has a significant connection to the theory of parametricity, and that the latter would undoubtedly benefit from further research into paranatural category theory.

To conclude this section, let us briefly indicate how to lift the single-variable restriction above, and thereby be able to apply this method to, for example, the polymorphic zip type \(\forall\alpha\forall\beta.\ (\texttt{list}(\alpha)*\texttt{list}(\beta))\ \texttt{->}\ \texttt{list}(\alpha*\beta)\). To do this in full generality, we'll need a theory of _multi-variable difunctors_. We already understood the diYoneda embedding (Definition 2.11) as a kind of 2-variable difunctor: it takes in four objects in its object part, and has four morphism parts. Since we wanted to supply only some of the arguments at any given time, we expressed this as a difunctor whose codomain is \(\overleftarrow{\mathbb{C}}\) instead of \(\operatorname{Set}\). To iterate this, we would need to define a notion of paranatural transformation of 2-variable difunctors, then define 3-variable difunctors as difunctors into the category of 2-variable difunctors, and so on. Nothing about this theory seems to be too burdensome or surprising; we only omit it here because it would lead us too far afield for our purposes.

## 4. Impredicative Encodings of (Co)inductive Types 
The theory of paranaturality laid out in Section 2 also connects in an elegant way to another important theoretical topic in functional programming: the category-theoretic semantics of (co)inductive types as initial algebras and terminal coalgebras of endofunctors. Our goal for the present section will be to sketch these connections, beginning with inductive types.

### Structural Ends and Inductive Types

In the previous sections, we did not pay attention to "size issues", i.e. whether certain collections (particularly the collection of paranatural transformations between two difunctors) constituted a _set_, or whether some machinery of "small" sets vs. "large" sets, or sets vs. proper classes, or a hierarchy of Grothendieck universes was required. In this section, we will be more careful. The paranatural apparatus we'll be using to study initial algebras (insofar as they exist) will be the following notion, which is to paranatural transformations what _ends_ are to _dinatural transformations_ (see (Mac Lane, 1978, IX.4-5) and (Bainbridge et al., 1990, Sect. 1)).

**Definition 4.1**: For a difunctor \(\Gamma\colon\operatorname{Set}^{\mathrm{op}}\times\operatorname{Set}\to\operatorname{Set}\), we say that a set \(\mu_{\Gamma}\) is the **structural end** of \(\Gamma\) and write \[\mu_{\Gamma}=\int_{X\colon\operatorname{Set}}\Gamma(X,X)\ \mathbf{d}X\] if \(\mu_{\Gamma}\) is the equalizer of the parallel arrows7 \[\Big(((I,g)\colon\Gamma\text{-Struct})\to I\Big)\ \rightrightarrows\ \Big(\big(i_{2}\colon\operatorname{Hom}_{\Gamma\text{-Struct}}\ (I_{0},g_{0})\ (I_{1},g_{1})\big)\to I_{1}\Big) \tag{5}\] Footnote 7: For reasons of space, we’ve made the quantification over \((I_{0},g_{0})\) and \((I_{1},g_{1})\) implicit on the right-hand side of Equation 5. The full right-hand side is: \[((I_{0},g_{0})\colon\Gamma\text{-Struct})\ \to\ ((I_{1},g_{1})\colon\Gamma\text{-Struct})\ \to\ \big(i_{2}\colon\operatorname{Hom}_{\Gamma\text{-Struct}}\ (I_{0},g_{0})\ (I_{1},g_{1})\big)\ \to\ I_{1}\] These two arrows are given by \[\lambda\ \varphi\ (I_{0},g_{0})\ (I_{1},g_{1})\ i_{2}\to i_{2}\left(\varphi_{(I_{0},g_{0})}\right)\] \[\lambda\ \varphi\ (I_{0},g_{0})\ (I_{1},g_{1})\ i_{2}\to\varphi_{(I_{1},g_{1})}.\]

Since this equalizer is taking place in \(\operatorname{Set}\), we have a definite description of it: \[\mu_{\Gamma}\ :=\ \sum_{\varphi\colon\ ((I,g)\colon\Gamma\text{-Struct})\to I}(\operatorname{map}_{\Gamma}^{+}\ i_{2}\ g_{0}=\operatorname{map}_{\Gamma}^{-}\ i_{2}\ g_{1})\to\left(i_{2}\left(\varphi_{(I_{0},g_{0})}\right)=\varphi_{(I_{1},g_{1})}\right). \tag{6}\] We've made the quantification of \((I_{0},g_{0}),(I_{1},g_{1})\), and \(i_{2}\) implicit here (for space), but spelled out explicitly the \(\Gamma\)-Struct homomorphism condition. The reason for the phrasing given in Definition 4.1 is to avoid assuming the existence of such a \(\mu_{\Gamma}\), since there's no guarantee _a priori_ that the expression on the right-hand side, which quantifies over all sets, defines a set (under pain of Russell's paradox). We could remedy this issue by restricting \(\Gamma\)'s codomain to a universe of "small sets" (the approach taken in the next section), but for the moment we're more concerned with those situations where \(\Gamma\) has a structural end in the same "universe" as its elements. The most famous instance of this is the following example.

**Example 4.2**: Let \(H^{N}:\operatorname{Set}^{\mathrm{op}}\times\operatorname{Set}\to\operatorname{Set}\) be the difunctor with object part \[H^{N}(X^{-},X^{+})\ :\equiv\ X^{+}\times(X^{-}\to X^{+})\] with morphism parts by composition. 
We'll call the category of \(H^{N}\)-structures \(N\)**-algebras**: \[|N\text{-Alg}|\mathrel{\mathop{:}}\equiv\sum_{X\colon\operatorname{Set}}X \times(X\to X).\] The set \(\mathbb{N}\) of natural numbers, equipped with the injection \[\lambda\ n\ (X,x,s)\to s^{n}(x)\quad:\quad\mathbb{N}\to\left((X,x,s)\colon N \text{-Alg}\right)\to X \tag{7}\] is the structural end of \(H^{N}\). * For any homomorphism of \(N\)-algebras \(f\colon\ (X,x,s)\to(Y,y,t)\) (i.e. a function \(f\colon X\to Y\) such that \(f(x)=y\) and \(f\circ s=t\circ f\)), it is indeed the case that, for any \(n\colon\mathbb{N}\), \[f(s^{n}\ x)=t^{n}\ y,\] so the map given in Equation 7 equalizes the parallel arrows of Equation 5. * If \(\zeta\colon Z\to\left((X,x,s)\colon N\text{-Alg}\right)\to X\) is any other map such that \[f(\zeta\ z\ (X,x,s))=\zeta\ z\ (Y,y,t)\] (8) for every \(z\colon Z\) and homomorphism \(f\colon\ (X,x,s)\to(Y,y,t)\), then define a map \(h\colon Z\to\mathbb{N}\) by \[h(z)\mathrel{\mathop{:}}\equiv\zeta\ z\ (\mathbb{N},0,\operatorname{succ}).\] Then, for any \(z\) and any \((Y,y,t)\), \[\left((\lambda\ n\ (X,x,s)\to s^{n}(x))\circ h\right)\ z\ (Y,y,t)\] \[\equiv t^{(z)}\ y\] \[\equiv t^{(\zeta\ z\ (\{\mathbb{I},0,\operatorname{succ}\}))}\ y\] \[=\zeta\ z\ (Y,y,t).\] (Equation 8, with \[f\equiv\lambda\ n\to t^{n}y\] ) We know that \(\lambda\ n\to s^{n}x\) is a \(N\)-Alg-homomorphism from \(({\mathbb{N},0,\operatorname{succ}\})\) to \((X,x,s)\), because \(s^{\operatorname{succ}(n)}\equiv s\circ s^{n}\) and \(s^{0}x\equiv x\), so the application of Equation 8 is legitimate. Therefore, conclude \[\zeta=\left(\lambda\ n\ (X,x,s)\to s^{n}x\right)\circ h,\] satisfying the universal property of the equalizer.8 Footnote 8: For full formality, we need to show that this factorization \(h\) of \(\zeta\) is unique, but this follows easily from the injectivity of the map \(\lambda\ n\ (X,x,s)\to s^{n}x\). The ideas rehearsed in this example are, of course, the main ingredients that go into understanding \({\mathbb{N}}\) as the initial algebra of the "maybe" endofunctor \(N(X)\coloneqq\mathbb{1}+X\), just spelled out using the calculus of difunctors and structural ends. If we were to _define_\({\mathbb{N}}\) as \(\int_{X:\ \operatorname{Set}}X\times(X\to X)\ \mathbf{d}X\), then this would be an inherently _impredicative_ definition, because it's essential that we be able to instantiate any of these \(\varphi\colon\left((X,x,s)\colon N\text{-Alg}\right)\to X\) with the \(N\)-algebra \(({\mathbb{N},0,\operatorname{succ}\})\) itself (as we must do above to prove the universal property). This therefore gives us an **impredicative encoding** of the inductive type \({\mathbb{N}}\), the same one given in [1]. Like Awodey et al., we can generalize this result to one about algebras of an arbitrary endofunctor. **Definition 4.3**: Let \(T\colon\operatorname{Set}\to\operatorname{Set}\) be an endofunctor. Define the \(T\)**-algebraic difunctor**\(H^{T}\) to have object part \[H^{T}(X^{-},X^{+})\coloneqq T(X^{-})\to X^{+}\] with \(\operatorname{map}_{H^{T}}^{-}x_{2}\) given by precomposition by \(T(x_{2})\) and \(\operatorname{map}_{H^{T}}^{+}\) given by postcomposition. The category of \(H^{T}\)-structures is known in the literature as the **category of \(T\)-algebras**: a \(T\)-algebra is a set \(X\) equipped with a function \(T(X)\to X\), and a \(T\)-algebra homomorphism from \((X,u)\) to \((Y,v)\) is a function \(f\colon X\to Y\) such that \(f\circ u=v\circ T(f)\). 
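Before moving on to initiality in general, here is a Haskell sketch of Example 4.2 and of \(T\)-algebras, in which an impredicative `forall` models the structural end and parametricity stands in for the equalizer condition. This is only an illustration under that assumption; the names are not the paper's.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Example 4.2: a value of this type picks, for every N-algebra (x, z, s),
-- an element of x.  Parametricity of the forall plays the role of the
-- equalizer condition, so this models the structural end of H^N, i.e. the
-- Church encoding of the natural numbers.
newtype ChurchNat = ChurchNat (forall x. x -> (x -> x) -> x)

zeroC :: ChurchNat
zeroC = ChurchNat (\z _s -> z)

succC :: ChurchNat -> ChurchNat
succC (ChurchNat n) = ChurchNat (\z s -> s (n z s))

runNat :: ChurchNat -> x -> (x -> x) -> x   -- cf. the inclusion in Equation (7)
runNat (ChurchNat n) = n

-- The same pattern for an arbitrary endofunctor t, i.e. for the category of
-- t-algebras just defined: a candidate (under the parametricity assumption)
-- for the initial t-algebra discussed next.
newtype Mu t = Mu (forall x. (t x -> x) -> x)

recAlg :: (t x -> x) -> Mu t -> x           -- the homomorphism (rec u)
recAlg u (Mu phi) = phi u

inT :: Functor t => t (Mu t) -> Mu t        -- the algebra structure on Mu t
inT w = Mu (\u -> u (fmap (recAlg u) w))
```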
An **initial \(T\)-algebra** is a \(T\)-algebra \((\mu_{T},\operatorname{in}_{T})\) such that, for every \(T\)-algebra \((X,u)\), there exists a unique \(T\)-algebra homomorphism \((\operatorname{rec}\ u)\colon(\mu_{T},\operatorname{in}_{T})\to(X,u)\). **Proposition 4.4**: _For an endofunctor \(T\), the following are equivalent_ * _The structural end_ \(\int_{X:\ \operatorname{Set}}T(X)\to X\ \mathbf{d}X\) _exists_ * \(T\) _has an initial algebra_ Indeed, the underlying set \(\mu_{T}\) of the initial algebra-when it exists-is precisely the structural end \(\int_{X:\ \operatorname{Set}}T(X)\to X\ \mathbf{d}X\). Moreover, the required \(T\)-algebra homomorphism \((\operatorname{rec}\ u)\colon\mu_{T}\to X\) for any \(T\)-algebra \((X,u)\) takes \(\varphi\colon\mu_{T}\) to \(\varphi_{(X,u)}\).9 And we can even give an explicit description of the \(T\)-algebra structure which \(\mu_{T}\) is equipped with: Footnote 9: Here, and henceforth, we’re making the map from \(\mu_{T}\) to \(((X,u)\colon T\text{-Alg})\to X\) implicit, since we’ll generally think of the former as a subset of the latter, as in Equation 6. Only for a specific example where we have established notation for \(\mu_{T}\), like Example 4.2 above, is it helpful to make this inclusion map explicit. \[\operatorname{in}_{T}\coloneqq\lambda\ (\omega\colon T(\mu_{T}))\ \left((X,u)\colon T\text{-Alg}\right)\ \to u(T(\operatorname{rec}\ u)\ \omega)\quad:\quad T(\mu_{T})\to\mu_{T}.\] So this framework of structural ends of difunctors connects cleanly to the logic of initial algebras, and the key results from that well-studied topic (e.g. [1, 10]) can be employed. Let us conclude this subsection by noting that Definition 4.1 admits a nice generalization. **Definition 4.5**.: For difintors \(\Gamma,\Theta\colon\mathbb{C}^{\mathrm{op}}\times\mathbb{C}\to\mathsf{Set}\), we say that a set \(\mu\) is the **structural end of \(\Gamma\) with respect to \(\Theta\)** and write \[\mu=\int_{X\colon\mathsf{Set}}\Gamma(X,X)\ \mathbf{d}\Theta(X,X)\] if \(\mu\) is the equalizer of the parallel arrows \[\left(\left((I,g)\colon\Gamma\text{-Struct}\right)\to\Theta(I,I)\right) \xrightarrow{\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad ### Costructure Integrals and Coinductive Types As with most major concepts in category theory, the notion of _ends_ were 
introduced along with their dual notion, _coends_. Since Definition4.1 is phrased in terms of _equalizers_, they are ripe for dualization in terms of _coequalizers_. **Definition 4.7**.: For a difunctor \(\Gamma\colon\operatorname{Set}^{\operatorname{op}}\times\operatorname{Set} \to\operatorname{Set}\), we say that a set \(v_{\Gamma}\) is the **structural coend** of \(\Gamma\) and write \[v_{\Gamma}=\int^{X\colon\operatorname{Set}}\Gamma(X,X)\ \mathbf{p}X\] if \(v_{\Gamma}\) is the coequalizer of the parallel arrows (10) given by \[\lambda\ (i_{2},x) \to((l_{0},g_{0}),x)\] \[\lambda\ (i_{2},x) \to((l_{1},g_{1}),i_{2}(x)).\] As with the dual \(\mu_{\Gamma}\), this set \(v_{\Gamma}\) admits an explicit definition, with the attached caveat that its existence is not guaranteed. In the category Set, coequalizers are given by _quotients_. Specifically, this coequalizer is the type \(\sum_{(I,g)\colon\Gamma\text{-Struct}}I\) of "pointed \(\Gamma\) structures", quotiented by bisimilarity. **Definition 4.8**.: The binary relation of **bisimilarity** on pointed \(\Gamma\)-structures \[\operatorname{Bisim}_{\Gamma}\subseteq\left(\sum_{(I,g)\colon\Gamma\text{- Struct}}I\right)\times\left(\sum_{(I,g)\colon\Gamma\text{-Struct}}I\right)\] is the equivalence relation generated by \[(i_{2}\colon\operatorname{Hom}_{\Gamma\text{-Struct}}\ (I_{0},g_{0})\ (I_{1},g_{1}))\quad \to\quad(x_{0}\colon I_{0})\quad\to\quad\operatorname{Bisim}_{\Gamma}\ (I_{0},g_{0},x_{0})\ (I_{1},g_{1},i_{2}(x_{0}))\] In short: two points are bisimilar if there's a homomorphism connecting them. The question of existence here is whether the bisimilarity relation "glues together" enough pointed \(\Gamma\)-structures to obtain a "small" set. Similarly to how endofunctor algebras provided us concrete examples of exist structural ends, endofunctor _coalgebras_ will furnish examples of existent structural coends. **Definition 4.9**.: Let \(T\colon\operatorname{Set}\to\operatorname{Set}\) be an endofunctor. Define the \(T\)**-coalgebraic difunctor**\(H_{T}\) to have object part \[H_{T}(X^{-},X^{+})\coloneqq X^{-}\to T(X^{+}).\] The category of \(H_{T}\)-structures is known as the **category of \(T\)-coalgebras**: a \(T\)-coalgebra is a set \(X\) equipped with a function \(X\to T(X)\), and a \(T\)-coalgebra homomorphism from \((X,u)\) to \((Y,v)\) is a function \(f\colon X\to Y\) such that \(T(f)\circ u=v\circ f\). An **terminal \(T\)-coalgebra** is a \(T\)-coalgebra \((v_{\Gamma},\operatorname{out}_{T})\) such that, for every \(T\)-coalgebra \((X,u)\), there exists a unique \(T\)-coalgebra homomorphism \((\operatorname{core}\,u)\colon(X,u)\to(v_{\Gamma},\operatorname{out}_{T})\). **Example 4.10**.: For any set \(A\), the set \(\operatorname{Stream}(A)\coloneqq\mathbb{N}\to A\) of \(A\)-**streams** is the structural coend of the difunctor \((X^{-},X^{+})\mapsto(X^{-}\to A)\times(X^{-}\to X^{+})\).10 To define the requisite map \(\epsilon\) from pointed coalgebras \(\sum_{X\colon\operatorname{Set}}(X\to A)\times(X\to X)\times X\) to \(\operatorname{Stream}(A)\), send a pointed coalgebra \((X,h_{X},t_{X},x_{0})\) to the stream Footnote 10: Which is equivalent to the coalgebraic difunctor \(H_{S_{A}}\), where \(S_{A}\) is the endofunctor sending \(X\) to to \(A\times X\). \[h_{X}(x_{0}),\quad h_{X}(t_{X}\ x_{0}),\quad h_{X}(t_{X}(t_{X}\ x_{0})),\quad h _{X}(t_{X}(t_{X}(t_{X}\ x_{0}))),\quad\dots\] (more formally: \(\epsilon(X,h_{X},t_{X},x_{0})\equiv\lambda\ (n\colon\mathbb{N})\to h_{X}(t_{X}^{n}\ x_{0})\)). 
This respects the bimilarity equivalence relation: if we have pointed coalgebras \[(X,h_{X},t_{X},x_{0}),\ (Y,h_{Y},t_{Y},y_{0})\quad:\ \sum_{X\colon\ \mathrm{Set}} (X\to A)\times(X\to X)\times X\] and a coalgebra homomorphism from \((X,h_{X},t_{X})\) to \((Y,h_{Y},t_{Y})\) sending \(x_{0}\) to \(y_{0}\), then we can show by a simple natural number induction that \[\epsilon(X,h_{X},t_{X},x_{0})=\epsilon(Y,h_{Y},t_{Y},y_{0}).\] Moreover, we can construct a coalgebra structure \((\mathrm{Stream}(A),\mathrm{hd},\mathrm{tl})\) and show that every coalgebra structure has a homomorphism into it. From this, we'll be able to prove the universal property of the coequalizer: that every function from pointed coalgebras into any set which respects bisimilarity (i.e. sends bisimilar pointed coalgebras to equal elements) must factor uniquely through \(\epsilon\). As with structural ends, structural coends permit us to connect the theory of terminal coalgebras as categorical semantics for coinductive types to our framework of paranaturality. We can show that the structural coend of a \(T\)-coalgebraic difunctor, when it exists, furnishes us with a terminal coalgebra \((v_{T},\mathrm{out}_{T})\) for \(T\), i.e. a coinductive type. Indeed, we can generalize Definition 4.7 to get a notion of the structural coend \(\int^{X:\mathrm{Set}}\Gamma(X,X)\ \mathbf{p}\Theta(X,X)\) with respect to any difunctor \(\Theta\), dually to how we generalized Definition 4.1 to Definition 4.5. We can then prove a dual version of Lemma 4.6, which, among other things, has as a corollary that structural coends of coalgebraic difunctors are terminal coalgebras. But let us conclude this section by discussing the principle of coinduction, a crucial component for working with coinductive types. The principle of coinduction (see e.g. [1]) states that two terms of a coinductive type (i.e. a terminal coalgebra) can be identified whenever they are _bisimilar_ in the appropriate sense. We'll be able to state our principle of coinduction for any difunctor whose structural coend exists, which will be applicable beyond just specifying coinductive types. We first state a more-general result, and then derive a more familiar principle of coinduction as a consequence. **Definition 4.11**: _Given a difunctor \(\Delta\colon\mathrm{Set}^{\mathrm{op}}\times\mathrm{Set}\to\mathrm{Set}\), a **logical relation** (or **bisimulation**) between \(\Delta\)-structures \((I_{0},d_{0})\) and \((I_{1},d_{1})\) is a relation \(R\subseteq I_{0}\times I_{1}\) and a structure \(r\colon\Delta(R,R)\) such that the projection functions \(\pi_{0},\pi_{1}\) are \(\Delta\)-structure homomorphisms_ \[(I_{0},d_{0})\ \xleftarrow{\pi_{0}}\ \ \(R\subseteq\mathsf{Stream}(A)\times\mathsf{Stream}(A)\) which is a bisimulation from \((\mathsf{Stream}(A),\mathsf{hd},\mathsf{tl})\) to itself such that \(S\ R\ S^{\prime}\). In the case of streams, \(R\) is a bisimulation if, for every \(T,T^{\prime}\colon\mathsf{Stream}(A)\), \[T\ R\ T^{\prime}\qquad\text{implies}\qquad\mathsf{hd}(T)=\mathsf{hd}(T^{ \prime})\quad\text{and}\quad(\mathsf{tl}\ T)\ R\ (\mathsf{tl}\ T^{\prime}).\] But our examples are not confined to just coinductive types: it also allows us to discuss _representation independence_ in the sense of (Mitchell, 1986). The following is a classic example. 
Example 4.14 ().: For \(A\colon\mathsf{Set}\), define the \(A\)-queue data difunctor by \[\operatorname{QData}(X^{-},X^{+}):=X^{+}\times(A\times X^{-}\to X^{+}) \times(X^{-}\to\mathsf{Maybe}(A\times X^{+}))\] where \(\mathsf{Maybe}\) is the \(X\mapsto 1+X\) endofunctor; we'll use the Haskell-like notation of Nothing : \(\mathsf{Maybe}(X)\) and \(\mathsf{Just}\colon X\to\mathsf{Maybe}(X)\) for the constructors of \(\mathsf{Maybe}(X)\). A QData-Struct is an "implementation of \(A\)-queues": a set \(Q\) (the type of queues) equipped with: \[\varepsilon\colon Q\qquad enq\colon A\times Q\to Q\qquad deq\colon Q\to \mathsf{Maybe}(A\times Q)\] corresponding to the _empty queue_, the _enqueue_ operation, and the _dequeue_ operation, respectively. Now suppose we have two such implementations \((Q_{0},\varepsilon_{0},enq_{0},deq_{0})\) and \((Q_{1},\varepsilon_{1},enq_{1},deq_{1})\). Observe that \(R\subseteq Q_{0}\times Q_{1}\) is a bisimulation between \((Q_{0},\varepsilon_{0},enq_{0},deq_{0})\) and \((Q_{1},\varepsilon_{1},enq_{1},deq_{1})\) if the following hold: * \(\varepsilon_{0}\ R\ \varepsilon_{1}\); * if \(q_{0}\ R\ q_{1}\), then, for any \(a:A\), \((enq_{0}(a,q_{0}))\ R\ (enq_{1}(a,q_{1}))\); * if \(q_{0}\ R\ q_{1}\), then either \(deq_{0}(q_{0})=\mathsf{Nothing}=deq_{1}(q_{1})\) or else \(deq_{0}(q_{0})=\mathsf{Just}(a_{0},q_{0}^{\prime})\) and \(deq_{1}(q_{1})=\mathsf{Just}(a_{1},q_{1}^{\prime})\) such that \(a_{0}=a_{1}\) and \(q_{0}^{\prime}\ R\ q_{1}^{\prime}\). Now apply 4.12: if we have such a bisimulation \(R\) and \(q_{0}\colon Q_{0}\), \(q_{1}\colon Q_{1}\) such that \(q_{0}\ R\ q_{1}\), then \(q_{0}\) and \(q_{1}\) are equal as elements of \(\int^{X\times\mathsf{Set}}\operatorname{QData}(X,X)\ \mathbf{p}X\). What this says is that the quotient forming \(\int^{X\times\mathsf{Set}}\operatorname{QData}(X,X)\ \mathbf{p}X\) identifies pointed queue implementations which are _behaviorally indistinguishable_: if \(R\) is a bisimulation, then, as mentioned above, it relates the empty queues of the two representations, is preserved in any application of the enqueue operation, and ensures indistinguishable results upon dequeueing. So a user who's allowed to start with the empty queue and then queue/dequeue to their heart's content with various elements of \(A\) could never distinguish between bisimilar implementations. The quotient makes this explicit by identifying pointed queues which are equivalent in this way, thus functioning as the **existential type**\(\exists X.\operatorname{QData}(X,X)\). As detailed in e.g. (Angiuli et al., 2020), this is advantageous for the design and specification of software libraries: a simple but slow implementation of queues (e.g. the single-stack implementation) can be used for specification purposes, but in practice replaced by a more efficient, provably-bisimilar implementation (e.g. the double-stack implementation), with the representation independence result guaranteeing that the user will never see the difference. ## 5. A Difunctor Model of Type Theory Finally, we'll explore another branch of research which can potentially benefit from the ideas and insights of paranatural category theory: the categorical semantics of type theory. As is the case with the classical model theory of logic, the study of semantics for type theory stems from a need to prove _metatheoretic_ results such as consistency or independence. 
Following in the tradition of (Lawvere, 1963), such semantics investigations frequently utilized category theory, as it often proved quite suitable for modelling type theory (e.g. in the landmark result of (Lambek, 1980)). For dependent type theory (in the style of (Martin-Lof, 1975, 1982)) in particular, a number of category-theoretic notions of "model" of dependent type theory flourished (see (Hofmann, 1997; nLab authors, 2023a) for a summary), and were used to establish key metatheoretic results about dependent type theory [Hofmann and Streicher 1995]. In this section, we'll be using **category with families (CwFs)**[Dybjer 1995] as our notion of a model of type theory. A model of a given theory is an interpretation of the syntax of the theory into mathematical structures. In type theory, the syntactic objects which we must interpret are _contexts_, _substitutions_, _types_, and _terms_, along with the various operations for forming, combining, transforming, and inter-converting these. A CwF is a straightforward rendering of these as a category-theoretic structure: to define a CwF, we must supply a category (whose objects will interpret contexts, and whose morphisms will interpret substitutions), a presheaf on that category (whose object part sends contexts to their set of types, and whose morphism part represents the substitution of free variables in a type), a presheaf on the category of elements of that presheaf (sending a context and a type to the set of terms of that type, also with an action under substitution), as well as an appropriate operation on objects (interpreting the extension of a context by a typed variable). So, while being deeply category-theoretic, CwFs manage to stay very close to the syntax of type theory they are designed to interpret. One particularly fruitful class of models of dependent type theory are **presheaf models** (see [Hofmann 1997, Sect. 4] for a detailed definition), which forms the basic semantics for many type theories of interest, e.g. [Bezem et al. 2014]. In a presheaf model, the category of contexts and substitutions is taken to be \(\hat{\mathbb{C}}\), the category of presheaves (hence the name) and natural transformations on some fixed category \(\mathbb{C}\). Among the many useful features of such a model is the fact that the Yoneda Lemma makes it expedient to import the Grothendieck universes of the metatheory (see explanation below) into the syntax as _type universes_--a process referred to as "lifting" these universes in [Hofmann and Streicher 1999]. Since the difunctor category \(\hat{\mathbb{C}}\) generalizes the category of presheaves, particularly by having its own Yoneda Lemma (Lemma 2.13), the question naturally arises whether we can formulate a difunctor model of type theory which also permits the lifting of Grothendieck universes. The purpose of the present section is to answer this affirmatively. To do so, it will be helpful to generalize the definition of paranatural transformation to _dependent paranatural transformations_. Definition 5.1: Given a difunctor \(\Gamma\colon\mathbb{C}^{\mathrm{op}}\times\mathbb{C}\to\mathrm{Set}\) and a _difunctor over \(\Gamma\)_, i.e. 
\(A:(\Gamma\text{-}\mathrm{Struct})^{\mathrm{op}}\times\Gamma\text{-}\mathrm{Struct }\to\mathrm{Set}\), a **dependent paranatural transformation** \[\tau\colon\ (C\colon\Gamma\text{-}\mathrm{Struct})\xrightarrow{\circ}A(C,C)\] is a \(|\mathbb{C}|\)-indexed family of dependent functions \[\tau_{I}\colon(c\colon\Gamma(I,I))\to A((I,c),(I,c))\] satisfying \[\Gamma(I_{0},i_{2})\ c_{0}=\Gamma(i_{2},I_{1})\ c_{1}\qquad\text{ implies }\qquad\text{map}_{A}^{+}\ i_{2}\ (\tau_{I_{0}}\ c_{0})=\text{map}_{A}^{-}\ i_{2}\ (\tau_{I_{1}}\ c_{1})\] for every \(I_{0},I_{1}:|\mathbb{C}|\), \(i_{2}\colon\mathbb{C}(I_{0},I_{1})\), \(c_{0}\colon\Gamma(I_{0},I_{0})\) and \(c_{1}\colon\Gamma(I_{1},I_{1})\). The antecedent says that \(i_{2}\) is a \(\Gamma\text{-}\mathrm{Struct}\)-morphism from \((I_{0},c_{0})\) to \((I_{1},c_{1})\), hence why it is well-typed to apply the (positive and negative) morphism parts of \(A\) to \(i_{2}\) in the consequent. True to their name, dependent paranatural transformations serve as the dependent analogue of paranatural transformations: instead of having a family of simple functions \(\phi_{I}\) sending \(\Delta\)-structures on \(I\) to \(\Gamma\)-structures on \(I\), we have dependent functions \(\tau_{I}\) sending \(\Gamma\)-structures \(c\) on \(I\) to \(A\)-structures on \((I,c)\). But besides this change (and the necessary changes to the paranaturality condition), this notion is the same. Indeed, if \(A\) is merely a difunctor on \(\mathbb{C}\), i.e. it ignores its \(\Gamma\)-structure arguments, then this is just the usual notion of paranatural transformation. One minor point on notation: readers who prefer the integral notation of Definition 4.5 might prefer to write the type of dependent paranatural transformations from \(\Gamma\) to \(A\) as \[\int_{I:\ \mathbb{C}}\left(c\colon\Gamma(I,I)\right)\ \mathbf{d}A\ (I,c)\ (I,c).\] In what follows, we'll use the notation, but sometimes the integral notation is easier to understand, analogously to how the integral notation for ends often makes it easier to reason about dependent natural transformations. Now, the definition of difunctor models is as follows. 
**Definition 5.2**: _For any category \(\mathbb{C}\), the **difunctor model of type theory** (over \(\mathbb{C}\)) is defined as follows._ * _Contexts are difunctors_ \(\Gamma:\mathbb{C}^{\mathrm{op}}\times\mathbb{C}\to\mathrm{Set}\) _and substitutions_ \(\gamma\colon\mathrm{Sub}(\Delta,\Gamma)\) _are paranatural transformations_ \(\Delta\xrightarrow{\circ}\Gamma\)_._ * _The empty context_ \(\blacklozenge\) _is the terminal object of_ \(\hat{\mathbb{C}}\)_, the constant-_\(\mathbf{1}\) _difunctor._ * _A type_ \(A:\mathsf{Ty}\ \Gamma\) _is a difunctor over_ \(\Gamma\)_, i.e._ \(A\colon\ (\Gamma\text{-Struct})^{\mathrm{op}}\times\Gamma\text{-Struct}\to \mathrm{Set}\)_._ * _Given_ \(A\colon\mathsf{Ty}\ \Gamma\) _and_ \(\gamma\colon\mathrm{Sub}(\Delta,\Gamma)\)_, define_ \(A[\gamma]\colon\mathsf{Ty}\ \Delta\) _by_ \[A[\gamma]\left(D,D^{\prime}\right)\coloneqq A\left(\underline{\gamma}\ D,\ \underline{\gamma}\ D^{\prime}\right)\] _for any_ \(D,D^{\prime}\colon\Delta\text{-Struct}\) _(recall_ \(\underline{\gamma}\colon\Delta\text{-Struct}\to\Gamma\text{-Struct}\) _is the functor corresponding to_ \(\gamma\)_, by Proposition_ 2.9_)._ * _For_ \(A\colon\mathsf{Ty}\ \Gamma\)_, define_ \(\mathsf{Tm}(\Gamma,A)\) _to be the set of dependent paranatural transformations_ \[\tau\colon\ (C\colon\Gamma\text{-Struct})\xrightarrow{\circ}A(C,C).\] _Define the morphism_ \(\mathrm{part}-[\gamma]\colon\mathsf{Tm}(\Gamma,A)\to\mathsf{Tm}(\Delta,A[ \gamma])\) _similarly to the morphism part of_ \(\mathsf{Ty}\)_._ * _To define the context extension_ \(\Gamma.A\)_, we'll use diYoneda reasoning. We want_ \(\Gamma.A\) _to satisfy:_ \[\mathrm{Sub}\ \Delta\ \Gamma.A\quad\cong\quad\sum_{\gamma:\ \mathrm{Sub}(\Delta, \Gamma)}\mathsf{Tm}(\Delta,A[\gamma]).\] _So apply the diYoneda Lemma:_ \[\Gamma.A(I,J) \cong\mathrm{Sub}\ (\mathbf{yy}(J,I))\ \Gamma.A\] \[\cong\sum_{\gamma:\ \mathrm{Sub}(\mathbf{yy}(J,I),\Gamma)} \mathsf{Tm}(\mathbf{yy}(J,I),A[\gamma])\] \[\cong\sum_{g:\ \Gamma(I,J)}\mathsf{Tm}(\mathbf{yy}(J,I),A[ \alpha_{g}])\] _where_ \(\alpha_{g}\) _is the paranatural transformation_ \(\mathbf{yy}(J,I)\xrightarrow{\circ}\Gamma\) _corresponding to_ \(g\colon\Gamma(I,J)\)_, like we used in the proof of Lemma 2.13. Applying the definition of_ \(\alpha\) _and cleaning up a bit, we can come up with the definition_ \[\Gamma.A(I,J)\coloneqq\sum_{g:\ \Gamma(I,J)}\left((K,\mathrm{into}_{K}, \mathrm{from}_{K})\colon J/\mathbb{C}/I\right)\xrightarrow{\circ}A\ (K,g^{\prime})\ (K,g^{\prime})\] _where_ \(g^{\prime}\colon\Gamma(K,K)\) _is given by_ \(\mathrm{map}_{\Gamma}^{-}\ \mathrm{from}_{K}\ (\mathrm{map}_{\Gamma}^{+}\ \mathrm{into}_{K}\ g)\)_._ This definition of \(\Gamma.A\) is somewhat unsatisfying. In the presheaf case, we're able to make use of a _dependent Yoneda Lemma_ to rephrase the corresponding expression--which involves a dependent natural transformation out of a representable presheaf--into a compact expression. However, it's unclear what the difunctorial/paranatural analogue of the dependent Yoneda Lemma ought to be. We leave it to future work to clarify this point. As we did in the first few sections, we have been neglecting concerns about whether given collections are "too big" to be sets. For instance, there's nothing to ensure that the collection \(\mathsf{Ty}\ \Gamma\) of all difintors over \(\Gamma\) constitutes a set, so we leave ourselves open to paradox by assuming it is. Unlike Section4, it is not suitable for our present purpose to carefully avoid making existence claims. 
Rather, we'll adopt a standard approach in category theory for handling size issues: _Grothendieck universes_ (see [nLab authors 2023b] for an introduction and references). What a Grothendieck universe consists of is a set \(\mathcal{U}\) of "small sets" which is closed under all the usual operations on sets (e.g. cartesian product, function, power sets). If we assume our metatheory has a Grothendieck universe \(\mathcal{U}\), then we can make the following two restrictions, and they will guarantee that each \(\mathsf{Ty}\ \Gamma\) indeed constitutes a set. * \(\mathbb{C}\) itself needs to a _small category_, i.e. \(|\mathbb{C}|:\mathcal{U}\) and each hom-set is in \(\mathcal{U}\). * \(\mathsf{Ty}\ \Gamma\) is actually the collection of all _small difintors_ on \(\Gamma\), that is, difintors \(A\colon(\Gamma\text{-Struct})^{\text{op}}\times\Gamma\text{-Struct}\to \text{Set}\) such that \(A(C,C^{\prime})\colon\mathcal{U}\) for all \(\Gamma\)-structures \(C,C^{\prime}\). Together, these also allow us to deduce that each \(\mathsf{Tm}(\Gamma,A)\) constitutes a set, freeing this construction from paradox. While avoiding paradoxes is certainly important, assuming a Grothendieck universe \(\mathcal{U}\) in our metatheory and using it in the definition of the difunctor model has a more exciting implication: we can 'lift' this Grothendieck universe into the syntax of our type theory as a type-theoretic universe. **Proposition 5.3**.: _In the difunctor model, there exists a closed type \(\mathbf{U}\) such that, for all contexts \(\Gamma\),_ \[\mathsf{Tm}(\Gamma,\mathbf{U})\cong\mathsf{Ty}\ \Gamma \tag{11}\] So \(\mathbf{U}\) acts as a "type of all types". To avoid another paradox (due to [Girard 1972], but conceptually related to the earlier paradoxes of Russell and Burali-Forti), \(\mathbf{U}\) actually must be a "large type", that is, a difunctor \((\blacklozenge\mathsf{Struct})^{\text{op}}\times\blacklozenge\mathsf{Struct} \to\text{Set}\), without the size restriction on the codomain. So \(\mathbf{U}\) itself is not included in the right-hand side of Equation11, and hence \(\mathbf{U}\) doesn't "contain itself". Note that \(\blacklozenge\mathsf{Struct}\) is isomorphic to just \(\mathbb{C}\) itself, so we'll describe \(\mathbf{U}\) as just a difunctor over \(\mathbb{C}\). Proof.: By diYoneda reasoning. Suppose such a \(\mathbf{U}\) existed. Then, \[\mathbf{U}(I,J) \cong(\mathbf{yy}(J,I)\xrightarrow{\diamond}\mathbf{U})\] (diYoneda Lemma) \[\cong\mathsf{Tm}(\mathbf{yy}(J,I),\mathbf{U})\] ( \[\mathbf{U}\] is a closed type) \[\cong\mathsf{Ty}(\mathbf{yy}(J,I))\] (Equation11) \[\equiv(\mathbf{yy}(J,I)\text{-Struct})^{\text{op}}\times\big{(} \mathbf{yy}(J,I)\text{-Struct}\big{)}\to\mathcal{U} \tag{12}\] \[\equiv(J/\mathbb{C}/I)^{\text{op}}\times(J/\mathbb{C}/I)\to \mathcal{U} \tag{13}\] So define \(\mathbf{U}(I,J)\) to be the set of (small) difintors on the splice category \(J/\mathbb{C}/I\). The line labelled "\(\mathbf{U}\) is a closed type" refers to the fact that we can weaken a type in the empty context to a type in arbitrary context by ignoring further arguments (hence why it makes sense to write \(\mathsf{Tm}(\mathbf{yy}(J,I),\mathbf{U})\) when \(\mathbf{U}\) is a type in context \(\blacklozenge\)), and, as mentioned above, a dependent paranatural transformation whose codomain doesn't actually depend on the structure argument of the domain is just a usual paranatural transform. 
Thus, we have the beginnings of a model of type theory: a CwF interpreting the basic syntax of type theory, plus a type universe. But much remains to be done. Analogously to the presheaf model, it seems that this model can give semantics for a stock of basic types, plus more sophisticated constructions like dependent sums, dependent products, and identity types. Part of the hope we have for this model is that it will allow us to simultaneously benefit from the richness of the presheaf model (as we've already seen), but also be able to leverage the deep ties to parametricity which come from paranaturality. But further investigation is most definitely needed. ## 6. Conclusion and Future Directions What we have developed in this article is a small but powerful theory, which will hopefully serve as the core of a more expansive branch of category theory. Without the addition of paranatural transformations, category lacks adequate tools to properly treat diffunctors: natural transformations are too strong, and dinatural transformations are too weak. Paranatural transformations strike a delicate balance, privileging the difunctor's diagonal but not neglecting the off-diagonal. By doing so, they manage to replicate all the intricate structure category theorists are accustomed to from presheaves-particularly Yoneda-style reasoning-but in a way which simultaneously handles both co- and contra-variance. To the author's knowledge, this paper gives the first instances of this "diYoneda reasoning", but certainly there are many more of interest. Hopefully future work will continue to develop paranatural category theory, further along the lines of the standard category theory, and beyond. We have not endeavored to study adjunctions in the paranatural setting, but suspect that they will prove interesting. We alluded to the possibility that \(\hat{\mathbb{C}}\) would admit a subobject classifier, but have not explored "paranatural topos theory" to any appreciable extent. As briefly discussed at the end of Section 3, the theory of multivariable diffunctors is presently lacking systematic development, e.g. a Fubini theorem for structural ends. Section 5 furthermore mentioned the question of how to formulate a _dependent diYoneda Lemma_, which also remains open. Finally, we have confined ourselves entirely to 1-category theory in the present work; undoubtedly there are numerous interesting research avenues involving paranatural analogues of higher category theory and enriched category theory. After establishing the basic theory in Section 2, we endeavored to justify the need for such a theory by developing connections to three branches of research: parametricity, category-theoretic semantics of (co)inductive types, and category-theoretic semantics of type theory. For parametricity, we were able to use paranaturality to replicate the famous "free theorems" of Wadler by interpreting System F into paranatural category theory. This task is still incomplete. As mentioned, a theory of multivariable diffunctors and paranatural transformations will be needed to interpret the full range of polymorphic types available in System F. More significantly, we presently lack a concrete result connecting our parametricity "theorem" to the existing formulations in the relational parametricity literature. Further work is needed to either demonstrate that our notion of parametricity exactly coincides with the relational one, or else clearly delineate where they differ. 
Beyond this, there is work to be done to extend these results to richer and more elaborate languages than System F, such as parametric dependent type theories (e.g. (Bernardy et al., 2010, 2012)). In Section 4, we saw that special instances of paranatural transformations gave us impredicative encodings of inductive types, by taking the structural end of the appropriate "algebraic diffunctor". Moreover, this formulation made it convenient to dualize, which gave us structural _coends_, a paranatural framework for reasoning about coinductive and existential types. Several topics warrant further study: characterizing which diffunctors have structural (co)ends, analogously to the theory of containers/polynomial endofuctors; giving a paranatural analysis of (co)monads and their (co)algebras; extending this study to more exotic kinds of inductive types, such as higher inductive types, inductive-recursive types, inductive-inductive types, and so on. Finally, we introduced difunctor models of type theory and showed that, by virtue of the diYoneda Lemma, we could lift Grothendieck universes in our metatheory to type-theoretic universes in the syntax interpreted by these models. As mentioned, work is needed to fully elaborate the type theory which is interpreted by difunctor models. In particular, it remains unclear how the parametricity encoded by difunctors and paranatural transformations is reflected syntactically in the type theory interpreted by difunctor models. Another question which the present author is keen to explore is how the insight of [10]--that presheaf models are well-suited for interpreting _higher order abstract syntax_--can be copied over to the difunctor model.
2308.01050
A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness
Autonomous Vehicles (AVs) promise a range of societal advantages, including broader access to mobility, reduced road accidents, and enhanced transportation efficiency. However, evaluating the risks linked to AVs is complex due to limited historical data and the swift progression of technology. This paper presents a data-driven framework for assessing the risk of different AVs' behaviors in various operational design domains (ODDs), based on counterfactual simulations of "misbehaving" road users. We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision. This methodology not only pinpoints the most critical scenarios but also quantifies the (relative) risk's frequency and severity concerning AVs. Importantly, we show that our approach is applicable even when the AV's behavioral policy remains undisclosed, through worst- and best-case analyses, benefiting external entities like regulators and risk evaluators. Our experimental outcomes demonstrate the correlation between the safety margin, the quality of the driving policy, and the ODD, shedding light on the relative risks of different AV providers. Overall, this work contributes to the safety assessment of AVs and addresses legislative and insurance concerns surrounding this burgeoning technology.
Alessandro Zanardi, Andrea Censi, Margherita Atzei, Luigi Di Lillo, Emilio Frazzoli
2023-08-02T09:48:08Z
http://arxiv.org/abs/2308.01050v4
# A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness ###### Abstract Autonomous Vehicles (AVs) have the potential to provide numerous societal benefits, such as decreased road accidents and increased overall transportation efficiency. However, quantifying the risk associated with AVs is challenging due to the lack of historical data and the rapidly evolving technology. This paper presents a data-driven framework for comparing the risk of different AVs' behaviors in various operational design domains (ODDs), based on counterfactual simulations of "misbehaving" road users. We introduce the concept of _counterfactual safety margin_, which represents the minimum deviation from normal behavior that could lead to a collision. This concept helps to find the most critical scenarios but also to assess the frequency and severity of risk of AVs. We show that the proposed methodology is applicable even when the AV's behavioral policy is unknown-through worst- and best-case analyses-making the method useful also to external third-party risk assessors. Our experimental results demonstrate the correlation between the safety margin, the driving policy quality, and the ODD shedding light on the relative risk associated with different AV providers. This work contributes to AV safety assessment and aids in addressing legislative and insurance concerns surrounding this emerging technology. Autonomous Vehicles, Risk, Safety, Robotics. ## I Introduction Autonomous Vehicles (AVs) are poised to bring economic benefits, better accessibility to mobility, and an overall more efficient transportation system in the coming decades. More importantly, AVs are expected to drastically reduce road accidents and thus actively save human lives. Indeed, still today, every year, more than 1.35 million people die as a result of road traffic accidents, with an additional 20-50 million injured, as reported by the World Health Organization 1. Footnote 1: [https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries](https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries). However, the question of whether AVs are truly safer than traditional vehicles remains to be answered. As shown in early studies by RAND Corporation [1], it is difficult to draw statistically sound conclusions about the real risk of AVs. Partially because of the limited amount of historical claim data currently available. But also the ever-evolving software, hardware, and ODD of AVs pose new challenges to traditional methods that are heavily based on historical data. All of these factors present a significant challenge for tech developers, regulators, and insurance companies when it comes to assessing the risk of AVs. Currently, most of the safety assessments for AVs are performed by the manufacturers themselves with little to no external evaluation from third parties-insurers and legal entities above others. If we take Waymo as an example, they have been transparent about their safety guidelines [2] and have carried out extensive tests on their own vehicle with encouraging results. By January 2023, they totaled 1 million rider-only miles with 2 major contact events and 18 minor ones [3]. They further showed that AVs also have a bright future ahead in terms of handling emergency situations, as shown in [4]. However, in spite of these efforts, there is still one major open problem. It is currently hard-if not impossible-for an external entity to quantify the actual risk associated to AVs. 
Moreover, no matter how virtuously these vehicles behave, on "open" roads with other humans, there is no such thing as "zero risk". A long tail of unfortunate events that are out of the AV's control is statistically bound to happen. In this work, we propose a safety evaluation framework based on counterfactual simulations. We focus on the risk coming from other road users that "misbehave". We show that the derived risk metrics could serve not only tech developers, but also external third parties that want to score the riskiness of different AV providers-even when the underlying driving policy is unknown. The overwhelming importance of a risk factor based on others' misbehavior is also corroborated by the early results observed by Waymo, where the few contact events observed in [5, 3] are ascribed to the misbehaviors of others. We address the problem in a data-driven way, under the assumption of having access only to data collected by the AV during nominal operation and to a set of counterfactual policies for the other agents. To this end, we introduce the notion of _counterfactual safety margin_ as the smallest misbehavior of other road users that would have potentially caused a collision. This metric is associated with an AV, but also with its ODDs, naturally reflecting all the external risk factors. Moreover, we show that even without knowing the actual behavioral policy of the AV, one can perform a worst- and best-case analysis to provide an upper and a lower bound on the safety margin of a certain pair of AV and ODD. A persistent question that remains is identifying which counterfactuals to examine and determining their relative significance. In our view, the creation of counterfactual policies ought to be guided by historical claim data of human-operated vehicles. Furthermore, the advent of readily available data capturing misbehavior of road users in typical, daily scenarios offers an additional, crucial source of information. Both these elements together provide a broad and nuanced perspective, enabling a more comprehensive assessment of factors influencing the safety of autonomous vehicles. ### _Related Work_ Counterfactual analysis enables researchers to reason about "what if" scenarios by comparing actual outcomes with hypothetical outcomes under alternative conditions. This approach is widely adopted in many scientific domains [6]; most importantly, it is often crucial to understand the causal relationships between interventions (often referred to as "treatments") and outcomes [7]. In medical research, for example, counterfactual analysis is widely used to study and evaluate the effectiveness of different treatments [8], in socioeconomics to judge the efficacy of a policy [9], or in engineering to compare different designs. Recently, counterfactual reasoning has also gained popularity for the safety assessment of autonomous vehicles. Waymo, for example, showcased their vehicle behavior in specific test scenarios reconstructed from detailed police reports of fatal accidents [10]. The AV was substituted in the simulation in different roles to demonstrate the possible effectiveness in mitigating, if not avoiding, damage. The study was further developed by comparing the avoidance capability of a non-impaired human driver with eyes always on the conflict (NIEON model) to that of the Waymo vehicle [4]. Notably, the counterfactual paradigm is also used in the "design phase" of an AV stack to generate more heterogeneous training and testing scenarios [11, 12, 13, 14, 15].
Other attempts in assessing AVs' riskiness often involve either first principle statistics [1], or reachability-based analysis [16, 17, 18]. The latter, in particular, includes different approaches that engineer specific surrogate safety measures to gauge the criticality of a scenario. For instance, [19, 20] refine the standard notion of time-to-collision to account for the map topology, or the proposed Responsibility-Sensitive Safety (RSS) metrics could serve this purpose by measuring the violation rate of first principle physics-based safety distances2. In general, surrogate safety measures are computed by forward propagating the other agents according their motion model without any particular assumptions on their intentions and behavior. In contrast, it is at the core of our counterfactual approach the possibility to explicitly test against certain (mis)behaviors of others that one deems relevant. Footnote 2: [https://www.mobileye.com/technology/responsibility-sensitive-safety/](https://www.mobileye.com/technology/responsibility-sensitive-safety/). We envision that these counterfactual policies can indeed be learned and engineered based both on historical claim data but also on observed local cultural behaviors (e.g., Pittsburgh left). This naturally would also associate to each counterfactual a relevance weight. In this regard, many developments have recently been proposed to create more _controllable_ simulations. An example is given by [12], which introduces a conditional diffusion model for traffic generation that allows users to control desired properties of the agents' trajectories. Another example that aims to generate challenging scenarios is provided by [21] which introduces STRIVE. STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior while maintaining the plausibility of the scenario. ### _Statement of Contribution_ In this work, we introduce a framework for comparing the risk of different driving policies of AVs operating in different ODDs. The proposed risk assessment hinges on counterfactual simulations of what would have happened if others were to misbehave. We introduce the concept of safety margin as the minimum counterfactual deviation that would cause a collision with non negligible probability. Importantly, we consider counterfactuals to be parametrized by an intensity value which controls the degree of the counterfactual. The counterfactual safety margin allows to automatically mine on large datasets without rare events the most critical scenarios. Furthermore, when a prior on the counterfactual likelihood is available, this risk measure can be related both to the frequency and the severity components of risk. In turn, this encourages an additional line of work from authorities and tech developers to provide statistical models of human misbehavior on the roads. We further show that the proposed methodology can also be utilized by an external entity that has no access to the AV's behavioral policy. Indeed, one can provide a lower bound on the safety margin by assuming that the AV is non-reactive in the counterfactual simulation and an upper bound by computing what could have been the best possible reaction. Finally, we consider five possible counterfactual behaviors engineered from known common cases of human mistakes. We showcase experimentally that the safety margin definition is well-posed, in the sense that higher counterfactual intensities lead to higher collision probability. 
Moreover, we show that the framework allows to naturally capture in a data-driven fashion also the degree of risk inherent in a specific ODD. To this end, we compare the results of the same type of vehicle driving on average in high speed scenarios against slow speed ones. Surprisingly, while the safety margin does not show a marked trend, its severity component clearly retrieves quantitatively the known correlation between speed and accidents' severity. To the best of our knowledge, this is one of the first works that provides an estimate of the AV's risk from driving data that do not necessarily contain "rare" or "surprising" events. ### _Organization of the manuscript_ In Sec. II we introduce the main definitions and concepts relative to the safety margin. In Sec. III we assume that the driving policy of the vehicle under scrutiny is not available and introduce upper and lower bounds of the safety margin metric. Finally, in Sec. IV shows experimental results obtained on CommonRoad scenarios. ## II Counterfactual Episodes and Safety Margin The counterfactual analysis we propose has as starting point the data collected during nominal operations of an Autonomous Vehicle (AV). These data must suffice to recreate a representation of what happened around the vehicle. To this end, we consider as required by the analysis a topological map of the road and the perceived state and occupancy of other relevant road agents (the ego vehicle, pedestrians, other vehicles, traffic lights, etc.). This collection of data over a certain time interval forms what we call an _episode_. An episode is then utilized as the initial condition for the counterfactual analysis. In fully generality, we consider a _counterfactual_ as a "what-if" scenario with respect to the original episode. But in this work we focus on a particular subclass of counterfactuals that can be parameterized by a scalar _intensity value_. The intensity value acts as a "knob" that we can control to determine how much we are deviating from the original episode. At the high level, we want to evaluate what would have happened if another vehicle had had a low probability of not respecting the stop sign. What if the probability was higher? What would have happened if the driver behind us had been distracted looking at the phone when we braked. What if they were distracted a bit longer? What if the other vehicle had seen us only at a certain distance? The main idea behind this work is to evaluate what is the maximum deviation-i.e., minimum counterfactual intensity- that the agent under scrutiny can tolerate without collision events. We name this particular quantity the _counterfactual safety margin_. We observe that this quantity is determined by two main factors. The first being the AV's decision making itself, in particular via its resulting behavior on the road. The second is the Operational Design Domain (ODD) context. Certain environments result inherently to be more risky due to traffic conditions, local driving culture, and road infrastructure. In the following we make these concepts more formal. We use game theoretic notation by denoting the quantities relative to the \(i\)-th agent with the \(i\)-th subscript, \(-i\) for "everyone but \(i\)", and no subscript for the joint quantities relative to all agents. For instance, we denote with \(X_{i}\) the state space of the \(i\)-th agent. Its trajectory over a finite time horizon \(T>0\) will then be \(x^{[0:T]}_{i}\). Consequently, the trajectories of all agents will be \(x^{[0:T]}_{i}\). 
These elements are already sufficient to define more formally an _episode_. **Definition 1** (Episode).: _An episode\(e\) consists of a finite set of agents \(\mathcal{A}\) and their relative trajectories over a finite interval of time \(T>0\). That is, \(e\coloneqq\langle\mathcal{A},x^{[0:T]}\rangle\). In probabilistic terms, we further consider an episode \(e\) to be the realization of a random variable \(\mathsf{E}\)._ An illustrative example is given by Fig. 1. Clearly there are infinitely many possible episodes, each representing a particular realization of the interaction among different agents on a specific road. Setting aside mathematical technicalities, we consider the space of episodes to be a probability space following the density function defined by the ODD. This, in fact, defines the likelihood of experiencing certain episodes. This varies depending on many factors, most importantly the geographic region in which one drives determines how likely it is to drive a certain map topology and observe certain behaviors of the other traffic participants. **Definition 2** (Operational Design Domain (ODD)).: _An ODD defines the probability density function of \(\mathsf{E}\), denoted as \(p_{\mathsf{E}}(e)\)._ Starting from an episode, we then consider a _counterfactual_ as a "what-if" scenario where the original episode is replayed-i.e., re-simulated-modifying the behavior of certain agents. A _counterfactual episode_ is the realization of a simulation with the boundary initial conditions given by the original episode, hence an episode itself. In the counterfactual simulation each agent acts according to a certain policy that maps the state of the simulation to control commands of the agents. Allowing stochastic policies, we denote by \(\pi_{i}:X\to\Delta(U_{i})\) the policy for agent \(i\). The simulator assumes a certain dynamic model for each agent such that the state evolution is determined by a discrete dynamic equation \(x(k+1)=f(x(k),\ldots,u_{i}(k),\ldots)\). Allowing stochastic dynamics and policies the counterfactual itself becomes a distribution of episodes with respect to the original one, more formally: **Definition 3** (Counterfactual).: _The counterfactual of an episode \(e\) is denoted as \(\mathsf{C}(e)\) and it is fully determined by the tuple \(\langle\mathcal{A},(\pi_{i}^{\mathsf{C}})_{i\in\mathcal{A}},f,x^{0}(e),T\rangle\). Again, \(\mathsf{C}(e)\) shall be interpreted as a random variable whose realizations are (counterfactual) episodes (Def. 1). We denote by \(\mathsf{C}(e)\) a specific realization of \(\mathsf{C}(e)\)._ Akin to [11], a counterfactual episode can be seen as the realization of a Markov game, and \(\mathsf{C}(e)\) is itself an episode. We highlight that the policies can be constructed arbitrarily, for instance, they could come from learned behavioral models, fitted to the specific episode, or simply be constrained to reply the behavior observed in the original episode. From the original episode, for physics-based simulations, one may need to estimate the dynamics model of the different agents for realistic closed-loop simulation. However, we believe that standard car, truck, bicycle models shall suffice. While Def. 3 is quite general and allows to perform any Fig. 1: The counterfactual safety margin provides a data-driven framework to score and compare the riskiness of AV’s providers operating in different ODD. 
During nominal operations, an AV records _episodes_ (i.e., the state of the surrounding environment and of the other road users): these episodes represent the initial anchor in the counterfactual simulation. Given a set of counterfactual policies parametrized by a scalar intensity value, these are employed to re-simulate the episode with now the other agents behaving according to the counterfactual policy. The safety margin is then determined as the smallest counterfactual intensity for which a collision would be “likely to occur”. The analysis can be performed even without knowing the policy of the AV under scrutiny. type of "what-if" scenario, we focus on a particular subcategory for risk assessment. In particular, we consider counterfactuals that can be parametrized by a scalar value \(\lambda\in\mathbb{R}_{\geq 0}\) that we regard as the _intensity_ of the counterfactual. For values close to zero we retrieve the original episode. Whereas by increasing the magnitude of the intensity, we simulate counterfactual that are "further away" from the original episode. Loosely speaking, one can interpret the counterfactual intensity as the magnitude of the introduced nuisance. **Definition 4** (Scalar Counterfactual).: _We call a scalar counterfactual with intensity parameter \(\lambda\in\mathbb{R}_{\geq 0}\) a counterfactual parametrized by \(\lambda\) such that it is fully determined by \(\langle\mathcal{A},(\pi_{i}^{\mathsf{C}}(\lambda))_{i\in\mathcal{A}},f,x^{0},T\rangle\). We denote it as \(\mathsf{C}(e,\lambda)\)._ Some of the scalar counterfactual examples with their respective intensity can include: * What if they had slower reaction times? One can re-simulate an episode introducing latency in the observations received by the simulator, the magnitude of the latency represent the intensity of such a counterfactual. * What if they did not see me? The vehicle under scrutiny can be removed from the observations of the other agents unless closer than a certain threshold. The intensity is inversely proportional to this threshold. * What if they were distracted? For small periods of times other agents do not receive new observations. The intensity is given by the extension of the distraction period. * What if they did not respect the stop sign? or the traffic light? We can introduce a probability associated to the binary decision of not respecting the traffic signs. The intensity is the probability itself in this case. A more through description with the corresponding implementation will be made precise in Sec. IV. ### _Counterfactual Safety Margin_ Given the scalar counterfactuals (Def. 4) we introduce the _counterfactual safety margin_ as the minimum intensity for which a contact event would occur. Where a contact event is a boolean function of a given episode realization. In practice, this simply amounts to performing collision checking on the agents' trajectories. **Definition 5** (Contact Event).: _A contact event for the \(i\)-th agent in the \(e\) episode, is the realization of a function \(\operatorname{coll}_{i}:e\mapsto\operatorname{Bool}\). Returns True if the \(i\)-th agent collided in the episode, False otherwise._ Note that the notation introduced up to this point has the following implications: * \(\operatorname{coll}_{i}(e)\in\operatorname{Bool}\): given an episode, it tells us if a collision happened. 
* \(\operatorname{coll}_{i}(\mathsf{C}(e,\lambda))\sim\mathsf{Bernoulli}(\theta)\): given a counterfactual, its realization can be stochastic; \(\theta\) is in general unknown and can only be estimated by (re-)simulating the counterfactual episode many times. If the counterfactual simulation is fully deterministic (policies, parameters, and simulator), then \(\theta\in\{0,1\}\). Hence, we express the minimum intensity that would cause a collision as follows: **Definition 6** (Counterfactual Safety Margin).: _Let \(\mathsf{C}\) be a scalar counterfactual parametrized by \(\lambda\in[0,\lambda_{\max}]\) (Def. 4) and \(e\) be an episode. We define the counterfactual safety margin for the \(i\)-th agent as the smallest intensity \(\lambda^{*}\) which causes Agent \(i\) to collide with a non-negligible probability \(\epsilon\):_ \[\lambda^{*}_{i}(\mathsf{C}(e))\begin{cases}\in\operatorname*{arg\,min}_{\lambda\in[0,\lambda_{\max}]}\lambda&\text{s.t. }\mathbb{P}\left[\operatorname{coll}_{i}(\mathsf{C}(e,\lambda))\right]>\epsilon;\\ >\lambda_{\max}&\text{otherwise}.\end{cases}\tag{1}\] Some observations: * A small safety margin implies a higher risk for the agent since a smaller counterfactual deviation would result in a collision. * An episode is analysed in the proposed counterfactual framework by simulating different counterfactual intensities. The main insights are then obtained by plotting the safety margin curves shown in Fig. 2, where the collision probability is plotted as a function of the counterfactual intensity. * \(\epsilon\) is arbitrary. Its role is to threshold a certain significance level for the collision probability, removing the sensitivity to rare stochastic realizations in the counterfactual simulations. Notice in Fig. 2 that for deterministic frameworks its value is irrelevant. Moreover, even in the stochastic framework of our experiments, we usually observed sharp increases in the collision probability around a certain intensity, making the specific choice of \(\epsilon\) irrelevant. * In many cases the vehicle might turn out to be insensitive to the specific counterfactual, either because of the specific episode setup or because the real value falls beyond the range that has been tested. We assign to these cases a special value (\(>\lambda_{\max}\)). ### _Averaging over an ODD_ When evaluating an agent, the analysis will be carried out on a dataset of episodes. The first straightforward result will be a list of episodes that are more risky-the ones with the lowest safety margin. This information is relevant both for tech developers and for external regulatory entities. Importantly, we distinguish two cases depending on whether any prior on the counterfactual likelihood is available. If no counterfactual likelihood is available, the framework allows one to naturally recognize the scenarios that are potentially more critical. Furthermore, it allows one to compare (i.e., to order) different pairs of agent and ODD. Indeed, curves analogous to the stochastic case of Fig. 2 are obtained by averaging over the ODD. Clearly the analysis can go more in depth, taking into account other statistics, for instance the frequency and the severity of the counterfactuals that have a low safety margin. Fig. 2: The counterfactual analysis aims to find the smallest intensity for which the probability of collision surpasses a certain threshold \(\epsilon\).
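As a concrete illustration of Definition 6, the sketch below estimates the safety margin of a single episode by sweeping a grid of intensities and estimating each collision probability by Monte Carlo re-simulation. This is a minimal sketch, not the authors' implementation: `simulate_counterfactual` and `collided` are assumed stand-ins for the counterfactual simulator of Definition 3 and the collision check of Definition 5, and the grid size, rollout count, and \(\epsilon\) are arbitrary placeholders.

```python
import numpy as np

def estimate_collision_prob(episode, policies, lam, simulate_counterfactual, collided,
                            n_rollouts=50):
    """Monte Carlo estimate of P[coll_i(C(e, lam))]: re-simulate the counterfactual
    episode n_rollouts times and count collisions of the agent under scrutiny."""
    hits = sum(collided(simulate_counterfactual(episode, policies, intensity=lam))
               for _ in range(n_rollouts))
    return hits / n_rollouts

def counterfactual_safety_margin(episode, policies, simulate_counterfactual, collided,
                                 lam_max=1.0, n_grid=20, eps=0.05):
    """Smallest intensity on a grid over [0, lam_max] whose estimated collision
    probability exceeds eps (Eq. 1); returns inf if the episode is insensitive
    to this counterfactual within the tested range."""
    for lam in np.linspace(0.0, lam_max, n_grid):
        if estimate_collision_prob(episode, policies, lam,
                                   simulate_counterfactual, collided) > eps:
            return lam
    return np.inf  # treated as "> lam_max"
```

If the collision probability is approximately monotone in the intensity, as the experiments later suggest, the linear sweep can be replaced by a bisection over \([0,\lambda_{\max}]\) to reduce the number of re-simulations.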
When the likelihood of a counterfactual scenario is available it is possible to weight the importance of each counterfactual simulation. For instance, we can expect to have statistics that serve as proxies for each a counterfactual likelihood. An example could be given by knowing how often people run over a red light or other infraction occur [22]. Most importantly, this information would allow to better relate the safety margin curve of an agent to the frequency component of risk. ## III Upper and Lower Bounds for Counterfactual Safety Margin So far, to perform a counterfactual simulation one would also need the behavioral model of the AV. While this is the case for AV companies, it might not be the case for external entities. Therefore, in the following, we assume that we do not know the policy of the AV under scrutiny. In this setting, the external evaluator still has access to a significant collection of episodes recorded during nominal operation of the AV, but the reactive policy (i.e., the decision making model) of the AV is not available. This assumption finds ground in the real world where, very likely, an AV provider is not willing to share externally its vehicles' behavioral model. **Assumption 1**.: _Let the vehicle of interest be \(\mathsf{av}\in\mathcal{A}\). The policy \(\pi_{\mathsf{av}}(\cdot)\) is unknown._ In the following, we show that despite Assumption 1, the methodology introduced so far can still be applied. In particular, we can consider two cases in the counterfactual analysis: 1. The AV is **non-reactive** and replays the trajectory of the original episode independently of what the others do; 2. The AV is **omniscient** and behaves in order to obtain the best possible outcome in the counterfactual simulation. Note that (i) amounts to a simulation with the vehicle of interest non-reactive, meaning that it will replay the initial episode trajectory independently of the others' counterfactual behavior. (ii) boils down to a single-agent optimal control problem over a finite horizon. The result of such an analysis will provide two safety margin lines plotted as shown in Fig. 3. #### Iii-1 Lower Bound - Non-reactive AV This is the simpler to compute. In the counterfactual simulation, the \(\mathsf{av}\) replays its original trajectory irrespective of what the others' do. This outcome represents a lower bound for the safety margin, assuming that the AV would have taken action to mitigate the situation, surpassing a 'do-nothing' response. #### Iii-2 Upper Bound - Best-case Outcome (ii) requires a more nuanced explanation. First, we anchor the notion of _best outcome_ to the collision happening in the non-reactive case (i) and its relative emergency maneuver. This is done to avoid a pointless discussions of the type: <<The best outcome could have been achieved by not having taken that road in the first place>>. Starting from the initial condition of the episode, we would like to find a trajectory for the \(\mathsf{av}\) which solves the following optimal control problem: \[\begin{split}\min_{u_{\mathsf{av}}^{(\text{\tiny{0:T}})}}& \sum_{k=0}^{T}\text{\small{coll\_severity}}_{\mathsf{av}}(x(k))\\ \text{subject to}& x(k+1)=f(x(k),u_{\mathsf{av}}(k),\pi_{- \mathsf{av}}^{\mathsf{c}}(x(k)))\\ & x(0)=x^{0}.\end{split} \tag{2}\] Note that (2) can be seen as a single-agent optimal control problem. Nevertheless, it is computationally hard to solve. 
First, due to the presence of other agents, the state space is highly dimensional, and the dynamics of the system is also governed by the others' input. Second, the counterfactual policies of others might be known only in terms of the input-output relation (i.e., black-box models). Third, the real cost function that we want to minimize is discontinuous with respect to contact events. Moreover, it includes damage models that are highly non-linear and non-convex. In this work, we consider a cost function \(\mathsf{coll\_severity}\) modeled after the standard Maximum Abbreviated Injury Scale[23]. More specifically, we compute the damages derived from a collision according to the model proposed in [24] which computes the probability of experiencing a certain degree of injury severity given the speeds and the angle and point of impact. It should be noted that these models have not been specifically designed for AVs. In fact, they only address damages related to the driver. Consequently, we identify an additional valuable research direction: developing more precise injury models that consider vehicle occupancy and the seating positions of the passengers. \(\mathsf{cln}\) our case, we lexicographically minimize respectively the probability of a \(\mathsf{Fatality}\), a MAIS3+ injury, and of a MAIS2+ injury. ## IV Experiments We showcase the presented methodology on reproducible, publicly available CommonRoad scenarios [25]. In these experiments, we begin with a common initial step: Given a CommonRoad scenario, we first run a simulation to generate a synthetic episode. This mimics the data that an AV would record during nominal operations. One vehicle gets to play the role of the \(\mathsf{av}\) and logs all the other visible agents by simulating a planar 2D laser. Subsequently, the recorded log (i.e., the episode) is used to build the counterfactuals. Given an episode, the counterfactual episode is then simulated by taking the initial condition of the log and spawning another simulation where each agent behaves according to the designed counterfactual policy. Each simulation is carried out with the agent-based simulator provided by [26]. As shown in Fig. 4, the policy of each agent is clearly separated from the rest. The simulator generates observations for each agent in the form of perceived occupancy and state of the other surrounding Fig. 3: Computable safety margin’s bounds under the assumption of not knowing the \(\mathsf{av}\)’s behavioral policy. agents, these are fed to the agent's policy that returns control commands (acceleration and steering) that are fed back to the simulator. ### _Counterfactual details_ We implemented and considered the counterfactuals of Table I. #### Aggressiveness This counterfactual corresponds to the other drivers being more "aggressive". Among the numerous way in which one could parameterize this driving style, we use a generalized IDM parameterization of [27] and increase the corresponding "aggressiveness" parameter. DistractionWhat if the others were distracted behind the wheel? We mimic a driver getting distracted by not updating its observations for relatively short period of time-as if they were to look away from the road. This counterfactual intensity corresponds to the distraction period. After each "distracted" period we have an "attentive" period of a fixed duration (\(0.5\,s\)). #### Illegal precedence When an IDM agent encounters a stop sign or a red light this acts as a planning constraint bringing them to an halt. 
We randomly draw whether the agent will ignore this constraint. The counterfactual intensity is the probability of the agent ignoring this precedence rule. As for the aggressiveness counterfactual, this is currently implemented relying on the model-based IDM policy. Nevertheless, we imagine that also policy-agnostic implementations are possible if the traffic sign observation are passed through the "hallucinated observations". Impaired reflexesIn this policy-agnostic counterfactual the observations of the simulator are delayed to the agent. The introduced latency aims to mimic the driving behavior of a person with impaired reflexes or under the influence of substances. On a behavioral level it translates to slower reaction times an more "wiggly" behaviors such as poor lane keeping. UnseenWhat if the others had not seen us? In this counterfactual we remove from the others' observations our presence until a certain distance. At smaller counterfactual intensities others will see us from distance, whereas, as we increase the intensity, they will see us only in close proximity. ### _Validation Method_ We validate the proposed counterfactual framework with the following experiments: 1. First, we verify that the safety margin curves are monotonically increasing with respect to the counterfactual intensity. That is, as we increase the counterfactual intensity, on average, also the probability of collision increases. This monotonicity verifies that the minimization problem (1) is meaningful. 2. We test whether the method is well suited for ranking av\(\times\)ODD pairs. We fix the agent and compare different ODDs. Namely, we artificially separate the scenarios in two ODD categories: high speed and low speed. We show that since the proposed method is fully data-driven, it does not require any type of ODD labeling and classification, the resulting differences in risk of driving in different ODDs are directly reflected in the safety margin analysis. _E.1 - Safety margin curve monotonicity:_ We experienced that the safety margin curves for the proposed counterfactuals satisfied _on average_ monotonicity. Clearly, given the high non-linearity of the collision dynamics and the policy perturbations, this cannot be guaranteed for every single instance of the counterfactual episodes. While we observed some cases in which a very high counterfactual disturbance would actually avoid a collision, this is not true on average. This sanity check reassures us that the the safety margin minimization problem of Def. 6 is well-posed and meaningful. In the first row of Fig. 5 we report the results from simulating and evaluating an agent on 100 episodes against three different counterfactuals. _E.2 - Same agent, different ODDs:_ Finally, we show that the proposed methodology naturally accounts for the different external risk factors coming from the ODD. To this end, we evaluate the same agent type on two different sets of scenarios (i.e., ODD): high speed and low speed. For each set, we evaluate 100 episodes. The episodes are differentiated based on whether the average initial velocity of the agents exceeds or is less than \(12\)_m/s_ (\(\sim 40\)_km/h_). Interestingly, we show that the proposed framework naturally provides the possibility not only to compute the safety Fig. 4: Each agent receives its own observations from the simulator, comprising of the state and the occupancy of the nearby agents. 
These are fed to the policy which is expected to return commands (acceleration and steering derivative) that are used to update the corresponding physical model in the simulation. A counterfactual policy can be obtained acting only on the observations and commands (purple filters), or by modifying the policy itself. The former method allows to be _agnostic_ of the original policy, thus it can be easily integrated with black-box models of learned policies. This allows the original policy to be of any type (model-based, learned,...). The latter relies on a particular parameterization of the original policy to modify its behavior. For example, increasing the aggressiveness parameter of an Intelligent Driver Model (IDM). margin but also its corresponding severity. This allows to perform a more nuanced analysis that better captures the two main components of risk: _frequency_ and _severity_. The specific results are depicted in Fig. 5. ### _Discussion_ The presented framework facilitates a comparative analysis of the behaviors exhibited by different autonomous vehicles while operating in certain ODDs. In particular, we initiated our study using data that may not necessarily include unfortunate rare events, such as collisions. Nevertheless, we adopt a data-driven approach to quantify the safety margin of each vehicle concerning potential counterfactual misbehavior of the other agents. It is important to note that our evaluation is primarily focused on the resulting behavior from a phenomenological perspective. Consequently, behaviors leading to close calls and reduced safety margins could arise due to shortcomings in the vehicle's perception, planning, or control systems, as well as from the attributes of the surrounding environment--the ODD within which it operates. An example is provided in Fig. 5. While our framework provides a mean to score behaviors based on certain counterfactual policies, an essential avenue for future research lies in developing such counterfactual policies. Specifically, we recognize the significance of leveraging a combination of historical data claims with the observed misbehavior of other agents on public roads to synthesize relevant counterfactual policies. These policies should encompass the most common human errors, enabling us to derive safety margin scores that strongly correlate with real-world risk. Moreover, we emphasize the necessity of subjecting the presented framework to a rigorous statistical treatment to establish the confidence associated with the results that one may derive from a dataset. ## V Conclusions In conclusion, our proposed framework offers a comprehensive approach to comparing and evaluating the behaviors of autonomous vehicles in different ODDs. The integration of counterfactual analysis and statistical treatment will play a crucial role in ensuring the accuracy and practical applicability of our safety margin scores. This research contributes to the advancement of autonomous vehicle technology and its safe deployment in real-world scenarios. Importantly, this methodology is suited for adoption by various stakeholders, including AV suppliers, as well as third-party entities such as insurance companies and regulators. On a more technical side, this method opens up also the inverse question for tech developers. What are driving behaviors and policies that maximize the counterfactual safety margin? 
## VI Acknowledgement The authors thank Shuhan He for the fruitful discussions and the help with the behavioral models implemented in the simulator. ## References * [1] N. Kalra and S. M. Paddock, "Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?" Technical report, 2016. * [2] M. Althoff, M. Koschi, and S. Manzinger, "CommonRoad: Composable benchmarks for motion planning on roads," in IEEE Intelligent Vehicles Symposium, 2017. * [3] M. Althoff and O. Stursberg, "Reachability Analysis and its Application to the Safety Assessment of Autonomous Cars." * [43] J. D. States, "The Abbreviated and the Comprehensive Research Injury Scales," in 13th Stapp Car Crash Conference, 1969.
2304.11196
Fast GraspNeXt: A Fast Self-Attention Neural Network Architecture for Multi-task Learning in Computer Vision Tasks for Robotic Grasping on the Edge
Multi-task learning has shown considerable promise for improving the performance of deep learning-driven vision systems for the purpose of robotic grasping. However, high architectural and computational complexity can result in poor suitability for deployment on embedded devices that are typically leveraged in robotic arms for real-world manufacturing and warehouse environments. As such, the design of highly efficient multi-task deep neural network architectures tailored for computer vision tasks for robotic grasping on the edge is highly desired for widespread adoption in manufacturing environments. Motivated by this, we propose Fast GraspNeXt, a fast self-attention neural network architecture tailored for embedded multi-task learning in computer vision tasks for robotic grasping. To build Fast GraspNeXt, we leverage a generative network architecture search strategy with a set of architectural constraints customized to achieve a strong balance between multi-task learning performance and embedded inference efficiency. Experimental results on the MetaGraspNet benchmark dataset show that the Fast GraspNeXt network design achieves the highest performance (average precision (AP), accuracy, and mean squared error (MSE)) across multiple computer vision tasks when compared to other efficient multi-task network architecture designs, while having only 17.8M parameters (about >5x smaller), 259 GFLOPs (as much as >5x lower) and as much as >3.15x faster on a NVIDIA Jetson TX2 embedded processor.
Alexander Wong, Yifan Wu, Saad Abbasi, Saeejith Nair, Yuhao Chen, Mohammad Javad Shafiee
2023-04-21T18:07:14Z
http://arxiv.org/abs/2304.11196v1
Fast GraspNeXt: A Fast Self-Attention Neural Network Architecture for Multi-task Learning in Computer Vision Tasks for Robotic Grasping on the Edge ###### Abstract Multi-task learning has shown considerable promise for improving the performance of deep learning-driven vision systems for the purpose of robotic grasping. However, high architectural and computational complexity can result in poor suitability for deployment on embedded devices that are typically leveraged in robotic arms for real-world manufacturing and warehouse environments. As such, the design of highly efficient multi-task deep neural network architectures tailored for computer vision tasks for robotic grasping on the edge is highly desired for widespread adoption in manufacturing environments. Motivated by this, we propose Fast GraspNeXt, a fast self-attention neural network architecture tailored for embedded multi-task learning in computer vision tasks for robotic grasping. To build Fast GraspNeXt, we leverage a generative network architecture search strategy with a set of architectural constraints customized to achieve a strong balance between multi-task learning performance and embedded inference efficiency. Experimental results on the MetaGraspNet benchmark dataset show that the Fast GraspNeXt network design achieves the highest performance (average precision (AP), accuracy, and mean squared error (MSE)) across multiple computer vision tasks when compared to other efficient multi-task network architecture designs, while having only 17.8M parameters (about \(>\)5\(\times\) smaller), 259 GFLOPs (as much as \(>\)5\(\times\) lower) and as much as \(>\)3.15\(\times\) faster on a NVIDIA Jetson TX2 embedded processor. ## 1 Introduction Significant advances have been made in recent years to take advantage of deep neural networks for robotic grasping. In particular, multi-task learning has shown considerable promise for improving the performance of deep learning-driven vision systems for robotic grasping [3, 7, 8], where the underlying goal is to learn to perform additional tasks during the model training process. Multi-task learning has enabled not only greater precision and versatility in deep learning-driven vision systems for robotic grasping, but also enabled such systems to perform a wide range of computer vision tasks that are important for robotic grasping (see Fig. 1 for example tasks that need to be performed Figure 1: Example multi-task outputs from Fast GraspNeXt. (a) and (b) Detected occluded objects are shown in blue and non-occluded objects are shown in red. (c) The detected center of mass of each object is shown in blue. (d) Applicability of suction grasp is labelled from high to low in red, green, and blue as a heatmap. by such deep learning-driven vision systems for robotic grasping such as visible object mask detection, amodal object detection [1], center of mass prediction, and suction grasp heatmap generation [2]). However, while multi-task learning can greatly improve the performance of computer vision tasks for robotic grasping, high architectural and computational complexity can limit operational use in real-world manufacturing and warehouse environments on embedded devices. 
Motivated to address these challenges with embedded deployment for robotic grasping in real-world manufacturing and supply chain environments, we leverage a generative network architecture search strategy with a set of architectural design constraints defined to achieve a strong balance between multi-task learning performance and embedded operational efficiency. The result of this generative network architecture search approach is Fast GraspNeXt, a fast self-attention neural network architecture tailored specifically for multi-task learning in robotic grasping under embedded scenarios. The paper is organized as follows. Section 2 describes the methodology behind the creation of the proposed Fast GraspNeXt via generative network architecture search, as well as a description of the resulting deep neural network architecture. Section 3 describes the dataset used in this study, the training and testing setup, as well as the experimental results and complexity comparisons. ## 2 Methods ### Generative Network Architecture Search In this paper, we take a generative network architecture search approach to creating the optimal multi-task deep neural network architecture for Fast GraspNeXt. More specifically, we leveraged the concept of generative synthesis [13], an iterative method that generates highly tailored architectural designs that satisfy given requirements and constraints (e.g., model performance targets). Generative synthesis can be formulated as a constrained optimization problem: \[\mathcal{G}=\max_{\mathcal{G}}\mathcal{U}(\mathcal{G}(s))\quad\text{ subject to}\quad 1_{r}(G(s))=1,\ \ \forall\in\mathcal{S}. \tag{1}\] where the underlying objective is to learn an expression \(\mathcal{G}(\cdot)\) that, given seeds \(\{s|s\in S\}\), can generate network architectures \(\{N_{s}|s\in S\}\) that maximizes a universal performance metric \(U\) (e.g., [11]) while adhering to operational constraints set by the indicator function \(1_{r}(\cdot)\). This constrained optimization is solved iteratively through a collaboration between a generator \(G\) and an inquisitor \(I\) which inspects the generated network architectures and guides the Figure 2: Overall network architecture design for Fast GraspNeXt, which possess a self-attention neural network architecture with highly optimized macroarchitecture and microarchitecture designs for all components. Fast GraspNeXt consists of a generated self-attention backbone architecture feeding into a generated feature pyramid network architecture followed by generated head network architecture designs for multi-task learning. The numbers in brackets are channel sizes of the feature maps in the heads. generator to improve its generation performance towards operational requirements (see [13] for details). To build Fast GraspNeXt, we enforce essential design constraints through \(1_{r}(\cdot)\) in Eq. 1 to achieve the desired balance between i) accuracy, ii) architectural complexity, and iii) computational complexity to yield high-performance, compact, and low-footprint neural network architectures such as: 1. Encouraging the implementation of anti-aliased downsampling (AADS) [14] to enhance network stability and robustness. 2. Encouraging the use of attention condensers [12], which are highly efficient self-attention mechanisms designed to learn condensed embeddings characterizing joint local and cross-channel activation relationships for selective attention. They have been shown to improve representational performance while improving efficiency at the same time. 
3. Enforce a FLOPs requirement of less than 300B FLOPs and an accuracy requirement of no lower AP across all assessable tasks than a ResNet-50 variant of the multi-task network for robotic grasping (which we call ResNet-GraspNeXt) by 0.5%. ### Network Architecture The resulting Fast GraspNeXt network architecture design is shown in Fig. 2. It possesses a self-attention neural network architecture with highly optimized macroarchitecture and microarchitecture designs for all its components. The network architecture adheres to the constraints we imposed, with the generated backbone architecture feeding into a generated feature pyramid network architecture design followed by generated head network architecture designs for predicting the multi-task outputs: i) amodal object bounding boxes, ii) visible object masks, iii) amodal object masks, iv) occlusion predictions, v) object center of mass, vi) and suction grasp heatmap. More specifically, the multi-scale features from the generated backbone architecture are provided as input directly to each level of the generated feature pyramid network architecture, followed by the generated bounding box head, visible mask head, amodal mask head and occlusion prediction head. Each level of the feature pyramid network are also upsampled to reach the same scale and summed as input for the center of mass head and suction grasp heatmap head. The multi-task training loss, denoted as \(L_{mt}\), used to train Fast GraspNeXt is a weighted combination of task-specific losses and can be expressed by \[\begin{split} L_{mt}&=l_{rpn}+\lambda_{1}l_{abox}+ \lambda_{2}l_{segm,v}+\lambda_{3}l_{segm\_a}\\ &+\lambda_{4}l_{occ}+\lambda_{5}l_{com}+\lambda_{6}l_{suc}\end{split} \tag{2}\] where \(\lambda_{1},\lambda_{2},\ldots,\lambda_{6}\) denote task-specific weight coefficients used to balance the contribution of individual task-specific losses. The individual task-specific losses are defined as follows: * \(l_{rpn}\): Region Proposal Network loss [9] * \(l_{abox}\): Amodal bounding box prediction loss [1] * \(l_{segm\_v}\): Visible mask segmentation loss [1] * \(l_{segm\_a}\): Amodal mask segmentation loss [1] * \(l_{occ}\): Occlusion classification loss [1] * \(l_{com}\): Center of mass heatmap prediction loss implemented with the modified focal loss proposed by CenterNet [15] * \(l_{suc}\): Suction grasp heatmap prediction loss implemented with pixel-wise averaged mean squared error (MSE) loss It can be observed that the architecture design is highly heterogeneous and columnar for high architectural and computational efficiency. It can also be observed that the architecture design possesses attention condensers at different stages of the architecture for improved attentional efficacy and efficiency. Furthermore, the architecture design possesses AADS at strategic locations for greater robustness. Finally, it can be observed that the macroarchitecture for each task-specific head is unique, thus tailored around the specific balance between accuracy and efficiency for each individual task. As such, these characteristics make the Fast GraspNeXt architecture design well-suited for high-performance yet highly efficient multi-task robotic grasp applications on the edge. ## 3 Experiments ### Dataset We evaluate the performance of the proposed Fast GraspNeXt on the MetaGraspNet [4] benchmark dataset to explore the efficacy. This large-scale robotic grasping benchmark dataset contains 217k images across 5884 scenes featuring 82 different objects. 
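Before detailing the data splits and training setup, the multi-task objective of Eq. (2) above can be made concrete with a short sketch. This is illustrative only: the task weights below are hypothetical placeholders rather than the coefficients used to train Fast GraspNeXt, and the per-task loss values are assumed to have been computed by the respective heads.

```python
# Illustrative weighted combination of the task-specific losses in Eq. (2).
# The lambda values here are placeholders, not the paper's actual coefficients.
TASK_WEIGHTS = {
    "abox": 1.0,    # amodal bounding box loss
    "segm_v": 1.0,  # visible mask segmentation loss
    "segm_a": 1.0,  # amodal mask segmentation loss
    "occ": 1.0,     # occlusion classification loss
    "com": 1.0,     # center-of-mass heatmap (focal) loss
    "suc": 1.0,     # suction grasp heatmap MSE loss
}

def multitask_loss(l_rpn, task_losses, weights=TASK_WEIGHTS):
    """L_mt = l_rpn + sum_k lambda_k * l_k; works with floats or autograd tensors."""
    return l_rpn + sum(weights[k] * task_losses[k] for k in weights)
```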
We use 60%, 20%, and 20% of the scenes for training, validation, and testing respectively. Average precision (AP) evaluation was conducted for amodal object bounding box, visible object mask, amodal object mask, and object center of mass. Occlusion accuracy evaluation was conducted to evaluate occlusion predictions, while mean squared error (MSE) evaluation was conducted to evaluate suction grasp heatmap predictions. Our experiments use the class agnostic labels which put all objects into one class category, so that it can be readily deployed in industrial scenarios with novel, unseen items. ### Training and Testing Setup In addition to the proposed Fast GraspNeXt, we evaluated the performance of efficient multi-task network designs leveraging ResNet-50 [5], EfficientNet-B0 [10], and MobileNetV3-Large [6] as backbones paired with our multi-task network architecture design but without utilizing the generative network architecture search strategy. Both EfficientNet and MobileNetV3 are widely-used, state-of-the-art efficient backbones, making them well-suited for this comparison. Those network architectures are designated as ResNet-GraspNeXt, EfficientNet-GraspNeXt, and MobileNet-GraspNeXt, respectively. For training, we use a base learning rate of 0.03, SGD optimizer with momentum of 0.9, and weight decay of 0.0001 for all experiments. Learning rate step decay are performed at 67% and 92% of the total epochs with gamma of 0.1. All network architectures are trained with the full image size of 1200\(\times\)1200 pixels with batch size of 2. Empirical results found that the above training strategy yielded the best performance for all tested architectures. Inference time evaluations are executed with batch size of 1 to reflect the robotic grasping environment which prioriti lowest possible inference latency instead of potential speed benefit of batched inference. We evaluate the inference time on the NVIDIA Jetson TX2 embedded processor with 8 GB of memory, which is widely used for embedded robotics applications in manufacturing and warehouse scenarios. ### Results and Analysis Tab. 1 shows the quantitative performance results and model complexity of the proposed Fast GraspNeXt compared to ResNet-GraspNeXt, EfficientNet-GraspNeXt, and MobileNet-GraspNeXt. We can observe that leveraging state-of-the-art efficient backbone architectures EfficientNet-B0 and MobileNetV3-Large enables noticeably faster inference time and lower architectural complexity when compared to leveraging ResNet-50 but results in \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline Model & \begin{tabular}{c} Inf. 
Time \\ (ms) \\ \end{tabular} & \begin{tabular}{c} Amodal \\ Bbox AP \\ \end{tabular} & \begin{tabular}{c} Visible \\ Mask AP \\ \end{tabular} & \begin{tabular}{c} Amodal \\ Mask AP \\ \end{tabular} & \begin{tabular}{c} Occlusion \\ Accuracy \\ \end{tabular} & \begin{tabular}{c} Center of \\ Mass AP \\ \end{tabular} & \begin{tabular}{c} Heatmap \\ MSE \\ \end{tabular} & \begin{tabular}{c} Parameters \\ (M) \\ \end{tabular} & \begin{tabular}{c} FLOPs \\ (B) \\ \end{tabular} \\ \hline ResNet-GraspNeXt & 3501 & 85.0\% & 84.9\% & 84.1\% & 77.2\% & 75.3\% & 0.0113 & 92.1 & 1314 \\ EfficientNet-GraspNeXt & 2972 & 84.6\% & 85.0\% & 83.8\% & 81.7\% & 82.6\% & 0.0189 & 72.0 & 1183 \\ MobileNet-GraspNeXt & 2712 & 84.3\% & 84.6\% & 83.7\% & 80.7\% & 81.2\% & 0.0104 & 70.9 & 1189 \\ Fast GraspNeXt & **1106** & **87.9\%** & **85.4\%** & **85.0\%** & **85.1\%** & **84.6\%** & **0.0095** & **17.8** & **259** \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of quantitative performance results on MetaGraspNet dataset and network complexity. Figure 3: (top) Predicted suction grasp heatmaps produced by the proposed Fast GraspNeXt. (bottom) Example ground truth suction grasp heatmaps. a noticeable drop in amodal bbox AP and amodal mask AP performance. In contrast, the proposed Fast GraspNeXt is \(>\)3.15\(\times\), \(>\)2.68\(\times\), and \(>\)2.45\(\times\) faster on the Jetson TX2 embedded processor compared to ResNet-GraspNeXt, EfficientNet-GraspNeXt, and MobileNet-GraspNeXt, respectively, while improves the performance across all test tasks. Specifically, Fast GraspNeXt improves the amodal bbox AP, visible mask AP, amodal mask AP, occlusion accuracy, center of mass AP, and averaged heatmap MSE by 2.9%, 0.4%, 0.6%, 3.4%, 2.0%, and 8.7% respectively compared to the second best results. In terms of architectural complexity, Fast GraspNeXt is 5.2\(\times\) smaller then ResNet-GraspNeXt which has the second best amodal bbox AP and amodal mask AP, 4\(\times\) smaller then EfficientNet-GraspNeXt which has the second best visible mask AP and center of mass AP, and 4\(\times\) smaller then MobileNet-GraspNeXt. In terms of computational complexity, Fast GraspNeXt is 5.1\(\times\), 4.6\(\times\), and 4.6\(\times\) lower FLOPs than ResNet-GraspNeXt, EfficientNet-GraspNeXt, and MobileNet-GraspNeXt respectively. Example ground truth suction grasp heatmaps along with the predicted suction grasp heatmaps produced by proposed Fast GraspNeXt are shown in Fig. 3. As such, the above experimental results demonstrated that the proposed Fast GraspNeXt achieves significantly lower architectural complexity and computational complexity while possessing improved AP across test tasks compared to designs based on state-of-the-art efficient architectures. Furthermore, these experiments demonstrated that Fast GraspNeXt achieves significantly faster inference time on the NVIDIA Jetson TX2 embedded processor, making it well-suited for robotic grasping on embedded devices in real-world manufacturing environments. Future work involves exploring this generative approach to network architecture search for other embedded robotics applications in manufacturing and warehouse scenarios. ## Acknowledgements This work was supported by the National Research Council Canada (NRC) and German Federal Ministry for Economic Affairs and Climate Action (BMWK) under grant 01MJ21007B.
2307.08197
Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs
Biological nervous systems are created in a fundamentally different way than current artificial neural networks. Despite its impressive results in a variety of different domains, deep learning often requires considerable engineering effort to design high-performing neural architectures. By contrast, biological nervous systems are grown through a dynamic self-organizing process. In this paper, we take initial steps toward neural networks that grow through a developmental process that mirrors key properties of embryonic development in biological organisms. The growth process is guided by another neural network, which we call a Neural Developmental Program (NDP) and which operates through local communication alone. We investigate the role of neural growth on different machine learning benchmarks and different optimization methods (evolutionary training, online RL, offline RL, and supervised learning). Additionally, we highlight future research directions and opportunities enabled by having self-organization driving the growth of neural networks.
Elias Najarro, Shyam Sudhakaran, Sebastian Risi
2023-07-17T01:58:52Z
http://arxiv.org/abs/2307.08197v1
# Towards Self-Assembling Artificial Neural Networks through ###### Abstract Biological nervous systems are created in a fundamentally different way than current artificial neural networks. Despite its impressive results in a variety of different domains, deep learning often requires considerable engineering effort to design high-performing neural architectures. By contrast, biological nervous systems are grown through a dynamic self-organizing process. In this paper, we take initial steps toward neural networks that grow through a developmental process that mirrors key properties of embryonic development in biological organisms. The growth process is guided by another neural network, which we call a _Neural Developmental Program_ (\(\mathcal{NDP}\)) and which operates through local communication alone. We investigate the role of neural growth on different machine learning benchmarks and different optimization methods (evolutionary training, online RL, offline RL, and supervised learning). Additionally, we highlight future research directions and opportunities enabled by having self-organization driving the growth of neural networks. ## Introduction The study of neural networks has been a topic of great interest in the field of artificial intelligence due to their ability to perform complex computations with remarkable efficiency. However, despite significant advancements in the development of neural networks, the majority of them lack the ability to self-organize, grow, and adapt to new situations in the same way that biological neurons do. Instead, their structure is often hand-designed, and learning in these systems is restricted to the optimization of connection weights. Biological networks on the other hand, self-assemble and grow from an initial single cell. Additionally, the amount of information it takes to specify the wiring of a sophisticated biological brain directly is far greater than the information stored in the genome [1]. Instead of storing a specific configuration of synapses, the genome encodes a much smaller number of rules that govern how to grow a brain through a local and self-organizing process [2]. For example, the 100 trillion neural connections in the human brain are encoded by only around 30 thousand active genes. This outstanding compression has also been called the "genomic bottleneck" [2], and neuroscience suggests that this limited capacity has a regularizing effect that results in wiring and plasticity rules that generalize well. In this paper, we take first steps in investigating the role of developmental and self-organizing algorithms in growing neural networks instead of manually designing them, which is an underrepresented research area [1, 1, 13, 14]. Even simple models of development such as cellular automata demonstrate that growth (i.e. unfolding of information over time) can be crucial to determining the final state of a system, which can not directly be calculated [15]. The grand vision is to create a system in which neurons self-assemble, grow, and adapt, based on the task at hand. Towards this goal, we present a graph neural network type of encoding, in which the growth of a policy network (i.e. the neural network controlling the actions of an agent) is controlled by another network running in each neuron, which we call a _Neural Developmental Program_ (\(\mathcal{NDP}\)). 
The \(\mathcal{NDP}\) takes as input information from the connected neurons in the policy network and decides if a neuron should replicate and how each connection in the network should set its weight. Starting from a single neuron, the approach grows a functional policy network, solely based on the local communication of neurons. Our approach is different from methods like NEAT [15] that grow neural networks during evolution, by growing networks during the lifetime of the agent. While not implemented in the current \(\mathcal{NDP}\) version, this will ultimately allow the neural network of the agents to be shaped based on their experience and environment. While indirect genome-to-phenotype encodings such as CPPN-based approaches [2] or Hypernetworks [1] have had great success, they often purposely abstracted away development and the process of self-organizational growth. However, in nature, these abilities seem essential in enabling the remarkable robustness to perturbations and unexpected changes [1, 13]. Allowing each neuron in an artificial neural network to act as an autonomous agent in a decentral ized way similar to their biological counterpart Hiesinger (2018), could enable our AI methods to overcome some of their current limitations in terms of robustness. Since the space of possible \(\mathcal{NDP}\) representations is large, we explore two different representations and different training methods such as evolutionary and gradient-descent based. While lacking state-of-the-art performance, our method can learn to grow networks and policies that can perform competitively, opening up interesting future work in growing and developmental deep neural networks. Our goal is to inspire researchers to explore the potential of \(\mathcal{NDP}\)-like methods as a new paradigm for self-assembling artificial neural networks. Overall, this work represents a step towards the development of more biologically inspired developmental encodings, which have the potential to overcome some of the limitations of current deep learning approaches. ## Background and Related Work ### Indirect encodings Indirect encodings are inspired by the biological process of mapping a compact genotype to a larger phenotype and have been primarily studied in the context of neuroevolution Floreano et al. (2008) (i.e. evolving neural networks) but more recently were also optimized through gradient-descent based approaches Ha et al. (2016). In indirect encodings, the description of the solution is compressed, allowing information to be reused and the final solution to contain more components than the description itself. Even before the success of deep RL, these methods enabled solving challenging car navigation tasks from pixels alone Koutnik et al. (2013). A highly influential indirect encoding is HyperNEAT Stanley et al. (2009). HyperNEAT employs an indirect encoding called compositional pattern producing networks (CPPNs) that _abstracts away the process of growth_ and instead describes the connectivity of a neural network through a function of its geometry; an extension called Evolvable-substrate HyperNEAT Risi and Stanley (2012) allowed neural architectures to be discovered automatically but did not involve any self-organizing process. More recently, Hypernetworks Ha et al. (2016) demonstrated that networks generating the weights of another network can also be trained end-to-end through gradient descent. Hypernetworks have been shown to be able to generate competitive convolutional neural networks (CNNs) Zhmoginov et al. 
(2022) and recurrent neural networks (RNNs) in a variety of tasks while using a smaller set of trainable parameters. However, neither HyperNEAT nor Hypernetworks make use of the process of development over time, which can increase the evolvability of artificial agents Kriegman et al. (2017); Bongard (2011) and is an important ingredient of biological systems Hiesinger (2021). Developmental EncodingsDevelopmental encodings are a particular family of indirect encodings. They are abstractions of the developmental process that allowed nature to produce complex artifacts through a process of growth and local interactions between cells, ranging from the low level of cell chemistry simulations to high-level grammatical rewrite systems Stanley and Miikkulainen (2003), and neurogenesis approaches Miller (2022); Maile et al. (2022); Tran et al. (2022); Huang et al. (2023). Approaches with Figure 1: **Neural Developmental Program approach for growing neural network. Each node state \(s\) is represented as an embedding vector. During the information aggregation phase, the graph propagates each node state \(s\) to its neighbors for \(n\) steps. Based on the updated nodes embedding \(\hat{s}_{t+n}\), the replication model —implemented as a MLP— determines which nodes will grow new nodes. Finally, if the target network is not unweighted, another MLP estimates the edge weights of each pair of nodes based on their concatenated embeddings; otherwise, edges are assigned a unitary weight. The resulting network is evaluated through an objective function, i.e. solving a task or having certain topological properties. The \(\mathcal{NDP}\) is a distributed model that operates purely on local information.** neural networks that can grow are a largely under-explored area [1, 13, 14, 15] because these algorithms are either not expressive enough or not efficiently searchable. Recently, cellular automata (CA) had a resurgence of interest as a model of biological development. CA are a class of computational models whose outcomes emerge from the local interactions of simple rules. Introduced by Neumann et al. (1966) as a part of his quest to build a self-replicating machine or universal constructor, a CA consist of a lattice of computing cells that iteratively update their states based exclusively on their own state and the states of their local neighbors. On a classical CA, each cell's state is represented by an integer and adjacent cells are considered neighbors. Critically, the update rule of a cellular automaton is identical for all the cells. Neural cellular automata (NCA) differ from classical cellular automata (CA) models by replacing the CA update function with an optimized neural network [14, 15]. Recently, this approach has been extended to grow complex 3D entities such as castles, apartment blocks, and trees in a video game environment [13]. A recent method called HyperNCA, extends NCA to grow a 3D pattern, which is then mapped to the weight matrix of a policy network [16]. While working well in different reinforcement learning tasks, the mapping from the grown 3D pattern to policy weight matrix did not take the topology of the network into account. Instead of a grid-like HyperNCA approach, the method presented in this paper extends NCA to directly operate on the policy graph itself and should thus also allow more flexibility in the types of architectures that can be grown. 
### Distribution-fitting approaches to evolving graphs Previous work has explored the emerging topologies of the different growth processes [1] and shown that they can reproduce real networks by fitting the parameters of the distribution from which new nodes are sampled. In contrast to our method, the growth processes in previous network-theory approaches do not depend on the internal state of the graph, and therefore do not make use of the developmental aspect of the network to achieve the target topological properties. ### Approach: Growing Neural Networks through Neural Developmental Programs This section presents the two different Neural Developmental Program instantiations we are exploring in this paper: (1) an evolution-based \(\mathcal{NDP}\) and (2) a differentiable version trained with gradient descent-based. While an evolutionary version allows us to more easily explore different architectures without having to worry about their differentiability, gradient descent-based architectures can often be more sample efficient, allow scaling to higher dimensions, and enable approaches such as offline reinforcement learning. Code implementations will be available soon at: [https://github.com/enajx/NDP](https://github.com/enajx/NDP). ### Evolutionary-based \(\mathcal{NDP}\) The \(\mathcal{NDP}\) consists of a series of developmental cycles applied to an initial seeding graph; our experiments always use a seeding graph consisting of a single node or a minimal network connecting the neural network's inputs directly to its outputs. Each of the nodes of the graph has an internal state represented as \(n-\)dimensional latent vector whose values are updated during the developmental process through local communication. The node state-vectors --or embeddings-- encode the cells' states and are used by the \(\mathcal{NDP}\) to determine which nodes will duplicate to make new nodes. Similarly to how most cells in biological organisms contain the same program in the form of DNA, each node's growth and the synaptic weights are controlled by a copy of the same \(\mathcal{NDP}\), resulting in a distributed self-organized process that incentives the reuse of information. An overview of the approach is shown in Fig. 1. The \(\mathcal{NDP}\) architecture consists of a Multilayer Perceptron (MLP) --acting as a Graph Cellular Automata (GNCA) [12]-- which updates the node embeddings after each message-passing step during the developmental phase. Subsequently, a replication model in the form of a second MLP queries each node state and predicts whether a new node should be added; if so, a new node is connected to the parent node and its immediate neighbors. Finally, if the target network is weighted, a third MLP determines the edge weights based on the concatenation of each pair of node embeddings. The grown network can now be evaluated on the task at hand by assigning a subset of nodes as the input nodes and another subset as the output nodes. In our case, we select the first --and last-- rows of the adjacency matrix representing the network to act as input -- and output-- nodes, respectively. During evaluation the activations of the nodes are scalars (that is, \(\mathbb{R}^{1}\) instead of the \(\mathbb{R}^{n}\) vectors used during development), and all node activations are initialized to zero. 
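To make the growth cycle just described concrete, below is a minimal NumPy sketch of one developmental cycle of the evolutionary \(\mathcal{NDP}\): a GNCA-style MLP updates node embeddings from local neighborhood information, a replication MLP decides which nodes spawn children (connected to the parent and its immediate neighbors), and a third MLP assigns edge weights from concatenated embeddings. All function names, dimensions, and initializations here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hid, n_out):
    """Single-hidden-layer MLP parameters; these are the (evolvable) NDP weights."""
    return (rng.normal(0, 0.1, (n_in, n_hid)), np.zeros(n_hid),
            rng.normal(0, 0.1, (n_hid, n_out)), np.zeros(n_out))

def mlp(params, x):
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

D = 4                                     # node-embedding dimension (illustrative)
gnca      = init_mlp(2 * D, 16, D)        # updates a node from (own state, neighbor mean)
replicate = init_mlp(D, 16, 1)            # decides whether a node is in a "growing" state
edge_mlp  = init_mlp(2 * D, 16, 1)        # maps a pair of embeddings to an edge weight

def developmental_cycle(E, A):
    """One growth cycle on node embeddings E (n x D) and adjacency A (n x n)."""
    # Local communication: each node sees the mean embedding of its neighbors.
    deg = np.maximum(A.sum(1, keepdims=True), 1.0)
    E = E + mlp(gnca, np.concatenate([E, (A @ E) / deg], axis=1))

    # Replication: growing nodes spawn a child wired to the parent and its neighbors.
    growing = np.flatnonzero(mlp(replicate, E)[:, 0] > 0.0)
    for parent in growing:
        E = np.vstack([E, E[parent:parent + 1]])       # child inherits the parent state
        A = np.pad(A, ((0, 1), (0, 1)))
        new = A.shape[0] - 1
        for j in np.append(np.flatnonzero(A[parent, :new]), parent):
            A[new, j] = A[j, new] = 1.0

    # Weighted edges: the edge MLP scores every connected pair of embeddings.
    W = np.zeros_like(A)
    for i, j in zip(*np.nonzero(A)):
        W[i, j] = mlp(edge_mlp, np.concatenate([E[i], E[j]])[None, :])[0, 0]
    return E, A, W

# Grow from a single seed node for a few cycles; W would then be evaluated as a policy.
# With random NDP parameters the graph may or may not grow; training shapes this behavior.
E, A = rng.normal(size=(1, D)), np.zeros((1, 1))
for _ in range(4):
    E, A, W = developmental_cycle(E, A)
print(f"grown graph: {A.shape[0]} nodes, {int(np.count_nonzero(A))} edge entries")
```

Because the same small MLPs are applied at every node, the number of trainable parameters in such a sketch stays fixed regardless of how large the grown graph becomes.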
We refer to the \(\mathcal{NDP}\) as the set of these MLPs, which are identical for each cell in the policy network; in order to keep the number of parameters of the \(\mathcal{NDP}\) low, the reported experiments make use of small MLPs with a single hidden layer. However, it is worth noting that because the \(\mathcal{NDP}\) is a distributed model (i.e. the same models are being applied to every node), the number of parameters is constant with respect to the size of the graph in which it operates. Therefore, any neural network of arbitrary size or architecture could be used, provided that it was deemed necessary to grow a more complex graph. The \(\mathcal{NDP}\)'s neural networks can be trained with any black-box optimization algorithm to satisfy any objective function. In this paper, we demonstrate how the approach allows us to grow neural networks capable of solving reinforcement learning and classification tasks, or exhibiting some topological properties such as small-worldness. The pseudocode of the approach is detailed in Algorithm 1.

```
Input:  Replication model \(\mathcal{R}\), Graph Cellular Automata \(\mathcal{GNCA}\), Weight update model \(\mathcal{W}\),
        number of developmental cycles \(\mathcal{C}\), pruning threshold \(\mathcal{P}\),
        number of training generations \(\mathcal{T}\), training hyper-parameters \(\Omega\)
Output: Developmental program producing graph \(\mathcal{G}\) that minimises/maximises \(\mathcal{F}\)
 1  Co-evolve or sample random embedding \(E_{N=0}\) for the initial node \(N_{0}\) of \(\mathcal{G}\);
 2  for generation in \(\mathcal{T}\) do
 3      for developmental cycle in \(\mathcal{C}\) do
 4          Compute network diameter \(\mathtt{D}\);
 5          Propagate node states \(E_{N}\) via graph convolution for \(\mathtt{D}\) steps;
 6          Replication model \(\mathcal{R}\) determines nodes in growing state;
 7          New nodes are added to each of the growing nodes and their immediate neighbors;
 8          New nodes' embeddings are defined as the mean embedding of their parent nodes;
 9          if weighted network then
10              Weight update model \(\mathcal{W}\) updates connectivity for each pair of nodes based on their concatenated embeddings;
11          if pruning then
12              Edges with weights below pruning threshold \(\mathcal{P}\) are removed;
13      Evaluate objective \(\mathcal{F}\) of grown graph \(\mathcal{G}\);
14      Use \(\mathcal{F}(\mathcal{G})\) to guide optimisation;
```
**Algorithm 1** Neural Developmental Program \(\mathcal{NDP}\): non-differentiable version

### Gradient-based \(\mathcal{NDP}\)

The gradient-based growth process is similar to the evolutionary approach, albeit with some additional constraints due to its requirement of complete differentiability in the process. In contrast to the evolutionary approach, the grown networks are exclusively feedforward networks, where information is iteratively transmitted through message passing via the topologically sorted nodes. Each node has a bias value and an activation that is applied when the incoming nodes' information is aggregated, similar to the node behavior in e.g. NEAT [20]. Like in the evolutionary approach, cells' states are represented as vectors. However, instead of all values of each cell's state vector being treated as intractable black-boxes for the network to store information, here the first and second elements of the vectors have pre-defined roles: the first element represents the bias value and the second encodes the activation. The remaining elements encode hidden states for each cell, capturing temporal information used to guide the developmental process.
These cells are passed into a message-passing network, implemented as a graph Convolution [17], where a neighborhood of cell embeddings, characterized as being 1 edge away are linearly projected and added to create new embeddings. In addition to using a message passing network to update nodes, we use a trainable small 2-layer MLP, with tanh activation, to predict edge weights using pairs of (source, destination) nodes. The message-passing network is composed of a GraphConv layer that outputs vectors of size 32, followed by a Tanh activation, followed by a linear layer that maps the size vectors to the original cell embeddings. In order to add more nodes, we treat a channel in each cell as a _replication probability_, which is used to sample cells and connect to a random node further into the network. This process happens every other growth step. Detailed replication process: (1) A replication network, implemented as a separate graph convolution, is applied on cells to output replication probabilities. (2) A cell is sampled from these probabilities and is passed into a perturbation network to get a new cell, and an incoming edge is connected from the parent to the new cell. An outgoing edge is connected to a random child (found further in the network). After the new node is added to the network, we update the edges using an MLP that takes in incoming node and outgoing node pairs and outputs a new edge weight. We initialize the starting network by fully connecting trainable input cell embeddings and output cell embeddings, which we found to work better for our gradient-based \(\mathcal{NDP}\) than starting with a single node. For example, if input = 4 and output = 2, then each input node will be connected to an output node, resulting in 4\(\times\)2 initial edges. The pseudocode of this approach is detailed in Algorithm 2. ## Experiments We test the \(\mathcal{NDP}\) approach on generating networks for classification tasks such as _MNIST_, boolean gates such as _XOR_ gate and reinforcement learning tasks with both continuous and discrete action spaces. The RL tasks include _CartPole_, a classical control task with non-linear dynamics, and _LunarLander_, a discrete task where the goal is to smoothly land on a procedurally generated terrain. We also tackle an offline RL task using Behavioral Cloning (BC) [17], which is _HalfCheetah_, a canine-like robot task. The different environments are shown in Fig. 2. The _CartPole_ environment provides observations with 4 dimensions, _LunarLander_ has 8, MNIST has 64, and HalfCheetah has 17. _CartPole_ has 2 actions, _LunarLander_ has 4 actions, _MNIST_ has 10 classes to predict, and _HalfCheetah_ has a continuous action of dimension 7. ### Evolutionary Optimization Details We use CMA-ES -- Covariance Matrix Adaptation Evolution Strategy -- [1], a black-box population-based optimisation algorithm to train the \(\mathcal{NDP}\). Evolutionary strategies (ES) algorithms have been shown to be capable of finding high-performing solutions for reinforcement learning tasks, both directly optimising the policy weights [10], and with developmental encodings [10]. Black-box methods such as CMA-ES have the advantage of not requiring to compute gradients and being easily parallelizable. Experiments have been run on a single machine with a _AMD Ryzen Threadripper 3990X_ CPU with \(64\) cores. We choose a population size of \(512\), and an initial variance of \(0.1\). 
Finally, we employ an early stopping method which stops and resets training runs that show poor performance after a few hundred generations. ### Differentiable Optimization Details We use the Pytorch Geometric library to implement our \(\mathcal{NDP}\) as well as our grown networks, enabling us to backpropagate a loss using predictions from the grown networks all the way back to the parameters of the \(\mathcal{NDP}\). We are also able to leverage GPUs for the forward and backward passes. Experiments have been run on a single machine with a _NVIDIA 2080ti_ GPU. We use the Adam Optimizer [11] to optimize the \(\mathcal{NDP}\)'s trainable parameters. Online RL with PPOBecause we can take advantage of backpropagation with the differentiable \(\mathcal{NDP}\) approach, we can utilize reinforcement learning algorithms, specifically Proximal Policy Optimization (PPO) [12], to grow optimal policies. Implementations of PPO typically use separate networks / shared networks for critic and actor heads. In our implementations, we simply treat the critic output as a separate node that is initially fully connected to all the input nodes. In our implementation, we use a learning rate of 0.0005, an entropy coefficient of 0.001, and optimize for 30 steps after each rollout is collected. We train for 10,000 rollouts and record the average reward from 10 episodes in Tables (a)a and (b)b. Offline RL with Behavioral CloningIn Offline RL, instead of gathering data from an environment, we only have access to a dataset of trajectories. This setting is challenging but also easier because we can avoid the noisy training process of Online RL. In our approach, we utilize Behavioral Cloning (BC) to optimize our \(\mathcal{NDP}\) to grow high-performing policies. We use a learning rate of 0.0001 and a batch size of 32 observations. We train for 10000 rollouts and record the average reward from 10 episodes in Table (b)b. Supervised LearningWe evaluate the supervised learning capabilities of our differentiable \(\mathcal{NDP}\) with the classic _MNIST_ task, where a small (8\(\times\)8) image is classified as a digit between 0-9. We use a learning rate of 0.001 and a batch size of 32 observations. We train for 10000 iterations and record the test accuracy in Table (a)a. Fruchterman-Reingold force-directed algorithm. Training curves are shown in Fig. 3. **Growing policy network for RL tasks.** We trained a \(\mathcal{NDP}\) with 162 parameters for the CartPole tasks, in which it has to grow a policy network controlling the force applied to a pole to keep it balanced. It grew an undirected network with 10 nodes and 33 weighted edges that reached a reward of \(500\pm 0\) over 100 rollouts (Fig. 3). This score is considered as solving the task. The growth process of the network from a single node to the final policy can be seen in Fig. 5. Similarly, we trained a \(\mathcal{NDP}\) to grow a network policy to solve the Lunar Lander control task (Fig. 4, right). In this case, a \(\mathcal{NDP}\) with 868 trainable parameters grew an undirected network policy with 16 nodes and 78 weighted edges. Over 100 rollouts, the mean reward obtained is \(116\pm 124\). Although the resulting policy controller could solve the task in many of the rollouts, the stochasticity of the environment (e.g. changing the landing surface at each instantiation) resulted in a high reward variance. This means that the grown network did not quite reach the 200 reward over 100 rollouts score that is considered as the task's solving criterion. 
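As a rough illustration of the evolutionary training setup described above (CMA-ES with a population size of 512 and an initial variance of 0.1), the loop below shows how flattened \(\mathcal{NDP}\) parameters could be optimized with the `cma` package. The fitness function here is a toy quadratic stand-in so the sketch runs end-to-end; in the actual setup it would grow a policy through the developmental cycles and average the reward of a handful of task rollouts. The parameter count and options are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import cma

N_PARAMS = 162  # e.g. the CartPole NDP size reported above

def fitness(flat_params):
    # Stand-in objective. In a real run: grow a policy network with the NDP from
    # these parameters, roll it out for e.g. 10 episodes, and return the negated
    # mean reward (CMA-ES minimizes).
    return float(np.sum(flat_params ** 2))

es = cma.CMAEvolutionStrategy(np.zeros(N_PARAMS), 0.1, {"popsize": 512, "seed": 1})
for generation in range(50):
    candidates = es.ask()
    es.tell(candidates, [fitness(np.asarray(c)) for c in candidates])
    es.disp()
    if es.stop():
        break

best_ndp_params = es.result.xbest
```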
**Growing small-world topologies.** Real networks found in nature such as biological neural networks, ecological networks or social networks are not simple random networks but rather their graphs have very specific topological properties. In order to analyze the ability of the neural developmental program to grow complex graph motifs, we train the \(\mathcal{NDP}\) to grow a small-world network (Watts and Strogatz, 1998). A small-world network is characterised by a small average shortest path length but a relatively large clustering coefficient. We can quantify this with two metrics, \(\sigma=\frac{C/C_{r}}{L/L_{r}}\) and \(\omega=\frac{L_{r}}{L}-\frac{C}{C_{l}}\), where \(C\) and \(L\) are respectively the average clustering coefficient and average shortest path length of the network. \(C_{r}\) and \(L_{r}\) are respectively the average clustering coefficient and average shortest path length of an equivalent random graph. A graph is commonly classified as small-world if \(\sigma>1\) or, equivalently, \(\omega\approx 0\). We show that the \(\mathcal{NDP}\) technique can grow a graph with small-world coefficients \(\sigma\approx 1.27\) and \(\omega\approx-1.11\times 10^{-16}\), hence satisfying the small-worldness criteria. An example network is shown in Fig. 6. While these results are promising, further experiments are required --notably on bigger graphs-- to investigate with solid statistical significance the potential of the method to grow graphs with arbitrary topological properties.

Figure 2: Test domains in this paper include both reinforcement and supervised tasks.

Figure 3: Fitness during evolution on the XOR, LunarLander and CartPole tasks. Gray areas show standard deviation at each generation of the CMA-ES sampled solutions.

Figure 4: Grown networks solving the XOR gate task (left) and the Lunar Lander task (right). Red nodes behave as sensory neurons, white nodes are interneurons, and blue ones are the _output_ neurons that determine the actions.

## Gradient-Based Results

We evaluate the differentiable \(\mathcal{NDP}\) by comparing models that are trained and tested on different numbers of growth steps ("Growth Steps" column in Table 1). It seems that for most tasks, after a certain number of growth steps, the grown network's performance can deteriorate, as policies do not really benefit from more complexity. This could also be due to the simplicity constraint of grown architectures, making them unable to take advantage of new nodes as the networks get larger. Automatically learning when to stop growing will be an important addition to the \(\mathcal{NDP}\) approach. The differentiable \(\mathcal{NDP}\) reaches comparable performance to the evolutionary-based version on CartPole (Table 1a) and LunarLander (Table 1b). An example of the growth steps for the LunarLander task is shown in Figure 7. In the offline RL setting, the NDP is able to get a mean reward of 29 on the HalfCheetah task (Table 2b), which is satisfactory but lower compared to the performance of 43.1 achieved by behavioral cloning in (Chen et al., 2021). We are also able to scale up to larger networks for tasks like MNIST, which uses an 8\(\times\)8 image and reaches a test accuracy of 91% (Table 2a). The sequence of an MNIST network growing is shown in Figure 8.

## Discussion and Future Work

We introduced the idea of a neural developmental program and two instantiations of it, one evolutionary-based and one gradient-descent-based.
We showed the feasibility of the approach in continuous control problems and in growing topologies with particular properties such as small-worldness. While the approach is able to solve these simple domains, many future work directions remain to be explored. For example, the current \(\mathcal{NDP}\) version does not include any activity-dependent growth, i.e. it will grow the same network independently of the incoming activations that the agent receives during its lifetime. However, biological nervous systems often rely on both activity-dependent and activity-independent growth; activity-dependent neural development enables biological systems to shape their nervous system depending on the environment. Similar mechanisms also form the basis for the formation of new memories and learning. In the future, we will extend the \(\mathcal{NDP}\) with the ability to also incorporate activity-dependent and reward-modulated growth and adaptation.

Table 1: **Differentiable NDP: Online RL Results.** Mean reward is calculated over 10 test episodes after 10,000 rollouts were collected. Networks are trained with a specific number of growth steps, as shown in the "growth steps" column.

Figure 5: Developmental growth of the network capable of solving the Cart Pole balancing task. The network starts as a single node and grows to a network of size 2, 4, 5, and finally 10 neurons and 33 weighted edges after 4 growth cycles. Red nodes behave as sensory neurons, white nodes as interneurons, and blue ones act as the _output_ neurons that determine the actions of the cartpole. The vectors displayed above the neurons are the node embeddings, which represent the state of each node during the growth process.

Figure 6: Grown Small-World network. In this experiment, the \(\mathcal{NDP}\) seeks to maximise the coefficients used to capture the small-worldness criteria of a graph. Unlike the networks grown to act as policies, this network is unweighted.

While a developmental encoding offers the possibility to encode large networks with a much smaller genotype, the \(\mathcal{NDP}\)s in this paper are in fact often larger than the resulting policy networks. However, by running the developmental process longer, it is certainly possible to ultimately grow policy networks with a larger number of parameters than the underlying \(\mathcal{NDP}\). However, as the results in this paper suggest, growing larger policy networks than necessary for the tasks can have detrimental effects (Table 1b), so it will be important to also learn when to stop growing. The exact interplay between genome size, developmental steps, and task performance constitutes important future work. We will additionally extend the approach to more complex domains and study in more detail the effects of growth and self-organization on the type of neural networks that evolution is able to discover. \(\mathcal{NDP}s\) offer a unifying principle that has the potential to capture many of the properties that are important for biological intelligence to thrive (Versace et al., 2018; Kudithipudi et al., 2022). While innate structures in biological nervous systems have greatly inspired AI approaches (e.g. convolutional architectures being the most prominent), how evolution discovered such wiring diagrams and how they are grown through a genomic bottleneck are questions rarely addressed.
In the future, \(\mathcal{NDP}s\) could consolidate a different pathway for training neural networks and lead to new methodologies for developing AI systems, beyond training and fine-tuning.

## Acknowledgments

This project was supported by a GoodAI Research Award and a European Research Council (ERC) grant (GA no. 101045094, project "GROW-AI").

Table 2: **Differentiable \(\mathcal{NDP}\)**: Supervised Learning and Offline RL tasks. Test accuracy for MNIST calculated after 10,000 epochs. Mean reward for HalfCheetah calculated over 10 test episodes after 10,000 epochs.

Figure 7: **Differentiable \(\mathcal{NDP}\)**: Lunar Lander network policy growth over 64 steps. Red nodes are input nodes, blue nodes are output nodes, white nodes are hidden nodes.

Figure 8: **Differentiable \(\mathcal{NDP}\)**: MNIST network growth over 64 steps.
2308.07421
U-Turn Diffusion
We explore diffusion models of AI, which consist of a forward noise-injecting process and a reverse de-noising process, to understand how they encode information about the Ground Truth (GT) samples in the score-function. Our observations indicate that the most essential information is stored primarily during the early stages of the forward process. Consequently, we propose the U-turn diffusion model, which modifies the traditional approach by shortening the duration of both the forward process and the subsequent reverse dynamics, starting from the final configuration of the forward process. To determine the optimal moment for the U-turn, ensuring that synthetic samples generated at the end of the reverse process are independently and identically distributed (i.i.d.) according to the probability distribution implicitly represented by the GT samples, we utilize various analytical tools, including auto-correlation analysis and the Kolmogorov-Smirnov Gaussianity test. Our experiments with the ImageNet demonstrate that the U-turn diffusion model achieves state-of-the-art Fr\'echet Inception Distance scores with fewer Neural Function Evaluations. Notably, we achieve a 1.35-fold speed-up in inference without the need for retraining.
Hamidreza Behjoo, Michael Chertkov
2023-08-14T19:21:28Z
http://arxiv.org/abs/2308.07421v2
# U-Turn Diffusion ###### Abstract We present a comprehensive examination of score-based diffusion models of AI for generating synthetic images. These models hinge upon a dynamic auxiliary time mechanism driven by stochastic differential equations, wherein the score function is acquired from input images. Our investigation unveils a criterion for evaluating efficiency of the score-based diffusion models: the power of the generative process depends on the ability to de-construct fast correlations during the reverse/de-noising phase. To improve the quality of the produced synthetic images, we introduce an approach coined "U-Turn Diffusion". The U-Turn Diffusion technique starts with the standard forward diffusion process, albeit with a condensed duration compared to conventional settings. Subsequently, we execute the standard reverse dynamics, initialized with the concluding configuration from the forward process. This U-Turn Diffusion procedure, combining forward, U-turn, and reverse processes, creates a synthetic image approximating an independent and identically distributed (i.i.d.) sample from the probability distribution implicitly described via input samples. To analyze relevant time scales we employ various analytical tools, including auto-correlation analysis, weighted norm of the score-function analysis, and Kolmogorov-Smirnov Gaussianity test. The tools guide us to establishing that analysis of the Kernel Intersection Distance, a metric comparing the quality of synthetic samples with real data samples, reveals the optimal U-turn time. ## 1 Introduction The fundamental mechanics of Artificial Intelligence (AI) encompass a three-step process: acquiring data, modeling it, and then predicting or inferring based on the constructed model. Culumination of this process is the generation of synthetic data, which serves as a core component of the prediction step. Synthetic data holds the potential to augment information in various ways. It achieves this by leveraging model-derived conjectures to enrich the data's complexity and structure. In particular, Score Based Diffusion (SBD) models [22, 14] have emerged as a highly successful paradigm in this context. The foundation of the SBD models success rests on the notion that their inherent structure extracts a substantial amount of information from the data. The essence of SBD models is deeply rooted in the concepts that the reality represented via data can emerge from noise or chaos, suggesting a process akin to de-noising (reverse part of the SBD dynamics), and that the introduction of diffusion can disrupt existing order within data (direct part of the SBD dynamics). These fundamental principles underlie the audacious approach of building generative models upon these very principles. While the achievements of SBD models are impressive, they are not universally successful. Instances where barriers are significant, referred to colloquially in physics jargon as "glassy" scenarios, may necessitate the graceful addition of diffusion, compelling the extension of SBD model runtime for better performance. Our overarching objective revolves around gaining insights into existing successful SBD models and further enhancing their capabilities. We methodically approach this goal by breaking it down into steps. However, in this manuscript our primary focus resides not in refining the diffusion component of the model. Instead, we presume this component as given to us, as already developed and documented in prior works (e.g., [23, 14, 15]). 
Then our attention centers on comprehending temporal correlations within both the diffusion process (forward part of SBD) and the denoising/reconstruction process (reverse part of SBD). A pricipal outcome of our analysis of temporal correlations is a fundamental realization concerning the optimal termination point of the forward process, i.e. of the U-Turn point. This culminates in the proposal of a novel algorithm termed "U-Turn Diffusion." This algorithm provides guidance on when to pivot from the direct to the reverse process. Moreover, we naturally initialize the reverse process after the U-turn with the last configuration of the forward process. In summary, this manuscript presents a comprehensive exploration of the dynamics of SBD models, delving into details of the temporal correlations that underpin their success. Our insights not only enhance the understanding of these models but also lay the foundation for the development of novel techniques, such as the U-Turn Diffusion algorithm, which promises to further elevate the capabilities of SBD-based generative modeling. The manuscript is structured as follows: In Section 2, we provide a technical introduction, laying the foundation by outlining the construction of SBD models. Section 3 forms the first original contribution of this work, encompassing an extensive correlation analysis. We delve into two-time auto-correlation functions of the SBD, establishing relevant time scales. Additionally, we identify the emergence of similar time scales in single-time tests of (a) the average 2-norm of the score-function and (b) the Kolmogorov-Smirnov criterion for Gaussianity. This section reaches its climax with the proposal of the U-Turn diffusion algorithm, discussed in Section 4. Our manuscript concludes by summarizing findings and outlining future directions for research in Section 5. ## 2 Technical Introduction: Setting the Stage Within this manuscript, we embrace the Score-Based Diffusion (SBD) framework, as expounded in [22]. The SBD harmoniously integrates the principles underlying the "Denoising Diffusion Probabilistic Modeling" framework introduced in [10] and subsequently refined in [11], along with the "Score Matching with Langevin Dynamics" approach introduced by [22]. This seamless integration facilitates the reformulation of the problem using the language of stochastic differential equations, paving the way to harness the Anderson's Theorem [1]. As elucidated in the following, this theorem assumes a principal role in constructing a conduit linking the forward and reverse diffusion processes. Let us follow [22] and introduce the forward-in-time Stochastic Ordinary Differential Equation (SDE): \[d\textbf{x}_{t}=f(\textbf{x}_{t},t)dt+G(\textbf{x}_{t},t)d\textbf{w}_{t}, \tag{1}\] and another reverse-in-time SDE: \[d\textbf{x}_{t}=\left(f(\textbf{x}_{t},t)-G(\textbf{x}_{t},t)G( \textbf{x}_{t},t)^{T}\left(\nabla\textbf{x}_{t}\log\left(p_{t}(\textbf{x}_{t} )\right)\right)\right)dt\] \[+G(\textbf{x}_{t},t)d\bar{\textbf{w}}_{t}, \tag{2}\] where the drift/advection \(f:\mathbb{R}^{n_{x}}\times\mathbb{R}\rightarrow\mathbb{R}^{n_{x}}\) and diffusion \(G:\mathbb{R}^{n_{x}}\times\mathbb{R}\rightarrow\mathbb{R}^{n_{x}}\times \mathbb{R}^{n_{x}}\) are sufficiently smooth (Lipschitz functions). Additionally, we assume the existence of a well-defined initial distribution \(p_{0}(\cdot)\) represented by data (samples), and both forward and backward processes are subject to Ito-regularization. 
The Wiener processes \(\textbf{w}_{t}\) and \(\bar{\textbf{w}}_{t}\) represent standard Wiener processes for forward and reverse in time, respectively. Anderson's theorem establishes that the forward-in-time process and the reverse-in-time process have the same marginal probability distribution, denoted by \(p_{t}(\cdot)\). _Remark._ The proof of Anderson's Theorem relies on the equivalence of the Fokker-Planck equations derived for the direct (1) and inverse (2) dynamics: \[\partial_{t}p_{t}(\textbf{x})-\nabla_{\textbf{x}}\left(f(\textbf{x},t)p_{t}( \textbf{x})\right)= \frac{1}{2}\nabla_{\textbf{x}}\!\left(\!G(\textbf{x},t)G(\textbf{x},t)^{T} \nabla_{\textbf{x}}p_{t}(\textbf{x})\!\right),\] \[\partial_{t}p_{t}(\textbf{x})-\nabla_{\textbf{x}}\left(f(\textbf{x},t)p_{t}( \textbf{x})\right)+\] \[\nabla_{\textbf{x}}\left(G(\textbf{x},t)G(\textbf{x},t)^{T}\left(\nabla_{ \textbf{x}}\log\left(p_{t}(\textbf{x})\right)\right)\right)p_{t}(\textbf{x})\] where \(p_{t}(\cdot)\) is the marginal probability distribution of **x** which is equivalent for the forward and inverse processes (by construction). The forward diffusion process transforms the _initial distribution_\(p_{0}(\cdot)\), represented by samples, into a _final distribution_\(p_{T}(\cdot)\) at time \(T\). The terms \(f(\textbf{x},t)\) and \(G(\textbf{x},t)\) in the SDE are free to choose, but in the SBD approach, they are selected in a data-independent manner such that \(p_{T}(\cdot)\) converges to \(\pi(\cdot)=\mathcal{N}(\cdot,\textbf{0},\textbf{I})\) as \(T\) approaches infinity. This convergence ensures that the generated samples align with a target distribution, typically the standard normal distribution \(\mathcal{N}(\cdot,\textbf{0},\textbf{I})\). **Inference**, which involves generating new samples from the distribution represented by the data, entails initializing the reverse process (2) at \(t=T\) (large but finite) with a sample drawn from \(\pi(\cdot)\), and then running the process backward in time to reach the desired result at \(t=0\). This operation requires accessing the so-called _score function_\(s(\textbf{x},t)=(\nabla_{\textbf{x}}\log(p_{t}(\textbf{x}))\), as indicated in Eq. (2). However, practically obtaining the exact time-dependent score function is challenging. Therefore, we resort to approximating it with a Neural Network (NN) parameterized by a vector of parameters \(\theta\): \(s_{\theta}(\textbf{x},t)\approx s(\textbf{x},t)\). The neural network-based approximation of the score function allows us to efficiently compute and utilize gradients with respect to the input data **x** at different times \(t\), which is essential for guiding the reverse process during inference. By leveraging this neural network approximation, we can effectively sample from the desired distribution and generate new images which are approximately i.i.d. from a target probability distribution represented by input data. This approach enables us to achieve reliable and accurate inference in complex high-dimensional spaces, where traditional methods may struggle to capture the underlying data distribution effectively. 
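To make the inference procedure concrete, the following is a minimal sketch of reverse-time sampling via an Euler–Maruyama discretization of Eq. (2). Here `score_net(x, t)` stands for the learned approximation \(s_{\theta}(\mathbf{x},t)\), the diffusion is taken to be state-independent for simplicity, and the drift/diffusion in the usage example anticipate the variance-preserving choice introduced below; the linear \(\beta\) schedule and the toy score function are illustrative assumptions, not the exact implementation used in this work.

```python
import numpy as np

def reverse_sample(score_net, f, G, x_T, T=1.0, n_steps=1000, rng=None):
    """Euler-Maruyama integration of the reverse-time SDE, Eq. (2).

    score_net(x, t) approximates grad_x log p_t(x); f(x, t) and G(t) are the
    drift and (state-independent) diffusion of the forward SDE, Eq. (1).
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.array(x_T, dtype=float)
    for n in range(n_steps, 0, -1):      # integrate from t = T down to t = 0
        t = n * dt
        g = G(t)
        drift = f(x, t) - (g ** 2) * score_net(x, t)
        x = x - drift * dt + g * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Usage sketch with the variance-preserving choice f = -(beta/2) x, G = sqrt(beta).
beta = lambda t: 0.1 + (20.0 - 0.1) * t          # illustrative linear schedule on [0, 1]
f = lambda x, t: -0.5 * beta(t) * x
G = lambda t: np.sqrt(beta(t))
toy_score = lambda x, t: -x                      # placeholder for s_theta(x, t)
x0 = reverse_sample(toy_score, f, G, x_T=np.random.randn(8))
```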
**Training:** The neural network \(s_{\theta}(\textbf{x}_{t},t)\) can be trained to approximate the score function \(\nabla\textbf{x}_{t}\!\log p_{t}(\textbf{x}_{t})\) using the weighted De-noising Score Matching (DSM) objective [22]: \[\mathcal{L}_{DSM}(\theta,\lambda(\cdot)):= \tag{3}\] \[\frac{1}{2}\mathbb{E}_{\begin{subarray}{c}t\sim U(0,T),\\ \textbf{x}_{0}\sim p_{0}(\textbf{x}_{0}),\\ \textbf{x}_{t}\sim p_{t}(\textbf{x}_{t}|\textbf{x}_{0})\end{subarray}}\left[ \lambda(t)\|\nabla_{\textbf{x}_{t}}\!\log p_{t}(\textbf{x}_{t}|\textbf{x}_{0}) -s_{\theta}(\textbf{x}_{t},t)\|_{2}^{2}\right]\!,\] \[\begin{subarray}{c}\textbf{x}_{t}\sim\sum_{p_{t}(\textbf{x}_{t}| \textbf{x}_{0})}\end{subarray}\] This approach offers significant advantages over alternative methods, such as those described in [13, 14], due to the analytical evaluation of \(p_{t}(\textbf{x}_{t}|\textbf{x}_{0})\) as an explicit function of \(\textbf{x}_{0}\) for various simple drift and diffusion choices in the forward SDE. The objective function \(\mathcal{L}_{DSM}\) leverages the score matching technique to ensure that the gradients estimated by the neural network closely align with the true gradients of the log-likelihood. The weight function \(\lambda(t)\) allows us to assign varying importance to different time points during training, offering further flexibility in optimizing the neural network's performance. In the two subsequent subsections, we will explore the freedom in selecting the drift/advection and diffusion terms in the forward SDE, as well as the implications of choos ing specific weight functions \(\lambda(t)\). This analysis will provide valuable insights into the overall performance of the Score-Based Diffusion (SBD) framework and its impact on improving the scheme. ### Variance Preserving SDE In this manuscript, we focus on a special class of Stochastic Differential Equations (SDEs) known as Variance Preserving (VP) Stochastic Differential Equations (VP-SDEs) that possess closed-form solutions. We achieve this by choosing specific drift and diffusion functions as follows: \[f(\mathbf{x}_{t},t)=-\frac{1}{2}\beta(t)\mathbf{x}_{t},\quad G(\mathbf{x}_{t}, t)=\sqrt{\beta(t)}. \tag{4}\] This choice results in the following form of the VP-SDE (1) supplemented by the initial conditions: \[d\mathbf{x}_{t}=-\frac{1}{2}\beta(t)\mathbf{x}_{t}dt+\sqrt{\beta(t)}\;d \mathbf{w}_{t},\quad\mathbf{x}_{0}\sim p_{\text{data}}. \tag{5}\] Here, \(\beta(t)\) is a positive function, often referred to as the noise scheduler. We chose this specific form for the drift and diffusion functions based on considerations of simplicity and historical context. Linearity in Eq. (5) was a practical consideration that allows us to express \(\mathbf{x}_{t}\) analytically in terms of \(\mathbf{x}_{0}\). The affine drift in \(\mathbf{x}\) and \(\mathbf{x}\)-independent diffusion, as opposed to more general linear forms in \(\mathbf{x}\) for both, were inherited from their original discrete-time counterparts. The original discrete version of the VP-SDE was given by: \[\mathbf{x}_{n}=\sqrt{1-b_{n}}\mathbf{x}_{n-1}+\sqrt{b_{n}}\mathbf{z}_{n-1},\; \mathbf{z}_{n}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{6}\] where \(n=1,\cdots,N\). By introducing \(\beta\doteq b/\Delta,\;t\doteq\Delta n,\;T\doteq\Delta N\), and taking the limit: \(N\rightarrow\infty,\;\Delta\to 0\), Eq.(6) transforms into Eq.(5). The solution to Eq. 
(5) is given by: \[\mathbf{x}_{t}=\sqrt{\Phi(t,0)}\;\mathbf{x}_{0}+\int_{0}^{t}\sqrt{\Phi(t,s) \beta(s)}\;d\mathbf{w}_{s}, \tag{7}\] where \(\Phi(t,s)=e^{-\int_{s}^{t}b(u)\text{d}u}\) and \(\mathbf{x}_{t}\) conditioned on \(\mathbf{x}_{0}\) is Gaussian: \[p(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}\bigg{(}\mathbf{x}_{t};\sqrt{\Phi( t,0)}\mathbf{x}_{0},(1-\Phi(t,0))\hat{\mathbf{I}}\bigg{)}. \tag{8}\] For completeness, we present a set of useful formulas derived from Eq. (8) that describe correlations within the forward process: \[\mathbb{E}[\mathbf{x}_{t}] =\sqrt{\Phi(t,0)}\;\mathbb{E}[\mathbf{x}_{0}], \tag{9}\] \[\mathbb{E}[\mathbf{x}_{t}^{2}] =\Phi(t,0)\;\mathbb{E}[\mathbf{x}_{0}^{2}]+\int_{0}^{t}\Phi(t,s) \beta(s)ds\] (10) \[=\Phi(t,0)\left(\mathbb{E}[\mathbf{x}_{0}^{2}]-1\right)+1,\] \[\mathbb{V}[\mathbf{x}_{t}] =\mathbb{E}[\mathbf{x}_{t}^{2}]-\mathbb{E}[\mathbf{x}_{t}]^{2}=1- \Phi(t,0),\] (11) \[\mathbb{E}[\mathbf{x}_{t}\mathbf{x}_{s}] =\sqrt{\Phi(\xi,\gamma)}\;\mathbb{E}[\mathbf{x}_{\gamma}^{2}],\] (12) \[\xi=\max(t,s),\;\gamma=\min(t,s),\] \[C_{0}(t) \doteq\frac{\mathbb{E}\left[\mathbf{x}_{0}\mathbf{x}_{t}\right]}{ \mathbb{E}\left[\mathbf{x}_{0}^{2}\right]}=\sqrt{\Phi(t,0)},\] (13) \[C_{T}(t) =\frac{\mathbb{E}\left[\mathbf{x}_{T}\mathbf{x}_{t}\right]}{ \mathbb{E}\left[\mathbf{x}_{T}^{2}\right]}=\sqrt{\Phi(T,t)}\;\frac{\mathbb{E}[ \mathbf{x}_{t}^{2}]}{\mathbb{E}[\mathbf{x}_{T}^{2}]}, \tag{14}\] where \(\mathbb{E}[\cdots]\) and \(\mathbb{V}[\cdots]\) represent expectations and variances over the forward VP process (5). By shifting and re-scaling the initial data to ensure that \(\mathbb{E}[\mathbf{x}_{0}]=0\) and \(\mathbb{E}[\mathbf{x}_{0}^{2}]=1\), we find from Eqs. (9, 10, 11) that \(\mathbb{E}[\mathbf{x}_{t}]=0\), and \(\mathbf{x}_{t}\) indeed becomes Variance Preserving (VP), as the name suggests, since \(\mathbb{E}[\mathbf{x}_{t}^{2}]=1\). It is important to note that while the direct process (5) only depends on \(\mathbf{x}_{0}\) through the initial condition, resulting in the explicit solution (7), the respective reverse process described by Eq.(2), with fixed \(f(\cdot)\) and \(G(\cdot)\) according to Eq.(4), carries information about \(\mathbf{x}_{0}\) through the score function. ### Re-weighting in the Score Function Training In the context of Eq. (3), choosing an appropriate weight function \(\lambda(t)\) is crucial. While there are various options for \(\lambda(t)\) (as discussed in [20]), we adopt in this work the approach introduced in [20]. Specifically, we substitute: \[\lambda(t)\to 1-\Phi(t,0). \tag{15}\] This choice of \(\lambda(t)\) is well-motivated as it accounts for the scaling with the \(\mathbf{x}_{0}\)-independent part of the conditional probability described by Eq. (8). To elaborate, we estimate the term involving \(\lambda(t)\) in the objective function as follows: \[\lambda(t)|\nabla_{\mathbf{x}_{t}}\!\log p_{t}(\mathbf{x}_{t}| \mathbf{x}0)\!-\!s_{\theta}(\mathbf{x}_{t};t)|^{2}\!\sim\!\lambda(t)|\nabla \mathbf{x}_{t}\!\log p_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})|^{2}\] \[\sim\lambda(t)\frac{|\mathbf{x}_{t}-\sqrt{\Phi(t,0)}\mathbf{x}_{0} |^{2}}{(1-\Phi(t,0))^{2}}\sim\frac{\lambda(t)}{1-\Phi(t,0)},\] This suggests that by choosing \(\lambda(t)\) according to Eq. (15), we equalize the contributions from different time steps into the integration (expectation) over time in the objective function (3). This ensures a balanced influence from all time steps, making the learning process more effective and efficient. 
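A minimal PyTorch sketch of the weighted denoising-score-matching objective of Eq. (3) with the re-weighting \(\lambda(t)=1-\Phi(t,0)\) of Eq. (15) is given below. It exploits the closed-form conditional of Eq. (8), for which \(\nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t|\mathbf{x}_0)=-\big(\mathbf{x}_t-\sqrt{\Phi(t,0)}\,\mathbf{x}_0\big)/(1-\Phi(t,0))\). The continuous-time linear \(\beta\) schedule, the small-\(t\) cutoff, and the toy score network are illustrative assumptions rather than the exact setup of our experiments.

```python
import torch

def Phi(t, beta0=0.1, beta1=20.0):
    """Phi(t, 0) = exp(-int_0^t beta(u) du) for a linear beta(u) on [0, 1]."""
    return torch.exp(-(beta0 * t + 0.5 * (beta1 - beta0) * t ** 2))

def dsm_loss(score_net, x0):
    """Weighted DSM objective, Eq. (3), with lambda(t) = 1 - Phi(t, 0), Eq. (15)."""
    # Sample t ~ U(eps, 1]; the small cutoff avoids the singular weight at t = 0.
    t = 1e-3 + (1.0 - 1e-3) * torch.rand(x0.shape[0], 1)
    phi = Phi(t)
    eps = torch.randn_like(x0)
    xt = torch.sqrt(phi) * x0 + torch.sqrt(1.0 - phi) * eps     # sample of Eq. (8)
    target = -(xt - torch.sqrt(phi) * x0) / (1.0 - phi)         # conditional score
    lam = 1.0 - phi                                             # re-weighting, Eq. (15)
    per_example = (score_net(xt, t) - target).pow(2).sum(dim=1, keepdim=True)
    return 0.5 * (lam * per_example).mean()

# Usage sketch with a toy score network; a real model conditions on t more carefully.
net = torch.nn.Sequential(torch.nn.Linear(9, 64), torch.nn.SiLU(), torch.nn.Linear(64, 8))
score_net = lambda x, t: net(torch.cat([x, t], dim=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    x0 = torch.randn(32, 8)            # stand-in for normalized training data
    loss = dsm_loss(score_net, x0)
    opt.zero_grad(); loss.backward(); opt.step()
```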
By incorporating the selected \(\lambda(t)\), our approach successfully captures the inherent characteristics of the VP-SDE solution and leverages them for generative modeling. The choice of \(\lambda(t)\) based on the closed-form solution (7) and the VP property facilitates better learning and leads, in what follows, to improved results. Numerical Experiments in the Standard Setting In this section, we present the results of our numerical experiments, which involve different direct and reverse processes defined in Eq.(5) (or Eq.(7)) and Eq. (2), with the specific choices of Eq. (4). Additionally, we explore various profiles for the function \(b(n)\), which are introduced in the following. These profiles allow us to test the sensitivity and effectiveness of the methods under varying balance between advection and diffusion. By systematically exploring these setups, we gain valuable insights into the generative capabilities and limitations of the models based on the VP-SDE formulation. The experimental findings presented below shed light on the interplay between the direct and reverse processes, revealing how they collectively contribute to the overall generative performance, then suggesting how to improve the process. ### Profiles of \(b\) (discrete version of \(\beta\)) Fig.(1) displays the mean and standard deviation (square root of variance) of the direct process samples \(x_{n}\), which are \(\sqrt{\Phi(n\Delta,0)}\) and \(\sqrt{1-\Phi(n\Delta,0)}\) respectively, as described by Eq.(8). These results are presented for three distinct Variance Dampening (VD) profiles of \(b(n)\) - linear, sigmoid, and cosine - as outlined in Table 1. The original design of \(b(n)\) was motivated by the desire to achieve a smooth transition forward in time from the part of the dynamics, \(x_{n}\), that retains information about the initial condition \(x_{0}\) (dominated by the drift or ballistic dynamics) to a phase where this information is gradually forgotten (dominated by diffusion). This trend is clearly evident in Figs.(1a,b), in line with the behavior described by Eqs.(9) and (11). Among the three VD profiles, the linear one exhibits the most rapid decrease/increase in drift/diffusion over time, while the \(\cos\) profile results in a more gradual and slower transition. ### Samples of the forward and backward processes Here we explore both the forward process \((\mathbf{x}(t)|t\in[0,T])\) and the reverse process \((\mathbf{x}_{r}(t)|t\in[0,T])\). For consistency and clarity, we use the same notation for time in both processes, following a forward counting scheme. In our simulations, we discretize time, taking values in the range \(0,\cdots,T=1000\) to facilitate numerical computations and analysis. Figures (2) and (3) showcase temporal samples of both the forward and reverse processes, each corresponding to the three different profiles of \(\beta\). Notably, we observe that the linear and cosine profiles exhibit the most rapid and gradual decay, respectively, among the three options. ### Auto-Correlation Functions of the Reverse Process In our analysis of the Score-Based-Diffusion (SBD) method, we conduct computations by averaging (computing expectations) over multiple samples of the stochastic processes. Our focus is on studying auto-correlation functions, as they serve as direct indicators of how the processes retain or discard information over time. The auto-correlation functions of the forward process are fully described in Eqs.(13) and (14). 
Therefore, numerical experiments for the forward process serve primarily as a sanity check, since the analytical expressions are available. However, for the reverse VP process, described by Eq.(2) with drift and diffusion functions according to Eq.(4), no analytical expressions are available for the auto-correlation functions. Consequently, we primarily investigate these auto-correlation functions numerically. Specifically, we study the auto-correlation functions of the reverse process between the current time \(\tau\) and an earlier (counted in reverse) time \(t\): \[0\leq\tau\leq t\leq T:\quad C_{t;r}(\tau)=\frac{\mathbb{E}[\mathbf{x}_{r}(t) \mathbf{x}_{r}(\tau)]}{\mathbb{E}[(\mathbf{x}_{r}(t))^{2}]}. \tag{16}\] These auto-correlation functions provide valuable insights into the behavior of the reverse process and its ability to re Figure 1: Mean and standard deviation of the direct process samples \(x_{t}\) for three different Variance Dampening (VD) protocols of \(b(n)\) – linear, sigmoid, and cosine. The \(x\)-axis represents time \(t\), and the \(y\)-axis corresponds to the mean and standard deviation of the direct process samples \(x_{t}\). call information from earlier time steps, contributing to a comprehensive understanding of the generative capabilities of the SBD method. The auto-correlation function analysis results for three different advection/diffusion (noise) profiles are presented in Fig.(4) and Fig.(5) for the forward and reverse processes, respectively. These findings yield several important observations: 1. The auto-correlation functions demonstrate clear differences among the various processes, supporting the notion of using auto-correlation as an indicator of "correlation" decay, i.e., how quickly the signals lose correlation (information) over time. Among the three forward processes, the "linear" and "cosine" profiles exhibit the fastest and slowest decay of correlations, respectively, which is consistent with the temporal evolution of samples shown in Fig. (2). 2. Although correlations between \(t=0\) and subsequent times are destroyed/reconstructed similarly in both the forward and reverse processes, the correlations between \(t=T\) and preceding moments of time are remarkably different. Specifically, the \(T\)-referenced auto-correlation function of the reverse process, \(C_{T;r}(t)\), decays much faster with decreasing \(t\) compared to the auto-correlation function of the forward process, \(C_{T}(t)\). This observation indicates that while the forward process retains all the original-sample-specific information in the initial conditions, the reverse process transforms this information into the "advection" term, spreading the information over time. 3. Moreover, the decay of correlations in the reverse process counted from \(T\) is the fastest for the cosine profile. This finding implies that by engineering the forward process (independent of the initial conditions), we can achieve a faster or slower decay of correlations in the reverse process. 4. Furthermore, the dramatic decay of correlations in the reverse process, as observed in \(C_{T;r}(t)\), indicates that the information contained in \(\mathbf{x}(T)\) is rapidly forgotten. 
To quantify this behavior, we study the \(1/2\)-decay correlation time \(\delta(t)\), defined by \[C_{t;r}(t-\delta(t))=1/2, \tag{17}\] and its dependence on \(t\), which is illustrated in Fig. (6). In summary, these observations shed light on the distinctive behaviors of the forward and reverse processes, providing valuable insights into their information retention capabilities and temporal characteristics. The correlation decay analysis offers a deeper understanding of the generative dynamics underlying the SBD method.

| Profile of \(b\) | Definition |
| --- | --- |
| Linear (Ho, Jain, and Abbeel 2020) | \(b(n)=(b_{2}-b_{1})\frac{n}{N}+b_{1},\ n\in[0,N]\) |
| Cosine (Nichol and Dhariwal 2021) | \(b(n)=\min\left(1-\frac{p(n)}{p(n-1)},\,0.999\right),\ \ p(n)=\cos^{2}\left(\frac{n/N+0.008}{1+0.008}\cdot\frac{\pi}{2}\right)\) |
| Sigmoid (Xu et al. 2022) | \(b(n)=\frac{b_{2}-b_{1}}{1+\exp(-12n/N+6)}+b_{1},\ n\in[0,N]\) |

Table 1: Profiles of \(b\) (discrete version of \(\beta\) associated with the discretization of time \(t\) using integer \(n\), where \(t=n\Delta\)) commonly employed in previous studies (Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021; Xu et al. 2022), and also adopted for our investigation. For our discrete-time simulations (as discussed in the following), we set \(b_{1}=0.0001\) and \(b_{2}=0.02\), and consider integer values of \(n\) in the range \(0,1,\cdots,N=1000\). These profiles have been selected to enable a comprehensive assessment of our approach and are representative of the noise conditions commonly used in the literature.

Figure 2: Temporal samples of the forward process \(x_{n}\) shown for three different noise profiles described in Table 1. Each figure represents a distinct noise profile, demonstrating the dynamic behavior of the process over time.

### Average of the Score Function 2-Norm

The analysis of the time-dependence of the average of the score function (a vector) 2-norm is presented in Fig. (7a). This normalized score-function norm is denoted as: \[S(t)\doteq\sqrt{\frac{\mathbb{E}\left[\left(\nabla_{\mathbf{x}}\log p_{t}( \mathbf{x})\right)^{2}\right]}{\mathbb{E}\left[\left(\nabla_{\mathbf{x}}\log p _{0}(\mathbf{x})\right)^{2}\right]}}. \tag{18}\] In Fig. (7b), we present the average score function 2-norm, weighted with the \(\sqrt{\lambda(t)}\)-factor, and normalized at \(t=T\) according to: \[M(t)\doteq\sqrt{\frac{\lambda(t)\mathbb{E}\left[\left(\nabla_{\mathbf{x}} \log p_{t}(\mathbf{x})\right)^{2}\right]}{\lambda(T)\mathbb{E}\left[\left( \nabla_{\mathbf{x}}\log p_{T}(\mathbf{x})\right)^{2}\right]}}. \tag{19}\] These figures illustrate how the score function norms, weighted and not, evolve over time, offering valuable insights into the generative modeling process and the relevance of the score function in capturing the underlying dynamics. The figures help to appreciate how the weighted score function provides a means to evaluate the importance of different time steps in the generative learning process.

### Kolmogorov-Smirnov Gaussianity Test

We employ the Kolmogorov-Smirnov (KS) Gaussianity test to examine the null hypothesis: "Is a single-variable marginal of the resulting multi-variate distribution of \(\mathbf{x}_{r}(t)\) at a given time \(t\) Gaussian or not?" To validate this hypothesis, we apply the KS test to each of the single-variable marginals \(p_{t}(x_{k})=\int d(\mathbf{x}\setminus x_{k})p_{t}(\mathbf{x})\).
The KS-ratio is then calculated as the number of single dimensions \(k\) for which the Gaussianity of the corresponding \(x_{k}(t)\) is not confirmed, divided by the total number of dimensions (cardinality of \(\mathbf{x}(t)\)): \[\text{KS}(t)=\frac{\text{\# of }p_{t}(x_{k})\text{ failing the Gaussianity test}}{\text{cardinality of }\mathbf{x}=64\times 64\times 3}.\] The results of this analysis are displayed in Fig. 8. Several factors contribute to the score-function based estimation of the probability distribution constructed from the data, resulting in potential discrepancies. Firstly, the Neural Network provides only an approximate fit to the true score function. Secondly, although the "reverse" Fokker-Planck equation is exact, ensuring the same marginal probability distribution as derived in the forward process, this exactness holds only in continuous time, while the actual implementation occurs in discrete time. Lastly, the number of input (data) samples, while assumed to be large, is still finite. All three errors accumulate, leading to the observed mismatch between the KS tests performed for the forward and reverse processes, as shown in Fig. (8). This discrepancy aligns with earlier observations and discussions reported in (De Bortoli et al., 2021; Block, Mroueh, and Rakhlin, 2022). Additionally, we find that the mismatch in the KS-curves diminishes with improvements in the quality of the Neural Network, reduction of the number of discretization steps, and/or an increase in the number of input samples. These insights highlight the importance of these factors in refining the generative modeling process and reducing discrepancies between forward and reverse processes.

Figure 3: Temporal samples of the reverse process for three different noise profiles described in Table 1. Each figure corresponds to a distinct noise profile, providing a representation of the dynamic behavior of the reverse process over time.

### Quality of Inference

Next we employ the Kernel Inception Distance (KID) (Binkowski et al., 2021) as our chosen metric to assess the quality of inference. KID measures the dissimilarity between the distributions of real and generated samples without assuming any specific parametric form for them. The KID is constructed by embedding real and generated images into a feature space using the Inception network (Gretton et al., 2012). It then becomes a squared Maximum Mean Discrepancy (MMD) between the inception features of the real and generated images: \[\text{KID}(\mathbb{P}_{r},\mathbb{P}_{g})=\mathbb{E}_{\begin{subarray}{c} \mathbf{x}_{r},\mathbf{x}^{\prime}_{r}\sim\mathbb{P}_{r}\\ \mathbf{x}_{g},\mathbf{x}^{\prime}_{g}\sim\mathbb{P}_{g}\end{subarray}}\left[ k(\mathbf{x}_{r},\mathbf{x}^{\prime}_{r})+k(\mathbf{x}_{g},\mathbf{x}^{\prime}_{g})-2k( \mathbf{x}_{r},\mathbf{x}_{g})\right]\] where \(\mathbb{P}_{r}\) and \(\mathbb{P}_{g}\) represent the distributions of real and generated samples, respectively. The KID quantifies the distance between the two distributions, with a lower KID indicating that \(\mathbb{P}_{r}\) and \(\mathbb{P}_{g}\) are closer to each other. In our evaluation, we use a polynomial kernel to calculate the KID. Fig. 9 presents the results of our KID tests for different profiles of \(b\). Notably, we observe that the \(\cos\)-profile yields a lower KID, indicating better quality of generated samples compared to other profiles.
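To make the metric concrete, here is a minimal sketch of the polynomial-kernel squared-MMD estimator that underlies KID; it assumes the Inception features have already been extracted into two arrays, uses the common cubic polynomial kernel \(k(x,y)=(x^{\top}y/D+1)^{3}\), and omits the block-averaging over feature subsets often used in practice.

```python
import numpy as np

def poly_kernel(X, Y, degree=3):
    """Cubic polynomial kernel k(x, y) = (x.y / D + 1)^degree on feature rows."""
    D = X.shape[1]
    return (X @ Y.T / D + 1.0) ** degree

def kid(feats_real, feats_gen):
    """Unbiased squared-MMD estimate between real and generated feature sets."""
    m, n = len(feats_real), len(feats_gen)
    k_rr = poly_kernel(feats_real, feats_real)
    k_gg = poly_kernel(feats_gen, feats_gen)
    k_rg = poly_kernel(feats_real, feats_gen)
    # remove diagonal terms for the unbiased within-set averages
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_gg = (k_gg.sum() - np.trace(k_gg)) / (n * (n - 1))
    term_rg = 2.0 * k_rg.mean()
    return term_rr + term_gg - term_rg

# toy usage with random vectors standing in for Inception-v3 features
rng = np.random.default_rng(0)
real = rng.standard_normal((1000, 2048))
gen = rng.standard_normal((1000, 2048)) + 0.1
print(kid(real, gen))
```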
It is worth mentioning that another popular measure of similarity between two datasets of images, the Frechet Inception Distance (FID) (Heusel et al., 2018), is often used to evaluate the quality of generated samples. However, FID is not applicable in our setting, as we intentionally work with a number of images smaller than the dimensionality of the input vector. In our experiments, the input vector has a dimension of \(2048\) (after passing through the Inception-v3), but we use only \(1000\) samples to estimate the covariance. Consequently, the covariance matrix becomes singular, making FID unsuitable for our evaluation. Therefore, we rely on KID as a robust alternative to assess the performance of our generative modeling approach.

Figure 4: Auto-correlation for the forward process.

Figure 5: Auto-correlation for the reverse process.

### Discussion

In this section, all the experiments presented and discussed were conducted under the "standard" setting of the Score-Based Diffusion (SBD). As a reminder, the standard setting involves training the score function on input data propagated with forward stochastic dynamics (1) from time \(0\) to \(T\). Subsequently, synthetic data emerge through the propagation of an initially noisy (completely uninformative) image by the reverse stochastic process (2), which depends on the score function. Our analysis of the standard setting reveals that generating high-quality synthetic data, and potentially enhancing their quality, does not necessitate initiating the inverse process at time \(T\). This conclusion is supported by our examination of the auto-correlation function in the reverse process, as shown in Fig. (5), and the 1/2-correlation time, as illustrated in Fig. (6). Both figures indicate that the early stages of the reverse process do not contribute significantly to generating a synthetic image. Specifically, Fig. (6) demonstrates that correlations start forming not at \(n=1000\), but rather at \(n\approx 200\) for linear \(b\)-profiles and at \(n\approx 400\) for sigmoid- and cosine- \(b\)-profiles. Visual inspection of samples from the reverse dynamics, as displayed in Fig. (3), aligns with these findings. Similar observations and time scales are also evident in our reverse process KID score test, depicted in Fig. (9). Considering the role of the score function itself as a function of time, which is extracted from the evolving data in the direct process, we inquire whether it indicates when a synthetic image begins to form. Experimental evidence from Fig. (7b), depicting the evolution of the properly normalized norm of the score function, indicates that the score function ceases to change at \(n\approx 600\) for linear- and sigmoid- \(b\)-profiles, and at \(n\approx 800\) for cosine- \(b\)-profiles. Furthermore, the results of the KS test of Gaussianity in Fig. (8b) suggest that the reverse processes with linear-, sigmoid-, and cosine- \(b\)-profiles become largely Gaussian (proxy for uninformative) at \(n\gtrsim 600\), \(n\gtrsim 700\), and \(n\gtrsim 900\), respectively. These findings collectively demonstrate that initiating the reverse process earlier than \(n\approx 600\) for linear- and sigmoid- \(b\)-profiles, and \(n\approx 800\) for cosine- \(b\)-profiles, does not significantly impact the quality of synthetic data. Based on the findings discussed above, we have made significant advancements in our proposal for generating synthetic samples, which will be further elaborated in the following section.
Figure 6: Dependence of the \(1/2\)-correlation time (discrete time version) \(\delta(n\Delta)/\Delta\) of the reverse process, as defined in Eq. (17), on the (discrete) time index \(n\).

Figure 7: Average of the Score Function 2-Norm for the experiments described in the main text.

## 4 U-Turn Diffusion: Set Up and Numerical Experiments

We propose running the direct process for a shorter duration than in the standard setting and then reversing course earlier, initiating the process in the opposite direction. We initialize the reverse process using the last configuration from the forward process. This entire process, that is, the direct and reverse stages combined, is termed _U-Turn Diffusion_, emphasizing our expectation that the direct process followed by the U-turn and the reverse process will ultimately produce a synthetic image. This synthetic image should, on one hand, closely resemble samples from the probability distribution representing the input data. On the other hand, it should be distinctly different from the original sample that initiated the preceding direct process when it arrives at \(t=n=0\). Synthetic images generated at \(n=0\) by making the U-turn at different times are shown in Fig. (10) for the three \(b\)-profiles. Consistent with the discussion in Section 3, we observe, by examining the figures visually, that an essentially new image of high quality is generated if the U-turn occurs at \(n\gtrsim 600\) for the linear- and sigmoid- \(b\)-protocols and at \(n\gtrsim 850\) for the cosine- \(b\)-protocols. The KID score, which compares synthetic images generated at \(n=0\) with the original data, is analyzed as a function of the U-turn time and presented in Fig. (11). The results displayed in the figures corroborate the observations made in Fig. (10). Notably, Fig. (11) reveals a significant finding: when the U-turn time surpasses an optimal threshold (\(n\approx 600\) for linear and sigmoid \(b\)-profiles, and \(n\approx 850\) for cosine \(b\)-profiles), the deterioration in the synthetic image quality accelerates considerably with increasing \(n\) as compared to lower values. (Obviously, conducting a U-turn at a sufficiently small \(n\) yields synthetic images at \(n=0\) that closely resemble the original images, resulting in a minimal KID.) In light of these observations, we deduce that these critical values of \(n\), signifying a more rapid increase in KID with higher \(n\), represent the optimal choices for the U-turn. Fig. (12) showcases the outcomes of experiments analogous to those described earlier (leading to the results presented in Fig. (11)); however, in this case the reverse process is initiated with noise. Evidently, the reverse process then retains no memory of the forward process, which leads to an increase in KID as \(n\) decreases, where \(n\) is the time step at which we initialize the reverse process with noise. Notably, the dependence of the KID on \(n\) flattens as \(n\) decreases. The flattening occurs around \(n\approx 600\) for linear and sigmoid \(b\)-profiles, and around \(n\approx 900\) for the cosine \(b\)-profile. This observation suggests that for random initialization of the reverse process, starting the process at \(n=1000\) is unnecessary. Instead, it is advantageous to start the reverse process at a smaller \(n\), chosen based on the \(b\)-profile.
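The U-turn procedure itself reduces to a few lines once forward and reverse update rules are available. The sketch below assumes DDPM-style discrete updates consistent with Eqs. (1)–(2) and a pretrained `score_model(x, n)` returning \(\nabla_{\mathbf{x}}\log p_{n}(\mathbf{x})\); both the function names and the exact discretization are our assumptions, not the paper's code.

```python
import numpy as np

def u_turn_sample(x0, b, n_uturn, score_model, rng):
    """Run the forward (noising) chain up to step n_uturn, then run the
    learned reverse chain from that configuration back to n = 0."""
    x = x0.copy()
    # forward / direct process: x_n = sqrt(1 - b_n) * x_{n-1} + sqrt(b_n) * noise
    for n in range(1, n_uturn + 1):
        x = np.sqrt(1.0 - b[n]) * x + np.sqrt(b[n]) * rng.standard_normal(x.shape)
    # U-turn: reverse process started from the last forward configuration
    for n in range(n_uturn, 0, -1):
        drift = (x + b[n] * score_model(x, n)) / np.sqrt(1.0 - b[n])
        noise = np.sqrt(b[n]) * rng.standard_normal(x.shape) if n > 1 else 0.0
        x = drift + noise
    return x  # synthetic sample at n = 0

# toy usage: a zero "score model" stands in for the trained network
rng = np.random.default_rng(0)
x0 = rng.standard_normal((64 * 64 * 3,))
b = np.linspace(0.0001, 0.02, 1001)
synthetic = u_turn_sample(x0, b, n_uturn=600,
                          score_model=lambda x, n: np.zeros_like(x), rng=rng)
print(synthetic.shape)
```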
Remarkably, a comparison between Fig. (11) and Fig. (12) underscores a notable advantage in initiating the reverse process with the final configuration from the forward process preceding the U-turn. This approach yields a marked reduction in KID, translating to an elevated quality of the synthetic image. For instance, examining Fig. (11), we find that the KID value for the sigmoid- \(b\)-profile process, with the U-turn executed at \(n=600\) (deemed the optimal U-turn point as discussed earlier), is approximately \(0.07\). In contrast, Fig. (12) demonstrates that random initiation of the reverse process at \(n=600\) leads to a significantly higher KID of about \(0.19\).

Figure 8: Kolmogorov-Smirnov test for forward and reverse dynamics under different \(b\)-protocols.

Figure 9: Kernel Inception Distance (KID) for the reverse process with different \(b\)-protocols displayed as a function of \(n\). The inset provides a closer view of the primary trend at the lowest \(n\) values, where the lowest KID values (best results) are attained. (Recall that the synthetic sample is generated at the end of the reverse process, i.e., at \(n=0\).)

_Remark_. After completing this work, we discovered a related method called "boomerang," which was recently reported in [10]. While there are some similarities between the boomerang and the U-turn diffusion described in this manuscript, it is essential to emphasize that they are distinct from each other. The boomerang method focuses on generating images that closely resemble the original samples, whereas the U-turn diffusion aims to create distinctly different images that approximate i.i.d. samples from the entire dataset. This fundamental difference also manifests in the applications suggested for the boomerang in [10], such as constructing privacy-preserving datasets, data augmentation, and super-resolution. Given these distinctions, it would be intriguing to extend the analysis techniques we developed, including the auto-correlation functions, the score-function norm, KS criteria, and the KID metric, to the boomerang method and its interesting applications involving local sampling.

Figure 10: Synthetic images generated at \(n=0\) by making the U-turn at different times from the \([0,1000]\) range.

## 5 Conclusions and Path Forward

This paper delves into the analysis of popular Score-Based Diffusion Models (SBD), which are rooted in the idea that observed data are outcomes of dynamic processes. These models involve two related stochastic sub-processes: forward and reversed dynamics. The design of these models offers flexibility in choosing the advection and diffusion components of the dynamics. Three distinct advection-to-diffusion \(b\)-protocols, developed in prior publications on the subject, were adopted to explore this freedom. While the Fokker-Planck equations for the two sub-processes are equivalent, actual samples diverge as one advances forward and the other backward in time. Our first accomplishment is extending analysis beyond single-time marginals to study time correlations through auto-correlation functions. This allowed quantification of how information is retained, by being distributed into the score function, and then recovered in the reverse process. The analysis unveiled diverse regimes, time scales, and their dependency on the chosen protocols. The study then connects the decay patterns in auto-correlation functions to single-time objects: the average of the weighted score-function 2-norm and the Kolmogorov-Smirnov metric.
The temporal behaviors of these single-time objects are linked to the two-time correlation patterns, providing insights for potential control applications (see discussion below). Informed by the temporal analysis of the SBD, a novel U-Turn Diffusion process, which is the climax of the manuscript, was devised, suggesting an optimal time to transition from forward to reverse sub-processes. The paper employs the KID test to assess the quality of U-Turn diffusion. Remarkably, the results demonstrate the existence of an optimal U-turn time for generating synthetic images of the best quality achievable within the scheme. In summary, this work thus not only advances our understanding of the SBD models but also offers a new U-Turn algorithm to enhance the quality of synthetically generated data. The avenues for further exploration stemming from this study are delineated along three principal lines, each aiming to further enhance our understanding and application of the SBD models:

* _Fine-Tuning Protocols Using Time-Local Indicators_: Our immediate focus will be on optimizing and controlling the \(b\)-protocols to be data-adaptive. Employing time-local indicators such as the weighted average norm of the score-function and the KS test, we intend to align the \(b\)-protocols with the specific data characteristics.
* _Enhancing U-Turn enforced SBD with Data-Specific Dynamics:_ Building on the success of the U-Turn enforced SBD approach, we aim to extend its utility by incorporating data-specific correlations and sparsity features into the underlying advection/diffusion dynamics. For instance, when initial data showcases spatial correlations, we plan to develop SBD techniques grounded in spatio-temporal stochastic partial differential equations.
* _Establishing Theoretical Connections to Non-Equilibrium Statistical Mechanics:_ We intend to work on connecting the U-Turn enforced SBD approach to non-equilibrium statistical mechanics concepts, particularly those like the fluctuation theorem (e.g., Jarzynski and Crooks relations) and Schrodinger bridge approaches. The exploration of this theoretical nexus, informed by existing literature and approaches (Jarzynski 1997; Crooks 1999; Leonard 2014; Chen, Georgiou, and Pavon 2021; Sohl-Dickstein et al. 2015; De Bortoli et al. 2021), holds potential for illuminating the underlying mechanisms driving generative AI's power.

Figure 11: KID score plotted against \(n\) for synthetic images generated with the U-turn conducted at the step \(n\). Notably, we observe a distinct increase in the rate of KID versus \(n\) growth at certain values of \(n\), which we identify as the optimal positions for conducting the U-turn.

Figure 12: KID score for synthetic images generated (at \(n=0\)) by initiation of the reverse process at the time \(n\) with a random noise. We show KID as a function of the time \(n\) of the reverse process random initiation.
2308.08712
Cohomological Kernels for Cyclic by Cyclic Semi-Direct Product Extensions
Let $F$ be a field and $E$ an extension of $F$ with $[E:F]=d$ where the characteristic of $F$ is zero or prime to $d$. We assume $\mu_{d^2}\subset F$ where $\mu_{d^2}$ are the $d^2$th roots of unity. This paper studies the problem of determining the cohomological kernel $H^n(E/F):=\ker(H^n(F,\mu_d) \rightarrow H^n(E,\mu_d))$ (Galois cohomology with coefficients in the $d$th roots of unity) when the Galois closure of $E$ is a semi-direct product of cyclic groups. The main result is a six-term exact sequence determining the kernel as the middle map and is based on tools of Positelski. When $n=2$ this kernel is the relative Brauer group ${\rm Br}(E/F)$, the classes of central simple algebras in the Brauer group of $F$ split in the field $E$. The work of Aravire and Jacob which calculated the groups $H^n_{p^m}(E/F)$ in the case of semidirect products of cyclic groups in characteristic $p$, provides motivation for this work.
Nathan Schley
2023-08-17T00:30:15Z
http://arxiv.org/abs/2308.08712v1
# Cohomological kernels for cyclic by cyclic semi-direct product extensions ###### Abstract. Let \(F\) be a field and \(E\) an extension of \(F\) with \([E\,:\,F]=d\) where the characteristic of \(F\) is zero or prime to \(d\). We assume \(\mu_{d^{2}}\subset F\) where \(\mu_{d^{2}}\) are the \(d^{2}\)th roots of unity. This paper studies the problem of determining the cohomological kernel \(H^{n}(E/F):=\ker(H^{n}(F,\mu_{d})\to H^{n}(E,\mu_{d}))\) (Galois cohomology with coefficients in the \(d\)th roots of unity) when the Galois closure of \(E\) is a semi-direct product of cyclic groups. The main result is a six-term exact sequence determining the kernel as the middle map and is based on tools of Positselski [P]. When \(n=2\) this kernel is the relative Brauer group \(\operatorname{Br}(E/F)\), the classes of central simple algebras in the Brauer group of \(F\) split in the field \(E\). The work of Aravire and Jacob (2008, 2018) [AJ] [AJO] which calculated the groups \(H^{n}_{p^{n}}(E/F)\) in the case of semidirect products of cyclic groups in characteristic \(p\), provides motivation for this work. ## Introduction This paper studies cohomological kernels of field extensions. Historically, such kernels have played a key role in the computation of relative Brauer groups, although the results presented here apply more generally to higher cohomology. Similarly, the computation of analogous cohomological kernels have played important roles in the development of the algebraic theory of quadratic forms. In 1980, the work of Merkurjev and Suslin on the conjecture of Albert showed that in the presence of roots of unity, the Brauer group is generated by classes of cyclic algebras. In terms of cohomology, this means that the cup product map \(H^{1}(F,\mu_{d})\times H^{1}(F,\mu_{d})\to H^{2}(F,\mu_{d})\cong\operatorname{Br }_{d}(F)\) is surjective. Their work was dependent upon detailed analyses of the K-theory of Severi-Brauer varieties and the relationship between the Milnor K-theory of a field and its Galois cohomology. In 2005, Positselki studied cohomological kernels of biquadratic extensions and certain degree 8 extensions [P] using a four-term exact sequence of Galois group modules \[0\longrightarrow M_{1}\longrightarrow M_{2}\longrightarrow M_{3} \longrightarrow M_{4}\longrightarrow 0\] with homotopy maps and some other properties to produce a six-term exact sequence of cohomology. Prior to his work, it was known by work analyzing the Witt ring that if \(E=F(\sqrt{a},\sqrt{b})\) is biquadratic then the kernel of the map \(H^{2}(F,\mathbb{Z}/2\mathbb{Z})\to H^{2}(E,\mathbb{Z}/2Z)\) is generated by the images of "expected elements" \((a)\smile(x)\) and \((b)\smile(y)\) for \(x,y\in F\) [EL] and that the analogue of this expected result for triquadratic extensions was false [ART]. The question of determining the kernel of \(H^{n}(F,\mathbb{Z}/2\mathbb{Z})\to H^{n}(E,\mathbb{Z}/2\mathbb{Z})\) for \(n\geq 3\) in the separable biquadratic case was considered by a number of researchers (Merkurjev, Tignol, Kahn), and it is this problem that Positselski solved with his tools. Positselski's tools also applied to dihedral extensions of degree 8, indicating the applicability of these techniques to the non-Galois case. 
Characteristic \(p\) versions of Positselski's machinery have been constructed by Aravire and Jacob (2012), for the separable biquadratic case and the dihedral and quaternion cases in characteristic \(2\) (2016), and more generally for the cyclic by cyclic semi-direct product cases in characteristic \(p>2\) by Aravire-Jacob-O'Ryan (2018). It is these latter constructions that this paper generalizes to the case where the characteristic is prime to the field degree \(d\) and the \(d^{2}\)th roots of unity are present in the field. The key to this work is determining the appropriate modules \(M_{3}\) and \(M_{4}\) (see below for the set-up) and establishing the requisite homotopies necessary to apply Positselski's tools. This is spelled out in Section 3. Sections 1 and 2 develop the background as well as provide details necessary for the application of Positselski's results that are not clearly spelled out in his paper. In particular, the "connecting map" \(\eta\,:\,H^{n}(\mathcal{G},M_{4})\to H^{n+1}(\mathcal{G},M_{1})\) needs to be carefully computed. Section 4 covers the case of a cyclic extension and uses this machinery to prove a well-known result from Hilbert 90 as a way to get a sense of how the machinery works. Section 5 covers a dihedral extension. Section 6 covers the more general case of an extension whose Galois group is a semi-direct product with certain conditions. The final section covers some cohomological interpretations of these results.

### Notation and Further Background

Let \(F\) be a field and \(d\in\mathbb{N}\) with \(d>1\); we will assume that \(\operatorname{char}(F)=0\) or \((\operatorname{char}(F),d)=1\), and that \(\mu_{d^{2}}\subseteq F\), where \(\mu_{d^{2}}\) are the \(d^{2}\) distinct \(d^{2}\)th roots of unity. Let \(F_{\mathrm{sep}}\) denote the separable closure of \(F\), \(\mathcal{G}=\operatorname{Gal}(F_{\mathrm{sep}}/F)\), and \(H^{n}(\mathcal{G},M)\) the \(n\)th cohomology groups for any \(\mathbb{Z}[\mathcal{G}]\)-module \(M\) [GalCoh]. Let \(E/F\) be an extension of degree \(d\), \(\mathcal{H}\subseteq\mathcal{G}\) be \(\operatorname{Gal}(F_{\mathrm{sep}}/E)\). We will also use the notation \(H^{n}(F,M)=H^{n}(\mathcal{G},M)\), so \(H^{n}(E,M)=H^{n}(\mathcal{H},M)\). We also denote by \(H^{n}(E/F,M):=\ker(H^{n}(F,M)\to H^{n}(E,M))\). The groups \(H^{0}(F,\mu_{d})\) and \(H^{1}(F,\mu_{d})\) have an interpretation from Kummer theory. Consider the following short exact sequence of \(\mathbb{Z}[\mathcal{G}]\)-modules \[0\longrightarrow\mu_{d}\overset{\subseteq}{\longrightarrow}F_{\mathrm{sep}}^{ \times}\overset{\cdot d}{\longrightarrow}F_{\mathrm{sep}}^{\times}\longrightarrow 0\] where the second map is multiplication by \(d\) over the \(\mathbb{Z}[\mathcal{G}]\)-modules (written multiplicatively, it is raising to the \(d\)th power, \(x\mapsto x^{d}\)). It is surjective because \(F_{\mathrm{sep}}\) is separably closed. This short exact sequence of \(\mathbb{Z}[\mathcal{G}]\)-modules yields a long exact sequence of cohomology [HA]. Note that \(\operatorname{Fix}_{\mathcal{G}}(\mu_{d})=\mu_{d}\) and \(\operatorname{Fix}_{\mathcal{G}}(F_{\mathrm{sep}}^{\times})=F^{\times}\) by Galois theory. Furthermore, \(H^{1}(\mathcal{G},F_{\mathrm{sep}}^{\times})\) is trivial by the cohomological version of Hilbert's Theorem 90. This information gives the long exact sequence. In particular we have the following three results:

1. \(H^{0}(F,\mu_{d})\cong\mathbb{Z}/d\mathbb{Z}\),
2. \(H^{1}(F,\mu_{d})\cong F^{\times}/F^{\times d}\), and
3. \(H^{2}(F,\mu_{d})\) is the \(d\)-torsion of \(H^{2}(F,F^{\times}_{\mathrm{sep}})\).
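Concretely, the segment of this long exact sequence from which these facts are read off is \[F^{\times}\xrightarrow{\;x\mapsto x^{d}\;}F^{\times}\longrightarrow H^{1}(F,\mu_{d})\longrightarrow H^{1}(F,F_{\mathrm{sep}}^{\times})=1,\qquad 1=H^{1}(F,F_{\mathrm{sep}}^{\times})\longrightarrow H^{2}(F,\mu_{d})\longrightarrow H^{2}(F,F_{\mathrm{sep}}^{\times})\xrightarrow{\;\cdot d\;}H^{2}(F,F_{\mathrm{sep}}^{\times}),\] which gives (2) and (3) directly; (1) is simply \(H^{0}(F,\mu_{d})=\operatorname{Fix}_{\mathcal{G}}(\mu_{d})=\mu_{d}\cong\mathbb{Z}/d\mathbb{Z}\), using \(\mu_{d}\subseteq F\).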
For \(a\in F^{\times}\) we use \((a)\in H^{1}(F,\mu_{d})\) to denote the class that \(aF^{\times d}\in F^{\times}/F^{\times d}\) corresponds to in the second identification. Since \(H^{2}(F,F^{\times}_{\mathrm{sep}})\cong\mathrm{Br}(F)\) is the Brauer group (the cohomological Brauer group and the Brauer group agree for fields), the third result will be of particular importance because it means \(H^{2}(F,\mu_{d})\) picks out the \(d\)-torsion in \(\mathrm{Br}(F)\). We use \(\smile\) to denote the cup product: \(\smile\): \(H^{r}(F,\mu_{d})\times H^{s}(F,\mu_{d})\to H^{r+s}(F,\mu_{d})\), which makes sense in our context because \(\mu_{d}\subset F\) and therefore has trivial \(\mathcal{G}\)-action so \(\mu_{d}^{\otimes 2}\cong\mu_{d}\) as \(\mathcal{G}\)-modules. ### The Problem Studied The problem studied in this paper is that of determining the kernels of scalar extension (restriction in group cohomology), \[\operatorname{res}_{E/F}\,:\,H^{n}(F,\mu_{d})\,,\,\longrightarrow\,H^{n}(E, \mu_{d})\] for various extension fields \(E/F\) of degree \(d\). The case where \(E/F\) is cyclic Galois is basic. In the cyclic case when \(n=2\), if the Brauer class of an \(F\)-division algebra \(D\) of index \(d\) lies in \(H^{2}(E/F,\mu_{d})\), then this \(D\) is a cyclic algebra (with maximal subfield \(E\).) More specifically, we know that when \(E=F(\sqrt[d]{a})\) (recall \(\mu_{d}\subset F\)) we have \(H^{2}(E/F,\mu_{d})=(a)\smile H^{1}(F,\mu_{d})\). Theorem 4.7 shows that in the cyclic case this is valid for all \(n\), namely \(H^{n+1}(E/F,\mu_{d})=(a)\smile H^{n}(F,\mu_{d})\), a result already known [Ar]. The next cases generalize this situation, where either the Galois closure of \(E\) is dihedral or \(E\) is an extension of degree \(d\) that becomes a cyclic extension when \(F\) is extended by a cyclic extension of degree prime to \(d\). In this latter case the Galois group of the Galois closure of \(E\) is a cyclic by cyclic semi-direct product. In these latter cases one cannot describe the cohomological kernel as a cup product by a class \((a)\) (indeed, \(H^{1}(E/F,\mu_{d})=0\)), but one does have the connecting map \(\eta\) from Positselski's theory to capture the kernel. In order to compute these kinds of kernels, Positselski used \(4\)-term exact sequences of Galois group modules with homotopy maps \[0\longrightarrow M_{1}\longrightarrow M_{2}\longrightarrow M_{3} \longrightarrow M_{4}\longrightarrow 0\] to produce a six-term exact sequence of cohomology. Here with \(M_{1}\cong\mu_{d}\) and \(M_{2}\) an appropriately selected induced module with \(H^{n}(\mathcal{G},M_{2})\cong H^{n}(E,\mu_{d})\), so that the six-term sequence can be used to compute the cohomological kernel \(H^{n}(E/F,\mu_{d})\). Aravire and Jacob [AJ] and Aravire, Jacob and O'Ryan [AJ0] have developed a variant of this machinery to compute cohomologial kernels in characteristic \(p\) for \(E/F\) of prime degree \(d=p\) with Galois closure having Galois group a semidirect product of two cyclic groups of order \(p\) and \(s\), where \(s|(p-1)\). This paper gives an analogous result when \(E/F\) is degree \(d\), \(F\) has characteristic prime to \(d\), and the Galois closure of \(E/F\) has Galois group a semidirect product of cyclic groups of order \(d\) and \(s\), with \(s|\phi(d)\) (the Euler \(\phi\)-function) and \(\mathbb{Z}/s\mathbb{Z}\) acting faithfully on \(\operatorname{Aut}(\mathbb{Z}/d\mathbb{Z})\). 
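For instance, in this last case the Galois group of the Galois closure has the presentation \[\operatorname{Gal}(\widetilde{E}/F)\cong\mathbb{Z}/d\mathbb{Z}\rtimes\mathbb{Z}/s\mathbb{Z}=\langle\tau,\sigma\;|\;\tau^{d}=\sigma^{s}=1,\;\sigma\tau\sigma^{-1}=\tau^{\theta}\rangle,\] with \(\theta\) of multiplicative order \(s\) modulo \(d\) (the notation is fixed in Section 3). The smallest instance is \(d=3\), \(s=2\): here \(E/F\) is a non-Galois cubic extension of a field \(F\) containing \(\mu_{9}\), its Galois closure has group \(\mathbb{Z}/3\mathbb{Z}\rtimes\mathbb{Z}/2\mathbb{Z}\cong S_{3}\), and this is the first case of the dihedral situation treated in Section 5.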
We recall from applications that \(H^{2}(\mathcal{G},\mu_{d})=\operatorname{Br}_{d}(F)\), the \(d\)-torsion of the Brauer group, and standard notation from this subject will be used.

## 1. Arason's Theorem

In his paper, Arason [Ar] proved that the third cohomological invariant \(e_{3}\) of quadratic forms is well-defined. To accomplish this, he determined the cohomological kernel of a quadratic extension away from characteristic two (an equivalent result in group cohomology was proved independently by D.L. Johnson [J] at the same time). We discuss Arason's results here because the approach he took provides a model for understanding the work of Positselski, and the computation of his connecting map lays a conceptual framework for the computation of Positselski's connecting map \(\eta\); see Example 3.7 of [HW] for further discussion. Let \(F\) be a field, \(\operatorname{char}(F)\neq 2\), \(E=F(\sqrt{a})\) a quadratic extension, and \(F_{\operatorname{sep}}\) the separable closure of \(F\). Let \(\mu_{2}\) be the square roots of unity \(\pm 1\); clearly \(\mu_{2}\subseteq F\). The result of Arason [Ar] is the following, a result which has a critical role in the algebraic theory of quadratic forms. It is a cohomological analogue of an exact sequence for the Witt ring (see [L] chap. 7 Sec. 3). We also sketch the proof.

**Theorem 1.1**.: _Let \(F\) be a field, \(\operatorname{char}(F)\neq 2\), \(E=F(\sqrt{a})\) a quadratic extension. There is a long exact restriction/corestriction sequence_ \[\cdots\longrightarrow H^{n}(F,\mu_{2})\xrightarrow{\ \operatorname{res}\ }H^{n}(E,\mu_{2})\xrightarrow{\ \operatorname{cor}\ }H^{n}(F,\mu_{2})\xrightarrow{\ \partial\ }H^{n+1}(F,\mu_{2})\longrightarrow\cdots\] _where the connecting map \(\partial\) is the cup product with the character function \(\chi_{E}\in H^{1}(F,\mu_{2})\), which corresponds to the class \((a)\in F^{\times}/F^{\times 2}\)._

**Proof**: (Sketch) This long exact sequence is induced from a short exact sequence of \(\mathcal{G}\)-modules \[0\longrightarrow\mu_{2}\longrightarrow\operatorname{Ind}_{\mathcal{H}}^{ \mathcal{G}}(\mu_{2})\longrightarrow\mu_{2}\longrightarrow 0\] that will give the restriction and corestriction on cohomology once we replace \(H^{n}(\mathcal{G},\operatorname{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mu_{2}))\) with \(H^{n}(\mathcal{H},\mu_{2})\) using the Shapiro isomorphism. We note that as \(\mathcal{G}=\operatorname{Gal}(F_{\operatorname{sep}}/F)\) and \(\mathcal{H}=\operatorname{Gal}(F_{\operatorname{sep}}/E)\), we have \(H^{n}(\mathcal{G},\mu_{2})=H^{n}(F,\mu_{2})\) and \(H^{n}(\mathcal{H},\mu_{2})=H^{n}(E,\mu_{2})\). For our computations, we identify \(\operatorname{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mu_{2})\) with \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\) and use the sequence \[0\longrightarrow\mathbb{Z}/2\mathbb{Z}\longrightarrow\mathbb{Z}/2\mathbb{Z} \oplus\mathbb{Z}/2\mathbb{Z}\longrightarrow\mathbb{Z}/2\mathbb{Z} \longrightarrow 0.\] Here \(\mathbb{Z}/2\mathbb{Z}\) is a trivial \(\mathcal{G}\)-module since the field elements \((\pm 1)\) are fixed by the Galois group \(\mathcal{G}\), while the induced module has a \(\mathcal{G}\)-action of permuting the two entries for any \(g\not\in\mathcal{H}\), and a trivial \(\mathcal{G}\)-action for any \(h\in\mathcal{H}\). Then the maps are the diagonal map \(1\mapsto 1\oplus 1\) and the trace \(x\oplus y\mapsto x+y\), respectively, which happen to be the only non-zero choices of \(\mathbb{Z}[\mathcal{G}]\)-module homomorphisms. We need to show that the two \(\mathcal{G}\)-maps in the short exact sequence induce maps that commute with the restriction and corestriction through the Shapiro isomorphism, and we need to show that the snake-lemma connecting map \(\partial\) is the cup product with the character function \(\chi_{E}\).
This latter fact is pulled out as Theorem 1.2 below. The Shapiro isomorphism gives the following. The composition of the restriction with the Shapiro map is induced by the identity map on \(\mathbb{Z}/2\mathbb{Z}\), followed by the diagonal map, which agrees with the induced map on the top. The Shapiro homomorphism on \(\mathbb{Z}[\mathcal{G}]\)-modules is the diagonal map. When composed with the trace, \(1\) is sent to the sum of \(1\) over every \(\mathcal{H}\)-coset. This is also the map that induces the corestriction, and therefore the diagrams commute on cohomology. This concludes the proof sketch of Theorem 1.1. It remains to interpret the connecting map in the long exact sequence. Arason describes this as the cup product with the character function, and this will be shown next by direct computation.

**Theorem 1.2**.: _The connecting map \(\partial\) in Arason's Theorem 1.1 sends a cocycle \(\sigma\) to \(\chi_{E}\smile\sigma\), the cup product with the character function \(\chi_{E}\), defined to be the composite \(\mathcal{G}\longrightarrow\operatorname{Gal}(E/F)\longrightarrow\mu_{2}\), where the latter map is the unique non-trivial homomorphism \(\operatorname{Gal}(E/F)\longrightarrow\mu_{2}\)._

Proof: We will compute the connecting map directly. Let \(\sigma\in Z^{n-1}(F,\mathbb{Z}/2\mathbb{Z})\). We pick the lifting \(\mathcal{L}\) of the trace map that sends \(1\) to \(1\oplus 0\). So \(\mathcal{L}(\sigma)(g_{1},\ldots,g_{n-1})=(\sigma(g_{1},\ldots,g_{n-1}))\oplus 0\). Note that though \(\mathcal{L}\) is not a \(\mathcal{G}\)-map (if it were, the connecting map would have to be zero), \(\mathcal{L}\) is an abelian group homomorphism, and we will use this fact in the next step of the computation. The next step after the lifting \(\mathcal{L}\) is the chain map \(\delta\), which can be computed using the bar resolution. \[\begin{aligned}\delta(\mathcal{L}(\sigma))(g_{1},\ldots,g_{n})&=g_{1}\cdot\mathcal{L}(\sigma(g_{2},\ldots,g_{n}))-\mathcal{L}(\sigma(g_{1}g_{2},\ldots,g_{n}))+\cdots+(-1)^{n}\mathcal{L}(\sigma(g_{1},\ldots,g_{n-1}))\\ &=g_{1}\cdot\mathcal{L}(\sigma(g_{2},\ldots,g_{n}))-\mathcal{L}(g_{1}\cdot\sigma(g_{2},\ldots,g_{n}))+\mathcal{L}(\delta(\sigma)(g_{1},\ldots,g_{n}))\\ &=g_{1}\cdot\mathcal{L}(\sigma(g_{2},\ldots,g_{n}))-\mathcal{L}(g_{1}\cdot\sigma(g_{2},\ldots,g_{n}))\end{aligned}\] This simplification takes advantage of the fact that \(\mathcal{L}\) is an abelian group homomorphism as well as the fact that \(\sigma\) is a cocycle, so \(\delta\) applied to \(\sigma\) is zero, which gets sent by \(\mathcal{L}\) to \(0\) as well. Now, because our modules are \(2\)-torsion and \(\mathbb{Z}/2\mathbb{Z}\) has a trivial \(\mathcal{G}\)-action, \[g_{1}\cdot\mathcal{L}(\sigma(g_{2},\ldots,g_{n}))-\mathcal{L}(g_{1}\cdot \sigma(g_{2},\ldots,g_{n}))=(g_{1}+1)\mathcal{L}(\sigma(g_{2},\ldots,g_{n})).\] Furthermore, \((h+1)\) annihilates all of \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\) whenever \(h\in\mathcal{H}\), and for every \(g_{1}\not\in\mathcal{H}\), \((g_{1}+1)(1\oplus 0)=1\oplus 1\). Thus \[(g_{1}+1)\mathcal{L}(\sigma(g_{2},\ldots,g_{n}))=\begin{cases}\sigma(g_{2}, \ldots,g_{n})\oplus\sigma(g_{2},\ldots,g_{n})&\text{ if }g_{1}\not\in\mathcal{H}\\ 0\oplus 0&\text{ if }g_{1}\in\mathcal{H}\end{cases}\] This is the same as the diagonal image of the function \[\chi_{H}(g_{1})\cdot\sigma(g_{2},\ldots,g_{n})=\begin{cases}\sigma(g_{2}, \ldots,g_{n})&\text{ if }g_{1}\not\in\mathcal{H}\\ 0&\text{ if }g_{1}\in\mathcal{H}\end{cases},\] which completes the computation of the connecting map \(\partial\).
Therefore \(\partial(\sigma)=\chi_{H}\smile\sigma\), the cup product with the character function \(\chi_{H}\). This completes the proof of Theorem 1.2. These ideas in the proof of Theorem 1.2 will be generalized in the sections that follow when we compute Positselski's connecting map \(\eta\).

## 2. Positselski's 6-Term Cohomological Sequence

If we attempt to construct a short exact sequence in the fashion of Arason's theorem for a cyclic extension of degree \(d>2\), then the dimensions of the modules over \(\mathbb{Z}/d\mathbb{Z}\) would be \(1\), \(d\), \(1\), which makes exactness (and hence this approach) impossible. However, with the right machinery we can obtain 6-term exact sequences with a connecting map whose image is the cohomological kernel. Positselski's theorem offers exactly this machinery. The following definition collects the hypotheses needed for this machinery [P].

**Definition 2.1**.: Positselski's **Hypotheses**_: Let \(\mathcal{G}\) be a pro-finite group, let \(d,n\in\mathbb{Z}\) with \(d\geq 2\) and \(n\geq 0\), and let_ \[0\longrightarrow A_{2}\xrightarrow{d_{1}}B_{2}\xrightarrow{d_{2}}C_{2} \xrightarrow{d_{3}}D_{2}\longrightarrow 0\] _be a 4-term exact sequence of free \(\mathbb{Z}/d^{2}\mathbb{Z}\)-modules with a discrete action of \(\mathcal{G}\). Let \(h_{1},h_{2},h_{3}\) be homotopy maps_ \[A_{2}\xleftarrow{h_{1}}B_{2}\xleftarrow{h_{2}}C_{2}\xleftarrow{h_{3}}D_{2}.\] _Furthermore, let \(A_{1},B_{1},C_{1},D_{1}\) be the \(d\)-torsion of \(A_{2},B_{2},C_{2},D_{2}\) (respectively), let \(\overline{A_{2}}:=A_{2}/A_{1}\) (and similarly \(\overline{B_{2}},\overline{C_{2}},\overline{D_{2}}\)), noting that \(A_{1}\cong\overline{A_{2}}\) as \(\mathcal{G}\)-modules. We say that Positselski's Hypotheses are satisfied if the homotopy maps satisfy the "prism" condition, that \(d_{i}h_{i}+h_{i+1}d_{i+1}=d\cdot id\) for all \(i\in\{0,1,2,3\}\), and if the Bockstein maps are 0 for all 4 modules, i.e. for each of the kernel/cokernel short exact sequences_ \[0\longrightarrow X_{1}\longrightarrow X_{2}\xrightarrow{\ \pi\ }\overline{X_{2}}\longrightarrow 0,\qquad X\in\{A,B,C,D\},\] _the induced connecting (Bockstein) maps \(H^{n}(\mathcal{G},\overline{X_{2}})\longrightarrow H^{n+1}(\mathcal{G},X_{1})\) vanish._

Positselski's theorem can now be stated; its proof is given at the end of this section, after two preparatory lemmas.

**Theorem 2.2** ([P]).: _Suppose Positselski's Hypotheses (Definition 2.1) are satisfied. Then the six-term sequence_ \[H^{n}(\mathcal{G},B_{1}\oplus D_{1})\xrightarrow{d_{2}^{*}+h_{3}^{*}}H^{n}(\mathcal{G},C_{1})\xrightarrow{d_{3}^{*}}H^{n}(\mathcal{G},D_{1})\xrightarrow{\ \eta\ }H^{n+1}(\mathcal{G},A_{1})\xrightarrow{d_{1}^{*}}H^{n+1}(\mathcal{G},B_{1})\xrightarrow{h_{1}^{*}\oplus d_{2}^{*}}H^{n+1}(\mathcal{G},A_{1}\oplus C_{1})\] _is exact at the four inner terms, where \(\eta\) is the connecting map constructed in Lemma 2.4 below._

The first lemma concerns a single short exact sequence with homotopies and relates the Bockstein maps of its terms.

**Lemma 2.3**.: _Let_ \[0\longrightarrow X_{2}\xrightarrow{d_{1}}Y_{2}\xrightarrow{d_{2}}Z_{2}\longrightarrow 0\] _be a short exact sequence of free \(\mathbb{Z}/d^{2}\mathbb{Z}\)-modules with a discrete action of \(\mathcal{G}\), equipped with homotopy maps \(h_{1}:Y_{2}\longrightarrow X_{2}\) and \(h_{2}:Z_{2}\longrightarrow Y_{2}\) satisfying the prism condition (with the conventions of Definition 2.1), and let \(X_{1},Y_{1},Z_{1}\), \(\overline{X_{2}},\overline{Y_{2}},\overline{Z_{2}}\) and \(\pi\) be as above. Define \(\Phi_{X}:\overline{X_{2}}\longrightarrow X_{1}\) as follows: given \(\overline{x}\in\overline{X_{2}}\), choose_ \(x\in X_{2}\) such that \(\pi(x)=\overline{x}\). Then multiply it by \(d\). \(d\cdot x\) is now unique, since any two such \(x\)'s differ by a multiple of \(d\). Furthermore, \(d\cdot x\) is a \(d\)-torsion element of \(X_{2}\) and is therefore an element of \(X_{1}\); this element \(d\cdot x\) is by definition \(\Phi_{X}(\overline{x})\). Let \(B_{X},B_{Y},B_{Z}\) denote the Bockstein homomorphisms, which are the snake-lemma connecting maps: \(B_{X}:H^{n}(\overline{X_{2}})\longrightarrow H^{n+1}(X_{1})\), etc. Finally, let \(\overline{\partial}:H^{n}(\overline{Z_{2}})\longrightarrow H^{n+1}( \overline{X_{2}})\) be the connecting map of the induced short exact sequence \(0\longrightarrow\overline{X_{2}}\longrightarrow\overline{Y_{2}}\longrightarrow\overline{Z_{2}}\longrightarrow 0\). Then_

1. _There are well-defined homomorphisms_ \(\widetilde{h_{2}}:Z_{1}\longrightarrow X_{1}\) _and_ \(\widetilde{\overline{h_{2}}}:\overline{Z_{2}}\longrightarrow\overline{X_{2}}\) _defined as_ \(\widetilde{h_{2}}=d_{1}^{-1}h_{2}\)_,_ \(\widetilde{\overline{h_{2}}}=d_{1}^{-1}h_{2}\)_, both of which are equal to_ \(-h_{1}f\)_, for any lifting_ \(f\) _of_ \(d_{2}\)_._
2. \(\Phi_{X}^{*}\overline{\partial}=\widehat{h_{2}}B_{Z}-B_{X}\widehat{\overline{ h_{2}}}\)_, where_ \(\widehat{h_{2}}\) _and_ \(\widehat{\overline{h_{2}}}\) _are the induced maps on cohomology._

**Proof**: 1. Note that because \(d_{1}\) is injective, \(d_{1}^{-1}\) is a map from \(\operatorname{im}(d_{1})\) to either \(X_{1}\) or \(\overline{X_{2}}\). We will show that \(\operatorname{im}(h_{2})\subseteq\operatorname{im}(d_{1})\) in \(Z_{1}\) and \(\overline{Z_{2}}\), which will prove that the definition of \(\widetilde{h_{2}}=d_{1}^{-1}h_{2}\) makes sense. Let \(f\) be a lifting of \(d_{2}\), such a lifting exists because \(d_{2}\) is surjective, and let \(z\) be an element of either \(Z_{1}\) or \(\overline{Z_{2}}\).
Then from the prism condition, \[h_{2}(z) =h_{2}(d_{2}(f((z))))\] \[=(h_{2}d_{2})f(z)\] \[=d\cdot f(z)-(d_{1}h_{1})(f(z))\] \[=-d_{1}(h_{1}(f(z)))\] \[=d_{1}(h_{1}(-f(z)))\] which is in the image of \(d_{1}\). Furthermore, \(\ker(d_{2})=\operatorname{im}(d_{1})\subseteq\ker(h_{1})\) for both \(Y_{1}\) and \(\overline{Y_{2}}\), since both modules are \(d\)-torsion, making the composition \(h_{1}d_{1}\) the \(0\) map from the prism condition at \(X\). Thus, any two liftings of \(d_{2}\) differ by an element in the kernel of \(h_{1}\), which shows that \(-h_{1}f\) is well-defined. To show the equality \(d_{1}^{-1}h_{2}=-h_{1}f\), we can apply \(d_{1}^{-1}\) to both sides of the previous equality: \[d_{1}^{-1}h_{2} =d_{1}^{-1}d_{1}h_{1}(-f)\] \[=h_{1}(-f)\] \[=-h_{1}f\] 2. We assume without loss of generality that \(X\subseteq Y\) and \(d_{1}\) is the inclusion map. This allows the the snake-lemma connecting maps to be computed by applying the coboundary map \(\delta\) to any pre-image of a given cocycle. Computation will be done this way for both Bockstein maps \(B_{X}\), \(B_{Z}\) and for \(\overline{\partial}\). The inclusions also make \(h_{1}=-h_{2}d_{2}\) in \(Y_{1}\) and \(\overline{Y_{2}}\) (but not \(Y_{2}\)), and furthermore \(h_{1}\) restricted to \(X\subseteq Y\) is multiplication by \(d\). Below is a diagram of the exact square. Let \(\mathscr{L}\) be any lifting of \(d_{2}\,:\,\overline{Y_{2}}\longrightarrow\overline{Z_{2}}\), and let \(\mathscr{L}^{\prime}\) be a lifting of \(\pi\,:\,Y_{2}\longrightarrow\overline{Y_{2}}\) that maps \(\overline{X_{2}}\) into \(X_{2}\). We define a lifting for \(\pi\,:\,\overline{Z_{2}}\longrightarrow Z_{2}\) by \(\mathscr{L}_{1}\,:\,=\,d_{2}\mathscr{L}^{\prime}\mathscr{L}\), and a lifting for \(d_{2}\,:\,Y_{2}\longrightarrow Z_{2}\) by \(\mathscr{L}_{2}\,:\,=\,\mathscr{L}^{\prime}\mathscr{L}\pi\). With these choices for \(\mathscr{L}_{1}\) and \(\mathscr{L}_{2}\), the bottom right square commutes for the liftings, as \[\mathscr{L}_{2}\mathscr{L}_{1}=(\mathscr{L}^{\prime}\mathscr{L}\pi)(d_{2} \mathscr{L}^{\prime}\mathscr{L})=\mathscr{L}^{\prime}\mathscr{L}(\pi d_{2}) \mathscr{L}^{\prime}\mathscr{L}=\mathscr{L}^{\prime}\mathscr{L}(d_{2}\pi \mathscr{L}^{\prime}\mathscr{L})=\mathscr{L}^{\prime}\mathscr{L}.\] Let \(\sigma_{z}\in Z^{n}(\overline{Z_{2}})\), and let \(\sigma_{y}=\mathscr{L}^{\prime}(\mathscr{L}(\sigma_{z}))=\mathscr{L}_{2}( \mathscr{L}_{1}(\sigma_{z}))\in C^{n}(Y_{2})\). The liftings can be seen in the following diagram. We use \([\sigma]\) to denote the cohomology class of \(\sigma\) whenever \(\sigma\) is a cocycle. We recall that \(d_{1}^{-1}\) is the identity map. Direct computation and an application of the prism condition yield the desired result as follows. 
\[-B_{X}(\widehat{\widehat{h}}_{2}([\sigma_{z}])) =-B_{X}([\overline{h}_{2}\sigma_{z}]))=B_{X}([-\overline{h}_{2} \sigma_{z}])=B_{X}([h_{1}\mathscr{L}\sigma_{z}])\] \[=B_{X}([h_{1}\pi\sigma_{y}])=B_{X}([\pi h_{1}\sigma_{y}])=[\delta \mathscr{L}^{\prime}\pi h_{1}\sigma_{y}]\] \[=[\delta h_{1}\sigma_{y}]=[h_{1}(\delta\sigma_{y})]\] \[\widehat{h_{2}}(B_{Z}([\sigma_{z}])) =\widehat{h_{2}}(B_{Z}([\pi d_{2}\sigma_{y}]))=\widehat{h_{2}}([( \delta\mathscr{e}^{\prime})\pi d_{2}\sigma_{y}])\] \[=\widehat{h_{2}}([\delta d_{2}\sigma_{y}])=[h_{2}\delta d_{2} \sigma_{y}]=[(h_{2}d_{2})(\delta\sigma_{y})]\] \[\Phi_{X}^{*}(\overline{\partial}([\sigma_{z}])) =\Phi_{X}^{*}([\delta(\mathscr{e}^{\prime}\sigma_{z})])=\Phi_{X} ^{*}([\delta\pi\sigma_{y}])=\Phi_{X}^{*}([\pi\delta\sigma_{y}])\] \[=[(\Phi_{X}\pi)\delta\sigma_{y}]=[d\cdot\delta\sigma_{y}]\] Using these computations and the prism condition at \(Y_{1}\), we have the following equalities for any \(\sigma_{z}\in Z^{n}(\overline{Z_{2}})\). \[\Phi_{X}^{*}\overline{\partial}([\sigma_{z}])=[d\cdot(\delta\sigma_{y})]=[(h_ {2}d_{2})(\delta\sigma_{y})-h_{1}(\delta\sigma_{y})]=\widehat{h_{2}}(B_{Z}([ \sigma_{z}]))-B_{X}(\widehat{h_{2}}([\sigma_{z}]))\] and therefore \[\Phi_{X}^{*}\overline{\partial}=\widehat{h_{2}}B_{Z}-B_{X}\widehat{h_{2}}.\] This concludes the proof of Lemma 2.3. This next lemma gives the definition of the map \(\eta\) and uses a splitting of the four term sequence into two three term exact sequences. It shows how \(\eta\) is related to the connecting maps of these short exact sequences via the homotopies provided by the Positeselski Hypotheses. **Lemma 2.4**.: _Using the language of the previous lemma and viewing the 4-term exact sequence_ \[0\longrightarrow A\longrightarrow B\longrightarrow C\longrightarrow D\longrightarrow 0\] _as two short exact sequences_ \[0\longrightarrow A\longrightarrow B\longrightarrow\Delta\longrightarrow 0\] \[0\longrightarrow\Delta\longrightarrow C\longrightarrow D\longrightarrow 0\] _with the second sequence starting with an inclusion \(\Delta\subseteq C\), and given two liftings \(f_{2}:\Delta\longrightarrow B\) and \(f_{3}:D\longrightarrow C\) for the the terminal, surjective maps in each of these two respective short exact sequences, we have the following equivalent definitions of \(\eta\):_ \[-\widehat{h_{2}}\partial_{\Delta_{1}D_{1}}=\eta=\partial_{A_{1}\Delta_{1}} \widehat{h_{3}}\] _where \(\widetilde{h_{2}}:=d^{-1}h_{2}=-h_{1}f_{2}:C\longrightarrow A\) for the first short exact sequence as in the previous lemma, and \(\widehat{h_{3}}:=h_{3}=-h_{2}f_{3}:D\longrightarrow\Delta\) is the analogue of \(\widetilde{h_{2}}\) for the second short exact sequence. Also, \(f_{3}\) is any lifting of \(d_{3}\) (not necessarily a \(\mathcal{G}\)-map) and \(\delta:C^{n}(\mathcal{G},C_{1})\longrightarrow C^{n+1}(\mathcal{G},C_{1})\) is the group cohomology coboundary map._ Before proceeding with the proof, we note that though it is tempting to define \(\widetilde{h_{3}}:D\longrightarrow\Delta\) as \(f_{2}h_{3}\), where \(f_{2}\) is a lifting from \(C\) to \(B\) in the four-term exact sequence, however \(\Delta\) is already a submodule of \(C\), so this is not necessary. The lifting that proceeds \(h_{3}\) needs to be a lifting of the inclusion, which is the identity map in this case. 
**Proof**: The first homotopy we need for the short exact sequence \[0\longrightarrow\Delta\stackrel{{\subseteq}}{{\longrightarrow }}C\stackrel{{ d_{3}}}{{\longrightarrow}}D\longrightarrow 0\] is \(d_{2}h_{2}\,:\,C\longrightarrow\Delta\), and the fact that it satisfies the prism condition at \(\Delta\) and \(C\) follows from the prism condition being satisfied for the four-term exact sequence. We are ready to prove equality claimed between the two definitions of \(\eta\). Let \(\sigma\in Z^{1}(\mathcal{G},D_{1})\). Then \[-\widehat{h}_{2}\partial_{\Delta_{1}D_{1}}([\sigma]) =[-(h_{1}f_{2})(\delta f_{3}\sigma)]=[(-h_{1}f_{2})(\delta f_{3} \sigma)]=[\widetilde{h_{2}}(\delta f_{3}\sigma)]=[(d_{1}^{-1}h_{2})(\delta f_ {3}\sigma)]\] \[=[d_{1}^{-1}h_{2}\delta f_{3}\sigma]=[d_{1}^{-1}\delta h_{2}f_{3} \sigma]=[d_{1}^{-1}\delta(f_{2}d_{2})h_{2}f_{3}\sigma]\] \[=[(d_{1}^{-1}\delta d_{2}^{-1})(d_{2}h_{2}f_{3})\sigma]=\partial_ {A_{1}\Delta_{1}}\widehat{h}_{3}([\sigma])\] The third equality above uses the prism condition to interchange the two definitions of \(\widetilde{h}_{2}\) as in the previous lemma, the sixth equality follows from the commutativity of the chain map with the homotopies, the seventh equality is true because With the right choice of lifting (not necessarily \(f_{2}\)), applying \(d_{2}\) and then the lifting is the same as the identity map, but this difference in lifting choice is just a coboundary, which means that any lifting of \(d_{2}\) yields the desired equality. Finally, the last equality follows from the definitions of the connecting map \(\partial_{A_{1}\Delta_{1}}\) and the homomorphism \(\widehat{h}_{3}\) from \(D\) to \(\Delta\). This concludes the proof of Lemma 2.4. \(\Box\) The reader may note that the only difference between this definition of \(\eta\) and the definition of \(\partial\) from the snake lemma is the \(h_{2}\) in the composition. \(h_{2}\) can be viewed as the way to connects the two middle terms of the 4-term exact sequence, where the snake lemma has 1 middle term in a 3-term exact sequence and there is no need for this intermediate step. In this sense they are as close as can be considering the different number of modules in the exact sequence. With Lemmas 2.3 and 2.4 proved, we move on to the proof of Positselski's theorem. Up to this point we have not used the assumption from the Positselski Hypotheses that the Bochstein maps are zero. We will do so now. However as noted in Remark 2.5 below we will see that we need not assume that the Bochstein map \(B_{D}\) is zero; we shall only use that the composite \(\widehat{h}_{3}\circ B_{D}\) is zero. **Proof of Theorem 2.2**: With \(\eta\) defined, it is required to show exactness at the inner four terms of the 6-term sequence. Exactness of \(H^{n}(B_{1}\oplus D_{1})\stackrel{{ d_{2}^{*}+h_{3}^{*}}}{{ \longrightarrow}}H^{n}(C_{1})\stackrel{{ d_{3}^{*}}}{{ \longrightarrow}}H^{n}(D_{1})\): The composition \(d_{3}d_{2}\) is the 0 map on the modules, which makes it the zero map on cohomology as well. The composition \(d_{3}h_{3}\) is also the 0 map because it is multiplication by \(d\) and \(D_{1}\) is \(d\)-torsion. Now we show that \(d_{2}^{*}+h_{3}^{*}\) maps onto the kernel of \(d_{3}^{*}\). Let \(\sigma_{c}\in Z^{n}(\mathcal{G},C_{1})\) such that \([\sigma_{c}]\) is in the kernel of \(d_{3}^{*}\). Let \(\Delta_{1}=\ker(d_{3})\subseteq C_{1}\). 
Then, after adding a coboundary from \(B^{n}(\mathcal{G},C_{1})\) to \(\sigma_{c}\) if necessary, we may assume that \(\sigma_{c}\in Z^{n}(\mathcal{G},\Delta_{1})\subseteq Z^{n}(\mathcal{G},C_{1})\). Let \(\overline{\sigma}_{c}\in Z^{n}(\overline{\Delta}_{2})\) be the corresponding element to \(\sigma_{c}\) from the isomorphism \(\Delta_{1}\cong\overline{\Delta}_{2}\). Then \(B_{C}([\overline{\sigma}_{c}])=0\) in \(H^{n+1}(\mathcal{G},C_{1})\) by the assumption \(B_{C}=0\). Therefore \(B_{\Delta}([\overline{\sigma}_{c}])\) is in the kernel of \(\imath^{*}\), where \(\imath\,:\,\Delta\longrightarrow\,C\) is the inclusion. So \(B_{\Delta}([\overline{\sigma}_{c}])=\partial_{\Delta D}([\sigma_{d}])\) for some \(\sigma_{d}\in Z^{n}(\mathcal{G},D_{1})\) from the exactness of the long exact sequence induced by the Snake Lemma from the short exact sequence \[0\longrightarrow\Delta_{1}\longrightarrow C_{1}\longrightarrow D_{1} \longrightarrow 0.\] With \(B_{\Delta}([\overline{\sigma}_{c}])\) being in the image of \(\partial_{\Delta_{1}D_{1}}\), we can apply Lemma 2.3 to the short exact sequence \[0\longrightarrow A\longrightarrow B\longrightarrow\Delta\longrightarrow 0\] with the following result: Let the \(\widehat{h_{2}}:\Delta_{1}\longrightarrow A_{1}\) be as in the lemma. Then the following equalities hold in \(H^{n+1}(A_{1})\). \[\partial_{A_{1}\Delta_{1}}([\sigma_{c}])=B_{A}\widehat{h_{2}}([\overline{ \sigma}_{c}])-\widehat{h_{2}}B_{\Delta}([\overline{\sigma}_{c}])=-\widehat{h_ {2}}B_{\Delta}([\overline{\sigma}_{c}])=-\widehat{h_{2}}\partial_{\Delta_{1}D _{1}}([\sigma_{d}])=\eta([\sigma_{d}])\] with the second equality holding because \(B_{A}=0\) and the last equality following from the definition of \(\eta\). Furthermore, the equivalent definition of \(\eta\) as \(\partial_{A_{1}\Delta_{1}}\widehat{h_{3}}\) yields the equality \[\partial_{A_{1}\Delta_{1}}([\sigma_{c}])=\partial_{A_{1}\Delta_{1}}\widehat{h _{3}}([\sigma_{d}]),\] where \(\widehat{h_{3}}\), as in Lemma 2.4, comes from the short exact sequence \[0\longrightarrow\Delta_{1}\longrightarrow C_{1}\longrightarrow D_{1} \longrightarrow 0.\] The desired result follows from this: Since \(\partial_{A_{1}\Delta_{1}}\left([\sigma_{c}]-\widehat{h_{3}}([\sigma_{d}]) \right)=0\in H^{n+1}(G,A_{1})\), it follows that \([\sigma_{c}]-\widehat{h_{3}}([\sigma_{d}])=d_{2}^{*}([\sigma_{b}])\) for \(\sigma_{b}\in Z^{n}(B_{1})\). Therefore \([\sigma_{c}]=\widehat{h_{3}}([\sigma_{d}])+d_{2}^{*}([\sigma_{b}])\) for some \(\sigma_{D}\in Z^{n}(G,D_{1}),\sigma_{b}\in H^{n}(G,B_{1})\). This concludes exactness at \(H^{n}(G,C_{1})\). Exactness of \(H^{n}(C_{1})\stackrel{{ d_{3}^{*}}}{{\longrightarrow}}H^{n}(D_{1}) \stackrel{{\eta}}{{\longrightarrow}}H^{n+1}(A_{1})\): We first show that \(\eta d_{3}^{*}=0\) by using the definition of \(\eta=-\widehat{h_{2}}\partial_{\Delta_{1}D_{1}}\) in Lemma 2.4 and the fact that \(\partial_{\Delta_{1}D_{1}}d_{3}^{*}=0\) from exactness of the long exact sequence from the short exact sequence \[0\longrightarrow\Delta_{1}\longrightarrow C_{1}\longrightarrow D_{1} \longrightarrow 0.\] In fact, this means the kernel of \(\partial_{\Delta_{1}D_{1}}\) is equal to the image of \(d_{3}^{*}\), so the other containment amounts to showing that the kernel of \(\partial_{\Delta_{1}D_{1}}\) is no smaller than the kernel of \(-\widehat{h}_{2}\partial_{\Delta_{1}D_{1}}\). We shall prove this fact. However, we will start with the equivalent definition of \(\eta\) in Lemma 2.4, and then show this equality. 
Suppose \(\sigma_{d}\in Z^{n}(G,D_{1})\) such that \([\sigma_{d}]\) is in the kernel of \(\eta\). From Lemma 2.4, \(\eta([\sigma_{d}])=\partial_{A_{1}\Delta_{1}}\widehat{h_{3}}([\sigma_{d}])\), which means \(\widehat{h}_{3}([\sigma_{d}])\) is in the kernel of \(\partial_{A_{1}\Delta_{1}}\), the connecting map for the short exact sequence \[0\longrightarrow A_{1}\longrightarrow B_{1}\longrightarrow\Delta_{1} \longrightarrow 0,\] and therefore \(\widehat{h}_{3}([\sigma_{d}])=d_{2}^{*}([\sigma_{b}])\) for some \(\sigma_{b}\in Z^{n}(G,B_{1})\). Let \(\overline{\sigma}_{b},\overline{\sigma}_{d}\) be the corresponding cocycles to \(\sigma_{b},\sigma_{d}\) from the isomorphisms \(B_{1}\cong\overline{B}_{2},D_{1}\cong\overline{D}_{2}\), respectively. Then from the commuting of the Bockstein maps with the cohomology maps \(\widehat{h}_{3}\) induced from module homomorphisms \(\widetilde{h}_{3}\) and the vanishing of \(B_{B}\), we have the following equalities. \[0=d_{2}^{*}(B_{B}([\overline{\sigma}_{b}]))=B_{\Delta}(d_{2}^{*}([\overline{ \sigma}_{b}]))=B_{\Delta}(\widehat{h}_{3}([\sigma_{d}]))\] From Lemma 2.3 and the vanishing of the composite \(\widehat{h}_{3}B_{D}\), we have \[B_{\Delta}\widehat{h}_{3}([\overline{\sigma}_{d}])=B_{\Delta}\widehat{h}_{3}([ \overline{\sigma}_{d}])-\widehat{h}_{3}B_{D}([\overline{\sigma}_{d}])=\Phi^{*} \partial_{\Delta_{1}D_{1}}([\overline{\sigma}_{d}])=\partial_{\Delta_{1}D_{1}} ([\sigma_{d}]).\] Therefore \(\partial_{\Delta_{1}D_{1}}([\sigma_{d}])=0\). This concludes exactness at \(H^{n}(G,D_{1})\). Exactness of \(H^{n}(D_{1})\xrightarrow{\eta}H^{n+1}(A_{1})\xrightarrow{d_{1}}H^{n+1}(B_{1})\): To show that \(d_{1}\eta=0\), let \(\sigma_{d}\in Z^{n}(D_{1})\). Then by Lemma 2.4, \[d_{1}^{*}\eta([\sigma_{d}])=d_{1}^{*}(-\partial_{A_{1}\Delta_{1}}\widehat{h}_{ 3})([\sigma_{d}])=-(d_{1}^{*}\partial_{A_{1}\Delta_{1}})(\widehat{h}_{3}([ \sigma_{d}])),\] with \(d_{1}^{*}\partial_{A_{1}\Delta_{1}}\) being zero because of exactness of the long exact sequence from the short exact sequence \[0\xrightarrow{}A_{1}\xrightarrow{}B_{1}\xrightarrow{}\Delta_{1}\xrightarrow{ }0.\] Now suppose \(\sigma_{a}\in Z^{n+1}(G,A_{1})\) and \([\sigma_{a}]\) is in the kernel of \(d_{1}^{*}\). Then from exactness of the long exact sequence from the short exact sequence \[0\xrightarrow{}A_{1}\xrightarrow{}B_{1}\xrightarrow{}\Delta_{1}\xrightarrow{ }0,\] \([\sigma_{a}]=\partial_{A_{1}\Delta_{1}}([\sigma_{x}])\) for some \(\sigma_{x}\in Z^{n}(G,\Delta_{1})\). Let \(\overline{\sigma}_{x}\in Z^{n}(G,\overline{\Delta}_{2})\) be the element corresponding to \(\sigma_{x}\) through \(\Phi^{*}\). Then \[[\sigma_{a}]=\partial_{A_{1}\Delta_{1}}\Phi^{*}_{\Delta}([\overline{\sigma}_{ x}])=\Phi^{*}_{A}\partial_{A_{1}\Delta_{1}}([\overline{\sigma}_{x}])\] From Lemma 2.3 and the vanishing of the Bockstein map \(B_{A}\), \[\Phi^{*}_{A}\partial_{A_{1}\Delta_{1}}([\overline{\sigma}_{x}])=\widehat{h}_{ 2}B_{\Delta}([\overline{\sigma}_{x}])-B_{A}\widehat{h}_{2}([\overline{\sigma} _{x}])=\widehat{h}_{2}B_{\Delta}([\overline{\sigma}_{x}]).\] Furthermore, from the commuting of the Bockstein maps with the \(d_{i}\)'s and the vanishing of \(B_{C}\), we have \[d_{3}^{*}B_{\Delta}([\overline{\sigma}_{x}])=B_{C}i^{*}([\overline{\sigma}_{ x}])=0,\] where \(i^{*}\) is induced by the inclusion \(\subseteq\colon\Delta\xrightarrow{}C\). 
By exactness of the long exact sequence from the short exact sequence \[0\xrightarrow{}\Delta_{1}\xrightarrow{}C_{1}\xrightarrow{}D_{1} \xrightarrow{}0,\] \(B_{\Delta}([\overline{\sigma}_{x}])=\partial_{\Delta_{1}D_{1}}([\sigma_{d}])\) for some \(\sigma_{d}\in Z^{n}(G,D_{1})\). Therefore, using a definition of \(\eta\) in Lemma 2.4, \[[\sigma_{a}]=\widehat{h}_{2}B_{\Delta}([\overline{\sigma}_{x}])=\widehat{h}_{ 2}\partial_{\Delta_{1}D_{1}}([\sigma_{d}])=\eta([-\sigma_{d}]).\] This concludes exactness at \(H^{n}(G,A_{1})\). Exactness of \(H^{n+1}(A_{1})\xrightarrow{d_{1}}H^{n+1}(B_{1})\xrightarrow{d_{2}\oplus h_{ 1}}H^{n}(A_{1}\oplus C_{1})\): Both of the compositions \(d_{2}d_{1}\) and \(h_{1}d_{1}\) are \(0\) maps. In the first case this comes from exactness of the \(d_{i}\)'s, and in the second case from \(A_{1}\) being \(d\)-torsion. For the other containment, let \(\widehat{d}_{2}\,:\,H^{n+1}(G,B_{1})\xrightarrow{}H^{n+1}(G,\Delta_{1})\) be the map induced by \(d_{2}\) with codomain reduced to \(\Delta_{1}\), so that \(i^{*}\widehat{d}_{2}=d_{2}^{*}\,:\,H^{n+1}(G,B_{1})\xrightarrow{}H^{n+1}(G,C _{1})\), where \(\iota\) is induced from the inclusion \(\iota\,:\,\Delta_{1}\xrightarrow{}C_{1}\). Let \(\sigma_{b}\in Z^{n+1}(G,B_{1})\) such that \(h_{1}^{*}([\sigma_{b}])=0\) and \(d_{2}^{*}([\sigma_{b}])=0\). We will show that \([\sigma_{b}]=d_{1}^{*}([\sigma_{a}])\) for some \(\sigma_{a}\in Z^{n+1}(G,A_{1})\) by showing that \(\widehat{d}_{2}([\sigma_{b}])=0\). From the hypothesis, \(0=d_{2}^{*}([\sigma_{b}])=\iota^{*}(\widehat{d}_{2}([\sigma_{b}]))\), which means \(\widehat{d}_{2}([\sigma_{b}])=\partial_{\Delta_{1}D_{1}}([\sigma_{d}])\) for some \(\sigma_{d}\in Z^{n}(G,D_{1})\). This follows from the exactness of the long exact sequence from the short exact sequence \[0\xrightarrow{}\Delta_{1}\xrightarrow{}C_{1}\xrightarrow{}D_{1}\xrightarrow{ }0.\] Using the definitions of \(\eta\) from Lemma 2.3, we have \[0=h_{1}^{*}([\sigma_{b}])=\widehat{h}_{2}\widehat{d}_{2}([\sigma_{b}])= \widehat{h}_{2}\partial_{\Delta_{1}D_{1}}([\sigma_{b}])=-\eta([\sigma_{b}])=- \partial_{A_{1}\Delta_{1}}\widehat{h}_{3}([\sigma_{b}]),\] which means that \(\widehat{h}_{3}([\sigma_{b}])=\widehat{d}_{2}([\sigma_{b}^{\prime}])\) for some \(\sigma_{b}\in Z^{n+1}(G,B_{1})\) from exactness of the long exact sequence induced by the short exact sequence \[0\longrightarrow A_{1}\longrightarrow B_{1}\longrightarrow\Delta_{1} \longrightarrow 0.\] Let \(\overline{\sigma}_{d}\) and \(\overline{\sigma}_{b}^{\prime}\) be the corresponding element to \(\sigma_{d}\) and \(\sigma_{b}^{\prime}\) through the isomorphism \(\Phi^{*}\) (respectively). Then from Lemma 2.3 and the assumption \(\widehat{h}_{3}B_{D}=0\), \[\widetilde{d}_{2}([\sigma_{b}])=\partial_{\Delta_{1}D_{1}}([ \sigma_{d}]) =\widehat{h}_{3}B_{D}([\overline{\sigma}_{d}])-B_{\Delta}\widehat {h}_{3}([\overline{\sigma}_{d}])=-B_{\Delta}\widehat{h}_{3}([\overline{ \sigma}_{d}])\] \[=-B_{\Delta}\widehat{d}_{2}([\overline{\sigma}_{b}^{\prime}])=- \widehat{d}_{2}B_{B}([\overline{\sigma}_{b}^{\prime}])=0.\] This yields the desired result of \(\widehat{d}_{2}([\sigma_{b}])=0\). Therefore, from the exactness of the long exact sequence induced by the short exact sequence \[0\longrightarrow A_{1}\longrightarrow B_{1}\longrightarrow\Delta_{1} \longrightarrow 0\] we have \([\sigma_{b}]=d_{1}^{*}([\sigma_{a}])\) for some \(\sigma_{a}\in Z^{n+1}(G,A_{1})\). **Remark 2.5**.: _The proof given above follows the proof given by Positselski [P]. 
In Positeselski's write-up all four of the Bockstein maps were assumed to be zero for the six-term sequence to be exact, however as noted above this hypothesis can be weakened to assuming that \(B_{A}\), \(B_{B}\) and \(B_{C}\) are zero along with the vanishing of the composite \(\widehat{h}_{3}B_{D}\). Tabulating the use of these hypotheses, the exactness at \(H^{n}(C_{1})\) only required \(B_{A}=0\) and \(B_{C}=0\), at \(H^{n}(D_{1})\) exactness requires \(\widehat{h}_{3}B_{D}=0\), at \(H^{n+1}(A_{1})\) exactness requires \(B_{A}=0\) and \(B_{C}=0\), and exactness at \(H^{n+1}(B_{1})\) requires that \(\widehat{h}_{3}B_{D}=0\) and \(B_{B}=0\). Exactness at \(H^{n+1}(A_{1})\) is of particular interest when \(n=1\) because it can be used to calculate cohomological (in particular Brauer) kernels. These observations will be important in a sequel to this paper where four term exact sequences with homotopies are constructed, but where the fourth Bochstein map \(B_{D}\) fails to be zero, while the composite \(\widehat{h}_{3}B_{D}\) is shown to be zero._ ## 3. The General Setup and the Bockstein Maps For the sections that follow we adopt the following notation: Let \(G\) be a semi-direct product of \(\langle\tau\rangle\) by \(\langle\sigma\rangle\) with \(|\tau|=d\), \(|\sigma|=s\) and \(\langle\sigma\rangle\) has a faithful action on \(\langle\tau\rangle\). Let \(\mathcal{G}=\operatorname{Gal}(F^{\mathrm{sep}}/F)\) for some field \(F\) whose extension \(E\) is the degree-\(d\) extension we wish to study. Let \(\widetilde{E}\) be the Galois closure of \(E/F\), \(\mathcal{N}=\operatorname{Gal}(F^{\mathrm{sep}}/\widetilde{E})\triangleleft \mathcal{G}\). Assume further that \(G\cong\operatorname{Gal}(\widetilde{E}/F)\cong G/\mathcal{N}\), \(E=F(\beta)\), \(\widetilde{E}=F(\alpha,\beta)\), and let \(\widetilde{F}\) be the cyclic extension of \(F\) given by \(\operatorname{Fix}(\langle\tau\rangle)\) in this setup. Let \(\mathcal{H}=\operatorname{Gal}(F^{\mathrm{sep}}/E)\), \(\mathcal{J}=\operatorname{Gal}(F^{\mathrm{sep}}/\widetilde{F})\), \(H=\langle\sigma\rangle\) and \(J=\langle\tau\rangle\) so that \(\sigma(\beta)=\beta\) and \(\tau(\alpha)=\alpha\). We also assume that the \(d^{2}\)th roots of unity \(\mu_{d^{2}}\subseteq F\), and \(\operatorname{char}(F)\) does not divide \(d\) so that \(\mu_{d^{2}}\) contains \(d^{2}\) distinct roots of unity. The following three sections will cover the cases of \(E/F\) in increasing generality. In the next section \(E/F\) will be a cyclic extension so that \(\widetilde{E}=E\), \(\widetilde{F}=F\) and \(s=|\sigma|=1\) with no restrictions on \(d\). In the section that follows we assume \(s=|\sigma|=2\) so that \(G\) is a dihedral group with \(d\) odd. Lastly we let \(s=|\sigma|\) be any even positive integer with \(s|(d-1)\), hence \(d\) is still odd. We denote by \(\theta:\{0,1,\ldots,d-1\}\to\{0,1,\ldots,d-1\}\) the conjugation in \(\langle\tau\rangle\) by \(\sigma\), that is, \(\sigma\tau^{i}\sigma^{-1}=\tau^{\theta(i)}\). We define \(\theta_{j}\) by \(\sigma^{j}\tau\sigma^{-j}=\tau^{\theta_{j}}\) (in fact \(\theta_{j}=\theta^{j}(1)\) where the latter is the \(j\)'th iterate of \(\theta\), but the notation \(\theta_{j}\) is less cumbersome.) We assume that \(\theta\) has order \(s\), that is, conjugation by \(\sigma\) on \(\langle\tau\rangle\) has order \(s\). We assumed this from the faithful action in the semi-direct product setup of \(G\) above at the beginning of this section. 
As \(\tau\) has odd order, \(\sigma^{\frac{s}{2}}\tau^{i}\sigma^{-\frac{s}{2}}=\tau^{-i}\) for all \(i\). From this, \(\theta_{j+\frac{s}{2}}\equiv-\theta_{j}\) (mod \(d\)) and since \(0<\theta_{j}<d\) we must have \(\theta_{j}+\theta_{j+\frac{s}{2}}=d\). Part of the Positselski Hypotheses is the requirement that the Bockstein maps are zero for the four modules in the exact sequence, and we will show that this is indeed the case for the next three sections. To facilitate this in the later sections, we will prove a couple lemmas here. To set up these lemmas, let us first examine the long exact sequence over which the Bockstein map is defined for the module \(\mu_{d^{2}}\) for a given field \(F\) that contains those roots of unity. We will be using \(M_{1}=\mu_{d^{2}}\) as a trivial \(\mathcal{G}\)-module in all three of the following sections, and \(M_{2}\) will be an induced module with the same cohomology when taken over a slightly larger field. The long exact sequence associated with \(0\to\mu_{d}\to\mu_{d^{2}}\to\mu_{d^{2}}/\mu_{d}\to 0\) is the following \[\cdots\longrightarrow H^{n}(F,\mu_{d})\stackrel{{ i}}{{ \longrightarrow}}H^{n}(F,\mu_{d^{2}})\stackrel{{\pi}}{{ \longrightarrow}}H^{n}(F,\mu_{d^{2}}/\mu_{d})\stackrel{{\beta_{ \mu_{d^{2}}}}}{{\longrightarrow}}H^{n+1}(F,\mu_{d})\longrightarrow\cdots\] and the relevant Bockstein map is the connecting map labelled \(\beta_{\mu_{d^{2}}}\) in this sequence (see also [SV]). In this case the vanishing of the Bockstein map is given next. **Lemma 3.1**.: _Suppose \(F\) is a field with \(\mu_{d^{2}}\subset F\). Then the Bockstein map \(\beta_{\mu_{d^{2}}}\) associated with the short exact sequence \(0\to\mu_{d}\to\mu_{d^{2}}\to\mu_{d^{2}}/\mu_{d}\to 0\) is zero._ Proof.: The Bloch-Kato Conjecture, proved by Veovodski in [V] (see section 1.7 of [HW] for discussion of history), states that the norm residue homomorphisms below are surjective since \((\text{char}(F),d)=1\). Furthermore, the identity map commutes with the canonical quotient map through the norm residue homomorphism. This means we have a commutative diagram, Therefore the map \(\pi\) is surjective as well, so that \(\beta_{\mu_{d^{2}}}=0\) by exactness. This proves the lemma. \(\Box\) Lemma 3.1 checks the vanishing of the Bockstein map for \(\mu_{d^{2}}\), which takes care of \(M_{1}\) in every case we will consider. Similarly, \(M_{2}\) is either an induced module or a sum of such in every case, and Lemma 7 will apply. The field \(E\) may be used instead of \(F\) to apply Lemma 3.1 and obtain the result that \(\beta_{M_{2}}=0\). For the cyclic case, \(M_{3}=M_{2}\) and \(M_{4}=M_{1}\), so all the Bockstein maps have been shown to be zero for the cyclic case. The dihedral and semi-direct cases have different \(M_{3}\)'s and \(M_{4}\)'s, and these cases will be treated separately in the development of the verification of their respective Positselski Hypotheses. In doing so, the following lemma will be used. **Lemma 3.2**.: _Let \(\mathcal{J}\subseteq\mathcal{G}\) be an index \(s\) subgroup, \(X_{2}\) a free \(\mathbb{Z}\big{/}d^{2}\mathbb{Z}\)-module with a discrete action of \(\mathcal{G}\) with \((s,d)=1\). If \(\beta_{\mathcal{J},X}=0\) then \(\beta_{\mathcal{G},X}=0\) as well._ **Proof**: The restriction map \(res\,:\,H^{n}(\mathcal{G},X_{2})\longrightarrow H^{n}(\mathcal{J},X_{2})\) is injective because \(corores=\cdot s\), which is invertible because \((s,d)=1\). 
Now, we have the following commutative diagram in which case \[res\circ\beta_{\mathcal{G},X}=\beta_{\mathcal{J},X}ores=0res=0\] this means \(\beta_{\mathcal{G},X}=0\) because \(res\) is injective. This concludes the proof of Lemma 3.2. \(\Box\) With the previous discussion and the lemma proved, we have the framework necessary to show that all four modules have zero Bockstein maps for all three cases considered in this paper. ## 4. The Cyclic Case We begin our analysis with the cyclic case, namely when \(E/F\) is cyclic Galois of degree \(d\), and where we assume \(\mu_{d^{2}}\subseteq F\). In this case the kernel \(H^{2}(E/F,\mu_{d})\) has been understood since the early days of Class Field Theory. For suppose \(E=F(\sqrt[d]{a})\). Then one has the well-known exact sequence \[H^{1}(E,\mu_{d})\stackrel{{\mathrm{N}_{E/F}}}{{\longrightarrow}}H^ {1}(F,\mu_{d})\stackrel{{(a)\sim}}{{\longrightarrow}}H^{2}(F,\mu_ {d})\stackrel{{ i_{E/F}}}{{\longrightarrow}}H^{2}(E,\mu_{d})\] which describes cohomology classes in \(H^{2}(F,\mu_{d})\) that vanish in \(E\) as those corresponding to "symbol algebras" of the form \((a,b)_{F}\) for some \(b\in F\). This result also encodes the classes \((a,b)_{F}\) which vanish in \(H^{2}(F,\mu_{d})\) as those where \(b\in\mathrm{N}_{E/F}(E^{\times})\), that is \(b\) is a norm from \(E\). This section generalizes this classical information to all higher cohomology. The generalization of this result due to Voevodsky [HW], however the result there is the direct generalization of the four-term exact sequence in the presence of \(d\)th roots of unity, whereas the result obtained here extends this sequence one term to the right and left because the machinery of [P] gives a six-term sequence, but requiring \(d^{2}\)th roots of unity. Of course, the \(H^{2}\) result just mentioned is a consequence of Hilbert's Theorem 90 so this is not a surprise. Moreover, the generalization of Hilbert's Theorem 90 to higher K-theory is essential to Voevodsky's work, so this is also to be expected. ### The 4-Term Exact Sequence with Homotopies Let \(E/F\) be a cyclic extension of degree \(d\), with \(d\)th roots of unity \(\mu_{d}\subseteq F\) and \(\operatorname{Gal}(E/F)=G\cong\mathcal{G}/\mathcal{H}=\langle\tau\rangle\). Let \(F_{\mathrm{sep}}\) denote the separable closure of \(F\). Because \(E/F\) is cyclic, the Galois closure \(\widetilde{E}\) of \(E/F\) is \(E\), and \(\widetilde{F}=F\). This simplifies the general setup. It would be nice to use the restriction, corestriction sequence used in Arason's theorem, as was discussed in the introduction to Positselski's 6-term sequence, but the sequence \[0\longrightarrow\mathbb{Z}\stackrel{{\Delta}}{{\longrightarrow }}\operatorname{Ind}_{\mathcal{H}}^{G}(\mathbb{Z})\stackrel{{ Tr}}{{ \longrightarrow}}\mathbb{Z}\longrightarrow 0\] is not exact for every \(d>2\). This is why we turn to Positselski's machinery, where two induced modules in the middle resolve this dimension problem. We use the 4-term exact sequence \[0\longrightarrow\mathbb{Z}/d^{2}\mathbb{Z}\stackrel{{\Delta}}{{ \longrightarrow}}\operatorname{Ind}_{H}^{\mathcal{G}}(\mathbb{Z}/d^{2}\mathbb{Z })\stackrel{{(1-\tau)}}{{\longrightarrow}}\operatorname{Ind}_{H}^ {\mathcal{G}}(\mathbb{Z}/d^{2}\mathbb{Z})\stackrel{{\operatorname{ Tr}}}{{\longrightarrow}}\mathbb{Z}/d^{2}\mathbb{Z}\longrightarrow 0\] of \(\mathcal{G}\)-modules, where the maps \(\Delta\) and \(\operatorname{Tr}\) are defined in Definition 4.2 below. 
In the notation of this sequence, since \(\mu_{d^{2}}\subseteq F\), the \(\mathcal{G}\)-module \(\mu_{d^{2}}\) will be identified with \(\mathbb{Z}/d^{2}\mathbb{Z}\) and then the module \(\mathbb{Z}/d^{2}\mathbb{Z}\) acts as a vessel for the short exact sequence \[0\longrightarrow\mu_{d}\stackrel{{\varsigma}}{{ \longrightarrow}}\mu_{d^{2}}\stackrel{{\pi}}{{\longrightarrow}} \mu_{d^{2}}/\mu_{d}\longrightarrow 0\] making the \(\mathcal{G}\)-module homomorphisms chain maps. To do this, we identify the roots-of-unity short exact sequence with the additive short exact sequence \[0\longrightarrow d\,\mathbb{Z}/d^{2}\mathbb{Z}\stackrel{{ \varsigma}}{{\longrightarrow}}\mathbb{Z}/d^{2}\mathbb{Z}\stackrel{{ \pi}}{{\longrightarrow}}\mathbb{Z}/d\,\mathbb{Z}\longrightarrow 0.\] We also need a characterization of the induced module in order to facilitate computations in the four-term sequence given in Definition 4.2 below. This is the subject of the following lemma, which allows us to do induced module computation in the group ring. **Lemma 4.1**.: _We denote by \(\tilde{\tau}\) be a lifting of \(\tau\) to \(F_{sep}\), and let \(G=\langle\tau\rangle=\langle\tilde{\tau}\mathcal{H}\rangle=\mathcal{G}/ \mathcal{H}\). Define \(\phi:\operatorname{Ind}_{H}^{\mathcal{G}}(\mathbb{Z})\longrightarrow\mathbb{Z}[G]\) as follows: For any \(f:\mathcal{G}\longrightarrow\mathbb{Z}\) with the property \(f(hg)=h\cdot f(g)\) for every \(g\in\mathcal{G},h\in\mathcal{H}\),_ \[\phi(f)=\sum_{i=0}^{d-1}f(\tilde{\tau}^{i})\tau^{-i}\] _Then \(\phi\) is an isomorphism of \(\mathcal{G}\)-modules, that is, \(\operatorname{Ind}_{H}^{\mathcal{G}}(\mathbb{Z})\cong\mathbb{Z}[G]\)._ **Proof**: \(\phi\) is a \(\mathbb{Z}\)-linear map that restricts to a bijection between \(\mathbb{Z}\)-bases for the two \(\mathcal{G}\)-modules, so it is a \(\mathbb{Z}\)-module isomorphism. So we need only check that the action is preserved. Every \(g\in\mathcal{G}\) can be expressed as \(g=h\tilde{\tau}^{k}\), where \(h\in\mathcal{H}\). Let \(h_{i}=\tilde{\tau}^{i}h\tilde{\tau}^{-i}\in\mathcal{H}\) for each \(i\in\{0,\dots,d-1\}\) (these will be used to "hop over" the \(\tilde{\tau}\) terms). Then \[\phi(h\tilde{\tau}^{k}\cdot f) =\sum_{i=0}^{d-1}(h\tilde{\tau}^{k}\cdot f)(\tilde{\tau}^{i}) \tau^{-i}=\sum_{i=0}^{d-1}f(\tilde{\tau}^{i}h\tilde{\tau}^{k})\tau^{-i}=\sum _{i=0}^{d-1}f(h_{i}\tilde{\tau}^{i}\tilde{\tau}^{k})\tau^{-i}\] \[=\sum_{i=0}^{d-1}h_{i}\cdot f(\tilde{\tau}^{i}\tilde{\tau}^{k}) \tau^{-i}=\sum_{i=0}^{d-1}f(\tilde{\tau}^{i}\tilde{\tau}^{k})\tau^{-i}=\sum_{ j=0}^{d-1}f(\tilde{\tau}^{j})\tau^{-(j-k)}\] \[=\tau^{k}\sum_{j=0}^{d-1}f(\tilde{\tau}^{j})\tau^{-j}=\tilde{ \tau}^{k}h\cdot\phi(f)\] This concludes the proof of Lemma 4.1. We next give the four term sequence in the cyclic case. We will define the homotopies and verify the computational conditions for Positselski's 6-term sequence in the group ring \(\mathbb{Z}[G]\) in Theorem 4.4 below. **Definition 4.2**.: _Let \(G\) be as above with \(\mathbb{Z}\) a trivial \(G\)-module and with \(\mathbb{Z}[G]\) a \(G\)-module via multiplication on the left. 
We define \(G\)-module maps_ \[\Delta\,:\,\mathbb{Z}\longrightarrow\mathbb{Z}[G],\quad n\mapsto\bigoplus_{g \in G}ng,\mathrm{and}\] \[\mathrm{Tr}\,:\,\mathbb{Z}[G]\longrightarrow\mathbb{Z},\ \ \bigoplus_{g\in G}c_{g}g\mapsto\sum_{g\in G}c_{g}.\] _The Positselski modules \(M_{1}\), \(M_{2}\), \(M_{3}\), \(M_{4}\) and maps \(d_{1},d_{2},d_{3}\) for the cyclic case are defined as follows:_ \[0\longrightarrow\mathbb{Z}\stackrel{{\Delta}}{{ \longrightarrow}}\mathbb{Z}[G]\stackrel{{\cdot(1-\tau)}}{{ \longrightarrow}}\mathbb{Z}[G]\stackrel{{\tau_{T}}}{{ \longrightarrow}}\mathbb{Z}\longrightarrow 0.\] _The homotopies \(h_{1},h_{2},h_{3}\) are defined as follows:_ \[\mathbb{Z}\stackrel{{\tau_{T}}}{{\longleftarrow}}\mathbb{Z}[G ]\stackrel{{\cdot\sum_{i=\tau^{i}}}}{{\longleftarrow}}\mathbb{ Z}[G]\stackrel{{\Delta}}{{\longleftarrow}}\mathbb{Z}.\] **Remark 4.3**.: _The \(\mathcal{G}\)-module homomorphism \(d_{2}=(1-\tau)\,:\,\text{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mathbb{Z}) \longrightarrow\text{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mathbb{Z})\) is multiplication on the left by \((1-\tau)\) in the group ring \(\mathbb{Z}[G]\) as in the Hilbert 90 sequence, which is also multiplication on the right by \((1-\tau)\), since \(G=\langle\tau\rangle\) is abelian. We will use this last convention for the more general cases to follow. Another way of viewing this map is as the unique \(\mathcal{G}\)-module homomorphism that sends \(1\) to \((1-\tau)\). The same remarks apply to \(h_{2}=\cdot\sum-i\tau^{i}\), though the \(h_{2}\) map will differ in the later cases._ **Theorem 4.4**.: _The \(d_{i}\)s are exact, and the \(h_{i}\)s satisfy the prism condition._ **Proof**: Exactness follows from the usual Hilbert's Theorem 90 projective resolution argument for a cyclic extension. To show the prism condition, \[h_{2}d_{2}=d_{2}h_{2} =(1-\tau)\sum_{i=0}^{d-1}-i\tau^{i}\ =\ \sum_{i=0}^{d-1}-i\tau^{i}+\sum_{i=0}^{d-1}i\tau^{i+1}\] \[=-0\tau^{0}+\sum_{i=1}^{d-1}-i\tau^{i}+\sum_{i=1}^{d-1}(i-1)\tau^ {i}+(d-1)\tau^{d}\ =\ \sum_{i=1}^{d-1}-\tau^{i}+d\] while \(d_{1}h_{1}=h_{3}d_{3}=\sum_{i=1}^{d-1}\tau^{i}\). Therefore \(d_{1}h_{1}+h_{2}d_{2}=h_{2}d_{2}+d_{3}h_{3}=d\cdot\). Furthermore, the fact that \[\mathrm{Tr}\Delta=d\cdot\] follows from \(d=[\mathcal{G},\mathcal{H}]\), and thus the prism condition for \(h_{2}d_{1}=d_{3}h_{3}\) is verified. This concludes the proof of Theorem 4.4. \(\Box\) The only remaining requirement to check for the Positselski hypotheses is the Bockstein maps being zero. In the cyclic case, all four modules are either \(\mathbb{Z}\) or \(\text{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mathbb{Z})\), with each mod \(d^{2}\mathbb{Z}\) to represent \(\mu_{d^{2}}\) with a trivial action. Both of these modules have been shown to have a zero Bockstein map in Lemma 3.2. Therefore the Positselski Hypotheses are satisfied by the four module exact sequence with homotopies defined in this section. ### The Connecting Map for the Cyclic Case Now we compute the connecting map \(\eta\). Let \(\overline{M_{i}}:=M_{i}/d\,M_{i}\) for \(i\in\{1,2,3,4\}\) and let \(c\in Z^{n-1}(F,\overline{M_{4}})\). We will begin the computation with a choice of lifting \(\ell\) of \(d_{3}\). For any \(x\in\overline{M_{4}}=\mathbb{Z}/d\mathbb{Z}\), define \(\ell(x)\ :=\ x\cdot 1_{G}\in\overline{M_{3}}=\mathbb{Z}[G]/d\mathbb{Z}[G]\). Note that this is not the diagonal map, nor is it a \(\mathcal{G}\)-map. 
With \(\ell\) chosen, we are ready to compute the connecting map \(\eta\) as the composition \(d_{1}^{-1}h_{2}\delta\ell\) where \(\delta\) is the chain complex map from Galois cohomology for which the Bar Resolution is used. We used the customary choice of \(\sigma\) for a cocycle in the Positselski framework, but we will reserve this symbol for the enlarged group \(G\) in future sections. So let \(c\in Z^{n-1}(\mathcal{G},\overline{M}_{4})\) denote our cocycle. We will compute the connecting map \(\eta\) applied to \(c\). What follows is a lemma that will reduce the complexity of this computation. **Lemma 4.5**.: _The following are true for \(c\) and for any \(g_{1},\ldots,g_{n}\in\mathcal{G}\)._ 1. \(\delta(\mathcal{C}(c))(g_{1},\ldots,g_{n})=-d_{2}\left(\sum_{i=0}^{k-1}\tau^{i }\cdot\mathcal{C}(c(g_{2},\ldots,g_{n}))\right)\)_, where_ \(k\in\{0,\ldots,d-1\}\) _are such that_ \(g_{1}\mathcal{N}=\tau^{k}\)_._ 2. \(h_{2}d_{2}\equiv-d_{1}h_{1}\) _(mod_ \(dM_{2}\)_)._ **Proof**: (2) follows from the prism condition at \(M_{2}\). To prove (1), we will start by using a similar argument to that used in the connecting map \(\partial\) for Arason's theorem, using the fact that \(\mathcal{C}\) is an abelian group homomorphism to make the computation of \(\delta(\mathcal{C}(c))\) easier: \[\delta(\mathcal{C}(c))(g_{1},\ldots,g_{n}) =g_{1}\cdot\mathcal{C}(c(g_{2},\ldots,g_{n}))-\mathcal{C}(g_{1} \cdot c(g_{2},\ldots,g_{n}))\] \[=\tau^{k}\cdot\mathcal{C}(c(g_{2},\ldots,g_{n}))-\mathcal{C}(c(g _{2},\ldots,g_{n}))\] \[=(\tau^{k}-1)\cdot\mathcal{C}(c(g_{2},\ldots,g_{n}))\] \[=-(1-\tau)\left(\sum_{i=0}^{k-1}\tau^{i}\cdot\mathcal{C}(c(g_{2},\ldots,g_{n}))\right)\] \[=-d_{2}\left(\sum_{i=0}^{k-1}\tau^{i}\cdot\mathcal{C}(c(g_{2}, \ldots,g_{n}))\right).\] \(\Box\) This next Theorem describes the connecting map in the cyclic case. **Theorem 4.6**.: _Let \(\chi_{\mathcal{H}}\,:\,\mathcal{G}\longrightarrow\mathbb{Z}/d\mathbb{Z}\) denote the character function that factors through the isomorphism \(\mathcal{G}/\mathcal{H}\stackrel{{\cong}}{{\longrightarrow}} \mathbb{Z}/d\mathbb{Z}\), namely \(\chi_{\mathcal{H}}(\tau^{k})=k\). Let \(\smile\) denote the cup product. Then_ \[\eta(c)=-\chi_{\mathcal{H}}\smile c.\] **Proof**: Since \(\mathcal{C}(c(g_{2},\ldots,g_{n})=c(g_{2},\ldots,g_{n})\), the equality \[\tau^{i}\cdot c(g_{2},\ldots,g_{n})=c(g_{2},\ldots,g_{n})\tau^{i}\] follows. We next compute using parts (1) and (2) of Lemma 4.5 as indicated. \[-\eta(c)(g_{1},\ldots,g_{n}) =d_{1}^{-1}h_{2}\delta\mathcal{C}(c)(g_{1},\ldots,g_{n})\ \stackrel{{\eqref{eq:c}}}{{=}}\ -d_{1}^{-1}h_{2}d_{2}\left(\sum_{i=0}^{k-1}\tau^{i}\cdot\mathcal{C}(c(g_{2}, \ldots,g_{n}))\right)\] \[=-d_{1}^{-1}h_{2}d_{2}\left(\sum_{i=0}^{k-1}c(g_{2},\ldots,g_{n}) \tau^{i}\right)\ \stackrel{{\eqref{eq:c}}}{{=}}\ d_{1}^{-1}d_{1}h_{1} \left(\sum_{i=0}^{k-1}c(g_{2},\ldots,g_{n})\tau^{i}\right)\] \[=h_{1}\left(\sum_{i=0}^{k-1}c(g_{2},\ldots,g_{n})\tau^{i}\right) \ =\ Tr\left(\sum_{i=0}^{k-1}c(g_{2},\ldots,g_{n})\tau^{i}\mathcal{H}\right)\] \[=\sum_{i=0}^{k-1}c(g_{2},\ldots,g_{n})\ =\ kc(g_{2},\ldots,g_{n})\ =\ ( \chi_{\mathcal{H}}\smile c)(g_{1},\ldots,g_{n}).\] This concludes the proof of Theorem 4.6. In view of Theorems 4.4 and 4.6 the machinery in Theorem 2.2 gives the following result. **Theorem 4.7**.: _In the cyclic case we have the following 6-term exact sequence._ _where \(d_{3}\) is the norm, \(\eta(c)=-\chi\rightharpoonup c\) and \(d_{1}\) is scalar extension._ ## 5. 
The Dihedral Case We noted in the introduction that the case where \([E\ :\ F]=4\) and \(\operatorname{Gal}(\widetilde{E}/F)\) is the dihedral group of order 8 was handled by Positselski in [P]. In fact, Positselski handled every case where \([E\ :\ F]\) is a multiple of 4. In this section we turn to the dihedral cases where \(d=[E\ :\ F]\) is odd. ### The 4 Term Exact Sequence with Homotpies For this section, we have the following notation. Let \(G\) be a dihedral group, \(G=\langle\sigma,\tau\rangle\) with \(|\tau|=d\) for some odd integer \(d\), \(|\sigma|=2\) and the relation \(\sigma\tau=\tau^{-1}\sigma\). We use the notation described earlier. In this case the diagrams of fields and groups are as follows. In the previous section in the cyclic case with \(G=\langle\tau\rangle\) the two short exact sequences that make up our 4-term sequence of modules come from adjacent terms in this projective resolution. \[\cdots\longrightarrow\mathbb{Z}[G]\xrightarrow{\cdot(1-\tau)}\mathbb{Z}[G] \xrightarrow{\cdot T_{\tau}}\mathbb{Z}[G]\xrightarrow{\cdot(1-\tau)}\mathbb{Z }[G]\xrightarrow{\cdot T_{\tau}}\mathbb{Z}[G]T_{\tau}\longrightarrow 0\] In this resolution, \(T_{\tau}=1+\tau+\cdots+\tau^{d-1}\) is the \(\tau\)-trace, though it will act as a norm map on roots of unity in the field. This projective resolution is commonly used in a proof of Hilbert's Theorem 90 for cyclic Galois extensions. The alternating short exact sequences are \[0\longrightarrow\mathbb{Z}[G]T_{\tau}\stackrel{{\leq}}{{ \longrightarrow}}\mathbb{Z}[G]\stackrel{{\cdot(1-\tau)}}{{ \longrightarrow}}\mathbb{Z}[G]\longrightarrow 0\] and \[0\longrightarrow\mathbb{Z}[G](1-\tau)\stackrel{{\leq}}{{ \longrightarrow}}\mathbb{Z}[G]\stackrel{{\cdot T_{\tau}}}{{ \longrightarrow}}\mathbb{Z}[G]T_{\tau}\longrightarrow 0.\] For the cyclic case where \(d=2\) these two short exact sequences are the same because \(T_{\tau}=1-\tau\). Furthermore, both short exact sequences begin and end with \(\mathbb{Z}/2\mathbb{Z}\) as a trivial \(G\)-module. This avoids the need for Positselski's machinery altogether, as both short exact sequences are the Arason sequence. The four term exact sequence of modules for the dihedral case is also similar to that of the cyclic case. The two short exact sequences that define it are \[0\longrightarrow\mathbb{Z}[G]T_{\sigma}T_{\tau}\stackrel{{\leq} }{{\longrightarrow}}\mathbb{Z}[G]T_{\sigma}\stackrel{{\cdot(1- \tau)}}{{\longrightarrow}}\mathbb{Z}[G]T_{\sigma}(1-\tau)\longrightarrow 0\] and \[0\longrightarrow\mathbb{Z}[G](1-\tau)\mathrm{E}\stackrel{{\leq}}{{ \longrightarrow}}\mathbb{Z}[G]\mathrm{E}\stackrel{{\cdot T_{\tau}}}{{ \longrightarrow}}\mathbb{Z}[G]T_{\tau}\mathrm{E}\longrightarrow 0\] where \(T_{\sigma}=(1+\sigma)\) is the \(\sigma\)-trace, and \(\mathrm{E}=(1-\sigma\tau)\in\mathbb{Z}[G]\). Of course, we need \(\mathbb{Z}[G]T_{\sigma}(1-\tau)\cong\mathbb{Z}[G](1-\tau)\mathrm{E}\) for the two short exact sequences to build a \(4\)-term exact sequence, and in fact \(T_{\sigma}(1-\tau)=(1-\tau)\mathrm{E}\). Thus, the \(4\)-term exact sequence is \[0\longrightarrow\mathbb{Z}[G]T_{\sigma}T_{\tau}\stackrel{{\leq} }{{\longrightarrow}}\mathbb{Z}[G]T_{\sigma}\stackrel{{\cdot(1- \tau)}}{{\longrightarrow}}\mathbb{Z}[G]\mathrm{E}\stackrel{{\cdot T _{\tau}}}{{\longrightarrow}}\mathbb{Z}[G]\mathrm{E}T_{\tau}\longrightarrow 0.\] Our first lemma describes the induced module \(\mathrm{Ind}_{H}^{\mathcal{G}}(\mathbb{Z})\) needed for this case and relates it to \(\mathbb{Z}[G]T_{\sigma}\) for computation. 
**Lemma 5.1**.: _Let \(\bar{\sigma}\) and \(\bar{\tau}\) be liftings of \(\sigma\) and \(\tau\) from \(G\) to \(\mathcal{G}\). Then for the trivial \(\mathbb{Z}[\mathcal{G}]\)-module \(\mathbb{Z}\), the map \(\phi:\mathrm{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mathbb{Z})\longrightarrow\mathbb{ Z}[G]T_{\sigma}\) given by_ \[\phi(f)=\sum_{i=0}^{d-1}f(\bar{\tau}^{i})\tau^{-i}T_{\sigma}\] _is an isomorphism of \(\mathbb{Z}[\mathcal{G}]\)-modules._ **Proof**: The proof is similar to that for the cyclic case, but it is worth noting that this proof would not work for \(\mathbb{Z}[\{\tau\}]\) instead of \(\mathbb{Z}[G]T_{\sigma}\) because \(\mathcal{H}\) is not a normal subgroup of \(\mathcal{G}\). However, \(\mathcal{N}\) is normal in \(\mathcal{G}\) and we will use this fact. The bijectivity of \(\phi\) follows as in Lemma 4.1 and we proceed to check the compatibility of \(\mathcal{G}\)-actions. Every element of \(\mathcal{G}\) has a unique expression as \(n\bar{\sigma}^{m}\bar{\tau}^{k}\), with \(n\in\mathcal{N}\), \(m\in\mathbb{Z}/2\mathbb{Z},k\in\mathbb{Z}/d\mathbb{Z}\). Let \(\theta_{m}:\mathbb{Z}/d\mathbb{Z}\longrightarrow\mathbb{Z}/d\mathbb{Z}\) be the \(\sigma^{m}\)-conjugation automorphisms so that \(\sigma^{m}\tau^{i}=\tau^{\theta_{m}(i)}\sigma^{m}\) and \(\tau^{i}\sigma^{m}=\sigma^{m}\tau^{\theta_{-m}(i)}\). Finally, let \(n_{i,m}=\tilde{\tau}^{i}n\bar{\sigma}^{m}\tilde{\tau}^{\sigma_{-m}(-i)}\bar{ \sigma}^{-m}\in\mathcal{N}\) so that \(\tilde{\tau}^{i}n\tilde{\sigma}^{m}=n_{i,m}\tilde{\sigma}^{m}\tilde{\tau}^{ \theta_{-m}(i)}\). Then \[\phi(n\tilde{\sigma}^{m}\tilde{\tau}^{k}\cdot f) =\sum_{i=0}^{d-1}(n\tilde{\sigma}^{m}\tilde{\tau}^{k}\cdot f)(\tau^ {i})\tau^{-i}T_{\sigma}\ =\ \sum_{i=0}^{d-1}f(\tilde{\tau}^{i}n\tilde{\sigma}^{m}\tau^{k})\tau^{-i}T_{\sigma}\] \[=\sum_{i=0}^{d-1}f(n_{i,m}\tilde{\sigma}^{m}\tilde{\tau}^{\theta_ {-m}(i)}\tau^{k})\tau^{-i}T_{\sigma}\ =\ \sum_{i=0}^{d-1}(n_{i,m}\tilde{\sigma}^{m})\cdot f( \tilde{\tau}^{\theta_{-m}(i)}\tau^{k})\tau^{-i}T_{\sigma}\] \[=\sum_{i=0}^{d-1}f(\tilde{\tau}^{\theta_{-m}(i)+k})\tau^{-i}T_{ \sigma}\ =\ \sum_{j=0}^{d-1}f(\tilde{\tau}^{j})\tau^{-\theta_{m}(j-k)}T_{ \sigma}\ =\ \sum_{j=0}^{d-1}f(\tilde{\tau}^{j})\tau^{-\theta_{m}(j-k)}\sigma^{m}T_{\sigma}\] \[=\sum_{j=0}^{d-1}f(\tilde{\tau}^{j})\sigma^{m}\tau^{-(j-k)}T_{ \sigma}\ =\ \sum_{j=0}^{d-1}f(\tilde{\tau}^{j})\sigma^{m}\tau^{k}\tau^{-j}T_{ \sigma}\ =\ (\sigma^{m}\tau^{k})\cdot\phi(f)\] \[=(n\sigma^{m}\tau^{k})\cdot\phi(f).\] With the compatability of \(\mathbb{Z}[G]\)-action checked, this completes the proof of Lemma 5.1. **Remark 5.2**.: _The fact that \(|\sigma|=2\) was not used in this proof. And indeed, this same proof can be used in the analogous claim for the more general semi-direct case later. We will therefore refer to this lemma for the semi-direct case as well._ With \(\mathbb{Z}[G]T_{\sigma}\cong\operatorname{Ind}_{H}^{\mathcal{G}}(\mathbb{Z})\) established, we move on to defining the homomorphisms. 
**Definition 5.3**.: _Let \(d_{1},d_{2},d_{3}\) be the maps_ \[0\longrightarrow\mathbb{Z}[G]T_{\sigma}T_{\tau}\stackrel{{ \mathbb{C}}}{{\longrightarrow}}\mathbb{Z}[G]T_{\sigma}\stackrel{{ \cdot(1-\tau)}}{{\longrightarrow}}\mathbb{Z}[G]\mathrm{B}\stackrel{{ \cdot T_{\tau}}}{{\longrightarrow}}\mathbb{Z}[G]\mathrm{B}T_{\tau}\longrightarrow 0 \tag{1}\] _Let \(h_{1},h_{2},h_{3}\) be the homotopy maps_ \[\mathbb{Z}[G]T_{\sigma}T_{\tau}\stackrel{{\cdot T_{\tau}}}{{ \longleftarrow}}\mathbb{Z}[G]T_{\sigma}\stackrel{{ h_{2}}}{{\longleftarrow}}\mathbb{Z}[G]\mathrm{B}\stackrel{{ \mathbb{2}}}{{\longleftarrow}}\mathbb{Z}[G]\mathrm{B}T_{\tau} \tag{2}\] _where_ \[h_{2}(\mathrm{B})=\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i}T_{ \sigma}.\] We observe that \(\mathbb{Z}[G]T_{\sigma}T_{\tau}=\mathbb{Z}[G]\left(\sum_{g\in G}g\right)\cong \mathbb{Z}\) is a trivial \(\mathbb{Z}[G]\)-module, while \(\mathbb{Z}[G]\mathrm{B}T_{\tau}\) has a trivial \(\tau\)-action but \(\sigma\) and \(\sigma\tau\) act as multiplication by \((-1)\). The main result needed for applying Positselski's machinery is given next. **Theorem 5.4**.: _The sequence (1) is an exact sequence, and the homotopy maps in sequence (2) satisfy the prism condition. Furthermore, \(d_{1},d_{2},d_{3},h_{1},h_{2},h_{3}\) are \(\mathbb{Z}[G]\)-module homomorphisms._ **Proof**: The exactness of the \(d_{i}\)'s was discussed at the beginning of this section, so we move on to checking the prism condition. For the first and last modules, the composition is multiplication by \(d\) because \(T_{\tau}T_{\tau}=dT_{\tau}\). For the second module, \(d_{1}h_{1}(T_{\sigma})=T_{\tau}T_{\sigma}\) and \[h_{2}d_{2}(T_{\sigma}) = h_{2}(T_{\sigma}(1-\tau))\ =\ h_{2}((1-\tau)\mathrm{E})\ =\ (1-\tau)\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i}T_{\sigma}\] \[=(1-\tau)\sum_{i=0}^{d-1}(-i)\tau^{i}T_{\sigma}\ =\ (d-T_{\tau})T_{\sigma}\] Therefore \((d_{1}h_{1}+h_{2}d_{2})(T_{\sigma})=T_{\tau}T_{\sigma}+(d-T_{\tau})T_{\sigma}= dT_{\sigma}\). To verify the prism condition for the third module, \(h_{3}d_{3}(\mathrm{E})=h_{3}(\mathrm{B}T_{\tau})=\mathrm{B}T_{\tau}=T_{\tau} \mathrm{E}\); and using the identity \(T_{\sigma}(1-\tau)=(1-\tau)\mathrm{E}\), \[d_{2}h_{2}(\mathrm{E}) = d_{2}\left(\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i} T_{\sigma}\right)\ =\ \sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i}T_{\sigma}(1-\tau)\] \[= \sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i}(1-\tau) \mathrm{E}\ =\ \sum_{i=0}^{d-1}\left(-i\right)\tau^{i}(1-\tau)\mathrm{E}\] \[= (d-T_{\tau})\mathrm{E}.\] Therefore \((h_{3}d_{3}+d_{2}h_{2})(\mathrm{E})=T_{\tau}\mathrm{E}+(d-T_{\tau})\mathrm{E }=d\mathrm{E}\). Every one of these maps except for \(h_{2}\) is defined by left multiplication in the group ring \(\mathbb{Z}[G]\), so it remains to check that \(h_{2}\) preserves action by \(\mathcal{G}\). \(h_{2}\) is also a map from a cyclic \(\mathbb{Z}[G]\)-module to a cyclic \(\mathbb{Z}[G]\)-module defined by sending one generator to another. So it need only be checked that the annihilator of \(\mathrm{E}\), which is \(\mathbb{Z}[G](1+\sigma\tau)\), also annihilates the \(h_{2}\)-image of \(\mathrm{E}\). We will check this by showing that \(h_{2}\) preserves the action of \(\sigma\tau\). 
\[\sigma\tau\cdot h_{2}(\mathrm{E}) = \sigma\tau\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i}T_{ \sigma}=\sigma\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i+1}T_{\sigma}\] \[= \sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{d-(i+1)}\sigma T _{\sigma}=\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{(d-1)-i}T_{\sigma}\] \[= \sum_{i=0}^{d-1}\left(-\frac{d-1}{2}+((d-1)-i)\right)\tau^{(d-1) -i}T_{\sigma}=\sum_{j=0}^{d-1}\left(-\frac{d-1}{2}+j\right)\tau^{i}T_{\sigma}\] \[= -\sum_{j=0}^{d-1}\left(\frac{d-1}{2}-j\right)\tau^{i}T_{\sigma}= h_{2}\left(-\mathrm{E}\right)=h_{2}\left(\sigma\tau\cdot\mathrm{E}\right)\] This concludes the proof of Theorem 5.4. \(\Box\) The only remaining requirements to check for the Positselski hypotheses are that the Bockstein homomorphisms are zero. The first two modules are isomorphic to \(\mathbb{Z}\) and the induced module \(\mathrm{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mathbb{Z})\) respectively. Both of these modules were shown to have zero Bockstein maps in the general setup section. For \(M_{3}\) and \(M_{4}\), we will show that as \(\mathcal{J}\)-modules, \(M_{3}\cong\mathbb{Z}[\boldsymbol{J}]\cong\mathrm{Ind}_{\mathcal{N}}^{\mathcal{ J}}(\mathbb{Z})\) and that \(M_{4}\cong\mathbb{Z}\) as a trivial \(\mathcal{J}\)-module. Then Lemma 3.2 will imply that the Bockstein map vanishes for \(M_{3}\) and \(M_{4}\). The isomorphisms to be defined have the property that \(\mathrm{E}\mapsto 1\in\mathbb{Z}[\mathcal{J}]\) and \(\mathrm{B}T_{\tau}\mapsto 1\in\mathbb{Z}\) for \(M_{3}\) and \(M_{4}\) respectively. **Lemma 5.5**.: _As \(\mathcal{J}\)-modules,_ 1. \(M_{3}=\mathbb{Z}[G]\mathrm{B}\cong Ind_{\mathcal{N}}^{\mathcal{J}}(\mathbb{Z})\)__ 2. \(M_{4}=\mathbb{Z}[G]\mathrm{B}T_{\tau}\cong Ind_{\mathcal{J}}^{\mathcal{J}}( \mathbb{Z})\)__ **Proof**: There are two steps. 1. We begin with the observation that \[M_{3}=\mathbb{Z}[G]\cdot(1-\sigma\tau)=\mathbb{Z}[\langle\tau\rangle]\cdot(1- \sigma\tau).\] The second equality above follows from \[\tau^{i}\sigma(1-\sigma\tau)=\tau^{i+1}\cdot\sigma\tau(1-\sigma\tau)=-\tau^{i+ 1}(1-\sigma\tau)\] and hence \[\left(\sum_{i=0}^{d-1}c_{i,0}\tau^{i}+\sum_{i=0}^{d-1}c_{i,1}\tau^{i}\sigma \right)(1-\sigma\tau)=\left(\sum_{i=0}^{d-1}(c_{i,0}-c_{i,1}\tau)\tau^{i} \right)(1-\sigma\tau)\in\mathbb{Z}[\langle\tau\rangle](1-\sigma\tau).\] Furthermore, the set \(\{\tau^{i}(1-\sigma\tau\mid 0\leq i\leq d-1\}\) is a \(\mathbb{Z}\)-basis for \(M_{3}\), with \(\tau^{j}\cdot\tau^{i}(1-\sigma\tau)=\tau^{j+i}(1-\sigma\tau)\). 2. Similarly \[M_{4}=\mathbb{Z}[G](1-\sigma\tau)T_{\tau}=\mathbb{Z}[\langle\tau \rangle](1-\sigma\tau)T_{\tau}=\mathbb{Z}[\langle\tau\rangle]T_{\tau}(1- \sigma\tau)=\mathbb{Z}T_{\tau}(1-\sigma\tau)=\mathbb{Z}(1-\sigma\tau)T_{\tau}\] and \(\mathbb{Z}(1-\sigma\tau)T_{\tau}\cong\mathbb{Z}\) via the \(\mathcal{J}\)-module isomorphism \((1-\sigma\tau)T_{\tau}\mapsto 1\). This concludes the proof of Lemma 5.5. Now, with \(M_{3}\) isomorphic to \(\mathbb{Z}[\boldsymbol{J}]\) and \(M_{4}\) isomorphic to \(\mathbb{Z}\) as \(\mathcal{J}\)-modules, the fact that \(\widetilde{E}/\widetilde{F}\) is a cyclic extension allows the application of Lemma 3.2 to reduce the problem to the cyclic case. This makes the Bockstein maps zero for \(M_{3}\) and \(M_{4}\) as well as \(M_{1}\) and \(M_{2}\). Therefore the Positselski Hypotheses are satisfied by the four module exact sequence with homotopies defined in this section. ### The Connecting Map for the Dihedral Case Now we compute the connecting map. 
Let \(\overline{M}_{i}\,:=\,M_{i}/d\,M_{i}\) for \(i\in\{1,2,3,4\}\). Note that Lemmas 5.1 and 5.5 extend to the same modules mod \(d^{2}\) and mod \(d\). In this section we will use the exact sequence of modules with homotopies defined in the previous section to describe the connecting map \(\eta\,:\,H^{n-1}(\mathcal{G},\overline{M}_{4})\longrightarrow\,H^{n}( \mathcal{G},\overline{M}_{1})\) given by Positselski's machinery. **Definition 5.6**.: _: With this notation we define the following._ 1. \(\mathcal{L}\,:\,\overline{M}_{4}\longrightarrow\,\overline{M}_{3}\)_, the_ \(d_{3}\)_-lifting defined as_ \(\mathcal{L}(z\mathrm{B}T_{\tau})=z\mathrm{B}\) _for every_ \(z\in\mathbb{Z}/d\mathbb{Z}\)_._ 2. \(\delta\,:\,C^{n-1}(\mathcal{G},\overline{M}_{3})\longrightarrow C^{n}( \mathcal{G},\overline{M}_{3})\) _the cochain map from the bar resolution._ 3. \(\widetilde{\eta}_{\ell}=\widetilde{\eta}\,:\,Z^{n-1}(\mathcal{G},\overline{M} _{4})\longrightarrow Z^{n}(\mathcal{G},\overline{M}_{1})\)_,_ \(\widetilde{\eta}(c)\,:=d_{1}^{-1}h_{2}\partial\mathcal{L}(c)\)_._ 4. \(\eta\,:\,H^{n-1}(\mathcal{G},\overline{M}_{4})\longrightarrow H^{n}( \mathcal{G},\overline{M}_{1})\)_,_ \(\eta(\{c\}):=[\widetilde{\eta}(c)]\)_._ We observe that our choice of lifting \(\ell\) is a \(\mathbb{Z}\)-module homomorphism, though it is not a \(\mathbb{Z}[G]\)-module homomorphism. Note, for \(x\in\mathrm{im}(d_{1})\), we let \(d_{1}^{-1}(x)\) denote the unique preimage element. We collect some basic properties of these maps next. **Lemma 5.7**.: _Let \(c\in Z^{n-1}(\mathcal{G},\overline{M}_{4})\) be a cocycle, \(g_{1},\ldots,g_{n}\in\mathcal{G}\), and let \(c^{\prime}\in\mathbb{Z}/d\mathbb{Z}\) such that \(c(g_{2},\ldots,g_{n})=c^{\prime}\cdot\mathrm{B}T_{\tau}\). Express the coset of \(g_{1}\) in \(\mathcal{G}/\mathcal{N}\) as \((\sigma\tau)\tau^{i}\). Then_ 1. \(\delta(\mathscr{L}(c))(g_{1},\ldots,g_{n})=g_{1}\cdot\mathscr{L}(c(g_{2},\ldots,g _{n}))-\mathscr{L}(g_{1}\cdot c(g_{2},\ldots,g_{n}))\) \(=(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\cdot c^{\prime}(1-\tau) \mathrm{B}\). 2. _For any_ \(x\in C^{n}(G,\overline{M}_{2})\) _such that_ \(\delta(\mathscr{L}(c))=d_{2}(x)\)_,_ \(\widetilde{\eta}(c)=h_{1}(x)\)_._ **Proof**: For the first part, by definition, \[\delta(\mathscr{L}(c))(g_{1},\ldots,g_{n})=g_{1}\cdot\mathscr{L}(c(g_{2}, \ldots,g_{n}))-\mathscr{L}(c(g_{1}g_{2},\ldots,g_{n}))+\cdots\pm\mathscr{L}(c( g_{1},\ldots,g_{n-1}))\] We will next subtract an expanded form of \(\mathscr{L}(\delta(c))(g_{1},\ldots,g_{n})\) from the right side of this equation. This term is \(0\) because \(\mathscr{L}\) is a \(\mathbb{Z}\)-homomorphism and \(c\) is a cocycle. Subtracting the expanded form will leave us with only two remaining terms. The expansion is as follows. 
\[\mathscr{L}(\delta(c))(g_{1},\ldots,g_{n})\] \[=\mathscr{L}(\delta(c)(g_{1},\ldots,g_{n}))\] \[=\mathscr{L}\left(g_{1}\cdot c(g_{2},\ldots,g_{n})-c(g_{1}g_{2}, \ldots,g_{n})+\cdots\pm c(g_{1},\ldots,g_{n-1})\right)\] \[=\mathscr{L}(g_{1}\cdot c(g_{2},\ldots,g_{n}))-\mathscr{L}(c(g_{ 1}g_{2},\ldots,g_{n}))+\cdots\pm\mathscr{L}(c(g_{1},\ldots,g_{n-1})))\] We can now subtract and simplify \[\delta(\mathscr{L}(c))(g_{1},\ldots,g_{n})\] \[-0\] \[=\delta(\mathscr{L}(c))(g_{1},\ldots,g_{n})\] \[-\mathscr{L}(\delta(c))(g_{1},\ldots,g_{n})\] \[=g_{1}\cdot\mathscr{L}(c(g_{2},\ldots,g_{n}))-\mathscr{L}(c(g_{1 }g_{2},\ldots,g_{n}))+\cdots\pm\mathscr{L}(c(g_{1},\ldots,g_{n-1}))\] \[-\mathscr{L}(g_{1}\cdot c(g_{2},\ldots,g_{n}))+\mathscr{L}(c(g_{ 1}g_{2},\ldots,g_{n}))-\cdots\mp\mathscr{L}(c(g_{1},\ldots,g_{n-1}))\] \[=g_{1}\cdot\mathscr{L}(c(g_{2},\ldots,g_{n}))\] \[-\mathscr{L}(g_{1}\cdot c(g_{2},\ldots,g_{n}))\] This shows the first equality in part (1) of the lemma. For the next equality, we replace \(g_{1}\) with \((\sigma\tau)^{j}\tau^{i}\), \(g_{1}\)'s coset representative in \(G/\mathcal{N}\), and use the fact that \(\mathscr{L}\) preserves the action by \(\sigma\tau\), which is also multiplication by \((-1)\) for both \(\mathrm{B}\in\overline{M}_{3}\) and \(\mathrm{B}T_{\tau}\in\overline{M}_{4}\). \[\delta(\mathscr{L}(c))(g_{1},\dots,g_{n}) =g_{1}\cdot\mathscr{L}(c(g_{2},\dots,g_{n}))-\mathscr{L}(g_{1}\cdot c (g_{2},\dots,g_{n}))\] \[=(\sigma\tau)^{j}\tau^{i}\cdot\mathscr{L}(c(g_{2},\dots,g_{n}))- \mathscr{L}((\sigma\tau)^{j}\tau^{i}\cdot c(g_{2},\dots,g_{n}))\] \[=(\sigma\tau)^{j}\tau^{i}\cdot\mathscr{L}(c(g_{2},\dots,g_{n}))- \mathscr{L}((\sigma\tau)^{j}c(g_{2},\dots,g_{n}))\] \[=(\sigma\tau)^{j}\tau^{i}\cdot\mathscr{L}(c(g_{2},\dots,g_{n}))- \mathscr{L}((-1)^{j}\mathscr{L}(c(g_{2},\dots,g_{n}))\] \[=(\sigma\tau)^{j}\tau^{i}\cdot\mathscr{L}(c(g_{2},\dots,g_{n}))- (-1)^{j}\mathscr{L}(c(g_{2},\dots,g_{n}))\] \[=(\sigma\tau)^{j}\tau^{i}\cdot\mathscr{L}(c(g_{2},\dots,g_{n}))- (-\sigma\tau)^{j}\mathscr{L}(c(g_{2},\dots,g_{n}))\] \[=(\sigma\tau)^{j}\left(\tau^{i}\cdot\mathscr{L}(c(g_{2},\dots,g_{ n}))-\mathscr{L}(c(g_{2},\dots,g_{n}))\right)\] \[=(\sigma\tau)^{j}(\tau^{i}-1)\cdot\mathscr{L}(c(g_{2},\dots,g_{n }))\] \[=(\sigma\tau)^{j}(\tau^{i}-1)\cdot c^{\prime}\mathrm{B}\] \[=(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)(1-\tau) \cdot c^{\prime}\mathrm{B}\] \[=(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\cdot c^{ \prime}(1-\tau)\mathrm{B}\] This proves the first statement of the lemma. The second part of the lemma follows from the prism condition. Modulo \(d\), \[d_{1}h_{1}+h_{2}d_{2}=\cdot d=0\] This means \[h_{2}d_{2}=-d_{1}h_{1}\] And hence \[\eta(c) =-(d_{1}^{-1})(h_{2}(\delta(\mathscr{L}(c)))\] \[=-(d_{1}^{-1})(h_{2}(d_{2}(x)))\] \[=-(d_{1}^{-1})(-d_{1}(h_{1}(x)))\] \[=h_{1}(x)\] This concludes the proof of Lemma 5.7. As an application we obtain a description of the connecting map \(\eta\) in this case. Although it is not a cup product, its description is almost one. **Corollary 5.8**.: _Let \(c\in Z^{n-1}(\mathcal{G},\overline{M}_{4})\), \(g_{1},g_{2},\dots,g_{n}\in\mathcal{G}\), \(c^{\prime}\in\mathbb{Z}/d\mathbb{Z}\) such that \(c(g_{2},\dots,g_{n})=c^{\prime}\mathrm{B}T_{\tau}\). Let \(\sigma^{i}\tau^{j}\), an element of \(\mathcal{G}/\mathcal{N}\), be the coset of \(g_{1}\). 
Then_ \[\widetilde{\eta}(c)(g_{1},\dots,g_{n})=-ic^{\prime}\in\overline{M}_{1}.\] Proof: From part (1) of Lemma 5.7, we know that \[\delta(\mathscr{L}(c))(g_{1},\dots,g_{n})=(\sigma\tau)^{j}\left(-\sum_{k=0}^{ i-1}\tau^{k}\right)\cdot c^{\prime}(1-\tau)\mathrm{B}.\] We will first find an \(x\in\overline{M}_{2}\) with this \(d_{2}\)-image, and then use part (2) of Lemma 5.7 to compute \(\widetilde{\eta}(c)\) by finding \(-h_{1}(x)\). This process begins by showing that \((\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\cdot c^{\prime}(1+\sigma)\) is a suitable choice for \(x\). \[\delta(\mathscr{L}(c))(g_{1},\ldots,g_{n}) =(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\cdot c^{ \prime}(1-\tau)\mathrm{E}\] \[=(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\cdot c^{ \prime}(1+\sigma)(1-\tau)\] \[=(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\cdot c^{ \prime}(1+\sigma)(1-\tau)\] \[=d_{2}\left((\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k} \right)\cdot c^{\prime}(1+\sigma)\right)\] Applying part (\(b\)) allows the computation of \(\eta(c)(g_{1},\ldots,g_{n})\) as follows. \[\widetilde{\eta}(c)(g_{1},\ldots,g_{n}) =h_{1}\left((\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k} \right)\cdot c^{\prime}(1+\sigma)\right)\] \[=(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)c^{\prime} \cdot h_{1}\left(1+\sigma\right)\] \[=(\sigma\tau)^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)c^{\prime} \cdot 1\] \[=-ic^{\prime}\cdot 1\] \[=-ic^{\prime}\] This concludes the proof of Corollary 5.8. \(\Box\) Let \(\chi:\mathcal{G}\longrightarrow\mathbb{Z}/d\mathbb{Z}\) be defined by \(\chi((\sigma\tau)^{j}\tau^{i}\mathcal{N})=i\). It should be noted that in this case \(\chi\) is no longer a character because it is not a homomorphism. However, the Corollary shows that the map \(\eta\) can be understood through a cup-product like structure which we denote by \(\smile^{\prime}\) whereby \((\chi\smile^{\prime}c)(g_{1},\ldots,g_{n})=\chi(g_{1})\cdot c(g_{2},\ldots,g_{ n})\). In view of Theorem 5.4 and Corollary 5.8 the machinery in Theorem 2.2 gives the following result, due to Positselski (Prop. 17 in [P]). **Theorem 5.9**.: _In the dihedral case we have the following 6-term exact sequence._ _where \(\eta(c)=\chi\smile^{\prime}c\) and \(d_{1}\) is scalar extension._ **Corollary 5.10**.: \(\eta\) _induces an isomorphism_ \[\frac{H^{n}(\mathcal{G},\overline{M}_{4})}{d_{3}H^{n}(\mathcal{G},\overline{M}_{ 3})}\ \stackrel{{\cong}}{{\longrightarrow}}H^{n}(E/F).\] In Theorem 7.9, it will be shown that \(H^{n}(\widetilde{F},\mu_{d})\) maps onto \(H^{n}(\mathcal{G},\overline{M}_{4})\) in such a way that the image of the corestriction from \(H^{n}(\widetilde{E},\mu_{d})\), \(\operatorname{cor}_{\widetilde{E}/\widetilde{F}}\), maps onto the image of \(d_{3}\) from \(\overline{M}_{3}\). This will be used to characterize the cohomological kernel for the dihedral setup as follows: \[\frac{H^{n}(\widetilde{F},\mu_{d})}{\operatorname{cor}_{\widetilde{E}/ \widetilde{F}}H^{n}(\widetilde{E},\mu_{d})}\cong H^{n}(E/F).\] ## 6. The Semi-Direct Case This section expands the ideas of the previous section. Technically, the previous two sections could be interpreted as applications of the results contained in this section, but for this paper we decided it would be prudent to present them separately illustrate the development of these tools. 
### The 4 Term Exact Sequence with Homotopies For this section we adopt the following notation: \(G\) is a semi-direct product of \(\langle\tau\rangle\) by \(\langle\sigma\rangle\). We assume the order of \(\tau\) is \(d\), the order of \(\sigma\) is \(s\) and we will assume that \(s\) is even and divides \(d-1\), making \(d\) odd. We denote by \(\theta\,:\,\{0,1,\ldots,d-1\}\to\{0,1,\ldots,d-1\}\) the permutation defined by \(\sigma\)-conjugation on \(\langle\tau\rangle\) that makes \(\sigma\tau^{i}\sigma^{-1}=\tau^{\theta(i)}\) for every \(i\). We define \(\theta_{j}\) in a similar fashion to be defined by conjugation by \(\sigma^{j}\), so that \(\sigma^{j}\tau\sigma^{-j}=\tau^{\theta_{j}}\) for every \(j\). In fact \(\theta_{j}=\theta^{j}(1)\), where the latter is the \(j\)'th iterate of \(\theta\), but the notation \(\theta_{j}\) is less cumbersome. We assume that \(\theta\) has order \(s\), that is, conjugation by \(\sigma\) on \(\langle\tau\rangle\) does not have a smaller order than \(s\). As \(\tau\) has odd order, this means \(\sigma^{\frac{s}{2}}\tau^{i}\sigma^{-\frac{s}{2}}=\tau^{-i}\) for all \(i\). From this, \(\theta_{j+\frac{s}{2}}\equiv-\theta_{j}\ (\text{mod }d)\) and since \(0<\theta_{j}<d\) we must have \(\theta_{j}+\theta_{j+\frac{s}{2}}=d\). Here are the diagrams of the fields and Galois groups. In order to generalize from the dihedral case a special element \(\operatorname{B}_{d,s}=\operatorname{B}\in\mathbb{Z}[G]\) is the essential tool. It is described next. **Definition 6.1**.: _For \(\tau\) and \(\sigma\) as above,_ \[{\rm B}_{d,s}={\rm B}=(1-\sigma^{\frac{s}{2}})\tau^{\frac{d+1}{2}}\sum_{j=0}^{ \frac{s}{2}-1}\left(\sum_{i=0}^{\theta_{j}-1}\tau^{i}\right)\sigma^{j}.\] _We set \(T_{\sigma,s}=T_{\sigma}=1+\sigma+\sigma^{2}+\cdots+\sigma^{s-1}\) and define_ \[C_{d,s,i}=C_{i}=\tau^{i}T_{\sigma}(1-\tau).\] Note that this \({\rm B}\) is not quite the same it was in the previous section, even when restricting to the dihedral case (making \(s=2\)). This \({\rm B}\) is a generalization of what was used in the original calculations for the dihedral case. The dihedral \({\rm B}\) was modified along with the maps to make the computations and proofs smoother. We begin with some basic properties of \({\rm B}\). **Lemma 6.2**.: _Given the above assumptions and notation we have, (i) \(\sigma^{\frac{s}{2}}\cdot{\rm B}=-{\rm B}\). (ii) \((1-\tau){\rm B}=C_{\frac{d+1}{2}}\)._ **Proof**: Part (i) is clear by the definition of \({\rm B}\) since \(\sigma^{\frac{s}{2}}(1-\sigma^{\frac{s}{2}})=-(1-\sigma^{\frac{s}{2}})\). For (ii) We have \(\tau\sigma^{\frac{s}{2}}=\sigma^{\frac{s}{2}}\tau^{-1}\), and hence \[(1-\tau)(1-\sigma^{\frac{s}{2}})=1-\tau+\sigma^{\frac{s}{2}}\tau^{-1}-\sigma^{ \frac{s}{2}}=(1+\sigma^{\frac{s}{2}}\tau^{-1})(1-\tau).\] This allows us to make the substitution \[(1-\tau)(1-\sigma^{\frac{s}{2}})\tau^{\frac{d+1}{2}}=\tau^{\frac{d+1}{2}}(1+ \sigma^{\frac{s}{2}})(1-\tau)\] Now, if we multiply the inner sum in the defintion of \({\rm B}\) by \((1-\tau)\), we get \[(1-\tau)\left(\sum_{i=0}^{\theta_{j}-1}\tau^{i}\right)=1-\tau^{\sigma_{j}}\] These facts allow us to establish \((ii)\). 
\[(1-\tau){\rm B} = (1-\tau)(1-\sigma^{\frac{s}{2}})\tau^{\frac{d+1}{2}}\sum_{j=0}^{ \frac{s}{2}-1}\left(\sum_{i=0}^{\theta_{j}-1}\tau^{i}\right)\sigma^{j}\] \[= \tau^{\frac{d+1}{2}}(1+\sigma^{\frac{s}{2}})(1-\tau)\sum_{j=0}^{ \frac{s}{2}-1}\left(\sum_{i=0}^{\theta_{j}-1}\tau^{i}\right)\sigma^{j}\] \[\stackrel{{(1)}}{{=}} \tau^{\frac{d+1}{2}}(1+\sigma^{\frac{s}{2}})\sum_{j=0}^{\frac{s}{2 }-1}\left(1-\tau^{\theta_{j}}\right)\sigma^{j}\] \[\stackrel{{(2)}}{{=}} \tau^{\frac{d+1}{2}}(1+\sigma^{\frac{s}{2}})\sum_{j=0}^{\frac{s}{2 }-1}\sigma^{j}\left(1-\tau\right)\] \[= \tau^{\frac{d+1}{2}}T_{\sigma}\left(1-\tau\right)\] \[= C_{\frac{d+1}{2}}\] with the equality labeled (1) using the geometric series identity \((1-\tau)\sum_{i=0}^{\theta_{j}-1}\tau^{i}=1-\tau^{\theta_{j}}\) and the equality labeled (2) following from \((1+\sigma^{\frac{i}{2}})\sum_{j=0}^{\frac{i}{2}-1}\sigma^{j}\) being the sum of every power of \(\sigma\), i.e. \(T_{\sigma}\). This shows directly that \(C_{\frac{d+1}{2}}=(1-\tau)\mathrm{E}\) giving (ii). We next define the modules we need. **Definition 6.3**.: _Define all four modules to be submodules of \(\mathbb{Z}[G]\), defined as follows._ \[M_{1} = \mathbb{Z}[G]T_{\sigma}T_{\tau}\cong\mathbb{Z}\text{ as a trivial $\mathcal{G}$-module}\] \[M_{2} = \mathbb{Z}[G]T_{\sigma}=\sum_{i=0}^{d-1}\mathbb{Z}\tau^{i}T_{\sigma}\] \[M_{3} = \mathbb{Z}[G]\mathrm{E}=\sum_{j=0}^{\frac{i}{2}-1}\mathbb{Z}[( \tau)]\sigma^{j}\mathrm{E}\] \[M_{4} = \mathbb{Z}[G]\mathrm{E}T_{\tau}=\sum_{j=0}^{\frac{i}{2}-1}\mathbb{ Z}\cdot\sigma^{j}\mathrm{E}T_{\tau}\] This next lemma gives key properties of the modules. **Lemma 6.4**.: \[\mathbb{Z}[G]\mathrm{E} = \mathbb{Z}[G](1-\tau)\mathrm{E}\oplus\mathbb{Z}[\langle\sigma \rangle]\mathrm{E}\] \[= M_{3}^{\prime}\oplus M_{\mathrm{E}}\] \[\text{where }M_{3}^{\prime}=\sum_{i=0}^{d-1}\mathbb{Z}\cdot\tau^{i}T_{ \sigma}(1-\tau)\text{ and }M_{\mathrm{E}}=\sum_{j=0}^{\frac{i}{2}-1}\mathbb{Z}\cdot \sigma^{j}\mathrm{E}_{d,s}\text{ and the direct}\] \[\text{sum between }M_{3}^{\prime}\text{ and }M_{\mathrm{E}} \text{is that of $\mathbb{Z}$-modules, not of $\mathcal{G}$-modules}.\] **Proof**: For the (additive) direct summands of \(M_{3}\), the first identification \[\mathbb{Z}[G](1-\tau)\mathrm{E}=\sum_{i=0}^{d-1}\mathbb{Z}\cdot\tau^{i}T_{ \sigma}(1-\tau)\] follows from Lemma 6.2(\(i\)) while the second identification \[\mathbb{Z}[\langle\sigma\rangle]\mathrm{E}=\sum_{j=0}^{\frac{i}{2}-1}\mathbb{ Z}\cdot\sigma^{j}\mathrm{E}\] follows from Lemma 6.2 (ii). We also note that the \(\mathbb{Z}\)-ranks of \(M_{1}\), \(M_{2}\), \(M_{3}\), and \(M_{4}\) are, respectively \(1\), \(d\), \(d-1+\frac{s}{2}\), and \(\frac{s}{2}\) (although for the latter one has to check the linear independence of the \(\sigma^{j}\mathrm{E}\) from \(M_{3}^{\prime}\).) We next define the \(d_{i}\) maps, which are similar to those from the dihedral case. 
**Definition 6.5**.: _The \(d_{i}\,:\,M_{i}\to M_{i+1}\) are as follows:_ \[d_{1}\,:\,M_{1}\longrightarrow M_{2}\text{ is the inclusion }\mathbb{Z}[G]T_{\sigma}T_{\tau} \xrightarrow{\ \ \leq\ }\mathbb{Z}[G]T_{\sigma}.\] _If we view \(M_{1}\) as \(\mathbb{Z}\), then \(d_{1}(n)=\sum_{i=0}^{d-1}n\tau^{i}T_{\sigma}\)._ \(d_{2}\) _: \(M_{2}\)\(\rightarrow\)\(M_{3}\) is given by \(\cdot(1-\tau)\) : \(\mathbb{Z}[G]T_{\sigma}\longrightarrow\mathbb{Z}[G]T_{\sigma}(1-\tau) \subseteq\mathbb{Z}[G]\mathrm{B}\)._ _with "\(\subseteq\)" coming from the identity \(\mathbb{Z}[G]T_{\sigma}(1-\tau)=\mathbb{Z}[G](1-\tau)\mathrm{B}\) in Lemma 6.2._ \(d_{3}\) _: \(M_{3}\)\(\rightarrow\)\(M_{4}\) is given by \(\cdot T_{\tau}\) : \(\mathbb{Z}[G]\mathrm{B}\longrightarrow\mathbb{Z}[G]\mathrm{B}T_{\tau}\)._ We note that each map is a \(\mathcal{G}\)-module homomorphism. The map \(d_{1}\), which is an inclusion, can be thought of as the diagonal embedding if \(M_{1}\) is viewed as \(\mathbb{Z}\), with an image that has a trivial \(\mathcal{G}\)-action. The map \(d_{2}\) is right mulplitplication by \((1-\tau)\) and hence is a \(\mathcal{G}\)-map. By construction \(M_{3}^{\prime}\) is the image of \(d_{2}\). The map \(d_{3}\) is right multiplication by \(T_{\tau}\) and can be viewed as a trace map on \(M_{\mathrm{B}}\), the right summand of \(M_{3}\). The trace is also trivial on \(M_{3}^{\prime}\) because \((\tau-1)T_{\tau}=0\), so we need only consider \(d_{3}\) applied to \(M_{\mathrm{B}}\). The homotopy maps are given next. **Definition 6.6**.: _The \(h_{i}\) : \(M_{i+1}\)\(\rightarrow\)\(M_{i}\) are as follows:_ \(h_{1}\) _: \(\mathbb{Z}[G]T_{\sigma}\longrightarrow\mathbb{Z}[G]T_{\sigma}T_{\tau}\) := \(\cdot T_{\tau}\) is given by \(xT_{\sigma}\)\(\mapsto\)\(xT_{\sigma}T_{\tau}\)._ \(h_{2}\) _: \(\mathbb{Z}[G]\mathrm{B}\longrightarrow\mathbb{Z}[G]T_{\sigma}\) is given by \(h_{2}(x\mathrm{B})=x\sum_{i=0}^{d-1}(\frac{d-1}{2}-i)\tau^{i}\tau^{\frac{d+1} {2}}T_{\sigma}\) for every \(x\in\mathbb{Z}[G]\)._ \(h_{3}\) _: \(\mathbb{Z}[G]\mathrm{B}T_{\tau}\overset{\subset}{\longrightarrow}\mathbb{Z} [G]\mathrm{B}\) is the inclusion._ The next result verifies that the maps just defined satisfy the Positselski hypotheses. **Theorem 6.7**.: _In the semi-direct case, given the above definitions we have the following._ 1. _The_ \(d_{i}\)_'s are exact_ 2. \(h_{2}\) _is well-defined_ 3. _The prism condition is satisfied at all 4 modules._ **Proof**: For part (1), exactness follows from extending the Hilbert 90 sequence discussed in the dihedral case, \[0\longrightarrow\mathbb{Z}[\langle\tau\rangle]\cdot T_{\tau}\overset{\subset }{\longrightarrow}\mathbb{Z}[\langle\tau\rangle]\overset{\cdot(1-\tau)}{ \longrightarrow}\mathbb{Z}[\langle\tau\rangle](1-\tau)\longrightarrow 0.\] Here we replace \(\mathbb{Z}[\langle\tau\rangle]\) with \(\mathbb{Z}[G]T_{\sigma}\), which is isomorphic as a \(\mathbb{Z}[\langle\tau\rangle]\)-module to \(\mathbb{Z}[\langle\tau\rangle]\). The second short exact sequence is immediate from the direct sum decomposition, although it should be stated that it is not split exact, since the direct sum is only that of \(\mathbb{Z}\)-modules, not \(\mathcal{G}\)-modules. For part (2), as in the previous section we will show that the left annihilator of \(\mathrm{B}\) is in the left annihilator of \(h_{2}(\mathrm{B})\). Let \(x\in\mathbb{Z}[G]\) such that \(x\mathrm{B}=0\). 
We will use the direct sum decomposition of \(\mathbb{Z}[G]\) to express \(x\) as follows: \[x=(x_{1}(1-\tau),x_{2}),\] where \(x_{1}\in\mathbb{Z}[G]\) and \(x_{2}\in\mathbb{Z}[\langle\sigma\rangle]\). Now we use the direct sum decomposition of \(M_{3}=\mathbb{Z}[G]\mathrm{B}\): \[x\mathrm{B}=(x_{1}(1-\tau)\mathrm{B},x_{2}\mathrm{B})=(0,0).\] The fact that \(x_{1}\mathrm{B}=0\) and \(x_{2}\mathrm{B}=0\) will be used after we apply \(h_{2}\). But before applying \(h_{2}\) to the elements of each direct summand we first make two observations, \(i)\) and \(ii)\), about \(x_{1}\) and \(x_{2}\) respectively that will be important in understanding where \(h_{2}\) sends both of these elements. 1. \(x_{1}\tau^{\frac{d+1}{2}}T_{\sigma}=kT_{\tau}\) for some \(k\in\mathbb{Z}[\langle\sigma\rangle]\). This follows once \(x_{1}\tau^{\frac{d+1}{2}}T_{\sigma}\) is multiplied by \((1-\tau)\) we get \(0\), which is shown directly as follows, \[x_{1}\tau^{\frac{d+1}{2}}T_{\sigma}(1-\tau)=x_{1}(1-\tau)\mathrm{B}=0\] with the first equality following from Lemma 6.2\((ii)\). The left-annihilator of \((1-\tau)\) in \(\mathbb{Z}[G]\) is \(\mathbb{Z}[G]T_{\tau}\). 2. \(x_{2}\) is a left-multiple of \((1+\sigma^{\frac{s}{2}})\). This follows from the direct sum decomposition \[\mathbb{Z}[\langle\sigma\rangle]\mathrm{E}=\bigoplus_{j=0}^{\frac{s}{2}-1} \mathbb{Z}\sigma^{j}\mathrm{E}.\] With the above observations, we are ready to compute, starting with \(h_{2}(x_{1}(1-\tau))\mathrm{E}\): \[h_{2}(x_{1}(1-\tau)\mathrm{E}) =x_{1}(1-\tau)\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{ i}\tau^{\frac{d+1}{2}}T_{\sigma}=x_{1}\tau^{\frac{d+1}{2}}\Bigg{(}(1-\tau)\sum_{i=0}^ {d-1}\left(\frac{d-1}{2}-i\right)\tau^{i}\Bigg{)}T_{\sigma}\] \[=x_{1}\tau^{\frac{d+1}{2}}\left(d-T_{\tau}\right)T_{\sigma}=x_{1} \tau^{\frac{d+1}{2}}T_{\sigma}\left(d-T_{\tau}\right)\stackrel{{ i)}}{{=}}kT_{\tau}(d-T_{\tau})=k(dT_{\tau}-T_{\tau}^{2})\] \[=k(dT_{\tau}-dT_{\tau})=0\] Now we compute \(h_{2}(x_{2}\mathrm{E})\): \[h_{2}(x_{2}\mathrm{E}) \stackrel{{ ii)}}{{=}}h_{2}(x_{2}^{\prime}(1+\sigma^ {\frac{s}{2}})\mathrm{E})=x_{2}^{\prime}(1+\sigma^{\frac{s}{2}})\sum_{i=0}^{d- 1}\left(\frac{d-1}{2}-i\right)\tau^{i}\tau^{\frac{d+1}{2}}T_{\sigma}\] \[=x_{2}^{\prime}(d-1)T_{\sigma}+x_{2}^{\prime}(1+\sigma^{\frac{s }{2}})\sum_{i=0}^{d-1}\left(-i\right)\tau^{i}\tau^{\frac{d+1}{2}}T_{\sigma}\] \[=x_{2}^{\prime}(d-1)T_{\sigma}+x_{2}^{\prime}\sum_{i=0}^{d-1} \left(-i\right)\tau^{i}\tau^{\frac{d+1}{2}}T_{\sigma}+x_{2}^{\prime}\sum_{i=0} ^{d-1}\left(-i\right)\tau^{-i}\tau^{\frac{d-1}{2}}\sigma^{\frac{s}{2}}T_{\sigma}\] \[\stackrel{{*}}{{=}}x_{2}^{\prime}(d-1)T_{\sigma}+x_{2} ^{\prime}(-(d-1))T_{\sigma}=0\] For the second to last equality above (*), note that the coefficients in the two summands add to \(-(d-1)\) for each power of \(\tau\). With \(x_{1}(1-\tau)\) and \(h_{2}(x_{2})\) both in left-annihilator of \(h_{2}(\mathrm{E})\), we have shown that \(h_{2}\) is well-defined. For part (3), the prism condition at \(M_{1}\), which is \(h_{1}d_{1}=\cdot d\), holds because \(T_{\tau}\cdot T_{\tau}=d\cdot T_{\tau}\), as \(h_{1}d_{1}=\cdot T_{\tau}\). The same goes for the prism condition at \(M_{4}\), since \(d_{3}h_{3}=\cdot T_{\tau}\) as well. 
Furthermore, the prism condition at \(M_{2}\) holds from the following calculations: \((d_{1}h_{1})(T_{\sigma})=T_{\sigma}\cdot T_{\tau}\), while \[(h_{2}d_{2})(T_{\sigma}) =h_{2}(T_{\sigma}(1-\tau))=h_{2}((1-\tau)\tau^{\frac{d-1}{2}}\mathrm{B})=(1-\tau)\tau^{\frac{d-1}{2}}\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{\frac{d+1}{2}+i}T_{\sigma}\] \[=(1-\tau)\sum_{i=0}^{d-1}\left(-i\right)\tau^{i}T_{\sigma}=-T_{\tau}T_{\sigma}+d\cdot T_{\sigma}.\] Therefore \[(d_{1}h_{1}+h_{2}d_{2})(T_{\sigma})=T_{\sigma}T_{\tau}-T_{\sigma}T_{\tau}+d\cdot T_{\sigma}=d\cdot T_{\sigma}.\] The prism condition holds at \(M_{3}\) from the following calculations: \((h_{3}d_{3})(\mathrm{B})=\mathrm{B}\cdot T_{\tau}=T_{\tau}\cdot\mathrm{B}\), while \[(d_{2}h_{2})(\mathrm{B}) =d_{2}\left(\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{\frac{d+1}{2}+i}T_{\sigma}\right)=\left(\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{\frac{d+1}{2}+i}T_{\sigma}\right)(1-\tau)\] \[=\sum_{i=0}^{d-1}\left(\frac{d-1}{2}-i\right)\tau^{i}(1-\tau)\mathrm{B}=\sum_{i=0}^{d-1}-i\tau^{i}(1-\tau)\mathrm{B}=\left(-T_{\tau}+d\right)\mathrm{B}.\] Therefore \[(d_{2}h_{2}+h_{3}d_{3})(\mathrm{B})=T_{\tau}\cdot\mathrm{B}+\left(-T_{\tau}+d\right)\mathrm{B}=d\cdot\mathrm{B}.\] And finally, \(d_{3}h_{3}(\mathrm{B}T_{\tau})=d_{3}(\mathrm{B}T_{\tau})=\mathrm{B}T_{\tau}T_{\tau}=d\mathrm{B}T_{\tau}\). This concludes the proof of Theorem 6.7.

The only remaining part to check for the Positselski hypotheses is that the Bockstein homomorphisms are zero. \(M_{1}\) and \(M_{2}\) are \(\mathbb{Z}\) (as a trivial \(\mathcal{G}\)-module) and the induced module \(\mathrm{Ind}_{\mathcal{H}}^{\mathcal{G}}(\mathbb{Z})\) respectively, each viewed modulo \(d^{2}\mathbb{Z}\) to represent \(\mu_{d^{2}}\) with a trivial \(\mathcal{G}\)-action on cohomology with their respective Galois groups. Both of these modules were shown to have zero Bockstein maps in the general setup section. For \(M_{3}\) and \(M_{4}\), we will show that as \(\mathcal{J}\)-modules, \(M_{3}/d\,M_{3}\cong\mathbb{Z}[\mathcal{J}]/d^{2}\mathbb{Z}[\mathcal{J}]\cong\bigoplus_{j=0}^{\frac{s}{2}-1}\mathrm{Ind}_{\mathcal{N}}^{\mathcal{J}}(\mathbb{Z}/d^{2}\mathbb{Z})\) and that \(M_{4}/d\,M_{4}\cong\bigoplus_{j=0}^{\frac{s}{2}-1}\mathbb{Z}/d^{2}\mathbb{Z}\) as a trivial \(\mathcal{J}\)-module. The isomorphisms are defined by \(\mathrm{B}\mapsto 1\in\mathbb{Z}[\mathcal{J}]\) and \(\mathrm{B}T_{\tau}\mapsto 1\in\mathbb{Z}\) respectively.

**Lemma 6.8**.: _As \(\mathcal{J}\)-modules,_

1. \(M_{3}=\mathbb{Z}[G]\mathrm{B}\cong\bigoplus_{j=0}^{\frac{s}{2}-1}\mathrm{Ind}_{\mathcal{N}}^{\mathcal{J}}(\mathbb{Z})\)
2. \(M_{4}=\mathbb{Z}[G]\mathrm{B}T_{\tau}\cong\bigoplus_{j=0}^{\frac{s}{2}-1}\mathrm{Ind}_{\mathcal{J}}^{\mathcal{J}}(\mathbb{Z})\)

**Proof**: Both parts of this lemma follow from the fact that \[\mathbb{Z}[G]\mathrm{B}=\bigoplus_{j=0}^{\frac{s}{2}-1}\sigma^{j}\mathbb{Z}[\langle\tau\rangle]\mathrm{B}\] followed by the arguments in Lemma 5.5 applied to each direct summand. Note that in the dihedral case, \(s=2\) and therefore \(\sigma=\sigma^{\frac{s}{2}}\). So \(\sigma^{\frac{s}{2}}\) must be used to apply the same arguments. \(\Box\)

Thus, the Bockstein maps have component maps for each direct summand that are identical to those in the dihedral case. And the dihedral Bockstein maps were shown to be zero. Therefore the Bockstein maps are zero on \(M_{3}\) and \(M_{4}\), completing the verification of the Positselski hypotheses for the \(4\)-term exact sequence of \(\mathcal{G}\)-modules.
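The prism-condition computations above ultimately reduce to identities in the commutative subring \(\mathbb{Z}[\langle\tau\rangle]\), notably \(T_{\tau}^{2}=d\cdot T_{\tau}\) and \((1-\tau)\sum_{i=0}^{d-1}(\frac{d-1}{2}-i)\tau^{i}=d-T_{\tau}\). The short Python script below is only an illustrative numerical sanity check of these two identities for small odd \(d\); it is not part of the argument, and the representation of \(\mathbb{Z}[\langle\tau\rangle]\) by coefficient lists is our own choice.

```python
# Sanity check, in Z[<tau>] with tau^d = 1, of two identities used in the
# prism-condition computations above:
#   (1)  T_tau * T_tau = d * T_tau
#   (2)  (1 - tau) * sum_{i=0}^{d-1} ((d-1)/2 - i) tau^i = d*1 - T_tau
# Elements are coefficient lists c[0..d-1] for c[0]*1 + c[1]*tau + ... .

def mult(a, b, d):
    """Multiply two elements of Z[<tau>] (cyclic convolution mod tau^d = 1)."""
    out = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % d] += ai * bj
    return out

def check(d):
    one = [1] + [0] * (d - 1)
    tau = [0, 1] + [0] * (d - 2)
    T_tau = [1] * d                          # 1 + tau + ... + tau^(d-1)
    one_minus_tau = [a - b for a, b in zip(one, tau)]

    # (1) T_tau^2 = d * T_tau
    assert mult(T_tau, T_tau, d) == [d] * d

    # (2) (1 - tau) * sum_i ((d-1)/2 - i) tau^i = d - T_tau
    s = [(d - 1) // 2 - i for i in range(d)]
    lhs = mult(one_minus_tau, s, d)
    rhs = [d - 1] + [-1] * (d - 1)           # d*1 - T_tau
    assert lhs == rhs

if __name__ == "__main__":
    for d in (3, 5, 7, 9, 11):               # odd d, as in the paper
        check(d)
    print("identities verified for small odd d")
```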
### The Connecting Map for the Semi-Direct Case

Now we compute the connecting map \(\eta\). Let \(\overline{M_{i}}\,:=\,M_{i}/d\,M_{i}\) for \(i\in\{1,2,3,4\}\). In this section we will use the exact sequence of modules with homotopies defined in the previous section to describe the connecting map \(\eta\,:\,H^{n-1}(\mathcal{G},\overline{M_{4}})\longrightarrow H^{n}(\mathcal{G},\overline{M_{1}})\).

**Definition 6.9**.: _Given the above notation we define the following._

1. \(\ell:\overline{M_{4}}\longrightarrow\overline{M_{3}}\) _the \(\mathbb{Z}\)-linear lifting determined by \(\ell(\sigma^{j}\mathrm{B}T_{\tau})=\sigma^{j}\mathrm{B}\) for \(0\leq j\leq\frac{s}{2}-1\), and \(\mathcal{L}:C^{n-1}(\mathcal{G},\overline{M_{4}})\longrightarrow C^{n-1}(\mathcal{G},\overline{M_{3}})\) the induced map on cochains, \(\mathcal{L}(c):=\ell\circ c\)._
2. \(\delta:C^{n-1}(\mathcal{G},\overline{M_{3}})\longrightarrow C^{n}(\mathcal{G},\overline{M_{3}})\) _the cochain map from the bar resolution._
3. \(\widetilde{\eta}_{\ell}=\widetilde{\eta}:Z^{n-1}(\mathcal{G},\overline{M_{4}})\longrightarrow Z^{n}(\mathcal{G},\overline{M_{1}}),\ \widetilde{\eta}(c):=-[d_{1}^{-1}h_{2}\delta\mathcal{L}(c)]\)_._
4. \(\eta:H^{n-1}(\mathcal{G},\overline{M_{4}})\longrightarrow H^{n}(\mathcal{G},\overline{M_{1}}),\ \eta([c]):=[\widetilde{\eta}(c)]\)_._

Our choice of lifting \(\ell\) is a \(\mathbb{Z}\)-module homomorphism, though it is not a \(\mathbb{Z}[G]\)-module homomorphism. We will see in a moment that though \(\ell\) is not a \(\mathbb{Z}[G]\)-module homomorphism, it is a module homomorphism for the subring \(\mathbb{Z}[\langle\sigma\rangle]\) of \(\mathbb{Z}[G]\) that properly contains \(\mathbb{Z}=\mathbb{Z}\cdot 1_{G}\). Also, since \(d_{1}\) is injective we let \(d_{1}^{-1}(x)\) denote the unique preimage element for every \(x\in\operatorname{im}(d_{1})\).

This next lemma shows that \(\ell\) preserves action by \(\sigma\), a fact that was used at this point in the dihedral case in the previous section. However, in the dihedral case it was plain that \(\sigma\) had the same action as \(-1\) on \(\mathrm{B}\), since \(\sigma\) was just \(\sigma^{\frac{s}{2}}\), so no lemma was necessary. Furthermore, the fact that \(\sigma^{\frac{s}{2}}\) has this action of \(-1\) in the semi-direct case will be central to the proof that follows.

**Lemma 6.10**.: \(\ell\) _is a \(\mathbb{Z}[\langle\sigma\rangle]\)-module homomorphism._

**Proof**: We will show that \(\ell\) preserves action by \(\sigma\). The last summand will be the only term of significance that needs to be addressed with respect to how it passes through the lifting after action by \(\sigma\).

\[\begin{split}\mathcal{L}\left(\sigma\cdot\sum_{j=0}^{\frac{s}{2}-1}x_{j}\sigma^{j}\mathrm{B}T_{\tau}\right)&=\mathcal{L}\left(\sum_{j=0}^{\frac{s}{2}-1}x_{j}\sigma^{j+1}\mathrm{B}T_{\tau}\right)\\ &=\mathcal{L}\left(x_{\frac{s}{2}-1}\sigma^{\frac{s}{2}}\mathrm{B}T_{\tau}+\sum_{j=0}^{\frac{s}{2}-2}x_{j}\sigma^{j+1}\mathrm{B}T_{\tau}\right)=\mathcal{L}\left(-x_{\frac{s}{2}-1}\mathrm{B}T_{\tau}+\sum_{j=0}^{\frac{s}{2}-2}x_{j}\sigma^{j+1}\mathrm{B}T_{\tau}\right)\\ &=-x_{\frac{s}{2}-1}\mathrm{B}+\sum_{j=0}^{\frac{s}{2}-2}x_{j}\sigma^{j+1}\mathrm{B}=x_{\frac{s}{2}-1}\sigma^{\frac{s}{2}}\mathrm{B}+\sum_{j=0}^{\frac{s}{2}-2}x_{j}\sigma^{j+1}\mathrm{B}\\ &=\sum_{j=0}^{\frac{s}{2}-1}x_{j}\sigma^{j+1}\mathrm{B}=\sigma\cdot\sum_{j=0}^{\frac{s}{2}-1}x_{j}\sigma^{j}\mathrm{B}=\sigma\cdot\mathcal{L}\left(\sum_{j=0}^{\frac{s}{2}-1}x_{j}\sigma^{j}\mathrm{B}T_{\tau}\right).\end{split}\]

This proves the lemma.

Now we are ready to compute the connecting map for the semi-direct case.
The next lemma will allow us to find \(\widetilde{\eta}(c)(g_{1},\ldots,g_{n})\) by first expressing \(\delta(\mathcal{L}(c))(g_{1},\ldots,g_{n})\) as \(z\cdot\mathcal{L}(c(g_{2},\ldots,g_{n}))\) for an appropriate \(z\in\mathbb{Z}[G]\) based on \(g_{1}\), lifting this through \(d_{2}\), and finally applying \(-h_{1}\).

**Lemma 6.11**.: _Let \(c\in Z^{n-1}(\mathcal{G},\overline{M_{4}})\) be a cocycle, \(g_{1},\ldots,g_{n}\in\mathcal{G}\), and let \(c^{\prime}\in\mathbb{Z}/d\mathbb{Z}\) such that \(c(g_{2},\ldots,g_{n})=c^{\prime}\cdot\mathrm{B}T_{\tau}\). Let \(\sigma^{j}\tau^{i}\), an element of \(\mathcal{G}/\mathcal{N}\), be the coset of \(g_{1}\). Then_

1. \(\delta(\mathcal{L}(c))(g_{1},\ldots,g_{n})=g_{1}\cdot\mathcal{L}(c(g_{2},\ldots,g_{n}))-\mathcal{L}(g_{1}\cdot c(g_{2},\ldots,g_{n}))=\sigma^{j}\left(\tau^{i}-1\right)\cdot\mathcal{L}(c(g_{2},\ldots,g_{n})).\)
2. _For any_ \(x\in C^{n}(\mathcal{G},\overline{M_{2}})\) _such that_ \(\delta(\mathcal{L}(c))=d_{2}(x)\)_,_ \(\widetilde{\eta}(c)=-h_{1}(x)\)_._

**Proof**: For (1), the first equality follows from the fact that \(\mathcal{L}\) is a \(\mathbb{Z}\)-module homomorphism, and hence the proof is identical to the analogous proof in the previous section. The second equality of (1) follows from \(\mathcal{L}\) being a \(\mathbb{Z}[\langle\sigma\rangle]\)-module homomorphism and \(\tau\) acting trivially on \(\overline{M_{4}}\). Note that \(\tau\) acts trivially on \(\overline{M_{4}}\) because \(T_{\tau}\) is in the center of \(\mathbb{Z}[G]\) because \(\langle\tau\rangle\) is normal in \(G\), and \(\tau T_{\tau}=T_{\tau}\). We use these two reasons in tandem to show the second equality: \[g_{1}\cdot\mathcal{L}(c(g_{2},\ldots,g_{n}))-\mathcal{L}(g_{1}\cdot c(g_{2},\ldots,g_{n})) =\sigma^{j}\tau^{i}\mathcal{L}(c(g_{2},\ldots,g_{n}))-\mathcal{L}(\sigma^{j}\tau^{i}\cdot c(g_{2},\ldots,g_{n}))\] \[=\sigma^{j}\tau^{i}\cdot\mathcal{L}(c^{\prime})-\sigma^{j}\cdot\mathcal{L}(\tau^{i}c^{\prime})\] \[=\sigma^{j}\tau^{i}\cdot\mathcal{L}(c^{\prime})-\sigma^{j}\cdot\mathcal{L}(c^{\prime})\] \[=\sigma^{j}(\tau^{i}-1)\cdot\mathcal{L}(c^{\prime}).\]

For (2), showing that \(\widetilde{\eta}(c)=-h_{1}(x)\) relies only on the prism condition, and these details are identical to those in part (2) of Lemma 5.7, the analog to this lemma in the dihedral case. They will therefore be omitted here to prevent repetition. This concludes the proof of Lemma 6.11. \(\Box\)

As a corollary we obtain a characterization of the connecting map. As in the case for the dihedral extensions, this connecting map behaves like a cup product.

**Corollary 6.12**.: _Let \(g_{1},g_{2},\ldots,g_{n},c,c^{\prime},i,j\) be defined as in Lemma 6.11, and let \(c_{m}\in\mathbb{Z}/d\mathbb{Z}\) such that \(c(g_{2},\ldots,g_{n})=\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\mathrm{B}T_{\tau}\). Then_ \[\widetilde{\eta}(c)(g_{1},\ldots,g_{n})=-i\sum_{m=0}^{\frac{s}{2}-1}c_{m}(\theta_{s-m}-1)T_{\sigma}T_{\tau}\]

**Proof**: From part (1) of Lemma 6.11, we know that \[\delta(\mathcal{L}(c))(g_{1},\ldots,g_{n})=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\cdot(1-\tau)c^{\prime}\mathrm{B}.\] Now we will find an \(x\in\overline{M_{2}}\) with the above \(d_{2}\)-image so we can apply \(-h_{1}\) to \(x\). By part (2) of Lemma 6.11, this yields \(\widetilde{\eta}(c)=-h_{1}(x)\).
We show now that for \(x=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\left(\sum_{r=0}^{\theta_{m}^{-1}-1}\tau^{r}\right)\tau^{\frac{d+1}{2}}T_{\sigma}\), we have the desired equality: \[d_{2}(x)=\delta(\mathcal{L}(c))(g_{1},\ldots,g_{n}).\] We start with the right hand side. \[\delta(\mathcal{L}(c))(g_{1},\dots,g_{n}) =\sigma^{j}\left(\tau^{i}-1\right)\cdot\mathcal{L}(c(g_{2},\dots,g_{n}))\] \[=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)(1-\tau)\cdot\mathcal{L}(c^{\prime})\] \[=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)(1-\tau)\cdot\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\mathrm{B}\] \[=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}(1-\tau^{\theta_{m}^{-1}})\mathrm{B}\] \[=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\left(\sum_{r=0}^{\theta_{m}^{-1}-1}\tau^{r}\right)(1-\tau)\mathrm{B}\] \[=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\left(\sum_{r=0}^{\theta_{m}^{-1}-1}\tau^{r}\right)\tau^{\frac{d+1}{2}}T_{\sigma}(1-\tau)\] \[=d_{2}\left(\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\left(\sum_{r=0}^{\theta_{m}^{-1}-1}\tau^{r}\right)\tau^{\frac{d+1}{2}}T_{\sigma}\right).\]

With \(x\) having the desired property, we can now apply \(h_{1}\) to \(x\). \[h_{1}(x) =h_{1}\left(\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\left(\sum_{r=0}^{\theta_{m}^{-1}-1}\tau^{r}\right)\tau^{\frac{d+1}{2}}T_{\sigma}\right)\] \[=\sigma^{j}\left(-\sum_{k=0}^{i-1}\tau^{k}\right)\sum_{m=0}^{\frac{s}{2}-1}c_{m}\sigma^{m}\left(\sum_{r=0}^{\theta_{m}^{-1}-1}\tau^{r}\right)\tau^{\frac{d+1}{2}}T_{\sigma}T_{\tau}\] \[=-i\sum_{m=0}^{\frac{s}{2}-1}c_{m}\left(\theta_{s-m}-1\right)T_{\sigma}T_{\tau}\] This concludes the proof of Corollary 6.12.

In view of Theorem 6.7 and Corollary 6.12, the machinery in Theorem 2.2 gives the following result.

**Theorem 6.13**.: _In the semi-direct case we have the following 6-term exact sequence_ _where \(\eta\) is as described in Corollary 6.12 and \(d_{1}\) is scalar extension._

**Corollary 6.14**.: \(\eta\) _induces an isomorphism_ \[\frac{H^{n}(\mathcal{G},\overline{M_{4}})}{d_{3}H^{n}(\mathcal{G},\overline{M_{3}})}\stackrel{{\cong}}{{\longrightarrow}}H^{n}(E/F).\]

At the end of this paper, it will be shown that \(d_{3}\) can be viewed as the corestriction map from \(\widetilde{E}\) and that \[\frac{H^{n}(\mathcal{G},\overline{M_{4}})}{\operatorname{cor}_{\widetilde{E}/\widetilde{F}}H^{n}(\widetilde{E},\mu_{d})}\cong H^{n}(E/F).\]

### Some Examples

We conclude this section with a computation of \(\eta:H^{n}(\mathcal{G},\overline{M_{4}})\longrightarrow H^{n}(F,\mu_{d})\) for \(n=0\) and \(n=1\).

1. \(\eta\) for \(n=0\)

In the semi-direct case, \(s\) is even and the non-trivial action of \(\sigma^{\frac{s}{2}}\) on \(\overline{M_{4}}\) is multiplication by \((-1)\). Hence \(\overline{M_{4}}\) has no fixed points with \(d\) being odd, and therefore \(\eta=0\) because \(H^{0}(\mathcal{G},\overline{M_{4}})\) is trivial. Note that this is not true for the cyclic case, where \(s=1\) is not even.
In the cyclic case, \(H^{0}(\mathcal{G},\overline{M_{4}})=H^{0}(\mathcal{G},\mu_{d})=\mu_{d}\) and \(\eta\) is the well-known cup product with the character \(-\chi_{a}\) defined by the extension \(E/F=F(\alpha)/F\).

2. \(\eta\) for \(n=1\)

Let \(\chi\in Z^{1}(\mathcal{G},\overline{M_{4}})\) be a crossed homomorphism. Then the identity \(\sigma\cdot\chi(\tau)=\theta_{1}\chi(\tau)\) may be deduced from the cocycle condition as follows. \[\sigma\cdot\chi(\tau) =\sigma\cdot\chi(\tau)+\chi(\sigma)-\chi(\sigma)=\chi(\sigma\tau)-\chi(\sigma)=\chi(\tau^{\theta_{1}}\sigma)-\chi(\sigma)\] \[=\tau^{\theta_{1}}\cdot\chi(\sigma)+\chi(\tau^{\theta_{1}})-\chi(\sigma)=\tau^{\theta_{1}}\cdot\chi(\sigma)+(\tau^{\theta_{1}-1}+\cdots+1)\cdot\chi(\tau)-\chi(\sigma)\] \[\stackrel{{(*)}}{{=}}1\cdot\chi(\sigma)+(\theta_{1})\cdot\chi(\tau)-\chi(\sigma)=\theta_{1}\chi(\tau)\] where the equality \((*)\) comes from \(\tau\) acting trivially on \(\overline{M_{4}}\). So for a unique \(t\in\mathbb{Z}/d\mathbb{Z}\), \[\chi(\tau)=t\sum_{m=0}^{\frac{s}{2}-1}\theta_{1}^{-m}\sigma^{m}\mathrm{B}T_{\tau}\] where \(t\) may be thought of as the \(\sigma^{0}\mathrm{B}T_{\tau}\)-coefficient of \(\chi(\tau)\). In this case \(\chi(\sigma)\) can be expressed as follows: Let \(s:\mathbb{Z}\longrightarrow\mathbb{Z}/d\mathbb{Z}\) be such that \(s_{m+\frac{s}{2}}=-s_{m}\) for every \(m\in\mathbb{Z}\), so that \[\chi(\sigma)=\sum_{m=0}^{\frac{s}{2}-1}s_{m}\sigma^{m}\mathrm{B}T_{\tau}\] and more generally, \[\chi(\sigma)=\sum s_{\gamma_{m}}\sigma^{\gamma_{m}}\mathrm{B}T_{\tau}\] for any choice of representatives \(\gamma_{1},\dots,\gamma_{\frac{s}{2}-1}\in\mathbb{Z}\) of the cosets \(\mathbb{Z}/(\frac{s}{2})\mathbb{Z}\). Suppose \(g_{1}\mathcal{N}=\sigma^{j}\tau^{i}\) and \(g_{2}\mathcal{N}=\tau^{k}\sigma^{\ell}\). Then \[\chi(g_{2})=k\cdot\chi(\tau)+\left(\sum_{\beta=0}^{\ell-1}\sigma^{\beta}\right)\cdot\chi(\sigma)=\sum_{m=0}^{\frac{s}{2}-1}\left(kt\theta_{-1}^{m}+\sum_{\beta=0}^{\ell-1}s_{m-\beta}\right)\sigma^{m}\mathrm{B}T_{\tau}\] and hence \[c_{m}=kt\theta_{-1}^{m}+\sum_{\beta=0}^{\ell-1}s_{m-\beta}.\] The connecting map formula in Corollary 6.12 may be applied to yield \[\widetilde{\eta}(g_{1},g_{2})=-i\sum_{m=0}^{\frac{s}{2}-1}\left(\theta_{1}^{-m}-1\right)c_{m}=-i\sum_{m=0}^{\frac{s}{2}-1}\left(\theta_{1}^{-m}-1\right)\left(kt\theta_{1}^{-m}+\sum_{\beta=0}^{\ell-1}s_{m-\beta}\right).\]

## 7. Interpreting the Sequences

In this section we record some consequences of the sequences in Theorems 4.6, 5.9, and 6.13. As noted in the introduction, when \(E/F\) is cyclic one has the classical description of the relative Brauer group, \[\frac{F^{*}}{N_{E/F}(E^{*})}\cong\ker(\mathrm{Br}_{d}F\to\mathrm{Br}_{d}E).\] As an immediate consequence of Theorem 4.6, this classical result generalizes as follows.

**Theorem 7.1**.: _If \(E/F\) is cyclic with \(\mu_{d}\subset F\) we have a description of the cohomological kernel for all \(n\geq 0\),_ \[\frac{H^{n}(F,\mu_{d})}{\mathrm{cor}_{E/F}H^{n}(E,\mu_{d})}\cong\ker(H^{n+1}(F,\mu_{d})\to H^{n+1}(E,\mu_{d})).\]

The goal in this section is to find what generalizations of Theorem 7.1 are possible in the dihedral and semi-direct product cases considered in the previous two sections. We continue to assume the notation of the last section. To better understand \(M_{3}\) and \(M_{4}\) we need to introduce more induced modules and subgroups.
**Definition 7.2**.: _We set \(T_{2}:=J+\sigma^{\frac{s}{2}}J\in\mathrm{Ind}_{J}^{\mathcal{G}}\mathbb{Z}\) and then set \(\mathcal{T}_{2}:=\sum_{j=0}^{\frac{s}{2}-1}\mathbb{Z}\cdot\sigma^{j}T_{2}\subset\mathrm{Ind}_{J}^{\mathcal{G}}\mathbb{Z}\). We note that \(\mathcal{T}_{2}\) is a \(\mathcal{G}\)-submodule of \(\mathrm{Ind}_{J}^{\mathcal{G}}\mathbb{Z}\) with \(\sigma^{\frac{s}{2}}T_{2}=T_{2}\). We set \(\mathcal{J}^{\prime}\) to be the unique subgroup of \(\mathcal{G}\) of index \(\frac{s}{2}\) containing \(\mathcal{J}\) and denote by \(J^{\prime}\) the corresponding subgroup of \(G\). This means that \(J^{\prime}=\langle\tau,\sigma^{\frac{s}{2}}\rangle\)._

By Galois theory, since \(J^{\prime}\) has index \(\frac{s}{2}\) in \(G\), \(\mathcal{J}^{\prime}=\operatorname{Gal}(F_{sep}/F^{\prime})\) where \([F^{\prime}\,:\,F]=\frac{s}{2}\) and \(F\subset F^{\prime}\subset\tilde{F}=F(\alpha)\) (since \(\tilde{F}/F\) is cyclic, \(F^{\prime}\) is the unique such intermediate extension.) We have the following.

**Lemma 7.3**.: _Given the above definitions. (i) As \(\mathcal{G}\)-modules, \(\mathcal{T}_{2}\cong\operatorname{Ind}_{\mathcal{J}^{\prime}}^{\mathcal{G}}\mathbb{Z}\). (ii) As \(\mathcal{G}\)-modules, \(M_{4}\,:\,=\,M_{3}/M_{3}^{\prime}\cong\operatorname{Ind}_{\mathcal{J}}^{\mathcal{G}}\mathbb{Z}/\mathcal{T}_{2}\). In particular we have an exact sequence of \(\mathcal{G}\)-modules._ \[0\to\operatorname{Ind}_{\mathcal{J}^{\prime}}^{\mathcal{G}}\mathbb{Z}\to\operatorname{Ind}_{\mathcal{J}}^{\mathcal{G}}\mathbb{Z}\to M_{4}\to 0.\]

**Proof.** We note that \(\operatorname{Ind}_{\mathcal{J}^{\prime}}^{\mathcal{G}}\mathbb{Z}\cong\bigoplus_{j=0}^{\frac{s}{2}-1}\mathbb{Z}\cdot\sigma^{j}J^{\prime}\) with \(\sigma\) acting cyclically and \(\tau\) acting trivially on the summands. This is exactly how \(\mathcal{G}\) acts on the summands of \(\mathcal{T}_{2}\), and the map \(\sigma^{j}T_{2}\mapsto\sigma^{j}J^{\prime}\) gives the isomorphism required for (i). For (ii), by definition \(M_{4}\) is the free \(\mathbb{Z}\)-module with basis \(\overline{\mathrm{B}}\), \(\sigma\overline{\mathrm{B}}\),...,\(\sigma^{\frac{s}{2}-1}\overline{\mathrm{B}}\), with trivial \(\tau\)-action and with \(\sigma\) acting cyclically on every summand except the last, which it multiplies by \(-1\) as it goes to the first summand: \(\sigma\cdot\sigma^{\frac{s}{2}-1}\overline{\mathrm{B}}=\sigma^{\frac{s}{2}}\overline{\mathrm{B}}=-\overline{\mathrm{B}}\). From this it follows that the map \(\sigma^{j}J\mapsto\sigma^{j}\overline{\mathrm{B}}\) defines a \(G\)-map \(\operatorname{Ind}_{\mathcal{J}}^{\mathcal{G}}\mathbb{Z}\to M_{4}\) with kernel \(\mathcal{T}_{2}\) (the latter as \(J+\sigma^{\frac{s}{2}}J\mapsto\overline{\mathrm{B}}+\sigma^{\frac{s}{2}}\overline{\mathrm{B}}=\overline{\mathrm{B}}-\overline{\mathrm{B}}=0\in M_{4}\).) This concludes the proof of Lemma 7.3.

**Remarks.** (i) The \(\mathbb{Z}\)-ranks of the modules \(\operatorname{Ind}_{\mathcal{J}^{\prime}}^{\mathcal{G}}\mathbb{Z}\), \(\operatorname{Ind}_{\mathcal{J}}^{\mathcal{G}}\mathbb{Z}\), \(M_{4}\) are, respectively, \(\frac{s}{2}\), \(s\), and \(\frac{s}{2}\). (ii) Of course, all of these modules can be taken (mod \(d\)) and the same results apply.

The cohomology of \(M_{4}\) can be interpreted using the sequence of Lemma 7.3. By definition we have \(F\subseteq F^{\prime}\subseteq\widetilde{F}\) where \([\widetilde{F}\,:\,F]=s\) and \([F^{\prime}\,:\,F]=\frac{s}{2}\).
Computing cohomology in \(\mu_{d}\) and using the fact that \((d,s)=1\) we know that \(H^{n}(F^{\prime},\mu_{d})\to H^{n}(\widetilde{F},\mu_{d})\) must be injective for all \(n\). In particular the long exact sequence in cohomology gives exact sequences \[0\to H^{n}(F^{\prime},\mu_{d})\to H^{n}(\widetilde{F},\mu_{d})\to H^{n}(\mathcal{G},M_{4})\to 0.\] This means if we let \(\overline{H}^{n}(\widetilde{F},\mu_{d})\,:\,=\,\operatorname{cok}(H^{n}(F^{\prime},\mu_{d})\to H^{n}(\widetilde{F},\mu_{d}))\) then the Positselski connecting map in Theorem 6.13 gives a map \(\overline{\eta}\,:\,\overline{H}^{n}(\widetilde{F},\mu_{d})\to H^{n+1}(F,\mu_{d})\) that computes the cohomological kernels as noted next.

**Theorem 7.4**.: _In the above notation with \(E/F\) being a semi-direct extension, we have an exact sequence,_ \[\overline{H}^{n}(\widetilde{F},\mu_{d})\overset{\overline{\eta}}{\to}H^{n+1}(F,\mu_{d})\to H^{n+1}(E,\mu_{d}).\]

To understand the kernel of \(\overline{\eta}\) one needs to further understand the cohomology of \(M_{3}\) and how it maps into the cohomology of \(M_{4}\). For this purpose, if \(\pi^{*}\,:\,H^{n}(\widetilde{F},\mu_{d})\to H^{n}(\mathcal{G},M_{4})\) is the induced map, we shall denote by \[N_{3}^{n}(E/F)\,:\,=\,\pi^{*-1}(\operatorname{im}(H^{n}(\mathcal{G},M_{3})\to H^{n}(\mathcal{G},M_{4})))\subseteq H^{n}(\widetilde{F},\mu_{d})\] and then Theorem 6.13 gives the following result.

**Theorem 7.5**.: _In the above notation with \(E/F\) being a semi-direct extension, we have the following characterization of the cohomological kernel \(H^{n+1}(E/F,\mu_{d})\),_ \[\frac{H^{n}(\widetilde{F},\mu_{d})}{i_{\widetilde{F}/F^{\prime}}H^{n}(F^{\prime},\mu_{d})+N_{3}^{n}(E/F)}\cong\ker(H^{n+1}(F,\mu_{d})\to H^{n+1}(E,\mu_{d})).\]

When interpreted loosely, this result can be understood as the analogue of Theorem 7.1 in the more general case. (When \(E/F\) is cyclic of degree \(d\) and \(s=1\) we would have \(\widetilde{F}=F\) and \(N_{3}^{n}(E/F)=\operatorname{cor}_{E/F}H^{n}(E,\mu_{d})\). Also, the subfield \(F^{\prime}\) doesn't exist in the cyclic case.)

Next we turn to \(M_{3}\subset\mathbb{Z}[G]\). We set \(\mathcal{H}^{\prime}:=\langle\mathcal{J},\sigma\tau\rangle=\operatorname{Gal}(F_{sep}/E^{\prime})\) where \(E^{\prime}\) is discussed above. We know by Lemma 6.8 the modules \(M_{3}=\mathbb{Z}[G]\cdot\operatorname{B}\), \(M_{3}^{\prime}=\mathbb{Z}[G]\cdot(1-\tau)\operatorname{B}\) and \(M_{4}=\mathbb{Z}[G]\cdot T_{\tau}\operatorname{B}\). By Lemma 7.3 we have the following.

**Lemma 7.6**.: _The following diagram of \(G\)-modules and \(G\)-maps is commutative with exact rows and columns._

_The right column is that of Lemma 7.3, where \(\operatorname{Ind}_{\mathcal{J}^{\prime}}^{\mathcal{G}}\mathbb{Z}=\mathbb{Z}[G]T_{\tau}(1+\sigma^{\frac{s}{2}})\), and \(\operatorname{Ind}_{\mathcal{J}}^{\mathcal{G}}\mathbb{Z}=\mathbb{Z}[G]T_{\tau}\). Here \(\mathcal{K}:=\ker(\cdot\operatorname{B}:\mathbb{Z}[G]\to M_{3})\) and \(\mathcal{K}^{\prime}:=\ker(\cdot\operatorname{B}:\mathbb{Z}[G]\cdot(1-\tau)\to M_{3}^{\prime})\)._
Moreover, the \(\mathbb{Z}\)-ranks of \(\mathcal{K}^{\prime}\), \(\mathbb{Z}[G]\cdot(1-\tau)\), \(M_{3}^{\prime}\) are \((s-1)(d-1)\), \(s(d-1)\), \((d-1)\), resp., the \(\mathbb{Z}\)-ranks of \(\mathcal{K}\), \(\mathbb{Z}[G]\), \(M_{3}\) are \(sd-\frac{s}{2}-(d-1)\), \(sd\), \(\frac{s}{2}+(d-1)\), resp., and the \(\mathbb{Z}\)-ranks of \(\mathbb{Z}[G]\cdot T_{\tau}(1+\sigma^{\frac{s}{2}})\), \(\mathbb{Z}[G]\cdot T_{\tau}\), \(M_{4}\) are \(\frac{s}{2}\), \(s\), \(\frac{s}{2}\), resp._

**Proof.** For commutativity, as the first set of down-arrows are inclusions, as are the first right-arrows, the only question is the lower right square. Since \(T_{\tau}\) is central in \(\mathbb{Z}[G]\), \(\operatorname{B}\cdot T_{\tau}=T_{\tau}\cdot\operatorname{B}\) and so the lower right square commutes. For exactness, the three downward arrows given by \(\cdot\operatorname{B}\) are surjective from the decomposition of \(M_{3}\) in Lemma 6.4. The first two columns are then exact from the choices of \(\mathcal{K}\) and \(\mathcal{K}^{\prime}\) as kernels of their respective \(\cdot\operatorname{B}\) maps. The last column is exact from Lemma 7.3, as mentioned above in the statement of this lemma. The bottom row is the second short exact sequence in the 4-term Positselski sequence in the semi-direct case, and above it is the Hilbert-90 short exact sequence from which it was constructed. With all other rows and columns being exact, it then follows by the usual diagram chase that the top row is also exact. Finally the \(\mathbb{Z}\)-ranks of the bottom row are given in the remark following Lemma 7.3. The \(\mathbb{Z}\)-ranks of the two right columns are clear by previous work, so the ranks of \(\mathcal{K}\) and \(\mathcal{K}^{\prime}\) follow by arithmetic.

As an application of Lemma 7.6 we can characterize \(N_{3}^{n}(E/F)\) via the corestriction.

**Theorem 7.7**.: _In the above notation with \(E/F\) being a semi-direct extension, we have the following exact sequence calculating the cohomological kernel \(H^{n+1}(E/F,\mu_{d})\),_ \[\frac{H^{n}(\widetilde{F},\mu_{d})}{\operatorname{cor}_{\widetilde{E}/\widetilde{F}}H^{n}(\widetilde{E},\mu_{d})}\to H^{n+1}(F,\mu_{d})\to H^{n+1}(E,\mu_{d}).\] _The first map is injective provided \(H^{n+1}(G,\mathcal{K})\to H^{n+1}(\widetilde{E},\mu_{d})\) is injective._

**Proof.** Consider the following diagram. The middle row is exact by Theorem 6.13. The two columns are exact by the long exact sequence of cohomology applied to the middle and right columns of the diagram in Lemma 7.6. The diagram commutes since all maps are those induced by the diagram in Lemma 7.6. The map \(H^{n+1}(F^{\prime},\mu_{d})\to H^{n+1}(\widetilde{F},\mu_{d})\) is injective since \([\widetilde{F}\,:\,F^{\prime}]\) is prime to \(d\). Therefore the map \(H^{n}(\widetilde{F},\mu_{d})\to H^{n}(\mathcal{G},M_{4})\) is surjective. This gives a surjective map \(H^{n}(\widetilde{F},\mu_{d})\to\ker(H^{n+1}(F,\mu_{d})\to H^{n+1}(E,\mu_{d}))\). The exactness of the sequence follows by noting the diagram shows \(\operatorname{cor}_{\widetilde{E}/\widetilde{F}}(H^{n}(\widetilde{E},\mu_{d}))\) has trivial image in \(H^{n+1}(F,\mu_{d})\). For the second statement, if \(H^{n+1}(G,\mathcal{K})\to H^{n+1}(\widetilde{E},\mu_{d})\) is injective then \(H^{n}(\widetilde{E},\mu_{d})\to H^{n}(G,M_{3})\) is surjective and the result follows by the exactness of the second row.

**Remark.** It is reasonable to conjecture that \(H^{n+1}(G,\mathcal{K})\to H^{n+1}(\widetilde{E},\mu_{d})\) is injective.
But we need to understand \(\mathcal{K}\) better. This question will be studied in future work.

The section closes by looking at the case where \(s=2\). We have \((1+\sigma)(1-\tau)=(1+\sigma)(1-\sigma\tau)=(1-\tau)(1-\sigma\tau)\) and we find \[C_{i} = \tau^{i}(1+\sigma)(1-\tau)=\tau^{i}(1+\sigma)(1-\sigma\tau)=\tau^{i}(1-\tau)(1-\sigma\tau)\] \[\mathrm{B} = (1-\sigma)\tau^{\frac{d+1}{2}}=\tau^{\frac{d+1}{2}}-\tau^{-\frac{d+1}{2}}\sigma=\tau^{\frac{d+1}{2}}(1-\tau^{-1}\sigma)=\tau^{\frac{d+1}{2}}(1-\sigma\tau).\] From this we find that \(M_{3}=\mathbb{Z}[J]\cdot(1-\sigma\tau)=\mathbb{Z}[J]\cdot\mathrm{B}\). Looking at \(M_{3}\) in this way may make what is going on when \(s=2\) more transparent (in particular the relationship to \(E^{\prime}\).) Even more, we have noted earlier that both \(\mathbb{Z}[J]\cdot(1\pm\sigma\tau)\) have \(\mathbb{Z}\)-rank \(d\), and therefore as \((1-\sigma\tau)(1+\sigma\tau)=0\) we know the kernel of the map \(\cdot(1\pm\sigma\tau)\) is \(\mathbb{Z}[J]\cdot(1\mp\sigma\tau)\). This leads to the following result.

**Lemma 7.8**.: _When \(s=2\) we have two exact sequences_ \[0\to M_{3}\to\mathbb{Z}[G]\to\mathrm{Ind}^{\mathcal{G}}_{\mathcal{H}^{\prime}}\mathbb{Z}\to 0\] _and_ \[0\to\mathrm{Ind}^{\mathcal{G}}_{\mathcal{H}^{\prime}}\mathbb{Z}\to\mathbb{Z}[G]\to M_{3}\to 0.\] _The second sequence coincides with the middle column of the diagram of Lemma 7.6 up to an automorphism of \(M_{3}\) and therefore \(\mathcal{K}\cong\mathrm{Ind}^{\mathcal{G}}_{\mathcal{H}^{\prime}}\mathbb{Z}\) in this case._

**Proof.** We know that \(\mathbb{Z}[J]\cdot(1+\sigma\tau)\cong\mathrm{Ind}^{\mathcal{G}}_{\mathcal{H}^{\prime}}\mathbb{Z}\) and \(M_{3}=\mathbb{Z}[J]\cdot(1-\sigma\tau)\). The exact sequences follow as the kernel of the map \(\cdot(1\pm\sigma\tau)\) is \(\mathbb{Z}[J]\cdot(1\mp\sigma\tau)\). For the second statement, in Lemma 7.6 the map \(\mathbb{Z}[G]\to M_{3}\) is multiplication by \(\mathrm{B}\) where \(\mathrm{B}=\tau^{\frac{d+1}{2}}(1-\sigma\tau)\), whereas it is multiplication by \((1-\sigma\tau)\) in the lemma. However, multiplication by \(\tau^{\frac{d+1}{2}}\) is an automorphism of \(M_{3}\) so the result follows. \(\square\)

In the dihedral case (\(s=2\)), the exact sequence of Lemma 7.8 and the long exact cohomology sequence give the first column of the diagram in the proof of Theorem 7.7, \[\begin{array}{c}\cdots\to H^{n}(E^{\prime},\mu_{d})\to H^{n}(\widetilde{E},\mu_{d})\to H^{n}(\mathcal{G},M_{3})\\ \\ \to H^{n+1}(E^{\prime},\mu_{d})\to H^{n+1}(\widetilde{E},\mu_{d})\to H^{n+1}(\mathcal{G},M_{3})\cdots.\end{array}\] However, \([\widetilde{E}:\,E^{\prime}]=2\) and \(d\) is odd, so we know \(H^{n+1}(\mathcal{G},\mathcal{K})=H^{n+1}(E^{\prime},\mu_{d})\to H^{n+1}(\widetilde{E},\mu_{d})\) is injective. This gives the following application of Theorem 7.7.

**Theorem 7.9**.: _In the dihedral case (\(s=2\)) the cohomological kernel \(H^{n+1}(E/F,\mu_{d})\) is given by_ \[(*_{8})\quad\frac{H^{n}(\widetilde{F},\mu_{d})}{\mathrm{cor}_{\widetilde{E}/\widetilde{F}}H^{n}(\widetilde{E},\mu_{d})}\xrightarrow{\cong}\ker(H^{n+1}(F,\mu_{d})\to H^{n+1}(E,\mu_{d})).\]

**Proof.** Since \(H^{n+1}(\mathcal{G},\mathcal{K})=H^{n+1}(E^{\prime},\mu_{d})\to H^{n+1}(\widetilde{E},\mu_{d})\) is injective the result is immediate by Theorem 7.7. \(\square\)

## Acknowledgements.

The author was supported by the James B. Axe Foundation. The author was also supported by the University of California at Santa Barbara as a graduate student, while many of these ideas were developed. These results appeared in the author's PhD dissertation.
The author is very grateful to Bill Jacob for guidance and support during every step of the development of this paper.
2308.11257
HopPG: Self-Iterative Program Generation for Multi-Hop Question Answering over Heterogeneous Knowledge
The semantic parsing-based method is an important research branch for knowledge-based question answering. It usually generates executable programs based on the question and then executes them to reason out answers over a knowledge base. Benefiting from this inherent mechanism, it has advantages in performance and interpretability. However, traditional semantic parsing methods usually generate a complete program before executing it, which struggles with multi-hop question answering over heterogeneous knowledge. On the one hand, generating a complete multi-hop program relies on multiple heterogeneous supporting facts, and it is difficult for generators to understand these facts simultaneously. On the other hand, this approach ignores the semantic information of the intermediate answers at each hop, which is beneficial for subsequent generation. To alleviate these challenges, we propose a self-iterative framework for multi-hop program generation (HopPG) over heterogeneous knowledge, which leverages the previous execution results to retrieve supporting facts and generate subsequent programs hop by hop. We evaluate our model on MMQA-T^2, and the experimental results show that HopPG outperforms existing semantic parsing-based baselines, especially on multi-hop questions.
Yingyao Wang, Yongwei Zhou, Chaoqun Duan, Junwei Bao, Tiejun Zhao
2023-08-22T08:00:50Z
http://arxiv.org/abs/2308.11257v2
HopPG: Self-Iterative Program Generation for Multi-Hop Question Answering over Heterogeneous Knowledge ###### Abstract The semantic parsing-based method is an important research branch for knowledge-based question answering. It usually generates executable programs lean upon the question and then conduct them to reason answers over a knowledge base. Benefit from this inherent mechanism, it has advantages in the performance and the interpretability. However, traditional semantic parsing methods usually generate a complete program before executing it, which struggles with multi-hop question answering over heterogeneous knowledge. On one hand, generating a complete multi-hop program relies on multiple heterogeneous supporting facts, and it is difficult for generators to understand these facts simultaneously. On the other hand, this way ignores the semantic information of the intermediate answers at each hop, which is beneficial for subsequent generation. To alleviate these challenges, we propose a self-iterative framework for multi-**hop** program **g**eneration (HopPG) over heterogeneous knowledge, which leverages the previous execution results to retrieve supporting facts and generate subsequent programs hop by hop. We evaluate our model on MMQA-T\({}^{21}\), and the experimental results show that HopPG outperforms existing semantic-parsing-based baselines, especially on the multi-hop questions. ## 1 Introduction Question answering is a fundamental task and plays a crucial role in the field of natural language processing Cui et al. (2020); Kwiatkowski et al. (2019); Liu et al. (2020); Choi et al. (2018); Fan et al. (2019). In recent years, question-answering tasks based on heterogeneous knowledge (HQA) have increasingly gained the attention of researchers Chen et al. (2020); Ma et al. (2021); Zhu et al. (2021); Talmor et al. (2021); Chen et al. (2021). These tasks require models to perform multi-hop reasoning on different structured knowledge, i.e., tables and texts. One category of the existing HQA method performs implicit answer reasoning, which takes the question and knowledge as input and performs reasoning in the semantic space, and then directly outputs the answer Pan et al. (2020); Sun et al. (2021); Wang et al. (2022); Eisenschlos et al. (2021); Kumar et al. (2021). Although this approach has proven effective in HQA tasks, it lacks interpretability, scalability, and symbolic reasoning abilities. In contrast, the semantic parsing-based (SP-based) approach explicitly derives answers by generating and executable programs, remedying the above-mentioned deficiencies and enabling researchers to monitor and improve each step of the derivation process. SP-based methods have been widely used in question-answering tasks on homogeneous knowledge sources, such as tables, knowledge graphs, and texts Yih et al. (2014); Bao et al. (2014); Choi et al. (2018); Abdelaziz et al. (2021); Zhou et al. (2022). Nevertheless, the SP-based question-answering methods over heterogeneous knowledge still require further exploration. Recently, Zhou et al. (2022) introduced UniRPG, a SP-based model designed for HQA. They defined a set of general atomic and higher-order operations for discrete reasoning over heterogeneous knowledge resources. During the program generation process, UniRPG takes questions and supporting facts as input pairs of BART Lewis et al. (2020) to generate a complete program in a single step. 
Although UniRPG has the ability to generate programs with tables and passages as supporting facts, it still struggles with multi-hop question answering for the following two reasons. First, generating a complete multi-hop program usually depends on multiple heterogeneous supporting facts, making it challenging for the model to receive and understand all the facts simultaneously due to the length limitation. On the other hand, intuitively, the reasoning results from the current hop are useful for selecting supporting facts and program generation in the next step. However, generating a complete program sequence before executing ignores the interaction between the reasoning results of the current hop and the subsequent program generation. To tackle these issues, we introduce HopPG, an iterative program generation framework designed explicitly for multi-hop answer reasoning based on heterogeneous knowledge. HopPG leverages the execution results of the previous program to select supporting facts and generate subsequent programs iteratively. In comparison to UniRPG, HopPG reduces the knowledge complexity used in each program generation step and incorporates information from the previous steps, enhancing the model's capability for multi-hop reasoning. In this paper, we utilize a subset of the MMQA dataset (Talmor et al., 2021) for evaluating HopPG. Specifically, we only focus on questions based on tables and texts, which we refer to as MMQA-T\({}^{2}\), excluding those requiring images as knowledge. It possesses the following notable characteristics: 1) questions are based on heterogeneous knowledge, 2) questions require multi-hop reasoning, and 3) MMQA-T\({}^{2}\) provides detailed annotations of the supporting facts and intermediate results for each hop, allowing us to construct more accurate pseudo programs. The experimental results show that HopPG outperforms existing SP-based baselines, especially on the multi-hop questions. Our contributions are summarized as follows: * We propose HopPG, a self-iterative program generation framework for multi-hop question answering over heterogeneous knowledge. The framework successfully addresses the limitations of existing SP-based models. * We collect the MMQA-T\({}^{2}\) dataset based on MMQA, which only contains multi-hop questions over tabular and textual knowledge. Moreover, we construct pseudo-multi-hop programs for each question in MMQA-T\({}^{2}\) to train the program generator with weak supervision. * We conduct extensive experiments and ablation studies on the MMQA-T\({}^{2}\) dataset. The experimental results show that HopPG outperforms the existing SP-based QA model, especially on the multi-hop questions. ## 2 Related Work ### Question Answering over Heterogeneous Knowledge Previous works have attempted to leverage knowledge with various background knowledge for question-answering, e.g., knowledge graphs and texts (Sun et al., 2019; Han et al., 2020). However, these researches are limited to using one type of knowledge as auxiliary information during answer reasoning and have not fully explored multi-hop reasoning across heterogeneous knowledge (HQA). To fill this gap, Chen et al. (2020) first propose the HybridQA dataset, which provides a WiKiTable accompanied by hyperlinked Wikipedia passages as evidence for each question. Based on HybridQA, Wenhu Chen (2021) proposed the OTT-QA dataset, requiring the system to retrieve relevant tables and texts for their given question. Additionally, Zhu et al. (2021) and Chen et al. 
(2021) introduced TAT-QA and FinQA, both requiring numerical reasoning over heterogeneous data.

### Semantic Parsing-based Methods

The semantic parsing-based methods reason out answers by translating questions into executable programs such as SPARQL (Xiong et al., 2022) and SQL (Hui et al., 2022). Previous semantic parsing-based question-answering methods usually operate over homogeneous knowledge resources, i.e., texts (Zhou et al., 2022), tables (Liu et al., 2022) and knowledge graphs (Yih et al., 2016). Zhou et al. (2022) proposed UniRPG, which first applies the semantic parsing-based method to question answering over heterogeneous knowledge including tables and texts. Inspired by these works, we propose HopPG in this paper to address the limitations of UniRPG in generating multi-hop programs based on heterogeneous knowledge.

## 3 Methodology

### Task Definition

Multi-hop question answering over heterogeneous knowledge, i.e., tables and texts, aims to retrieve the supporting facts from the given knowledge and derive the answer to the question. Given an \(H\)-hop question \(\mathcal{Q}\) and \(K\) fact candidates \(\mathcal{T}=\{t^{1},...t^{K}\}\), the model is required to derive the final answer \(a^{H}\) from \(\mathcal{T}\). Apart from \(a^{H}\), we use \(\{a^{1},...,a^{H-1}\}\) to represent the intermediate reasoning results at each hop. Each \(a^{h}\), where \(h\leq H\), can be a table cell, a text span, or a math calculation result. The task is formalized as follows: \[a^{H}=\arg\max P(a^{H}|\mathcal{Q},\mathcal{T};\theta). \tag{1}\] In this work, \(\mathcal{T}\) contains a table and a set of texts. We assume that \(t^{1}\) represents the table and the others \(\{t^{2},...,t^{K}\}\) are the \(K\)-\(1\) texts. Specifically, the table \(t^{1}\) consists of \(m\times n\) cells \(\{c_{ij}\}\), where \(m\) and \(n\) are the numbers of rows and columns.

### Definition of Multi-hop Program

#### 3.2.1 Operations

HopPG generates logical programs based on the pre-defined operations. Aligning with the previous work [23], our operation set contains 4 atomic operations and 11 high-order operations, including CELL, SPAN, CELL_VALUE, SPAN_VALUE, KV, MULTISPAN, COUNT, SUM, AVG, ARGMAX, and ARGMIN. Based on these operations, we add one atomic operation and two high-order operations to enable the programs to efficiently handle multi-hop questions and more reasoning types. The extended operations are listed as follows:

* YESNO: This is an atomic operation designed for "yes or no" reasoning. HopPG converts this reasoning type into a text span extraction problem. Specifically, we prepend the two words "yes" and "no" to the input sequence of the generator. The arguments of YESNO are the same as those of the other atomic operations, namely \((s,e)\). During execution, YESNO checks the extracted span: if its content is "no", YESNO returns "no" as the answer; in all other cases, the operation returns "yes".
* COMPOSE: This operation means the current hop is an intermediate hop and the answer reasoning requires iteration. The argument of COMPOSE is one of the atomic operations, CELL or SPAN, and COMPOSE directly returns the result of the atomic operation as the current-hop result.
* INTERSECT: This operation also means the current hop is an intermediate hop. Its argument is the high-order operation MULTISPAN, and INTERSECT directly returns the result of MULTISPAN, a set of table cells or text spans.
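To make the semantics of the three extended operations concrete, the following Python sketch shows one possible way to execute them. The function and variable names are illustrative assumptions and do not come from the paper's implementation; the overlap of the two hops for "intersect" questions is described in the next subsection.

```python
# A minimal sketch (not the authors' code) of how the three extended
# operations could be executed. `tokens` is the serialized input sequence of
# the generator, and span indices (s, e) point into it, mirroring the other
# atomic operations. All names here are illustrative.

MULTI_HOP_OPS = {"COMPOSE", "INTERSECT"}

def execute_yesno(tokens, s, e):
    """YESNO: extract the span and map it to a yes/no answer."""
    span = " ".join(tokens[s:e + 1]).strip().lower()
    return "no" if span == "no" else "yes"

def execute_compose(atomic_result):
    """COMPOSE: pass the atomic result on as the intermediate answer."""
    return atomic_result

def execute_intersect(multispan_result):
    """INTERSECT: pass the MULTISPAN result (a set of cells/spans) on."""
    return set(multispan_result)

def finalize_intersect(hop1_result, hop2_result):
    """For 'intersect' questions, the final answer is the overlap of the hops."""
    return set(hop1_result) & set(hop2_result)

def is_multihop(program):
    """Iteration detection: check the outermost operation of a program."""
    outer_op = program.strip().split("(", 1)[0]
    return outer_op in MULTI_HOP_OPS
```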
#### 3.2.2 Program Templates

We define multi-hop program templates for questions of different reasoning types. These templates are used to construct pseudo programs for weakly supervised training and to constrain program generation. We decompose the multi-hop reasoning into multiple single-hop programs. In this work, we set the maximum hop count \(H\)=2. The defined templates are listed in Table 1. Specifically, for questions of "compose" reasoning types, HopPG directly outputs the execution results of the 2-hop program as the final answer. In contrast, when tackling "intersect" questions, HopPG compares the results from the two hops and outputs the overlapping cells or spans as the final answers.

Figure 1: The framework of HopPG. This figure gives a 2-hop question as the example.

We execute the pseudo programs constructed using our templates and evaluate the question-answering performance on the development set of MMQA-T\({}^{2}\); the EM and F1 scores are 91.27% and 93.89%, respectively.

### Framework of HopPG

We first decompose \(H\)-hop reasoning into multiple single hops, and define the corresponding programs as \(\mathcal{P}\)=\(\{p^{1},...,p^{H}\}\). Based on our program templates, HopPG generates the program \(p^{h}\) hop by hop in an iterative way. As Figure 1 shows, the framework of our HopPG mainly contains three modules, the _fact retriever_, the _program generator_, and the _program executor_, together with an _iteration detection_ process. During \(h\)-hop reasoning, the **fact retriever** first selects a supporting fact \(t^{h}\) for the current hop based on the question \(Q\) and the previous-hop result \(a^{h-1}\). After that, the **program generator** receives \(Q\), \(a^{h-1}\) and \(t^{h}\) as inputs and generates the program \(p^{h}\). Subsequently, \(p^{h}\) is executed by the **program executor** and \(a^{h}\) can be derived. Notably, for programs comprising multi-level operations, the executor executes from the atomic operations to the high-order operations. At this point, the **iteration detection** process checks if the high-order operation of \(p^{h}\) is a multi-hop operation2. If it is, HopPG returns \(a^{h}\) to iterate the above process. Otherwise, HopPG terminates the iteration and outputs \(a^{h}\) as the final answer.

Footnote 2: COMPOSE and INTERSECT, which are defined in Section 3.2

Among these modules, the fact retriever and the program generator are trainable and are trained separately. We will introduce their details in the following sections.

#### 3.3.1 Supporting Fact Retriever

The retriever in HopPG aims to select the supporting fact \(t^{h}\) for the \(h\)-hop program generation from the provided candidates, which include a table and a set of texts. Following (Yoran et al., 2022), we fine-tune the BART-large (Lewis et al., 2020) model using the training set of MMQA-T\({}^{2}\) as our retriever. The input of the retriever is a sequence consisting of the question \(Q\), the golden execution result of the previous hop \(\bar{a}^{h-1}\), and one of the fact candidates \(t^{i}\): \[\text{Inp}_{R}^{h}=[\langle s\rangle;Q;\langle\backslash s\rangle;\bar{a}^{h-1};\langle\backslash s\rangle;t^{i}] \tag{2}\] Notably, the table \(t^{1}\) is flattened by connecting its rows. For a first-hop example, the previous-hop execution result \(\bar{a}^{0}\) is set to "None".
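The hop-by-hop procedure described above can be summarized by the schematic loop below. This is a minimal sketch: the helpers `retrieve_fact`, `generate_program`, `execute`, and `is_multihop` are assumed stand-ins for the retriever, generator, executor, and iteration detector, and the sketch omits the special handling of "intersect" questions, where the final answer is the overlap of the two hops' results.

```python
# Schematic sketch of HopPG's self-iterative loop (illustrative only).
# `retrieve_fact`, `generate_program`, `execute`, and `is_multihop` stand in
# for the retriever, generator, executor, and iteration detector; they are
# assumptions for illustration, not actual APIs from the paper's code.

MAX_HOPS = 2  # the paper sets the maximum hop count H = 2

def hoppg_answer(question, candidates):
    prev_answer = "None"          # a^0 is set to "None" for the first hop
    answer = None
    for hop in range(1, MAX_HOPS + 1):
        fact = retrieve_fact(question, prev_answer, candidates)   # t^h
        program = generate_program(question, prev_answer, fact)   # p^h
        answer = execute(program, fact)                           # a^h
        if not is_multihop(program):
            break                 # current result is the final answer
        prev_answer = answer      # feed a^h into the next iteration
    return answer
```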
The retriever receives the input sequences of all candidates and outputs a score vector \(\boldsymbol{\delta}\)=\((s^{1},...,s^{K})\) for them; the model is then fine-tuned using the following loss function, where \(l\) is the index of the golden fact: \[\mathcal{L}=\texttt{CrossEntropy}(l,\texttt{Softmax}(\boldsymbol{\delta})) \tag{3}\] After tuning, the supporting fact retrieval accuracy of our retriever is 90.7%. During the program generation process, the tuned retriever ranks all candidate inputs and selects the fact \(t^{h}=\arg\max(\boldsymbol{\delta})\) as the supporting fact of the current hop \(h\).

\begin{table} \begin{tabular}{l|l|l} \hline \hline \multirow{2}{*}{**Reasoning Type**} & \multicolumn{2}{c}{**Multi-hop Program Templates**} \\ \cline{2-3} & HOP-1 & HOP-2 \\ \hline \multirow{4}{*}{Span Extraction} & \(\texttt{CELL}\ (s,e)\) & - \\ & \(\texttt{CELL\_VALUE}\ (s,e)\) & - \\ & \(\texttt{SPAN}\ (s,e)\) & - \\ & \(\texttt{MULTISPAN}(\texttt{CELL}_{1}/\texttt{SPAN}_{1},...,\texttt{CELL}_{n}/\texttt{SPAN}_{n})\) & - \\ \hline YesNo & YESNO \((s,e)\) & - \\ \hline \multirow{2}{*}{Compare} & \(\texttt{ARGMAX}(\texttt{KV}_{1},\texttt{KV}_{2})\) & - \\ & \(\texttt{ARGMIN}(\texttt{KV}_{1},\texttt{KV}_{2})\) & - \\ \hline \multirow{4}{*}{Calculation} & \(\texttt{SUM}(\texttt{CV}_{1},...,\texttt{CV}_{n})\) & - \\ & \(\texttt{AVG}(\texttt{CV}_{1},...,\texttt{CV}_{n})\) & - \\ & \(\texttt{COUNT}(\texttt{CV}_{1},...,\texttt{CV}_{n})\) & - \\ \hline \multirow{2}{*}{\begin{tabular}{l} Compose + Span Extraction \\ Compose + YesNo \\ \end{tabular} } & \(\texttt{COMPOSE}(\texttt{CELL}/\texttt{SPAN})\) & \(\texttt{CELL}/\texttt{SPAN}/\texttt{MULTISPAN}\) \\ \hline Intersect & INTERSECT(MULTISPAN) & MULTISPAN \\ \hline \hline \end{tabular} \end{table} Table 1: The defined multi-hop program templates for MMQA-T\({}^{2}\).

#### 3.3.2 Program Generator

The program generator aims to generate the corresponding program for each hop. In this work, the generator is a BART-based model equipped with a structure-aware knowledge reader (Zhang et al., 2020), which is designed to enhance the table understanding ability of the encoder. We use the training set of MMQA-T\({}^{2}\) together with our constructed pseudo programs to train the generator. Specifically, for the \(h\)th-hop program generation, the input sequence consists of the question \(Q\), the execution result of the previous-hop program \(a^{h-1}\), and the supporting fact \(t^{h}\) selected by the retriever. We connect a text span _"yes or no"_ to the question to transfer the YESNO reasoning type into a span extraction problem. Formally, the \(h\)th-hop input of our generator is represented as follows: \[\text{Inp}_{G}^{h}=[\langle s\rangle;\textit{yes or no};Q;\langle\backslash s\rangle;a^{h-1};\langle\backslash s\rangle;t^{h}] \tag{4}\] The input sequence is fed into the structure-aware knowledge reader, which injects table structure information into the self-attention layers of the BART encoder with structure-aware attention. The reader learns the input and outputs the representations of the input tokens. Then, we feed it into the encoder to learn the representation \(\mathbf{K}=\{\mathbf{k}_{i}\}_{i=1}^{L}\), where \(L\) is the length of the input sequence. Subsequently, the representation vectors \(\mathbf{K}\) of the input tokens are used to decode the program based on our defined operations.
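A minimal sketch of how the serialized inputs of Eqs. (2) and (4) and the retriever objective of Eq. (3) could be implemented is given below. `score_model` is an assumed wrapper (e.g., a BART encoder with a scoring head) that maps one serialized candidate to a scalar; it is not an API from the paper.

```python
# Illustrative sketch of the input serialization (Eqs. 2 and 4) and the
# retriever's training objective (Eq. 3). `score_model` is an assumed
# scoring wrapper around a BART encoder and is not an actual API.

import torch
import torch.nn.functional as F

def build_retriever_input(question, prev_answer, candidate):
    # Eq. 2: <s>; Q; </s>; a^{h-1}; </s>; t^i   (tables are flattened row by row)
    return f"<s> {question} </s> {prev_answer} </s> {candidate}"

def build_generator_input(question, prev_answer, fact):
    # Eq. 4: <s>; "yes or no"; Q; </s>; a^{h-1}; </s>; t^h
    return f"<s> yes or no {question} </s> {prev_answer} </s> {fact}"

def retriever_loss(score_model, question, prev_answer, candidates, gold_index):
    # Score every candidate fact and apply cross-entropy against the gold index.
    scores = torch.stack([
        score_model(build_retriever_input(question, prev_answer, t))
        for t in candidates
    ])                                    # shape: (K,)
    target = torch.tensor([gold_index])
    return F.cross_entropy(scores.unsqueeze(0), target)
```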
For the generator training, we collect the golden supporting fact and the corresponding program for each hop of the questions. All these data are utilized to train the program generator. #### 3.3.3 Program Executor To perform answer derivation, we implement a symbolic program executor for HopPG that executes the generated programs based on their meanings. When dealing with programs comprising multiple levels of operations, the executor executes from the atomic operations to the high-order operations. #### 3.3.4 Iteration Detector In HopPG, we add an iteration detection process after the program execution to determine whether further hop generation and reasoning are needed. During the inference phase of HopPG, the iteration detection process checks the outermost operation of the current generated program to determine if it is one of the multi-hop operations, which include COMPOSE and INTERSECT. If the operation belongs to multi-hop operations, HopPG performs the next hop generation iteratively. Otherwise, the current result is considered as the final answer. ## 4 Experiments ### Dataset The MMQA-T\({}^{2}\) used in this paper is a subset of MMQA (Talmor et al., 2021). Specifically, we collect all questions based on tables and texts and exclude questions using images from MMQA, in a total of 15688 training instances and 1501 de \begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Knowledge** & **Question Type** & **Hop** & **Distribution** \\ \hline \multirow{5}{*}{Only Table} & Span Extraction & 1 & 42.5\% \\ & YesNo & 1 & 3.1\% \\ & Compare & 1 & 5.8\% \\ & Calculation & 1 & 0.6\% \\ & Intersect & 2 & 2.8\% \\ & Compose + Span Extraction & 2 & 6.0\% \\ \hline \multirow{2}{*}{Only Text} & Span Extraction & 1 & 10.7\% \\ & YesNo & 1 & 5.1\% \\ \hline \multirow{5}{*}{Table + Text} & Interest & 2 & 4.1\% \\ & Compose + Span Extraction (table to text) & 2 & 9.4\% \\ & Compose + Span Extraction (text to table) & 2 & 4.8\% \\ & Compose + YesNo (table to text) & 2 & 2.0\% \\ & Compare (Compose + Span Extraction (text to table)) & 3 & 3.0\% \\ \hline \hline \end{tabular} \end{table} Table 2: The question type distribution of the training set of MMQA-T\({}^{2}\). \begin{table} \begin{tabular}{c|c c c c c c c|c} \hline \hline & **Extraction** & **Compare** & **Compose** & **YesNo** & **Calculation** & **Intersect** & **Compose\_Compare** & **Total** \\ \hline Train & 7512 & 819 & 3463 & 1170 & 101 & 978 & 425 & 14122 \\ Dev & 834 & 90 & 346 & 130 & 11 & 108 & 47 & 1566 \\ Test & 748 & 70 & 383 & 142 & 6 & 88 & 64 & 1501 \\ \hline \hline \end{tabular} \end{table} Table 3: Basic statistics of MMQA-T\({}^{2}\). velopment instances. Each question in MMQA is provided with 1 table together with 10 texts as candidate facts. The question type of the collect instances is shown in Table 2. Based on the question types, we re-split the training instances we collected in a ratio of 9:1 as the training set of the development set of MMQA-T\({}^{2}\), respectively. The collected development instances from MMQA are directly used as the test set of MMQA-T\({}^{2}\). We give the final basic statistics of MMQA-T\({}^{2}\) in Table 3. To further demonstrate MMQA-T\({}^{2}\), we present the distribution of the training set questions in Table 2 based on knowledge utilized, reasoning type, and the number of hops. It can be observed that 38.5% of the questions are multi-hop, with 23.3% of those requiring both tables and text to derive answers. 
These statistics indicate that MMQA-T\({}^{2}\) is a suitable dataset for evaluating HopPG, designed to improve question-answering performance in multi-hop reasoning over heterogeneous knowledge. ### Implementation Details The program generator of HopPG is initialized using BART-base and optimized with AdamW. The training settings are consistent with UniRPG. Specifically, the learning rate, batch size and weight decay are set to 1e-4, 128 and 0.01, respectively. When generating programs, we set the Beam Size of the beam search method as 4. The experiments are conducted on NVIDIA A100 GPU. ### Baselines AutoRouting and Implicit-DecompThese two baselines are from Talmor et al. (2021). We compare HopPG's question-answering performance with these baselines on the original MMQA dataset. For questions requiring images, we directly use the prediction results from Implicit-Decomp. UniRPGWe reproduce the UniRPG-base version to directly generate complete multi-hop programs based on our operations. We select the top 2 candidate knowledge ranked by retrieval scores as the supporting facts. Then the question and the two facts are concatenated as input for UniRPG. In addition, we cut off the over-length input directly. ### Main Results Table 4 shows the question-answering results of our HopPG and the baseline models. We use EM and F1 scores as the evaluation metrics and report the results on both the original MMQA and MMQA-T\({}^{2}\) datasets. For image-based questions in MMQA, we use the predictions of Implicit-Decomp. As the table shows, UniRPG, the first semantic parsing-based method on HQA tasks, achieves significant performance on MMQA-T\({}^{2}\) by 63.09 EM and 70.29 F1. It is proved that UniRPG can effectively solve the answer reasoning over tabular and textual knowledge by generating executable programs. Based on the results of UniRPG, our HopPG further brings improvements on MMQA-T\({}^{2}\). The EM scores increase by 1.37 on the single-hop questions and 1.22 on the multi-hop questions. The improvements are from the ability of HopPG to reduce the complexity of the supporting facts, and sufficiently utilize previous-hop execution results during program generations. These improvements demonstrate the effectiveness of HopPG. \begin{table} \begin{tabular}{l|c c c|c c} \hline \hline **Model** & \multicolumn{2}{c|}{**MMQA**} & \multicolumn{2}{c}{**MMQA-T\({}^{2}\)**} \\ \hline & Overall & Single-hop & Multi-hop & Overall & Single-hop & Multi-hop \\ \hline AutoRouting & 42.10 / 49.05 & - & - & - & - \\ Implicit-Decomp & 48.80 / 55.49 & 51.61 / 58.36 & 44.59 / 51.19 & 54.30 / 62.15 & 57.34 / 64.75 & 46.23 / 55.26 \\ UniRPG\({}^{\dagger}\) & 53.87 / 60.20 & 57.22 / 64.05 & 48.87 / 54.45 & 63.09 / 70.29 & 65.05 / 72.72 & 56.69 / 63.84 \\ \hline HopPG (ours) & **54.61** / **61.00** & **58.38** / **64.92** & **48.98** / **55.18** & **63.76** / **71.14** & **66.42** / **73.54** & **57.91** / **64.77** \\ \hline \hline \end{tabular} \end{table} Table 4: Results on the complete MMQA and the MMQA-T\({}^{2}\) datasets. 
\begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Type** & **Implicit-Decomp** & **UniRPG\({}^{\dagger}\)** & **HopPG** \\ \hline TableQ & **72.35** / **79.89** & 63.41 / 69.93 & 65.04 / 70.33 \\ TextQ & 49.65 / 50.01 & 65.60 / 73.64 & **67.13** / **75.19** \\ Compose(TableQ,TextQ) & 50.00 / 56.70 & 70.73 / 75.80 & **71.95** / **75.99** \\ Compose(TextQ,TableQ) & 37.96 / 49.89 & 50.93 / 60.67 & **56.48** / **68.05** \\ Intersect(TableQ,TextQ) & 48.98 / 54.24 & 44.89 / 48.63 & **48.98** / **55.24** \\ \hline \hline \end{tabular} \end{table} Table 5: Results on questions with different types of MMQA-T\({}^{2}\). ### Ablation Studies To provide a more detailed and intuitive demonstration of HopPG's performance on different question types, we compare the question-answering results between HopPG and baselines in Table 5, where the results are reported on questions with different hop numbers and knowledge sources. #### 4.5.1 Questions with Homogeneous Knowledge The results in the table indicate that Implicit-Decomp performs excellently on table-based questions, because it utilizes the table pre-trained model, TAPAS (Herzig et al., 2020), for table-based question answering. Compared to TAPAS, semantic parsing-based models like UniRPG and HopPG offer the advantage of interpretability and avoid the need for expensive pre-training on a large number of tables. The table also shows that HopPG outperforms the baseline models on text-based questions because it generates programs using only the selected text as input, rather than simply concatenating complex and heterogeneous knowledge. This greatly reduces the difficulty of model inference. #### 4.5.2 Multi-hop Questions with Heterogeneous Knowledge As expected, HopPG achieves significant improvements in multi-hop question answering, which are mainly from the following three reasons: 1) Compared to directly generating complete programs, HopPG improves the accuracy of program generation at each hop by reducing the complexity of knowledge. 2) The program generation for each hop can refer to the execution results of the previous hop. 3) In HopPG, the errors in the program generated at the previous hop do not directly lead to incorrect results in the final output, which to some extent reduces error propagation. Compared to Implicit-Decomp, our semantic-parsing-based pipeline brings significant improvement on multi-hop questions, even without using the table pre-trained model for TableQ. This further proves the effectiveness of our pipeline designed for multi-hop and hybrid questions without fine-designation for table question answering. ### Case Studies #### 4.6.1 Q1: Fix the single-hop program. In Table 6, we present three cases of questions that UniRPG fails to answer but were fixed by HopPG. Among them, Q1 is a single-hop question with the answer contained in a given textual knowledge. For UniRPG, all candidate texts are concatenated with flattened tables as input the sequence. In this experiment, the pseudo-program for Q1 is the same in both UniRPG and HopPG. As shown in the first row of the table, UniRPG incorrectly predicts the end index of the answer string in the serialized knowledge, while HopPG successfully extracts the answer. This is because UniRPG directly concatenates all candidate supporting facts as input, which makes it difficult for the model to understand the knowledge and reason about the answer due to the redundant information. 
In contrast, HopPG reduces the complexity of the input knowledge by retrieving the necessary supporting fact for each hop, thereby improving the reasoning accuracy.

#### 4.6.2 Q2: Fix the second hop of the multi-hop program.
UniRPG cannot handle Q2, a multi-hop question, due to its limited operation set for multi-hop reasoning. Therefore, in UniRPG, Q2 is treated as a single-hop question and directly annotated with the span extraction pseudo-program. As shown in Table 6, the model is confused and incorrectly predicts the MULTISPAN operation for Q2 instead of SPAN, because the question format of multi-hop questions is different from that of single-hop questions.

\begin{table} \begin{tabular}{l|l|c|c} \hline \hline & **Model** & **Golden Program** & **Generated Program** \\ \hline \multirow{4}{*}{Q1} & UniRPG & SPAN(86,89) & SPAN(86,164) \\ \cline{2-3} & HopPG-h1 & SPAN(88,91) & SPAN(88,91) \\ \cline{2-3} & HopPG-h2 & CELL(354,248) & MULTISPAN(CELL(58,69),...,CELL(244,264)) \\ \hline \multirow{4}{*}{Q2} & UniRPG-h1 & COMPOSE(MULTISPAN(CELL(71,76),CELL(100,101))) & COMPOSE(MULTISPAN(CELL(71,76),CELL(71,76))) \\ \cline{2-3} & HopPG-h2 & CELL(252,256) & CELL(252,256) \\ \hline \multirow{4}{*}{Q3} & UniRPG & CELL(352,356) & CELL(352,373) \\ \cline{2-3} & HopPG-h1 & COMPOSE(CELL(354,358)) & COMPOSE(CELL(354,358)) \\ \cline{1-1} \cline{2-3} & HopPG-h2 & CELL(356,360) & CELL(356,360) \\ \hline \hline \end{tabular} \end{table} Table 6: Case studies.

The incorrect operation further leads to the generation of a series of wrong string indices. For simplicity, we omit the intermediate CELL operations in UniRPG's result for Q2 in Table 6. In contrast, HopPG generates two-hop programs iteratively for this question. Although the generated first-hop program is not entirely correct, the lack of information about an intermediate result does not have a decisive impact on the generation of the program for the second iteration. HopPG ultimately generates the correct second-hop program for Q2 and obtains the correct answer.

#### 4.6.3 Q3: Fix all hops of the multi-hop program.
Q3 is also a multi-hop question. As shown in the table, UniRPG predicts the correct operation, but it incorrectly predicts the string index of the answer. In contrast, HopPG successfully generates a two-hop program for the question, not only obtaining the correct answer but also making the reasoning process interpretable, which demonstrates the advantage of HopPG in handling multi-hop questions.

### Error Analysis
To conduct error analysis over the test set of MMQA-T\({}^{2}\), we collected the programs generated by HopPG that are inconsistent with the pseudo programs we annotated. These incorrect programs have two primary causes: incorrect operation prediction and incorrect string index \((s,e)\) prediction. According to our statistical analysis, among the incorrect cases of single-hop questions, 27% have incorrect operation predictions, while 99% have incorrect string index predictions. The proportions of these two factors among the cases of two-hop questions ('COMPOSE' reasoning type) follow the same trend. Specifically, for their first-hop program generation, the proportions of incorrect operation prediction and incorrect string index prediction are 32.4% and 100%, respectively. For the second hop, the proportions of these two factors are 7% and 98%, respectively.
This indicates that if the operation selection is incorrect, the model will struggle to accurately retrieve the required information from the knowledge. Additionally, among the incorrect cases of two-hop questions, the proportions of errors in the first-hop and second-hop program generation are 50.0% and 69.9%, respectively. Based on our observations, an error in the first-hop program does not necessarily lead to an error in the second-hop generation, as the execution results of the first-hop program only serve as input information for the second hop. In fact, among the cases where the first-hop program generation is incorrect, 60.2% still generate the correct second-hop programs and obtain the correct answers to the questions. This shows that the iterative generation of HopPG can, to some extent, mitigate the impact of reasoning errors in previous hops on the final result.

## 5 Conclusion
We propose HopPG, a self-iterative program generation approach for multi-hop question answering over heterogeneous knowledge. Unlike methods that directly generate complete programs for multi-hop questions, HopPG iteratively generates a program for each hop based on the execution results of the previous-hop program. We evaluate our model using a subset of MMQA that only contains text-based and table-based questions, and we construct pseudo programs for each instance to train HopPG under weak supervision. The experimental results demonstrate that HopPG brings significant improvements for multi-hop question answering over heterogeneous knowledge, outperforming semantic parsing-based question-answering models that directly generate complete programs.
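To make the hop-wise control flow concrete, here is a minimal, self-contained sketch of the iterative generation loop summarized above. The three helper functions are trivial stand-ins with illustrative names and behaviour for HopPG's retriever, BART-based program generator, and program executor; they are not the authors' implementation.

```python
# Minimal sketch of hop-wise iterative program generation (stand-in helpers).
def retrieve_fact(question, hop, prev_result):
    # A real retriever selects one supporting fact (text or table) per hop.
    return f"supporting fact for hop {hop} (previous result: {prev_result})"

def generate_program(question, fact, prev_result):
    # A real generator decodes an executable program with beam search.
    return "SPAN(0,10)" if prev_result is None else "CELL(0,10)"

def execute_program(program, fact):
    # A real executor runs the program against the fact; here we just slice.
    start, end = (int(x) for x in program[program.index("(") + 1:-1].split(","))
    return fact[start:end]

def answer_question(question, num_hops=2):
    prev_result, answer = None, None
    for hop in range(num_hops):
        fact = retrieve_fact(question, hop, prev_result)         # one fact per hop
        program = generate_program(question, fact, prev_result)  # sees previous-hop result
        prev_result = answer = execute_program(program, fact)    # feeds the next hop
    return answer

print(answer_question("a two-hop question"))
```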
2302.03608
Online Reinforcement Learning with Uncertain Episode Lengths
Existing episodic reinforcement algorithms assume that the length of an episode is fixed across time and known a priori. In this paper, we consider a general framework of episodic reinforcement learning when the length of each episode is drawn from a distribution. We first establish that this problem is equivalent to online reinforcement learning with general discounting where the learner is trying to optimize the expected discounted sum of rewards over an infinite horizon, but where the discounting function is not necessarily geometric. We show that minimizing regret with this new general discounting is equivalent to minimizing regret with uncertain episode lengths. We then design a reinforcement learning algorithm that minimizes regret with general discounting but acts for the setting with uncertain episode lengths. We instantiate our general bound for different types of discounting, including geometric and polynomial discounting. We also show that we can obtain similar regret bounds even when the uncertainty over the episode lengths is unknown, by estimating the unknown distribution over time. Finally, we compare our learning algorithms with existing value-iteration based episodic RL algorithms in a grid-world environment.
Debmalya Mandal, Goran Radanovic, Jiarui Gan, Adish Singla, Rupak Majumdar
2023-02-07T17:12:49Z
http://arxiv.org/abs/2302.03608v1
# Online Reinforcement Learning with Uncertain Episode Lengths ###### Abstract Existing episodic reinforcement algorithms assume that the length of an episode is fixed across time and known a priori. In this paper, we consider a general framework of episodic reinforcement learning when the length of each episode is drawn from a distribution. We first establish that this problem is equivalent to online reinforcement learning with general discounting where the learner is trying to optimize the expected discounted sum of rewards over an infinite horizon, but where the discounting function is not necessarily geometric. We show that minimizing regret with this new general discounting is equivalent to minimizing regret with uncertain episode lengths. We then design a reinforcement learning algorithm that minimizes regret with general discounting but acts for the setting with uncertain episode lengths. We instantiate our general bound for different types of discounting, including geometric and polynomial discounting. We also show that we can obtain similar regret bounds even when the uncertainty over the episode lengths is unknown, by estimating the unknown distribution over time. Finally, we compare our learning algorithms with existing value-iteration based episodic RL algorithms on a grid-world environment. ## 1 Introduction We consider the problem of _episodic reinforcement learning_, where a learning agent interacts with the environment over a number of episodes [1]. The framework of episodic reinforcement learning usually considers two types of episode lengths: either each episode has a fixed and invariant length \(H\), or each episode may have a varying length controlled by the learner. The fixed-length assumption is relevant for recommender systems [1] where the platform interacts with a user for a fixed number of rounds. Variable length episodes arise naturally in robotics [1], where each episode is associated with a learning agent completing a task, and so the length of the episode is entirely controlled by the learner. Fixed horizon lengths make the design of learning algorithms easier, and is the usual assumption in most papers on theoretical reinforcement learning [1, 2]. In this paper, we take a different perspective on episodic reinforcement learning and assume that the length of each episode is drawn from a distribution. This situation often arises in online platforms where the length of an episode (i.e., the duration of a visit by a user) is not fixed a priori, but follows a predictable distribution [1]. Additionally, various econometric and psychological evidence suggest that humans learn by maintaining a risk/hazard distribution over the future [2], which can be interpreted as a distribution over the horizon length. Despite a large and growing literature on episodic reinforcement learning, except for [20], uncertain episodic lengths or settings with general survival rates of agents have not been studied before. **Our Contributions**: In this paper, we describe reinforcement learning algorithms for general distributions over episode lengths. Our main contribution is a general learning algorithm which can be adapted to a given distribution over episode lengths to obtain sub-linear regret over time. In particular, our contributions are the following. * We first establish an equivalence between maximization of expected total reward with uncertain episode lengths and maximization of expected (general) discounted sum of rewards over an infinite horizon. 
In particular, we show that minimization of regret is equivalent in these two environments. * Next we design a learning algorithm for the setting with arbitrary distribution over the episode lengths. Our algorithm generalizes the value-iteration based learning algorithm of Azar et al. [1] by carefully choosing an effective horizon length and then updating the backward induction step based on the distribution over episode lengths. In order to analyze its regret, we use the equivalence result above, and bound its regret for a setting with general discounting. * We instantiate our general regret bound for different types of discounting (or equivalently episode distributions), including geometric and polynomial discounting, and obtain sub-linear regret bounds. For geometric discounting with parameter \(\gamma\), we bound regret by \(\widetilde{O}(\sqrt{SAT}/(1-\gamma)^{1.5})\) which matches the recently established minimax optimal regret for the non-episodic setting [14]. For the polynomial discounting of the form \(h^{-p}\) we upper bound regret by \(\widetilde{O}(\sqrt{SAT}^{\frac{1}{2-1/p}})\). * Finally, we show that we can obtain similar regret bounds even when the uncertainty over the episode lengths is unknown, by estimating the unknown distribution over time. In fact, for geometric discounting, we recover the same regret bound (i.e. \(\widetilde{O}(\sqrt{SAT}/(1-\gamma)^{1.5})\) up to logarithmic factors, and for the polynomial discounting we obtain a regret bound of \(\widetilde{O}(\sqrt{SAT}^{\frac{1}{1+2p}})\), which asymptotically matches the previous regret bound. Our results require novel and non-trivial generalizations of episodic learning algorithms and straightforward extensions to existing algorithms do not work. Indeed, a naive approach would be to use the expected episode length as the fixed horizon length \(H\). However, this fails with heavy-tailed distributions which often appear in practice. Alternately, we could compute an upper bound on the episode length so that with high probability the lengths of all the \(T\) episodes are within this bound. Such an upper bound can be computed with the knowledge of distribution over episode lengths and using standard concentration inequalities. However, these upper bounds become loose either with a large number of episodes or for heavy-tailed distributions. ### Related Work **Episodic Reinforcement Learning**: Our work is closely related to the UCB-VI algorithm of Azar et al. [1], which achieves \(O(\sqrt{HSAT})\) regret for episodic RL with fixed horizon length \(H\). The main difference between our algorithm and UCB-VI is that we use a different equation for backward-induction where future payoffs are discounted by a factor of \(\gamma(h+1)/\gamma(h)\) at step \(h\), where \(\gamma\) is a general discount function. Beyond [1], several papers have considered different versions of episodic RL including changing transition function [15, 16], and function approximation [15, 17, 18]. **General Discounting**: Our work is also closely related to reinforcement learning with general discounting. Even though geometric discounting is the most-studied discounting because of its theoretical properties [1], there is a wealth of evidence suggesting that humans use general discounting and time-inconsistent decision making [19, 20, 21]. In general, optimizing discounted sum of rewards with respect to a general discounting might be difficult as we are not guaranteed to have a stationary optimal policy. Fedus et al. 
[20] study RL with hyperbolic discounting and learn many \(Q\)-values each with a different (geometric) discounting. Our model is more general, and our algorithm is based on a modified value iteration. We also obtain theoretical bounds on regret in our general setting. Finally, Pitis [20] introduced more general state, action based discounting but that is out of scope of this paper. **Stochastic Shortest Path**: Our work is related to the stochastic shortest path (SSP), introduced by Bertsekas et al. [1]. In SSP, the goal of the learner is to reach a designated state in an MDP, and minimize the expected total cost of the policy before reaching that goal. Recently, there has been a surge of interest in deriving online learning algorithms for SSP [14, 15, 16]. Our setting differs from SSP in two ways. First, the horizon length is effectively controlled by the learner in SSP, once she has a good approximation of the model. But in our setting, the horizon length is drawn from a distribution at the start of an episode by the nature, and is unknown to the learner during that episode. Second, when the model is known in SSP, different policies induce different distributions over the horizon length. Therefore, in contrast to our setting, minimizing regret in SSP is not the same as minimizing regret under general discounting. **Other Related Work**: Note that uncertainty over episode lengths can also be interpreted as hazardous MDP [13], where hazard rate is defined to be the negative rate of change of log-survival time. Sozou [15] showed that different prior belief over hazard rates imply different types of discounting. We actually show equivalence between general discounting and uncertain episode lengths, even in terms of regret bounds. Finally, this setting is captured by the partially observable Markov decision processes [10], where one can make the uncertain parameters hidden and/or partially observable. ## 2 Model We consider the problem of episodic reinforcement learning with uncertain episode length. An agent interacts with an MDP \(\mathcal{M}=(S,\mathcal{A},r,\mathbb{P},\mathbb{P}_{\mathbb{H}})\), where \(\mathbb{P}_{\mathbb{H}}\) denotes the probability distribution over the episode length. We assume that the rewards are bounded between \(0\) and \(1\). The agent interacts with the environment for \(T\) episodes as follows. * At episode \(k\in[T]\), the starting state \(x_{k,1}\) is chosen arbitrarily and the length of the episode \(H_{k}\sim\mathbb{P}_{\mathbb{H}}(\cdot)\). 1 Footnote 1: The parameter \(H_{k}\) is unknown to the learner during episode \(k\). * For \(h\in[H_{k}]\), let the state visited be \(x_{k,h}\) and the action taken be \(a_{k,h}\). Then, the next state \(x_{k,h+1}\sim\mathbb{P}(\cdot|x_{k,h},a_{k,h})\). The agent interacts with the MDP \(\mathcal{M}\) for \(T\) episodes and the goal is to maximize the expected undiscounted sum of rewards. Given a sequence of \(k\) episode lengths \(\{H_{k}\}_{k\in[T]}\) the expected cumulative reward of an agent's policy \(\boldsymbol{\pi}=\{\pi_{k}\}_{k\in[T]}\) is given as \[\text{Rew}\left(\boldsymbol{\pi};\{H_{k}\}_{k\in[T]}\right)=\sum_{k=1}^{T} \mathbb{E}\left[\sum_{h=1}^{H_{k}}r(x_{k,h},a_{k,h})\right]\] Since each \(H_{k}\) is a random variable drawn from the distribution \(\mathbb{P}_{\mathbb{H}}(\cdot)\), we are interested in expected reward with respect to distribution \(\mathbb{P}_{\mathbb{H}}\). 
\[\mathbb{E}\left[\text{Rew}\left(\boldsymbol{\pi};\{H_{k}\}_{k\in [T]}\right)\right]\] \[=\mathbb{E}\left[\sum_{k=1}^{T}\sum_{H_{k}=1}^{\infty}\mathbb{P}_ {\mathbb{H}}(H_{k})\sum_{h=1}^{H_{k}}r(x_{k,h},a_{k,h})\right]\] \[=\mathbb{E}_{\pi}\left[\sum_{k=1}^{T}\sum_{h=1}^{\infty}\mathbb{P }_{\mathbb{H}}(H\geq h)r(x_{t,h},a_{t,h})\right] \tag{1}\] As is standard in the literature on online learning, we will consider the problem of minimizing regret instead of maximizing the reward. Given an episode length \(H_{k}\) and starting state \(x_{k,1}\) let \(\pi_{k}^{\star}\) be the policy that maximizes the expected sum of rewards over \(H_{k}\) steps i.e. \(\pi_{k}^{\star}\in\operatorname*{argmax}_{\pi}\mathbb{E}_{\pi}\left[\sum_{h=1 }^{H_{k}}r(x_{k,h},a_{k,h})|x_{k,1}\right].\) We will write \(V^{\pi_{k}}(x_{k,1};H_{k})\) to write the (undiscounted) value function of a policy \(\pi_{k}\) over \(H_{k}\) steps starting from state \(x_{k,1}\). Then \(\pi_{k}^{\star}\) is also defined as \(\pi_{k}^{\star}\in\operatorname*{argmax}_{\pi}V^{\pi}(x_{k,1};H_{k})\). We will also write \(V^{\star}(x_{k,1};H_{k})\) to denote the corresponding value of the optimal value function. Now we can define the regret over \(T\) steps as follows. **Definition 1**.: _The regret of a learning algorithm \(\boldsymbol{\pi}=\{\pi_{k}\}_{k\in[T]}\) over \(T\) steps with episode lengths \(\{H_{k}\}_{k\in[T]}\) is_ \[\text{Reg}\left(\boldsymbol{\pi};\{H_{k}\}\right)=\sum_{k\in[T]}V^{\star}(x_{k,1};H_{k})-V^{\pi_{k}}(x_{k,1};H_{k}) \tag{2}\] Note that the regret as defined in eq. (2) is actually a random variable as the episode lengths are also randomly generated from the distribution \(\mathtt{P}_{\mathtt{H}}(\cdot)\). So we will be interested in bounding the expected regret. Let \(V^{\star}(x_{k,1})\) be the expected value of \(V^{\star}(x_{k,1};H_{k})\) i.e. \(V^{\star}(x_{k,1})=\sum_{\ell}V^{\star}(x_{k,1};\ell)\mathtt{P}_{\mathtt{H}}(\ell)\). Then the expected regret of a learning algorithm is given as \[\text{Reg}(\boldsymbol{\pi};\mathtt{P}_{\mathtt{H}}(\cdot))=\sum_{k\in[T]}V^{ \star}(x_{k,1})-\mathbb{E}_{H_{k}}\left[V^{\pi_{k}}(x_{k,1};H_{k})\right]\] ### An Equivalent Model of General Discounting We first establish that the problem of minimizing regret in our setting is equivalent to minimizing regret in a different environment, where the goal is to minimize discounted reward over an infinite horizon with a general notion of discounting. By setting \(\gamma(h)=\mathtt{P}_{\mathtt{H}}(H\geq h)\), the expected reward in eq. (1) becomes a sum of \(T\) expected rewards under the general discounting function \(\left\{\gamma(h)\right\}_{h=1}^{\infty}\). \[\mathbb{E}\left[\text{Rev}(\boldsymbol{\pi};\{H_{k}\}_{k\in[T]})\right]=\sum _{t=1}^{T}\mathbb{E}\left[\sum_{h=1}^{\infty}\gamma(h)r(x_{t,h},a_{t,h})|x_{k,1}\right]\] Therefore, we consider the equivalent setting where the agent is interacting with the MDP \(\mathcal{M}=(S,\mathcal{A},r,\mathbb{P},\boldsymbol{\gamma})\) where \(\boldsymbol{\gamma}=\left\{\gamma(h)\right\}_{h=1}^{\infty}\) is a general discounting factor. We will require the following two properties from the discounting factors: 1. \(\gamma(1)=1\), 2. \(\sum_{h=1}^{\infty}\gamma(h)\leq M\) for some universal constant \(M>0\). The first assumption is without loss of generality as we can normalize all the discount factors without affecting the maximization problem. The second assumption guarantees that the optimal policy is well-defined. 
Note that this assumption rules out hyperbolic discounting \(\gamma(h)=\frac{1}{1+h}\), but does allow discount factors of the form \(\gamma(h)=h^{-p}\) for any \(p>1\). Finally, note that our original reformulation of \(\gamma(h)=\mathtt{P}_{\mathtt{H}}(H\geq h)\) trivially satisfies the first assumption. The second assumption essentially ensures that the horizon length has a finite mean. We will also write \(\Gamma(h)\) to define the sum of the tail part of the series starting at \(h\) i.e. \[\Gamma(h)=\sum_{j\geq h}\gamma(j) \tag{3}\] In this new environment, the learner solves the following episodic reinforcement learning problem over \(T\) episodes. **Environment: General Discounting** 1. The starting state \(x_{k,1}\) is chosen arbitrarily. 2. The agent maximizes \(\mathbb{E}\left[\sum_{h=1}^{\infty}\gamma(h)r(x_{k,h},a_{k,h})|x_{k,1}\right]\) over an infinite horizon. Notice that even though the new environment is episodic, the length of each episode is infinite. So this environment is not realistic, and we are only introducing this hypothetical environment to design our algorithm and analyze its performance. Suppose that we are given a learning algorithm \(\boldsymbol{\pi}=\{\pi_{k}\}_{k\in[T]}\). We allow the possibility that \(\pi_{k}\) is a non-stationary policy as each \(\pi_{k}\) is used to maximizing a discounted sum of rewards with respect to a general discounting factor and in general the optimal policy need not be stationary. A non-stationary policy \(\pi_{k}\) is a collection of policies \(\{\pi_{k,h}\}_{h=1}^{\infty}\) where \(\pi_{k,h}:(S\times\mathcal{A})^{h-1}\times S\rightarrow\Delta(\mathcal{A})\). Given a non-stationary policy \(\pi_{k}\) at episode \(k\), we define the state-action \(Q\) function and the value function as \[Q^{\pi_{k}}(x,a;\boldsymbol{\gamma}) =\mathbb{E}\left[\sum_{h=1}^{\infty}\gamma(h)r(x_{k,h},a_{k,h})| x_{k,1}=x,a_{k,1}=a\right]\] \[V^{\pi_{k}}(x;\boldsymbol{\gamma}) =\mathbb{E}\left[\sum_{h=1}^{\infty}\gamma(h)r(x_{k,h},a_{k,h})| x_{k,1}=x\right]\] Here \(a_{k,h}\sim\pi_{k,h}(x_{k,1},a_{k,1},\ldots,x_{k,h-1},a_{k,h-1},x_{k,h})\). In this environment, we again measure the regret as the sum of sub-optimality gaps over the \(T\) episodes. **Definition 2**.: _Let the optimal value function be defined as \(V^{*}(x;\mathbf{\gamma})=\sup_{\pi}V^{\pi}(x;\mathbf{\gamma})\). Then we define regret for a learning algorithm \(\mathbf{\pi}=\left\{\pi_{k}\right\}_{k\in[T]}\) as_ \[\text{Reg}(\mathbf{\pi},\mathbf{\gamma})=\sum_{k=1}^{T}V^{*}(x_{k,1};\mathbf{\gamma})-V^{ \pi_{k}}(x_{k,1};\mathbf{\gamma}) \tag{4}\] Our next result shows that it is sufficient to minimize regret with respect to the new environment of episodic reinforcement learning. In fact, if any algorithm has regret \(\mathcal{R}(T)\) with respect to the new benchmark, then it has regret at most \(\mathcal{R}(T)\) with respect to the original environment with uncertain episode lengths. **Lemma 1**.: _For any learning algorithm \(\mathbf{\pi}=\left\{\pi_{k}\right\}_{k\in[T]}\) we have the following guarantee:_ \[\text{Reg}(\mathbf{\pi};\mathtt{P}_{\mathtt{H}}(\cdot))\leq\text{Reg}(\mathbf{\pi}; \mathbf{\gamma}).\] We also show that a converse of lemma 1 holds with additional restrictions on the discount factor \(\mathbf{\gamma}\). **Lemma 2**.: _Suppose the discount factor \(\mathbf{\gamma}\) is non-increasing. 
Then there exists a distribution \(\mathtt{P}_{\mathtt{H}}(\cdot)\) over the episode lengths so that_ \[\text{Reg}(\mathbf{\pi};\mathbf{\gamma})\leq\text{Reg}(\mathbf{\pi};\mathtt{P}_{\mathtt{H }}(\cdot)).\] Because of lemma 1, it is sufficient to bound a learning algorithm's regret for the environment with infinite horizon and general discounting. Therefore, we now focus on designing a learning algorithm that acts in an episodic setting with uncertain episode lengths, but analyze its regret in the infinite horizon setting with general discounting. ## 3 Algorithm: Regret Minimization under General Discounting We now introduce our main algorithm. Given a non-stationary policy \(\pi_{k}\), we define the state-action function and value function at step \(h\) as follows. \[Q_{h}^{\pi_{k}}(x,a) =\mathbb{E}\left[\sum_{j=1}^{\infty}\gamma(j)r(x_{k,h+j-1},a_{k,h+ j-1})\mid\mathcal{H}_{h-1},x_{k,h}=x,a_{k,h}=a\right]\] \[V_{h}^{\pi_{k}}(x) =\mathbb{E}\left[\sum_{j=1}^{\infty}\gamma(j)r(x_{k,h+j-1},a_{k,h +j-1})\mid\mathcal{H}_{h-1},x_{k,h}=x\right]\] where \(\mathcal{H}_{h-1}=(x_{k,1},a_{k,1},\ldots,a_{k,h-1})\) and \(a_{k,h+j}\sim\pi_{k,h+j}(\mathcal{H}_{h+j-1},x_{k,h+j})\). Note that, both the state-action \(Q\)-function and the value function depend on the history \(\mathcal{H}_{h-1}\). Moreover, conditioned on the history, we are evaluating the total discounted reward as if the policy \(\{\pi_{k,h+j}\}_{j\geq 0}\) was used from the beginning. We first establish some relations regarding the above state-action and value functions. We drop the episode index \(k\) for ease of exposition. Given a non-stationary policy \(\pi=\left\{\pi_{h}\right\}_{h\geq 1}\) let \[Q_{h}^{\pi}(x,a)=r(x,a)+\gamma(2)\cdot\mathbb{E}\left[\sum_{j=1} ^{\infty}\frac{\gamma(j+1)}{\gamma(2)}r(x_{h+j},a_{h+j})|\mathcal{H}_{h-1},x_{ h}=x,a_{h}=a\right]\] \[=r(x,a)+\gamma(2)\mathbb{E}_{x_{h+1}\sim\mathbb{P}(\cdot|x,a)} \left[\mathbb{E}\left[\sum_{j=1}^{\infty}\frac{\gamma(j+1)}{\gamma(2)}r(x_{h +j+1},a_{h+j+1})|\mathcal{H}_{h},x_{h+1}\right]\right]\] \[=r(x,a)+\gamma(2)\mathbb{E}_{x_{h+1}\sim\mathbb{P}(\cdot|x,a)} \left[V_{h+1}^{\pi}(x_{h+1};\mathbf{\gamma}_{2})\right]\] where in the last line we write \(\mathbf{\gamma}_{2}\) to denote the discount factor \(\mathbf{\gamma}_{2}(j)=\frac{\gamma(j+1)}{\gamma(2)}\) and \(V_{h+1}^{\pi}(x_{h+1};\mathbf{\gamma}_{2})\) is the value function at time-step \(h\) with respect to the new discount factor \(\mathbf{\gamma}_{2}\). By a similar argument one can write the action-value function with respect to the discount factor \(\mathbf{\gamma}_{2}\) as the following expression. \[Q_{h}^{\pi}(x,a;\mathbf{\gamma}_{2})\] \[=r(x,a)+\gamma_{2}(2)\mathbb{E}_{x_{h+1}\sim\mathbb{P}(\cdot|x,a) }\left[V_{h+1}^{\pi}(x_{h+1};\mathbf{\gamma}_{2})\right]\] \[=r(x,a)+\frac{\gamma(3)}{\gamma(2)}\mathbb{E}_{x_{h+1}\sim \mathbb{P}(\cdot|x,a)}\left[V_{h+1}^{\pi}(x_{h+1};\mathbf{\gamma}_{3})\right]\] where the discount factor \(\mathbf{\gamma}_{3}\) is given as \(\mathbf{\gamma}_{3}(j)=\frac{\gamma(j+2)}{\gamma(3)}\). In general, we have the following relation. \[Q_{h}^{\pi}(x,a;\mathbf{\gamma}_{k})=r(x,a)+\frac{\gamma(k+1)}{\gamma(k)}\mathbb{ E}_{x_{h+1}\sim\mathbb{P}(\cdot|x,a)}\left[V_{h+1}^{\pi}(x_{h+1};\mathbf{\gamma}_{k+1})\right] \tag{5}\] where the discount factor \(\gamma_{k}\) is defined as \(\gamma_{k}(j)=\frac{\gamma(j+k-1)}{\gamma(k)}\) for \(j=1,2,\ldots\). Notice that when \(\mathbf{\gamma}\) is a geometric discounting, we only need equation. 
\[Q_{h}^{\pi}(x,a)=r(x,a)+\gamma\mathbb{E}_{x_{h+1}\sim\mathbb{P}(\cdot|x,a)} \left[V_{h+1}^{\pi}(x_{h+1})\right] \tag{6}\] Description of the Learning Algorithm: The sequence of recurrence relations eq. (5) motivates our main algorithm (1). Our algorithm is based on the upper confidence value iteration algorithm (UCBVI[1]). In an episodic reinforcement learning setting with fixed horizon length \(H\), UCBVI uses backward induction to update the \(Q\)-values at the end of each episode, and takes greedy action according to the \(Q\)-table. However, in our setting, there is no fixed horizon length and the \(Q\)-values are related through an infinite sequence of recurrence relations. So, algorithm 1 considers a truncated version of the sequence of recurrence relations eq. (5). In particular, given an input discount factor \(\{\gamma(h)\}_{h=1}^{\infty}\)2 and a parameter \(\Delta\), algorithm 1 first determines \(N(\Delta)\) as a measure of effective length of the horizon. In particular, we set \(N(\Delta)\) to be an index so that \(\Gamma(N(\Delta))=\sum_{j\geq N(\Delta)}\gamma(j)\leq\Delta\). Note that, such an index \(N(\Delta)\) always exists as we assumed that the total sum of the discounting factors converges. Then algorithm 1 maintains an estimate of the \(Q\) value for all possible discount factors up to \(N(\Delta)\) i.e. \(\mathbf{\gamma}_{k}\) for \(k=1,\ldots,N(\Delta)\). Footnote 2: Recall that \(\gamma(h)=\texttt{P}_{\texttt{B}}(H\geq h)\). The details of the update procedure is provided in the appendix. In the update procedure, we first set the \((N(\Delta)+1)\)-th \(Q\)-value to be \(\Delta/\gamma(N(\Delta)+1)\) which is always an upper bound on the \(Q\)-value with discount factor \(\gamma_{N(\Delta)+1}\) because of the way algorithm 1 sets the value \(N(\Delta)\). Then starting from level \(N(\Delta)\), we update the \(Q\)-values through backward induction and eq. (5). Note that our algorithm needs to maintain \(N(\Delta)\) action-value tables. We will later show that in order to obtain sublinear regret we need to choose \(\Delta\) based on the particular discount factor. In particular, for the geometric discount factor \(\gamma(h)=\gamma^{h-1}\) we need to choose \(N(\Delta)=\frac{\log T}{\log(1/\gamma)}\). On the other hand, discounting factor of the form \(\gamma(h)=1/h^{p}\) requires \(N(\Delta)=O\left(T^{1/(2p-1)}\right)\). ## 4 Analysis The next theorem provides an upper bound on the regret \(\text{Reg}(\mathbf{\pi};\mathbf{\gamma})\). In order to state the theorem, we need a new notation. Let the function \(t:N\to\mathbb{R}\) be defined as \[t(h)=\left\{\begin{array}{cc}1&\text{if }h=1\\ \frac{\gamma(h)}{\gamma(1)}\prod_{j=2}^{h}\left(1+\frac{\gamma(j)}{j^{\beta} \Gamma(j)}\right)&\text{o.w.}\end{array}\right.\] Note that the function \(t\) is parameterized by the parameter \(\beta\) and depends on the discount factor \(\gamma(\cdot)\). **Theorem 1** (Informal).: _With probability at least \(1-\delta\), Algorithm 1 has the following regret._ \[\text{Reg}(\mathbf{\pi};\mathbf{\gamma})\leq\frac{\Delta T}{\gamma(N(\Delta)+1)}t(N( \Delta)+1)+\max_{h\in[N(\Delta)]}t(h)\frac{\Gamma(h+1)}{\gamma(h)}\widetilde{ O}\left(\sqrt{SATN(\Delta)}\right)\] Theorem 1 states a generic bound that holds for any discount factor. The main terms in the bound are \(O\left(\sqrt{SATN(\Delta)}\right)\), \(\Delta T\), and several factors dependent on the discount factor \(\gamma\). 
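Before instantiating the bound for specific discount factors, the following minimal NumPy sketch illustrates the truncated backward induction that Algorithm 1 builds on, under the simplifying assumptions of a known model and no exploration bonuses: at step \(h\) the future value is discounted by the ratio \(\gamma(h+1)/\gamma(h)\) as in eq. (5), the recursion is cut off at the effective horizon \(N(\Delta)\), and the tail is bounded by \(\Delta/\gamma(N(\Delta)+1)\). The example MDP and parameter values are illustrative only.

```python
# Minimal sketch: truncated backward induction with a general discount function,
# assuming a known model and omitting the UCB-style bonuses of Algorithm 1.
import numpy as np

def truncated_value_iteration(P, r, gamma, delta):
    """P: (S, A, S) transition tensor, r: (S, A) rewards in [0, 1],
    gamma: array with gamma[0] = 1 and a summable tail, delta: truncation level."""
    S, A = r.shape
    tail = np.cumsum(gamma[::-1])[::-1]            # tail[h] = sum_{j >= h} gamma[j]
    N = int(np.argmax(tail <= delta))              # effective horizon N(Delta); assumes it exists
    V = np.full(S, delta / gamma[N])               # upper bound on values beyond the truncation
    for h in range(N - 1, -1, -1):                 # backward induction, cf. eq. (5)
        Q = r + (gamma[h + 1] / gamma[h]) * (P @ V)  # discount ratio gamma(h+1)/gamma(h)
        V = Q.max(axis=1)
    return V                                       # approximate optimal values at the first step

# Example: geometric survival function gamma(h) = 0.9^(h-1) on a random 5-state MDP.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(5), size=(5, 3))
r = rng.uniform(size=(5, 3))
gamma = 0.9 ** np.arange(200)
print(truncated_value_iteration(P, r, gamma, delta=1e-3))
```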
We now instantiate the bound for different discount factors by choosing appropriate value of \(\Delta\) and the parameter \(\beta\). **Corollary 2**.: _Consider the discount factor \(\gamma(h)=h^{-p}\). For \(p\geq 2\) and \(T\geq O(S^{3}A)\) we have_ \[\text{Reg}(T)\leq\widetilde{O}\left(S^{1/2}A^{1/2}T^{\frac{1}{2-1/p}}\right)\] _and for \(1<p<2\) and \(T\geq O\left((S^{3}A)^{\frac{2p-1}{p-1}}\right)\) we have_ \[\text{Reg}(T)\leq\widetilde{O}\left((p-1)^{-\frac{p}{p-1}}S^{1/2}A^{1/2}T^{ \frac{1}{2-1/p}}\right)\] We prove corollary 2 by substituting \(\beta=p-1\) and \(\Delta=O\left(T^{-\frac{p-1}{2p-1}}\right)\). Note that this result suggests that as \(p\) increases to infinity, the regret bound converges to \(O(\sqrt{T})\). This also suggests that for exponentially decaying discounting factor, our algorithm should have exactly \(O(\sqrt{T})\) regret. We verify this claim next. **Corollary 3**.: _Consider the discount factor \(\gamma(h)=\gamma^{h-1}\) for \(\gamma\in[0,1)\) and suppose \(T\geq\frac{S^{3}A}{(1-\gamma)^{4}}\). Then algorithm 1 has regret at most_ \[\text{Reg}(T)\leq\widetilde{O}\left(\sqrt{SAT}/(1-\gamma)^{1.5}\right)\] Here we substitute \(\beta=3/2\) and \(\Delta=T^{-1}/(1-\gamma)\). Our regret bound for the geometric discounting matches the minimax optimal regret bound of the non-episodic setting of [1]. Proof Sketch of Theorem 1: We now give an overview of the main steps of the proof. Although the proof is based upon the proof of the UCB-VI algorithm [1], there are several differences. * Let \(V_{h}^{\star}(\cdot)\) be the optimal value function under discounting factor \(\gamma_{h}(\cdot)\) i.e. \(V_{h}^{\star}(x)=\sup_{\pi}V^{\pi}(x;\gamma_{h})\). We first show that the estimates \(V_{k,h}\) maintained by Algorithm 1 upper bound the optimal value functions i.e. \(V_{k,h}(x)\geq V_{h}^{\star}(x)\) for any \(k,h\in[N(\Delta)]\). * Let \(\widetilde{\Delta}_{k,h}=V_{k,h}-V_{h}^{\pi_{k}}\). Then regret can be bounded as \[\text{Reg}(\mathbf{\pi};\mathbf{\gamma})=\sum_{k=1}^{T}V^{\star}(x_{k,1})-V_{1}^{\pi_{k} }(x_{k,1})\leq\sum_{k=1}^{T}V_{k,1}(x_{k,1})-V_{1}^{\pi_{k}}(x_{k,1})\leq\sum_{k =1}^{T}\widetilde{\Delta}_{k,1}(x_{k,1})\] * Let \(\widetilde{\delta}_{k,h}=\widetilde{\Delta}_{k,h}(x_{k,h})\). Then, the main part of the proof of theorem 1 is establishing the following recurrent relation. \[\widetilde{\delta}_{k,h}\leq\frac{\gamma(h+1)}{\gamma(h)}\left(1+\frac{\gamma (h+1)}{(h+1)^{\beta}\Gamma(h+1)}\right)\widetilde{\delta}_{k,h+1}+\sqrt{2L} \overline{z}_{k,h}+e_{k,h}+b_{k,h}+\varepsilon_{k,h}+f_{k,h}\] Here \(\overline{z}_{k,h}\) and \(\varepsilon_{k,h}\) are Martingale difference sequences and \(b_{k,h},e_{k,h},f_{k,h}\) are either the bonus term or behave similarly as the bonus term. * We complete the proof by summing the recurrence relation above over all the episodes and from \(h=1\) to \(N(\Delta)\). Although [1] established a similar recurrence relation, there are two major differences. First the multiplicative factor in front of \(\widetilde{\delta}_{k,h+1}\) is changing with time-step \(h\) and is not a constant. This is because the backward induction step uses eq. (5) in our setting. Second, after expanding the recurrence relation from \(h=1\) to \(N(\Delta)\) the final term is no longer zero and an extra \(O(\Delta T)\) term shows up in the regret bound. ## 5 Estimating the Discount Function In this section we consider the situation when the discount function \(\gamma(h)=\mathbb{P}_{\texttt{H}}(H\geq h)\) is not unknown. 
We start with the assumption that the optimal value of \(N(\Delta)\) (say \(H^{\star}\)) is known. The next lemma bounds the regret achieved by running an algorithm with \(N(\Delta)=H^{\star}\) with the true discounting \(\gamma\) and an estimate of the discounting \(\hat{\gamma}\). Our algorithm partitions the entire sequence of \(T\) episodes into blocks of lengths \(B,2B,2^{2}B,\ldots,2^{s}B\) for \(s=\log(T/B)-1\). At the end of each block the algorithm recomputes an estimate of \(\gamma\). Recall that we defined \(\gamma(h)=\Pr(H\geq h)\). Since every episode we get one sample from the distribution of \(H\) (the random length of the current episode) we can use the empirical distribution function of horizon length to obtain \(\hat{\gamma}\). At the end of block \(B\), the algorithm computes \(\hat{\gamma}_{B}\), and runs algorithm 1 with this estimate and \(\hat{\Delta}_{B}=\hat{\Gamma}_{B}(H^{\star}+1)=\sum_{h\geq H^{\star}+1}\hat{ \gamma}_{B}(h)\) for the block \(B+1\). **Theorem 4** (Informal).: _When run with horizon length \(H^{\star}\), algorithm 2 has the following regret bound with probability at least \(1-\delta\)_ \[\text{Reg}(\pi;\mathbf{\gamma})\leq\min_{L\in[T]}\Big{(}T\Gamma(L+1)+2L \log(T)\sqrt{T})\Big{)}+\Gamma(H^{\star})T\] \[+\max_{h\in[H^{\star}]}\frac{t(h)}{\gamma(h)}g(h)\Gamma(h+1)\frac{ O(T^{-1/4})}{\Gamma(h+1)}\widetilde{O}\left(\sqrt{SATH^{\star}}\right)\] _where \(g(h)=\exp\Big{\{}O\left(\sum_{k=2}^{h}\frac{T^{-1/4}}{\gamma(k)+k^{\prime }\Gamma(k)}\right)\Big{\}}\)._ Our proof relies on bounding the estimation error of \(\hat{\gamma}\) and \(\hat{\Gamma}\). We can use the classical DKW inequality [1] to bound the maximum deviation between empirical CDF (\(\widehat{\text{P}}_{\text{H}}(\cdot)\)) and true CDF (\(\text{P}_{\text{H}}\)). Through a union bound over the \(\log(T)\) blocks, this immediately provides a bound between \(\left\|\hat{\gamma}_{j}-\gamma_{j}\right\|_{\infty}\) for all \(j\in[\log(T/B)]\). However, we also need to bound the distance between \(\hat{\Gamma}_{j}(\cdot)\) and \(\Gamma(\cdot)\) for all \(j\) (defined in (3)). A naive application of DKW inequality results in an additive bound between \(\hat{\Gamma}_{j}(h)\) and \(\Gamma(h)\) that grows at a rate of \(h\). This is insufficient for our case to get a sublinear regret bound. However, we show that we can use the multiclass fundamental theorem [1] to derive an error bound that grows at a rate of \(\sqrt{\log h}\) and this is sufficient for our proof. The main challenge in the proof of theorem 4 is controlling the growth of the term \(t(h)/\gamma(h)\). Notice that this term is a product of \(h\) terms of the form \(1+\frac{\gamma(k)}{k^{\beta}\Gamma(k)}\), so any error in estimating \(\gamma\) could blow up the product by a factor of \(h\). We could show that the regret is multiplied by an additional function \(g(h)\) which is parameterized by \(\beta\). We next instantiate theorem 4 for different discount factors and show that we can obtain regret bounds similar to corollary 2, and 3 up to logarithmic factors. Figure 1: Comparison of our algorithm with different variants of UCB-VI on the Taxi environment [1]. The regret is measured over \(100\) episodes, and the length of each episode is drawn independently from a given distribution. Each plot shows average regret and standard error from \(10\) trials. **Corollary 5**.: _Consider the discount factor \(\gamma(h)=h^{-p}\) for \(p\geq 2\). 
Then the regret of algorithm 2 is_ \[\text{Reg}(T)\leq\left\{\begin{array}{ll}\widetilde{O}\left(\sqrt{SAT}^{\frac{p +1}{2p}}\right)&\text{ if }T\geq O\left((S^{3/2}A^{1/2})^{p}\right)\\ \widetilde{O}\left(S^{2}AT^{\frac{1}{2-1/p}}\right)&\text{ if }T\leq O\left((S^{3/2}A^{1/2})^{p}\right) \end{array}\right.\] **Corollary 6**.: _Consider the geometric discount factor \(\gamma(h)=\gamma^{h-1}\) for \(\gamma\in[0,1)\) and suppose \(\frac{T}{\log^{3}T}\geq\frac{S^{3}A}{(1-\gamma)^{\star}}\). Then algorithm 2 has regret at most \(\widetilde{O}\left(\sqrt{SAT}/(1-\gamma)^{1.5}\right).\)_ For the polynomial discounting we get a regret of the order of \(T^{(p+1)/2p}\) which is worse than the regret bound of theorem 1 by a factor of \(T^{1/2p}\). However, the difference goes to zero as \(p\) increases and approaches the same limit of \(\widetilde{O}(\sqrt{T})\). On the other hand, for geometric discounting we recover the same regret as corollary 3. Interestingly, He et al. [1] obtained a similar bound on regret for the non-episodic setting where the learner maximizes her long-term geometrically distributed reward. **Unknown \(N(\Delta)\)**: Note that algorithm 2 takes as input the optimal value of \(N(\Delta)\) or \(H^{\star}\). However, this problem can be handled through a direct application of model selection algorithms in online learning [13]. Let \(\text{Reg}(H^{\star})\) be the regret when algorithm 2 is run with true \(H^{\star}\). We now instantiate algorithm 2 for different choices of \(H^{\star}\) and perform model selection over them. In particular, we can consider \(H^{\star}=2,2^{2},\ldots,2^{O(\log T)}\) as it is sufficient to consider \(H^{\star}=O(T)\). Moreover, given true \(H^{\star}\) there exists \(\widetilde{H}\leq 2H^{\star}\) for which the regret is increased by at most a constant. This step requires bounding \(\frac{t(H^{\star})}{\gamma(H^{\star})}/\frac{t(\widetilde{H})}{\gamma( \widetilde{H})}\) and is constant for the discounting factors considered in the paper. We now apply algorithm 1 from [13] to the collection of \(O(\log T)\) models and obtain a regret bound of at most \(O\left(\sqrt{\log T}\text{Reg}(\widetilde{H})\right)=\widetilde{O}(\text{Reg }(H^{\star}))\). ## 6 Experiments We evaluated the performance of our algorithm on the Taxi environment, a \(5\times 5\) grid-world environment introduced by [1]. The details of this environment is provided in the appendix, since the exact details are not too important for understanding the experimental results. We considered \(100\) episodes and each episode length was generated uniformly at random from the following distributions. 3 Footnote 3: Here \(\gamma(h)\) refers to probability that the episode lengths exceeds \(h\) i.e. \(\gamma(h)=\Pr(H\geq h)\). 1. Geometric discounting \(\gamma(h)=\gamma^{h-1}\). 2. Polynomial discounting \(\gamma(h)=h^{-p}\). 3. Quasi-Hyperbolic discounting \(\gamma(h)=\beta^{1\{h>1\}}\gamma^{h-1}\) Figure 1 shows some representative parameters for three different types of discounting. For the geometric discounting, we show \(\gamma=0.9,0.95\) and \(0.975\). For the polynomial discounting we generated the horizon lengths from a polynomial with \(p\in\{1.4,1.6,2.0\}\) and added an offset of \(20\). Finally, for the Quasi-hyperbolic discounting, we fixed \(\gamma\) at \(0.95\) and considered three values of \(\beta\): \(0.7,0.8\), and \(0.9\). 
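The following minimal sketch (not the authors' code) shows one way episode lengths can be drawn from a survival function \(\gamma(h)=\Pr(H\geq h)\) for the three families listed above; the truncation point \(h_{\max}\) and the concrete parameter values are illustrative.

```python
# Minimal sketch of sampling episode lengths from a survival function gamma(h) = Pr(H >= h).
import numpy as np

def sample_horizon(gamma_fn, rng, h_max=10_000):
    surv = gamma_fn(np.arange(1, h_max + 1).astype(float))  # gamma(h), with gamma(1) = 1
    pmf = surv - np.append(surv[1:], 0.0)                    # Pr(H = h) = gamma(h) - gamma(h+1)
    pmf = np.clip(pmf, 0.0, None)
    pmf /= pmf.sum()
    return int(rng.choice(np.arange(1, h_max + 1), p=pmf))

geometric  = lambda h: 0.95 ** (h - 1)                              # gamma(h) = g^(h-1)
polynomial = lambda h: h ** (-1.6)                                  # gamma(h) = h^(-p)
quasi_hyp  = lambda h: np.where(h > 1, 0.8, 1.0) * 0.95 ** (h - 1)  # beta^{1{h>1}} g^(h-1)

rng = np.random.default_rng(0)
print([sample_horizon(geometric, rng) for _ in range(5)])
print([20 + sample_horizon(polynomial, rng) for _ in range(5)])     # offset of 20, as in the text
print([sample_horizon(quasi_hyp, rng) for _ in range(5)])
```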
We compared our algorithm (1) with two variants of UCB-VI [1]: (a) UCB-VI-Hoeffding, which computes bonus terms using the Chernoff-Hoeffding inequality, and (b) UCB-VI-Bernstein, which computes bonus terms using the Bernstein-Freedman inequality. It is known that when the horizon length is fixed and known, UCB-VI-Bernstein achieves minimax optimal regret bounds. We implemented the two versions of UCB-VI with three different assumed horizon lengths. Figure 1 shows that, in several settings, our algorithm strictly improves in regret compared to all the other variants of UCB-VI. These include geometric discounting (\(\gamma=0.95\) and \(0.975\)) and quasi-hyperbolic discounting (all considered choices of \(\beta\)). For the other scenarios (e.g. polynomial discounting), our algorithm performs as well as the best version of UCB-VI. Figure 1 also highlights the importance of choosing not only the right horizon length but also the correct update equation in backward induction. Consider, for example, Figure 1(b) for geometric discounting with \(\gamma=0.95\). Here the expected horizon length is \(\frac{1}{1-\gamma}=20\). However, the different UCB-VI variants (horizon lengths \(10,20,30\), with both Bernstein and Hoeffding bonuses) perform worse. Our algorithm benefits from choosing the right effective horizon length, and also the correct update equation (6).

## 7 Conclusion
In this paper, we have designed reinforcement learning algorithms for the setting where episode lengths are uncertain and drawn from a fixed distribution. Our general learning algorithm (1) and result (theorem 1) can be instantiated for different types of distributions to obtain sub-linear regret bounds. Interesting directions for future work include extending our algorithm to function approximation [15], a changing probability transition function [15], etc. We are also interested in other models of episode lengths. For example, one can consider a setting where the lengths are adversarially generated but there is a limit on the total amount of change. This is similar to the notion of variation budget [1] considered in the literature on non-stationary multi-armed bandits.

## Acknowledgements
Goran Radanovic acknowledges that his research was, in part, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 467367360.
2303.15343
Sigmoid Loss for Language Image Pre-Training
We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that achieves 84.5% ImageNet zero-shot accuracy in two days. The disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and negative to positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being sufficient. We release our models at https://github.com/google-research/big_vision and hope our research motivates further explorations in improving the quality and efficiency of language-image pre-training.
Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer
2023-03-27T15:53:01Z
http://arxiv.org/abs/2303.15343v4
# Sigmoid Loss for Language Image Pre-Training ###### Abstract We propose a simple pairwise sigmoid loss for image-text pre-training. Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. With only four TPUv4 chips, we can train a Base CLIP model at 4 k batch size and a Large LiT model at 20 k batch size, the latter achieves 84.5% ImageNet zero-shot accuracy in two days. This disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and negative to positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32 k being sufficient. We hope our research motivates further explorations in improving the quality and efficiency of language-image pre-training. ## 1 Introduction Contrastive pre-training using weak supervision from image-text pairs found on the web is becoming the go-to method for obtaining generic computer vision backbones, slowly replacing pre-training on large labelled multi-class datasets. The high-level idea is to simultaneously learn an aligned representation space for images and texts using paired data. Seminal works CLIP [31] and ALIGN [20] established the viability of this approach at a large scale, and following their success, many large image-text datasets became available privately [47, 11, 19, 39] and publicly [33, 5, 13, 6, 34]. The standard recipe to pre-train such models leverages the image-text contrastive objective. It aligns the image and text embeddings for matching (positive) image-text pairs while making sure that unrelated (negative) image-text pairs are dissimilar in the embedding space. This is achieved via a batch-level softmax-based contrastive loss, applied twice to normalize the pairwise similarity scores across all images, then all texts. A naive implementation of the softmax is numerically unstable; it is usually stabilized by subtracting the maximum input value before applying the softmax [16], which requires another pass over the full batch. In this paper, we propose a simpler alternative: the sigmoid loss. It does not require any operation across the full batch and hence greatly simplifies the distributed loss implementation and boosts efficiency. Additionally, it conceptually decouples the batch size from the definition of the task. We compare the proposed sigmoid loss with the standard softmax loss across multiple setups. In particular, we investigate sigmoid-based loss with two prominent approaches for image-text learning: CLIP [31] and LiT [47], which we call sig \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Image & Text & BS & \#TPUv4 & Days & INet-0 \\ \hline SigLiT & \(\copyright\) B/8 & L\({}^{\star}\) & 32 k & 4 & 1 & 79.7 \\ SigLiT & \(\copyright\) g/14 & L & 20 k & 4 & 2 & 84.5 \\ \hline SigLIP & B/16 & B & 16 k & 16 & 3 & 71.0 \\ SigLIP & B/16 & B & 32 k & 32 & 2 & 72.1 \\ SigLIP & B/16 & B & 32 k & 32 & 5 & 73.4 \\ \hline \hline \end{tabular} * _We use a variant of the L model with 12 layers._ \end{table} Table 1: **SigLiT and SigLIP results.** Sigmoid loss is memory efficient, allows larger batch sizes (BS) that unlocks language image pre-training with a small number of chips. 
SigLiT model with a _frozen public_\(\copyright\) L/16 checkpoint [35], trained on the LiT image-text dataset [47] using four TPUv4 chips for one day, achieves 79.7% 0-shot accuracy on ImageNet. The same setup with a g/14 checkpoint [46] leads to 84.5% accuracy, trained for two days. With a _public unlocked_\(\copyright\) B/16 image checkpoint [35], trained on the WebLI dataset [11], SigLIP achieves 71.0% 0-shot accuracy using 16 TPU-v4 chips for three days. The last two rows show results with randomly initialized models. training (_SigLIP_) and sigmoid LiT (_SigLiT_), respectively. We find that the sigmoid loss performs significantly better than the softmax loss when the batch size is smaller than 16 k. As the train batch size grows, the gap closes. Importantly, the sigmoid loss is symmetric, requires just a single pass, and a typical implementation requires less memory than the softmax loss. This enables successful training of a SigLiT model at a batch size of _one million_. However, we find that the performance saturates with growing batch size, both for softmax and sigmoid. The good news is that a reasonable batch size, i.e. 32 k, is sufficient for image-text pre-training. This conclusion also holds for multilingual SigLIP training on over 100 languages. In Table 1, we present setups for image-text pre-training that require a moderate amount of TPUv4 chips for training. SigLiT is surprisingly efficient, reaching 79.7% zero-shot accuracy on ImageNet in just a single day on four chips. SigLIP's more demanding from-scratch training reaches 73.4% zero-shot accuracy in 5 days with 32 TPUv4 chips. This compares favorably to prior works such as FLIP [26] and CLIP [31], which require approximately 5 and 10 days respectively on 256 TPUv3 cores. When fine-tuning a pre-trained vision backbone in SigLIP, denoted as in Table 1, we found that disabling the weight decay on the pre-trained backbone leads to better results (see Figure 4 for details). We hope our work paves the way for making the nascent language-image pre-training field more accessible. ## 2 Related Work Contrastive learning with the sigmoid loss.One prior work proposes a similar sigmoid loss for the task of unsupervised dimensionality reduction [17]; in the scope of contrastive image-text learning, the vast majority of works rely on the softmax-based InfoNCE loss as popularized by [37]. Contrastive language-image pre-training has become popular since CLIP [31] and ALIGN [20] applied softmax contrastive learning [48, 37, 9, 21] to large-scale image-text datasets. Both models perform very well on zero-shot transfer tasks, including classification and retrieval. Follow-up works show that contrastively pre-trained models produce good representations for fine-tuning [41, 14], linear regression [20], object detection [27], semantic segmentation [28] and video tasks [45]. Generative language-image pre-trainingBesides softmax contrastive pre-training, various alternatives have been proposed. GIT [39], SimVLM [40], and LEMON [19] successfully pre-train models using a generative text decoder instead, while CoCa [44] adds such a decoder to the discriminative CLIP/ALIGN setup, thus combining the pros and cons of both approaches into a single very capable model. BLIP [25] further proposes CapFilt which uses the generative decoder to create better captions and the discriminative part of the model to filter pairs. Language-Image pre-training is a very active field and surveys [7] rapidly become outdated. 
**Efficient language-image pre-training.** On the other hand, few works have tried making language image pre-training more efficient. LiT [47] and FLIP [26] are notable attempts; the former requires a pre-trained and locked backbone, and the latter sacrifices quality by randomly dropping visual tokens. BASIC [30] and LAION [1] look at scaling the batch size but only go up to 16 k and 160 k respectively, by using many hundreds of chips, and for the former also mixing in a large private classification dataset [30, 43]. The recent Lion optimizer [10] claims to be able to reduce the training cost to reach similar quality.

## 3 Method
In this section, we first review the widely-used softmax-based contrastive loss. We then introduce the pairwise sigmoid loss and discuss its efficient implementation. Given a mini-batch \(\mathcal{B}=\{(I_{1},T_{1}),(I_{2},T_{2}),\dots\}\) of image-text pairs, the contrastive learning objective encourages embeddings of matching pairs \((I_{i},T_{i})\) to align with each other, while pushing embeddings of unmatched pairs \((I_{i},T_{j\neq i})\) apart. For practical purposes, it is assumed that for all images \(i\), the text associated with a different image \(j\) is not related to \(i\), and vice-versa. This assumption is usually noisy and imperfect.

### Softmax loss for language image pre-training
When using the softmax loss to formalize this objective, an image model \(f(\cdot)\) and a text model \(g(\cdot)\) are trained to minimize the following objective:
\[-\frac{1}{2|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}\left(\overbrace{\log\frac{e^{t\mathbf{x}_{i}\cdot\mathbf{y}_{i}}}{\sum_{j=1}^{|\mathcal{B}|}e^{t\mathbf{x}_{i}\cdot\mathbf{y}_{j}}}}^{\text{image}\rightarrow\text{text softmax}}+\overbrace{\log\frac{e^{t\mathbf{x}_{i}\cdot\mathbf{y}_{i}}}{\sum_{j=1}^{|\mathcal{B}|}e^{t\mathbf{x}_{j}\cdot\mathbf{y}_{i}}}}^{\text{text}\rightarrow\text{image softmax}}\right)\]
where \(\mathbf{x}_{i}=\frac{f(I_{i})}{\|f(I_{i})\|_{2}}\) and \(\mathbf{y}_{i}=\frac{g(T_{i})}{\|g(T_{i})\|_{2}}\) denote the normalized image and text embeddings. Note that due to the asymmetry of the softmax loss, the normalization is independently performed two times: across images and across texts [31]. The scalar \(t\) is parametrized as \(\exp(t^{\prime})\), where \(t^{\prime}\) is a global freely learnable parameter.

### Sigmoid loss for language image pre-training
Instead of the softmax-based contrastive loss, we propose a simpler alternative that does not require computing global normalization factors. The sigmoid-based loss processes every image-text pair independently, effectively turning the learning problem into standard binary classification on the dataset of all pair combinations, with a positive label for the matching pairs \((I_{i},T_{i})\) and negative labels for all other pairs \((I_{i},T_{j\neq i})\). It is defined as follows:
\[-\frac{1}{|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}\sum_{j=1}^{|\mathcal{B}|}\underbrace{\log\frac{1}{1+e^{z_{ij}(-t\mathbf{x}_{i}\cdot\mathbf{y}_{j}-b)}}}_{\mathcal{L}_{ij}}\]
where \(z_{ij}\) is the label for a given image and text input, which equals 1 if they are paired and \(-1\) otherwise. Note that at initialization, the heavy imbalance coming from the many negatives dominates the loss, leading to large initial optimization steps attempting to correct this bias.
To alleviate this, we introduce an additional learnable bias term \(b\) similar to the temperature \(t\). We initialize \(t^{\prime}\) and \(b\) to \(\log 10\) and \(-10\) respectively. This makes sure the training starts roughly close to the prior and does not require massive over-correction. Algorithm 1 presents a pseudocode implementation of the proposed sigmoid loss for language image pre-training.

### Efficient "chunked" implementation

Contrastive training typically utilizes data parallelism. Computing the loss when data is split across \(D\) devices necessitates gathering all embeddings [47] with expensive all-gathers and, more importantly, the materialization of a memory-intensive \(|\mathcal{B}|\times|\mathcal{B}|\) matrix of pairwise similarities. The sigmoid loss, however, is particularly amenable to a memory-efficient, fast, and numerically stable implementation that ameliorates both these issues. Denoting the per-device batch size as \(b=\frac{|\mathcal{B}|}{D}\), the loss is reformulated as:

\[-\frac{1}{|\mathcal{B}|}\underbrace{\sum_{d_{i}=1}^{D}}_{\substack{\textbf{A}:\ \text{per}\\ \text{device }d_{i}}}\;\underbrace{\sum_{d_{j}=1}^{D}}_{\substack{\textbf{B}:\ \text{swap negatives}\\ \text{across devices}}}\;\underbrace{\sum_{i=b\,d_{i}}^{b\,(d_{i}+1)}\ \sum_{j=b\,d_{j}}^{b\,(d_{j}+1)}}_{\substack{\textbf{C}:\ \text{per-device loss with all local}\\ \text{positives and negatives from the next device}}}\mathcal{L}_{ij}\]

This is particularly simple for the sigmoid loss as each pair is an independent term in the loss. Figure 1 illustrates this method. In words, we first compute the component of the loss corresponding to the positive pairs and \(b-1\) negative pairs. We then permute representations across devices, so each device takes negatives from its neighbouring device (next iteration of sum **B**). The loss is then calculated with respect to this chunk (sum **C**). This is done independently on each device, such that each device computes the loss with respect to its local batch of size \(b\). Losses can then simply be summed across all devices (sum **A**). Individual collective permutes (for sum **B**) are fast (and indeed \(D\) collective permutes is typically faster than two all-gathers between \(D\) devices), and the memory cost at any given moment is reduced from \(|\mathcal{B}|^{2}\) to \(b^{2}\) (for sum **C**). Usually \(b\) is constant, as scaling \(|\mathcal{B}|\) is achieved by increasing the number of accelerators \(D\). Because the vanilla loss computation is quadratic with respect to the batch size, it rapidly bottlenecks further scaling; this chunked approach enables training with batch sizes over 1 million on relatively few devices.

Figure 1: **Efficient loss implementation** demonstrated via a mock setup with 3 devices and a global batch size of 12. There are no all-gathers, and at any point in time only the bright yellow square (size \(4\times 4\)) is materialized in memory.

## 4 Results

In this section, we evaluate the proposed SigLiT and SigLIP models across a wide range of batch sizes. We discuss what can be achieved with a small number of accelerator chips, using both the SigLiT and SigLIP recipes. We also briefly discuss the impact of batch size on multilingual language image pre-training. We ablate the importance of our large-batch stabilization modification and the introduced learned bias term, and present a study on the effect of the ratio of positive to negative pairs in the sigmoid loss. Lastly, we explore SigLIP's robustness to data noise.
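The chunking scheme can be illustrated with a single-process NumPy mock-up that stands in for the \(D\) devices with array chunks and for the collective permute with a rotation of the text chunks; this is only a sketch of the idea (mirroring the 3-device, batch-size-12 mock setup of Figure 1), not the actual multi-device implementation.

```python
import numpy as np

def chunked_sigmoid_loss(zimg, ztxt, t, b, num_devices):
    """Chunked sigmoid loss: only a (b x b) block of logits exists at any time."""
    n = zimg.shape[0]
    bsz = n // num_devices                        # per-device batch size b = |B| / D
    img_chunks = np.split(zimg, num_devices)
    txt_chunks = np.split(ztxt, num_devices)
    total = 0.0
    for d in range(num_devices):                  # sum A: one contribution per device
        local_txt = txt_chunks[d]
        for step in range(num_devices):           # sum B: rotate negatives across devices
            logits = t * img_chunks[d] @ local_txt.T + b          # (b, b) block only
            # positives lie on the diagonal only while the device holds its own texts
            labels = 2.0 * np.eye(bsz) - 1.0 if step == 0 else -np.ones((bsz, bsz))
            total += np.logaddexp(0.0, -labels * logits).sum()    # sum C: per-device loss
            local_txt = txt_chunks[(d + step + 1) % num_devices]  # "permute" to next device
    return total / n

# Mock setup from Figure 1: 3 devices, global batch size 12. The result agrees with
# the unchunked loss from the previous sketch computed on the full 12 x 12 matrix.
rng = np.random.default_rng(0)
zi = rng.normal(size=(12, 4)); zi /= np.linalg.norm(zi, axis=1, keepdims=True)
zt = rng.normal(size=(12, 4)); zt /= np.linalg.norm(zt, axis=1, keepdims=True)
print(chunked_sigmoid_loss(zi, zt, t=10.0, b=-10.0, num_devices=3))
```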
To validate our models, we report zero-shot transfer results on the ImageNet dataset [12] and zero-shot retrieval results across 36 languages on the XM3600 dataset [36]. We use the ScalingViT-Adafactor optimizer [46] by default for all our experiments.

### SigLiT: Scaling batch size to the limit

Following [47], we use the same precomputed image embeddings from a ViT-g vision model, and train a Base-size text tower from scratch with the same hyperparameters using the LiT image-text dataset [47]. We perform a study over a wide range of batch sizes, from 512 to \(1\,M\), demonstrating the impact of batch size on contrastive learning. Results are presented in Figure 2 (left). When the batch size is smaller than \(16\,k\), the sigmoid loss outperforms the softmax loss by a large margin. With growing batch sizes, we observe that the softmax loss quickly catches up and potentially slightly underperforms the sigmoid loss with a large enough batch size. Overall, we recommend using the SigLIP recipe for large batch sizes as well, due to its simplicity, compute savings, and straightforward memory-efficient implementation. There is a consensus that contrastive learning benefits from large batch sizes, while most of the existing studies stop at a 64 k batch size [47, 30, 9]. We successfully trained a SigLiT model with a batch size of one million to explore the limit of contrastive learning. To our surprise, the performance saturates at a 32 k batch size; further scaling up the batch size only gives a minor boost, and the model peaks at a 256 k batch size. Our best SigLiT model with a \(B\)-sized text model achieves 84.7% zero-shot transfer accuracy on ImageNet, while the original LiT paper reports a slightly better 85.2% score with a 10 times larger \(g\)-sized text model. Figure 3 presents the impact of training duration for different batch sizes. It demonstrates that a large \(262\,k\) batch size significantly outperforms a smaller \(8\,k\) batch size when trained for a sufficiently long time. Note that for short training durations, a large batch size leads to a smaller absolute number of update steps and thus needs more time to ramp up.

Figure 2: The effect of pre-training batch size. **Left: SigLiT results**, trained for 18B seen examples. The sigmoid loss outperforms the softmax loss significantly with small batch sizes, and performs similarly at larger batch sizes. We successfully trained a SigLiT model with up to _one million_ batch size. However, performance for both sigmoid and softmax saturates at around a 32 k batch size. **Middle: SigLIP results**, trained for 9B seen examples. Both the sigmoid loss and the softmax loss saturate at a reasonable batch size, while the peak of the sigmoid loss comes earlier and slightly outperforms the peak of the softmax loss. A very large batch size hurts both losses. **Right: mSigLIP results**, trained for 30B seen examples. With a multilingual setup using over 100 languages, a 32 k batch size is surprisingly sufficient and scaling beyond that hurts performance on a 36-language cross-modal retrieval task.

### SigLIP: Sigmoid loss is beneficial for language-image pre-training

We pre-train SigLIP models on the WebLI dataset [11], using only English image and text pairs. We use moderately-sized models: a B/16 ViT for image embeddings and a B-sized transformer for text embeddings. The input images are resized to 224\(\times\)224 resolution. The text is tokenized by a 32 k-vocabulary sentencepiece tokenizer [24] trained on the English C4 dataset [32], and a maximum of 16 text tokens are kept.
The middle plot of Figure 2 shows the SigLIP results. With batch sizes below 32 k, SigLIP outperforms CLIP baselines that use the standard softmax loss. On the other end of the scale, the memory efficiency of the sigmoid loss enables much larger batch sizes. For example, with four TPU-v4 chips, we could fit a batch size of 4096 with a Base SigLIP but only 2048 with a corresponding CLIP model. The two advantages together demonstrate significant benefits of the sigmoid loss for language image pre-training with fixed resources, which will be discussed in Section 4.5. As the batch size increases, the gap between the sigmoid and the softmax losses diminishes. SigLIP performs best at a batch size of 32 k, whereas the softmax loss required 98 k for optimal performance and still did not outperform the sigmoid-based variant. Scaling further, a larger batch size like 307 k hurts both losses.

### mSigLIP: Multi-lingual pre-training

We further scale up the training data by keeping all the _100 languages_ from the WebLI dataset [11]. With multilingual data, one usually needs to use a larger international vocabulary. We first verify the impact of two tokenizers: a small multilingual vocabulary with 32 k tokens [32], and a large multilingual vocabulary with 250 k tokens [42]. We train B-sized ViT and text models for \(900\,M\) total examples seen, and observe slightly more than a 1% improvement when using the larger vocabulary. However, the token embeddings become huge for very large vocabulary sizes. Following the standard setup, we would need to store an \(N\times W\) token embedding lookup table to train the multilingual model, where \(N\) is the vocabulary size mentioned above and \(W\) is the embedding dimension of the text model. To save memory, we propose to use a "bottlenecked" token embedding: an \(N\times K\) embedding matrix followed by an additional \(K\times W\) projection, where the bottleneck \(K\) is much smaller than \(W\) (a small sketch is given at the end of this subsection). In our experiments, we observed that using a large multilingual vocabulary with a bottleneck can be scaled up as efficiently as using a small multilingual vocabulary. Specifically, by enabling a bottleneck of size \(K=96\) for the Base architecture with \(W=768\), we only see about a half-percent quality drop on ImageNet zero-shot transfer, compared to using the full \(250\,k\) vocabulary. With the memory improvements, we train mSigLIP models at various batch sizes, for a total of 30 billion examples seen. Table 2 and Figure 2 (right plot) show the results. A batch size of 32 k is sufficient for the multilingual setup as well. On the XM3600 cross-modal retrieval tasks, we found that going beyond a 32 k batch size leads to worse results on average, while ImageNet zero-shot transfer stays flat. mSigLIP sets a new state of the art on the XM3600 text-to-image retrieval task with only a Base-size model. Our best result is 34.9%, which is more than 6% higher than the previously reported 28.5% [11], obtained with a standard LiT model [47] using a much larger four-billion-parameter ViT-e model.
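As an illustration of the bottlenecked token embedding, the sketch below contrasts the parameter counts of the full lookup table and the factorized variant; the sizes \(N=250\,k\), \(K=96\) and \(W=768\) are taken from the text, while the initialization scale and the helper name `embed` are assumptions made only for this example.

```python
import numpy as np

N, K, W = 250_000, 96, 768          # vocabulary size, bottleneck width, text-model width

full_params = N * W                 # standard N x W table: 192.0M parameters
bottleneck_params = N * K + K * W   # N x K table + K x W projection: ~24.1M parameters

rng = np.random.default_rng(0)
small_table = rng.normal(scale=0.02, size=(N, K)).astype(np.float32)
proj = rng.normal(scale=0.02, size=(K, W)).astype(np.float32)

def embed(token_ids):
    """Bottlenecked embedding: look up K-dim vectors, then project up to W dims."""
    return small_table[token_ids] @ proj       # shape (..., W)

print(full_params, bottleneck_params)          # 192000000 24073728 (about 8x smaller)
print(embed(np.array([3, 17, 42])).shape)      # (3, 768)
```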
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & 16 k & 32 k & 64 k & 128 k & 240 k \\ \hline INet-0 & 71.6 & 73.2 & 73.2 & 73.2 & 73.1 \\ \hline XM avg & 34.8 & 34.9 & 34.4 & 33.6 & 32.7 \\ \hline XM de & 54.7 & 54.8 & 55.4 & 54.3 & 54.7 \\ XM en & 46.5 & 46.2 & 46.5 & 46.6 & 46.6 \\ XM hi & 9.1 & 8.5 & 7.9 & 8.1 & 7.3 \\ XM ru & 50.1 & 49.9 & 49.7 & 48.6 & 49.3 \\ XM zh & 30.7 & 32.5 & 32.0 & 30.6 & 23.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Multilingual SigLIP results with various batch sizes, pre-trained for 30 billion seen examples. We report zero-shot transfer results on ImageNet (INet-0) and averaged text-to-image retrieval results across 36 languages on the Crossmodal-3600 dataset (XM). The full table on all 36 languages can be found in the Appendix.

Figure 3: **SigLiT ImageNet 0-shot transfer results with different training durations.** A large batch size results in a big performance boost, but needs a sufficiently long schedule to ramp up: for short schedules, a very large batch size results in a small number of gradient update steps.

### SigLiT with four TPU-v4 chips

For many practitioners, the important question usually is "what can be trained with a limited amount of resources?" We explore the use of SigLiT models in this section with only four TPU-v4 chips, as the memory-efficient sigmoid loss is well suited to this scenario. We follow the same setup as in Section 4.1. We use the publicly available ViT-Augreg-B/8 [35] model as the frozen (locked) vision tower, and precompute embeddings to accelerate the training [47]. The text model is a Large Transformer, but with a depth of only 12 layers (instead of 24). It is trained using the Lion [10] optimizer with decoupled weight decay \(1\times 10^{-7}\) and a linear warm-up of the learning rate over 6.5 k steps up to a peak of \(1\times 10^{-4}\), followed by a cosine decay to 0. We train for a total of 65 000 steps with a batch size of 32 k; this leads to just under one day of training. Table 1 shows the results when training a model on four chips for one day, achieving 79.7% 0-shot ImageNet classification accuracy, which is very competitive in this limited-resource regime. With a ViT-g/14 [46] model as the vision tower and a Large text tower, we can train at a 20 k batch size on four chips for 107 k steps in under two days. This further pushes the 0-shot ImageNet classification accuracy up to 84.5%.

### SigLIP with a small number of TPU-v4 chips

It is resource-demanding to train a CLIP model from scratch in general; with SigLIP it is possible to fit a larger training batch size on a smaller number of chips. In this section, we explore ways to train SigLIP models efficiently with pre-trained weights. We use pre-trained weights to initialize the image model to accelerate the pre-training, which was originally discussed in [47]. We use the public and unlocked ViT-Augreg-B/16 [35] model to initialize our vision tower and fine-tune on the same WebLI English data as used for SigLIP. In all the experiments, we apply a 0.1 learning-rate multiplier to the pre-trained image tower to make it suitable for fine-tuning. Figure 4 presents unlocked fine-tuning results alongside from-scratch, randomly initialized baselines. We use 16 TPU-v4 chips and train at a 16 k batch size for 2.4 B examples seen. We found that the fine-tuning setup doesn't perform well out of the box; this is consistent with prior work [47] where fine-tuning image models degraded visual representation quality.
This is evidenced by ImageNet 10-shot linear classification, where in Figure 4 the fine-tuned setup is barely better than the from-scratch baseline. We hypothesize that the default weight decay applied to the pre-trained weights reduces their effectiveness. Motivated by the fine-tuning recipe from [15, 46, 22], which uses no weight decay, we also propose disabling weight decay on the pre-trained weights for SigLIP training. Weight decay is therefore only applied to the randomly initialized weights in the text model. This simple modification significantly improved SigLIP results. Figure 4 shows that with our improved recipe, SigLIP reaches 71% 0-shot accuracy on ImageNet, using a \(16\,k\) batch size, trained on 16 chips for three days. We also present from-scratch results in the bottom rows of Table 1: with 32 TPU-v4 chips for only two days, SigLIP achieves 72.1% 0-shot accuracy. This represents a significant training cost reduction, e.g. compared to the CLIP result (approx. 2500 TPU-v3-days for 72.6%) reported in [26].

Figure 4: **Top**: SigLIP with pre-trained encoders ramps up quickly. However, only disabling weight decay on the pre-trained encoder weights leads to stable behavior and good ImageNet 0-shot transfer results. **Bottom**: ImageNet 10-shot transfer results, where decaying the pre-trained weights leads to a deterioration of the pre-trained model's visual representation quality. Disabling weight decay makes the curve flatter.

Figure 5: **The effect of Adam and AdaFactor's \(\beta_{2}\).** As we increase the batch size, we observe more frequent training instability. This instability can mainly be seen in the loss curves (top) and is caused by spikes in the gradient norm (middle) which result in large parameter updates (bottom). Decreasing the \(\beta_{2}\) momentum value stabilizes the training. Even though occasional gradient spikes still happen (see the step at 2B), they do not destabilize the training process.

### Stabilizing large-batch training

As we move to large batch sizes, language image pre-training using transformers becomes increasingly unstable, even when using a modestly-sized model (e.g. Base size). The reason for these instabilities is large spikes in the gradient norms, which translate into large-magnitude changes in the weights that may destabilize the training process; see Figure 5. We observe that reducing \(\beta_{2}\) in Adam and AdaFactor from its default 0.999 to 0.95 (which was suggested in [18, 8]) is enough to stabilize the training.2 Intuitively, this allows recovering from gradient spikes more quickly. We opt for setting \(\beta_{2}=0.95\) by default for all our experiments. Footnote 2: Lucas thanks Kaiming He and Xinlei Chen for discussion of \(\beta_{2}\).

### Negative ratio in sigmoid loss

One question which arises when shifting the perspective from the softmax's "pick the right class" view to the sigmoid's "rate this pair" view is the imbalance between positive and negative pairs. For a batch size \(|\mathcal{B}|\), the batch contains \(|\mathcal{B}|\) positive pairs, but \(|\mathcal{B}|^{2}-|\mathcal{B}|\) negative examples. At the modest batch size of 16 k, there are actually 268 M negative examples for only 16 k positive ones. At the same time, because the sigmoid loss decomposes into a sum of per-example losses, we can perform controlled experiments to study the effect of the mini-batch composition and the distribution of examples visited. We run experiments in the SigLiT setup at a 16 k batch size for 900 M examples seen and vary the composition of the batch by masking out (_i.e._
ignoring) enough negative examples to reach a target "positive : negative" ratio, masking in the following ways:

* **Random:** Randomly choose negative pairs to mask.
* **Hard:** Keep the hardest negative pairs (highest loss).
* **Easy:** Keep the easiest negative pairs (lowest loss).
* **Hard + matching total pairs seen:** Masking examples while training for a fixed number of steps does decrease the total number of _pairs_ seen during training. Hence, in the _matched pairs_ setting, we increase the number of training steps by the masking ratio in order to keep the number of pairs seen constant.

Figure 6 shows the effect of the various masking strategies. Randomly removing negatives to rebalance does deteriorate performance. Keeping the easiest examples does not work at all, while keeping the hardest negatives does almost maintain the quality, indicating that, as could be expected, a lot of the learning on the negative side comes from the harder examples. This is further confirmed by the slightly increased performance of training longer on the hardest examples in order to match the total pairs seen. We also look at the value of the learned bias at the end of training as well as the average logit value for positive and negative examples across these settings, and find that the result mostly follows what one would expect: as fewer negatives are present, the bias and logits become more positive overall. Interestingly, when training with more hard negative pairs, the average logits of positive pairs stay mostly flat. This study confirms that (1) the imbalance does not seem to be a major reason for concern, while at the same time (2) coming up with an _efficient_ way of including more negative examples can be promising but is not trivial.

Figure 6: **The effect of batch composition.** We simulate various batch compositions by masking out negatives, either randomly, keeping only the hardest, or keeping only the easiest. With no masking, we have 16 k negatives for each positive in the batch (1:16 k), and the strongest masking we apply (1:1.6) results in almost balanced mini-batches. In one setting we _match total pairs_ seen by training for significantly longer. We observe the ImageNet 0-shot score, the final value of the learned bias, and the average logits of positive and negative pairs. Overall, the imbalance does not seem to be detrimental, but finding an _efficient_ way of mining negatives might be beneficial.

### Bias term in sigmoid loss

We ablate the bias term in the loss function, using the Base architecture with an 8 k batch size, trained for 900 M examples with the SigLIP setup. Zero-shot transfer results are reported on ImageNet [12], Oxford-IIIT Pet [29] and CIFAR-100 [23]. Table 3 presents results with and without a bias term in the sigmoid loss. Enabling the bias term with a \(-10\) initialization consistently improves performance across all tasks. This is because the bias term ensures that the training starts close to the prior, preventing dramatic over-correction in early optimization. In contrast, a randomly chosen bias term initialization, such as the 0 initialization in Table 3, fails to address the over-correction issue, leading to significantly worse results. This effect is particularly noticeable when using a small temperature \(t^{\prime}\) initialization. We set the bias and temperature initialization to \(b=-10\) and \(t^{\prime}=\log 10\) (hence \(t=10\)) as the default for all experiments.
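As a quick numerical sanity check of this choice, one can evaluate the loss at initialization under the simplifying assumption that the freshly initialized embeddings are uncorrelated, so that every logit \(t\,\mathbf{x}_{i}\cdot\mathbf{y}_{j}\approx 0\); the batch size of 16 k below is used only for illustration.

```python
import numpy as np

def loss_at_init(batch_size, bias):
    """Sigmoid loss at initialization, assuming all similarities t*x.y are ~0.

    Every row contributes one positive term and (batch_size - 1) negative terms,
    and all rows are identical, so this row sum equals the full batch loss.
    """
    pos = np.logaddexp(0.0, -bias)   # -log sigmoid(+bias), the positive-pair term
    neg = np.logaddexp(0.0, bias)    # -log sigmoid(-bias), one negative-pair term
    return pos + (batch_size - 1) * neg

print(loss_at_init(16_384, bias=-10.0))  # ~10.7: dominated by the single positive term
print(loss_at_init(16_384, bias=0.0))    # ~11356: ~0.69 per pair, the negatives dominate
```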
### Label noise robustness

Prior works demonstrated improved robustness against label noise when using the sigmoid loss for classification models [2]. This property would be particularly useful here in the face of the famously noisy nature of popular large-scale image-text datasets. In order to study this for SigLIP, we train M/16 image models alongside an M text model at a batch size of 16 384 for 3.6 billion examples seen. We corrupt the training data using one of the following methods:

* **Image**: With probability \(p\), replace the image with uniform random noise.
* **Text**: With probability \(p\), replace the tokenized text with a new sequence of randomly sampled tokens, up to some (sampled) sequence length.
* **Batch alignment**: Randomly shuffle the ordering of \(p\)% of the batch.
* **Image & text**: Independently apply (1) and (2), each with probability \(p\).
* **Image, text & batch**: Alongside (4), also shuffle a fraction \(p\) of the alignments.

Results from varying the likelihood of the corruption are shown in Figure 7. Models trained with the sigmoid loss are consistently more robust than their softmax counterparts to all kinds of added noise.

\begin{table} \begin{tabular}{c c c c c} \hline \hline b & t\({}^{\prime}\) & INet-0 & Pet-0 & C100-0 \\ \hline n/a & log 10 & 62.0 & 81.8 & 59.9 \\ -10 & log 10 & **63.0** & **82.4** & **61.0** \\ -10 & log 1 & 61.0 & 80.0 & 60.4 \\ 0 & log 10 & 61.7 & 79.9 & 59.0 \\ 0 & log 1 & 53.7 & 73.2 & 53.8 \\ \hline \hline \end{tabular} \end{table} Table 3: **Bias (b) and temperature (t\({}^{\prime}\)) initialization.** Results are reported using the Base architecture with an 8 k batch size, trained for 900 M examples. Enabling the bias term \(b\) with a \(-10\) initialization improves results consistently.

Figure 7: **Sigmoid-training increases robustness to data noise.** Titles show the type of corruption applied, and x-axes show the probability with which it is applied. With increasing corruption severity, M-scale models trained with the sigmoid loss for 3.6 billion examples retain superiority over the corresponding softmax baseline.

## 5 Conclusion

We conducted a study of two language-image pre-training recipes that use the sigmoid loss: SigLiT and SigLIP. Our results demonstrate that the sigmoid loss performs better than the softmax baseline, particularly for small training batch sizes. This loss function is also more memory-efficient, which allows larger training batch sizes without requiring additional resources. We performed a thorough investigation of the batch size in contrastive learning. Surprisingly, we found that a relatively modest batch size of 32 k yielded nearly optimal performance. We also performed further studies to better understand the introduced bias term, the robustness to data noise, and the impact of the ratio of positive to negative pairs in the sigmoid loss. We hope this work will facilitate language-image pre-training research with limited resources.

Acknowledgements. We thank Daniel Keysers, Ilya Tolstikhin, Olivier Bousquet and Michael Tschannen for their valuable feedback and discussions on this paper. We thank Joan Puigcerver, Josip Djolonga and Blake Hechtman for discussions on efficient implementations of the chunked contrastive loss. We also thank Ross Wightman for spotting a mistake in the pseudocode in the first version of this paper. Similarly, we thank Boris Dayma for spotting a typo making \(t\) vs \(t^{\prime}\) confusing, which we fixed in the third version of this paper.
As always, we thank the Google Brain team at large for providing a supportive research environment. We use the big_vision codebase [4, 3] for all experiments in this project.
2307.08183
Tangent Ind-Categories
In this paper we show that if $\mathscr{C}$ is a tangent category then the Ind-category $\operatorname{Ind}(\mathscr{C})$ is a tangent category as well with a tangent structure which locally looks like the tangent structure on $\mathscr{C}$. Afterwards we give a pseudolimit description of $\operatorname{Ind}(\mathscr{C})_{/X}$ when $\mathscr{C}$ admits finite products, show that the $\operatorname{Ind}$-tangent category of a representable tangent category remains representable (in the sense that it has a microlinear object), and we characterize the differential bundles in $\operatorname{Ind}(\mathscr{C})$ when $\mathscr{C}$ is a Cartesian differential category. Finally we compute the $\operatorname{Ind}$-tangent category for the categories $\mathbf{CAlg}_{A}$ of commutative $A$-algebras, $\mathbf{Sch}_{/S}$ of schemes over a base scheme $S$, $A$-$\mathbf{Poly}$ (the Cartesian differential category of $A$-valued polynomials), and $\mathbb{R}$-$\mathbf{Smooth}$ (the Cartesian differential category of Euclidean spaces). In particular, during the computation of $\operatorname{Ind}(\mathbf{Sch}_{/S})$ we give a definition of what it means to have a formal tangent scheme over a base scheme $S$.
Geoff Vooys
2023-07-17T01:17:13Z
http://arxiv.org/abs/2307.08183v1
# Tangent Ind-Categories ###### Abstract. In this paper we show that if \(\mathscr{C}\) is a tangent category then the \(\operatorname{Ind}\)-category \(\operatorname{Ind}(\mathscr{C})\) is a tangent category as well with a tangent structure which locally looks like the tangent structure on \(\mathscr{C}\). Afterwards we give a pseudolimit description of \(\operatorname{Ind}(\mathscr{C})_{/X}\) when \(\mathscr{C}\) admits finite products, show that the \(\operatorname{Ind}\)-tangent category of a representable tangent category remains representable (in the sense that it has a microlinear object), and we characterize the differential bundles in \(\operatorname{Ind}(\mathscr{C})\) when \(\mathscr{C}\) is a Cartesian differential category. Finally we compute the \(\operatorname{Ind}\)-tangent category for the categories \(\mathbf{CAlg}_{A}\) of commutative \(A\)-algebras, \(\mathbf{Sch}_{/S}\) of schemes over a base scheme \(S\), \(A\)-\(\mathbf{Poly}\) (the Cartesian differential category of \(A\)-valued polynomials), and \(\mathbb{R}\)-\(\mathbf{Smooth}\) (the Cartesian differential category of Euclidean spaces). In particular, during the computation of \(\operatorname{Ind}(\mathbf{Sch}_{/S})\) we give a definition of what it means to have a formal tangent scheme over a base scheme \(S\). Key words and phrases:Tangent category, \(\operatorname{Ind}\)-category, \(\operatorname{Ind}\)-completion, Cartesian differential category, formal scheme 2020 Mathematics Subject Classification: Primary 18F40 ###### Contents * 1 Introduction * 1.1 Results and Structure of the Paper * 2 A Review of Tangent Categories * 3 Tangent Ind-Categories * 4 Properties of Ind-Tangent Categories * 4.1 A Pseudolimit Result * 4.2 Representable Ind-Tangent Categories * 4.3 Differential Bundles and Ind-Tangent Categories * 5 Ind-Tangent Category Examples * 5.1 The Tangent Structure on Commutative \(A\)-Algebras * 5.2 The Zariski Tangent Structure on Formal Schemes * 5.3 The Ind-Tangent Category of the CDC of Polynomials * 5.4 The Ind-Tangent Category of the CDC of Smooth Maps ## 1. Introduction This paper is a study of two different categorical geometric concepts and the ways in which they interact with each other. On one hand, we have the categorification of tangent bundles and differential geometry (as discovered by Rosicky in [17] and later rediscovered by Cockett and Crutwell in [3]) and the categorical approach to differential reasoning by the semantics of the tangent bundle functor itself; on the other hand we have \(\operatorname{Ind}\)-categories (as discovered by Grothendieck in [1] in its relation to presheaf toposes the mechanics involved in taking filtered colimits) which give an inductive cocompletion of a category and allow for well-behaved "at the limit" geoemtric arguments. In particular, \(\operatorname{Ind}\)-completions and \(\operatorname{Ind}\)-arguemnts often carry local, nearly infinitesimal, information and behave as though they are completions of spaces in certain adic topologies. Because of the geometric similarities between studying infinitesimal neighborhoods of successive tangent vectors and their geometry together with the completions of a space in an adic topology involving tangent vectors, in this paper we examine how these categorical structures interact. Let us first recall in more detail how tangent categories are incarnated. 
On one hand, tangent categories are a categorical generalization of the tangent bundle of a (smooth) manifold and provide a category-theoretic setting in which to do differential geometry. Originally discovered by Rosicky in [17] and later rediscovered and refined by Cockett and Crutwell in [3], tangent categories have given connections between differential geometry, higher category theory, (de Rham) cohomology, linear logic, programming, machine learning, and have become ubiquitous in both theoretical computer science and pure mathematics. These have been studied in detail in many of these different contexts, and in [7] we even see some interactions of enriched tangent structures. Moreover, work of Crutwell and Lemay (cf. [5]) has recently shown that the category \(\mathbf{Sch}_{/S}\) of schemes over a base scheme \(S\) admits a tangent structure as well. On the other hand, ind-categories arise as the free filtered cocompletion of a category \(\mathscr{C}\) and were first studied in detail by Grothendieck in [1]. These categories are important in algebraic geoemtry, as by [19] there is a close relationship with the categories \(\mathbf{FSch}_{/S}\) and \(\mathrm{Ind}(\mathbf{Sch}_{/S})\) where \(\mathbf{FSch}_{/S}\) is the category of formal schemes over \(S\). This means in particular that ind-categories can be thought of as formal completions of objects along some closed subobject. We then can think of each object in \(\mathrm{Ind}(\mathscr{C})\), by way of formal analogy, as an object which is (adically) completed along some closed subobject and hence admits all formal infinitesimal neighborhoods along that subobject. Our intuition as we proceed is to study these formal infinitesimal neighborhoods in these objects when they _also_ have a notion of tangent in the following sense: if we have infinitesimal neighborhoods and we have tangent vectors, then we should expect to be able to complete the tangent space to allow infinitesimal neighborhoods of the tangent vectors themselves. Note that this sits in contrast to the fact that if we take the free cocompletion of the category \(\mathscr{C}\) (as modeled, for instance, by the presheaf topos \([\mathscr{C}^{\mathrm{op}},\mathbf{Set}]\)), we cannot expect \([\mathscr{C}^{\mathrm{op}},\mathbf{Set}]\) to be a tangent category even when \(\mathscr{C}\) is -- cf. [3] for details. The difference here is that our infinitesimal neighborhoods are sufficiently well behaved (via their filtrations) whereas the free cocompletion has colimits which are too wild to witness and interact well with the categorical limits that describe tangent-theoretic information. ### Results and Structure of the Paper We begin this paper with a quick review of tangent categories, their morphisms, and the category (and 2-category) of tangent categories. Afterwards we familiarize ourselves with the ind-construction and get to know the \(\mathrm{Ind}\) pseudofunctor \(\mathrm{Ind}:\mathfrak{Cat}\to\mathfrak{Cat}\). We then study the interaction of the tangent structures as they pass through the \(\mathrm{Ind}\) pseudofunctor and lift to the ind-case. This culminates in our first big result, which is Theorem 3.27. **Theorem 1.1** (cf. Theorem 3.27).: _Let \((\mathscr{C},\mathbb{T})\) be a tangent category. 
Then the category \((\mathrm{Ind}(\mathscr{C}),\mathrm{Ind}(\mathbb{T}))\) is a tangent category where \(\mathrm{Ind}(\mathbb{T})\) is the tangent structure_ \[\mathrm{Ind}(\mathbb{T}):=(\mathrm{Ind}(T),\hat{p},\mathrm{Ind}(0),\hat{+}, \hat{\ell},\hat{c})\] _where \(\mathrm{Ind}(T)\) is the indicization of \(T\) induced by Proposition 3.7 and \(\hat{p},\mathrm{Ind}(0),\hat{+},\hat{\ell},\) and \(\hat{c}\) are the natural transformations constructed in Lemmas 3.15, 3.19, 3.21, and Lemma 3.23, respectively._ After establishing this we show a functoriality result in Theorem 3.28 (which, as a corollary, allows us to deduce that \(\mathrm{Ind}:\mathfrak{Cat}\to\mathfrak{Cat}\) restricts to a pseudofunctor \(\mathrm{Ind}:\mathfrak{Tan}\to\mathfrak{Tan}\)). **Theorem 1.2** (cf. Theorem 3.28).: _Let \((F,\alpha):(\mathscr{C},\mathbb{T})\to(\mathscr{D},\mathbb{S})\) be a morphism of tangent categories. Then the induced map \((\mathrm{Ind}(F),\hat{\alpha}):(\mathrm{Ind}(\mathscr{C}),\mathrm{Ind}( \mathbb{T}))\to(\mathrm{Ind}(\mathscr{D}),\mathrm{Ind}(\mathbb{S}))\) is a morphism of tangent categories where \(\hat{\alpha}\) is the natural transformation:_ _Furthermore, \((\mathrm{Ind}(F),\hat{\alpha})\) is a strong tangent morphism if and only if \((F,\alpha)\) is a strong tangent morphism._ We also show how the \(\operatorname{Ind}\) construction interacts with the forgetful functor \(\operatorname{Forget}:\mathfrak{Tan}\to\mathfrak{Cat}\) and the free tangent functor \(\operatorname{Free}:\mathfrak{Cat}\to\mathfrak{Tan}\). More explicitly we show that the diagram commutes strictly but in the other case that there is a pseudonatural transformation which cannot be a pseudonatural equivalence. In particular, we show that \(\operatorname{Ind}\) does not commute with Free, even up to pseudonatural equivalence. After presenting these theorems we show how the \(\operatorname{Ind}\)-tangent structure interacts with various constructions which are important in the tangent category literature. In particular, we show that the \(\operatorname{Ind}\)-tangent category of a representable tangent category remains representable and then determine when, for a Cartesian differential category \(\mathscr{C}\), an object in \(\operatorname{Ind}(\mathscr{C})\) is a differential bundle over the terminal object \(\top_{\operatorname{Ind}(\mathscr{C})}\). As a corollary we give a necessary and sufficient condition for recognizing when, for a Cartesian differential category \(\mathscr{C}\), \(\operatorname{Ind}(\mathscr{C})\) is not a Cartesian differential category. **Proposition 1.3** (cf. Proposition 4.4).: _Let \(\mathscr{C}\) be a Cartesian closed tangent category. If \(\mathscr{C}\) is representable then there is an object \(\underline{D}\) in \(\operatorname{Ind}(\mathscr{C})\) for which \(\operatorname{Ind}(T)\cong[\underline{D},-]\)._ **Proposition 1.4** (cf. Proposition 4.23).: _Let \(\mathscr{C}\) be an \(A\)-linear Cartesian differential category for a commutative rig \(A\). 
Then \(\mathscr{C}=\mathscr{C}_{\,D\,\operatorname{lin}}\) if and only if every object in \(\operatorname{Ind}(\mathscr{C})\) is a differential bundle over the terminal object \(\top_{\operatorname{Ind}(\mathscr{C})}\)._ We close this paper with a review of the tangent structure on the categories \(\operatorname{\mathbf{CAlg}}_{A}\) of commutative algebras over a commutative rig \(A\); \(\operatorname{\mathbf{Sch}}_{/S}\) over a base scheme \(S\); the \(A\)-linear Cartesian differential category \(A\)**-Poly** of polynomials over, again, a commutative rig \(A\); and then the Cartesian differential category \(\mathbb{R}\)**-Smooth** of Euclidean spaces and smooth maps between them. Of particular interest are a definition of a formal tangent scheme (together with an explicit calculation) together with the following characterizations of the differential objects in \(\operatorname{Ind}(A\)**-Poly**) and \(\operatorname{Ind}(\mathbb{R}\)**-Smooth**). **Theorem 1.5** (cf. Theorem 5.10).: _The category of differential objects in \(\operatorname{Ind}(A\)-\(\operatorname{\mathbf{Poly}})\) together with linear bundle maps between them is equivalent to \(\operatorname{Ind}(A\)-\(\operatorname{\mathbf{Mod}}_{\operatorname{f.d.}})\). In particular, there is an equivalence of categories of \(\operatorname{\mathbf{Diff}}(\operatorname{Ind}(A\)-\(\operatorname{\mathbf{Poly}}))\) and the category \(A\)-\(\operatorname{\mathbf{Mod}}\) of \(A\)-modules._ **Theorem 1.6** (cf. Theorem 5.12).: _The category of differential objects in \(\operatorname{Ind}(\mathbb{R}\)**-\(\operatorname{\mathbf{Smooth}})\) together with linear bundle maps between them is equivalent to \(\operatorname{Ind}(\mathbb{R}\)**-\(\operatorname{\mathbf{Vec}}_{\operatorname{f.d.}})\). In particular, there is an equivalence of categories between \(\operatorname{\mathbf{Diff}}(\operatorname{Ind}(\mathbb{R}\)**-\(\operatorname{\mathbf{Smooth}}))\) and the category \(\mathbb{R}\)**-\(\operatorname{\mathbf{Vec}}\) of \(\mathbb{R}\) vector spaces._ **Acknowledgments.** I would like to thank JS Lemay for many interesting conversations regarding this material and suggesting many of the directions in which to examine the \(\operatorname{Ind}\)-construction and the \(\operatorname{Ind}\)-tangent structure1. I'd also like to thank JS, Rick Blute, and Dorette Pronk for reading early drafts of this paper and providing helpful suggestions. Additionally I'd like to thank Rory Lucyshyn-Wright for interesting conversations regarding the \(\operatorname{Ind}\)-pseudofunctor. Some of this work was presented at the 2023 Foundational Methods in Computer Science conference, and I'm grateful to the organizers for the hospitality and intellectually stimulating environment which allowed me to produce Sections 4 - 5 of this paper. ## 2. A Review of Tangent Categories Let us recall what it means for a category to be a tangent category following R. Cockett and G. Crutwell in [3]. These categories are an abstraction of what it means to have a category and tangent bundle functor, together with all the relations and structure we require/expect of such a bundle object. In what follows below, if \(\alpha:FX\to X\) is a natural transformation of an endofunctor \(F:\mathscr{C}\to\mathscr{C}\), we write \(F_{2}X\) for the pullback if it exists in \(\mathscr{C}\). **Definition 2.1** ([3, Definition 2.3]).: A category \(\mathscr{C}\) has a tangent structure \(\mathbb{T}=(T,p,0,+,\ell,c)\) when the following six axioms hold: 1. 
\(T:\mathscr{C}\to\mathscr{C}\) is a functor equipped with a natural transformation \(p:T\Rightarrow\operatorname{id}_{\mathscr{C}}\) such that for every object \(X\) in \(\mathscr{C}\) all pullback powers of \(p_{X}:TX\to X\) exist and for all \(n\in\mathbb{N}\) the functors \(T^{n}\) preserve these pullback powers.
2. There are natural transformations \(+:T_{2}\Rightarrow T\) and \(0:\operatorname{id}_{\mathscr{C}}\Rightarrow T\) for which each map \(p_{X}:TX\to X\) describes an additive bundle in \(\mathscr{C}\), i.e., \(p_{X}:TX\to X\) is an internal commutative monoid in \(\mathscr{C}_{/X}\) with addition and unit given by \(+\) and \(0\), respectively.
3. There is a natural transformation \(\ell:T\Rightarrow T^{2}\) such that for any \(X\in\mathscr{C}_{0}\) the square \[\begin{CD}TX@>{\ell_{X}}>>T^{2}X\\ @V{p_{X}}VV@VV{Tp_{X}}V\\ X@>{0_{X}}>>TX\end{CD}\] commutes and, together with the analogous compatibilities with \(+\) and \(0\), the pair \((\ell_{X},0_{X})\) is a morphism of bundles in \(\mathscr{C}\) from \(p_{X}:TX\to X\) to \(Tp_{X}:T^{2}X\to TX\) (cf. [3, Definition 2.2]).
4. There is a natural transformation \(c:T^{2}\Rightarrow T^{2}\) such that for all \(X\in\mathscr{C}_{0}\) the pair \((c_{X},\operatorname{id}_{TX})\) is a morphism of bundles from \(Tp_{X}:T^{2}X\to TX\) to \(p_{TX}:T^{2}X\to TX\) (cf. [3, Definition 2.2]).
5. The natural transformations \(\ell\) and \(c\) satisfy the coherence relations of [3, Definition 2.3]; in particular \(c\circ c=\operatorname{id}_{T^{2}}\) and \(c\circ\ell=\ell\).
6. The vertical lift \(\ell\) satisfies the universality condition of [3, Definition 2.3].

These axioms are commonly glossed as follows:
* Axiom 1 names \(T\) the tangent functor of the tangent structure \(\mathbb{T}\) on \(\mathscr{C}\) and \(p\) the bundle map.
* Axiom 2 describes the object \(TX\to X\) as an additive tangent bundle in \(\mathscr{C}\).
* Axiom 3 names \(\ell\) as the vertical lift which lifts a tangent to a tangent of a tangent.
* Axiom 4 names \(c\) as the canonical flip which looks like the interchange of mixed partial derivatives.
* Axiom 5 gives the coherence relations \(\ell\) and \(c\) must satisfy.
* Axiom 6 describes the universality of the vertical lift.

Note also that when we say that \(\mathscr{C}\) is a tangent category, we really mean that \((\mathscr{C},\mathbb{T})\) is a category equipped with a specific tangent structure \(\mathbb{T}\) and have simply left the explicit mention of the tangent structure \(\mathbb{T}\) out.
In fact, this is an abuse of notation; it is possible for a category to have multiple distinct tangent structures so we will only say \(\mathscr{C}\) is a tangent category to mean that \((\mathscr{C},\mathbb{T})\) is a category equipped with a specific (potentially unspecified) tangent structure. Equally as important to tangent categories are the morphisms of tangent categories. These come in two flavours: one, which is lax, and another which is strong. While we will usually work with strong tangent morphisms, it is important for Theorem 3.28 to have the full definition. **Definition 2.3** ([3, Definition 2.7]).: Let \((\mathscr{C},\mathbb{T})=(\mathscr{C},T,p,0,+,\ell,c)\) and \((\mathscr{D},\mathbb{S})=(\mathscr{D},S,q,0^{\prime},\oplus,\ell^{\prime},c^{ \prime})\) be tangent categories. A morphism of tangent categories is a pair \((F,\alpha):(\mathscr{C},\mathbb{T})\to(\mathscr{D},\mathbb{S})\) where \(F:\mathscr{C}\to\mathscr{D}\) is a functor and \(\alpha\) is a natural transformation for which the diagrams of functors and natural transformations commute. Additionally, we say that the morphism \((F,\alpha)\) is strong if \(\alpha\) is a natural isomorphism and if \(F\) preserves the equalizers and pullbacks of the tangent structure \((\mathscr{C},\mathbb{T})\). To finish up this section we give a short description of the 1 and 2-categories of tangent categories, tangent morphisms, and their natural transformations. **Definition 2.4**.: The category \(\mathbf{Tan}\) is defined as follows: * Objects: Tangent categories \((\mathscr{C},\mathbb{T})\). * Morphisms: Tangent morphisms \((F,\alpha):(\mathscr{C},\mathbb{T})\to(\mathscr{D},\mathbb{S})\). * Composition: The composition of two tangent morphisms \((F,\alpha):(\mathscr{C},\mathbb{T})\to(\mathscr{D},\mathbb{S})\) and \((G,\beta):(\mathscr{D},\mathbb{S})\to(\mathscr{E},\mathbb{R})\) is defined to be the pair \[\big{(}G\circ F,(\beta\ast F)\circ(G\ast\alpha)\big{)}.\] * Identities: The identity on a tangent category is \((\operatorname{id}_{\mathscr{C}},e):(\mathscr{C},\mathbb{T})\to(\mathscr{C}, \mathbb{T})\) where \(e\) is the natural transformation witnessing the equality \(\operatorname{id}_{\mathscr{C}}\circ T=T\circ\operatorname{id}_{\mathscr{C}}\). **Definition 2.5**.: The 2-category \(\mathfrak{Tan}\) is defined by: * Zero-and-one-morphisms: As in \(\mathbf{Tan}\). 
* 2-morphisms: A 2-morphism \((F,\alpha)\Rightarrow(G,\beta)\) between tangent morphisms \((\mathscr{C},\mathbb{T})\to(\mathscr{D},\mathbb{S})\) is a natural transformation \(\theta:F\Rightarrow G\) which is compatible with the structure transformations \(\alpha\) and \(\beta\).

## 3. Tangent Ind-Categories

Before lifting tangent structures through the \(\operatorname{Ind}\)-construction we recall the ind-completion of a category, beginning with its incarnation as a category of presheaves. For a category \(\mathscr{C}\), the category \(\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})\) is defined as follows:

* Objects: Presheaves \(P\in[\mathscr{C}^{\mathrm{op}},\mathbf{Set}]\) for which there is an isomorphism of functors \[P\cong\varinjlim_{i\in I}\mathscr{C}(-,X_{i})=\varinjlim_{i\in I}\mathbf{y}\,X_{i}\] where \(I\) is a filtered category.
* Morphisms: For any objects \(P,Q\in\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})_{0}\), we define \[\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})(P,Q):=[\mathscr{C}^{\mathrm{op}},\mathbf{Set}](P,Q).\]
* Composition and identities: As in \([\mathscr{C}^{\mathrm{op}},\mathbf{Set}]\).

We will call the objects of \(\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})\) ind-presheaves of \(\mathscr{C}\). In [12] the category \(\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})\) is taken as the definition of the ind-category of \(\mathscr{C}\). We will take an approach more like the one used in [1] where we think of \(\operatorname{Ind}(\mathscr{C})\) as the category of filtered colimits by keeping track of the filtered diagram in \(\mathscr{C}\) which defines the colimit. The only difficult part of this more "representation agnostic" view is that it is more difficult to define hom-sets. However, from the presheaf description we have above and the (co)Yoneda Lemma we get natural isomorphisms which tell us how to define the \(\operatorname{Ind}\)-hom sets. Fix two ind-presheaves \(P\) and \(Q\) over \(\mathscr{C}\) and write each as a filtered colimit \[P\cong\varinjlim_{i\in I}\mathbf{y}(X_{i})\] and \[Q\cong\varinjlim_{j\in J}\mathbf{y}(Y_{j}).\] We then have, by definition, the (co)Yoneda Lemma, the Yoneda Lemma, and the fact that (co)limits in \([\mathscr{C}^{\mathrm{op}},\mathbf{Set}]\) are representable (cf. [1, Equation I.8.2.5.1]), \[\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})(P,Q)\cong[\mathscr{C}^{\mathrm{op}},\mathbf{Set}]\left(\varinjlim_{i\in I}\mathbf{y}\,X_{i},\varinjlim_{j\in J}\mathbf{y}\,Y_{j}\right)\cong\varprojlim_{i\in I}[\mathscr{C}^{\mathrm{op}},\mathbf{Set}]\left(\mathbf{y}\,X_{i},\varinjlim_{j\in J}\mathbf{y}\,Y_{j}\right)\] \[\cong\varprojlim_{i\in I}\left(\varinjlim_{j\in J}[\mathscr{C}^{\mathrm{op}},\mathbf{Set}](\mathbf{y}\,X_{i},\mathbf{y}\,Y_{j})\right)\cong\varprojlim_{i\in I}\left(\varinjlim_{j\in J}\mathscr{C}(X_{i},Y_{j})\right).\] In particular, this leads us to a definition of hom-sets in a "representation agnostic" formulation of \(\operatorname{Ind}(\mathscr{C})\).

**Definition 3.3** ([1, Section I.8.2]).: If \(\mathscr{C}\) is a category then we define an ind-object of \(\mathscr{C}\) to be a functor \(F:I\to\mathscr{C}\) where \(I\) is a filtered category. If no confusion is likely to arise, we will write ind-objects as systems \(\underline{X}=(X_{i})_{i\in I}=(X_{i})\) where \(X_{i}:=F(i)\) for all \(i\in I_{0}\) and leave the transition morphisms \(F(f),f\in I_{1}\), and even the functor \(F\), implicit.

**Definition 3.4** ([1, Section I.8.2]).: The ind-category \(\operatorname{Ind}(\mathscr{C})\) of a category \(\mathscr{C}\) is defined as follows:

* Objects: \(\operatorname{Ind}\)-objects \(\underline{X}\) of \(\mathscr{C}\).
* Morphisms: We define the \(\operatorname{hom}\)-sets \(\operatorname{Ind}(\mathscr{C})(\underline{X},\underline{Y})\) for \(\underline{X}=(X_{i})_{i\in I}\) and \(\underline{Y}=(Y_{j})_{j\in J}\) via \[\operatorname{Ind}(\mathscr{C})(\underline{X},\underline{Y}):=\varprojlim_{i\in I}\left(\varinjlim_{j\in J}\mathscr{C}(X_{i},Y_{j})\right).\]
* Composition: Induced by degree-wise composition after taking the filtered limit of the filtered colimit of maps \(X_{i}\to Y_{j}\).
* Identities: \(\operatorname{id}_{(X_{i})}\) is the identity transformation for the functor \(\underline{X}:I\to\mathscr{C}\).

**Remark 3.5**.: While the definition of \(\operatorname{hom}\)-sets above looks complicated in general, defining morphisms between ind-objects that have the same indexing category is straightforward. In the case \(\underline{X}=(X_{i})_{i\in I}\) and \(\underline{Y}=(Y_{i})_{i\in I}\), a morphism \(\rho:\underline{X}\to\underline{Y}\) is given by a collection of maps \(\rho_{i}:X_{i}\to Y_{i}\) for all \(i\in I\) which are compatible with the transition morphisms of each functor. In particular, \(\rho\) is a natural transformation when \(F\) and \(G\) are the functors representing \(\underline{X}\) and \(\underline{Y}\), respectively.
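For orientation, it is worth recording a routine special case of this formula (it is not taken from [1], but is immediate from the definition): if \(\underline{Y}\) is the constant system determined by a single object \(Y\) of \(\mathscr{C}\), indexed by the one-object category, then the inner colimit is over a one-object diagram and the formula collapses to \[\operatorname{Ind}(\mathscr{C})\big((X_{i})_{i\in I},Y\big)\cong\varprojlim_{i\in I}\mathscr{C}(X_{i},Y),\] so that a morphism from an ind-object to an object of \(\mathscr{C}\) is precisely a family of morphisms \(X_{i}\to Y\) compatible with the transition morphisms of \(\underline{X}\).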
When working with \(\operatorname{Ind}(\mathscr{C})\) it is frequently helpful to represent each object as a presheaf (and in fact we will use this technique below); we will thus follow [1] and define the functor \(L:\operatorname{Ind}(\mathscr{C})\to[\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\) as follows:
* For any ind-object \(\underline{X}=(X_{i})_{i\in I}\) define \(L(\underline{X})\) to be the presheaf \[L(\underline{X})=\varinjlim_{i\in I}\mathbf{y}\,X_{i}.\]
* For any morphism \(\rho:\underline{X}\to\underline{Y}\) in \(\operatorname{Ind}(\mathscr{C})\) we define \(L(\rho)\) to be the image of \(\rho\) under the natural isomorphism \[\operatorname{Ind}(\mathscr{C})(\underline{X},\underline{Y})=\varprojlim_{i\in I}\left(\varinjlim_{j\in J}\mathscr{C}(X_{i},Y_{j})\right)\cong[\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\left(\varinjlim_{i\in I}\mathbf{y}\,X_{i},\varinjlim_{j\in J}\mathbf{y}\,Y_{j}\right)=[\mathscr{C}^{\operatorname{op}},\mathbf{Set}](L\underline{X},L\underline{Y}).\]

**Proposition 3.6** ([1, Section I.8.2.4; cf. Line I.8.2.4.8]).: _The functor \(L:\operatorname{Ind}(\mathscr{C})\to[\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\) is fully faithful, exact, and has essential image equal to the category \(\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})\). In particular, \(\operatorname{Ind}(\mathscr{C})\simeq\operatorname{Ind}_{\mathbf{PSh}}(\mathscr{C})\)._

This means that when we want to check whether certain objects are isomorphic, we may perform our constructions in the ind-category, where they are often more straightforward to carry out (there is less worry about making sure you haven't done something weird and misused the Yoneda Lemma), and then check for isomorphisms through the functor \(L\).2 We now recall and discuss the construction of ind-functors so that we can then build ind-natural transformations and introduce the ind-pseudofunctor. Footnote 2: Because \(L\) is fully faithful, it is in particular conservative so isomorphisms \(L\underline{X}\cong L\underline{Y}\) come uniquely from isomorphisms \(\underline{X}\cong\underline{Y}\).

**Proposition 3.7** ([1, Section I.8.6.1]).: _If \(F:\mathscr{C}\to\mathscr{D}\) is a functor then there is a functor \(\operatorname{Ind}(F):\operatorname{Ind}(\mathscr{C})\to\operatorname{Ind}(\mathscr{D})\) for which, given a composable pair of functors \(\mathscr{C}\overset{F}{\to}\mathscr{D}\overset{G}{\to}\mathscr{E}\), there is a natural isomorphism_ \[\operatorname{Ind}(G\circ F)\cong\operatorname{Ind}(G)\circ\operatorname{Ind}(F).\]

**Remark 3.8**.: The construction of the functor \(\operatorname{Ind}(F):\operatorname{Ind}(\mathscr{C})\to\operatorname{Ind}(\mathscr{D})\) is important for our applications, so we will describe it here in the two main ways of working with the category \(\operatorname{Ind}(\mathscr{C})\): the reduced \(\underline{X}=(X_{i})\) language and the more formal language \(\underline{X}:I\to\mathscr{C}\). In the first case, the functor \(\operatorname{Ind}(F)\) acts by \[\operatorname{Ind}(F)(\underline{X}):=(FX_{i}).\] In the second case we find that \[\operatorname{Ind}(F)(\underline{X})=F\circ\underline{X}:I\to\mathscr{D}\,.\] The assignment on morphisms is more complicated to describe.
If we have a morphism \[\rho\in\operatorname{Ind}(\mathscr{C})((X_{i})_{i\in I},(Y_{j})_{j\in J})\] then \(\operatorname{Ind}(F)(\rho)\) is induced by the assignment \[\lim_{I}\left(\operatorname{colim}_{J}\rho_{ij}\right)\mapsto\lim_{I}\left( \operatorname{colim}_{J}F(\rho_{ij})\right)\] However, there is a clean description in case \(\underline{X},\underline{Y}:I\to\mathscr{C}\) are functors defined on the same indexing category. In this case a morphism \(\rho:\underline{X}\to\underline{Y}\) is exactly a natural transformation and so we define \(\operatorname{Ind}(F)\) algebraically by \[\operatorname{Ind}(F)(\rho_{i})=(F\rho_{i})\] and, in the categorical perspective, \[\operatorname{Ind}(F)(\rho)=F*\rho.\] **Corollary 3.9**.: _For any category \(\mathscr{C}\) there is an equality \(\operatorname{Ind}(\operatorname{id}_{\mathscr{C}})=\operatorname{id}_{ \operatorname{Ind}(\mathscr{C})}\)._ Proof.: The verification that \(\operatorname{id}_{\operatorname{Ind}(\mathscr{C})}(\underline{X})=\underline{X}= \operatorname{Ind}(\operatorname{id}_{\mathscr{C}})(\underline{X})\) for any object \(\underline{X}\) is trivial and omitted. For the case of morphisms note that if \(\lim(\operatorname{colim}\rho_{ij}):(X_{i})\to(Y_{j})\) is any morphism then \(\operatorname{Ind}(\operatorname{id}_{\mathscr{C}})\) acts on \(\lim(\operatorname{colim}\rho_{ij})\) via \[\lim(\operatorname{colim}\rho_{ij})\mapsto\lim\bigg{(}\operatorname{colim} \big{(}\operatorname{id}_{\mathscr{C}}(\rho_{ij})\big{)}\bigg{)}=\lim( \operatorname{colim}(\rho_{ij})),\] which is exactly the identity assignment. **Remark 3.10**.: As we proceed in this paper, for any composable pair of functors \(\mathscr{C}\xrightarrow{F}\mathscr{D}\xrightarrow{G}\mathscr{E}\) we write \(\phi_{F,G}\) for the compositor natural isomorphism: \[\operatorname{Ind}(\mathscr{C})\xrightarrow[\cong]{\operatorname{Ind}(G \circ F)}\operatorname{Ind}(\mathscr{E})\] While in general we do only have a natural isomorphism \(\operatorname{Ind}(G\circ F)\cong\operatorname{Ind}(G)\circ\operatorname{Ind}(F)\), the lemma we prove below (for use later when we prove that \(\hat{\ell}\) and \(\hat{c}\) are components of bundle maps) shows that these compositor natural isomorphisms are actually relatively well-behaved in the sense that the action of \(\operatorname{Ind}(G)\circ\operatorname{Ind}(F)\) and \(\operatorname{Ind}(G\circ F)\) on certain classes of morphisms are the same. **Lemma 3.11**.: _Let \(F:\mathscr{C}\to\mathscr{D}\) and \(G:\mathscr{D}\to\mathscr{E}\) be functors. If \(\underline{X},\underline{Y}:I\to\mathscr{C}\) are two objects in \(\operatorname{Ind}(\mathscr{C})\) with \(\rho=(\rho_{i})_{i\in I}:\underline{X}\to\underline{Y}\) an ind-morphism, then_ \[\big{(}\operatorname{Ind}(G)\circ\operatorname{Ind}(F)\big{)}(\underline{\rho} )=\operatorname{Ind}(G\circ F)(\underline{\rho}).\] Proof.: Because both \(\underline{X}\) and \(\underline{Y}\) have the same indexing category, it follows from Remark 3.8 \[\operatorname{Ind}(G\circ F)(\underline{\rho})=(G\circ F)*\rho\] Similarly, we find that \[\operatorname{Ind}(F)(\rho)=F*\rho=\iota_{F}*\rho\] where \(\iota_{F}\) is the identity natural transformation on \(F\). 
Then \[\operatorname{Ind}(G)\big{(}F\rho_{i}\big{)}_{i\in I}=G*(\iota_{F}*\rho)=\iota _{G}*(\iota_{F}*\rho)=(\iota_{G}*\iota_{F})*\rho=(\iota_{G\circ F})*\rho.\] Thus we conclude that \[\operatorname{Ind}(G\circ F)(\rho_{i})=(G\circ F)*\rho=\iota_{G\circ F}*\rho= \iota_{G}*(\iota_{F}*\rho)=\operatorname{Ind}(G)\big{(}\operatorname{Ind}(F)( \rho)\big{)},\] as was desired. **Proposition 3.12**.: _Let \(\mathscr{C},\mathscr{D}\) be categories with functors \(F,G:\mathscr{C}\to\mathscr{D}\) and let \(\alpha:F\Rightarrow G\) be a natural transformation. Then there is a natural transformation_ \[\operatorname{Ind}(\alpha):\operatorname{Ind}(F)\Rightarrow\operatorname{Ind}(G).\] Proof.: Fix \(\underline{X}=(X_{i})\in\operatorname{Ind}(\mathscr{C})_{0}\). We define \(\operatorname{Ind}(\alpha)_{\underline{X}}:\operatorname{Ind}(F)(X_{i})\to \operatorname{Ind}(G)(X_{i})\) by setting (cf. Remark 3.5) \[\operatorname{Ind}(\alpha)_{\underline{X}}:=(\alpha_{X_{i}})_{i\in I}.\] To check that this is a natural transformation fix a map \(\rho:(X_{i})_{i\in I}\to(Y_{j})_{j\in J}\) so that the sequences \((\rho_{ij})\) are a family of morphisms compatible with the transition morphisms of \(\underline{X}\) and \(\underline{Y}\). In particular, for any \(f:i\to i^{\prime}\) in \(I\) and \(g:j\to j^{\prime}\) in \(J\) the square commutes. To prove naturality, it suffices to show that for any any \(f:i\to i^{\prime}\) in \(I\) and for any \(g:j\to j^{\prime}\) in \(J\), the cube commutes; note that we are writing \(f_{*}\) for the image of \(f:i\to i^{\prime}\) as a transition morphism of \(\underline{X}\) and similarly for \(g_{*}\) as a transtion morphism of \(\underline{Y}\). However, note that the squares commute by the naturality of \(\alpha\). Similarly, the squares commute by assumption of the \(\rho_{ij}\) being compatible with the transition morphisms and the functoriality of \(F\) and \(G\). Thus the cube commutes, which shows that \(\operatorname{Ind}(\alpha)\) is indeed natural after taking the filtered limit of the filtered colimit of maps. **Corollary 3.13**.: _Given a two-cell of categories_ _we have_ \[\operatorname{Ind}(\beta\circ\alpha)=\operatorname{Ind}(\beta)\circ \operatorname{Ind}(\alpha).\] Proof.: Let \(\underline{X}=(X_{i})_{i\in I}\) be an object of \(\operatorname{Ind}(\mathscr{C})_{0}\). We then calculate from Proposition 3.12 that \[\operatorname{Ind}(\beta\circ\alpha)_{\underline{X}}=\left((\beta\circ\alpha) _{X_{i}}\right)_{i\in I}=(\beta_{X_{i}}\circ\alpha_{X_{i}})_{i\in I}=(\beta_{X _{i}})_{i\in I}\circ(\alpha_{X_{i}})_{i\in I}=\operatorname{Ind}(\beta)_{ \underline{X}}\circ\operatorname{Ind}(\alpha)_{\underline{X}}.\] We now need to know how the \(\operatorname{Ind}\)-assignment interacts with horizontal composition of natural transformations in order to verify that it is pseudofunctorial in \(\mathfrak{Cat}\). Recall that pseudofunctoriality declares immediately that if we have a \(2\)-cell then we have the identity \[\operatorname{Ind}(\beta)*\operatorname{Ind}(\alpha)=\phi_{G,K}^{-1}\circ \operatorname{Ind}(\beta*\alpha)\circ\phi_{F,H};\] in particular, establishing the above identity is equivalent to establishing pseudofunctoriality on \(2\)-morphisms. 
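Before establishing this identity it may help to note what \(\operatorname{Ind}(\beta*\alpha)\) does componentwise (our own unpacking, using Proposition 3.12 and the componentwise description of Remark 3.5): for natural transformations \(\alpha:F\Rightarrow G:\mathscr{C}\to\mathscr{D}\) and \(\beta:H\Rightarrow K:\mathscr{D}\to\mathscr{E}\) and an ind-object \(\underline{X}=(X_{i})_{i\in I}\),
\[\operatorname{Ind}(\beta*\alpha)_{\underline{X}}=\big((\beta*\alpha)_{X_{i}}\big)_{i\in I}=\big(\beta_{GX_{i}}\circ H(\alpha_{X_{i}})\big)_{i\in I},\]
so both sides of the identity act stage-wise by the ordinary horizontal composite; the compositors only mediate between the two composite functors involved.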
**Lemma 3.14**.: _The \(\operatorname{Ind}\)-assignment is pseudofunctorial on \(2\)-morphisms in the sense that if we have natural transformations \(\alpha:F\Rightarrow G:\mathscr{C}\to\mathscr{D}\) and \(\beta:H\Rightarrow K:\mathscr{D}\to\mathscr{E}\) then_
\[\operatorname{Ind}(\beta*\alpha)=\phi_{G,K}^{-1}\circ\big(\operatorname{Ind}(\beta)*\operatorname{Ind}(\alpha)\big)\circ\phi_{F,H}.\]

Proof.: This is a straightforward verification using Lemma 3.11 and the observation that every object and morphism in sight in the definitions of \(\operatorname{Ind}(\beta*\alpha)\) and \(\operatorname{Ind}(\beta)*\operatorname{Ind}(\alpha)\) involves only the same indexing categories.

We will now endeavour to show that if \((\mathscr{C},\mathbb{T})\) is a tangent category then the ind-category \(\operatorname{Ind}(\mathscr{C})\) naturally inherits from \(\mathbb{T}\) an ind-tangent structure, which we will call \(\operatorname{Ind}(\mathbb{T})\). We begin this study by first showing the existence of the natural transformations \(\operatorname{Ind}(p)\) and \(\operatorname{Ind}(0)\); constructing these is straightforward, while the natural transformations \(\hat{+},\hat{\ell}\), and \(\hat{c}\) take a little more work to define for technical \(2\)-categorical reasons: the assignment \(\operatorname{Ind}:\mathfrak{Cat}\to\mathfrak{Cat}\) is a pseudofunctor and not strictly functorial.

**Lemma 3.15**.: _If \(\mathscr{C}\) is a tangent category then there are ind-bundle and ind-zero natural transformations \(\operatorname{Ind}(p):\operatorname{Ind}(T)\Rightarrow\operatorname{id}_{\operatorname{Ind}(\mathscr{C})}\) and \(\operatorname{Ind}(0):\operatorname{id}_{\operatorname{Ind}(\mathscr{C})}\Rightarrow\operatorname{Ind}(T)\)._

Proof.: Apply Corollary 3.9 and Proposition 3.12 to the natural transformations \(p:T\Rightarrow\operatorname{id}_{\mathscr{C}}\) and \(0:\operatorname{id}_{\mathscr{C}}\Rightarrow T\).

Because every tangent category \(\mathscr{C}\) admits (finite) pullbacks against the bundle maps \(p_{X}:TX\to X\) for every object \(X\in\mathscr{C}_{0}\), it follows from [1, Proposition I.8.9.1.c] that \(\operatorname{Ind}(\mathscr{C})\) admits finite pullbacks against the ind-bundle maps \(\operatorname{Ind}(p)_{\underline{X}}:\operatorname{Ind}(T)\underline{X}\to\underline{X}\) for every \(\underline{X}\in\operatorname{Ind}(\mathscr{C})_{0}\). In order to have our tangent structure \(\operatorname{Ind}(\mathbb{T})\) be essentially given as the indicization of the tangent structure \(\mathbb{T}\), we now must establish a natural isomorphism (and hence also the existence of) \(\operatorname{Ind}(T)\times_{\operatorname{Ind}(p)}\operatorname{Ind}(T)=\operatorname{Ind}(T)_{2}\cong\operatorname{Ind}(T_{2})\). Afterwards, we'll establish that compositional powers of \(\operatorname{Ind}(T)\) preserve and commute with these pullback powers before constructing the ind-bundle addition and proving that \(\operatorname{Ind}(p)_{\underline{X}}\) is a commutative monoid in \(\operatorname{Ind}(\mathscr{C})_{/\underline{X}}\).
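Concretely, unpacking Proposition 3.7 and Proposition 3.12 in the notation of Remark 3.5 (this is our own summary of the constructions above, not an additional assumption): for an ind-object \(\underline{X}=(X_{i})_{i\in I}\) we have
\[\operatorname{Ind}(T)\underline{X}=(TX_{i})_{i\in I},\qquad\operatorname{Ind}(p)_{\underline{X}}=(p_{X_{i}})_{i\in I}:(TX_{i})_{i\in I}\to(X_{i})_{i\in I},\qquad\operatorname{Ind}(0)_{\underline{X}}=(0_{X_{i}})_{i\in I},\]
so the ind-bundle and ind-zero transformations are simply the stage-wise bundle and zero maps of \(\mathbb{T}\).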
**Proposition 3.16**.: _If \((\mathscr{C},\mathbb{T})\) is a tangent category then there is a natural isomorphism of functors \(\operatorname{Ind}(T_{2})\cong\operatorname{Ind}(T)_{2}:\operatorname{Ind}(\mathscr{C})\to\operatorname{Ind}(\mathscr{C})\)._

Proof.: First, that the pullback powers \(\operatorname{Ind}(T)_{n}\underline{X}=\operatorname{Ind}(T)\underline{X}\times_{\underline{X}}\cdots\times_{\underline{X}}\operatorname{Ind}(T)\underline{X}\) exist follows from the fact that for any objects \(\underline{X}=(X_{i}),\underline{Y}=(Y_{i}),\underline{Z}=(Z_{i})\) the object \(\underline{W}=(X_{i}\times_{Z_{i}}Y_{i})\) represents the pullback \(\underline{X}\times_{\underline{Z}}\underline{Y}\) in \(\operatorname{Ind}(\mathscr{C})\); the verification of this fact is routine and straightforward. We use the presheaf realization functor \(L:\operatorname{Ind}(\mathscr{C})\to[\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\) to prove the remainder of the proposition, i.e., that \(\operatorname{Ind}(T)_{2}\cong\operatorname{Ind}(T_{2})\). On objects we have that on one hand
\[\operatorname{Ind}(T)_{2}\underline{X}=\operatorname{Ind}(T)\underline{X}\times_{\underline{X}}\operatorname{Ind}(T)\underline{X}=(TX_{i})\times_{(X_{i})}(TX_{i})\]
while on the other hand
\[\operatorname{Ind}(T_{2})\underline{X}=(T_{2}X_{i})=(TX_{i}\times_{X_{i}}TX_{i}).\]
Taking the image under \(L\) and using that filtered colimits commute with finite limits in \([\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\) we find
\[L\big(\operatorname{Ind}(T_{2})\underline{X}\big)=\varinjlim_{i\in I}\mathbf{y}(TX_{i}\times_{X_{i}}TX_{i})\cong\varinjlim_{i\in I}\mathbf{y}\,TX_{i}\times_{\varinjlim_{i\in I}\mathbf{y}\,X_{i}}\varinjlim_{i\in I}\mathbf{y}\,TX_{i}\]
\[\cong L(TX_{i})\times_{LX_{i}}L(TX_{i})\cong L\left(\operatorname{Ind}(T)\underline{X}\times_{\underline{X}}\operatorname{Ind}(T)\underline{X}\right)=L\left(\operatorname{Ind}(T)_{2}\underline{X}\right).\]
From the fact that \(L\) is conservative, we conclude that \(\operatorname{Ind}(T)_{2}\underline{X}\cong\operatorname{Ind}(T_{2})\underline{X}\); this isomorphism is natural in \(\underline{X}\) because it is induced by the universal property of the limit together with the limit-preservation isomorphisms.

**Corollary 3.17**.: _For any tangent category \((\mathscr{C},\mathbb{T})\) and any \(m\in\mathbb{N}\) there is an isomorphism \(\operatorname{Ind}(T)_{m}\cong\operatorname{Ind}(T_{m})\)._

Proof.: This follows mutatis mutandis to the proof of Proposition 3.16.

**Corollary 3.18**.: _Let \((\mathscr{C},\mathbb{T})\) be a tangent category and let \(m,n\in\mathbb{N}\). Then there is a natural isomorphism of functors_
\[\operatorname{Ind}(T)^{n}\circ\operatorname{Ind}(T)_{m}\cong\operatorname{Ind}(T)_{m}\circ\operatorname{Ind}(T)^{n}.\]

Proof.: Since \(\mathscr{C}\) is a tangent category, we have a natural isomorphism \(T^{n}\circ T_{m}\cong T_{m}\circ T^{n}\). Apply Proposition 3.12 to this natural isomorphism to get a natural isomorphism \(\operatorname{Ind}(T^{n}\circ T_{m})\cong\operatorname{Ind}(T_{m}\circ T^{n})\).
Finally, conjugating this natural isomorphism by the \(\operatorname{Ind}\)-compositors \(\phi_{f,g}:\operatorname{Ind}(f)\circ\operatorname{Ind}(g)\to\operatorname{Ind}(g\circ f)\) multiple times, together with Corollary 3.17, gives
\[\operatorname{Ind}(T)^{n}\circ\operatorname{Ind}(T)_{m}\cong\operatorname{Ind}(T^{n})\circ\operatorname{Ind}(T_{m})\cong\operatorname{Ind}(T^{n}\circ T_{m})\cong\operatorname{Ind}(T_{m}\circ T^{n})\cong\operatorname{Ind}(T_{m})\circ\operatorname{Ind}(T^{n})\]
\[\cong\operatorname{Ind}(T)_{m}\circ\operatorname{Ind}(T)^{n}.\]

Let us now prove the existence of the ind-addition natural transformation \(\hat{+}\) and prove that for any \(\underline{X}\in\operatorname{Ind}(\mathscr{C})_{0}\), \(\operatorname{Ind}(p)_{\underline{X}}\) is a commutative monoid in \(\operatorname{Ind}(\mathscr{C})_{/\underline{X}}\).

**Lemma 3.19**.: _Let \((\mathscr{C},\mathbb{T})\) be a tangent category. Then there is an ind-addition natural transformation_
\[\hat{+}:\operatorname{Ind}(T)_{2}\Rightarrow\operatorname{Ind}(T).\]

Proof.: We define \(\hat{+}\) to be the natural transformation displayed below
\[\operatorname{Ind}(T)_{2}\xrightarrow{\cong}\operatorname{Ind}(T_{2})\xrightarrow{\operatorname{Ind}(+)}\operatorname{Ind}(T)\]
where \(\operatorname{Ind}(+)\) is the indicization of the natural transformation \(+:T_{2}\Rightarrow T\) given by Proposition 3.12.

**Proposition 3.20**.: _For any tangent category \((\mathscr{C},\mathbb{T})\) and any \(\underline{X}\in\operatorname{Ind}(\mathscr{C})_{0}\), the object \(\operatorname{Ind}(p)_{\underline{X}}:\operatorname{Ind}(T)\underline{X}\to\underline{X}\) is a commutative monoid in \(\operatorname{Ind}(\mathscr{C})_{/\underline{X}}\) with unit \(\operatorname{Ind}(0)_{\underline{X}}\) and addition \(\hat{+}_{\underline{X}}\)._

Proof.: We must show that the unit, associativity, and commutativity diagrams for \(\big(\operatorname{Ind}(p)_{\underline{X}},\hat{+}_{\underline{X}},\operatorname{Ind}(0)_{\underline{X}}\big)\) commute in \(\operatorname{Ind}(\mathscr{C})_{/\underline{X}}\). To prove the commutativity of these diagrams we note that \(\operatorname{Ind}(T)_{2}\underline{X}\) carries the natural isomorphism \(\operatorname{Ind}(T)_{2}\cong\operatorname{Ind}(T_{2})\) which mediates between the pullback in \(\operatorname{Ind}(\mathscr{C})\) and the ind-object given by the pullbacks \((TX_{i}\times_{X_{i}}TX_{i})\), and the map \(\hat{+}_{\underline{X}}\) is defined by first using this mediating isomorphism before acting on the ind-object \((TX_{i}\times_{X_{i}}TX_{i})\). As such, by naturality it suffices to verify each of the diagrams above on the corresponding incarnation of ind-objects whose components are all given \(i\)-locally as pullbacks of objects in \(\mathscr{C}\). More explicitly, to verify the first diagram it suffices to show that the corresponding diagram of ind-objects commutes in \(\operatorname{Ind}(\mathscr{C})\). However, because \(TX_{i}\) is a bundle over \(X_{i}\) for all \(i\in I\), the diagram commutes for each \(i\in I\). Thus by taking the image of the diagram under the functor \(L\) it follows that the diagram of ind-presheaves commutes, and so, via the fact that \(L\) is fully faithful, it follows that the diagram commutes in \(\operatorname{Ind}(\mathscr{C})\) as well. The commutativity of the other diagrams is verified mutatis mutandis to this one, and so is omitted.

We now build the ind-vertical lift \(\hat{\ell}:\operatorname{Ind}(T)\Rightarrow\operatorname{Ind}(T)^{2}\) and prove that it induces a bundle morphism in \(\operatorname{Ind}(\mathscr{C})\).
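Before doing so, we record the componentwise shape of the ind-addition just constructed (our own unpacking of Lemma 3.19, using the mediating isomorphism of Proposition 3.16): modulo the identification \(\operatorname{Ind}(T)_{2}\underline{X}\cong(T_{2}X_{i})_{i\in I}\), the ind-addition is the family of stage-wise additions,
\[\hat{+}_{\underline{X}}=(+_{X_{i}})_{i\in I}:(T_{2}X_{i})_{i\in I}\to(TX_{i})_{i\in I},\]
which is exactly the description used in the proof of Proposition 3.20 above.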
**Lemma 3.21**.: _If \((\mathscr{C},\mathbb{T})\) is a tangent category then there is an ind-vertical lift transformation \(\hat{\ell}:\operatorname{Ind}(T)\Rightarrow\operatorname{Ind}(T)^{2}\)._ Proof.: We define \(\hat{\ell}\) as in the diagram of functors and natural transformations \[\operatorname{Ind}(T)\xrightarrow{\operatorname{Ind}\ell}\operatorname{Ind} (T^{2})\xrightarrow{\phi_{T,T}}\operatorname{Ind}(T)^{2}\] where \(\operatorname{Ind}\ell:\operatorname{Ind}(T)\Rightarrow\operatorname{Ind}(T ^{2})\) is the transformation induced by applying Proposition 3.12 and \(\phi_{T,T}:\operatorname{Ind}(T^{2})\cong\operatorname{Ind}(T)^{2}\) is the compositor isomorphism. **Proposition 3.22**.: _If \((\mathscr{C},\mathbb{T})\) is a tangent category then for any object \(\underline{X}\) of \(\operatorname{Ind}(\mathscr{C})\), the pair of morphisms \((\hat{\ell}_{\underline{X}},\operatorname{Ind}(0)_{\underline{X}})\) describes a morphism of bundles in \(\operatorname{Ind}(\mathscr{C})\)._ Proof.: We must prove that the diagrams commute in \(\operatorname{Ind}(\mathscr{C})\). Write \(\underline{X}=(X_{i})_{i\in I}\) and consider that the first diagram has bottom edge calculated by \[\operatorname{Ind}(0)_{\underline{X}}\circ\operatorname{Ind}(p)_{\underline{X }}=(0_{X_{i}})\circ(p_{X_{i}})=(0_{X_{i}}\circ p_{X_{i}})=((T*p)_{X_{i}}\circ \ell_{X_{i}})\] because \(\mathscr{C}\) is a tangent category. Alternatively, by Lemma 3.21 we find that the upper half of the diagram is calculated by \[\big{(}\operatorname{Ind}(T)*\operatorname{Ind}(p)\big{)}_{\underline{X}}\circ \hat{\ell}_{X_{i}}=\big{(}(T*p)_{X_{i}}\big{)}\circ(\ell_{X_{i}})=\big{(}(T*p) _{X_{i}}\circ\ell_{X_{i}}\big{)}=\operatorname{Ind}(0)_{\underline{X}}\circ \operatorname{Ind}(p)_{\underline{X}},\] so the first diagram indeed commutes. The third diagram is verified to commute similarly, so it suffices to show now that the second diagram commutes. For this we note that on one hand \[\hat{\ell}_{\underline{X}}\circ\hat{+}_{\underline{X}}=(\ell_{X_{i}})_{i\in I }\circ(0_{X_{i}})_{i\in I}=(\ell_{X_{i}}\circ 0_{X_{i}})_{i\in I}=\big{(}(+*T)_{X_{i}} \circ\langle\ell_{X_{i}}\circ\pi_{1,i},\ell_{X_{i}}\circ\pi_{2,i}\rangle\big{)} _{i\in I}\] because \(\mathscr{C}\) is a tangent category. Now on the other hand we note that \[\big{(}\hat{+}*\operatorname{Ind}(T)\big{)}_{\underline{X}}\circ \Big{\langle}\hat{\ell}_{\underline{X}}\circ\pi_{1},\hat{\ell}_{\underline{X }}\circ\pi_{2}\Big{\rangle} =((+*T)_{X_{i}})\circ\langle(\ell_{X_{i}})\circ(\pi_{1,i}),\ell_{ X_{i}}\circ(\pi_{2,i})\rangle\] \[=\big{(}(+*T)_{X_{i}}\big{)}\circ\langle(\ell_{X_{i}}\circ\pi_{1,i}),(\ell_{X_{i}}\circ\pi_{2,i})\rangle\] \[=\big{(}(+*T)_{X_{i}}\big{)}\circ\big{(}\langle\ell_{X_{i}}\circ \pi_{1,i},\ell_{X_{i}}\circ\pi_{2,i}\rangle\big{)}\] \[=((+*T)_{X_{i}}\circ\langle\ell_{X_{i}}\circ\pi_{1,i},\ell_{X_{i} }\circ\pi_{2,i}\rangle)\] so it follows that the diagram indeed commutes. We now provide the existence of the ind-canonical flip. This amounts to being able to commute partial derivative operators within formal \(2\)-jets of our tangent object \(\operatorname{Ind}(T)\underline{X}\to\underline{X}\). Afterwards we will prove that \(\hat{c}\) is one of the components of a bundle morphism. **Lemma 3.23**.: _Let \((\mathscr{C},\mathbb{T})\) be a tangent category. 
Then there is an ind-canonical flip natural transformation_
\[\hat{c}:\operatorname{Ind}(T)^{2}\Rightarrow\operatorname{Ind}(T)^{2}.\]

Proof.: We define the ind-canonical flip as the composite
\[\hat{c}:\operatorname{Ind}(T)^{2}\xrightarrow{\;\theta\;}\operatorname{Ind}(T^{2})\xrightarrow{\;\operatorname{Ind}(c)\;}\operatorname{Ind}(T^{2})\xrightarrow{\;\theta^{-1}\;}\operatorname{Ind}(T)^{2},\]
where \(\theta=\phi_{T,T}\) is the compositor isomorphism \(\theta:\operatorname{Ind}(T)^{2}\stackrel{\cong}{\Longrightarrow}\operatorname{Ind}(T^{2})\) and \(\operatorname{Ind}(c)\) is the indicization of the canonical flip \(c:T^{2}\Rightarrow T^{2}\) asserted by Proposition 3.12.

**Proposition 3.24**.: _If \((\mathscr{C},\mathbb{T})\) is a tangent category then for any object \(\underline{X}\) of \(\operatorname{Ind}(\mathscr{C})\), the pair of morphisms \((\operatorname{id}_{\operatorname{Ind}(T)\underline{X}},\hat{c}_{\underline{X}})\) describes a bundle morphism._

Proof.: Following Definition 2.1, we must show that the diagrams witnessing the following three identities commute:
\[\big(\operatorname{Ind}(p)*\operatorname{Ind}(T)\big)_{\underline{X}}\circ\hat{c}_{\underline{X}}=\big(\operatorname{Ind}(T)*\operatorname{Ind}(p)\big)_{\underline{X}},\]
\[\hat{c}_{\underline{X}}\circ\big(\operatorname{Ind}(T)*\hat{+}\big)_{\underline{X}}=\big(\hat{+}*\operatorname{Ind}(T)\big)_{\underline{X}}\circ\big\langle\hat{c}_{\underline{X}}\circ\pi_{1},\hat{c}_{\underline{X}}\circ\pi_{2}\big\rangle,\]
\[\hat{c}_{\underline{X}}\circ\big(\operatorname{Ind}(T)*\operatorname{Ind}(0)\big)_{\underline{X}}=\big(\operatorname{Ind}(0)*\operatorname{Ind}(T)\big)_{\underline{X}}.\]
The first and third diagrams are established similarly, so we need only establish the commutativity of the first and second diagrams to prove the proposition. Let \(\underline{X}=(X_{i})_{i\in I}=(X_{i})\) be an object of \(\operatorname{Ind}(\mathscr{C})\). For this we begin by establishing the commutativity of the first diagram. Note that on one hand
\[\big(\operatorname{Ind}(T)*\operatorname{Ind}(p)\big)_{\underline{X}}=\big((T*p)_{X_{i}}\big)=\big((p*T)_{X_{i}}\circ c_{X_{i}}\big)\]
because \(\mathscr{C}\) is a tangent category. On the other hand we calculate that
\[\big(\operatorname{Ind}(p)*\operatorname{Ind}(T)\big)_{\underline{X}}\circ\hat{c}_{\underline{X}}=\big((p*T)_{X_{i}}\big)\circ\theta_{\underline{X}}^{-1}\circ\operatorname{Ind}(c)_{\underline{X}}\circ\theta_{\underline{X}}=\big((p*T)_{X_{i}}\big)\circ\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\theta_{\underline{X}}=\big((p*T)_{X_{i}}\circ c_{X_{i}}\big),\]
which shows that the first diagram indeed commutes.
To show the commutativity of the second diagram we note that on one hand a routine check shows
\[\hat{c}_{\underline{X}}\circ\big(\operatorname{Ind}(T)*\hat{+}\big)_{\underline{X}}=\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\big((T*+)_{X_{i}}\big)\circ\theta_{\underline{X}}.\]
On the other hand
\[\big(\hat{+}*\operatorname{Ind}(T)\big)_{\underline{X}}\circ\big\langle\hat{c}_{\underline{X}}\circ\pi_{1},\hat{c}_{\underline{X}}\circ\pi_{2}\big\rangle=\big(\hat{+}*\operatorname{Ind}(T)\big)_{\underline{X}}\circ\Big\langle\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\theta_{\underline{X}}\circ\pi_{1},\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\theta_{\underline{X}}\circ\pi_{2}\Big\rangle\]
\[=\theta_{\underline{X}}^{-1}\circ\big((+*T)_{X_{i}}\big)\circ\big\langle(c_{X_{i}})\circ\theta_{\underline{X}}\circ\pi_{1},(c_{X_{i}})\circ\theta_{\underline{X}}\circ\pi_{2}\big\rangle\]
\[=\theta_{\underline{X}}^{-1}\circ\big((+*T)_{X_{i}}\big)\circ\big(\langle c_{X_{i}}\circ\pi_{1,i},c_{X_{i}}\circ\pi_{2,i}\rangle\big)\circ\theta_{\underline{X}}\]
\[=\theta_{\underline{X}}^{-1}\circ\big((+*T)_{X_{i}}\circ\langle c_{X_{i}}\circ\pi_{1,i},c_{X_{i}}\circ\pi_{2,i}\rangle\big)\circ\theta_{\underline{X}}\]
\[=\theta_{\underline{X}}^{-1}\circ\big(c_{X_{i}}\circ(T*+)_{X_{i}}\big)\circ\theta_{\underline{X}}=\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\big((T*+)_{X_{i}}\big)\circ\theta_{\underline{X}},\]
which shows that the second diagram commutes.

From here we show the coherences that \(\hat{c}\) and \(\hat{\ell}\) satisfy.

**Lemma 3.25**.: _Let \((\mathscr{C},\mathbb{T})\) be a tangent category. The ind-canonical flip \(\hat{c}:\operatorname{Ind}(T)^{2}\Rightarrow\operatorname{Ind}(T)^{2}\) is an involution, \(\hat{c}\circ\hat{\ell}=\hat{\ell}\), and the remaining coherence diagrams relating \(\hat{c}\) and \(\hat{\ell}\) commute._

Proof.: That \(\hat{c}\) is an involution follows from the calculation, for any \(\underline{X}=(X_{i})\in\operatorname{Ind}(\mathscr{C})_{0}\),
\[\hat{c}_{\underline{X}}^{2}=\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\theta_{\underline{X}}\circ\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\theta_{\underline{X}}=\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})^{2}\circ\theta_{\underline{X}}=\theta_{\underline{X}}^{-1}\circ\operatorname{id}_{\operatorname{Ind}(T^{2})\underline{X}}\circ\theta_{\underline{X}}=\operatorname{id}_{\operatorname{Ind}(T)^{2}\underline{X}}.\]
The second identity follows from the calculation
\[\hat{c}_{\underline{X}}\circ\hat{\ell}_{\underline{X}}=\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ\theta_{\underline{X}}\circ\theta_{\underline{X}}^{-1}\circ(\ell_{X_{i}})=\theta_{\underline{X}}^{-1}\circ(c_{X_{i}})\circ(\ell_{X_{i}})=\theta_{\underline{X}}^{-1}\circ(c_{X_{i}}\circ\ell_{X_{i}})=\theta_{\underline{X}}^{-1}\circ(\ell_{X_{i}})=\hat{\ell}_{\underline{X}}.\]
Finally, the verification of the commuting diagrams is a tedious but straightforward check using the naturality of the isomorphisms \(\operatorname{Ind}(T^{n})\cong\operatorname{Ind}(T)^{n}\) together with the identities satisfied by the canonical flip and vertical lift in \(\mathscr{C}\).

As the last necessary ingredient in showing that \((\operatorname{Ind}(\mathscr{C}),\operatorname{Ind}(\mathbb{T}))\) is a tangent category, we will prove the universality of the ind-vertical lift. This proof will rely on ind-presheaves \(\operatorname{Ind}_{\mathbf{Psh}}(\mathscr{C})\) and the functor \(L:\operatorname{Ind}(\mathscr{C})\to[\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\), as it is convenient when proving properties about limits and colimits that need not hold on the nose, i.e., those properties which need only hold up to isomorphism.
**Proposition 3.26**.: _Let \((\mathscr{C},\mathbb{T})\) be a tangent category and let \(\underline{X}\in\operatorname{Ind}(\mathscr{C})_{0}\). Then the diagram_ _is an equalizer in \(\operatorname{Ind}(\mathscr{C})\)._ Proof.: Because being an equalizer is true along isomorphic objects in \(\mathscr{C}\), stable under equivalence of categories, and in light of Proposition 3.6, it suffices to prove that the diagram is an equalizer in \([\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\). For this we calculate that the diagram above is is isomorphic to the diagram where the horizontal morphisms are the untangling of the \(L\) morphisms defined above. By the fact that filtered colimits commute with finite limits in the presheaf topos \([\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\), we find that the diagram above is isomorphic to the diagram of presheaves: Finally, each \(i\)-indexed component of the diagram above, is an equalizer by the fact that the Yoneda Lemma is continuous and the fact that \((\mathscr{C},\mathbb{T})\) is a tangent category. This implies in turn that the diagram and hence are both equalizers in \([\mathscr{C}^{\operatorname{op}},\mathbf{Set}]\). Finally, appealing Proposition to 3.6 proves the result. We now have the ingredients to show that \(\operatorname{Ind}(\mathscr{C})\) is a tangent category whenever \(\mathscr{C}\) is a tangent category. **Theorem 3.27**.: _Let \((\mathscr{C},\mathbb{T})\) be a tangent category. Then the category \((\operatorname{Ind}(\mathscr{C}),\operatorname{Ind}(\mathbb{T}))\) is a tangent category where \(\operatorname{Ind}(\mathbb{T})\) is the tangent structure_ \[\operatorname{Ind}(\mathbb{T}):=(\operatorname{Ind}(T),\operatorname{Ind}(p), \operatorname{Ind}(0),\hat{+},\hat{\ell},\hat{c})\] _where \(\operatorname{Ind}(T)\) is the indicization of \(T\) induced by Proposition 3.7 and \(\operatorname{Ind}(p),\operatorname{Ind}(0),\hat{+},\hat{\ell},\) and \(\hat{c}\) are the natural transformations constructed in Lemmas 3.15, 3.19, 3.21, and Lemma 3.23, respectively._ Proof.: We now establish that \((\operatorname{Ind}(\mathscr{C}),\operatorname{Ind}(\mathbb{T}))\) satisfies the axioms of Definition 2.1: 1. That \(\operatorname{Ind}(\mathscr{C})\) admits all tangent pullbacks follows from [1, Proposition I.8.9.1.c] and that \(\operatorname{Ind}(T)\) and its compositional powers preserve the pullback powers \(\operatorname{Ind}(T)_{m}\) is Corollary 3.18. 2. The ind-zero transformation \(\operatorname{Ind}(0)\) is constructed in Lemma 3.15 and the ind-addition transformation is constructed in Lemma 3.19. That each object \(\operatorname{Ind}(p)_{\underline{X}}:\operatorname{Ind}(T)\underline{X} \to\underline{X}\) is a commutative monoid in \(\operatorname{Ind}(\mathscr{C})_{/\underline{X}}\) is Proposition 3.20. 3. The ind-vertical lift \(\hat{\ell}\) exists by Lemma 3.21 and the pair \((\hat{\ell}_{\underline{X}},\operatorname{Ind}(0)_{\underline{X}})\) is a bundle morphism for any \(\underline{X}\) in \(\operatorname{Ind}(\mathscr{C})_{0}\) by Proposition 3.22. 4. The existence of the ind-canonical flip follows from Lemma 3.23 while the fact that \((\hat{c}_{\underline{X}},\operatorname{id}_{\operatorname{Ind}(T)\underline{X }})\) is a bundle map for any \(\underline{X}\) in \(\operatorname{Ind}(\mathscr{C})_{0}\) follows from Proposition 3.24. 5. The coherences between the ind-canonical flip and ind-vertical lift are given in Lemma 3.25. 6. The universality of the ind-vertical lift is proved in Proposition 3.26. 
Because each axiom of Definition 2.1 is satisfied it follows that \((\operatorname{Ind}(\mathscr{C}),\operatorname{Ind}(\mathbb{T}))\) is a tangent category. We now show the functoriality of the \(\operatorname{Ind}\)-construction on tangent morphisms. **Theorem 3.28**.: _Let \((F,\alpha):(\mathscr{C},\mathbb{T})\to(\mathscr{D},\mathbb{S})\) be a morphism of tangent categories. Then the induced map \((\operatorname{Ind}(F),\hat{\alpha}):(\operatorname{Ind}(\mathscr{C}), \operatorname{Ind}(\mathbb{T}))\to(\operatorname{Ind}(\mathscr{D}), \operatorname{Ind}(\mathbb{S}))\) is a morphism of tangent categories where \(\hat{\alpha}\) is the natural transformation:_ _Furthermore, \((\operatorname{Ind}(F),\hat{\alpha})\) is a strong tangent morphism if and only if \((F,\alpha)\) is a strong tangent morphism._ Proof.: The fact that \((\operatorname{Ind}(F),\hat{\alpha})\) is a tangent morphism is a straightforward but tedious \(2\)-categorical verification; we illustrate the first such verification and omit the rest. To establish the first diagram of functors and natural transformations we simply paste the commuting triangles as in the diagram below to establish that commutes. The remaining four axioms are verified and established similarly and so are omitted. For the final claim regarding detecting when \((F,\alpha)\) is strong: \(\implies:\) assume that \((\operatorname{Ind}(F),\hat{\alpha})\) is a strong tangent morphism. Then it is strong on all constant objects \(\underline{X}=(X)\) for \(X\in\mathscr{C}_{0}\); as such it follows that \(\alpha_{X}\) is an isomorphism for every \(X\in\mathscr{C}_{0}\). That \(F\) preserves tangent pullbacks and equalizers is shown similarly. \(\Longleftarrow:\) Assume \(\alpha\) is a natural isomorphism so that \(\operatorname{Ind}(\alpha)\) and \(\hat{\alpha}=\phi_{F,S}^{-1}\circ\operatorname{Ind}(\alpha)\circ\phi_{F,S}\) are isomorphisms as well. Similarly, if \(F\) preserves tangent pullbacks and equalizers, so does \(\operatorname{Ind}(F)\). **Corollary 3.29**.: _The diagram_ _commutes strictly._ Proof.: This is immediate after unwinding the definitions. We now close this section by proving that while the \(\operatorname{Ind}\) pseudofunctor does not commute (even up to isomorphism) with the free tangent functor, there is a pseudonatural transformation: To do this, however, we necessitate a discussion of what it means to be a free tangent functor, i.e., we need to at least give a short description of the \(2\)-functor \(\operatorname{Free}:\mathfrak{Cat}\to\mathfrak{Tan}\). In [14] P. Leung showed that the category \(\mathbf{Weil}_{1}\) of Weil algebras constitute the free tangent structure in a very precise sense (we will recall and see some of this later, but one of the main results in [14] is that tangent structures on a category \(\mathscr{C}\) are equivalent to Weil category structures on \(\mathscr{C}\)). While we will get to know this category in more detail later (by comparing and contrasting the exposition of [14] and what is presented in B. MacAdam's thesis [15]), for now we just need to know the following result: for any category \(\mathscr{C}\), the free tangent category over \(\mathscr{C}\) is \(\operatorname{Free}(\mathscr{C}):=\mathbf{Weil}_{1}\times\mathscr{C}\). We will briefly recall the definition of \(\mathbf{Weil}_{1}\), however. 
**Definition 3.30** ([14, Definition 4.2]).: The category \(\mathbf{Weil}_{1}\) is defined as follows: * Objects: For each \(n\in\mathbb{N}\), define the following crigs: \[W^{n}:=\begin{cases}\mathbb{N}&\text{if}\,n=0;\\ \mathbb{N}[x_{1},\cdots,x_{n}]/(x_{i}x_{j}:1\leq i,j\leq n)&\text{if}\,n\geq 1.\end{cases}\] Then the objects of \(\mathbf{Weil}_{1}\) are the closure of \(\{W^{n}\mid n\in\mathbb{N}\}\) under finite coproducts \(\otimes_{\mathbb{N}}\) of crigs. * Morphisms: Morphisms \(\varphi:A\to B\) of crigs for which the canonical maps commute; note that in each case \(\mathfrak{m}_{A}\) and \(\mathfrak{m}_{B}\) are the unique maximal ideals of \(A\) and \(B\), respectively. **Theorem 3.31** ([14]; [15, Observation 4.2.5]).: _For any category \(\mathscr{C}\), there is a free tangent category given by \(\mathbf{Weil}_{1}\times\mathscr{C}\)._ Because the product construction is \(2\)-functorial we find immediately that there is a functor \(\operatorname{Free}:\mathfrak{Cat}\to\mathfrak{Tan}\) which sends a category \(\mathscr{C}\) to the free tangent category over itself: \(\operatorname{Free}(\mathscr{C}):=\mathbf{Weil}_{1}\times\mathscr{C}\). It acts on \(1\)-morphisms by \(\operatorname{Free}(f):=\operatorname{id}_{\mathbf{Weil}_{1}}\times f\) and on \(2\)-morphisms by \(\operatorname{Free}(\alpha):=\iota_{\operatorname{id}_{\mathbf{Weil}_{1}}}\times\alpha\). To construct our pseudonatural transformation we compute that on one hand \[\operatorname{Ind}(\operatorname{Free}(\mathscr{C}))=\operatorname{Ind}( \mathbf{Weil}_{1}\times\mathscr{C})\] while on the other hand \[\operatorname{Free}(\operatorname{Ind}(\mathscr{C}))=\mathbf{Weil}_{1}\times \operatorname{Ind}(\mathscr{C}).\] To construct our pseudonatural transformation \(\alpha\) we first note that by [12, Proposition 6.1.12] for any category \(\mathscr{C}\) there is a natural equivalence of categories \[\operatorname{Ind}(\operatorname{Free}(\mathscr{C}))=\operatorname{Ind}( \mathbf{Weil}_{1}\times\mathscr{C})\simeq\operatorname{Ind}(\mathbf{Weil}_{1}) \times\operatorname{Ind}(\mathscr{C}).\] Now consider the embedding \(\operatorname{incl}_{\mathbf{Weil}_{1}}:\mathbf{Weil}_{1}\to\operatorname{ Ind}(\mathbf{Weil}_{1})\) which sends an object \(X\) to the functor3\(\underline{X}:\mathbb{1}\to\mathscr{C}\) given by \(*\mapsto X,\operatorname{id}_{*}\mapsto\operatorname{id}_{X}\) and sends a morphism \(f:X\to Y\) to the corresponding natural transformation \(\underline{f}:\underline{X}\to\underline{Y}\). We then define \(\alpha_{\mathscr{C}}\) to be the composite: Footnote 3: Recall that \(1\) is the terminal category, i.e., the category with one object (which we label as \(*\)) and one morphism (namely id.). \[\mathbf{Weil}_{1}\times\operatorname{Ind}(\mathscr{C})\xrightarrow{ \operatorname{incl}_{\mathbf{Weil}_{1}}\times\operatorname{id}_{\operatorname {Ind}(\mathscr{C})}}\operatorname{Ind}(\mathbf{Weil}_{1})\times\operatorname{ Ind}(\mathscr{C})\xrightarrow{\simeq}\operatorname{Ind}(\mathbf{Weil}_{1}\times \mathscr{C})\] To see how to define the witness transformations let \(F:\mathscr{C}\to\mathscr{D}\) be a functor. We then get the pasting diagram where the natural isomorphism in the bottom-most square is induced from the natural equivalence of [12, Proposition 6.2.11]. Note this pastes to the invertible \(2\)-cell which we define to be our witness transformation \(\alpha_{f}\). 
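Before checking pseudonaturality, it may help to make the objects of Definition 3.30 concrete in the smallest cases (this is our own illustration, computed directly from the definition): we have
\[W^{1}=\mathbb{N}[x]/(x^{2}),\qquad W^{2}=\mathbb{N}[x_{1},x_{2}]/(x_{1}^{2},x_{1}x_{2},x_{2}^{2}),\qquad W^{1}\otimes_{\mathbb{N}}W^{1}\cong\mathbb{N}[x,y]/(x^{2},y^{2}),\]
and \(W^{2}\not\cong W^{1}\otimes_{\mathbb{N}}W^{1}\) since the product \(xy\) survives in the latter but not in the former. Under the correspondence of [14], these two crigs correspond to the pullback power \(T_{2}\) and the composite functor \(T^{2}\), respectively.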
From here the pseudonaturality of \(\alpha\) is routine but straightforward to check: the compatibility with the compositors for \(\operatorname{Free}(-)\) and \(\operatorname{Ind}(-)\) more or less comes down to the fact that the equivalence \(\operatorname{Ind}(\mathbf{Weil}_{1}\times\mathscr{C})\simeq\operatorname{Ind}(\mathbf{Weil}_{1})\times\operatorname{Ind}(\mathscr{C})\) is natural and the action of \(\operatorname{Ind}\) on \(\mathfrak{Cat}\) occurs only in the right-hand variable and does not otherwise interact with the \(\mathbf{Weil}_{1}\)-variable. This leads to the following proposition.

**Proposition 3.32**.: _There is a pseudonatural transformation \(\alpha:\operatorname{Free}\circ\operatorname{Ind}\Rightarrow\operatorname{Ind}\circ\operatorname{Free}\) whose component functors are given by_
\[\mathbf{Weil}_{1}\times\operatorname{Ind}(\mathscr{C})\xrightarrow{\operatorname{incl}_{\mathbf{Weil}_{1}}\times\operatorname{id}_{\operatorname{Ind}(\mathscr{C})}}\operatorname{Ind}(\mathbf{Weil}_{1})\times\operatorname{Ind}(\mathscr{C})\xrightarrow{\simeq}\operatorname{Ind}(\mathbf{Weil}_{1}\times\mathscr{C})\]

**Remark 3.33**.: Note that since \(\mathbf{Weil}_{1}\not\simeq\operatorname{Ind}(\mathbf{Weil}_{1})\) it is impossible for \(\alpha\) to be taken to be an invertible \(2\)-cell, as \(\alpha_{\mathscr{C}}\) is an equivalence if and only if \(\mathscr{C}=\varnothing\) is the empty category.

## 4. Properties of Ind-Tangent Categories

In this section we examine how the Ind-construction interacts with various important and standard constructions in the tangent category world and in algebraic geometry. We in particular give a pseudolimit characterization of the slice category \(\operatorname{Ind}(\mathscr{C})_{/X}\) where \(X=(X_{i})\) is an Ind-object, we examine the Ind-tangent category of a representable tangent category, how \(\operatorname{Ind}\) interacts with differential objects (and more generally how \(\operatorname{Ind}\) interacts with differential bundles), and finally how \(\operatorname{Ind}\) interacts with Cartesian differential categories. To do these investigations we need some basic structural results regarding the \(\operatorname{Ind}\)-pseudofunctor and its preservation of adjoints. Before proceeding, however, let us recall the definition of a representable tangent category.

### A Pseudolimit Result

In this short subsection we show that there is a sensible characterization of the slice category \(\operatorname{Ind}(\mathscr{C})_{/(X_{i})}\) (at least when \(\mathscr{C}\) has finite pullbacks) in terms of a pseudolimit over the diagram of categories \(\operatorname{Ind}(\mathscr{C})_{/X_{i}}\) with morphism functors induced by the pullback functors. While we do not explicitly use this in this article, I anticipate that this will have future uses when studying the tangent category of formal schemes explicitly and also in certain functional-analytic characterizations (cf. Subsection 5.4 below) involving slice tangent categories.

**Proposition 4.1**.: _Let \(\mathscr{C}\) be a finitely complete category and let \(\mathfrak{X}=(X_{i})\) be an object in \(\operatorname{Ind}(\mathscr{C})\).
Consider the pseudofunctor \(G:I^{\operatorname{op}}\to\mathfrak{Cat}\) given by:_ * _The object assignment sends_ \(i\in I_{0}\) _to the category_ \(\operatorname{Ind}(\mathscr{C})_{/X_{i}}\)_._ * _The morphism assignment sends_ \(\varphi:i\to i^{\prime}\) _to the pullback functor_ \(\bar{\varphi}^{*}:\operatorname{Ind}(\mathscr{C})_{/X_{i^{\prime}}}\to \operatorname{Ind}(\mathscr{C})_{X_{i}}\)_, i.e., the functor which sends an object_ \(Z\to X_{i^{\prime}}\) _to a chosen pullback_ _where_ \(\bar{\varphi}:X_{i}\to X_{i^{\prime}}\) _is the structure map induced from_ \(\mathfrak{X}\)_._ * _The compositors are induced by the natural isomorphisms of pullbacks._ _Then \(\operatorname{Ind}(\mathscr{C})_{/\,\mathfrak{X}}\) is the pseudolimit of the diagram in \(\mathfrak{Cat}\) induced by \(F\)._ Proof.: Begin by noting that \(\mathfrak{X}\) is the filtered colimit of the \(X_{i}\) in \(\operatorname{Ind}(\mathscr{C})\); this follows from the fact that under the equivalence \(\operatorname{Ind}(\mathscr{C})\simeq\operatorname{Ind}_{\mathbf{Psh}}( \mathscr{C})\), \(\mathfrak{X}\) is identified with the presheaf \[\operatorname*{colimim}_{i\in I}\mathbf{y}(X_{i})\] in \(\operatorname{Ind}_{\mathbf{Psh}}(\mathscr{C})\) and equivalences preserve colimits. Consequently we write \(\alpha_{i}:X_{i}\to\mathfrak{X}\) for the colimit structure maps for all \(i\in I\). We now will establish that any diagram factors through \(\operatorname{Ind}(\mathscr{C})_{/\,\mathfrak{X}}\). Let \(\mathscr{D}\) be a category such that there are functors \(F_{i}:\mathscr{D}\to\mathscr{C}_{/X_{i}}\) for all \(i\in I_{0}\) such that for any \(\varphi:i\to i^{\prime}\) in \(I\), there is an invertible \(2\)-cell Now observe that the natural isomorphism \(\rho\) implies that in \(\mathscr{C}\) there is an isomorphism of functors \[F_{i}(-)\cong(\bar{\varphi}^{*}\circ F_{i^{\prime}})(-)=F_{i^{\prime}}(-)\times _{X_{i^{\prime}}}X_{i}\] where the pullback is taken as in the diagram: Consequently taking the first projection of the pullback above and pre-composing with the isomorphism \(\rho\) gives a structure natural transformation Taking the colimit of the \(\hat{\varphi}\) for every object \(d\in\mathscr{D}_{0}\), which exists in \(\operatorname{Ind}(\mathscr{C})\) because \(I\) is a filtered category, produces a family of objects which we define by \[F(d):=\operatorname*{colim}_{i\in I}F_{i}(d).\] The universal property of the colimit allows us to extend this to a functor \(\tilde{F}:\mathscr{D}\to\operatorname{Ind}(\mathscr{C})\); we claim that this extension naturally factors through the forgetful functor \(\operatorname{Ind}(\mathscr{C})_{/\mathfrak{X}}\to\operatorname{Ind}(\mathscr{C})\). To see this, however, we simply use the universal property of the colimits in sight to produce the commuting diagram with \(\theta\) defined by the universal property of \(F(d)\). This gives us the functor \(F:\mathscr{D}\to\operatorname{Ind}(\mathscr{C})_{/\mathfrak{X}}\). We now claim that this functor \(\theta\) allows us to factor the \(2\)-cells \(\rho_{\varphi}\) in the sense that there is a pasting diagram which pastes to \(\rho_{\varphi}\). To see this we first will show the existence of \(\gamma\). 
To this end note that on one hand, for any \(d\in\mathscr{D}_{0}\) we have \[(\alpha_{i}^{*}\circ\theta)(d)=F(d)\times_{\mathfrak{X}}X_{i}.\] Now note that \(X_{i}\) as a constant object is isomorphic in \(\operatorname{Ind}(\mathscr{C})\) to the \(I\)-indexed object \(\underline{X_{i^{\prime}}}:I\to\mathscr{C}\) where \(\underline{X_{i^{\prime}}}(x):=X_{i^{\prime}}\) for all objects and \(\underline{X_{i^{\prime}}}(\varphi)=\operatorname{id}_{X_{i^{\prime}}}\) for all morphisms. Under this representation we get that \(\overline{X_{i^{\prime}}}\cong\operatorname{colim}_{I}\underline{X_{i^{ \prime}}}(i)\) and so \[(\alpha_{i^{\prime}}^{*}\circ\theta)(d)=F(d)\times_{\mathfrak{X}}X_{i}\cong F( d)\times_{\mathfrak{X}}\left(\operatorname*{colim}_{i\in I}\underline{X_{i^{ \prime}}}(i)\right)\cong\left(\operatorname*{colim}_{I}F_{i}(d)\right)\times_{ \operatorname{colim}_{i\in I}X_{i}}\left(\operatorname*{colim}_{i\in I} \underline{X_{i^{\prime}}}(i)\right).\] Using now that each filtered colimit in sight is taken over the same (filtered indexing category and that filtered colimits commute naturally with pullbacks in \(\operatorname{Ind}(\mathscr{C})\) we find \[(\alpha_{i^{\prime}}^{*}\circ\theta)(d)\cong\Big{(}\operatorname{colim}_{I}F_{i }(d)\Big{)}\times_{\operatorname{colim}_{i\in I}X_{i}}\left(\operatorname{ colim}_{i\in I}X_{i^{\prime}}(i)\right)\cong\operatorname{colim}_{i\in I} \left(F_{i}(d)\times_{X_{i}}\underline{X_{i^{\prime}}(i)}\right)=\operatorname {colim}_{i\in I}\left(F_{i}(d)\times_{X_{i}}X_{i^{\prime}}\right)\] Note that since there is a natural isomorphism \(F_{i}(-)\cong F_{i^{\prime}}(-)\times_{X_{i^{\prime}}}X_{i}\) whenever there is a morphism \(\tilde{\varphi}:i\to i^{\prime}\) in \(\operatorname{Ind}(\mathscr{C})\). Using that \(I\) is filtered and that we can compute the colimit after applying the \(d\)-component of this natural isomorphism allows us to deduce that \[\operatorname{colim}_{i\in I}\left(F_{i}(d)\times_{X_{i}}X_{i^{\prime}}\right) \cong\operatorname{colim}_{i\in I}\left(\left(F_{i^{\prime}}(d)\times_{X_{i^{ \prime}}}X_{i}\right)\times_{X_{i}}X_{i^{\prime}}\right)\cong\operatorname{ colim}_{i\in I}\left(F_{i^{\prime}}(d)\right)\cong F_{i^{\prime}}(d)\] because the final colimit is taken in a variable which is not present in the object \(F_{i^{\prime}}(d)\). Note that each isomorphism presented is natural; defining \(\gamma\) to be the composite of these gives our desired 2-cell: Establishing the existence of \(\delta\) is similar to establishing that of \(\gamma\) and omitted. Finally the existence of the 2-cell follows immediately from the fact that \(\mathfrak{X}\cong\operatorname{colim}_{I}X_{i}\). That the resulting pasting diagram pastes to \(\rho_{\varphi}\) is also an extremely tedious but straightforward argument that uses the fact that \(\gamma\) and \(\delta\) involve \(\rho_{\varphi}\) in establishing the natural isomorphism and is also omitted. We now finally establish the remaining property of \(\operatorname{Ind}(\mathscr{C})_{I\,\mathfrak{X}}\) as a pseudolimit of the diagram of shape \(G\). That is, assume that we have two functors \(F,F^{\prime}:\mathscr{D}\to\operatorname{Ind}(\mathscr{C})_{/\,\mathfrak{X}}\) such that for all \(i\in I_{0}\) there is a 2-cell for which the diagram of functors and natural transformations, for any \(\varphi:i\to i^{\prime}\) in \(I\), commutes. We must establish that there is a unique 2-cell \(\gamma:F\to F^{\prime}\) making \(\alpha_{i}^{*}*\gamma=\gamma_{i}\) for all \(i\in I_{0}\). 
However, note that \(\alpha_{i}^{*}\circ F=F(-)\times_{\mathfrak{X}}X_{i}\) and \(\alpha_{i}^{*}\circ F^{\prime}=F^{\prime}(-)\times_{\mathfrak{X}}X_{i}\) for all \(i\in I_{0}\), and the conditions on the naturality of the \(\gamma_{i}\) and the coherences regarding the interchange of the \(\gamma_{i}\) with the pullbacks \(\tilde{\varphi}^{*}\) tell us that we have a uniquely determined natural transformation
\[\operatorname{colim}_{I}\left(F(-)\times_{\mathfrak{X}}X_{i}\right)\xrightarrow{\operatorname{colim}_{I}\left(\gamma_{i}\right)}\operatorname{colim}_{I}\left(F^{\prime}(-)\times_{\mathfrak{X}}X_{i}\right).\]
Using the natural isomorphisms \(\operatorname{colim}_{I}(F(-)\times_{\mathfrak{X}}X_{i})\cong F(-)\) and \(\operatorname{colim}_{I}(F^{\prime}(-)\times_{\mathfrak{X}}X_{i})\cong F^{\prime}(-)\) induced by the fact that filtered colimits commute with pullbacks in \(\operatorname{Ind}(\mathscr{C})\), we obtain our desired natural transformation \(\gamma:F\Rightarrow F^{\prime}\). Finally, that this satisfies \(\alpha_{i}^{*}*\gamma=\gamma_{i}\) is immediate by construction.

### Representable Ind-Tangent Categories

**Definition 4.2** ([7, Definition 3]).: Let \((\mathscr{C},\mathbb{T})\) be a Cartesian closed tangent category. We say that \(\mathscr{C}\) is a representable tangent category if there exists an object \(D\) of \(\mathscr{C}\) for which there is a natural isomorphism \(T(-)\cong[D,-]\).

Because of this we will first explain how the Ind-construction preserves adjoints.

**Proposition 4.3**.: _Let \(F\dashv G:\mathscr{C}\to\mathscr{D}\) be an adjunction. Then there is an adjunction \(\operatorname{Ind}(F)\dashv\operatorname{Ind}(G):\operatorname{Ind}(\mathscr{C})\to\operatorname{Ind}(\mathscr{D})\)._

Proof.: Let \(\underline{X}=(X_{i})_{i\in I}\) be an object of \(\operatorname{Ind}(\mathscr{C})\) and let \(\underline{Y}=(Y_{j})_{j\in J}\) be an object of \(\operatorname{Ind}(\mathscr{D})\). Then because \(F\dashv G\), \(\mathscr{C}(X_{i},GY_{j})\cong\mathscr{D}(FX_{i},Y_{j})\) for all \(i\in I_{0}\) and all \(j\in J_{0}\). Then we get that
\[\operatorname{Ind}(\mathscr{C})(\underline{X},\operatorname{Ind}(G)\underline{Y})=\operatorname{Ind}(\mathscr{C})((X_{i}),(GY_{j}))=\varprojlim_{i\in I}\left(\varinjlim_{j\in J}\mathscr{C}(X_{i},GY_{j})\right)\cong\varprojlim_{i\in I}\left(\varinjlim_{j\in J}\mathscr{D}(FX_{i},Y_{j})\right)=\operatorname{Ind}(\mathscr{D})((FX_{i}),(Y_{j}))=\operatorname{Ind}(\mathscr{D})(\operatorname{Ind}(F)\underline{X},\underline{Y}).\]
Thus \(\operatorname{Ind}(F)\dashv\operatorname{Ind}(G):\operatorname{Ind}(\mathscr{C})\to\operatorname{Ind}(\mathscr{D})\).

We can then use this to expand internal hom functors on Cartesian closed categories to the Ind-category, at least when the co-representing object \(\underline{X}\) is constant, i.e., when \(\underline{X}=(X)_{*\in\{*\}}\).

**Proposition 4.4**.: _Let \(\mathscr{C}\) be a category for which there is an object \(X\in\mathscr{C}_{0}\) such that \(\mathscr{C}\) admits a product functor \((-)\times X:\mathscr{C}\to\mathscr{C}\) and there is an internal hom functor \([X,-]:\mathscr{C}\to\mathscr{C}\) with \((-)\times X\dashv[X,-]\). Then if \(\underline{X}=(X)_{*\in\{*\}}\) is the constant object at \(X\) in \(\operatorname{Ind}(\mathscr{C})\), the internal hom functor \([\underline{X},-]:\operatorname{Ind}(\mathscr{C})\to\operatorname{Ind}(\mathscr{C})\) exists and is right adjoint to the product functor \((-)\times\underline{X}\); moreover, \([\underline{X},-]\cong\operatorname{Ind}([X,-])\)._

Proof.: That \(\operatorname{Ind}((-)\times X)\dashv\operatorname{Ind}([X,-])\) is a consequence of Proposition 4.3.
Because there is a natural isomorphism of functors
\[\operatorname{Ind}((-)\times X)\cong(-)\times\underline{X}\]
we get that \((-)\times\underline{X}\dashv\operatorname{Ind}([X,-])\) and so the internal hom exists. The final statement, that the functor \([\underline{X},-]\) is isomorphic to \(\operatorname{Ind}([X,-])\), follows from the fact that for any \((Y_{i})_{i\in I}\) and any \((Z_{j})_{j\in J}\) in \(\operatorname{Ind}(\mathscr{C})\),
\[\operatorname{Ind}(\mathscr{C})\big((Y_{i}),[\underline{X},(Z_{j})]\big)=\varprojlim_{i\in I}\left(\varinjlim_{j\in J}\mathscr{C}(Y_{i},[X,Z_{j}])\right)\cong\varprojlim_{i\in I}\left(\varinjlim_{j\in J}\mathscr{C}(Y_{i}\times X,Z_{j})\right)=\operatorname{Ind}(\mathscr{C})\left((Y_{i})\times\underline{X},(Z_{j})\right)\]
which gives the desired isomorphism \([\underline{X},-]\cong\operatorname{Ind}([X,-])\).

**Proposition 4.5**.: _Let \((\mathscr{C},\mathbb{T})\) be a representable tangent category. Then there is an object \(\underline{D}\) in the Ind-tangent category \((\operatorname{Ind}(\mathscr{C}),\operatorname{Ind}(\mathbb{T}))\) for which \(\operatorname{Ind}(T)\cong[\underline{D},-]\)._

Proof.: Since \(\mathscr{C}\) is a representable tangent category, \(\mathscr{C}\) is Cartesian closed and there is an object \(D\in\mathscr{C}_{0}\) for which \([D,-]\cong T\). It then follows from Proposition 4.4 that the internal hom functor \(\operatorname{Ind}([D,-])\cong[\underline{D},-]\) exists. A routine calculation shows
\[[\underline{D},-]\cong\operatorname{Ind}([D,-])\cong\operatorname{Ind}(T)\]
which in turn proves the proposition.

**Remark 4.6**.: Proposition 4.5 technically holds in more general (formal) situations than merely for Cartesian closed categories \(\mathscr{C}\). If there were a tangent category \((\mathscr{C},\mathbb{T})\) with an object \(D\in\mathscr{C}_{0}\) for which there is an internal hom functor and natural isomorphism \([D,-]\cong T\), then there is a natural isomorphism \([\underline{D},-]\cong\operatorname{Ind}(T)\) as well. I am unaware, however, of tangent categories \((\mathscr{C},\mathbb{T})\) with this property that are not Cartesian closed.

### Differential Bundles and Ind-Tangent Categories

One of the most important structures we can encounter in the theory of tangent categories is that of differential bundles. These describe objects \(E\) over some base object \(M\) together with a structure map \(q:E\to M\) that make \(E\) look "locally like" it is given by a tangent bundle. While these are of course related to vector bundles in algebraic geometry and differential geometry, they are also related to linear logic and allow a clean description of Cartesian differential categories (cf. [4, Section 3.5]). In this subsection we will study how the Ind-construction interacts with differential bundles and describe the cases where we can say that an object \(\underline{E}\to\underline{M}\) in \(\operatorname{Ind}(\mathscr{C})\) is "locally" determined by differential bundles in \(\mathscr{C}\). Consequently we need a short review of differential bundles and differential objects in tangent categories.

**Definition 4.7** ([3, Definition 2.1, 2.2]).: Let \(\mathscr{C}\) be a category. An additive bundle in \(\mathscr{C}\) is a morphism \(p:A\to M\) which is a commutative monoid in \(\mathscr{C}_{/M}\).
A morphism of additive bundles \((p:A\to M,\alpha:A\times_{M}A\to A,\zeta:M\to A)\) and \((q:B\to N,\mu:B\times_{N}B\to B,\eta:N\to B)\) is a pair of maps \(f:A\to B\) and \(g:M\to N\) such that the diagrams commute. **Definition 4.8** ([4, Definition 2.3]).: Let \((\mathscr{C},\mathbb{T})\) be a tangent category. A differential bundle in \(\mathscr{C}\) is a quadruple \(q=(q,\sigma,\zeta,\lambda)\) where: * \(q:E\to M\) is a morphism which is a commutative monoid internal to \(\mathscr{C}_{/M}\) with addition map \(\sigma:E_{2}\to E\) and unit map \(\zeta:M\to E\); * \(\lambda:E\to TE\) is a morphism called the lift of \(q\); * For any \(n\in\mathbb{N}\), the pullback powers \(E_{n}=E\times_{M}\dots\times_{M}E\) exist in \(\mathscr{C}\) for all \(n\in\mathbb{N}\) and for each \(m\in\mathbb{N}\) the functor \(T^{m}\) preserves these pullbacks; * The pair \((\lambda,0):(q,\sigma,\zeta,\lambda)\to(Tq,T\sigma,T\zeta,T\lambda)\) is an additive bundle map; * The pair \((\lambda,\zeta):(q,\sigma,\zeta,\lambda)\to(p,+,0,\ell)\) is an additive bundle morphism; * The lift \(\lambda\) is universal in the sense that if \(\mu\) is the morphism then the diagram below is a pullback which is preserved by \(T^{n}\) for all \(n\in\mathbb{N}\): * The equation \(\ell\circ\lambda=T(\lambda)\circ\lambda\) holds. **Definition 4.9** ([4, Definition 2.3]).: A morphism of differential bundles from \((q:E\to M,\sigma,\zeta,\lambda)\) to \((q^{\prime}:F\to N,\mu,\eta,\rho)\) is a pair of morphisms \(f:E\to F\) and \(g:M\to N\) such that the diagram commutes. Additionally, we say that the morphism \((f,g)\) is linear if it preserves the lift in the sense that the diagram commutes. **Remark 4.10**.: In the definition of a morphism of differential bundles, the pair \((f,g)\) need not be an additive bundle map, i.e., \((f,g)\) need not preserve the monoidal operations of \(q:A\to M\). However, by [4, Proposition 2.16] it does follow that if the bundle map is _linear_ then it is also an additive bundle map. We now show the first preservation result of the subsection: namely that two ind-objects indexed by the same filtered category which are locally differential bundles with linear transition maps give rise to an ind-differential bundle. **Proposition 4.11**.: _Let \(I\) be a filtered index category and let \(\mathscr{C}\) be a tangent category. 
If \(\underline{E}=(E_{i})_{i\in I}\) and \(\underline{M}=(M_{i})_{i\in I}\) are \(\operatorname{Ind}\)-objects such that:_
* _For each \(i\in I_{0}\) there is a differential bundle \((q_{i}:E_{i}\to M_{i},\,\sigma_{i}:E_{i}\times_{M_{i}}E_{i}\to E_{i},\,\zeta_{i}:M_{i}\to E_{i},\,\lambda_{i}:E_{i}\to TE_{i})\);_
* _For each map \(\varphi:i\to i^{\prime}\) in \(I\), if \(\tilde{\varphi}:E_{i}\to E_{i^{\prime}}\) and \(\widehat{\varphi}:M_{i}\to M_{i^{\prime}}\) denote the corresponding structure maps in \(\underline{E}\) and \(\underline{M}\), then \((\tilde{\varphi},\widehat{\varphi}):(q_{i},\sigma_{i},\zeta_{i},\lambda_{i})\to(q_{i^{\prime}},\sigma_{i^{\prime}},\zeta_{i^{\prime}},\lambda_{i^{\prime}})\) is a linear morphism of differential bundles;_

_then there is a differential bundle \((\underline{q},\underline{\sigma},\underline{\zeta},\underline{\lambda})\) in \(\operatorname{Ind}(\mathscr{C})\)._

Proof.: First note that the existence of the bundle maps \(q_{i}:E_{i}\to M_{i}\) for all \(i\in I\), together with the assumption that each pair \((\tilde{\varphi},\widehat{\varphi})\) is a linear differential bundle map, means that the corresponding compatibility diagrams all commute for every morphism \(\varphi:i\to i^{\prime}\) in \(I\). This allows us to deduce that there are maps \((q_{i})_{i\in I}=:\underline{q}:\underline{E}\to\underline{M}\) and \((\lambda_{i})_{i\in I}=:\underline{\lambda}:\underline{E}\to\operatorname{Ind}(T)\underline{E}\) in \(\operatorname{Ind}(\mathscr{C})\). Similarly, by [4, Proposition 2.16] it follows that each pair \((\tilde{\varphi},\widehat{\varphi})\) is an additive morphism of bundles, which implies that the corresponding additivity diagrams commute. Since \(\underline{E}\times_{\underline{M}}\underline{E}=(E_{i}\times_{M_{i}}E_{i})_{i\in I}\), this allows us to deduce that the maps \((\sigma_{i})_{i\in I}=:\underline{\sigma}:\underline{E}\times_{\underline{M}}\underline{E}\to\underline{E}\) and \((\zeta_{i})_{i\in I}=:\underline{\zeta}:\underline{M}\to\underline{E}\) exist in \(\operatorname{Ind}(\mathscr{C})\) as well. This defines our quadruple \((\underline{q},\underline{\sigma},\underline{\zeta},\underline{\lambda})\) which we will now show is a differential bundle in an axiom-by-axiom verification. First note that the pullbacks \(\underline{E}_{n}:=\underline{E}\times_{\underline{M}}\cdots\times_{\underline{M}}\underline{E}\) exist by an argument mutatis mutandis to the argument which showed why tangent pullbacks exist in Proposition 3.16, i.e., the object \(\underline{X}=(E_{i}\times_{M_{i}}\cdots\times_{M_{i}}E_{i})_{i\in I}\) represents the pullback \(\underline{E}_{n}\). The argument as to why the compositional powers \(\operatorname{Ind}(T)^{m}\) preserve these pullbacks is similar to the argument why \(\operatorname{Ind}(T)^{m}\) preserves tangent pullbacks given in Corollary 3.18.
To show that \((\underline{\lambda},\operatorname{Ind}(0)_{\underline{M}}):(\underline{q},\underline{\sigma},\underline{\zeta},\underline{\lambda})\to(\operatorname{Ind}(T)\underline{q},\operatorname{Ind}(T)(\underline{\sigma}),\operatorname{Ind}(T)(\underline{\zeta}),\operatorname{Ind}(T)(\underline{\lambda}))\) is an additive bundle morphism we need to show that the square \[\begin{CD}\underline{E} @>{\underline{\lambda}}>> \operatorname{Ind}(T)\underline{E}\\ @V{\underline{q}}VV @VV{\operatorname{Ind}(T)\underline{q}}V\\ \underline{M} @>>{\operatorname{Ind}(0)_{\underline{M}}}> \operatorname{Ind}(T)\underline{M}\end{CD}\] together with the analogous squares relating the addition maps \(\underline{\sigma}\) and \(\operatorname{Ind}(T)(\underline{\sigma})\) and the unit maps \(\underline{\zeta}\) and \(\operatorname{Ind}(T)(\underline{\zeta})\), all commute. However, as each of these diagrams commutes for each \(i\in I_{0}\), and each given pullback represents the \(i\)-component of the corresponding Ind-object, it follows that the diagrams between Ind-objects commute. Thus \((\underline{\lambda},\operatorname{Ind}(0)_{\underline{M}})\) is an additive bundle morphism. Establishing that \((\underline{\lambda},\underline{\zeta})\) is an additive bundle morphism is done similarly and hence omitted. For the universality of the lift \(\underline{\lambda}\) we note that each of the maps in the desired pullback is defined \(I\)-locally. Consequently, that the pullbacks exist and are preserved by all powers \(\operatorname{Ind}(T)^{m}\) follows similarly to our verification of the existence and preservation of \(\underline{E}_{n}\). Finally we verify that the last equation holds. Explicitly we calculate that \[\begin{split}\hat{\ell}\circ\underline{\lambda}&=\phi_{T,T}\circ(\ell_{i})_{i\in I}\circ(\lambda_{i})_{i\in I}=\phi_{T,T}\circ(\ell_{i}\circ\lambda_{i})_{i\in I}=\phi_{T,T}\circ(T(\lambda_{i})\circ\lambda_{i})_{i\in I}=\phi_{T,T}\circ\phi_{T,T}^{-1}\circ\operatorname{Ind}(T)\underline{\lambda}\circ\underline{\lambda}\\ &=\operatorname{Ind}(T)\underline{\lambda}\circ\underline{\lambda}\end{split}\] since the equation \(\ell_{i}\circ\lambda_{i}=T(\lambda_{i})\circ\lambda_{i}\) holds for all \(i\in I_{0}\). Thus \((\underline{q},\underline{\sigma},\underline{\zeta},\underline{\lambda})\) is a differential bundle in \(\operatorname{Ind}(\mathscr{C})\). A particularly important class of differential bundles in a tangent category with finite limits are the bundles over the terminal object \(\top\). These objects have various remarkable properties, such as satisfying \(TE\cong E\times E\) among other relations (cf. 
[4, Definition 3.1, Proposition 3.4] -- the definition introduces differential structures on objects and the proposition proves that in finitely complete tangent categories such structures are exactly differential bundles whose bundle map is a map to the terminal object). We will explore these more afterwards when we examine the structure of \(\operatorname{Ind}(\mathscr{C})\) in the case where \(\mathscr{C}\) is a Cartesian differential category, but for the moment we simply have a nice corollary to consider. We will, however, record a formal definition of differential objects for easier reading and then give the corollary. **Definition 4.12**.: Let \(\mathscr{C}\) be a finitely complete tangent category. A differential object in \(\mathscr{C}\) is an object \(X\) of \(\mathscr{C}\) such that if \(\top\) is the terminal object of \(\mathscr{C}\) then the unique map \(!_{X}:X\to\top\) is a differential bundle in \(\mathscr{C}\). **Definition 4.13**.: Let \(\mathscr{C}\) be a tangent category with finite products. We denote by \(\operatorname{\mathbf{Diff}}(\mathscr{C})\) the subcategory of \(\mathscr{C}\) generated by the differential bundles over \(\top_{\mathscr{C}}\) and the linear bundle maps \((\varphi,\operatorname{id}_{\top})\) between them. **Corollary 4.14**.: _Let \(\mathscr{C}\) be a tangent category with finite products and let \(I\) be a filtered index category. Then if \(\underline{E}=(E_{i})_{i\in I}\) is an \(\operatorname{Ind}\)-object such that each \(E_{i}\) is a differential bundle over the terminal object \(\top_{\mathscr{C}}\) and if each structure map \(E_{i}\to E_{i^{\prime}}\) is part of a linear morphism of differential bundles then \(\underline{E}\) is a differential object in \(\operatorname{Ind}(\mathscr{C})\)._ Proof.: Begin by observing that the object \(\underline{\top}=(\top_{\mathscr{C}})_{i\in I}\) is an \(\operatorname{Ind}\)-object. The hypotheses in the statement of the corollary together with Proposition 4.11 give that the map \(\underline{!}:\underline{E}\to\underline{\top}\) is a differential bundle in \(\operatorname{Ind}(\mathscr{C})\). The result now follows from [4, Proposition 3.4] because \[\underline{\top}\cong\top_{\operatorname{Ind}(\mathscr{C})}\] so \(\underline{E}\) is a differential bundle over the terminal object of \(\operatorname{Ind}(\mathscr{C})\). Let us now discuss some of the \(\operatorname{Ind}\)-tangent categorical structure of a Cartesian differential category. Cartesian differential categories (CDCs) are important objects in the pantheon of categorical differential geometry and have their genesis in providing semantics for the differential \(\lambda\)-calculus and for many of the differential operations performed in algebraic and differential geometry; cf. [2]. We will recall these categories below, but a main feature they enjoy is that every object in a CDC is a differential object, and that is the main property we will be focusing on and working with. To describe CDCs, however, we'll have to also first introduce left additive categories and a small barrage of terminology surrounding these objects, which are essentially categories _almost_ enriched in the category of modules over a rig save that only pre-composition is a morphism of modules. These are examples of what are called skew-enriched categories in [18]. **Definition 4.15** ([2, Definition 1.1.1]; [13, Definition 2.1]).: Let \(A\) be a crig6. 
A left \(A\)-linear category is a category \(\mathscr{C}\) such that: Footnote 6: That is, \(A\) is a commutative rig: a commutative ring without negatives. * For every pair of objects \(X,Y\in\mathscr{C}_{0}\) the hom-set \(\mathscr{C}(X,Y)\) is an \(A\)-module. We will write the juxtaposition \(af\) of an element \(a\in A\) with a morphism \(f\in\mathscr{C}(X,Y)\) to denote the action of \(A\) on \(\mathscr{C}(X,Y)\). * For any morphism \(f:X\to X^{\prime}\) in \(\mathscr{C}\) and for any \(Y\in\mathscr{C}_{0}\) the pre-composition morphism \(f^{*}:\mathscr{C}(X^{\prime},Y)\to\mathscr{C}(X,Y)\) is a morphism of \(A\)-modules. That is, for all \(a,b\in A\) and for all \(g,h\in\mathscr{C}(X^{\prime},Y)\), the equation \[(ag+bh)\circ f=a(g\circ f)+b(h\circ f)\] holds. **Definition 4.16** ([2, Definition 1.1.1]; [13, Definition 2.1]).: Let \(\mathscr{C}\) be a left \(A\)-linear category for a crig \(A\). Then we say that a morphism \(f\in\mathscr{C}_{1}\) is linear if the post-composition by \(f\) map is also an \(A\)-module morphism. That is, if \(f:X\to X^{\prime}\), \(Y\in\mathscr{C}_{0}\) is arbitrary, and if \(a,b\in A\) and \(g,h\in\mathscr{C}(Y,X)\) then the equation \[f\circ(ag+bh)=a(f\circ g)+b(f\circ h)\] holds in \(\mathscr{C}(Y,X^{\prime})\). **Definition 4.17** ([2, Definition 1.1.1]; [13, Definition 2.1]).: Let \(A\) be a crig and let \(\mathscr{C}\) be a left \(A\)-linear category. The subcategory of \(\mathscr{C}\) spanned by the linear morphisms is denoted \(\mathscr{C}_{\mathrm{lin}}\). Moreover, a left \(A\)-linear category is said to be \(A\)-linear if and only if \(\mathscr{C}=\mathscr{C}_{\mathrm{lin}}\). With this we can describe Cartesian \(A\)-linear categories and then, finally, Cartesian Differential Categories. **Definition 4.18** ([2, Definition 1.2.1]; [13, Definition 2.2]).: Let \(A\) be a crig. A Cartesian left \(A\)-linear category is a left \(A\)-linear category \(\mathscr{C}\) with finite products for which all the projection maps \[\pi_{i}^{0\cdots n}:X_{0}\times X_{1}\times\cdots\times X_{n}\to X_{i}\] are linear. We now get to the definition of a CDC. It is worth noting, however, that we are taking the convention used in [13] which says that the linear argument of the differential operator is the second argument; earlier works, such as [2], used the convention that the first argument was the linear argument. This will not present any serious issue, but it is worth noting the swap which appears in the various references in the literature. **Definition 4.19** ([2, Definition 2.1.1]; [13, Definition 2.5]).: Let \(A\) be a crig. A Cartesian differential \(A\)-linear category is a Cartesian left \(A\)-linear category \(\mathscr{C}\) equipped with a differential combinator \(D\) which gives the differentiation of a morphism, \[\frac{f:X\to Y}{D(f):X\times X\to Y}\] where \(D(f)\) is called the derivative of \(f\), which satisfies the following seven axioms: * \(D(af+bg)=aD(f)+bD(g)\) for all \(a,b\in A\) and for all morphisms \(f,g\). * \(D(f)\circ\langle g,ah+bk\rangle=a(D(f)\circ\langle g,h\rangle)+b(D(f)\circ\langle g,k\rangle)\) for all \(a,b\in A\). 
* \(D(\mathrm{id}_{X})=\pi_{1}^{X,X}:X\times X\to X\) and, for any product \(X_{0}\times\cdots\times X_{n}\), \[D(\pi_{i}^{0\cdots n})=\pi_{i}^{0\cdots n}\circ\pi_{1}^{X_{0}\times\cdots\times X_{n},X_{0}\times\cdots\times X_{n}}=\pi_{n+i+1}^{(0\cdots n)(0\cdots n)}:(X_{0}\times\cdots\times X_{n})\times(X_{0}\times\cdots\times X_{n})\to X_{i}.\] * \(D\big{(}\langle f_{0},\cdots,f_{n}\rangle\big{)}=\langle D(f_{0}),\cdots,D(f_{n})\rangle\). * \(D(g\circ f)=D(g)\circ\langle f\circ\pi_{0},D(f)\rangle\). * \(D(D(f))\circ\langle g,h,0,k\rangle=D(f)\circ\langle g,k\rangle\). * \(D(D(f))\circ\langle g,h,k,0\rangle=D(D(f))\circ\langle g,k,h,0\rangle\). When \(\mathscr{C}\) is an \(A\)-linear CDC then there is a class of maps in \(\mathscr{C}\) which are more important than those which are merely linear: the maps which are linear with respect to the differential combinator \(D\). These maps are called differential-linear and, by a result in [4], correspond to linear bundle maps for differential bundles over a terminal object. **Definition 4.20** ([13, Definition 2.6]).: Let \(A\) be a crig and let \(\mathscr{C}\) be an \(A\)-linear CDC. We say that a map \(f:A\to B\) is differential-linear (in short form, we say \(f\) is \(D\)-linear) if and only if \(D(f)=f\circ\pi_{0}\). The subcategory of \(\mathscr{C}\) spanned by the \(D\)-linear maps is denoted \(\mathscr{C}_{D\,\mathrm{lin}}\). It is, of course, a fact that \(\mathscr{C}_{D\,\mathrm{lin}}\) is a subcategory of \(\mathscr{C}_{\,\mathrm{lin}}\) by [2, Lemma 2.2]. However, it need not be the case that every \(A\)-linear morphism in a CDC be \(D\)-linear; thus we restrict our attention primarily to the \(D\)-linear notion of linearity as these are the maps which interact with the tangent categorical properties best. Here are the important results for our purposes that show \(A\)-linear CDCs are tangent categories where every object is a differential bundle over a terminal object \(\top\) and that linear bundle morphisms between differential objects over \(\top\) in an \(A\)-linear CDC are \(D\)-linear maps with respect to their standard tangent structure. **Proposition 4.21** ([4, Section 3.4]).: _Let \(A\) be a crig and let \(\mathscr{C}\) be an \(A\)-linear CDC. Then \(\mathscr{C}\) is a tangent category with tangent functor defined on objects by_ \[T(A):=A\times A\] _and defined on morphisms by_ \[T(f):=\langle D(f),f\circ\pi_{1}\rangle.\] _In particular, every object in a CDC equipped with this tangent structure is a differential bundle over a terminal object._ **Proposition 4.22** ([4, Section 3.4]).: _Let \(A\) be a crig and let \(\mathscr{C}\) be an \(A\)-linear CDC. Then \(f:A\to B\) is a linear morphism of differential bundles if and only if \(f\) is \(D\)-linear._ Proof.: The verification of the \(\implies\) direction of the proof is given in [4, Section 3.4] while the converse direction follows from a routine verification. Our first result regarding the Ind-category of a CDC and its Ind-tangent structure shows that we can actually use the Ind-category of a CDC to classify whether \(\mathscr{C}=\mathscr{C}_{D\,\mathrm{lin}}\). **Proposition 4.23**.: _Let \(A\) be a crig and let \(\mathscr{C}\) be an \(A\)-linear CDC. Then \(\mathscr{C}=\mathscr{C}_{D\,\mathrm{lin}}\) if and only if every object in \(\mathrm{Ind}(\mathscr{C})\) is a differential bundle over the terminal object \(\top_{\mathrm{Ind}(\mathscr{C})}\)._ Proof.: \(\implies\): Assume that \(\mathscr{C}=\mathscr{C}_{D\,\mathrm{lin}}\). 
Then by Propositions 4.21 and 4.22 \(\mathscr{C}\) is a tangent category, every object in \(\mathscr{C}\) is a differential bundle over \(\top_{\mathscr{C}}\), and every morphism is a linear map of differential bundles. We can thus apply Corollary 4.14 to deduce that every Ind-object of \(\mathscr{C}\) is a differential bundle over \(\top_{\mathrm{Ind}(\mathscr{C})}\). \(\impliedby\): Assume that every object in \(\mathrm{Ind}(\mathscr{C})\) is a differential bundle over the terminal object \(\top_{\mathrm{Ind}(\mathscr{C})}\). Consider the category \(\mathbb{2}\) with two objects \(0\) and \(1\) and a single non-identity morphism \(\varphi:0\to 1\). Then \(\mathbb{2}\) is a finite (and hence small) filtered category so any functor \(F:\mathbb{2}\to\mathscr{C}\) determines an Ind-object of \(\mathscr{C}\). Let \(F\) be any such functor and let \(\underline{\top}:\mathbb{2}\to\mathscr{C}\) be the functor with \(\underline{\top}(0)=\top_{\mathscr{C}}=\underline{\top}(1).\) Then \(\underline{\top}\cong\top_{\mathrm{Ind}(\mathscr{C})}\) so, by assumption, the functor \(F\) is a differential bundle over \(\underline{\top}\), i.e., a differential object. However, it then follows that \(F(0)\) and \(F(1)\) are differential bundles over \(\top\) in \(\mathscr{C}\) and the pair \((F(\varphi),\underline{\top}(\varphi))\) is a linear morphism of differential bundles. However, this implies that \(F(\varphi)\) is a \(D\)-linear morphism by Proposition 4.22, so using that there is a canonical bijection of sets \[[\mathbb{2},\mathscr{C}]_{0}\cong\mathscr{C}_{1}\] allows us to deduce that every morphism in \(\mathscr{C}\) is \(D\)-linear. A corollary of this construction is that it allows us to give a necessary and sufficient condition for recognizing differential bundles in Ind-categories of CDCs. **Corollary 4.24**.: _Let \(\mathscr{C}\) be an \(A\)-linear CDC. In order for \(F:I\to\mathscr{C}\) to be a differential bundle in \(\operatorname{Ind}(\mathscr{C})\) it is necessary and sufficient that for \(\varphi\in I_{1}\), every morphism \(F(\varphi)\) is \(D\)-linear in \(\mathscr{C}\)._ This means that when we study the \(\operatorname{Ind}\)-category of a CDC in order to determine when \(\operatorname{Ind}(\mathscr{C})\) is _not_ a CDC we can use the following recipe: 1. Find two objects of our CDC, \(X\) and \(Y\), such that there is a non-\(D\)-linear morphism \(\alpha:X\to Y\). 2. Consider the \(\operatorname{Ind}\)-object \(F:\mathbb{2}\to\mathscr{C}\) given by \(F(0)=X,F(1)=Y,F(\varphi)=\alpha\). 3. Note that \(F\) is not a differential bundle over \(\top_{\operatorname{Ind}(\mathscr{C})}\) and so is not a differential object. 4. Conclude that \(\operatorname{Ind}(\mathscr{C})\) is not a CDC because not every object is a differential object. This leads to a straightforward way of recognizing when \(\operatorname{Ind}\)-categories of CDCs are not themselves CDCs. **Proposition 4.25**.: _Let \(A\) be a crig and let \(\mathscr{C}\) be an \(A\)-linear CDC. Then if \(\mathscr{C}_{D\operatorname{lin}}\neq\mathscr{C}\), \(\operatorname{Ind}(\mathscr{C})\) is not a CDC._ **Remark 4.26**.: It remains open to determine exactly when the \(\operatorname{Ind}\)-category of a CDC is itself a CDC. In light of Propositions 4.23 and 4.25, if \(\mathscr{C}\) is a CDC then in order for \(\operatorname{Ind}(\mathscr{C})\) to be a CDC we require that \(\mathscr{C}=\mathscr{C}_{D\operatorname{lin}}\). However, to make \(\operatorname{Ind}(\mathscr{C})\) a CDC we also need the differential combinator to interact suitably well with filtered colimits. ## 5. 
\(\operatorname{Ind}\)-Tangent Category Examples In this section we will give some computations regarding the \(\operatorname{Ind}\)-tangent category of some tangent categories of interest. What is of particular note is that the examples here give various different ways of discussing infinite-dimensional differential-geometric information: \(\operatorname{Ind}(\mathbf{CAlg}_{A})\) and \(\operatorname{Ind}(A\mathbf{\text{-}Poly})\) both give algebraic (ring-or-rig-theoretic) analogues of infinite-dimensional differential algebra; \(\operatorname{Ind}(\mathbf{Sch}_{/S})\) gives differential insight into formal8 infinite-dimensional Euclidean manifolds. Footnote 8: In the sense of the theory of formal schemes and formal geometry in the spirit of the Existence Theorem for formal functions (cf. [9, Théorème 4.1.5]). ### The Tangent Structure on Commutative \(A\)-Algebras Let \(A\) be a ring9. The tangent structure on the category of commutative \(A\)-algebras takes the form Footnote 9: A commutative ring with identity. \[T(R):=R[\varepsilon]\cong\frac{R[x]}{(x^{2})}\cong R\otimes_{A}\frac{A[x]}{(x^{2})}\] with morphisms adapted correspondingly. To give a concrete description of the tangent functor on \(\operatorname{Ind}(\mathbf{CAlg}_{A})\) we will use the functorial description of \(\operatorname{Ind}\). Let \(I\) be a filtered category and let \(F:I\to\mathbf{CAlg}_{A}\) be a functor. Then \(\operatorname{Ind}(T)(F)=T\circ F\) so we can describe the \(\operatorname{Ind}\)-object \(\operatorname{Ind}(T)(F)\) of \(\mathbf{CAlg}_{A}\) as follows: * For any \(i\in I_{0}\), the object assignment of \(\operatorname{Ind}(T)(F)\) is given by \((T\circ F)(i)=F(i)[\varepsilon]\). * For any morphism \(\varphi:i\to i^{\prime}\) in \(I\), the morphism assignment of \(\operatorname{Ind}(T)(F)\) is given by \((\operatorname{Ind}(T)(F))(\varphi)=(T\circ F)(\varphi)=F(\varphi)[\varepsilon]\) where \(F(\varphi)[\varepsilon]\) is the unique morphism making the diagram \[\begin{CD}F(i)[\varepsilon] @>{\exists!\,F(\varphi)[\varepsilon]}>> F(i^{\prime})[\varepsilon]\\ @V{\cong}VV @VV{\cong}V\\ F(i)[x]/(x^{2}) @>{r\mapsto F(\varphi)(r),\ x\mapsto x}>> F(i^{\prime})[x]/(x^{2})\\ @V{\cong}VV @VV{\cong}V\\ F(i)\otimes_{A}A[x]/(x^{2}) @>{F(\varphi)\otimes\operatorname{id}}>> F(i^{\prime})\otimes_{A}A[x]/(x^{2})\end{CD}\] commute. In particular this means that \(\operatorname{Ind}(T)\) acts by taking a filtered diagram in \(\mathbf{CAlg}_{A}\) and applying the \((-)[\varepsilon]\) functor to it at every stage. ### The Zariski Tangent Structure on Formal Schemes In a recent work [5], Cruttwell and LeMay have proved that the category \(\operatorname{\mathbf{Sch}}_{/S}\) admits a tangent structure for any base scheme \(S\). The tangent functor \(T:\operatorname{\mathbf{Sch}}_{/S}\to\operatorname{\mathbf{Sch}}_{/S}\) sends an \(S\)-scheme \(X\) to the (Zariski) tangent fibre of \(X\) relative to \(S\) constructed by Grothendieck in [10, Section 16.5]. In particular, for an \(S\)-scheme \(X\) we have10 Footnote 10: The version of the tangent functor and maps constructed in [6] is given for affine schemes, but because the Sym functor commutes with tensors on \(\operatorname{\mathbf{QCoh}}(X)\) and because the sheaf of differentials \(\Omega^{1}_{X/S}\) is a quasi-coherent sheaf, everything regarding this functor may be checked Zariski-locally, i.e., affine-locally on both the target and the base. We will describe this in more detail later, but it is worth remarking now. 
\[T(X):=T_{X/S}=\operatorname{Spec}\left(\operatorname{Sym}(\Omega^{1}_{X/S}) \right).\] For an affine scheme \(S=\operatorname{Spec}A\) and an affine \(S\)-scheme \(X=\operatorname{Spec}B\), the scheme \(T_{X/S}=T_{B/A}=\operatorname{Spec}(\operatorname{Sym}(\Omega^{1}_{B/A}))\) is an affine scheme. The ring \[C=\operatorname{Sym}\left(\Omega^{1}_{B/A}\right)\] is generated by symbols \(b,\mathrm{d}b\) for \(b\in B\) generated by the rules that addition and multiplication for symbols from \(b\in B\) are as in \(B\) and the Leibniz rule \[\mathrm{d}(bb^{\prime})=b^{\prime}\mathrm{d}(b)+b\mathrm{d}(b^{\prime})\] holds with \(\mathrm{d}b\mathrm{d}b^{\prime}=0\) and \(\mathrm{d}(a)=0\) for \(a\in A\). With this definition we find that \[T_{2}(X)=(T_{B/A})_{2}=\operatorname{Spec}\left(\operatorname{Sym}(\Omega^{1}_ {B/A})\otimes_{B}\operatorname{Sym}(\Omega^{1}_{B/A})\right)\] and that \[T^{2}(X)=T_{T_{B/A}/A}=\operatorname{Sym}\left(\Omega^{1}_{\operatorname{Sym} (\Omega^{1}_{B/A})/A}\right).\] It is worth noting for what follows that the algebra \(C\) with \(T^{2}_{B/A}=\operatorname{Spec}C\), \[C=\operatorname{Sym}\left(\Omega^{1}_{\operatorname{Sym}(\Omega^{1}_{B/A})} \right),\] is generated by symbols \(b,\mathrm{d}b,\delta b\), and \(\delta\mathrm{d}(b)\) for all \(b\in B\). Essentially, there is a new derivational neighborhood \(\delta\) of \(\operatorname{Sym}(\Omega^{1}_{B/A})\) which gives us a notion of \(2\)-jets and a distinct direction of \(1\)-jets (the \(\delta\)-direction). **Theorem 5.1** ([6]).: _For affine schemes \(\operatorname{Spec}B\to\operatorname{Spec}A\) the tangent structure \((\operatorname{\mathbf{Sch}}_{/\operatorname{Spec}A},\mathbb{T})\) is generated by the maps and functors on affine schemes:_ * _The tangent functor is given by_ \(T_{\operatorname{Spec}B/\operatorname{Spec}A}=T_{B/A}=\operatorname{Spec} \left(\operatorname{Sym}(\Omega^{1}_{B/A})\right)\)_._ * _The bundle map_ \(p_{B}:T_{B/A}\to\operatorname{Spec}B\) _is the spectrum of the ring map_ \[q_{B}:B\to\operatorname{Sym}(\Omega^{1}_{B/A})\] _generated by_ \(b\mapsto b\)_._ * _The zero map_ \(0_{B}:T_{B/A}\to\operatorname{Spec}B\) _is the spectrum of the ring map_ \[\zeta_{B}:Sym(\Omega^{1}_{B/A})\to B\] _given by_ \(b\mapsto b,\mathrm{d}b\mapsto 0\)_._ * _The bundle addition map_ \(+_{B}:(T_{B/A})_{2}\to T_{B/A}\) _is the spectrum of the map_ \[\operatorname{add}_{B}:\operatorname{Sym}(\Omega^{1}_{B/A})\to\operatorname{ Sym}(\Omega^{1}_{B/A})\otimes_{B}\operatorname{Sym}(\Omega^{1}_{B/A})\] _given by_ \(b\mapsto b\otimes 1_{B},\mathrm{d}b\mapsto\mathrm{d}b\otimes 1+1\otimes \mathrm{d}b\)_._ * _The vertical lift_ \(\ell_{B}:T_{B/A}\to T^{2}_{B/A}\) _is given as the spectrum of the ring map_ \[v_{B}:\operatorname{Sym}\left(\Omega^{1}_{\operatorname{Sym}(\Omega^{1}_{B/A}) }\right)\to\operatorname{Sym}\left(\Omega^{1}_{B/A}\right)\] _generated by_ \(b\mapsto b,\mathrm{d}b\mapsto 0,\delta b\mapsto 0\)_,_ \(\delta\mathrm{d}(b)\mapsto\mathrm{d}b\)_._ * _The canonical flip is the map_ \(c_{B}:T_{B/A}^{2}\to T_{B/A}^{2}\) _generated as the spectrum of the ring map_ \[\gamma_{B}:\operatorname{Sym}\left(\Omega^{1}_{\operatorname{Sym}(\Omega^{1}_{ B/A})}\right)\to\operatorname{Sym}\left(\Omega^{1}_{\operatorname{Sym}(\Omega^{1}_{ B/A})}\right)\] _which interchanges_ \(1\)_-jets, i.e., the map is generated by_ \(b\mapsto b,\mathrm{d}b\mapsto\delta b,\delta b\mapsto\mathrm{d}b\) _and_ \(\delta\mathrm{d}b\mapsto\delta\mathrm{d}b\)_._ **Definition 5.2**.: We define the Zariski tangent structure on a scheme \(S\) to 
be the tangent structure \(\mathbb{T}_{\mathbf{Zar}(S)}=(T_{-/S},p,0,+,\ell,c)\) on the category \(\mathbf{Sch}_{/S}\) described by Theorem 5.1. **Remark 5.3**.: This tangent category and tangent structure is also used and studied in [16]. The Zariski tangent structure there is used to study a category of equivariant tangent schemes over a base \(G\)-variety \(X\); we will not take this direction in this paper save for a short discussion at the end of this paper. It is worth noting, however, that in [16] the various functoriality and naturality conditions involving the Zariski Tangent Structure are established, and we will refer to those results when establishing the functoriality of the \(\operatorname{Ind}\)-tangent structure on schemes and the tangent structure on the category of formal schemes. This is, in some sense, the "canonical" tangent structure on \(\mathbf{Sch}_{/S}\), as the tangent scheme \(T_{X/S}\) captures \(S\)-derivations of \(X\) in the following sense: If \(S=\operatorname{Spec}K\) for a field \(K\) and if \(\operatorname{Spec}A\) is an affine \(K\)-scheme, then the \(A\)-points of the tangent scheme \(T_{X/S}(A)\) satisfy \[T_{X/S}(A)=\mathbf{Sch}_{/K}(\operatorname{Spec}A,T_{X/S})\cong\mathbf{Sch}_{/K }\left(\operatorname{Spec}\left(\frac{A[x]}{(x^{2})}\right),X\right)\] so in particular \(K\)-points of \(T_{X/S}\) give the \(1\)-differentials \(\Omega^{1}_{X/K}\). Moreover, for any closed point \(x\in|X|\) we have a canonical isomorphism \[T_{X/S}(x)\cong K\text{-}\mathbf{Alg}\left(\frac{\mathfrak{m}_{x}}{\mathfrak{ m}_{x}^{2}},K\right)\] of \(T_{X/S}(x)\) with the Zariski tangent space of \(X\) over \(K\). **Proposition 5.4** ([16]).: _Let \(S\) be a scheme and let \(f:X\to Y\) be a morphism of \(S\)-schemes. Then the pullback functor \(f^{*}:\mathbf{Sch}_{/Y}\to\mathbf{Sch}_{/X}\) is part of a strong tangent morphism \((f^{*},T_{f})\) where \(T_{f}\) is the natural isomorphism_ \[f^{*}\circ T_{-/Y}\xrightarrow{\cong}T_{((-)\times_{Y}X)/X}.\] We will now, as an application to Theorem 3.27 above, construct the tangent category of formal schemes over a base scheme \(S\). **Example 5.5**.: Let \(S\) be an arbitrary base scheme and consider the Zariski tangent category \((\mathbf{Sch}_{S},T_{\mathbf{Zar}(S)})\) of \(\mathbf{Sch}_{/S}\). Fix a Grothendieck universe \(\mathscr{U}\) and let \(\operatorname{Ind}_{\mathscr{U}}(\mathscr{C})\) be the ind-category of \(\mathscr{C}\) where each object \(\underline{X}\) is indexed by a filtered \(\mathscr{U}\)-small category (cf. [1, I.8.2.4.5]). Then by [19, Definition 4.5]11\(\operatorname{Ind}_{\mathscr{U}}(\mathbf{Sch}_{/S})=\mathbf{FSch}_{/S}\), i.e., the \(\mathscr{U}\)-small ind-category of \(\mathbf{Sch}_{/S}\) is equivalent to the category of formal schemes over \(S\). Applying Theorem 3.27 to \(\operatorname{Ind}_{\mathscr{U}}(\mathbf{Sch}_{/S})\) we find that \(\mathbf{FSch}_{/S}\) is a tangent category with tangent formal scheme described as follows. If \(\mathfrak{X}=(X_{i})_{i\in I}=\operatorname{colim}_{i\in I}X_{i}\) is a formal scheme over \(S\) then the Zariski tangent formal scheme of \(\mathfrak{X}\) is the formal scheme Footnote 11: It is worth remarking that there are _various_ different definitions of formal schemes in the literature. 
For instance, in [8, Definition 10.4.2] Grothendieck and Dieudonné define formal schemes in terms of topologically ringed spaces; in [20] Yasuda defines formal schemes as certain proringed spaces, i.e., as a certain type of topological spaces equipped with a specific flavour of sheaf of prorings; and in [11] formal schemes are defined in terms of completions along Noetherian schemes only. What matters, however, is that each such situation gives a subcategory of \(\operatorname{Ind}(\mathbf{Sch}_{/S})\) and so we take the more purely categorical perspective in this paper. \[T_{\mathfrak{X}/S}:=(T_{X_{i}/S})_{i\in I}=\operatorname{colim}_{i\in I}T_{X_{i}/S}.\] A straightforward formal consequence of our main theorem and Proposition 5.4 is that when we have a morphism of schemes \(f:S\to Y\), the pullback functor \(f^{*}:\mathbf{FSch}_{/Y}\to\mathbf{FSch}_{/S}\) is part of a strong tangent morphism. **Proposition 5.6**.: _Let \(f:S\to Y\) be a morphism of schemes. Then the pullback functor \(f^{*}:\mathbf{FSch}_{/Y}\to\mathbf{FSch}_{/S}\) is part of a strong tangent morphism._ Proof.: We first note that the functor \(f^{*}:\mathbf{FSch}_{/Y}\to\mathbf{FSch}_{/S}\) is defined by sending a formal scheme \(\mathfrak{X}\to Y\) to the formal scheme \(\mathfrak{X}\times_{Y}S\) (and similarly on morphisms). However, if \(\mathfrak{X}=(X_{i})_{i\in I}\) then \(\mathfrak{X}\times_{Y}S\) is represented by the Ind-object \((X_{i}\times_{Y}S)_{i\in I}\). Consequently we define our natural isomorphism \[(T_{f}):f^{*}\circ T_{-/Y}\to T_{-/S}\circ f^{*}\] by defining the \(\mathfrak{X}\)-component \[(T_{f})_{\mathfrak{X}}:(f^{*}\circ T_{-/Y})(\mathfrak{X})\to(T_{-/S}\circ f^{*})(\mathfrak{X})\] to be the map \[(T_{f})_{\mathfrak{X}}=((T_{f})_{X_{i}})_{i\in I}\] where each \((T_{f})_{X_{i}}\) is the \(X_{i}\)-component of the natural isomorphism \(T_{f}\) of Proposition 5.4. That this is a morphism of Ind-schemes is trivial to check and that it is an isomorphism is immediate as well. Finally, that the pair \((f^{*},(T_{f}))\) is a strong tangent morphism follows from the fact that the tangent morphism identities and strengths are checked locally. The more general situation (determining whether or not the pullback functor \(\underline{f}^{*}:\mathbf{FSch}_{/\mathfrak{Y}}\to\mathbf{FSch}_{/\mathfrak{X}}\) is a strong tangent morphism) is a significantly more complicated problem. While I anticipate that it is true, and that this can and should use Proposition 4.1 to handle pullback along a morphism \(\underline{f}:\mathfrak{X}\to\mathfrak{Y}\), establishing this in complete generality is left as future work. We close this subsection with some explicit examples of the formal tangent scheme of various formal schemes over various bases. **Example 5.7**.: Let \(K\) be a field (of characteristic zero, for simplicity) and let \(S:=\operatorname{Spec}K\) be our base scheme. Now consider the scheme \(X=\operatorname{Spec}K[t]\) and note that there is a dense embedding of rings \[K[t]\hookrightarrow\lim_{n\in\mathbb{N}}\frac{K[t]}{(t^{n})}.\] Now consider the formal scheme \(\mathfrak{X}:=\operatorname{Spf}K[\![t]\!]\), which we define to be the Ind-object \[\operatorname{Spf}K[\![t]\!]:=\left(\operatorname{Spec}\frac{K[t]}{(t^{n})}\right)_{n\in\mathbb{N}}\cong\left(\operatorname{Spec}\frac{K[\![t]\!]}{(t^{n})}\right)_{n\in\mathbb{N}}.\] Then \(\operatorname{Spf}K[\![t]\!]\) gives us a nilpotent thickening of \(\operatorname{Spec}K[t]\) at the origin which is _not_ equivalent to a scheme. 
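To make the stages of this Ind-object concrete, here is a small sympy sketch (the helper `trunc` and the sample polynomial are my own illustrative choices, not from the text): in the \(n\)-th stage \(t\) is nilpotent of order \(n\), and the transition ring maps \(K[t]/(t^{n^{\prime}})\to K[t]/(t^{n})\) for \(n\leq n^{\prime}\) are simply further truncations, so the stages form a compatible filtered system.

```python
import sympy as sp

t = sp.symbols('t')

def trunc(f, n):
    # Image of a polynomial f in K[t]/(t^n): discard all monomials of degree >= n.
    return sum(c * t**k for (k,), c in sp.Poly(f, t).terms() if k < n)

f = (1 + t)**4
print(trunc(f, 3))                       # image in K[t]/(t^3): 6*t**2 + 4*t + 1
print(trunc(t**3, 3))                    # t is nilpotent of order 3 here: 0
# Compatibility of the transition maps: truncating below degree 5 and then
# below degree 3 agrees with truncating below degree 3 directly.
print(trunc(trunc(f, 5), 3) == trunc(f, 3))   # True
```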
We can describe the formal tangent scheme \(T_{\operatorname{Spf}K[\![t]\!]/K}\) as follows. By construction we have that \[T_{\operatorname{Spf}K[\![t]\!]/\operatorname{Spec}K} =T_{-/\operatorname{Spec}K}\left(\operatorname{Spec}\frac{K[t]}{( t^{n})}\right)_{n\in\mathbb{N}}=\left(T_{\operatorname{Spec}(K[t]/(t^{n}))/ \operatorname{Spec}K}\right)_{n\in\mathbb{N}}\] \[\cong\left(\operatorname{Spec}\left(\operatorname{Sym}\left( \Omega^{1}_{(K[\![t]\!]/(t^{n}))/K}\right)\right)\right)_{n\in\mathbb{N}} \cong\operatorname{Spec}\left(\operatorname{Sym}\left(\frac{K[t]\!]\mathrm{d} t}{(t^{n},\mathrm{d}(t^{n})/\mathrm{d}t)}\right)\right)_{n\in\mathbb{N}}\] \[\cong\left(\operatorname{Spec}\left(\operatorname{Sym}\left( \frac{K[t]\!]\mathrm{d}t}{(t^{n},nt^{n-1}\mathrm{d}t)}\right)\right)\right)_{n \in\mathbb{N}}.\] We now close by making some short conjectures regarding the tangent category of formal schemes. **Conjecture 5.8**.: _Let \(\mathfrak{X}\) and \(\mathfrak{Y}\) be formal schemes over a base scheme \(S\) and let \(\underline{f}:\mathfrak{X}\to\mathfrak{Y}\) be a morphism. Then the pullback functor \(\underline{f}^{*}:\mathbf{FSch}_{/\mathfrak{Y}}\to\mathbf{FSch}_{/\mathfrak{X}}\) is part of a strong tangent morphism._ **Conjecture 5.9**.: _If every morphism appearing in a formal scheme \(\mathfrak{X}=(X_{i})_{i\in I}\) is a closed immersion then the same is true of \(T_{\mathfrak{X}/S}\)._ ### The Ind-Tangent Category of the CDC of Polynomials In this subsection we fix a \(\operatorname{crig}\)\(A\). An important and well-known \(A\)-linear CDC (cf. [13, Example 2.7.a]) is the CDC \(A\)-**Poly** of polynomials with coefficients in \(A\). This is the category defined as follows: * Objects: \(n\in\mathbb{N}\); * Morphisms: A map \(\varphi:n\to m\) is given by an \(m\)-tuple of polynomials with scalars in \(A\) with \(n\)-variables. That is, \[\varphi=\big{(}p_{1}(x_{1},\cdots,x_{n}),\cdots,p_{m}(x_{1},\cdots,x_{n}) \big{)},\qquad p_{j}(x_{1},\cdots,x_{n})\in A[x_{1},\cdots,x_{n}].\] * Composition: The composition of morphisms \(\varphi:n\to m\) and \(\psi:m\to\ell\), if \(\varphi\) is the \(m\)-tuple \((f_{1}(x_{1},\cdots,x_{n}),\cdots,f_{m}(x_{1},\cdots,x_{n}))\) and \(\psi\) is the \(\ell\)-tuple \((g_{1}(x_{1},\cdots,x_{m}),\cdots,g_{\ell}(x_{1},\cdots,x_{m}))\), then \(\psi\circ\varphi\) is defined by \[\psi\circ\varphi:=\bigg{(}g_{1}\big{(}f_{1}(x_{1},\cdots,x_{n}),\cdots,f_{m}(x _{1},\cdots,x_{m})\big{)},\cdots,g_{\ell}\big{(}f_{1}(x_{1},\cdots,x_{n}), \cdots,f_{m}(x_{1},\cdots,x_{n})\big{)}\bigg{)}.\] * Identities: The identity map \(\operatorname{id}_{n}:n\to n\) is given by \[\operatorname{id}_{n}:=(x_{1},\cdots,x_{n}).\] In this category the product of \(n\) with \(m\) is the natural number sum \(n+m\) (analogously to how the dimension of affine space satisfies \(\mathbb{A}_{K}^{n}\times_{\operatorname{Spec}K}\mathbb{A}_{K}^{m}\cong \mathbb{A}_{K}^{n+k}\) in the scheme-theoretic world; note here \(K\) is a ring). 
The differential combinator on \(A\)-**Poly** takes an \(m\)-tuple \(\varphi:n\to m\) with \[\varphi=\big{(}p_{1}(x_{1},\cdots,x_{n}),\cdots,p_{m}(x_{1},\cdots,x_{n})\big{)}\] and sends it to the tuple \(D(\varphi):n\times n\to m\) given by the sum of its formal total derivatives: \[D(\varphi):=\left(\sum_{i=1}^{n}\frac{\partial\,p_{1}(x_{1},\cdots,x_{n})}{\partial\,x_{i}}\,y_{i},\cdots,\sum_{i=1}^{n}\frac{\partial\,p_{m}(x_{1},\cdots,x_{n})}{\partial\,x_{i}}\,y_{i}\right).\] By [13] we know that a morphism \(\varphi:n\to m\) is \(D\)-linear if and only if \(\varphi=(p_{1},\cdots,p_{m})\) is comprised of degree one monomials. In particular, such a map induces an \(A\)-linear transformation \(A^{n}\to A^{m}\) and there is an equivalence of categories (cf. [13]) \(A\)-**Poly**\({}_{D\,\mathrm{lin}}\simeq A\)-**Mod**\({}_{\mathrm{f.d.}}\) where \(A\)-**Mod**\({}_{\mathrm{f.d.}}\) denotes the category of finite dimensional \(A\)-modules. Let us now give a concrete description of Ind\((A\)-**Poly**). The objects here are, of course, filtered diagrams \(I\to A\)-**Poly** and so every object can be represented as a filtered diagram of natural numbers and tuples of polynomials. By Proposition 4.25 and the description of the linear maps above, we see that Ind\((A\)-**Poly**) is not a CDC. However, we will now characterize the differential objects in Ind\((A\)-**Poly**) and then use this to prove that the category of such objects is equivalent to the category of \(A\)-modules. **Theorem 5.10**.: _The category of differential objects in Ind\((A\)-**Poly**) together with linear bundle maps between them is equivalent to Ind\((A\)-Mod\({}_{\mathrm{f.d.}})\). In particular, there is an equivalence of categories of_ **Diff\((\)Ind\((A\)-**Poly**)) _and the category \(A\)-_**Mod** _of \(A\)-modules._ Proof.: By Corollary 4.24 we find that a differential object in Ind\((A\)-**Poly**) is necessarily a filtered diagram \(F:I\to A\)-**Poly** such that for every morphism \(\varphi\in I_{1}\), \(F(\varphi)\) is \(D\)-linear in \(A\)-**Poly**. Consequently, \(F\) factors as \(F:I\to A\)-**Poly\({}_{D\,\mathrm{lin}}\to A\)-Poly** and so we induce an equivalence of categories \[\textbf{Diff}(\text{Ind}(A\text{-}\textbf{Poly}))\simeq\text{Ind}(A\text{-}\textbf{Poly}_{D\,\mathrm{lin}})\simeq\text{Ind}(A\text{-}\textbf{Mod}_{\mathrm{f.d.}}).\] Now, because the category of \(A\)-modules is compact (in the sense that every \(A\)-module is a filtered colimit of its finite dimensional submodules) and cocomplete it follows from [12, Corollary 6.3.5] that Ind\((A\)-Mod\({}_{\mathrm{f.d.}})\simeq A\)-Mod. Chaining the equivalences together gives \[\textbf{Diff}(\text{Ind}(A\text{-}\textbf{Poly}))\simeq\text{Ind}(A\text{-}\textbf{Poly}_{D\,\mathrm{lin}})\simeq\text{Ind}(A\text{-}\textbf{Mod}_{\mathrm{f.d.}})\simeq A\text{-}\textbf{Mod}\] as was desired. **Corollary 5.11**.: _The Ind-tangent category_ Ind\((A\)-**Poly**) _contains a sub-CDC equivalent to the category \(A\)-Mod of \(A\)-modules._ As an aside, note that the category Ind\((A\)-**Poly**) also contains an object representing polynomials in any number of variables. If \(I\) is a set, regard \(I\) as a discrete category and consider the object \(F:I\to A\)-**Poly** given by \(F(i)=1\). Then \(F\) represents the coproduct \[\bigotimes_{i\in I}A[x]\cong A[x_{i}:i\in I]\] and so we get that Ind\((A\)-**Poly**) contains polynomial objects with arbitrary numbers of variables. 
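As a concrete illustration (a worked example of my own, not taken from [13]), the following sympy sketch implements the differential combinator of \(A\)-**Poly** exactly as displayed above and checks, on sample polynomial tuples, the chain rule axiom \(D(\psi\circ\varphi)=D(\psi)\circ\langle\varphi\circ\pi_{0},D(\varphi)\rangle\) as well as the fact that a tuple of degree-one monomials has a derivative depending only on the direction variables.

```python
import sympy as sp

def D(phi, xs, prefix='y'):
    # Differential combinator of A-Poly: phi is a tuple of polynomials in the
    # variables xs; D(phi) is linear in fresh direction variables y0, y1, ...
    ys = sp.symbols(f'{prefix}0:{len(xs)}')
    return tuple(sum(sp.diff(p, x) * y for x, y in zip(xs, ys)) for p in phi), ys

x0, x1, u0, u1 = sp.symbols('x0 x1 u0 u1')

phi = (x0**2 + 3*x0*x1, x1**3)      # a morphism 2 -> 2 in the variables x0, x1
psi = (u0*u1,)                      # a morphism 2 -> 1 in the variables u0, u1

# Left-hand side: D(psi o phi), with composition given by substitution.
comp = tuple(q.subs({u0: phi[0], u1: phi[1]}) for q in psi)
Dcomp, _ = D(comp, (x0, x1))

# Right-hand side: D(psi) o < phi o pi_0 , D(phi) >.
Dphi, _ = D(phi, (x0, x1))
Dpsi, (v0, v1) = D(psi, (u0, u1), prefix='v')
rhs = tuple(q.subs({u0: phi[0], u1: phi[1], v0: Dphi[0], v1: Dphi[1]}) for q in Dpsi)

print(sp.simplify(Dcomp[0] - rhs[0]))        # 0: the two sides agree
print(D((2*x0 + 5*x1,), (x0, x1))[0])        # (2*y0 + 5*y1,): only the directions survive
```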
### The Ind-Tangent Category of the CDC of Smooth Maps We close this section of the paper by computing some facts about the CDC \(\mathbb{R}\operatorname{\textbf{-Smooth}}\) of smooth functions between Euclidean spaces (cf. [13]). This is a CDC which is important for modeling the theory of smooth functions between manifolds and other differential geometric contexts. This category is defined as follows: 1. Objects: Euclidean spaces \(\mathbb{R}^{n}\) for \(n\in\mathbb{N}\); 2. Morphisms: Smooth functions \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\); 3. Composition and Identities: As in \(\mathbf{Set}\). Note that \(\mathbb{R}\operatorname{\textbf{-Smooth}}\) is an \(\mathbb{R}\)-linear category as each hom-set satisfies \[\mathbb{R}\operatorname{\textbf{-Smooth}}(\mathbb{R}^{n},\mathbb{R}^{m})= \mathcal{C}^{\infty}(\mathbb{R}^{n},\mathbb{R}^{m})\] and so is canonically a generically infinite-dimensional \(\mathbb{R}\)-vector space12 with bilinear composition. The differential combinator \(D\) on \(\mathbb{R}\operatorname{\textbf{-Smooth}}\) is defined by sending a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) to its total derivative (viewed as a function \(D(f):\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{m}\)). Explicitly, begin by noting that a smooth function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) can be seen as an \(m\)-tuple of smooth functions \(f_{i}:\mathbb{R}^{n}\to\mathbb{R}\), i.e., Footnote 12: The exception lies with \(\mathbb{R}^{0}\). \[f=(f_{1},\cdots,f_{m}):\mathbb{R}^{n}\to\mathbb{R}^{m}\,.\] Writing \(D_{i}(f)\) for the function \(\partial f/\partial x_{i}\), we then define \(D(f):\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{m}\) by setting \[D(f)(\mathbf{v},\mathbf{w}):=\left(\sum_{i=1}^{n}D_{i}(f_{1})(\mathbf{v}) \mathbf{w}_{i},\cdots,\sum_{i=1}^{n}D_{i}(f_{m})(\mathbf{v})\mathbf{w}_{i} \right).\] In this way we find, following [13], that a morphism \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) is \(D\)-linear if and only if \(f\) is linear in the usual sense. Similarly to the case for \(A\operatorname{\textbf{-Poly}}_{D\,\mathrm{lin}}\), this gives rise to an equivalence of categories \(\mathbb{R}\operatorname{\textbf{-Smooth}}_{D\,\mathrm{lin}}\simeq\mathbb{R} \operatorname{\textbf{-Vec}}_{\mathrm{f.d.}}\) where \(\mathbb{R}\operatorname{\textbf{-Vec}}_{\mathrm{f.d.}}\) is the category of finite dimensional real vector spaces. We will proceed as in the \(A\operatorname{\textbf{-Poly}}\) case and determine the differential bundles over \(\top_{\mathrm{Ind}(\mathbb{R}\operatorname{\textbf{-Smooth}})}\). Perhaps unsurprisingly, we will show that \(\operatorname{\textbf{Diff}}(\operatorname{Ind}(\mathbb{R}\operatorname{ \textbf{-Smooth}}))\simeq\mathbb{R}\operatorname{\textbf{-Vec}}\) before making some closing remarks regarding some of the objects which we can find in \(\operatorname{Ind}(\mathbb{R}\operatorname{\textbf{-Smooth}})\). **Theorem 5.12**.: _The category of differential objects in \(\operatorname{Ind}(\mathbb{R}\operatorname{\textbf{-Smooth}})\) together with linear bundle maps between them is equivalent to \(\operatorname{Ind}(\mathbb{R}\operatorname{\textbf{-Vec}}_{\mathrm{f.d.}})\). In particular, there is an equivalence of categories between \(\operatorname{\textbf{Diff}}(\operatorname{Ind}(\mathbb{R}\operatorname{ \textbf{-Smooth}}))\) and the category \(\mathbb{R}\operatorname{\textbf{-Vec}}\) of \(\mathbb{R}\) vector spaces._ Proof.: The proof follows mutatis mutandis to that of Theorem 5.10. 
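For a smooth, non-polynomial map the displayed formula for \(D(f)\) can be checked in the same way; the sketch below (again my own illustration, not from [13]) computes \(D(f)(\mathbf{v},\mathbf{w})=\sum_{i}D_{i}(f_{j})(\mathbf{v})\mathbf{w}_{i}\) with sympy, confirms that it is the Jacobian of \(f\) applied to \(\mathbf{w}\), and shows that for a linear map the base point drops out.

```python
import sympy as sp

x = sp.symbols('x0 x1')                 # base-point coordinates on R^2
w = sp.symbols('w0 w1')                 # direction vector

f = sp.Matrix([sp.sin(x[0] * x[1]), sp.exp(x[0]) + x[1]**2])   # a smooth map R^2 -> R^2

# D(f)(v, w) = ( sum_i D_i(f_1)(v) w_i , sum_i D_i(f_2)(v) w_i )
Df = sp.Matrix([sum(sp.diff(fj, xi) * wi for xi, wi in zip(x, w)) for fj in f])

# This is exactly the Jacobian-vector product J_f(v) * w:
print(sp.simplify(Df - f.jacobian(sp.Matrix(x)) * sp.Matrix(w)))   # zero column

# For a linear map, D(g)(v, w) = g(w): the derivative no longer depends on v.
g = sp.Matrix([2*x[0] - x[1], 3*x[1]])
Dg = sp.Matrix([sum(sp.diff(gj, xi) * wi for xi, wi in zip(x, w)) for gj in g])
print(Dg.T)                             # Matrix([[2*w0 - w1, 3*w1]])
```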
We conclude by discussing some of the objects which we can find in \(\operatorname{Ind}(\mathbb{R}\operatorname{\textbf{-Smooth}})\). Let \(M\) be an arbitrary unbounded infinite-dimensional smooth manifold whose finite-dimensional subspaces are Euclidean spaces. Then \(M\) represents an object in \(\operatorname{Ind}(\mathbb{R}\operatorname{\textbf{-Smooth}})\) by taking the filtered category \(I\) to be the lattice of finite-dimensional subspaces and letting the functor \(F:I\to\mathbb{R}\operatorname{\textbf{-Smooth}}\) send each subspace to its Euclidean representative. Consequently we can find infinite-dimensional locally Euclidean manifolds in \(\operatorname{Ind}(\mathbb{R}\operatorname{\textbf{-Smooth}})\), and so we can use \(\operatorname{Ind}(\mathbb{R}\operatorname{\textbf{-Smooth}})\) as a tangent category for functional analytic settings.
2305.17802
Universal shortcuts to adiabaticity of finite-time and weak processes
The analytical expression for shortcuts to adiabaticity for any switching time and any thermally isolated system performing a finite-time and weak process is presented. It is based on the universal solution of the optimal protocols of weak processes, where the extension to adiabatic processes was made by means of the concept of waiting time. Two examples are solved to verify the validity of such shortcuts: the typical case of oscillatory relaxation function and the transverse-field quantum Ising chain. In the end, a discussion about the limitations of the applicability of these shortcuts in quantum annealing is made.
Pierre Nazé
2023-05-28T19:27:18Z
http://arxiv.org/abs/2305.17802v2
# Universal shortcuts to adiabaticity of finite-time and weak processes ###### Abstract The analytical expression for shortcuts to adiabaticity for any switching time and any thermally isolated system performing a finite-time and weak process is presented. It is based on the universal solution of the optimal protocols of weak processes, where the extension to adiabatic processes was made by means of the concept of waiting time. Two examples are solved to verify the validity of such shortcuts: the typical case of oscillatory relaxation function and the transverse-field quantum Ising chain. In the end, a discussion about the limitations of the applicability of these shortcuts in quantum annealing is made. ## I Introduction Although finding optimal protocols for the work spent in a thermodynamic process may be urgent, the desire to find situations where these protocols are the best ones is more urgent. These best protocols are called _shortcuts to adiabaticity_, where the driving achieves the minimal possible energy in a reasonable process time [1; 2]. The aim of this work is to find a regime where one can propose shortcuts to adiabaticity for a variety of systems. Previous studies show that such shortcuts to adiabaticity exist in the context of finite-time and weak drivings [3; 4]. However, the solutions proposed were restricted to continuous protocols, removing therefore a class of admissible functions where the solutions are given by distributions such as Dirac deltas and their derivatives [5]. In another study, universal solutions for the optimal protocol for isothermal and weak processes were found considering these admissible functions [6]. The question is then proposed: is it possible to extend such a procedure to adiabatic processes and find universal shortcuts to adiabaticity where these admissible functions are taken into account? The answer is positive. First, after proposing the concept of waiting time to unify the treatment of the optimal protocols for isothermal and adiabatic processes, I solved the optimal protocol problem of two important examples in order to find a structure for such a solution. The suggested universal result is proven to be a shortcut to adiabaticity for any switching time and any thermally isolated system, ever since appropriate frequencies are found. In particular, one of these examples is the transverse-field quantum Ising chain, which is a prototype of an adiabatic quantum computer[7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. My result shows therefore that we can completely suppress the errors due to excitations in non-equilibrium drivings in a quantum annealing procedure if we choose such shortcuts to adiabaticity to perform the process. A discussion however of the applicability is made in the face of the necessary criteria to hold linear-response theory [17]. ## II Preliminaries ### Excess work I start defining notations and developing the main concepts to be used in this work. Consider a quantum system with a Hamiltonian \(\mathcal{H}(\lambda(t))\), where \(\lambda(t)\) is a time-dependent external parameter. Initially, this system is in contact with a heat bath of temperature \(\beta\equiv\left(k_{B}T\right)^{-1}\), where \(k_{B}\) is Boltzmann's constant. The system is then decoupled from the heat bath and, during a switching time \(\tau\), the external parameter is changed from \(\lambda_{0}\) to \(\lambda_{0}+\delta\lambda\). 
The average work performed on the system during this process is \[W\equiv\int_{0}^{\tau}\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle\dot{\lambda}(t)dt, \tag{1}\] where \(\partial_{\lambda}\) is the partial derivative with respect to \(\lambda\) and the superscripted dot is the total time derivative. The generalized force \(\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle\) is calculated using the trace over the density matrix \(\rho(t)\) \[\left\langle A(t)\right\rangle=\operatorname{tr}\left\{A\rho(t)\right\} \tag{2}\] where \(A\) is some observable. The density matrix \(\rho(t)\) evolves according to the Liouville equation \[\dot{\rho}=\mathcal{L}\rho:=-\frac{1}{i\hbar}[\rho,\mathcal{H}], \tag{3}\] where \(\mathcal{L}\) is the Liouville operator, \([\cdot,\cdot]\) is the commutator and \(\rho(0)=\rho_{c}\) is the initial canonical density matrix. Consider also that the external parameter can be expressed as \[\lambda(t)=\lambda_{0}+g(t)\delta\lambda, \tag{4}\] where, to satisfy the end values of the external parameter, the protocol \(g(t)\) must satisfy the boundary conditions \[g(0)=0,\quad g(\tau)=1. \tag{5}\] Linear-response theory aims to express the average of an observable up to first order in the perturbation, taking into account how the perturbation affects both the observable and the non-equilibrium density matrix [18]. In our case, we consider that the parameter does not change considerably during the process, \(|g(t)\delta\lambda/\lambda_{0}|\ll 1\), for all \(t\in[0,\tau]\). Using the framework of linear-response theory [18], the generalized force \(\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle\) can be approximated to first order as \[\begin{split}\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle&=\left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}+\delta\lambda\left\langle\partial_{\lambda\lambda}^{2}\mathcal{H}\right\rangle_{0}g(t)\\ &\quad-\delta\lambda\int_{0}^{t}\phi_{0}(t-t^{\prime})g(t^{\prime})dt^{\prime},\end{split} \tag{6}\] where \(\left\langle\cdot\right\rangle_{0}\) is the average over the initial canonical density matrix. The quantity \(\phi_{0}(t)\) is the so-called response function [18], which can be conveniently expressed as the derivative of the relaxation function \(\Psi_{0}(t)\)[18] \[\phi_{0}(t)=-\frac{d\Psi_{0}}{dt}, \tag{7}\] where \[\Psi_{0}(t)=\beta\langle\partial_{\lambda}\mathcal{H}(t)\partial_{\lambda}\mathcal{H}(0)\rangle_{0}+\mathcal{C}, \tag{8}\] with the constant \(\mathcal{C}\) calculated via the final value theorem [18]. In this manner, the generalized force, written in terms of the relaxation function, is \[\begin{split}\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle&=\left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}-\delta\lambda\widetilde{\Psi}_{0}g(t)\\ &\quad+\delta\lambda\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{g}(t^{\prime})dt^{\prime},\end{split} \tag{9}\] where \(\widetilde{\Psi}_{0}\equiv\Psi_{0}(0)-\left\langle\partial_{\lambda\lambda}^{2}\mathcal{H}\right\rangle_{0}\). Combining Eqs. 
(1) and (9), the average work performed, with the generalized force evaluated at linear response, is \[\begin{split} W=&\,\delta\lambda\left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}-\frac{\delta\lambda^{2}}{2}\widetilde{\Psi}_{0}\\ &+\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt.\end{split} \tag{10}\] We remark that in thermally isolated systems, the work is separated into two contributions: the quasistatic work \(W_{\text{qs}}\) and the excess work \(W_{\text{ex}}\). We observe that only the double integral in Eq. (10) has "memory" of the trajectory of \(\lambda(t)\). Therefore the other terms are part of the contribution of the quasistatic work. Thus, we can split them as \[W_{\text{qs}}=\delta\lambda\left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}-\frac{\delta\lambda^{2}}{2}\widetilde{\Psi}_{0}, \tag{11}\] \[W_{\text{ex}}=\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt. \tag{12}\] In particular, the excess work can be rewritten using the symmetry property of the relaxation function, \(\Psi(t)=\Psi(-t)\) (see Ref. [18]), \[W_{\text{ex}}=\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\int_{0}^{\tau}\Psi_{0}(t-t^{\prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt. \tag{13}\] We remark that such treatment can be applied to classical systems, by changing the operators to functions and the commutator to the Poisson bracket [18]. ### Optimal excess work Consider the excess work rewritten in terms of the protocol \(g(t)\) instead of its derivative \[W_{\text{ex}}= \frac{\delta\lambda^{2}}{2}\Psi(0)+\delta\lambda^{2}\int_{0}^{\tau}\dot{\Psi}_{0}(\tau-t)g(t)dt \tag{14}\] \[-\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\int_{0}^{\tau}\ddot{\Psi}(t-t^{\prime})g(t)g(t^{\prime})dtdt^{\prime}. \tag{15}\] Using calculus of variations, we can derive the Euler-Lagrange equation that furnishes the optimal protocol \(g^{*}(t)\) of the system that will minimize the excess work \[\int_{0}^{\tau}\ddot{\Psi}_{0}(t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}=\dot{\Psi}_{0}(\tau-t). \tag{16}\] In particular, the optimal excess work will be [19] \[W_{\text{ex}}^{*}=\frac{\delta\lambda^{2}}{2}\Psi(0)+\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\dot{\Psi}_{0}(\tau-t)g^{*}(t)dt. \tag{17}\] ### Universal optimal protocol To derive the universal solution of Eq. (16), I will basically use the symmetric property of the optimal protocol [19] \[g^{*}(t)=1-g^{*}(\tau-t). \tag{18}\] First, I split the left-hand side of Eq. (16) into the appropriate integrals \[\int_{0}^{t}\ddot{\Psi}(t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}+\int_{t}^{\tau}\ddot{\Psi}(t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}=\dot{\Psi}(\tau-t) \tag{19}\] Using the symmetric property (18) in the second term on the left-hand side of Eq. (19), one can show that \[\int_{0}^{t}\ddot{\Psi}(t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}=\int_{0}^{\tau-t}\ddot{\Psi}(\tau-t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}. \tag{20}\] Therefore, the left-hand side of Eq. (20), viewed as a function of \(t\), must be equal to a symmetric function \(h(t)\), such that \(h(t)=h(\tau-t)\). Using the solution of the Euler-Lagrange equation, the above result is equal to \[\int_{0}^{t}\ddot{\Psi}(t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}=\int_{0}^{t}\ddot{\Psi}(t-t^{\prime})(1-g^{*}(\tau-t^{\prime}))dt^{\prime}, \tag{21}\] which is another way to express the symmetry of the optimal protocol. 
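Before constructing the universal solution, it is instructive to evaluate Eq. (13) directly in a simple case. The sympy sketch below (my own check, anticipating the oscillatory relaxation function of Sec. IV) takes \(\Psi_{0}(t)=\cos\omega t\) and the plain linear ramp \(g(t)=t/\tau\); the resulting excess work is generically nonzero, which is precisely what the singular terms of the optimal protocol will later remove.

```python
import sympy as sp

t, tp, tau, omega, dl = sp.symbols('t tp tau omega deltalambda', positive=True)

Psi = sp.cos(omega * (t - tp))     # relaxation function Psi_0(t - t')
gdot = 1 / tau                     # linear ramp g(t) = t/tau, so gdot(t) = 1/tau

# Excess work, Eq. (13)
Wex = sp.Rational(1, 2) * dl**2 * sp.integrate(
    sp.integrate(Psi * gdot * gdot, (tp, 0, tau)), (t, 0, tau))
print(sp.simplify(Wex))
# expected: deltalambda**2*(1 - cos(omega*tau))/(omega**2*tau**2), i.e.
# 2*deltalambda**2*sin(omega*tau/2)**2/(omega*tau)**2 -- it vanishes only at the
# special switching times omega*tau = 2*pi*k.
```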
Consider, for simplicity, a function \(g_{0}(t)\) such that \[\int_{0}^{t}\ddot{\Psi}(t-t^{\prime})g_{0}(t^{\prime})dt^{\prime}=C_{0}, \tag{22}\] where \(C_{0}\) is a constant in time. Applying the convolution theorem we have \[g_{0}(t)=\mathcal{L}_{s}^{-1}\left\{\frac{C_{0}}{s\mathcal{L}_{t}\{\dot{\Psi}(t)\}(s)}\right\}(t), \tag{23}\] where \(\mathcal{L}_{t}\{\cdot\}\) and \(\mathcal{L}_{s}^{-1}\{\cdot\}\) are respectively the Laplace and inverse Laplace transform. Assuming that \(\mathcal{L}_{t}\{\Psi(t)\}(s)\) can be expressed as a Taylor series around \(s=0\), we have, by applying the Horner-Ruffini method, \[\frac{1}{s\mathcal{L}_{t}\{\dot{\Psi}(t)\}(s)}=\frac{1}{s^{2}\mathcal{L}_{t}\{\Psi(t)\}(s)-s\Psi(0)}=\sum_{n=-2}^{\infty}a_{n}s^{n}. \tag{24}\] We have then as solution \[g_{0}(t)=C_{0}\left(a_{-2}t+a_{-1}+\sum_{n=0}^{\infty}a_{n}\delta^{(n)}(t)\right). \tag{25}\] We demand that the constant \(C_{0}\) be equal to the number for which the time-reversal symmetry holds, \[g_{0}(t)-C_{0}\sum_{n=0}^{\infty}a_{n}\delta^{(n)}(\tau-t)=1-g_{0}(\tau-t)+C_{0}\sum_{n=0}^{\infty}a_{n}\delta^{(n)}(t). \tag{26}\] Since the Dirac deltas and their derivatives will cancel out, the constant will be \[C_{0}=\frac{1}{a_{-2}\tau+2a_{-1}}. \tag{27}\] In this manner, the optimal protocol, by construction, will be \[g^{*}(t)=\frac{1}{2}\left(g_{0}(t)+1-g_{0}(\tau-t)+\right. \tag{28}\] \[\left.C_{0}\sum_{n=0}^{\infty}a_{n}(\delta^{(n)}(t)-\delta^{(n)}(\tau-t))\right), \tag{29}\] in which, by substituting Eqs. (25) and (27), we have \[g^{*}(t)=-\frac{t-a_{-1}}{a_{-2}\tau+2a_{-1}}-\sum_{n=0}^{\infty}\frac{a_{n}(\delta^{(n)}(t)-\delta^{(n)}(\tau-t))}{a_{-2}\tau+2a_{-1}}, \tag{30}\] which is the universal optimal protocol. In particular, the first coefficients, when calculated, are \[a_{-2}=-1,\quad a_{-1}=-\mathcal{L}_{t}\left\{\frac{\Psi(t)}{\Psi(0)}\right\}(0). \tag{31}\] Also, to preserve the time-reversal symmetry at the points \(t=0\) and \(t=\tau\), one must define \(\delta^{(n)}(0)=\delta^{(n)}(\tau)=0\), for all \(n\in\mathbb{N}\). ## III Universal shortcut to adiabaticity To find a universal shortcut to adiabaticity, consider that the excess work should be null. Therefore, it must hold that \[\int_{0}^{\tau}\dot{\Psi}(\tau-t)g^{*}(t)dt=-\Psi(0). \tag{32}\] Applying the Laplace transform in \(\tau\), the optimal protocol should satisfy \[\mathcal{L}_{\tau}\{g^{*}(\tau)\}(s)=-\frac{1}{s\mathcal{L}_{\tau}\{\dot{\Psi}(\tau)\}(s)}, \tag{33}\] where we took \(\Psi(0)=1\) without loss of generality. Observe that this expression can be rewritten in terms of the function \(g_{0}\), since \(g^{*}\) was constructed from it by symmetry, \[\mathcal{L}_{\tau}\{g(\tau)\}(s)=-\frac{s\mathcal{L}_{\tau}\{g_{0}(\tau)\}(s)}{C_{0}}, \tag{34}\] or still \[g(\tau)=-\frac{\dot{g}_{0}(\tau)}{C_{0}}. \tag{35}\] Using the expressions for \(g(t)\) and \(g_{0}(t)\) evaluated at \(\tau\) in Eq. (35), we have \[\frac{a_{-2}\tau+a_{-1}}{a_{-2}\tau+2a_{-1}}=-a_{-2}. \tag{36}\] Since \(a_{-2}=-1\), we must have \[a_{-1}=0. \tag{37}\] Therefore, whenever the system has \(a_{-1}=0\), the optimal protocol is a shortcut to adiabaticity. Observe that the examples calculated in Ref. [6] have \(a_{-1}=-\tau_{R}\neq 0\), which implies that such systems do not present a shortcut to adiabaticity. ## IV Examples ### Cosine relaxation function First, I solve the optimal protocol problem for a typical relaxation function of thermally isolated systems performing an adiabatic process [20] \[\Psi(t)=\Psi_{0}\cos\omega t. 
\tag{38}\] Applying the method, the coefficients \(a_{n}\) are \[a_{-2}=-1,\quad a_{-1}=0,\quad a_{0}=1/\omega^{2} \tag{39}\] and \[a_{n}=0, \tag{40}\] with \(n\geq 1\). The optimal protocol will be \[g^{*}(t)=\frac{t}{\tau}+\frac{\delta(t)-\delta(\tau-t)}{\tau\omega^{2}}. \tag{41}\] As predicted, the optimal protocol is a shortcut to adiabaticity for any switching time. Indeed, calculating the excess work (12), one has \[W_{\rm ex}(\tau)=0. \tag{42}\] Interestingly, a comparison with the results presented in Ref. [6] suggests an interpretation of the relaxation time of thermally isolated systems in this case. Indeed, one can assume by comparison that in this case \[\tau_{R}=0. \tag{43}\] But what does that mean, and how can one bring it into line with the concept of relaxation time in an isothermal process? First of all, calling such a quantity a "relaxation time" for thermally isolated systems is technically wrong, since the relaxation function does not decorrelate, as it does for systems performing isothermal processes in contact with the heat bath. This new timescale must have an interpretation that coincides with the relaxation time for isothermal processes and must be zero for thermally isolated systems. I propose then the following interpretation: this new timescale, which I will call "waiting time", is nothing more than the average time necessary for the system to achieve its final state if the process is stopped along the way. In this manner, systems performing isothermal processes will equilibrate with the heat bath, having therefore a positive waiting time, while the ones performing adiabatic processes are already in their final states once their processes are stopped, meaning that their waiting time is null. In particular, for isothermal processes, the waiting time will coincide with the relaxation time. This interpretation is also consistent with the competition of timescales in a non-equilibrium process, where the whole process is characterized by the ratio between the switching time, which brings the system out of equilibrium, and the waiting time, which leads the system to its final state. At this point, in order to unify this interpretation for both cases, I propose the following mathematical definition for the waiting time \[\tau_{w}=\mathcal{L}_{t}\left\{\frac{\Psi(t)}{\Psi(0)}\right\}(0), \tag{44}\] whose derivation comes from the explicit calculation of the term \(-a_{-1}\). The universal solution, valid for isothermal and adiabatic processes, becomes \[g^{*}(t)=\frac{t-\tau_{w}}{\tau+2\tau_{w}}+\sum_{n=0}^{\infty}\frac{a_{n}(\delta^{(n)}(t)-\delta^{(n)}(\tau-t))}{\tau+2\tau_{w}}. \tag{45}\] Since it is expected that \(\tau_{w}=0\) for any thermally isolated system, their optimal protocols are shortcuts to adiabaticity. ### Transverse-field quantum Ising chain The relaxation function of the transverse-field quantum Ising chain, taken initially at equilibrium with \(T=0\) and periodic boundary conditions, is [17] \[\Psi_{N}(t)=\frac{16}{N}\sum_{n=1}^{N/2}\frac{J^{2}}{\epsilon^{3}(n)}\sin^{2}\left(\left(\frac{2n-1}{N}\right)\pi\right)\cos\left(\frac{2\epsilon(n)}{\hbar}t\right), \tag{46}\] where \[\epsilon(n)=2\sqrt{J^{2}+\Gamma_{0}^{2}-2J\Gamma_{0}\cos\left(\left(\frac{2n-1}{N}\right)\pi\right)}, \tag{47}\] where \(J\) is the coupling energy of the system, \(\Gamma_{0}\) the initial magnetic field, and \(N\) is an even number of spins. Calculating the coefficients \(a_{n}\), we have \[a_{-2}=-1,\quad a_{-1}=0,\quad a_{2n}\neq 0,\quad a_{2n+1}=0, \tag{48}\] with \(n\geq 0\). 
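Since \(\Psi_{N}(t)\) is a finite sum of cosines, the mechanism behind these coefficients can already be verified symbolically on a single frequency. The sketch below (my own check, with the endpoint Dirac deltas of Eq. (41) handled analytically as boundary evaluations taken with full weight) confirms the shortcut condition (32) for \(\Psi(t)=\cos\omega t\).

```python
import sympy as sp

t, tau, omega = sp.symbols('t tau omega', positive=True)

Psi = sp.cos(omega * t)            # normalized relaxation function, Psi(0) = 1
Psidot = sp.diff(Psi, t)

# Continuous part of Eq. (41): g*(t) = t/tau
I_ramp = sp.integrate(Psidot.subs(t, tau - t) * (t / tau), (t, 0, tau))

# The singular part (delta(t) - delta(tau - t))/(tau*omega**2) samples
# Psidot(tau - t) at t = 0 and at t = tau:
I_delta = (Psidot.subs(t, tau) - Psidot.subs(t, 0)) / (tau * omega**2)

# Shortcut condition, Eq. (32): the total must equal -Psi(0) = -1
print(sp.simplify(I_ramp + I_delta))   # -1
```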
Since \(\tau_{w}=0\), the shortcut to adiabaticity is \[g^{*}(t)=\frac{t}{\tau}+\sum_{n=0}^{\infty}\frac{a_{2n}(\delta^{(2n)}(t)-\delta^{(2n)}(\tau-t))}{\tau}. \tag{49}\] To verify that it is indeed a shortcut, we calculate the excess work for a driving with the following parameters: \(\hbar=1,J=1,\Gamma_{0}=0.95,\delta\Gamma=0.1,N=10\). In order to avoid the calculation of an extensive series, we use the fact that the solution of the optimal protocol problem is global [21], so any solution found by alternative methods is an optimal protocol. We proceed as follows: choosing a number of Dirac deltas and their derivatives equal to \(N/2\), I leave the coefficients \(a_{n}\) as free parameters. I evaluate the Euler-Lagrange equation and construct a linear system in those coefficients whose solution nullifies the result. Since the number of frequencies in the relaxation function is \(N/2\), the system has a solution. After solving it, I find positive "frequencies" \(\Omega_{n}\), one for each derivative of the Dirac delta. The excess work calculated for this solution is depicted in Fig. 1; indeed, as predicted, we are able to completely suppress any excitation of the non-equilibrium driving. Figure 1: Optimal excess work calculated for the transverse-field quantum Ising chain. Indeed, it is a shortcut to adiabaticity for any switching time. The parameters used were \(\hbar=1,J=1,\Gamma_{0}=0.95,\delta\Gamma=0.1,N=10\). ## V Discussion ### Continuous part For all examples treated here, the continuous part of the shortcut to adiabaticity \(g_{C}^{*}(t)\) was given by \[g_{C}^{*}(t)=\frac{t}{\tau}. \tag{50}\] Such a solution is consistent with previous results for the extreme case where \(\tau\gg\tau_{w}\) [19]. Also, this continuous part is bounded, \[0\leq g_{C}^{*}(t)\leq 1, \tag{51}\] for all \(t\in[0,\tau]\) and \(0\leq\tau/\tau_{w}\leq\infty\), meaning that it can be used in linear-response theory. Therefore, the shortcut is physically consistent. ### Singular part For all examples treated here, the singular part of the optimal protocol \(g_{S}^{*}(t)\) was given by \[g_{S}^{*}(t)=\sum_{n=1}^{N/2}\frac{\delta^{(2(n-1))}(t)-\delta^{(2(n-1))}(\tau-t)}{\tau\Omega_{n}^{2n}}, \tag{52}\] where the \(\Omega_{n}\) are positive numbers, independent of \(\tau\) and related to the parameters of the system. Therefore, in the extreme case where \(\tau\gg\tau_{w}\), we have \[\lim_{\tau\gg\tau_{w}}g_{S}^{*}(t)=0, \tag{53}\] which again corresponds to the prediction for that case [19]. ### Quantum annealing In particular, the transverse-field quantum Ising chain has direct applicability to quantum annealing, since this system is a basic prototype of an adiabatic quantum computer. In this manner, for any switching time, one can find a solution that spends the minimal possible energy, avoiding errors that come from the excitability of non-equilibrium drivings. However, such a result is possible only in the regime where linear-response theory holds, which implies weak drivings of the magnetic field and a small number of spins [17]. Effects that appear when the Kibble-Zurek mechanism starts to manifest, with its diverging timescale, may break down such a result. Another possible challenge is the implementation of Dirac deltas in the protocols, which, as far as I know, has not been achieved in experiments [6].
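As a minimal consistency check on the structure just described (an illustration added here, not a calculation from the original work), the single-mode cosine case can be verified symbolically: the protocol (41) satisfies the zero-excess-work condition (32), assuming the boundary Dirac deltas are integrated with full weight inside \([0,\tau]\).

```python
# Minimal sketch (illustration only): check that g*(t) of Eq. (41) satisfies the
# zero-excess-work condition (32) for Psi(t) = cos(omega t) with Psi(0) = 1:
#   int_0^tau dPsi/dt(tau - t) g*(t) dt = -Psi(0).
# Assumption: delta(t) and delta(tau - t) contribute with full weight in [0, tau].
import sympy as sp

t, tau, omega = sp.symbols("t tau omega", positive=True)
Psi = sp.cos(omega * t)
dPsi = sp.diff(Psi, t)

# Contribution of the continuous part t/tau.
continuous = sp.integrate(dPsi.subs(t, tau - t) * (t / tau), (t, 0, tau))
# Contribution of the singular part (delta(t) - delta(tau - t))/(tau*omega**2):
# delta(t) picks out dPsi(tau), and delta(tau - t) picks out dPsi(0) = 0.
singular = (dPsi.subs(t, tau) - dPsi.subs(t, 0)) / (tau * omega**2)

print(sp.simplify(continuous + singular))   # -1, i.e. -Psi(0)
```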
## VI Final remarks In this work, I presented a universal solution for shortcuts to adiabaticity for any thermally isolated system performing a weak process. It relies only on the structure of the universal solution of the optimal protocol for the excess work. Two examples are solved in order to exhibit this structure: the cosine relaxation function, which encompasses a variety of systems, and the transverse-field quantum Ising chain, which is a prototype of an adiabatic quantum computer. The concept of waiting time is introduced in order to unify the description of the universal solution for isothermal and adiabatic processes. Also, although the result suggests that errors coming from the excitability of non-equilibrium drivings can be completely suppressed, this holds only in regimes where linear-response theory is valid, that is, for weak drivings of the magnetic field and a small number of particles. When the Kibble-Zurek mechanism starts to appear, its diverging timescale may break down this result.
2307.01819
On the weight zero compactly supported cohomology of $\mathcal{H}_{g, n}$
For $g\ge 2$ and $n\ge 0$, let $\mathcal{H}_{g,n}\subset \mathcal{M}_{g,n}$ denote the complex moduli stack of $n$-marked smooth hyperelliptic curves of genus $g$. A normal crossings compactification of this space is provided by the theory of pointed admissible $\mathbb{Z}/2\mathbb{Z}$-covers. We explicitly determine the resulting dual complex, and we use this to define a graph complex which computes the weight zero compactly supported cohomology of $\mathcal{H}_{g, n}$. Using this graph complex, we give a sum-over-graphs formula for the $S_n$-equivariant weight zero compactly supported Euler characteristic of $\mathcal{H}_{g, n}$. This formula allows for the computer-aided calculation, for each $g\le 7$, of the generating function $\mathsf{h}_g$ for these equivariant Euler characteristics for all $n$. More generally, we determine the dual complex of the boundary in any moduli space of pointed admissible $G$-covers of genus zero curves, when $G$ is abelian, as a symmetric $\Delta$-complex. We use these complexes to generalize our formula for $\mathsf{h}_g$ to moduli spaces of $n$-pointed smooth abelian covers of genus zero curves.
Madeline Brandt, Melody Chan, Siddarth Kannan
2023-07-04T16:49:45Z
http://arxiv.org/abs/2307.01819v1
# On the weight zero compactly supported cohomology of \(\mathcal{H}_{g,n}\) ###### Abstract. For \(g\geq 2\) and \(n\geq 0\), let \(\mathcal{H}_{g,n}\subset\mathcal{M}_{g,n}\) denote the complex moduli stack of \(n\)-marked smooth hyperelliptic curves of genus \(g\). A normal crossings compactification of this space is provided by the theory of pointed admissible \(\mathbb{Z}/2\mathbb{Z}\)-covers. We explicitly determine the resulting dual complex, and we use this to define a graph complex which computes the weight zero compactly supported cohomology of \(\mathcal{H}_{g,n}\). Using this graph complex, we give a sum-over-graphs formula for the \(S_{n}\)-equivariant weight zero compactly supported Euler characteristic of \(\mathcal{H}_{g,n}\). This formula allows for the computer-aided calculation, for each \(g\leq 7\), of the generating function \(\mathsf{h}_{g}\) for these equivariant Euler characteristics for all \(n\). More generally, we determine the dual complex of the boundary in any moduli space of pointed admissible \(G\)-covers of genus zero curves, when \(G\) is abelian, as a symmetric \(\Delta\)-complex. We use these complexes to generalize our formula for \(\mathsf{h}_{g}\) to moduli spaces of \(n\)-pointed smooth abelian covers of genus zero curves. ## 1. Introduction For integers \(g\geq 2\) and \(n\geq 0\), let \(\mathcal{H}_{g,n}\subset\mathcal{M}_{g,n}\) denote the complex moduli stack of \(n\)-marked smooth hyperelliptic curves of genus \(g\). This space is a smooth Deligne-Mumford stack of dimension \(2g+n-1\). The group \(S_{n}\) acts on \(\mathcal{H}_{g,n}\) by permuting the marked points, and the rational cohomology groups with compact support \(H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\) are \(S_{n}\)-representations in the category of mixed Hodge structures over \(\mathbb{Q}\). In particular, each cohomology group \(H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\) carries a weight filtration \[W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\subset W_{1}H^{i}_{c}(\mathcal{H }_{g,n};\mathbb{Q})\cdots\subset W_{4g+2n-2}H^{i}_{c}(\mathcal{H}_{g,n}; \mathbb{Q})=H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q}),\] which is preserved by the \(S_{n}\)-action. In this paper, we study the \(S_{n}\)-representation defined by the weight zero piece of this filtration. When \(X\) is a smooth and separated variety or Deligne-Mumford stack, Deligne's weight spectral sequence [11, SS3.2] computes the associated graded pieces of the weight filtration on the compactly supported cohomology of \(X\). It identifies the weight zero piece with the reduced cohomology of the dual complex of any normal crossings compactification of \(X\). We will furnish a normal crossings compactification of \(\mathcal{H}_{g,n}\) using the theory of pointed admissible \(\mathbb{Z}/2\mathbb{Z}\)-covers, as developed by Abramovich-Vistoli [1], Abramovich-Corti-Vistoli [1], and Jarvis-Kaufmann-Kimura [21], following Harris-Mumford's original theory [12]. Denoting the dual complex of the resulting boundary divisor by \(\Theta_{g,n}\), we then study the weight zero compactly supported cohomology of \(\mathcal{H}_{g,n}\) via the identification \[W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\cong\tilde{H}^{i-1}(\Theta_{g,n} ;\mathbb{Q}) \tag{1}\] mentioned above, where \(\tilde{H}^{*}\) denotes reduced cohomology. Along the way, we also explicitly determine the dual complex of the boundary in any space of pointed admissible \(G\)-covers of genus zero curves, for abelian groups \(G\) (Theorem 3.5). 
Our main result concerns the \(S_{n}\)-equivariant weight zero compactly supported Euler characteristic \[\chi^{\mathcal{S}_{n}}(W_{0}H^{*}_{c}(\mathcal{H}_{g,n};\mathbb{Q})):=\sum_{i=0}^{4g+2n-2}(-1)^{i}\operatorname{ch}_{n}\left(W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\right)\in\Lambda,\] where \(\Lambda\) is the ring of symmetric functions and \(\operatorname{ch}_{n}\) denotes the Frobenius characteristic. Our formula for this Euler characteristic is closely related to a result of Gorsky [13, Theorem 2.5] concerning complex quasi-projective varieties \(X\) with an action of a finite group; our specific formulation is a new contribution. In particular, it does not appear in the work of Chan-Faber-Galatius-Payne on the top weight cohomology of \(\mathcal{M}_{g,n}\), where an alternate argument, which is less geometric, is used [12]. Now let us turn our attention to individual cohomology groups, rather than Euler characteristics. First, for \(n=0,1,2\), and \(3\), the cohomology of \(\mathcal{H}_{g,n}\) was completely computed by Tommasi [14]; see Section 1.2. The consequences of these computations for the weight zero part of cohomology with compact supports can be interpreted via our work as statements about chain complexes of graph-theoretic admissible covers. In Section 5, we prove some of these statements, using the acyclicity results mentioned above. In particular, we deduce the following facts, first proved by Tommasi: **Proposition B**.: For all \(g\geq 2\), we have 1. \(W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})=0\) for all \(i\), when \(n\leq 1\); 2. When \(n=2\), we have \[W_{0}H^{2g+1}_{c}(\mathcal{H}_{g,2};\mathbb{Q})\cong\mathbb{Q}.\] As an \(S_{2}\)-representation, we have \[W_{0}H^{2g+1}_{c}(\mathcal{H}_{g,2};\mathbb{Q})\cong\begin{cases}\text{triv}&\text{ if $g$ is even}\\ \text{sgn}&\text{ if $g$ is odd}.\end{cases}\] Part (1) of Proposition B is established via a spectral sequence argument, similar to the ones we use for acyclicity of other subcomplexes of \(\Theta_{g,n}\). For part (2), we write down an explicit cellular cycle on \(\Theta_{g,2}\) corresponding to the nonzero class in \(W_{0}H^{2g+1}(\mathcal{H}_{g,2};\mathbb{Q})\): see Figure 11 in Section 5. Tommasi shows additionally that \(W_{0}H^{i}_{c}(\mathcal{H}_{g,2};\mathbb{Q})=0\) for \(i\neq 2g+1\), but we do not see how to prove this directly using our graph complex, nor have we investigated whether we can use our methods to re-deduce \(W_{0}H^{*}_{c}(\mathcal{H}_{g,3};\mathbb{Q})\) for all \(g\). ### The support of \(W_{0}H^{*}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\) It is worth noting that the weight zero compactly supported cohomology of \(\mathcal{H}_{g,n}\) is supported in at most two degrees. Precisely, \[W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})=0\quad\text{ unless }\quad i=2g-2+n\text{ or }i=2g-1+n. \tag{2}\] We now explain the claim (2), which follows from an argument we learned from D. Petersen. To sidestep stack-theoretic issues, let us momentarily replace \(\mathcal{H}_{g,n}\) by its coarse moduli space \(H_{g,n}\); this is inconsequential on the level of rational cohomology. It is well-known that \(H_{g}\) is affine, as it can be identified with the quotient \(\mathcal{M}_{0,2g+2}/S_{2g+2}\). In general, \(H_{g,n}\) is not far from affine: as explained by D. Petersen in a MathOverflow post [14], the _affine stratification number_ [15] of \(H_{g,n}\) is \(1\) for all \(n>0\).
By [15, Corollary 4.19(c)], we may conclude that \[H^{i}(\mathcal{H}_{g,n};\mathbb{Q})=0\text{ for }i>2g+n,\quad\text{and}\quad H ^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})=0\text{ for }i<2g-2+n,\] the latter by Poincare duality. As the dual complex \(\Theta_{g,n}\) of the normal crossings compactification of \(\mathcal{H}_{g,n}\) by pointed admissible \(\mathbb{Z}/2\mathbb{Z}\)-covers is a generalized cell complex of dimension \(2g-2+n\) (Section 3), the claim (2) follows immediately from (1). Thus, our formula for \(\mathsf{h}_{g}\) is a formula for the _difference_ of the two \(S_{n}\)-representations in (2) and can be used to bound the multiplicities of Specht modules appearing in them individually. We have not investigated whether \(\mathsf{h}_{g}\) is in fact a cancellation-free formula for this difference. ### Related work on the cohomology of \(\mathcal{H}_{g,n}\) Recently, there have been a number of significant advances on the geometry of moduli spaces of pointed hyperelliptic curves. Canning-Larson study the rational Chow ring of \(\mathcal{H}_{g,n}\), in particular determining it completely for \(n\leq 2g+6\) [CL]. Their results also have implications for rationality of \(\mathcal{H}_{g,n}\). More generally, there has been progress on understanding the birational geometry of \(\mathcal{H}_{g,n}\); see, for example, the overview and references in that paper. In another direction, Bergstrom-Diaconu-Petersen-Westerland [1] compute the stable homology of braid groups with coefficients in (any Schur functor applied to) the Burau representation. These results have implications for the stable homology of moduli spaces of hyperelliptic curves with twisted coefficients. They can also be related to the Serre spectral sequence on rational cohomology for the fiber bundle \(\operatorname{Conf}_{n}(S_{g})\to\mathcal{H}_{g,n}\to\mathcal{H}_{g}\), as C. Westerland has explained to us. Our focus here is the cohomology groups of \(\mathcal{H}_{g,n}\) with (untwisted) \(\mathbb{Q}\)-coefficients, and specifically the weight zero compactly supported cohomology groups. The topological Euler characteristic of \(\mathcal{H}_{g,n}\) has been computed by Bini [1], but his techniques are not compatible with the weight filtration. Gorsky [1] calculates the equivariant Euler characteristic \[\chi^{S_{n}}(\mathcal{H}_{g,n}):=\sum_{i=0}^{4g+2n-2}(-1)^{i}\operatorname{ch }_{n}(H^{i}(\mathcal{H}_{g,n};\mathbb{Q})),\] by fibering \(\mathcal{H}_{g,n}\) over \(\mathcal{H}_{g}\). The fiber of this morphism over a point of \(\mathcal{H}_{g}\) representing a curve \(C\) is equal to \(\operatorname{Conf}_{n}(C)/\mathrm{Aut}(C)\). Gorsky proceeds by stratifying \(\mathcal{H}_{g}\) by the \(S_{n}\)-equivariant Euler characteristic of the fibers, and then calculating the Euler characteristic of each stratum. Our techniques are similar in spirit to Gorsky's. The \(S_{n}\)-equivariant weight zero compactly supported Euler characteristic of \(\mathcal{H}_{g,n}\) is equal to \(h_{n}-\chi^{S_{n}}(\Theta_{g,n})\), where \(h_{n}\in\Lambda\) is the \(n\)th homogeneous symmetric function. As explained above, we first remove an acyclic locus from \(\Theta_{g,n}\), and then stratify the remaining space in terms of configuration spaces of graphs, summing up these contributions to give our formula (Section 6). 
### Relation to point-counting For higher \(n\), Bergstrom [1] studies the cohomology of \(\mathcal{H}_{g,n}\) via point-counting: for all \(g\geq 2\), he gives an algorithm to determine the count of \(\mathbb{F}_{q}\)-points of \(\mathcal{H}_{g,n}\) for \(n\leq 7\) and for all prime powers \(q\). Together with the results of [1], Bergstrom's work implies that for odd \(q\), the number of \(\mathbb{F}_{q}\)-points of \(\mathcal{H}_{g,n}\) agrees with a polynomial \(P_{g,n}(q)\) for \(n\leq 9\) (there is a different polynomial for even \(q\)). By [10, Theorem 6.1.2(3)], we have an equality \[P_{g,n}(q)=\sum_{j=0}^{2g+n-1}\chi_{c}^{2j}(\mathcal{H}_{g,n})q^{j},\] where \[\chi_{c}^{k}(\mathcal{H}_{g,n}):=\sum_{i=0}^{4g+2n-2}(-1)^{i}\dim_{\mathbb{Q} }\operatorname{Gr}_{k}^{W}H_{c}^{i}(\mathcal{H}_{g,n};\mathbb{Q}),\] and \[\operatorname{Gr}_{k}^{W}H_{c}^{i}(\mathcal{H}_{g,n};\mathbb{Q}):=W_{k}H_{c}^ {i}(\mathcal{H}_{g,n};\mathbb{Q})/W_{k-1}H_{c}^{i}(\mathcal{H}_{g,n};\mathbb{ Q})\] is the \(k\)th associated graded piece of the weight filtration. In particular, the constant term of \(P_{g,n}(q)\) is equal to the weight zero compactly supported Euler characteristic. Bergstrom's original work [1] is \(S_{n}\)-equivariant, and we have confirmed that our data agrees with his for \(n\leq 7\). He has explained to us that [1, Theorem 5.2] and [1] imply that for each \(n\leq 9\), there exists a polynomial \(F_{n}(t)\in\mathbb{Q}[t]\), with degree bounded by \(n-2\) if \(n\) is even and \(n-3\) if \(n\) is odd, such that \[\chi_{c}^{0}(\mathcal{H}_{g,n})=F_{n}(g)\] for all \(g\). With these bounds on the degrees, our formula allows us to compute this polynomial for all \(n\leq 9\), using the data in Table 3. The polynomials \(F_{n}(t)\) can certainly be calculated from Bergstrom's work, but did not explicitly appear there, so we record them below. In each case, the degree of \(F_{n}(t)\) attains the communicated bound. **Proposition C**.: We have \(\chi^{0}_{c}(\mathcal{H}_{g,n})=0\) for \(n\in\{0,1,3\}\), while \(\chi^{0}_{c}(\mathcal{H}_{g,2})=-1\). For \(4\leq n\leq 9\), we have the following: \[\chi^{0}_{c}(\mathcal{H}_{g,4}) =g(1-g)\] \[\chi^{0}_{c}(\mathcal{H}_{g,5}) =5g(-1+g)\] \[\chi^{0}_{c}(\mathcal{H}_{g,6}) =\tfrac{1}{8}g(198-203g+18g^{2}-13g^{3})\] \[\chi^{0}_{c}(\mathcal{H}_{g,7}) =\tfrac{7}{4}g(-78+83g-18g^{2}+13g^{3})\] \[\chi^{0}_{c}(\mathcal{H}_{g,8}) =\tfrac{1}{4}g(3420-3784g+1355g^{2}-1005g^{3}+25g^{4}-11g^{5})\] \[\chi^{0}_{c}(\mathcal{H}_{g,9}) =\tfrac{9}{4}g(-2700+3092g-1545g^{2}+1195g^{3}-75g^{4}+33g^{5}).\] ### Relation to previous work on \(\mathcal{M}_{g,n}\) Our calculations are a new step in understanding weight zero compactly supported rational cohomology of moduli spaces via combinatorics of normal crossings compactifications [1, 1, 2, 3, 4]. In our calculation of \(\mathsf{h}_{g}\), we proceed in a similar fashion to Chan-Faber-Galatius-Payne [1], who calculate the \(S_{n}\)-equivariant weight zero Euler characteristic of \(\mathcal{M}_{g,n}\). They use the dual complex \(\Delta_{g,n}\) of the Deligne-Mumford-Knudsen compactification \(\mathcal{M}_{g,n}\subset\overline{\mathcal{M}}_{g,n}\), which can be interpreted as a tropical moduli space of curves [1]. They express the generating function \[\mathsf{z}_{g}:=\sum_{n\geq 0}\chi^{S_{n}}(W_{0}H_{c}^{*}(\mathcal{M}_{g,n}; \mathbb{Q}))\] as a sum over contributions from configuration spaces of graphs. 
The contribution from each graph is a sum of monomials in the inhomogeneous power sum symmetric functions \(P_{i}:=1+p_{i}\), of degree equal to the topological Euler characteristic of the graph. A crucial difference between their work and ours, which has been an unexpected subtlety here, is that they find that the only graphs contributing to their formula are connected with first Betti number \(g\). As such, their formula for \(\mathsf{z}_{g}\) is a Laurent polynomial in the \(P_{i}\)'s, homogeneous of degree \(1-g\). The ability to focus on graphs with fixed Euler characteristic is a significant conceptual aid to their work. In contrast, we find that while all of the graphs contributing to \(\mathsf{h}_{g}\) are connected double covers of metric trees, they do not have fixed first Betti number, so their topological Euler characteristics vary, and indeed for \(g\geq 3\) the formulas for \(\mathsf{h}_{g}\) are not homogeneous in the \(P_{i}\)'s. When \(g=2\), we have \(\mathcal{H}_{2,n}=\mathcal{M}_{2,n}\), so \(\mathsf{h}_{2}=\mathsf{z}_{2}\) is homogeneous of degree \(-1\). ### Applications to moduli spaces of admissible \(G\)-covers in genus zero While our main focus in this paper is the moduli space \(\mathcal{H}_{g,n}\), our techniques are more general. As mentioned above, Theorem 3.5 in Section 3 contains a description of the dual complex of the boundary divisor in any moduli space of pointed admissible \(G\)-covers of genus zero curves, when \(G\) is an abelian group. We specialize to \(G=\mathbb{Z}/2\mathbb{Z}\) in order to study \(\mathcal{H}_{g,n}\). We can prove a generalization of Theorem A to more general moduli spaces of pointed \(G\)-covers: see Remarks 5.13 and 6.6, and Theorem D in Section 6. ### Acknowledgments We are grateful to Dan Abramovich for teaching us about twisted stable maps and admissible \(G\)-covers, and to Jonas Bergstrom for explaining his work [1] and sharing his data on the weight zero compactly supported Euler characteristic of \(\mathcal{H}_{g,n}\). Jonas Bergstrom, Dan Petersen, and Dhruv Ranganathan provided extremely valuable comments on a draft of this paper; we thank them very much. MB is supported by the National Science Foundation under Award No. 2001739. MC was supported by NSF CAREER DMS-1844768, a Sloan Foundation Fellowship and a Simons Foundation Fellowship. SK was supported by an NSF Graduate Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. ## 2. Pointed admissible \(G\)-covers and their moduli In this section we recall moduli spaces of pointed admissible \(G\)-covers, following [1, 1, 1, 2, 3, 4]. We determine the connected components of these spaces when \(g=0\) and \(G\) is abelian (Proposition 2.4), and give a normal crossings compactification (Proposition 2.5). Later, in Section 3, we will determine the dual complex of this compactification. Ultimately, we will obtain a normal crossings compactification of \(\mathcal{H}_{g,n}\) and the corresponding dual complex as a special case in Section 4. ### Admissible \(G\)-covers Let \(G\) be a finite group, and let \(g,n\geq 0\) be integers such that \(2g-2+n>0\). We recall the notion of an _admissible \(G\)-cover_ of nodal curves of type \((g,n)\) over an arbitrary base scheme \(T\) ([1, Definition 4.3.1]). 
It is the data of an \(n\)-marked, stable genus \(g\) curve \((C,p_{1},\ldots,p_{n})\) over \(T\), and a covering of nodal curves \(\phi\colon P\to C\) with an action of \(G\) on \(P\) leaving \(\phi\) invariant, such that: 1. \(\phi\) is a principal \(G\)-bundle away from the nodes and markings of \(C\), 2. The analytic local equations for \(P\to C\to T\) at a point \(p\in P\) over a node of \(C\) are \[\operatorname{Spec}A[z,w]/(zw-t)\to\operatorname{Spec}A[x,y]/(xy-t^{r})\to \operatorname{Spec}A,\] where \(t\in A\), \(x=z^{r}\) and \(y=w^{r}\) for some integer \(r>0\). 3. The analytic local equations for \(P\to C\to T\) at a point \(p\in P\) over a marked point of \(C\) are \[\operatorname{Spec}A[z]\to\operatorname{Spec}A[x]\to\operatorname{Spec}A,\] where \(x=z^{s}\) for some integer \(s>0\). 4. if \(x\in P\) is a geometric node, then the action of the stabilizer \(G_{x}\) of \(x\) on the tangent spaces of the two analytic branches at \(x\) is _balanced_: the characters of these two one-dimensional representations of \(G_{x}\) are inverse to each other. Admissible \(G\)-covers of type \((g,n)\) form a Deligne-Mumford stack, denoted \(\operatorname{Adm}_{g,n}(G)\); this is a consequence of the identification of \(\operatorname{Adm}_{g,n}(G)\) with the space \(\mathcal{B}^{\operatorname{bal}}_{g,n}(G)\) of _balanced twisted \(G\)-covers_ of type \((g,n)\) which is proven in [1] to be a Deligne-Mumford stack. We may write \(G\)_-cover_ rather than _admissible \(G\)-cover_ for short. ### Admissible covers of smooth curves Let \(\operatorname{Adm}_{g,n}^{\circ}(G)\) denote the open substack of \(G\)-covers in which the target curve (and hence also the source curve) is smooth. In this section, we will determine the connected components of \(\operatorname{Adm}_{0,n}^{\circ}(G)\) (Proposition 2.1). We will use this result later when determining the connected components of the corresponding space of pointed admissible \(G\)-covers. There is a forgetful map \[\pi\colon\operatorname{Adm}_{g,n}^{\circ}(G)\to\mathcal{M}_{g,n}\] sending a \(G\)-cover \(P\to(C,p_{1},\ldots,p_{n})\) to the \(n\)-pointed curve \((C,p_{1},\ldots,p_{n})\). This morphism is etale. Working over \(\mathbb{C}\), the fiber over \((C,p_{1},\ldots,p_{n})\) is identified with the set \[\operatorname{Hom}(\pi_{1}(C-\{p_{1},\ldots,p_{n}\},p_{0}),G)/G \tag{3}\] where \(G\) acts by conjugation, and \(p_{0}\in C-\{p_{1},\ldots,p_{n}\}\) is any choice of basepoint. An element of the set (3) specifies a \(G\)-cover of the punctured curve \(C-\{p_{1},\ldots,p_{n}\}\), which can be extended uniquely over the punctures. Then the data of the morphism \(\pi\) is equivalent to the data of the action of \(\pi_{1}\) of the base on the fiber (3) above. We shall now consider this action in the case \(g=0\), when the action may be understood via the classical Hurwitz theory of \(\mathbb{P}^{1}\). We denote by \[\varepsilon_{n}^{\operatorname{ni}}(G):=\{(g_{1},\ldots,g_{n})\in G^{n}:g_{1} \cdots g_{n}=1\}\] the set of _Nielsen classes_. We do not impose that \(g_{1},\ldots,g_{n}\) generate \(G\); correspondingly, our source curves are not required to be connected. The group \(G\) acts by conjugation on \(\varepsilon_{n}^{\operatorname{ni}}(G)\), and the elements of \(\varepsilon_{n}^{\operatorname{ni}}(G)/G\) are called _inner Nielsen classes_. 
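Since, for abelian \(G\), everything reduces to elementary arithmetic in \(G\), Nielsen classes are easy to enumerate by brute force. The following minimal Python sketch is an illustration (not code from the paper), assuming \(G=\mathbb{Z}/m\mathbb{Z}\) written additively; for abelian \(G\) the conjugation action is trivial, so inner Nielsen classes coincide with Nielsen classes.

```python
# Minimal sketch (illustration only): Nielsen classes for the abelian group
# G = Z/mZ, written additively, so the condition g_1 ... g_n = 1 becomes
# g_1 + ... + g_n = 0 (mod m).
from itertools import product

def nielsen_classes(m, n):
    """All tuples (g_1, ..., g_n) in (Z/mZ)^n whose entries sum to 0 mod m."""
    return [g for g in product(range(m), repeat=n) if sum(g) % m == 0]

classes = nielsen_classes(2, 4)       # G = Z/2Z, n = 4 legs
print(len(classes))                   # 8 == |G|**(n-1)
print(classes[:3])                    # (0, 0, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1)
```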
Recall the following relationship between the set (3) to the set of inner Nielsen classes: choose loops \(\rho_{1},\dots,\rho_{n}\) around \(p_{1},\dots,p_{n}\), respectively, based at \(p_{0}\), such that \(\rho_{1},\dots,\rho_{n}\) generate \(\pi_{1}(C-\{p_{1},\dots,p_{n}\},p_{0})\) subject only to the relation \[\rho_{1}\cdot\dots\cdot\rho_{n}=1.\] Such a choice identifies the set (3) with the inner Nielsen classes. Now the following diagram of pullback squares relates \(\operatorname{Adm}^{\circ}_{0,n}(G)\) to Hurwitz spaces of \(G\)-covers. The spaces above are defined as follows. The configuration spaces (ordered and unordered) of \(n\) points in \(\mathbb{P}^{1}\) are denoted \(\operatorname{Conf}_{n}(\mathbb{P}^{1})\) and \(\operatorname{UConf}_{n}(\mathbb{P}^{1})\), respectively. The space \(U\mathcal{H}^{G}_{\mathbb{P}^{1},n}\) is the moduli space parametrizing sets \(S\subset\mathbb{P}^{1}\) of \(n\) points, together with a ramified \(G\)-cover \(f\colon P\to\mathbb{P}^{1}\) whose branch locus is contained in \(S\). The space \(\mathcal{H}^{G}_{\mathbb{P}^{1},n}\) is the ordered version of this space, obtained by pullback. The map \(\mathcal{M}_{0,n}\to\operatorname{Conf}_{n}(\mathbb{P}^{1})\) fixes \((p_{1},p_{2},p_{3})\) to be \((0,1,\infty)\), for instance. **Proposition 2.1**.: If \(G\) is abelian, then \(U\mathcal{H}^{G}_{\mathbb{P}^{1},n}\to\operatorname{UConf}_{n}(\mathbb{P}^{1})\), and hence also \(\operatorname{Adm}^{\circ}_{0,n}(G)\to\mathcal{M}_{0,n}\), is a trivial bundle. As a variety, \(\operatorname{Adm}^{\circ}_{0,n}(G)\) is isomorphic to \(\mathcal{M}_{0,n}\times\varepsilon^{\operatorname{ni}}_{n}(G)\). Proof.: For an arbitrary finite group \(G\), the way in which \(U\mathcal{H}^{G}_{\mathbb{P}^{1},n}\) is a covering space over \(\operatorname{UConf}_{n}(\mathbb{P}^{1})\) is classically understood, essentially going back to Hurwitz [10], see [11, p. 547]. The following is a complete description. For an appropriate choice of basis, \(\operatorname{Hom}(\pi_{1}(\mathbb{P}^{1}-S,p_{0}),G)\) is identified with \(\varepsilon^{\operatorname{ni}}_{n}(G)\), and the spherical braid group \(\pi_{1}(\operatorname{UConf}_{n}(\mathbb{P}^{1}))\) has a presentation with generators \(\gamma_{1},\dots,\gamma_{n-1}\), acting on \(\varepsilon^{\operatorname{ni}}_{n}(G)\) via \[\gamma_{i}\cdot(g_{1},\dots,g_{n})=(g_{1},\dots,g_{i-1},g_{i}g_{i+1}g_{i}^{-1 },g_{i},g_{i+2},\dots,g_{n}).\] In the case that \(G\) is abelian, the action described above induces a trivial action of the spherical braid group \(\pi_{1}(\operatorname{UConf}_{n}(\mathbb{P}^{1}))\) on \(\varepsilon^{\operatorname{ni}}_{n}(G)\), proving the claim. **Remark 2.2**.: Stack-theoretically, we have \[\operatorname{Adm}^{\circ}_{0,n}(G)\cong\mathcal{M}_{0,n}\times[\varepsilon^{ \operatorname{ni}}(G)/G] \tag{4}\] if \(G\) is abelian, where \(G\) acts trivially on \(\varepsilon^{\operatorname{ni}}(G)\). Under this identification, write \[\operatorname{Adm}^{\circ}_{0,n}(G;g_{1},\dots,g_{n}) \tag{5}\] for the connected component of \(\operatorname{Adm}^{\circ}_{0,n}(G)\) corresponding to the Nielsen class \((g_{1},\dots,g_{n})\); it is isomorphic to \(\mathcal{M}_{0,n}\times BG\). ### Pointed admissible covers We study spaces of pointed admissible covers and determine the connected components of these spaces in Proposition 2.4. 
This is an important calculation towards the computation of the connected boundary strata in Theorem 3.5, since the boundary strata of spaces of pointed admissible covers are quotients of products of smaller spaces of pointed admissible covers. Let \(G\) be any group, not necessarily abelian. Let \(\overline{\mathcal{M}}^{G}_{g,n}\) denote the space of \(n\)-marked _pointed admissible \(G\)-covers_ of genus \(g\)[13]. It is a moduli space for nodal admissible \(G\)-covers \(P\to(C,p_{1},\dots,p_{n})\), together with a choice of a lift \(\widetilde{p}_{i}\) on \(P\) of each \(p_{i}\). The open substack \(\mathcal{M}^{G}_{g,n}\) is the moduli space of pointed admissible \(G\)-covers in which source and target are smooth. Summarizing, we have a Cartesian square which lays out the unfortunate lack of parallelism in the notation for these spaces. The notation comes from the literature, however. **Proposition 2.3**.: The morphisms \(\mathcal{M}^{G}_{g,n}\to\operatorname{Adm}^{\circ}_{g,n}(G)\) and \(\overline{\mathcal{M}}^{G}_{g,n}\to\operatorname{Adm}_{g,n}(G)\) are etale. For easy reference, we prove Proposition 2.3 below. We note, however, that the argument appears as part of the proof in [1, Theorem 2.4] of the fact that \(\overline{\mathcal{M}}^{G}_{g,n}\) is a smooth Deligne-Mumford stack, flat, proper, and quasi-finite over \(\overline{\mathcal{M}}_{g,n}\). Proof.: We verify the second statement, which implies the first. Recall the construction of \(\overline{\mathcal{M}}^{G}_{g,n}\), which we summarize following [13]. Let \(E\to\mathcal{C}=[E/G]\) denote the universal source curve and stacky target curves, respectively, over \(\operatorname{Adm}^{G}_{g,n}\), and let \(C\) denote the coarse space of \(\mathcal{C}\). For \(i=1,\dots,n\), let \(\mathcal{S}_{i}\to\mathcal{C}\) denote the closed substack of \(\mathcal{C}^{\text{sm}}\) whose image in \(C\) is the universal \(i^{\text{th}}\) marked point; \(\mathcal{S}_{i}\) is an etale gerbe over \(\operatorname{Adm}^{G}_{g,n}\). Let \(E_{i}=E\times_{\mathcal{C}}\mathcal{S}_{i}\). We have the following diagram, whose top square is Cartesian and where the morphisms known to be etale are labeled: The morphism \(E_{i}\to\operatorname{Adm}^{G}_{g,n}\) is etale since it is a composition of \(E_{i}\to\mathcal{S}_{i}\), which is a pullback of an etale morphism and hence etale, and the etale gerbe \(\mathcal{S}_{i}\to\operatorname{Adm}^{G}_{g,n}\). Therefore \[\overline{\mathcal{M}}^{G}_{g,n}=E_{1}\times_{\operatorname{Adm}^{G}_{g,n}} \cdots\times_{\operatorname{Adm}^{G}_{g,n}}E_{n}\] is also etale over \(\operatorname{Adm}^{G}_{g,n}\). The spaces \(\mathcal{M}^{G}_{g,n}\) and \(\operatorname{Adm}^{\circ}_{g,n}(G)\) need not be connected, as observed in Remark 2.2. Given \(g_{1},\dots,g_{n}\in G\), write \(\mathcal{M}^{G}_{g,n}(g_{1},\dots,g_{n})\) for the open and closed substack of \(\mathcal{M}^{G}_{g,n}\) in which the monodromy at the marking \(\widehat{p}_{i}\) in the source curve is \(g_{i}\). **Proposition 2.4**.: Let \(G\) be an abelian group. Suppose \(g_{1}\cdots g_{n}=1\), so that \(\mathcal{M}^{G}_{0,n}(g_{1},\dots,g_{n})\) is nonempty. The connected components of \(\mathcal{M}^{G}_{0,n}(g_{1},\dots,g_{n})\) are in bijection with orbits of functions \[\{1,\dots,n\}\to G/\langle g_{1},\dots,g_{n}\rangle\] under left \(G\)-translation. 
Proof.: The restriction of the map \(\mathcal{M}^{G}_{0,n}\xrightarrow{\pi}\operatorname{Adm}^{\circ}_{0,n}(G)\) to \(\mathcal{M}^{G}_{0,n}(g_{1},\dots,g_{n})\) becomes a surjection \[\mathcal{M}^{G}_{0,n}(g_{1},\dots,g_{n})\xrightarrow{\pi}\operatorname{Adm}^{ \circ}_{0,n}(G;g_{1},\dots,g_{n})\cong\mathcal{M}_{0,n}\times BG,\] where the last isomorphism was established in Proposition 2.1. This morphism is etale by Proposition 2.3. Now let \(P\to(C,p_{1},\dots,p_{n})\) be any unpointed admissible cover; the fiber of \(\pi\) over it is the action groupoid on all lifts \(\widetilde{p}_{1},\dots,\widetilde{p}_{n}\) of \(p_{1},\dots,p_{n}\) respectively, with the group \(G\) acting by simultaneous translation of the \(\widetilde{p}_{i}\). The connected components of \(\mathcal{M}^{G}_{0,n}(g_{1},\dots,g_{n})\) are in bijection with the orbits of this category under the further action of pure mapping class group \(\operatorname{Mod}_{0,n}\). Those orbits are in bijection with orbits of functions \(\{1,\dots,n\}\to\pi_{0}(P)\) under left \(G\)-translation; and \(\pi_{0}(P)\cong G/\langle g_{1},\dots,g_{n}\rangle\). It will be convenient to work with pointed curves labelled by arbitrary finite sets. Thus let \(G\) be a finite group, \(S\) a finite set, and \(\rho\colon S\to G\) any function. For \(g\geq 0\) with \(2g-2+|S|>0\), let \[\overline{\mathcal{M}}^{G}_{g,S}(\rho)\] denote the space of pointed admissible \(G\)-covers of genus \(g\) curves with specified monodromy \(\rho\). Let \(\mathcal{M}^{G}_{g,S}(\rho)\) denote the open subset parametrizing admissible \(G\)-covers in which the target curve is smooth. **Proposition 2.5**.: The space \(\overline{\mathcal{M}}^{G}_{g,S}=\coprod_{\rho}\overline{\mathcal{M}}^{G}_{g,S}(\rho)\) is a normal crossings compactification of \(\mathcal{M}^{G}_{g,S}=\coprod_{\rho}\mathcal{M}^{G}_{g,S}(\rho)\). Proof.: This follows from the fact that \(\operatorname{Adm}^{\circ}_{g,n}(G)\subset\operatorname{Adm}_{g,n}(G)\) is a normal crossings compactification, by the proof of [10, SS3.23], and \(\overline{\mathcal{M}}^{G}_{g,S}\) is etale over \(\operatorname{Adm}_{g,S}(G)\) (Proposition 2.3). ## 3. Boundary complexes of pointed admissible \(G\)-covers In this section we write down the boundary complex for the normal crossings compactification \[\mathcal{M}^{G}_{0,S}(\rho)\subset\overline{\mathcal{M}}^{G}_{0,S}(\rho) \tag{6}\] when \(G\) is abelian (Theorem 3.5). This will be used in Section 4, to provide a normal crossings compactification of \(\mathcal{H}_{g,n}\) and obtain its boundary complex. The boundary complex is governed by _graph-theoretic admissible covers of graphs_, which we develop below in SS3.1. The basic notion of an admissible cover in tropical geometry was established in [10] and [11]. More recently and closely related to our approach, for an arbitrary finite group \(G\), the notion of a graph \(G\)-cover associated to a admissible \(G\)-cover was developed by Galeotti [1, 1]--see especially [1, SS3.1]--for the purpose of studying the birational geometry, and singularities, of (coarse spaces of) moduli spaces of genus \(g\) curves with a principal \(G\)-bundle. Our definition is essentially a streamlined version of Galeotti's earlier definition, in the case that both apply. By putting into place our restrictions on \(g\) and \(G\), we are able to give a completely explicit description of the boundary complex of (6). See Remarks 3.6 and 3.7 for further comments on the general case and for further discussion of the surrounding literature. 
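Before turning to graph-theoretic covers, it may help to make Proposition 2.4 concrete in the simplest setting. The following minimal Python sketch is an illustration (not code from the paper), assuming \(G=\mathbb{Z}/m\mathbb{Z}\) written additively, in which case \(G/\langle g_{1},\dots,g_{n}\rangle\) is cyclic of order \(d=\gcd(m,g_{1},\dots,g_{n})\); it counts orbits of functions \(\{1,\dots,n\}\to G/\langle g_{1},\dots,g_{n}\rangle\) under simultaneous \(G\)-translation.

```python
# Minimal sketch (illustration only): count connected components of
# M^G_{0,n}(g_1, ..., g_n) for G = Z/mZ via Proposition 2.4, as orbits of
# functions {1, ..., n} -> G/<g_1, ..., g_n> under simultaneous translation.
from itertools import product
from math import gcd

def component_count(m, monodromies):
    n = len(monodromies)
    assert sum(monodromies) % m == 0, "Nielsen condition fails"
    d = gcd(m, *monodromies)          # G/<g_1, ..., g_n> is cyclic of order d
    orbits = set()
    for f in product(range(d), repeat=n):
        orbit = frozenset(tuple((x + c) % d for x in f) for c in range(d))
        orbits.add(orbit)
    return len(orbits)

print(component_count(2, [1, 1, 1, 1]))   # 1: the double cover is connected
print(component_count(4, [2, 2, 0]))      # 4 == d**(n-1) with d = 2
```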
### Categories of covers of graphs Throughout Section 3, let \(G\) be a finite abelian group. The boundary strata of the compactification \[\mathcal{M}^{G}_{0,S}(\rho)\hookrightarrow\overline{\mathcal{M}}^{G}_{0,S}(\rho)\] are in correspondence with _graph-theoretic admissible \(G\)-covers_, which we will now define. A _graph_\(C=(V,H,i_{C},r_{C})\) is the data of two finite sets of _vertices_\(V=V(C)\), and _half-edges_\(H=H(C)\), together with maps \[i_{C}\colon H\to H,\qquad r_{C}\colon H\to V\] such that \(i_{C}\) is an involution. We abbreviate \(i=i_{C}\) and \(r=r_{C}\). We permit \(i\) to have fixed points, and let \(L=L(C)\) denote the set of fixed elements of \(i\), called _legs_. View \(r_{C}\) as the map taking a half-edge to its incident vertex. The edge set \(E=E(C)\) is the set of pairs \(\{h,i(h)\}\) for \(i(h)\neq h\); view \(i_{C}\) as the "other half" map on the half-edges. A morphism of graphs \(f\colon C\to C^{\prime}\) is given by set maps \(f_{V}\colon V\to V^{\prime}\) and \(f_{H}\colon H\to H^{\prime}\) such that the relevant squares commute: For a finite set \(S\), an _\(S\)-marking_ of \(C\) is an injection \(m=m_{C}\colon S\to L(C)\). It will be convenient _not_ to require that \(m\) is a bijection. A morphism of \(S\)-marked graphs \((C,m_{C})\to(C^{\prime},m_{C^{\prime}})\) is a morphism of graphs \(f\colon C\to C^{\prime}\) that preserves the \(S\)-marking, i.e., \(f_{H}\circ m_{C}=m_{C^{\prime}}\). **Definition 3.1**.: Let \(G\) be a finite abelian group, \(S\) a finite set. An _\(S\)-marked, admissible \(G\)-cover of graphs_ in genus \(0\) is 1. A morphism \(f\colon P\to C\) of \(S\)-marked graphs, such that \(C\) is a _stable_\(S\)-marked tree: for each vertex \(v\in V(C)\), we have \(|r_{C}^{-1}(v)|\geq 3\). We require that the legs of \(C\) are in _bijection_ with \(S\). 2. A left action \(\Phi\colon G\times P\to P\) leaving \(P\to C\) invariant, such that \(P\to P/G\) is canonically isomorphic to \(P\to C\). 3. A "monodromy marking" \(\mu\colon H(C)\to G\). Thus every half edge (including legs) of \(C\) is assigned an element of \(G\). We require \(\mu(i(h))=\mu(h)^{-1}\) for every \(h\in H(C)\). 4. A function \(g\colon V(P)\to\mathbb{Z}_{\geq 0}\); we call \(g(v)\) the _weight_ or _genus_ of \(v\). The above data must satisfy: 1. For every \(v\in V(C)\), \(f^{-1}(v)\cong G/\langle\mu(h):h\in r^{-1}(v)\rangle\) as left \(G\)-sets, and \[\prod_{h\in r^{-1}(v)}\mu(h)=1.\] 2. For every \(h\in H(C)\), \(f^{-1}(h)\cong G/\langle\mu(h)\rangle\) as left \(G\)-sets. 3. (local Riemann-Hurwitz) For all \(v\in V(P)\), writing \(w=f(v)\) and \(n_{w}=r_{C}^{-1}(w)\), the genus \(g(v)\) of \(v\) is given by \[2-2g(v)=|\langle\mu(n_{w})\rangle|\left(2-\sum_{h\in n_{w}}\frac{|\langle\mu(h )\rangle|-1}{|\langle\mu(h)\rangle|}\right).\] We will use the boldface notation \(\mathbf{P}\to\mathbf{C}\) to indicate a graph-theoretic admissible \(G\)-cover, with the understanding that this includes all of the data above. When we need to refer to the marking functions, we will write \(m_{P}\) for the marking of \(P\) and \(m_{C}\) for the marking of \(C\). It is clear from condition (c) that the genus function \(g\) is determined by the monodromy marking \(\mu\) as well as the morphism \(P\to C\). Moreover, since \(C\) is a tree, the data of \(C\) and \(\mu\), without the \(S\)-marking, actually determine \(P\) and \(\Phi\) up to isomorphism. On the other hand, the \(S\)-marking on \(P\) is not in general determined by the \(S\)-marking on \(C\). 
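The local Riemann-Hurwitz condition (c) is the only numerical constraint in Definition 3.1, and it is straightforward to evaluate. The sketch below is an illustration (not code from the paper); the example with \(G=\mathbb{Z}/2\mathbb{Z}\) and six half-edges of monodromy \(1\) recovers genus \(2\), as expected for a double cover of \(\mathbb{P}^{1}\) branched over six points.

```python
# Minimal sketch (illustration only): the local Riemann-Hurwitz formula of
# Definition 3.1(c),
#   2 - 2 g(v) = |<mu(n_w)>| * (2 - sum_h (|<mu(h)>| - 1)/|<mu(h)>|).
from fractions import Fraction

def local_genus(stab_order, half_edge_orders):
    """Genus of a vertex of P above a vertex w of C with the given local data."""
    ramification = sum(Fraction(r - 1, r) for r in half_edge_orders)
    two_minus_2g = stab_order * (2 - ramification)
    g = (2 - two_minus_2g) / 2
    assert g.denominator == 1 and g >= 0, "inconsistent local monodromy data"
    return int(g)

# G = Z/2Z: six half-edges of monodromy 1 (order 2) and three of monodromy 0
# (order 1, contributing nothing) give a genus 2 vertex upstairs.
print(local_genus(2, [2] * 6 + [1] * 3))   # 2
```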
If \(\mathcal{P}\to\mathcal{C}\) is an \(S\)-marked admissible \(G\)-cover of nodal curves, with \(\mathcal{C}\) a stable \(S\)-marked curve of genus \(0\), then we obtain a corresponding \(S\)-marked admissible \(G\)-cover of dual graphs \(\mathbf{P}\to\mathbf{C}\). The meaning of condition (a) is that the stabilizer of an irreducible component of \(\mathcal{P}\) above a given irreducible component \(\mathcal{C}_{v}\) of \(\mathcal{C}\) is exactly the subgroup of \(G\) generated by the monodromy elements around the special points (nodes and marked points) on \(\mathcal{C}_{v}\). The content here is that since \(\mathcal{C}_{v}\) is rational, \(\pi_{1}(\mathcal{C}_{v})\) is generated by keyhole loops around the special points. Similarly, the data of a homomorphism \(\pi_{1}(\mathcal{C}_{v})\to G\), for appropriately chosen keyhole loops, is the data of an ordered tuple elements of \(G\) whose product is the identity. Condition (b) is similar. **Definition 3.2**.: Let \(\mathbf{P}\to\mathbf{C}\) and \(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime}\) be graph-theoretic \(S\)-pointed admissible \(G\)-covers. 1. An _isomorphism_\((\mathbf{P}\to\mathbf{C})\to(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime})\) is the data of graph isomorphisms \(\phi\colon P\to P^{\prime}\) and \(\psi\colon C\to C^{\prime}\), compatible with the marking functions \(m_{P}\) and \(m_{C}\), as well as the monodromy marking \(\mu\), which fit into a commutative square. 2. Let \(e\in E(C)\) be an edge. The _edge-contraction_ of \(\mathbf{P}\to\mathbf{C}\), denoted \((\mathbf{P}\to\mathbf{C})/e\), is obtained by contracting the edge \(e\) in \(C\), together with its preimages in \(P\). The new monodromy marking is obtained by restricting the previous one. **Definition 3.3**.: We write \(\Gamma^{G}_{0,S}\) for the category of all graph-theoretic \(S\)-pointed admissible \(G\)-covers, where morphisms are given by compositions of isomorphisms and edge-contractions. Given a function \(\rho\colon S\to G\), we put \(\Gamma^{G}_{0,S}(\rho)\) for the full subcategory of \(\Gamma^{G}_{0,S}\) on those graph-theoretic \(S\)-pointed admissible \(G\)-covers \(\mathbf{P}\to\mathbf{C}\) such that the monodromy marking on \(\mathbf{C}\) extends \(\rho\). Precisely, \(\rho=\mu|_{L(C)}\circ m_{C}\) where \(m_{C}\colon S\to L(C)\) is the \(S\)-marking on \(C\). ### The dual complex of the boundary We now state Theorem 3.5 on the boundary complex of the space of pointed admissible covers. Recall the category of _symmetric \(\Delta\)-complexes_ (see [1]), i.e., the category \(\operatorname{Fun}(\mathsf{Fl}^{\mathrm{op}},\mathsf{Set})\), where \(\mathsf{Fl}\) is the category of finite sets with injections. For \(q\geq-1\) an integer, we henceforth write \[[q]=\{0,\ldots,q\}.\] This notational convention includes the special case \([-1]=\emptyset\). Given \(X\colon\mathsf{Fl}^{\mathrm{op}}\to\mathsf{Set}\) and an integer \(q\geq-1\), write \[X_{q}=X([q])\] for the set of \(q\)_-simplices_ of \(X\). **Definition 3.4**.: Fix \(g=0\) and \(G\) abelian. For data \(G,S,\) and \(\rho\) as above, we define a symmetric \(\Delta\)-complex \[\Delta^{G}_{0,S}(\rho)\colon\mathsf{Fl}^{\mathrm{op}}\to\mathsf{Set}\] as follows. For each \(q\geq-1\), the set \(\Delta^{G}_{0,S}(\rho)_{q}\) is the set of isomorphism classes of pairs \((\mathbf{P}\to\mathbf{C},\omega)\), where 1. \(\mathbf{P}\to\mathbf{C}\) is an object of \(\Gamma^{G}_{0,S}(\rho)\) 2. \(\omega\colon[q]\to E(C)\) is a bijection, called an _edge-labelling_. 
For morphisms, given \(i\colon[q^{\prime}]\hookrightarrow[q]\), and given a graph-theoretic admissible cover \(\mathbf{P}\to\mathbf{C}\) as above, contract the edges \(E(C)-\omega(i([q^{\prime}]))\), to obtain a new object of \(\Gamma^{G}_{0,S}(\rho)\), and take the unique edge-labelling by \([q^{\prime}]\) which preserves the order of the remaining edges. **Theorem 3.5**.: Let \(G\) be an abelian group, \(S\) a finite set. There is an isomorphism of symmetric \(\Delta\)-complexes \[\Delta^{G}_{0,S}(\rho)\cong\Delta(\mathcal{M}^{G}_{0,S}(\rho)\subset\overline{ \mathcal{M}}^{G}_{0,S}(\rho)).\] Proof.: Let us start with the stratification of the boundary of \(\operatorname{Adm}_{0,S}(G;\rho)\). The space \(\operatorname{Adm}_{0,S}(G;\rho)\) is nonempty if and only if \(\prod_{s\in S}\rho(s)=1_{G}\). The boundary complex of \(\operatorname{Adm}_{0,S}^{0}(G;\rho)\subset\operatorname{Adm}_{0,S}(G;\rho)\) is the complex of trees \(C\) with a bijective \(S\)-marking \(m\colon S\to L(C)\), together with a monodromy marking \(\mu\colon H(C)\to G\) extending \(\rho\), which must satisfy, for every vertex \(v\in V(C)\) and \(e=\{h_{1},h_{2}\}\in E(C)\), \[\prod_{h\in r^{-1}(v)}\mu(h)=1,\qquad\mu(h_{1})\mu(h_{2})=1.\] More formally, as a symmetric \(\Delta\)-complex, the boundary complex has a \(q\)-simplex for every such datum \((C,m,\mu)\) together with an arbitrary bijective edge-labelling \(\omega\colon[q]\to E(C)\), one for each isomorphism class of \((C,m,\mu,\omega)\). Now since \(\overline{\mathcal{M}}_{0,S}(G;\rho)\) is etale over \(\operatorname{Adm}_{0,S}(G;\rho)\), there is a morphism of boundary complexes from that of the former to that of the latter. We now study the fibers of this morphism. Suppose \[(C,m\colon S\to L(C),\mu\colon H(C)\to G)\] is a stable \(S\)-marked tree with monodromy marking \(\mu\). For \(v\in C\), write \(n_{v}=r^{-1}(v)\) for the set of half-edges (including legs) at \(v\), and write \[G_{v}=\langle\mu(h)\colon h\in n_{v}\rangle.\] Let \(\mu_{v}\) be the restriction of \(\mu\) to \(n_{v}\). Then the data of the stable \(S\)-marked tree indexes a boundary stratum of \(\operatorname{Adm}_{0,S}(G;\rho)\). Note that this stratum is indeed connected, since it is, up to finite quotient, isomorphic to a product \(\prod_{v\in V(C)}\operatorname{Adm}_{0,n_{v}}(G;\mu_{v})\) of varieties that are themselves connected (5). The preimage in \(\overline{\mathcal{M}}_{0,S}(G;\rho)\) of this stratum is isomorphic to the variety \[\prod_{v\in V(C)}\mathcal{M}^{G}_{0,n_{v}}(\mu_{v})\,/\,G^{E(C)} \tag{7}\] e.g., by [10, SS2]. Let us explain the action of \(G^{E(C)}\) in (7). For a given edge \(e=\{h,h^{\prime}\}\), incident to vertices \(v\) and \(v^{\prime}\), the copy of \(G\) indexed by \(e\) acts by translating the lifted marked point indexed by \(h\), respectively \(h^{\prime}\), in the moduli space \(\mathcal{M}^{G}_{0,n_{v}}(\mu_{v})\), respectively \(\mathcal{M}^{G}_{0,n_{v^{\prime}}}(\mu_{v^{\prime}}).\) (In general, \(G\) would also change the values of the marking functions \(\mu_{v}(h)\) and \(\mu_{v^{\prime}}(h^{\prime})\), respectively, by conjugation, but \(G\) is abelian here.) The variety (7) may not be connected, and it remains to describe its connected components. For each \(v\in V(C)\), let \[X_{v}=\{\operatorname{Fun}(n_{v},G/G_{v})\}/G\] where the quotient is with respect to the \(G\)-action on \(G/G_{v}\). From Proposition 2.4, the connected components of (7) are in bijection with \[\left(\prod_{v\in V(C)}X_{v}\right)/G^{E(C)}. 
\tag{8}\] The last step is a combinatorial identification of (8) with the set of isomorphism classes of graph-theoretic \(S\)-pointed admissible \(G\)-covers. Let us begin by considering local data at a single vertex \(v\in V(C)\). Consider an element \(f_{v}\in X_{v}\), together with the data of \(h|_{n_{v}}\colon n_{v}\to G\). From \(f_{v}\) and \(h|_{n_{v}}\) we can extract a graph-theoretic \(n_{v}\)-pointed admissible cover involving graphs with legs but no edges: \(C_{v}\) is a single vertex, with legs \(n_{v}\); \(V(P_{v})=G/G_{v}\) as a left \(G\)-set; and above each leg \(h\in n_{v}\) of \(C\) is a set of legs in \(P_{v}\) isomorphic to \(G/\langle h\rangle\), with root map compatible with the map \(G/\langle h\rangle\to G/G_{v}\). Finally, \(P_{v}\) has \(S\)-marking given by \(f_{v}\). Continue to fix a stable \(S\)-marked tree \(C\) and monodromy marking \(\mu\) on \(C\). Now, given \((f_{v})_{v}\in\prod X_{v}\), we assemble the local picture above into an admissible cover of graphs. For every edge \(e=\{h,h^{\prime}\}\) of \(C\), with root vertices \(v=r(h)\) and \(v^{\prime}=r(h^{\prime})\), the half-edges of \(P_{v}\) above \(h\) and the half-edges of \(P_{v^{\prime}}\) above \(h^{\prime}\) are each isomorphic to \(G/\langle\mu(h)\rangle=G/\langle\mu(h^{\prime})\rangle\) as \(G\)-sets. There is a unique \(G\)-equivariant bijection between these two sets that sends the chosen lift of \(h\) to the chosen lift of \(h^{\prime}\), and another choice of lifts of \(h\) and \(h^{\prime}\) produce the same bijection if they are related to the original choices by the same element of \(G\). Therefore these identifications glue the half-edges above \(h\) and \(h^{\prime}\) into edges above \(e\), obtaining a graph-theoretic admissible cover \(P\to C\) which was independent of the action of \(G^{E(C)}\). It is straightforward to reverse this process, giving an element of the set (8) starting from a graph-theoretic admissible cover. **Remark 3.6**.: Theorem 3.5 furnishes an explicit description of the symmetric \(\Delta\)-complex \[\Delta(\mathcal{M}_{g,n}^{G}\subset\overline{\mathcal{M}}_{g,n}^{G}) \tag{9}\] when \(g=0\) and \(G\) is abelian. It is sufficiently explicit that it can be programmed, and indeed we carry out computer calculations for the results in Appendix A. Without restrictions on \(G\) and \(g\), it is still possible to give a general description of (9) using the framework of _graphs of groups_, roughly, decorating vertices of graphs with fundamental groups of punctured curves. This idea will appear in future work by M. Talpo, M. Ulirsch, and D. Zakharov; we thank Ulirsch for bringing it to our attention. This general description is not explicit in the above sense. It involves the very interesting sub-question of determining the connected components of the spaces \(\mathcal{M}_{g,n}^{G}\) in general; compare with Proposition 2.1. We also refer to forthcoming work of P. Souza, that constructs (9) in the case of \(G\) cyclic with \(g\) arbitrary, and identifies it as the nonarchimedean skeleton of the toroidal pair. Moreover, that work is a precursor to further work by Y. El Maazouz, P. Helminck, F. Rohrle, P. Souza, and C. Yun studying the homotopy type of boundary complexes of unramified \(\mathbb{Z}/p\mathbb{Z}\) covers for \(g=2\). **Remark 3.7**.: The graph-theoretic admissible \(G\)-covers in this paper (Definition 3.1) are exactly what are needed for an precise description of the boundary complex (Theorem 3.5). 
Thus they are reasonably expected to be similar to, but distinct from, the spaces of covers of tropical curves appearing in [1], in [10], and the references therein. The work [10] on tropicalizations of the space of admissible covers is an important comparison point for this paper. Rather than \(G\)-covers, they study the admissible covers compactification of the Hurwitz space of degree \(d\) covers of smooth curves with fixed target genus \(h\) and fixed ramification profiles (and hence fixed source genus \(g\)) over \(n\) marked branch points in the target. All of the inverse images of the branch points are also marked. This moduli space is canonically isomorphic to a cover of a component of the space \(\operatorname{Adm}_{h,n}(S_{d})\). In [10] the boundary complex, which may be identified with the link of the skeleton of the Berkovich analytification [1], is _compared_, but not identified, with a certain space of tropical admissible covers, via a surjective morphism of generalized cone complexes from the former to the latter. The failure for this surjection to be an isomorphism is due to multiplicities fully accounted for in [10, SS4.2.4], and is related to Remark 3.6 above. ## 4. Compactifications of \(\mathcal{H}_{g,n}\) Let \(g\geq 2\) and \(n\geq 0\). Throughout this section we will fix \[S=\{1,\dots,n\}\cup\{w_{1},\dots,w_{2g+2}\},\] and fix \(G=\mathbb{Z}/2\mathbb{Z}=\{0,1\}\). We also define \(\rho\colon S\to\mathbb{Z}/2\mathbb{Z}\) by \(\rho(i)=0\) for all \(i\in\{1,\ldots,n\}\), and \(\rho(w_{k})=1\) for \(k\in\{1,\ldots,2g+2\}\). We will discuss how the stack quotient \[[\overline{\mathcal{M}}_{0,S}^{\mathbb{Z}/2\mathbb{Z}}(\rho)/S_{2g+2}]\] provides a normal crossings compactification of \(\mathcal{H}_{g,n}\), and give an explicit description of the dual complex \(\Theta_{g,n}\) of this compactification. The description will be in terms of the dual complexes studied in the previous section. We first consider the case of labelled Weierstrass points, and then quotient out by \(S_{2g+2}\). ### The complex \(\widetilde{\Theta}_{g,n}\) First, let \(\widetilde{\mathcal{H}}_{g,n}\) denote the moduli stack of hyperelliptic curves of genus \(g\) with \(n\) distinct marked points and \(2g+2\) labelled Weierstrass points. The symmetric group on \(2g+2\) letters permutes the labels on Weierstrass points, and \[\mathcal{H}_{g,n}\cong[\widetilde{\mathcal{H}}_{g,n}/S_{2g+2}].\] In this subsection, we will provide a normal crossings compactification of \(\widetilde{\mathcal{H}}_{g,n}\) and give the corresponding dual complex. Then, we will quotient out by \(S_{2g+2}\) to give a normal crossings compactification of \(\mathcal{H}_{g,n}\). In \(\widetilde{\mathcal{H}}_{g,n}\), a marked point _is_ allowed to coincide with a Weierstrass point, and two marked points are allowed to form a conjugate pair under the hyperelliptic involution. Because of this, two types of graphs will require special attention. **Definition 4.1**.: We call the following graph-theoretic admissible covers type (1) and type (2) respectively: 1. For distinct \(i,j\in\{1,\ldots,n\}\), the admissible cover of graphs in Figure 2 on the left. 2. For each \(i\in\{1,\ldots,n\}\) and \(w_{k}\in\{w_{1},\ldots,w_{2g+2}\}\), the admissible cover of graphs in Figure 2 on the right. 
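For orientation, the monodromy datum fixed above is small enough to write down directly. The snippet below is a minimal Python sketch (an illustration, with hypothetical helper names, not code from the paper); it builds \((S,\rho)\) for given \(g\) and \(n\) and records two elementary checks used implicitly in this section: the product of all monodromies is trivial, and the nontrivial monodromies generate \(\mathbb{Z}/2\mathbb{Z}\), so the corresponding double cover is connected.

```python
# Minimal sketch (illustration only): the marking set S = {1,...,n, w_1,...,w_{2g+2}}
# and the monodromy function rho used for H_{g,n}, with G = Z/2Z written additively.
def hyperelliptic_datum(g, n):
    rho = {i: 0 for i in range(1, n + 1)}                  # marked legs
    rho.update({f"w{k}": 1 for k in range(1, 2 * g + 3)})  # branch legs
    assert sum(rho.values()) % 2 == 0    # product of all monodromies is trivial
    connected = 1 in rho.values()        # nontrivial monodromy => connected cover
    return rho, connected

rho, connected = hyperelliptic_datum(2, 3)
print(len(rho), connected)               # 9 legs in S, True
```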
**Proposition 4.2**.: There is an open inclusion \[\widetilde{\mathcal{H}}_{g,n}\hookrightarrow\overline{\mathcal{M}}_{0,S}^{ \mathbb{Z}/2\mathbb{Z}}(\rho)\] which is a normal crossings compactification, and whose boundary complex \(\widetilde{\Theta}_{g,n}\) is isomorphic to the subcomplex of \[\Delta_{0,S}^{\mathbb{Z}/2\mathbb{Z}}(\rho)\] on simplices whose vertices are not of type (1) or (2) in Definition 4.1. Proof.: Let \(\widetilde{\mathcal{H}}^{\circ}_{g,n}\) denote the open substack of \(\widetilde{\mathcal{H}}_{g,n}\) in which a marked point may not collide with a Weierstrass point, and two marked points may not form a conjugate pair. Then \[\widetilde{\mathcal{H}}^{\circ}_{g,n}\cong\mathcal{M}^{\mathbb{Z}/2\mathbb{Z}}_ {0,S}(\rho),\] where \(\mathcal{M}^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\) denotes the interior of the moduli space \(\overline{\mathcal{M}}^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\) of pointed admissible covers. We define a partial compactification \(\mathcal{H}^{*}_{g,n}\) of \(\widetilde{\mathcal{H}}^{\circ}_{g,n}\), such that \[\widetilde{\mathcal{H}}^{\circ}_{g,n}\subset\mathcal{H}^{*}_{g,n}\subset \overline{\mathcal{M}}^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho),\] and the second inclusion is normal crossings. In \(\overline{\mathcal{M}}^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\), define \(\mathcal{H}^{*}_{g,n}\) to be the open complement of all boundary divisors except for those corresponding to dual graphs of type (1) or (2) (see Definition 4.1). Since \(\mathcal{H}^{*}_{g,n}\) is the complement of a subset of the boundary divisors, the divisor \[\overline{\mathcal{M}}^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\smallsetminus \mathcal{H}^{*}_{g,n}\] still has normal crossings. Stabilization gives a canonical isomorphism \(\mathcal{H}^{*}_{g,n}\cong\widetilde{\mathcal{H}}_{g,n}\) which is equivariant with respect to the action of \(S_{n}\), thus giving the first part of the result. We now turn our attention to the boundary complex. Denote by \(\Delta^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\) the dual complex of the compactification \[\widetilde{\mathcal{H}}^{\circ}_{g,n}\cong\mathcal{M}^{\mathbb{Z}/2\mathbb{Z}} _{0,S}(\rho)\subset\overline{\mathcal{M}}^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho).\] The target graphs of type (1) and (2) in Definition 4.1 have one edge, and correspond to vertices in \(\Delta^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\). Then, the boundary complex \(\widetilde{\Theta}_{g,n}\) of the inclusion \[\widetilde{\mathcal{H}}_{g,n}\subset\overline{\mathcal{M}}^{\mathbb{Z}/2 \mathbb{Z}}_{0,S}(\rho)\] is the subcomplex of \(\Delta^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\) determined by those simplices which have no vertices of type (1) or (2) in Definition 4.1. Let us now describe the complex \(\Delta^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\) in more detail. Its \(q\)-simplices are given by isomorphism classes of pairs \((\mathbf{P}\to\mathbf{C},\omega)\), where \(\mathbf{P}\to\mathbf{C}\) is an object of the category \(\Gamma^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\) (Definition 3.3), and \(\omega\colon[q]\to E(C)\) is an edge-labelling. Moreover, on \(L(C)\), the monodromy marking \(\mu\) satisfies \(\mu(m_{C}(j))=0\) if \(j\in\{1,\dots,n\}\), and \(\mu(m_{C}(j))=1\) if \(j\in\{w_{1},\dots,w_{2g+2}\}\). We will call the elements of \[m_{C}(\{w_{1},\dots,w_{2g+2}\})\subset L(C)\] the _branch legs_ of \(C\). Notice that the above conditions on \(\mu|_{L(C)}\) suffice to determine \(\mu\) on all other half-edges of \(C\), by condition (1) of Definition 3.4. 
Call a vertex \(v\in V(C)\) a _leaf_ vertex if it is incident to only one edge. If a leaf vertex \(v\in V(C)\) supports an odd number of branch legs, then the non-leg half edge \(h\) incident to \(v\) must satisfy \(\mu(h)=1\). On the other hand, if a leaf vertex \(v\) supports an even number of branch legs, then the non-leg half edge \(h\) incident to \(v\) must satisfy \(\mu(h)=0\). Proceeding inductively, this determines \(\mu\) on all half-edges incident to non-leaf vertices of \(C\) as well. This discussion implies that given the monodromy data \(\rho\) and an \(S\)-marked stable tree \(C\), the only additional data required to determine an object of the category \(\Gamma^{\mathbb{Z}/2\mathbb{Z}}_{0,S}(\rho)\) is a lift of the marking function \(m_{C}\colon S\to L(C)\) to a function \(m_{P}\colon S\to L(P)\) such that the diagram commutes. (Note that the morphism of graphs \(P\to C\), without the marking function on \(P\), is already determined by \(C\) and \(\mu\).) Moreover, since each branch leg in \(C\) has a unique preimage in \(P\), one only needs to choose, for each \(i\in\{1,\ldots,n\}\), a leg in the preimage of \(m(i)\in L(C)\). Two such choices are equivalent if they differ by the \(\mathbb{Z}/2\mathbb{Z}\)-action on \(P\). See Figure 3 for an example. ### The complex \(\Theta_{g,n}\) We now construct a normal crossings compactification of \(\mathcal{H}_{g,n}\) and the corresponding dual complex \(\Theta_{g,n}\). By Proposition 4.2, in order to pass from \(\Delta_{0,S}^{\mathbb{Z}/2\mathbb{Z}}(\rho)\) to \(\widetilde{\Theta}_{g,n}\), we remove all edge-labelled pairs \((\mathbf{P}\to\mathbf{C},\omega)\) such that \(\mathbf{P}\to\mathbf{C}\) admits a contraction to covers of type (1) or (2) in Definition 4.1. To that end, let \[\Gamma_{0,S}^{\mathbb{Z}/2\mathbb{Z},*}(\rho)\] be the full subcategory of \(\Gamma_{0,S}^{\mathbb{Z}/2\mathbb{Z}}(\rho)\) on those covers which do not admit a contraction to covers of type (1) or (2). **Definition 4.3**.: We define the category \(\Gamma_{g,n}^{\mathcal{H}}\) as follows. 1. The objects are \(S_{2g+2}\)-orbits of objects of \(\Gamma_{0,S}^{\mathbb{Z}/2\mathbb{Z},*}(\rho)\). Precisely, the objects are covers \(\mathbf{P}\to\mathbf{C}\), where 1. \(\mathbf{C}=(C,m_{C})\) is the data of a stable tree \(C\) with \(2g+2+n\) legs, together with an injective function \(m_{C}\colon\{1,\ldots,n\}\to L(C)\) such that there are no vertices \(v\in V(C)\) such that \[|r^{-1}(v)|=3,\quad|L(C)\cap r^{-1}(v)|=2,\quad\text{and}\quad|m_{C}^{-1}(v)|=1.\] 2. \(\mathbf{P}=(P,m_{P})\), where \(P\) is the unique graph-theoretic admissible \(\mathbb{Z}/2\mathbb{Z}\)-cover of \(C\) obtained by declaring each unmarked leg to have monodromy \(1\in\mathbb{Z}/2\mathbb{Z}\) and each marked leg to have monodromy \(0\), and \(m_{P}\colon\{1,\ldots,n\}\to L(P)\) is a marking of \(L(P)\) such that \(m_{P}(i)\) is a leg in the inverse image of \(m_{C}(i)\) for all \(i\). 2. The morphisms are compositions of isomorphisms and edge-contractions. **Proposition 4.4**.: The inclusion \(\mathcal{H}_{g,n}\subset[\overline{\mathcal{M}}_{0,S}^{\mathbb{Z}/2\mathbb{Z}} (\rho)/S_{2g+2}]\) is a normal crossings compactification, and the boundary complex \(\Theta_{g,n}\) has the following explicit description. Figure 3. A \(\{1,2\}\cup\{w_{1},\ldots,w_{8}\}\)-marked stable tree \(C\), together with the two lifts of \(m_{C}\) to a marking \(m_{P}\). 
These lifts are determined by a choice of fiber over each leg marked by \(\{1,2\}\) on \(C\), and two such choices define the same graph-theoretic admissible \(\mathbb{Z}/2\mathbb{Z}\)-cover if they differ by the \(\mathbb{Z}/2\mathbb{Z}\)-action on \(P\). 1. The set of \(q\)-simplices \(\left(\Theta_{g,n}\right)_{q}\) is the set of isomorphism classes of pairs \((\mathbf{P}\to\mathbf{C},\omega)\) where \(\mathbf{P}\to\mathbf{C}\) is an object of \(\Gamma^{\mathcal{H}}_{g,n}\), and \(\omega\colon[q]\to E(C)\) is an edge-labelling. 2. Given an injection \(\iota\colon[q^{\prime}]\hookrightarrow[q]\), we define \(\iota^{*}(\mathbf{P}\to\mathbf{C},\omega)\in\left(\Theta_{g,n}\right)_{q^{ \prime}}\) by contracting those edges which are not in the image of \(\iota\), and taking the unique induced edge-labelling which preserves the order of the remaining edges. Proof.: Since the action of \(S_{2g+2}\) on \(\widetilde{\mathcal{H}}_{g,n}\subset\overline{\mathcal{M}}_{0,S}^{2/2\mathbb{ Z}}(\rho)\) preserves \(\widetilde{\mathcal{H}}_{g,n}\), we have that \[\mathcal{H}_{g,n}\cong[\widetilde{\mathcal{H}}_{g,n}/S_{2g+2}]\subset[ \overline{\mathcal{M}}_{0,S}^{2/2\mathbb{Z}}(\rho)/S_{2g+2}]\] is a normal crossings compactification with boundary complex equal to \[\Delta(\widetilde{\mathcal{H}}_{g,n}\subset\overline{\mathcal{M}}_{0,S}^{2/2 \mathbb{Z}}(\rho))/S_{2g+2}=\tilde{\Theta}_{g,n}/S_{2g+2},\] and the described symmetric \(\Delta\)-complex is precisely the quotient of \(\tilde{\Theta}_{g,n}\) by \(S_{2g+2}\). As a direct result of Proposition 4.4, we have the following corollary identifying the weight zero compactly supported cohomology of \(\mathcal{H}_{g,n}\) with the reduced cohomology of \(\Theta_{g,n}\): see [1, Theorem 5.8]. **Corollary 4.5**.: For each \(i\), there are canonical \(S_{n}\)-equivariant isomorphisms \[W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\cong\tilde{H}^{i-1}(\Theta_{g,n}; \mathbb{Q})\cong\tilde{H}_{i-1}(\Theta_{g,n};\mathbb{Q})^{\vee},\] where \(\tilde{H}^{*}\) and \(\tilde{H}_{*}\) denote reduced cohomology and homology, respectively. We now establish some conventions for working with objects of the category \(\Gamma^{\mathcal{H}}_{g,n}\). **Definition 4.6**.: Given an object \(\mathbf{P}\to\mathbf{C}\) of \(\Gamma^{\mathcal{H}}_{g,n}\), we define the _weight_ of a vertex \(v\in V(\mathbf{C})\) to be the number of unmarked legs based at \(v\). The weight in this sense should not be confused with the notion of vertex weights corresponding to genera of irreducible curves. As a sanity check: the total weight of the vertices of \(C\) is \(2g+2\). When depicting objects of \(\Gamma^{\mathcal{H}}_{g,n}\), we adopt the following conventions. Instead of drawing the unmarked legs of \(\mathbf{C}\), we will label each vertex of \(\mathbf{C}\) with its weight. To avoid confusion with the genera of vertices in the source graph, we will depict the weight of a vertex in \(\mathbf{C}\) with the color grey, and genera of vertices with blue. Since each unmarked leg of \(C\) has a unique preimage in \(P\), we will not draw those legs of \(P\). When a leg of \(C\) has two preimages in \(P\), so only one is marked, we will suppress the other leg. See Figure 4 for the images of the \(\Gamma^{\mathbb{Z}/2\mathbb{Z},*}_{0,S}(\rho)\) objects from Figure 3 under the functor to \(\Gamma^{\mathcal{H}}_{g,n}\). See Figure 5 for a complete list of isomorphism classes of \(\Gamma^{\mathcal{H}}_{g,n}\)-objects when \(g=2\) and \(n=0\). **Remark 4.7**.: We remark on the case \(n=0\). 
In this case, the symmetric \(\Delta\)-complex \(\Theta_{g,0}\) is isomorphic to the quotient of the dual complex \[\Delta_{0,2g+2}:=\Delta\left(\mathcal{M}_{0,2g+2}\subset\overline{\mathcal{M}} _{0,2g+2}\right) \tag{10}\] by the \(S_{2g+2}\)-action permuting the marked points. The dual complex (10) is the moduli space of \((2g+2)\)-marked tropical curves of genus zero and volume one [1], also known as the space of phylogenetic trees [1, 1, 1]. The identification \[\Theta_{g,0}=\Delta_{0,2g+2}/S_{2g+2}\] can be seen directly from our description of the category \(\Gamma^{\mathcal{H}}_{g}\), and holds despite the fact that the morphism \[[\overline{\mathcal{M}}_{0,2g+2}^{\mathbb{Z}/2\mathbb{Z}}(\rho)/S_{2g+2}]\to[ \overline{\mathcal{M}}_{0,2g+2}/S_{2g+2}]\] is not an isomorphism or even a \(\mathbb{Z}/2\mathbb{Z}\)-gerbe, due to the possible presence of extra automorphisms, more than \(\mathbb{Z}/2\mathbb{Z}\), in the source curves of \(\mathbb{Z}/2\mathbb{Z}\)-admissible covers. ## 5. Acyclic subcomplexes of \(\Theta_{g,n}\) In this section we will study the cellular chain complex of \(\Theta_{g,n}\), establishing Theorem 5.1 below, which states that several natural subcomplexes are acyclic. This will allow us to prove Proposition B later in this section. The acyclicity results will be used in Section 6 to obtain Theorem A. **Theorem 5.1**.: Fix \(g\geq 2\) and \(n\geq 0\). Then the following subcomplexes of \(\Theta_{g,n}\) have vanishing reduced rational homology: 1. the _repeated marking locus_\(\Theta_{g,n}^{\operatorname{rep}}\), namely the subcomplex determined by those \(\Gamma_{g,n}^{\mathcal{H}}\)-objects \(\mathbf{P}\to\mathbf{C}\) such that there exists \(v\in V(\mathbf{P})\) supporting at least two markings from \(\{1,\ldots,n\}\); 2. the _weight_\(3\)_locus_\(\Theta_{g,n}^{\geq 3}\), determined by those \(\Gamma_{g,n}^{\mathcal{H}}\)-objects \(\mathbf{P}\to\mathbf{C}\) such that \(\mathbf{C}\) has a vertex of weight at least \(3\) (Definition 4.6); and 3. the intersection \(\Theta_{g,n}^{\operatorname{rep}}\cap\Theta_{g,n}^{\geq 3}\). **Remark 5.2**.: There are stronger statements that are also true, namely that the three subspaces of the space \(\Theta_{g,n}\) corresponding to (1), (2), and (3) are in fact contractible. It is possible to convert the proofs below, of vanishing reduced rational homology, to proofs of contractibility, using the _vertex property_ technique of [12, SS4]. ### The cellular chain complex of \(\Theta_{g,n}\) Following [12, SS3], the reduced rational homology of \(\Theta_{g,n}\) is computed by the graph complex \(\mathcal{C}_{*}^{(g,n)}\) described as follows. In degree \(p\), \(\mathcal{C}_{p}^{(g,n)}\) is spanned by pairs \((\mathbf{P}\to\mathbf{C},\omega)\) where \(\mathbf{P}\to\mathbf{C}\) is an object of \(\Gamma_{g,n}^{\mathcal{H}}\), and \(\omega\colon[p]\to E(\mathbf{C})\) is a bijective edge-labelling. These pairs are subject to the relation \((\mathbf{P}\to\mathbf{C},\omega)=\operatorname{sgn}(\rho)(\mathbf{P}\to \mathbf{C},\omega\circ\rho)\) whenever \(\rho\in\mathfrak{S}_{p+1}=\operatorname{Aut}([p])\). The differential \(\partial\colon\mathcal{C}_{p}^{(g,n)}\to\mathcal{C}_{p-1}^{(g,n)}\) is given by the signed sum of edge contractions: \[\partial(\mathbf{P}\to\mathbf{C},\omega)=\sum_{i\in[p]}(-1)^{i}(\delta^{i})^{ *}(\mathbf{P}\to\mathbf{C},\omega),\] where \(\delta^{i}\colon[p-1]\to[p]\) is the unique order-preserving injection which misses \(i\). 
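To unpack these conventions, here is a small illustrative example, added for exposition and following the contraction rule of Proposition 4.4. Let \((\mathbf{P}\to\mathbf{C},\omega)\) be a generator in degree \(p=1\), so that \(\mathbf{C}\) has exactly two edges \(e=\omega(0)\) and \(e^{\prime}=\omega(1)\). The pullback \((\delta^{i})^{*}\) contracts the edge labelled \(i\), together with its preimages in \(\mathbf{P}\), and keeps the induced labelling on the remaining edge. Writing \((\mathbf{P}\to\mathbf{C})/e\) for the cover obtained by contracting \(e\) and its preimages, we get \[\partial(\mathbf{P}\to\mathbf{C},\omega)=(\mathbf{P}\to\mathbf{C})/e-(\mathbf{P}\to\mathbf{C})/e^{\prime}.\] Note also that the sign relation forces a generator to vanish as soon as \(\mathbf{P}\to\mathbf{C}\) admits an automorphism inducing an odd permutation of \(E(\mathbf{C})\); this vanishing is used in the proof of part (2) of Proposition B below.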
To prove Theorem 5.1, we will show that the corresponding sub-chain complexes of \(\mathcal{C}_{*}^{(g,n)}\) are acyclic. Denote by \(\mathcal{R}_{*}^{(g,n)}\) the sub-chain complex of \(\mathcal{C}_{*}^{(g,n)}\) spanned by those pairs \((\mathbf{P}\to\mathbf{C},\omega)\) such that \(\mathbf{P}\) has a vertex \(v\) that has at least two markings from \(\{1,\ldots,n\}\); this is the chain complex which computes the reduced rational homology of \(\Theta_{g,n}^{\mathrm{rep}}\). Denote by \(\mathcal{Q}_{*}^{(g,n)}\) the augmented cellular chain complex of \(\Theta_{g,n}^{\geq 3}\): this is the sub-chain complex of \(\mathcal{C}_{*}^{(g,n)}\) spanned by those pairs \((\mathbf{P}\to\mathbf{C},\omega)\) where \(\mathbf{C}\) has at least one vertex \(v\) with weight at least \(3\). We will show that the chain complexes \(\mathcal{R}_{*}^{(g,n)}\) and \(\mathcal{Q}_{*}^{(g,n)}\cap\mathcal{R}_{*}^{(g,n)}\) are acyclic for all \(g\geq 2\) and all \(n\geq 2\) (Theorem 5.5), that the chain complex \(\mathcal{Q}_{*}^{(g,n)}\) is acyclic for all \(g\geq 2\) and all \(n\geq 0\) (Theorem 5.9), and that the chain complex \(\mathcal{C}_{*}^{(g,n)}\) is acyclic for all \(g\geq 2\) and \(n\leq 1\) (Theorem 5.12). Thus, Theorem 5.5 and Theorem 5.9 prove Theorem 5.1, and Theorem 5.12 gives part (1) of Proposition B. The proofs of these theorems are informed by work of Chan-Galatius-Payne on contractibility criteria for symmetric \(\Delta\)-complexes [10], as well as work of Conant-Gerlits-Vogtmann [10] on the acyclicity of the subcomplex of Kontsevich's graph complex spanned by graphs with cut vertices.

### The homology of \(\Theta_{g,n}^{\mathrm{rep}}\)

It will be useful to isolate specific types of edges of covers with repeated markings. **Definition 5.3**.: For a \(\Gamma_{g,n}^{\mathcal{H}}\)-object \(\mathbf{P}\to\mathbf{C}\) with repeated markings, we say an edge \(e\in E(\mathbf{C})\) is a _supporting edge_, with support equal to \(S\subseteq[n]\), if, upon contracting all edges of \(\mathbf{C}\) which are not equal to \(e\), as well as their preimages in \(\mathbf{P}\), we obtain the cover \(\mathbf{B}_{S}\to\mathbf{E}_{S}\) depicted in Figure 6. If \(|S|=i\), we will call \(e\) an _\(i\)-supporting edge_. **Definition 5.4**.: Given a \(\Gamma_{g,n}^{\mathcal{H}}\)-object \(\mathbf{P}\to\mathbf{C}\), we define the _supporting edge retraction_ of \(\mathbf{P}\to\mathbf{C}\) to be the cover obtained by contracting all supporting edges in \(\mathbf{C}\) and their preimages in \(\mathbf{P}\). **Theorem 5.5**.: For all \(g\geq 2\) and \(n\geq 2\), the chain complexes \(\mathcal{R}_{*}^{(g,n)}\) and \(\mathcal{R}_{*}^{(g,n)}\cap\mathcal{Q}_{*}^{(g,n)}\) are acyclic. Proof.: We will prove the theorem only for \(\mathcal{R}_{*}^{(g,n)}\), as the same argument works for \(\mathcal{R}_{*}^{(g,n)}\cap\mathcal{Q}_{*}^{(g,n)}\). For ease of notation, fix \(g,n\geq 2\) and put \[\mathcal{R}_{*}:=\mathcal{R}_{*}^{(g,n)}.\] First, filter \(\mathcal{R}_{*}\) as follows: let \[\mathcal{R}_{*}^{\geq i}\hookrightarrow\mathcal{R}_{*}\] be the subcomplex generated by covers which have a \(k\)-supporting edge for some \(k\geq i\). More precisely, we mean that \(\mathcal{R}_{*}^{\geq i}\) is spanned by covers obtained by edge-contraction from covers with supporting edges of this type. We apply this definition even when \(i=n+1\), in which case \(\mathcal{R}_{*}^{\geq n+1}=0\).
Then we have a filtration \[0=\mathcal{R}^{\geq n+1}\hookrightarrow\mathcal{R}_{*}^{\geq n}\hookrightarrow \cdots\hookrightarrow\mathcal{R}_{*}^{\geq 2}=\mathcal{R}_{*}.\] Passing to the associated spectral sequence, it suffices to show that for each \(i=2,\ldots,n\), the successive quotient chain complexes \[\mathcal{R}_{*}^{i}:=\mathcal{R}_{*}^{\geq i}/\mathcal{R}_{*}^{\geq i+1}\] are acyclic. These quotient chain complexes are spanned by covers with \(i\)-supporting edges and their edge-contractions, but do not include any covers with \(k\)-supporting edges or their edge contractions for any \(k>i\). Now we filter \(\mathcal{R}_{*}^{i}\). Define \[F_{p}\mathcal{R}_{*}^{i}\hookrightarrow\mathcal{R}_{*}^{i}\] to be the sub-chain complex spanned by graphs with at most \(p\) non-supporting edges. This is an ascending filtration \[0=F_{-1}\mathcal{R}_{*}^{i}\hookrightarrow F_{0}\mathcal{R}_{*}^{i}\hookrightarrow \cdots\hookrightarrow\mathcal{R}_{*}^{i}\] and again by considering the associated spectral sequence, it suffices to show that successive quotients \[G_{p}\mathcal{R}_{*}^{i}:=F_{p}\mathcal{R}_{*}^{i}/F_{p-1}\mathcal{R}_{*}^{i}\] are acyclic, in order to conclude that \(\mathcal{R}_{*}^{i}\) and hence \(\mathcal{R}\) is acyclic. For fixed \(i\) and \(p\), let \(A_{i,p}\) denote the set of isomorphism classes of \(\Gamma_{g,n}^{\mathcal{H}}\)-objects \(\mathbf{P}\to\mathbf{C}\) where \(|E(\mathbf{C})|=p\) and which (1) do not have any supporting edges, (2) admit a contraction from a cover with an \(i\)-supporting edge, and (3) do not admit a contraction from any covers with \(k\)-supporting edges for \(k>i\). Then we have a direct sum decomposition \[G_{p}\mathcal{R}_{*}^{i}=\bigoplus_{\mathbf{P}\to\mathbf{C}\in A_{i,p}} \mathcal{L}_{*}^{\mathbf{P}\to\mathbf{C}},\] where \(\mathcal{L}_{*}^{\mathbf{P}\to\mathbf{C}}\) is the sub-chain complex consisting of those covers whose supporting edge retraction is equal to \(\mathbf{P}\to\mathbf{C}\). This direct sum decomposition holds because the differential on \(G_{p}\mathcal{R}_{*}^{i}\) is given by a signed sum of supporting edge contractions, and hence preserves the supporting edge retraction of a given cover. Next, given \(\mathbf{P}\to\mathbf{C}\in A_{i,p}\), we have a tensor product decomposition \[\mathcal{L}_{*}^{\mathbf{P}\to\mathbf{C}}\cong\left(\bigotimes_{v\in V_{i}^{ \mathrm{rep}}(\mathbf{P})}(\mathbb{Q}\xrightarrow{\sim}\mathbb{Q})\right)[1-p],\] where \(V_{i}^{\mathrm{rep}}(\mathbf{P})\) denotes the set of vertices of \(\mathbf{P}\) which contain exactly \(i\) markings, and the first copy of \(\mathbb{Q}\) is in degree \(1\). This tensor product decomposition holds because a generator of \(\mathcal{L}_{*}^{\mathbf{P}\to\mathbf{C}}\) is determined by a choice of subset of those vertices of \(\mathbf{P}\) which contain \(i\) markings: the corresponding generator is determined by expanding a single \(i\)-supporting edge from the image of each chosen vertex in \(\mathbf{C}\). The degree shift is required to account for the \(p\) edges of \(\mathbf{P}\to\mathbf{C}\). Altogether, this shows that \(\mathcal{L}_{*}^{\mathbf{P}\to\mathbf{C}}\) is a tensor product of acyclic chain complexes, so \(\mathcal{L}_{*}^{\mathbf{P}\to\mathbf{C}}\) is itself acyclic, and the proof is complete. ### The homology of \(\Theta_{g,n}^{\geq 3}\) We will now show that the chain complex \(\mathcal{Q}_{*}^{(g,n)}\) is acyclic. It will again be convenient to name particular types of edges. 
**Definition 5.6**.: Suppose \(\mathbf{P}\to\mathbf{C}\) is an object of \(\Gamma_{g,n}^{\mathcal{H}}\), and that \(\mathbf{C}\) has a vertex of weight at least \(3\). 1. We say \(e\in E(\mathbf{C})\) is a _\(3\)-end_ if upon contracting all edges in \(\mathbf{C}\) except for \(e\), and their preimages in \(\mathbf{P}\), we obtain the cover \(\mathbf{D}\to\mathbf{F}\) in Figure 7. 2. We say a cover \(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime}\) is a _\(3\)-end expansion_ of \(\mathbf{P}\to\mathbf{C}\) if \(\mathbf{P}\to\mathbf{C}\) is obtained from \(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime}\) by contracting a sequence of \(3\)-ends. It is straightforward to see that for any cover \(\mathbf{P}\to\mathbf{C}\), the poset of \(3\)-end expansions of \(\mathbf{P}\to\mathbf{C}\) has a maximal element, as in the following lemma. We omit the proof: see Figure 8 for an example of how this expansion is constructed. **Lemma 5.7**.: Let \(\mathbf{P}\to\mathbf{C}\) be an object of \(\Gamma_{g,n}^{\mathcal{H}}\). Then the poset of \(3\)-end expansions of \(\mathbf{P}\to\mathbf{C}\) has a unique maximal element \(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime}\), and this expansion is canonical in the sense that any automorphism of \(\mathbf{P}\to\mathbf{C}\) lifts to an automorphism of \(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime}\). Given a \(\Gamma_{g,n}^{\mathcal{H}}\)-object \(\mathbf{P}\to\mathbf{C}\), let \(A(\mathbf{P}\to\mathbf{C})\) be the set of isomorphism classes of covers obtained from \(\mathbf{P}\to\mathbf{C}\) by contracting \(3\)-ends. We define a chain complex \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\) as follows: the vector space \(\mathcal{Q}_{p}^{\mathbf{P}\to\mathbf{C}}\) is spanned by pairs \((\mathbf{H}\to\mathbf{K},\omega)\), where \(\mathbf{H}\to\mathbf{K}\) is an element of \(A(\mathbf{P}\to\mathbf{C})\) with \(|E(\mathbf{K})|=p+1\), and \(\omega\colon[p]\to E(\mathbf{K})\) is an edge-labelling. These generators are subject to the usual relation \[(\mathbf{H}\to\mathbf{K},\omega\circ\rho)=\operatorname{sgn}(\rho)(\mathbf{H}\to\mathbf{K},\omega)\] for \(\rho\in\operatorname{Aut}([p])\). The differential on \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\) is given by the signed sum of \(3\)-end contractions; we set it equal to \(0\) on any generators which do not have any \(3\)-ends. **Proposition 5.8**.: Suppose \(\mathbf{P}\to\mathbf{C}\) has a \(3\)-end and is maximal with respect to expanding \(3\)-ends. Then \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\) is acyclic. Proof.: First consider the case where \(\mathbf{C}\) has no automorphisms. This implies that all \(3\)-end contractions of \(\mathbf{C}\) have no automorphisms, since an automorphism of a tree must lift to an automorphism of its maximal \(3\)-end expansion. Let \(q+1\) be the number of distinct \(3\)-ends of \(\mathbf{C}\). We can understand \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\) as a shift of the augmented cellular chain complex of the standard \(q\)-simplex \(\sigma^{q}\), viewed as the space parameterizing assignments of nonnegative lengths to the \(q+1\) distinct \(3\)-ends of \(\mathbf{C}\), such that the lengths sum to one. So in the automorphism-free case, \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\) is acyclic. For the general case, when \(\mathbf{C}\) and its contractions may have automorphisms, fix a labelling of the edges of \(\mathbf{C}\), to get a decorated tree \(\mathbf{C}^{\dagger}\). This induces a labelling of the edges of each contraction of \(\mathbf{C}\).
Let \(A(\mathbf{C}^{\dagger})\) be the set consisting of \(\mathbf{C}^{\dagger}\) and all of its contractions. We can make a chain complex \(\mathcal{Q}_{*}^{\mathbf{C},\dagger}\) which in degree \(p\) is spanned by pairs \([\mathbf{K},\omega]\) where \(\mathbf{K}\) is an element of \(A(\mathbf{C}^{\dagger})\) with \(|E(\mathbf{K})|=p+1\), and \(\omega\colon[p]\to E(\mathbf{K})\) is a bijection, subject to the usual relations under the action of \(\operatorname{Aut}([p])\). Observe that there is a canonical action of \(\operatorname{Aut}(\mathbf{C})\) on the chain complex \(\mathcal{Q}_{*}^{\mathbf{C},\dagger}\), and \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\) is identified with the \(\operatorname{Aut}(\mathbf{C})\)-coinvariants of the complex \(\mathcal{Q}_{*}^{\mathbf{C},\dagger}\), by the second part of Lemma 5.7. Since \(\operatorname{Aut}(\mathbf{C})\) is finite, it has no higher group homology with rational coefficients, so taking \(\operatorname{Aut}(\mathbf{C})\)-coinvariants commutes with taking homology. Moreover, \(\mathcal{Q}_{*}^{\mathbf{C},\dagger}\) is acyclic by the first part of the proof. We conclude that \[H_{*}((\mathcal{Q}_{*}^{\mathbf{C},\dagger})_{\operatorname{Aut}(\mathbf{C})})=(H_{*}(\mathcal{Q}_{*}^{\mathbf{C},\dagger}))_{\operatorname{Aut}(\mathbf{C})}=0,\] as desired. We now prove that \(\mathcal{Q}_{*}^{(g,n)}\) is acyclic. **Theorem 5.9**.: For \(g\geq 2\) and \(n\geq 0\), the chain complex \(\mathcal{Q}_{*}^{(g,n)}\) is acyclic. Proof.: Let \(F_{p}\mathcal{Q}_{*}^{(g,n)}\) denote the subspace spanned by those covers whose target tree has at most \(p\) edges which are not \(3\)-ends. This defines a bounded, increasing filtration of \(\mathcal{Q}_{*}^{(g,n)}\). The \(E^{0}\) page \[E^{0}_{p,q}=F_{p}\mathcal{Q}_{p+q}^{(g,n)}/F_{p-1}\mathcal{Q}_{p+q}^{(g,n)}\] of the associated spectral sequence is spanned by covers whose target tree has exactly \(p\) edges which are not \(3\)-ends. The differential \(\partial_{0}\colon E^{0}_{p,q}\to E^{0}_{p,q-1}\) is given by a signed sum of \(3\)-end contractions. Therefore, by Lemma 5.7, the \(p\)th row of the \(E^{0}\) page breaks up into a direct sum of chain complexes of the form \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\), where \(\mathbf{C}\) has at least one \(3\)-end, and the tree obtained from \(\mathbf{C}\) by contracting all \(3\)-ends has \(p\) edges. Proposition 5.8 then implies that the \(E^{1}\) page vanishes, which completes the proof.

### Calculations on \(\Theta_{g,n}\) for \(n\leq 2\)

We conclude this section by proving Proposition B. The first part of Proposition B asserts that \(\mathcal{C}_{*}^{(g,n)}\) is acyclic for \(n\leq 1\), and the proof is similar to the proof that \(\mathcal{Q}_{*}^{(g,n)}\) is acyclic. Once again, we isolate particular types of edges: **Definition 5.10**.: Let \(\mathbf{P}\to\mathbf{C}\) be a \(\Gamma_{g,n}^{\mathcal{H}}\)-object. An edge \(e\in E(\mathbf{C})\) is called a _\(2\)-end_ if upon contracting all edges of \(\mathbf{C}\) except for \(e\) and their preimages in \(\mathbf{P}\), we obtain the cover \(\mathbf{J}\to\mathbf{K}\) in Figure 9. The key to the proof of acyclicity of \(\mathcal{C}^{(g,n)}\) when \(n\leq 1\) is the following lemma. **Lemma 5.11**.: Let \(\mathbf{P}\to\mathbf{C}\) be an object of \(\Gamma_{g,n}^{\mathcal{H}}\) for \(n\leq 1\). Then the poset of expansions of \(\mathbf{P}\to\mathbf{C}\) by \(2\)-ends has a unique maximal element \(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime}\). Moreover, this expansion is canonical in the sense that any automorphism of \(\mathbf{P}\to\mathbf{C}\) lifts to one of \(\mathbf{P}^{\prime}\to\mathbf{C}^{\prime}\).
Proof.: It is clear how to construct the graph \(\mathbf{C}^{\prime}\): for every vertex of \(\mathbf{C}\) with weight \(d\geq 2\), one expands \(\lfloor d/2\rfloor\) many \(2\)-ends from \(v\), leaving behind a vertex of weight \(d-2\lfloor d/2\rfloor\). This uniquely determines a cover \(P^{\prime}\), but does not determine the marking function on \(P^{\prime}\). If \(n=0\), then there is no marking function, so \(\mathbf{P}^{\prime}\) is determined. For \(n=1\), the only ambiguity arises when \(v\) supports the unique marking, and the preimage of \(v\) in \(\mathbf{C}^{\prime}\) has \(2\) preimages in the graph \(P^{\prime}\), so one has to make a choice as to which fiber to mark. However, since \(n=1\), both choices are equivalent, as they differ by the \(\mathbb{Z}/2\mathbb{Z}\)-action on \(P^{\prime}\). Therefore \(\mathbf{P}^{\prime}\) is also determined when \(n=1\). The statement on lifting of automorphisms is straightforward to check. The lemma fails when \(n>1\), because in general there is no canonical way of distributing the markings supported at \(v\) among the fibers over \(v\) in \(P^{\prime}\). See Figure 10 for an example. Given Lemma 5.11, the proof of the following theorem is completely analogous to the proof of Theorem 5.9; we will only outline the necessary steps. **Theorem 5.12**.: For \(g\geq 2\) and \(n\leq 1\), the chain complex \(\mathcal{C}_{*}^{(g,n)}\) is acyclic. Figure 9. The cover \(\mathbf{J}\to\mathbf{K}\) Proof.: First, define \(B(\mathbf{P}\to\mathbf{C})\) to be the set of isomorphism classes of \(\Gamma_{g,n}^{\mathcal{H}}\)-objects obtained from \(\mathbf{P}\to\mathbf{C}\) by contracting \(2\)-ends. Then, use this to define a chain complex \(\mathcal{G}_{*}^{\mathbf{P}\to\mathbf{C}}\) analogously to \(\mathcal{Q}_{*}^{\mathbf{P}\to\mathbf{C}}\), where the differential is given by a signed sum of \(2\)-end contractions. The proof that \(\mathcal{G}_{*}^{\mathbf{P}\to\mathbf{C}}\) is acyclic, for \(\mathbf{P}\to\mathbf{C}\) maximal with respect to expanding \(2\)-ends, is exactly the same as the proof of Proposition 5.8. Finally, one proves the theorem by filtering \(\mathcal{C}^{(g,n)}\): set \(F_{p}\mathcal{C}^{(g,n)}_{*}\) to be the subspace of \(\mathcal{C}^{(g,n)}_{*}\) spanned by those covers with at most \(p\) edges which are not \(2\)-ends. Then the \(p\)th column of the \(E^{0}\) page of the associated spectral sequence breaks up into a direct sum of complexes of the form \(\mathcal{G}_{*}^{\mathbf{P}\to\mathbf{C}}\) by Lemma 5.11, so the \(E^{1}\) page vanishes, and the result follows. Theorem 5.12 gives part (1) of Proposition B. Part (2) states that \[W_{0}H_{c}^{2g+1}(\mathcal{H}_{g,2};\mathbb{Q})\cong\mathbb{Q},\] and that the corresponding \(S_{2}\)-representation is trivial if \(g\) is even, and given by the sign representation if \(g\) is odd. We prove this now by writing down an explicit cycle in \(\mathcal{G}_{2g}^{(g,2)}\) corresponding to this class. See Figure 11. Proof of Proposition B, part (2).: We have an isomorphism of \(S_{2}\)-representations \[W_{0}H_{c}^{2g+1}(\mathcal{H}_{g,2};\mathbb{Q})\cong\tilde{H}_{2g}(\Theta_{g, 2};\mathbb{Q})^{\vee}\] by Corollary 4.5. We have \[\tilde{H}_{2g}(\Theta_{g,2};\mathbb{Q})=H_{2g}\left(\mathcal{C}^{(g,2)}_{*} \right).\] Observe that \(2g\) is the top homological degree of \(\mathcal{C}^{(g,2)}\): the maximal number of edges of a stable tree with \(2g+4\) legs is \(2g+1\). Therefore, any cycle in \(\mathcal{C}^{(g,2)}_{2g}\) defines a class in homology. 
Any target tree for a cover in \(\mathcal{C}^{(g,2)}_{2g}\) must be trivalent, and to be a nonzero element, it cannot have any automorphisms which act by an odd permutation of the edge set. It is straightforward to conclude that such a tree must be equal to the tree depicted in Figure 11. This tree \(\mathbf{C}\) has two covers, depicted in Figure 11. Therefore \(\dim\mathcal{C}^{(g,2)}_{2g}=2\), where a basis is given by choosing any edge-labelling of the aforementioned tree. One can verify directly that neither one of these basis elements form a cycle on their own, but their difference does. From this we conclude that \(H_{2g}(\mathcal{C}^{2g}_{*})\cong\mathbb{Q}\). To understand the \(S_{2}\)-representation, we note that when \(g\) is even, the transposition in \(S_{2}\) preserves any edge-labelling of the given tree, and when \(g\) is odd, the transposition induces an odd permutation of the edge labels. **Remark 5.13**.: Theorem 5.5 generalizes to other spaces of admissible covers. Fix an integer \(N>0\) and let \(G\) be an abelian group, which we now write additively to be consistent with our notation for \(G=\mathbb{Z}/2\mathbb{Z}\). Let \[\mu\colon\{w_{1},\ldots,w_{N}\}\to G\] be a function such that the image of \(\mu\) generates \(G\), which additionally satisfies \[\sum_{i=1}^{N}\mu(w_{i})=0.\] where \(0\in G\) denotes the identity element. For any integer \(n\geq 0\), we can extend \(\mu\) to a function \[\{1,\ldots,n\}\cup\{w_{1},\ldots,w_{N}\}\to G\] by setting the image of each \(i\in\{1,\ldots,n\}\) to be \(0\); for ease of notation, we will also call this extension \(\mu\). We set the notation \[\overline{\mathcal{M}}^{G}_{0,n+N}(\mu):=\overline{\mathcal{M}}^{G}_{0,\{1, \ldots,n\}\cup\{w_{1},\ldots,w_{N}\}}(\mu)\] and define \(\mathcal{M}^{G}_{0,n+N}(\mu)\) similarly. We now define an intermediate locus \[\mathcal{M}^{G}_{0,n+N}(\mu)\subset\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu) \subset\overline{\mathcal{M}}^{G}_{0,n+N}(\mu)\] in analogy with the space \(\tilde{\mathcal{H}}_{g,n}\) of \(n\)-marked hyperelliptic curves of genus \(g\) together with a labelling of their Weierstrass points, considered in SS4.1. Given a graph-theoretic pointed admissible \(G\)-cover \(\mathbf{P}\to\mathbf{C}\in\operatorname{Ob}(\Gamma^{G}_{0,n+N}(\mu))\), where \(\Gamma^{G}_{0,n+N}(\mu)\) is the category defined in Definition 3.3, we say that \(\mathbf{P}\to\mathbf{C}\) is _forbidden_ if all of the following conditions hold: * \(|E(\mathbf{C})|=1\), * If we erase all of the legs labelled by \(\{1,\ldots,n\}\) from \(\mathbf{C}\), the resulting \(\{w_{1},\ldots,w_{N}\}\)-marked tree is not stable in the sense of Definition 3.1, and * the source graph \(\mathbf{P}\) has no vertices supporting repeated markings among \(\{1,\ldots,n\}\). Each forbidden cover \(\mathbf{P}\to\mathbf{C}\) corresponds uniquely to a boundary divisor of \(\overline{\mathcal{M}}^{G}_{0,n+N}(\mu)\), and we define \(\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu)\) to be the complement in \(\overline{\mathcal{M}}^{G}_{0,n+N}(\mu)\) of those boundary divisors which are not forbidden. 
When \(G=\mathbb{Z}/2\mathbb{Z}=\{0,1\}\), \(N=2g+2\), and \(\mu(w_{i})=1\) for all \(i\), the forbidden divisors are precisely those of type (1) and (2) in Definition 4.1, and we have \[\tilde{\mathcal{M}}^{\mathbb{Z}/2\mathbb{Z}}_{0,n+2g+2}(\mu)\cong\widetilde{ \mathcal{H}}_{g,n}.\] For general \(G\) and \(\mu\), the space \(\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu)\) can be identified with the moduli space of smooth \(N\)-pointed admissible \(G\)-covers of \(\mathbb{P}^{1}\), with monodromy specified by \(\mu\), together with \(n\) distinct marked points on the source curve. This space admits an \(S_{n}\)-action given by permuting the \(n\) marked points on the source, and the isomorphism with \(\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu)\) is \(S_{n}\)-equivariant. The dual complex \(\tilde{\Theta}^{G}_{0,n+N}(\mu)\) of the normal crossings compactification \[\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu)\subset\overline{\mathcal{M}}^{G}_{0,n+N} (\mu)\] is the subcomplex of \(\Delta^{G}_{0,n+N}(\mu)\) on those simplices which have no forbidden vertices. The analogue of Theorem 5.5 holds for \(\tilde{\Theta}^{G}_{0,n+N}(\mu)\): the subcomplex parameterizing graph-theoretic admissible \(G\)-covers \(\mathbf{P}\to\mathbf{C}\) where \(\mathbf{P}\) has a repeated marking is acyclic. Our proof of Theorem 5.5 carries through to this setting, mutatis mutandis. In Remark 6.6 below, we explain how this leads to a generalization of Theorem A for these spaces. ## 6. A graph sum formula for \(\mathsf{h}_{g}\) Recall from the introduction that \[\mathsf{h}_{g}=\sum_{n\geq 0}\sum_{i=0}^{4g-2+2n}(-1)^{i}\operatorname{ch}_{n }W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\in\hat{\Lambda}\] denotes the generating function for the weight zero equivariant Euler characteristics of the moduli spaces \(\mathcal{H}_{g,n}\). In this section we will prove Theorem A, thus establishing our sum-over-graphs formula for \(\mathsf{h}_{g}\). Recall that \(T_{2g+2}\) denotes the set of isomorphism classes trees with \(2g+2\) unlabelled leaves, and each such tree \(C\) has a unique graph-theoretic admissible \(\mathbb{Z}/2\mathbb{Z}\)-cover \(P_{C}\to C\). Let \(T_{2g+2}^{<3}\) denote the subset of \(T_{2g+2}\) consisting of those trees such that no vertex supports more than two leaves, and for a tree \(C\) we write \(E_{C}\) for its set of edges. We restate Theorem A for convenience. **Theorem A**.: We have \[\mathsf{h}_{g}=\sum_{C\in T_{2g+2}^{<3}}\frac{(-1)^{|E_{C}|}}{|\operatorname{ Aut}(P_{C})|}\sum_{\tau\in\operatorname{Aut}(P_{C})}\operatorname{sgn}( \tau|_{E_{C}})\prod_{k\geq 1}(1+p_{k})^{f(P_{C},\tau,k)}\] where \(E_{C}\) is the set of edges of the tree \(C\), \(p_{k}=\sum_{n>0}x_{n}^{k}\in\hat{\Lambda}\) is the \(k\)th power sum symmetric function, and \(k\cdot f(P_{C},\tau,k)\) is the compactly supported Euler characteristic of the set of points in \(P_{C}\) which have orbit of length \(k\), under the action of \(\tau\). We will prove Theorem A through a series of intermediate results. **Lemma 6.1**.: We have \[\mathsf{h}_{g}=-\sum_{n\geq 0}\chi_{c}^{S_{n}}(\Theta_{g,n}\smallsetminus( \Theta_{g,n}^{\operatorname{rep}}\cup\Theta_{g,n}^{\geq 3})),\] where \(\chi_{c}^{S_{n}}(\cdot)\) denotes the \(S_{n}\)-equivariant compactly supported Euler characteristic. 
Proof.: Via the identification \[W_{0}H^{i}_{c}(\mathcal{H}_{g,n};\mathbb{Q})\cong\widetilde{H}_{i-1}(\Theta_{g,n};\mathbb{Q})^{\vee}\] of Corollary 4.5, we can write \[\mathsf{h}_{g} =\sum_{n\geq 0}\sum_{i=0}^{4g-2+2n}(-1)^{i}\operatorname{ch}_{n} \widetilde{H}_{i-1}(\Theta_{g,n};\mathbb{Q})\] \[=\sum_{n\geq 0}-\widetilde{\chi}^{S_{n}}(\Theta_{g,n}),\] where \(\widetilde{\chi}^{S_{n}}(\cdot)\) denotes the \(S_{n}\)-equivariant reduced Euler characteristic. Since \(\Theta_{g,n}\) is connected and compact, and \(S_{n}\) acts trivially on \(H_{0}(\Theta_{g,n};\mathbb{Q})\cong\mathbb{Q}\), we have \[-\sum_{n\geq 0}\widetilde{\chi}^{S_{n}}(\Theta_{g,n})=\sum_{n\geq 0}h_{n}-\sum_{n \geq 0}\chi_{c}^{S_{n}}(\Theta_{g,n}),\] where \(h_{n}\in\Lambda\) is the \(n\)th homogeneous symmetric function, defined as the Frobenius characteristic of the trivial \(S_{n}\)-representation. By the additivity of the compactly supported Euler characteristic under stratification, we can write \[\sum_{n\geq 0}\chi_{c}^{S_{n}}(\Theta_{g,n})=\sum_{n\geq 0}\left(\chi_{c}^{S_{n} }(\Theta_{g,n}\smallsetminus(\Theta_{g,n}^{\mathrm{rep}}\cup\Theta_{g,n}^{\geq 3 }))+\chi_{c}^{S_{n}}(\Theta_{g,n}^{\mathrm{rep}}\cup\Theta_{g,n}^{\geq 3})\right)\] Since the union \(\Theta_{g,n}^{\mathrm{rep}}\cup\Theta_{g,n}^{\geq 3}\) is compact and connected, with vanishing reduced rational homology by Theorem 5.1, and \(S_{n}\) acts trivially on \(H_{0}(\Theta_{g,n}^{\mathrm{rep}}\cup\Theta_{g,n}^{\geq 3};\mathbb{Q})\), we have \[\chi_{c}^{S_{n}}(\Theta_{g,n}^{\mathrm{rep}}\cup\Theta_{g,n}^{\geq 3})=h_{n},\] and the proof is complete. **Lemma 6.2**.: We have \[\mathsf{h}_{g}=-\sum_{C\in T_{2g+2}^{<3}}\sum_{n\geq 0}\chi_{c}^{S_{n}}\left( \left(\operatorname{Conf}_{n}(P_{C})\times(\Delta^{|E_{C}|-1})^{\circ}\right) /\operatorname{Aut}(P_{C})\right).\] Proof.: We can stratify the space \[X_{g,n}:=\Theta_{g,n}\smallsetminus(\Theta_{g,n}^{\mathrm{rep}}\cup\Theta_{g,n}^ {\geq 3})\] by the \(\Gamma_{g}^{\mathcal{H}}\)-object that arises when we forget the markings of the legs. Such an object is uniquely specified by an element \(C\) of \(T_{2g+2}^{<3}\), which determines its covering \(P_{C}\). The stratum corresponding to \(P_{C}\to C\) is \(S_{n}\)-equivariantly homeomorphic to \[\left(\operatorname{Conf}_{n}(P_{C})\times(\Delta^{|E_{C}|-1})^{\circ}\right) /\operatorname{Aut}(P_{C}).\] Above, \((\Delta^{|E_{C}|-1})^{\circ}\) denotes the interior of the standard \(|E_{C}|-1\) simplex \(\Delta^{|E_{C}|-1}\), viewed as the space parameterizing metrics \(\ell\colon E_{C}\to\mathbb{R}_{>0}\) of total length one. The space \(\operatorname{Conf}_{n}(P_{C})\) is the configuration space of \(n\) distinct points on \(P_{C}\), and the action of \(\operatorname{Aut}(P_{C})\) is diagonal: \(\operatorname{Aut}(P_{C})\) naturally acts on \(C\), and hence on \(|E_{C}|\) and \(\left(\Delta^{|E_{C}|-1}\right)^{\circ}\). We now show how to calculate the terms in the sum, following Gorsky's calculation of the \(S_{n}\)-equivariant Euler characteristic of \(\operatorname{Conf}_{n}(X)/G\), where \(X\) is an algebraic variety and \(G\) is a finite subgroup of its automorphism group [10]. **Proposition 6.3**.: Let \(X\) be a finite CW complex, and let \(E\) be a finite set. 
Set \[\Delta^{\circ}=\left\{\ell\colon E\to\mathbb{R}_{>0}\mid\sum_{e\in E}\ell(e)=1\right\}.\] Let \(G\) be a finite group acting on both \(X\) and \(E\), and set \[\mathsf{h}_{X,E,G}:=\sum_{n\geq 0}\chi_{c}^{S_{n}}\left(\left(\operatorname{Conf}_{n}(X)\times\Delta^{\circ}\right)/G\right).\] Then \[\mathsf{h}_{X,E,G}=-\frac{(-1)^{|E|}}{|G|}\sum_{g\in G}\operatorname{sgn}\left(g|_{E}\right)\prod_{k\geq 1}(1+p_{k})^{\chi_{c}(X_{k}(g))/k},\] where \(X_{k}(g)\) denotes the set of points of \(X\) which have orbit of length \(k\) under the action of \(g\). Before proving Proposition 6.3, we need two intermediate lemmas. **Lemma 6.4**.: Suppose that \(X\) is any finite CW complex. Then \[f(t):=\sum_{n\geq 0}\chi_{c}(\operatorname{Conf}_{n}(X))\frac{t^{n}}{n!}=(1+t)^{\chi_{c}(X)}.\] Proof.: We have the identity \[\chi_{c}(X^{n})=\sum_{k=1}^{n}S(n,k)\chi_{c}(\operatorname{Conf}_{k}(X)),\] where \(S(n,k)\), the Stirling number of the second kind, counts the number of partitions of a set of \(n\) elements into \(k\) nonempty blocks. It follows that \[g(t):=\sum_{n\geq 0}\chi_{c}(X^{n})\frac{t^{n}}{n!}=e^{\chi_{c}(X)t}\] is the _Stirling transform_ of \(f\), so that \(f(t)=g(\log(1+t))=(1+t)^{\chi_{c}(X)}\), as claimed. **Lemma 6.5**.: For any group \(H\) acting on a space \(Y\), denote by \[[Y]^{h}\] the set of fixed points of \(h\in H\) acting on \(Y\). Then, for \(X\), \(E\), and \(G\) as above, and \(\sigma\in G\), we have \[\chi_{c}\left(\left[\left(\operatorname{Conf}_{n}(X)\times\Delta^{\circ}\right)/G\right]^{\sigma}\right)=-\frac{(-1)^{|E|}}{|G|}\sum_{g\in G}\operatorname{sgn}(g|_{E})\cdot\chi_{c}\left(\left[\operatorname{Conf}_{n}(X)\right]^{g^{-1}\sigma}\right).\] Proof.: Define \[S=\{(g,\ell,y)\in G\times\Delta^{\circ}\times\operatorname{Conf}_{n}(X)\mid g\cdot(\ell,y)=\sigma\cdot(\ell,y)\}.\] Then we have a map \[S\to\left[\left(\operatorname{Conf}_{n}(X)\times\Delta^{\circ}\right)/G\right]^{\sigma},\] which takes \((g,\ell,y)\) to \((\ell,y)\). The fibers of this map are all nonempty and have cardinality equal to \(|G|\), so \[\chi_{c}\left(\left[\left(\operatorname{Conf}_{n}(X)\times\Delta^{\circ}\right)/G\right]^{\sigma}\right)=\frac{1}{|G|}\chi_{c}(S).\] On the other hand, the projection \(S\to G\) has fiber over \(g\in G\) isomorphic to \[[\Delta^{\circ}]^{g}\times\left[\operatorname{Conf}_{n}(X)\right]^{g^{-1}\sigma}.\] Therefore we have \[\chi_{c}\left(\left[\left(\operatorname{Conf}_{n}(X)\times\Delta^{\circ}\right)/G\right]^{\sigma}\right)=\frac{1}{|G|}\sum_{g\in G}\chi_{c}([\Delta^{\circ}]^{g})\cdot\chi_{c}\left(\left[\operatorname{Conf}_{n}(X)\right]^{g^{-1}\sigma}\right).\] The proof is finished upon noting that \([\Delta^{\circ}]^{g}\) is again an open simplex, of dimension one less than the number of \(\langle g\rangle\)-orbits on \(E\), so that \(\chi_{c}([\Delta^{\circ}]^{g})=-(-1)^{|E|}\operatorname{sgn}(g|_{E})\). We can now prove Proposition 6.3.
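Before turning to that proof, a minimal sanity check of Lemma 6.4 may be helpful (an illustration added here, not part of the original argument): in the toy case where \(X\) is a finite discrete set of \(N\) points, \(\chi_{c}(X)=N\), the space \(\operatorname{Conf}_{n}(X)\) is the set of ordered \(n\)-tuples of distinct points, and the lemma reduces to the binomial theorem. The short script below, written in Python rather than the Mathematica used in Appendix A, confirms this special case.

```python
# Sanity check of Lemma 6.4 for X a finite discrete set of N points:
# chi_c(Conf_n(X)) = N!/(N-n)!, and the lemma predicts that the exponential
# generating function sum_n chi_c(Conf_n(X)) t^n / n! equals (1 + t)^N.
from math import comb, factorial

N = 5
for n in range(N + 1):
    chi_conf_n = factorial(N) // factorial(N - n)   # ordered n-tuples of distinct points
    egf_coeff = chi_conf_n // factorial(n)          # coefficient of t^n in the EGF
    assert egf_coeff == comb(N, n)                  # coefficient of t^n in (1 + t)^N
print("Lemma 6.4 holds for X a discrete set of", N, "points")
```

Of course the lemma is much stronger for general finite CW complexes; this check only illustrates the combinatorial shape of the Stirling-transform argument.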
Proof of Proposition 6.3.: We have \[\mathfrak{h}_{X,E,G} =\sum_{n\geq 0}\frac{1}{n!}\sum_{\sigma\in S_{n}}\sum_{i\geq 0}(-1 )^{i}\mathrm{Tr}\left(\sigma|_{H^{i}_{c}((\operatorname{Conf}_{n}(X)\times \Delta^{\circ})/G;\mathbb{Q})}\right)p_{1}^{k_{1}(\sigma)}\cdots p_{n}^{k_{n} (\sigma)}\] \[=\sum_{n\geq 0}\frac{1}{n!}\sum_{\sigma\in S_{n}}\chi_{c}\left( \left[\left(\operatorname{Conf}_{n}(X)\times\Delta^{\circ}\right)/G\right]^{ \sigma}\right)p_{1}^{k_{1}(\sigma)}\cdots p_{n}^{k_{n}(\sigma)}.\] The second equality follows from the Lefschetz fixed-point theorem, applied to the one-point compactification of \(\left(\mathrm{Conf}_{n}(X)\times\Delta^{\circ}\right)/G\), where we set \(k_{i}(\sigma)\) to be the number of cycles of length \(i\) in \(\sigma\). Now using Lemma 6.5, we have \[\mathsf{h}_{X,E,G}=-\sum_{n\geq 0}\frac{1}{n!}\sum_{\sigma\in S_{n}}\frac{(-1)^{ |E|}}{|G|}\sum_{g\in G}\mathrm{sgn}(g|_{E})\cdot\chi_{c}\left(\left[\mathrm{ Conf}_{n}(X)\right]^{g^{-1}\sigma}\right)p_{1}^{k_{1}(\sigma)}\cdots p_{n}^{k_{n}( \sigma)}.\] Now the proof follows that of Gorsky [1, Theorem 2.5]: if we set \[X_{k}(g):=\{x\in X\mid x\text{ has orbit of size $k$ under $g$}\},\] and \[\tilde{X}_{k}(g)=X_{k}(g)/(g),\] then for fixed \(\ell_{1},\ldots,\ell_{n}\) such that \(\sum_{i=1}^{n}i\ell_{i}=n\), we have a map \[\prod_{\begin{subarray}{c}\sigma\in S_{n}\\ k_{i}(\sigma)=\ell_{i}\forall i\end{subarray}}\left[\mathrm{Conf}_{n}(X)\right] ^{g^{-1}\sigma}\to\prod_{i=1}^{n}\mathrm{Conf}_{\ell_{i}}(\tilde{X}_{i}(g))/S _{\ell_{i}}\] which is \(n!\)-to-\(1\), so that \[\frac{1}{n!}\sum_{\begin{subarray}{c}\sigma\in S_{n}\\ k_{i}(\sigma)=\ell_{i}\forall i\end{subarray}}\chi_{c}\left(\left[\mathrm{ Conf}_{n}(X)\right]^{g^{-1}\sigma}\right)=\prod_{i=1}^{n}\frac{\chi_{c}( \mathrm{Conf}_{\ell_{i}}(\tilde{X}_{i}(g)))}{\ell_{i}!}.\] Now the proposition follows from Lemma 6.4, upon summing over all possible tuples \((\ell_{1},\ldots,\ell_{n})\). Now Theorem A is proved by combining Lemma 6.2 with Proposition 6.3. **Remark 6.6**.: As explained in Remark 5.13, the repeated marking locus in the dual complex \(\tilde{\Theta}^{G}_{0,n+N}(\mu)\) of the inclusion \[\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu)\subset\overline{\mathcal{M}}^{G}_{0,n+N} (\mu)\] is also acyclic, and \(\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu)\) is naturally identified with the moduli space of smooth \(N\)-pointed admissible covers of \(\mathbb{P}^{1}\) with \(\mu\)-specified monodromy, together with \(n\) distinct marked points on the source curve. By the acyclicity of the repeated marking locus, we can write a graph sum formula for the generating function encoding the \(S_{n}\)-equivariant weight zero compactly supported Euler characteristics of these moduli spaces. Define \[\mathsf{h}^{G}_{N}(\mu)=\sum_{n\geq 0}\sum_{i=0}^{2N+2n-6}(-1)^{i}\operatorname{ ch}_{n}(W_{0}H^{i}_{c}(\tilde{\mathcal{M}}^{G}_{0,n+N}(\mu);\mathbb{Q})).\] By removing the repeated marking locus from the dual complex and emulating the techniques of this section, we obtain the following theorem. 
**Theorem D**.: We have \[\mathsf{h}^{G}_{N}(\mu)=\sum_{\mathbf{P}\to\mathbf{C}\in\mathrm{Ob}(\Gamma^{G }_{0,N}(\mu))}\frac{(-1)^{|E_{\mathbf{C}}|}}{|\operatorname{Aut}(\mathbf{P}) |}\sum_{\tau\in\operatorname{Aut}(\mathbf{P})}\mathrm{sgn}(\tau|_{E_{\mathbf{ C}}})\prod_{k\geq 1}(1+p_{k})^{f(\mathbf{P},\tau,k)}\] where \(E_{\mathbf{C}}\) is the set of edges of the tree \(\mathbf{C}\), \(p_{k}=\sum_{n>0}x_{n}^{k}\in\hat{\Lambda}\) is the \(k\)th power sum symmetric function, and \(k\cdot f(\mathbf{P},\tau,k)\) is given by the compactly supported Euler characteristic of the set of points in \(\mathbf{P}\) which have orbit of length \(k\), under the action of \(\tau\). The first sum is taken over isomorphism classes of objects in \(\Gamma^{G}_{0,N}(\mu)\), which is the category defined in Definition 3.3. Taking \(G=\mathbb{Z}/2\mathbb{Z}\), \(N=2g+2\), and \(\mu\colon\{w_{1},\ldots,w_{N}\}\to\mathbb{Z}/2\mathbb{Z}\) to be the constant function \(1\) in Theorem D, we obtain the generating function for the \(S_{n}\)-equivariant weight zero compactly supported Euler characteristics of the moduli spaces \(\hat{\mathcal{H}}_{g,n}\) of \(n\)-pointed hyperelliptic curves of genus \(g\), together with labellings of their Weierstrass points. ## Appendix A Calculations for \(g\leq 7\) In this appendix we present the computational data obtained by implementing Theorem A on a computer. This was implemented in _Mathematica_ using the package IGraph/M [1]. The code for these computations is available at [1]. We compute \(\mathsf{h}_{g}\) explicitly for \(2\leq g\leq 7\): see Table 1. For scale, \(\mathsf{h}_{5}\) is computed as a sum over \(96\) graphs and takes \(8\) minutes to compute on a home laptop, while \(\mathsf{h}_{7}\) is computed as a sum over \(2789\) graphs and takes just under \(3\) days to compute on a home laptop. We extract from this data exponential generating functions for the numerical weight zero compactly supported Euler characteristic by setting \(P_{1}\) to \(1+t\) and all other \(P_{i}\) to \(1\), see Table 2. We display these Euler characteristics for \(0\leq n\leq 10\) in Table 3. 
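As a concrete illustration of this specialization (a hedged check written in Python/sympy rather than the authors' Mathematica code, and assuming the reading \(P_{k}=1+p_{k}\) suggested by the substitution just described), the \(g=2\) entry of Table 1 below specializes exactly to the \(g=2\) generating function of Table 2:

```python
# Substitute P_1 -> 1 + t and P_k -> 1 (k >= 2) into the g = 2 entry of Table 1
# and compare with the g = 2 exponential generating function of Table 2.
import sympy as sp

t, P1, P2, P3, P6 = sp.symbols('t P1 P2 P3 P6')

h2 = sp.Rational(1, 12) * (-P1**3 / P2**2 - 2 * P1**2 / P3
                           + 6 * P1 / P2 - 2 * P2 * P3 / P6 - 1 / P1)  # Table 1, g = 2
specialized = h2.subs({P1: 1 + t, P2: 1, P3: 1, P6: 1})
table2_g2 = -t**2 * (6 + 6 * t + t**2) / (12 * (1 + t))                # Table 2, g = 2

assert sp.simplify(specialized - table2_g2) == 0
print("Table 1 (g = 2) specializes to Table 2 (g = 2)")
```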
\begin{tabular}{|l|l|} \hline \(g\) & \(\mathsf{h}_{g}\) \\ \hline 2 & \(\frac{1}{12}\left(-\frac{P_{1}^{3}}{P_{2}^{2}}-\frac{2P_{1}^{2}}{P_{3}}+\frac{6P _{1}}{P_{2}}-\frac{2P_{2}P_{3}}{P_{6}}-\frac{1}{P_{1}}\right)\) \\ \hline 3 & \(-\frac{P_{1}^{4}}{16{P_{2}}^{3}}+\frac{P_{1}^{3}}{8{P_{2}}^{2}}-\frac{5{P_{1} }^{2}}{16{P_{2}}^{2}}-\frac{P_{1}^{2}}{8{P_{4}}}-\frac{1}{16{P_{1}}^{2}}+\frac {P_{1}P_{2}}{4{P_{4}}}+\frac{P_{1}}{2{P_{2}}}+\frac{1}{8{P_{1}}}-\frac{P_{2}}{8 {P_{4}}}-\frac{5}{16{P_{2}}}\) \\ \hline 4 & \(-\frac{9P_{1}^{5}}{160{P_{2}^{4}}}+\frac{7P_{1}^{4}}{48{P_{2}^{3}}}-\frac{8}{ 8{P_{2}^{2}}}-\frac{P_{1}^{3}}{16{P_{2}^{3}}}-\frac{P_{1}^{3}}{16{P_{2}^{3}}} +\frac{P_{1}^{2}}{16{P_{2}^{2}}}+\frac{P_{1}^{2}}{8{P_{4}}}-\frac{P_{1}^{2}}{1 0{P_{5}}}-\frac{P_{1}}{2{P_{2}}}+\frac{7P_{1}}{16{P_{2}^{2}}}+\frac{P_{1}}{6{P _{3}}}-\frac{P_{2}P_{1}}{4{P_{4}}}+\frac{P_{1}}{8{P_{4}}}+\) \\ & \(\frac{P_{3}P_{1}}{6{P_{6}}}-\frac{1}{8{P_{1}}}+\frac{1}{48{P_{1}^{2}}}-\frac{ 9}{160{P_{1}^{3}}}+\frac{1}{16{P_{2}}}-\frac{1}{16{P_{1}}{P_{2}}}+\frac{P_{2}} {8{P_{4}}}-\frac{P_{2}}{16{P_{1}}{P_{4}}}-\frac{P_{2}}{10{P_{1}}}\) \\ \hline 5 & \(-\frac{11P^{6}}{192{P_{2}^{5}}}+\frac{3P_{1}^{5}}{16{P_{2}^{4}}}-\frac{P_{1}^ {4}}{4{P_{3}^{3}}}-\frac{5P_{1}^{4}}{64{P_{4}^{2}}}-\frac{P_{1}^{4}}{16{P_{2}^ {2}}}P_{4}+\frac{P_{1}^{3}}{8{P_{2}^{2}}}+\frac{5P_{1}^{3}}{16{P_{2}^{3}}}+ \frac{3P_{1}^{3}}{16{P_{2}}{P_{4}}}-\frac{P_{1}^{2}}{4{P_{2}^{2}}}-\frac{35P_{ 1}^{2}}{9{P_{2}^{3}}}-\frac{P_{1}^{2}}{12{P_{3}^{2}}}-\frac{3P_{1}^{2}}{16{P_{ 2}^{4}}}-\) \\ & \(\frac{P_{2}{P_{2}^{2}}}{8{P_{4}^{2}}}+\frac{P_{2}^{2}}{12{P_{6}}}-\frac{P_{4} P_{1}^{2}}{8{P_{2}}{P_{8}}}+\frac{3P_{1}}{4{P_{2}^{2}}}-\frac{P_{2}P_{1}}{4{P_{4}}}+ \frac{3P_{1}}{8{P_{4}}}+\frac{P_{2}^{2}P_{1}}{4{P_{4}^{2}}}+\frac{P_{4}P_{1}}{ 4{P_{8}}}+\frac{1}{8{P_{1}}}-\frac{1}{4{P_{1}^{2}}}+\frac{3}{16{P_{1}^{3}}}- \frac{11}{192{P_{1}^{4}}}-\frac{1}{4{P_{2}}}+\) \\ & \(\frac{5}{16{P_{1}}{P_{2}}}-\frac{5}{64{P_{1}^{2}}{P_{2}}}-\frac{35}{96{P_{2}^{2 }}}-\frac{3}{16{P_{4}}}+\frac{3P_{2}}{16{P_{1}}{P_{4}}}-\frac{P_{2}}{16{P_{1}^ {2}}{P_{4}}}-\frac{P_{2}^{2}}{8{P_{4}^{2}}}+\frac{P_{2}}{12{P_{6}}}-\frac{P_{2} ^{2}}{12{P_{6}^{2}}}-\frac{P_{4}}{8{P_{8}^{2}}}\) \\ \hline 6 & \(-\frac{227{P_{1}^{7}}}{3584{P_{2}^{6}}}+\frac{P_{1}^{6}}{4{P_{2}^{5}}}-\frac{5 5P_{1}^{5}}{128{P_{4}^{4}}}-\frac{25P_{1}^{5}}{256{P_{2}^{5}}}-\frac{9P_{1}^{5} }{128{P_{2}^{3}}{P_{4}^{3}}}+\frac{3P_{1}^{4}}{8{P_{3}^{3}}}+\frac{7P_{1}^{4}}{ 16{P_{4}^{4}}}+\frac{P_{1}^{4}}{4{P_{2}^{2}}{P_{4}}}-\frac{P_{1}^{3}}{8{P_{2}^{ 2}}}-\frac{13P_{1}^{3}}{16{P_{3}^{3}}}-\frac{55P_{1}^{3}}{512{P_{4}^{2}}}- \frac{11P_{1}^{3}}{32{P_{2}}{P_{4}}}-\frac{5P_{2}}{64{P_{2}^{2}}}-\frac{3P_{1}^ {3}}{16{P_{3}^{3}}}-\frac{55P_{1}^{3}}{512{P_{4}^{2}}}-\) \\ & \(\frac{11P_{1}^{3}}{32{P_{2}}{P_{4}}}-\frac{3P_{1}^{3}}{6{P_{4}^{2}}{P_{2}^{4}}}- \frac{7P_{1}^{3}}{128{P_{4}^{2}}}-\frac{P_{4}P_{1}^{3}}{32{P_{2}^{2}}{P_{8}^{2}}} +\frac{7P_{1}^{2}}{8{P_{2}^{2}}}+\frac{5P_{1}^{2}}{16{P_{3}^{2}}}+\frac{P_{2} ^{2}}{4{P_{4}}}+\frac{P_{2}^{2}P_{1}}{8{P_{4}^{2}}}-\frac{P_{2}^{2}}{14{P_{4}}} -\frac{99P_{1}}{64{P_{2}^{2}}}+\frac{59P_{1}}{128{P_{3}^{3}}}+\) \\ & \(\frac{P_{2}{P_{1}}}{4{P_{4}}}-\frac{7P_{1}}{8{P_{4}}}+\frac{19}{64{P_{1}}{P_{ 4}}}-\frac{9P_{2}^{2}P_{1}}{32{P_{4}^{2}}}+\frac{9P_{2}P_{1}}{64{P_{4}^{2}}}- \frac{P_{4}P_{1}}{8{P_{8}}}+\frac{3P_{4}P_{1}}{16{P_{2}}{P_{8}}}-\frac{1}{8{P _{1}}}+\frac{3{8{P_{1}^{2}}}}{8{P_{1}^{3}}}-\frac{55}{128{P_{1}^{3}}}+ \frac{1}{4{P_{1}^{4}}}-\frac{227}{3584{P_{1}^{5}}}+\) \\ & 
\(\frac{7}{8{P_{2}}}-\frac{13}{16{P_{1}}{P_{2}}}+\frac{7}{16{P_{1}^{2}}{P_{2}}}- \frac{25}{256{P_{1}^{3}}{P_{2}}}+\frac{5}{16{P_{2}^{2}}}-\frac{55}{512{P_{1} }{P_{2}^{2}}}+\frac{P_{2}}{4{P_{4}}}+\frac{1}{8{P_{4}}}-\frac{11P_{2}}{32{P_{ 1}}{P_{4}}}-\frac{3}{64{P_{1}}{P_{4}}}+\frac{P_{2}}{4{P_{1}^{2}}{P_{1}^{2}}}- \frac{9P_{2}}{128{P_{1}^{3}}}-\frac{9P_{2}}{128{P_{1}^{3}}}-\frac{9P_{2}}{ 128{P_{1}^{3}}}-\frac{1}{4{P_{1}^{2}}}-\) \\ & \(\frac{9P_{2}}{128{P_{1}^{3}}{P_{4}}}+\frac{P_{2}^{2}}{8{P_{4 \begin{tabular}{|l|l|} \hline \(g\) & \(\sum_{n\geq 0}\frac{t^{n}}{n!}\left(\sum_{i=0}^{4g+2n-2}(-1)^{i}\dim_{ \mathbb{Q}}W_{0}H_{c}^{i}(\mathcal{H}_{g,n};\mathbb{Q})\right)\) \\ \hline \(2\) & \(-\frac{t^{2}}{12(1+t)}\left(6+6t+t^{2}\right)\) \\ \hline \(3\) & \(-\frac{t^{2}}{16(1+t)^{2}}\left(8+16t+12t^{2}+4t^{3}+t^{4}\right)\) \\ \hline \(4\) & \(-\frac{t^{2}}{480(1+t)^{3}}\left(240+720t+960t^{2}+720t^{3}+386t^{4}+146t^{5} +27t^{6}\right)\) \\ \hline \(5\) & \(-\frac{t^{2}}{192(1+t)^{4}}\left(96+384t+736t^{2}+864t^{3}+748t^{4}+504t^{5}+2 46t^{6}+74t^{7}+11t^{8}\right)\) \\ \hline \(6\) & \(-\frac{t^{2}}{3584(1+t)^{5}}\left(\begin{array}{c}1792+8960t+22400t^{2}+35840 t^{3}+43232t^{4}+41888t^{5}+32096t^{6}+18272t^{7}\\ +7268t^{8}+1828t^{9}+227t^{10}\end{array}\right)\) \\ \hline \(7\) & \(-\frac{t^{2}}{15360(1+t)^{6}}\left(7680+46080t+142080t^{2}+288000t^{3}+446720t ^{4}+565760t^{5}+587520t^{6}\right)\) \\ \hline \end{tabular} Table 2. The exponential generating functions for numerical weight zero compactly supported Euler characteristics of \(\mathcal{H}_{g,n}\). \begin{tabular}{|l|l|l l l l l l l l l l|} \hline \(g\) & \(n\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) & \(10\) \\ \hline \(2\) & \(0\) & \(0\) & -1 & \(0\) & -2 & \(10\) & -60 & \(420\) & -3360 & \(30240\) & -302400 \\ \(3\) & \(0\) & \(0\) & -1 & \(0\) & -6 & \(30\) & -225 & \(1890\) & -17640 & \(181440\) & -2041200 \\ \(4\) & \(0\) & \(0\) & -1 & \(0\) & -12 & \(60\) & -579 & \(5586\) & -59220 & \(684180\) & -8557920 \\ \(5\) & \(0\) & \(0\) & -1 & \(0\) & -20 & \(100\) & -1245 & \(13230\) & -157500 & \(2022300\) & -27877500 \\ \(6\) & \(0\) & \(0\) & -1 & \(0\) & -30 & \(150\) & -2385 & \(27090\) & -361080 & \(5099760\) & -76856850 \\ \(7\) & \(0\) & \(0\) & -1 & \(0\) & -42 & \(210\) & -4200 & \(49980\) & -745920 & \(11460960\) & -187595730 \\ \hline \end{tabular} Table 3. The weight zero compactly supported Euler characteristic of \(\mathcal{H}_{g,n}\) for \(2\leq g\leq 7\), and \(0\leq n\leq 10\).
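Finally, to connect Tables 2 and 3 explicitly (again an editorial check, with the generating function copied from the \(g=2\) row of Table 2): the integer Euler characteristics in Table 3 are recovered as \(n!\) times the \(t^{n}\)-coefficients of the corresponding generating function.

```python
# Recover the g = 2 row of Table 3 from the g = 2 entry of Table 2: the weight
# zero compactly supported Euler characteristic of H_{2,n} is n! times the
# coefficient of t^n in the exponential generating function.
import sympy as sp

t = sp.symbols('t')
egf = -t**2 * (6 + 6 * t + t**2) / (12 * (1 + t))   # Table 2, g = 2

poly = sp.series(egf, t, 0, 11).removeO()
row = [sp.factorial(n) * poly.coeff(t, n) for n in range(11)]
print(row)   # expected: [0, 0, -1, 0, -2, 10, -60, 420, -3360, 30240, -302400]
```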
2303.05150
Toward Higher Integration Density in Femtosecond-Laser-Written Programmable Photonic Circuits
Programmability in femtosecond-laser-written integrated circuits is commonly achieved with the implementation of thermal phase shifters. Recent work has shown how such phase shifters display significantly reduced power dissipation and thermal crosstalk with the implementation of thermal isolation structures. However, the aforementioned phase shifter technology is based on a single gold film, which poses severe limitations on integration density and circuit complexity due to intrinsic geometrical constraints. To increase the compactness, we propose two improvements to this technology. Firstly, we fabricated thermal phase shifters with a photolithography process based on two different metal films, namely chromium for microheaters and copper for contact pads and interconnections. Secondly, we developed a novel curved isolation trench design that, along with a state-of-the-art curvature radius, allows for a significant reduction in the optical length of integrated circuits. As a result, curved Cr-Cu phase shifters provide a compact footprint with low parasitic series resistance and no significant increase in power dissipation (~ 38 mW) and thermal crosstalk (~ 20%). These results pave the way toward the fabrication of femtosecond-laser-written photonic circuits with a steep increase in terms of layout complexity.
Riccardo Albiero, Ciro Pentangelo, Marco Gardina, Simone Atzeni, Francesco Ceccarelli, Roberto Osellame
2023-03-09T10:08:31Z
http://arxiv.org/abs/2303.05150v1
# Toward Higher Integration Density in Femtosecond-Laser-Written Programmable Photonic Circuits ###### Abstract Programmability in femtosecond-laser-written integrated circuits is commonly achieved with the implementation of thermal phase shifters. Recent work has shown how such phase shifters display significantly reduced power dissipation and thermal crosstalk with the implementation of thermal isolation structures. However, the aforementioned phase shifter technology is based on a single gold film, which poses severe limitations on integration density and circuit complexity due to intrinsic geometrical constraints. To increase the compactness, we propose two improvements to this technology. Firstly, we fabricated thermal phase shifters with a photolithography process based on two different metal films, namely (1) chromium for microheaters and (2) copper for contact pads and interconnections. Secondly, we developed a novel curved isolation trench design that, along with a state-of-the-art curvature radius, allows for a significant reduction in the optical length of integrated circuits. As a result, curved Cr-Cu phase shifters provide a compact footprint with low parasitic series resistance and no significant increase in power dissipation (\(\sim\)38 mW) and thermal crosstalk (\(\sim\)20%). These results pave the way toward the fabrication of femtosecond-laser-written photonic circuits with a steep increase in terms of layout complexity. femtosecond laser micromachining; programmable photonic circuits; thermal phase shifting; universal photonic processors + Footnote †: journal: MDPI Article 1 ## 1 Introduction Integrated photonics is a fundamental technology for many applications, such as quantum information processing [1, 2, 3] and signal routing and communication [4, 5], where it represents a clear path toward the realization of large-scale photonic devices and networks [3, 6]. Indeed, a bulk optics setup with discrete optical components shows stability and scalability limitations that become more apparent with the growth of the protocol complexity. An integrated approach, on the other hand, enables the realization of a large number of optical components on a single monolithic device, maintaining excellent interferometric stability and a small footprint. Furthermore, an appealing feature of photonic integrated circuits (PICs) is the possibility to actively reconfigure the circuit operation by relying on electrically programmable components such as phase shifters. In recent years, there has been a growing interest toward the development of universal photonic processors (UPPs) [7, 8, 9, 10], which are programmable PICs that can perform any arbitrary unitary transformation on a given set of input signals. Circuit topologies for the realization of UPPs are well assessed in the literature, with two notable examples being triangular [11] and rectangular [12] meshes of reconfigurable Mach-Zehnder interferometers (MZIs). UPPs have been already demonstrated in several integrated photonics platforms such as silicon nitride [9; 10; 13], silica-on-silicon [14] and femtosecond laser micromachining (FLM) [15; 16]. In particular, FLM is a versatile and cost-effective fabrication platform for implementing integrated optical devices. 
This technique exploits the non-linear interaction of focused femtosecond laser pulses with a transparent dielectric material to induce a permanent change in the refractive index localized in the focal volume, allowing the fabrication of waveguides by translating the sample underneath the pulsed laser. Femtosecond-laser-written waveguides in glass substrates show low optical losses (less than \(0.3\,\mathrm{dB/cm}\) in the visible and near-infrared [17]), have low birefringence (as low as \(1.2\times 10^{-6}\)[18], making them compatible with polarization-encoded protocols), and can be easily exploited for the fabrication of complex structures in a three-dimensional fashion [19; 20]. Reconfigurability can be achieved by leveraging the thermo-optic effect: that is, the dependence of the refractive index on temperature. The most straightforward implementation within the FLM fabrication platform involves the integration of microheaters on the surface of the device, which act as resistive elements, allowing the dissipation of electrical power by Joule effect and a consequent localized substrate temperature increase. This technique is particularly effective since microheaters may be easily realized by depositing a metal film on top of the substrate, which is then patterned using the same femtosecond laser system used for the fabrication of the optical structures [21; 22]. They also offer excellent operation stability while causing no additional photon losses. On the other hand, thermal phase shifters suffer from some drawbacks, the most notable being thermal crosstalk, which is related to the heat diffusion toward waveguides different than the target one. Waveguides can be thermally isolated by microstructuring the glass substrate and fabricating deep isolation trenches, as shown in [23], to reduce the influence of thermal crosstalk while providing orders of magnitude reduction in the dissipated power. When operated in a vacuum environment [23], devices featuring thermal isolation structures display an even greater reduction in dissipated power while making thermal crosstalk almost negligible at the cost of a slower time response. Scaling up the computational power of UPPs requires an increase in both the number of optical modes and thermal phase shifters, which are practically limited to a handful and a few tens, respectively, with the FLM fabrication process adopted in [23]. Indeed, in [23], the resistive elements, the interconnections and the contact pads are all patterned on the same metal film; therefore, the aspect ratio is the only degree of freedom that can be exploited to concentrate all the electrical resistance, and thus the power dissipation, in the microheater. In essence, the metal interconnections require an aspect ratio close to 1:1 in order to reduce the parasitic series resistance, therefore occupying a large surface area on the device. This poses a severe limitation on the number of thermal phase shifters that can be manufactured on a single chip. In addition, the overall optical losses should be kept as low as possible as the complexity of the device increases. This can be achieved by both optimizing the waveguide propagation losses and reducing the total circuit length. To address these issues, we propose two different solutions: on the one hand, we reduce the footprint of the programmable MZI (the building block of UPPs) by both optimizing the minimum curvature radius with negligible additional losses and developing curved isolation trenches. 
In this way, we maintain the waveguides as thermally isolated while fabricating MZIs with 'null' arm length, i.e., without straight waveguide segments for the phase shifters, thus saving some millimeters per cell. On the other hand, we propose the use of a two-metal planar lithography technique to fabricate thermal phase shifters. This approach allows using a metal with high electrical resistance for the fabrication of the heaters while using a low resistance metal for the realization of long and narrow metal interconnections with negligible parasitic series resistance. However, performing photolithography on a microstructured substrate forbids the use of standard liquid photoresist deposited by spin coating, which makes the development of such a technology not trivial. The process we developed makes use of a dry photoresist, which can tent over isolation structures, providing homogeneous coverage. The manuscript is organized as follows: in Section 2, we provide a description of the fabrication process developed for the thermal phase shifters and for the curved deep isolation trenches. In Section 3, we present the electrical and optical characterization and provide a performance comparison between rectangular and curved trenches with various bending radii. Lastly, in Section 4 we discuss the results and the prospects of this work and finally draw our conclusions in Section 5. ## 2 Fabrication Improvements ### Compact Mach-Zelnder Interferometer Design The following discussion will be centered on the programmable MZI, which represents the fundamental building block for universal multiport devices [11, 12]. The interferometers are fabricated in boro-aluminosilicate glass (Corning EAGLE XG, 1 mm thick) and inscribed 35 um from the glass surface. Optical waveguides are fabricated using the multi-scan technique and thermal annealing [24]. Single-mode operation has been optimized for a wavelength of 940 nm, achieving propagation losses down to 0.14 dB/cm and negligible bending losses at curvature radii as low as \(R~{}=~{}15\) mm. This value shows an improvement of a factor of 2 or higher with respect to the one employed in previous laser-written UPPs [15, 16], and it is in line with the state of the art for the FLM platform [25], where advanced techniques for the reduction in bending losses were introduced. We considered two different interferometer designs, as shown in Figure 1. The first design (Figure 1a) comprises two balanced directional couplers realized with circular S-bend segments (curvature radius \(R~{}=~{}15\) mm), which are connected by a straight segment with a length \(L_{arm}~{}=~{}1.5\) mm. This MZI structure is similar to the one presented in [23], and it is used as a control sample throughout this study. The second design (Figure 1b), on the other hand, uses circular segments, and the two balanced directional couplers are directly connected together without straight segments (\(L_{arm}~{}=~{}0\) mm). In order to study this compact MZI cell, we fabricated interferometers with different curvature radii, namely \(R\) = 15 mm, 20 mm and 25 mm. The pitch between the optical modes is \(p~{}=~{}80\) um, while the interaction distance and the interaction length of the directional couplers are \(d_{int}~{}=~{}7\) um and \(L_{int}~{}=~{}0\) mm, respectively, for all of the interferometers. Figure 1: Scheme of the devices. (**a**) MZIs featuring rectangular trenches and thus finite arm length. (**b**) MZIs featuring curved trenches and thus ‘null’ arm length. 
In this case, the couplers are closer and the overall circuit length is reduced. (**c**) Section view of isolation trenches. White circles are waveguide cross-sections, while the gray rectangle is the microheater cross-section.

Deep isolation trenches have been fabricated by water-assisted laser ablation [23; 26]. Optimized shapes have been produced for each interferometric design: standard rectangular trenches were used for the first MZI structure, while a novel curved layout was optimized for structures with null arm length. In this latter case, two different blocks are required, a biconvex and a biconcave one, to fit, respectively, the inner and outer areas between the two optical modes of the MZI (see Figure 1b). Aside from the form factor, all the trench designs have the same geometrical parameters (Figure 1c): a depth \(D_{t}=300\) µm, a wall width \(W=25\) µm and a length \(L_{t}=1.5\) mm to accommodate thermal phase shifters of the same length. A microscope image of these structures is reported in Figure 2a.

### Two-Metal Photolithography Technique for Thermal Phase Shifters

Chromium was chosen as the material for the resistive elements due to its high resistivity as well as its lower temperature coefficient of resistance (TCR) in the temperature range we are interested in (20 to 200 °C). Similarly, copper was chosen as the material for the interconnections and the contact pads due to its significantly lower resistivity and compatibility with wire bonding. A thickness of 100 nm was chosen for the chromium film, considering microheater dimensions of \(W_{m}=W=25\) µm by \(L_{m}=L_{t}=1.5\) mm. For the interconnections, a 10 nm thin film of titanium was used as an adhesion layer before the actual deposition of 1 µm of copper. These metal films were all deposited by electron-beam evaporation with an Evatec BAK 640 and patterned via photolithography and wet etching as follows.

Lithography was performed with a negative dry film photoresist (Ordyl FP 450) instead of a typical liquid photoresist deposited by spin coating, since such a technique is not ideal for rectangular and microstructured substrates due to the formation of comets, poor edge coverage and severe inhomogeneities. The 50 µm thick dry film is laminated on the substrate by a dry film laminator at a temperature of 80 °C. The thickness of the dry film allows it to 'tent' over the trenches, evenly covering the surface around the isolation structures. The dry film is then exposed by a Heidelberg MLA100 maskless aligner with a dose of 500 mJ/cm\({}^{2}\) and developed for 2′40″ in a 1% solution of Na\({}_{2}\)CO\({}_{3}\) at room temperature. Before proceeding with wet etching, the sample is typically placed in an argon plasma (200 W for 2′). After etching, the photoresist mask is stripped away in a 3% solution of KOH.

After deposition of the chromium film, a dry film photoresist mask covering both the microheaters and interconnections is prepared. Chromium is then etched with the Microchemicals TechniEtch Cr01 etchant mixture for 40″. Subsequently, the titanium and copper films are evaporated in the same step, and a second photoresist mask is prepared, covering only the interconnections and not the microheaters. Selective copper etching is performed by dipping the sample in a 40:1:1 ratio solution of H\({}_{2}\)O, HCl and 30% H\({}_{2}\)O\({}_{2}\) for 2′. The titanium thin film is then etched using the same photoresist mask with buffered oxide etch (BOE) for 15″. The device with fully patterned microheaters and interconnections is then placed in a vacuum annealer at 350 °C for 1 h in order to stabilize the metal films [22] (Figure 2b). Finally, to avoid resistance drifting during operation due to the oxidation of both metals, we deposit a passivation layer of 200 nm of SiO\({}_{2}\) via plasma-enhanced chemical vapor deposition (PECVD) with an STS Multiplex. The silica above the copper contact pads at the sides of the sample is then opened with a final lithography step and wet etching in BOE for 2′.

Figure 2: Microscope images of the sample before and after microheater fabrication. All images show the same MZI groups: straight arms at the top, curved ones with radius \(R=15\) mm at the bottom. (**a**) Bare substrate with isolation trenches. (**b**) Chromium microheaters and copper interconnections deposited and patterned (image in false colors).
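To make the benefit of the two-metal stack concrete, the following is a minimal sketch of the sheet-resistance budget implied by the geometry above, using \(R=R_{sh}\,L/W\). The heater dimensions (25 µm by 1.5 mm) are those stated in the text and the sheet resistances anticipate the measured values reported in Section 3.1, while the 10 mm by 50 µm interconnection geometry is a purely illustrative assumption.

```python
# Sheet-resistance sanity check for the two-metal phase-shifter layout.
# R = R_sheet * (length / width): only the film and the trace aspect ratio matter.

def trace_resistance(r_sheet_ohm_per_sq, length_um, width_um):
    """Resistance of a thin-film trace from its sheet resistance and aspect ratio."""
    return r_sheet_ohm_per_sq * (length_um / width_um)

# Cr microheater: 100 nm film, 25 um wide, 1.5 mm long (values from the text).
r_heater = trace_resistance(12.0, length_um=1500, width_um=25)    # ~720 ohm

# Cu interconnection: 1 um film; the 10 mm x 50 um geometry is an assumption.
r_track = trace_resistance(0.02, length_um=10000, width_um=50)    # ~4 ohm

print(f"heater: {r_heater:.0f} ohm, interconnect: {r_track:.1f} ohm")
print(f"fraction of power dissipated in the heater: {r_heater / (r_heater + r_track):.3f}")
```

With a single metal film, keeping the same parasitic budget would instead force short, wide interconnections, which is exactly the surface-area penalty described in the Introduction.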
Microheaters featuring thermal isolation trenches were actuated at powers corresponding to \(\pi\), 2\(\pi\) and 3\(\pi\) phase shifts for 10 h, resulting in a maximum resistance variation of 0.02%. These chromium microheaters indeed do not display significant drift over time, indicating a properly working annealing and passivation. ### Optical Measurements The performance of all the fabricated devices was characterized in terms of power dissipation, crosstalk and dynamic response. We report data only for the last microheater of each group (see Figure 1). However, no significant deviations were observed by actuating the other microheaters. All the results of the characterization are reported in Table 1 and will be detailed in the following sections. #### 3.2.1 Dissipated Power For a given MZI, the phase difference \(\Delta\varphi\) induced between its arms by the microheater can be expressed as [21]: \[\Delta\varphi=\varphi_{0}+\alpha P \tag{1}\] where \(\varphi_{0}\) is the phase difference between the arms of the interferometer when no power is dissipated, \(\alpha\) is the interferometer tuning coefficient and \(P\) is the dissipated power. In particular, \(P_{2\pi}\) is the electrical power that a thermal phase shifter has to dissipate to induce a \(\Delta\varphi=2\pi\) phase shift and achieve full reconfigurability. In [23] was demonstrated that the use of deep isolation trenches provides a significant improvement in terms of \(P_{2\pi}\) with respect to a planar device. In this study, we compared the performance of the curved isolation trenches with that of the rectangular ones in order to determine how the different geometry affects the isolation performance. \(P_{2\pi}\) was characterized experimentally for all of the MZIs with the completely curved design and the control sample in order to be able to draw a comparison with the results in the literature. \begin{table} \begin{tabular}{l c c c c c} \hline **Isolation Design** & **Cell Length (mm)** & \(P_{2\pi}\) **(mW)** & \(\Delta\varphi_{ct}^{2p}/\Delta\varphi\) (\%) & \(\tau_{large}\) **(ms)** & \(\tau_{small}\) **(ms)** \\ \hline Rectangular trenches, \(R_{\infty}\) & 7.42 & 35.02 & 21.31 & 34.4 & 57.7 \\ Curved trench, \(R=25\,\mathrm{mm}\) & 7.64 & 38.23 & 20.17 & 29.9 & 52.5 \\ Curved trench, \(R=20\,\mathrm{mm}\) & 6.83 & 38.33 & 20.07 & 28.9 & 55.4 \\ Curved trench, \(R=15\,\mathrm{mm}\) & 5.92 & 38.10 & 18.64 & 29.8 & 50.3 \\ \hline \end{tabular} \end{table} Table 1: Summary of experimental measurements. Cell length, dissipated power, thermal crosstalk (\(d~{}=~{}2p\)) and response times for both large and small signal regimes are reported for all MZI groups considered in this work. Figure 3: (a) Electrical characterization of the resistance versus electrical power on a curved microheater with \(R=15\,\mathrm{mm}\); markers indicate power values corresponding to specific phase shifts. (b) Stability measurements performed on the same microheater at electrical powers corresponding to \(\pi\), \(2\pi\), and \(3\pi\) phase shifts. We measured a power dissipation \(P_{2\pi}=38\,\mathrm{mW}\), which is constant among all the MZIs fabricated with the curved design, indicating that \(P_{2\pi}\) does not depend on the curvature radius. This result is not trivial. 
Indeed, while the width \(W_{t}\) of the waist of each block is identical to that of the rectangular trenches, the minimum width of the curved trenches varies by a factor of two (from a minimum of \(13\,\mathrm{\SIUnitSymbolMicro m}\) to a maximum of \(27\,\mathrm{\SIUnitSymbolMicro m}\)) as the radius increases from \(R=15\) to \(25\,\mathrm{mm}\), potentially impacting on the isolation capability of the structure. Furthermore, the \(P_{2\pi}\) values measured for the control sample, namely \(35\,\mathrm{mW}\), are comparable to the results obtained with curved trenches. This result indicates that \(P_{2\pi}\) is not significantly affected by trench shape, allowing the realization of compact MZIs with nearly the same dissipated power as the previous schemes. #### 3.2.2 Thermal Crosstalk Under the assumption of linear dependence of the refractive index on the temperature change of the substrate, the phase shift induced by thermal crosstalk on a given MZI can be modeled as [23]: \[\Delta\varphi_{ct}=\sum_{i}\alpha_{i}P_{i}, \tag{2}\] where the sum is performed on all neighboring MZIs, \(P_{i}\) is the power dissipated on each microheater and \(\alpha_{i}\) represents the crosstalk tuning coefficients between each adjacent phase shifter and the target MZI. We can evaluate the effect of thermal crosstalk by measuring the phase \(\Delta\varphi_{ct}\) induced on a target MZI while dissipating power on its neighboring microheaters at various distances. The magnitude of thermal crosstalk can then be evaluated in terms of the ratio of phase induced by a neighboring heater and by the target heater when actuated with the same power \(P=P_{i}\). As expected, the additional thermal isolation between waveguides in substrates featuring deep trenches helps to dampen additional phase contributions from nearby phase shifters, reducing thermal crosstalk effects when compared to planar substrates. Thermal crosstalk was measured for all four MZI designs for the first and second neighbors (distances of \(2p\) and \(4p\); see Figure 1a,b). We report in Figure 4a the thermal crosstalk measurements for the curved phase shifter with radius \(R=15\,\mathrm{mm}\). We observe a similar isolation performance for all microstructures, with \(\sim\)20% phase induced on the first neighbor at a distance \(2p=160\,\mathrm{\SIUnitSymbolMicro m}\), suggesting an independence of crosstalk against both curvature radius and geometry, similarly to power dissipation measurements. This result was again not trivial, especially considering that in the biconvex design, trench widths are as low as \(13\,\mathrm{\SIUnitSymbolMicro m}\) in the tails. #### 3.2.3 Dynamic Response Dynamic response is another key performance metric for programmable circuits. While thermal isolation in general reduces both the dissipated power \(P_{2\pi}\) and the thermal crosstalk, it has been shown to have a detrimental effect on the time response of the device [23; 31]. Rise and fall times are computed as the interval in which the signal changes from 10% to 90% and vice versa of the steady-state optical power and were assessed for both the large and small signal regime. In the former case, we measured the switching time for a complete optical switch by applying a phase shift of \(\Delta\varphi=\pi\). 
In the latter case, we measured the switching time for small phase variations \(\delta\varphi\) causing a variation of the normalized optical power of about 5% around the balanced working point of the interferometer, namely:

\[\Delta\varphi_{small}=\pi/2\pm\delta\varphi \tag{3}\]

where \(\pi/2\) is the phase shift that biases the MZI in a balanced state with a transmission coefficient \(T=0.5\). The leading edges for the large signal regime are shown in Figure 4b. In this section, we present the measurements for the heating transients, but it is worth noting that the heating (rising) and cooling (falling) processes have comparable switching times, consistent with theory [31]. Similarly to the considerations made in the previous sections, the rise times we measured are equivalent for all the curvature radii, with a value of 29.5 \(\pm\) 0.4 ms. Moreover, these values are comparable to the switching times measured for the rectangular trenches, namely 34.4 ms. A similar behavior was observed for the small signal response, and the values are reported in Table 1. As a final remark, we note that, on closer observation, all the previous experimental results coherently indicate a slightly reduced isolation between the two arms of a single MZI with the curved trenches (a slightly higher \(P_{2\pi}\) and slightly lower response times). However, these small variations are negligible compared to the great advantage that the curved design provides in terms of device losses. In fact, given that the propagation losses (expressed in dB) scale linearly with the device length, the curved design provides a significant 20% reduction.

Figure 4: (**a**) Normalized optical power as a function of dissipated power on the target MZI and neighboring MZIs, measured on devices with curved trenches of radius \(R=15\) mm. (**b**) Normalized optical power as a function of time for the rectangular trench (\(L_{t}=1.5\) mm) and the curved trench (\(R=15\) mm).

## 4 Discussion

The use of a two-metal photolithographic process for the fabrication of thermal shifters allows us to overcome the geometrical constraints imposed by a single-material approach [23]. As a result, the microheaters are made of a metal with high electrical resistivity, namely chromium, while the interconnections are made of a metal with low resistivity, namely copper. We successfully fabricated Cr-Cu thermal phase shifters and demonstrated that, after several hours of operation, the shifters maintain high stability in time with no drawbacks with respect to the previous technology. Such a feature is of paramount importance when the device is employed in quantum photonics experiments, whose duration can be in the range of tens of hours. In addition, the chromium microheaters feature a low TCR, which is important for developing simple yet accurate calibration protocols for complex circuits. On the other hand, we optimized a novel curved deep isolation trench design, allowing us to thermally isolate waveguides in MZIs with zero arm length. This design, along with a state-of-the-art curvature radius, allowed us to demonstrate an MZI cell featuring a total length of 5.92 mm (see Table 1). These results represent the state of the art for programmable PICs fabricated through FLM, with a total loss per unit cell as low as 0.08 dB. Finally, we showed how these two approaches can be combined to create compact and fully reconfigurable MZIs.
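As a quick numerical cross-check of the figures quoted in this discussion, the sketch below recomputes the tuning coefficient implied by Eq. (1) (neglecting \(\varphi_{0}\)), the propagation loss per unit cell and the relative length saving from the values reported in Table 1 and Section 2.1; it is a summary calculation only, not part of any measurement pipeline.

```python
import math

P_2pi_mW = 38.1            # power for a 2*pi phase shift, curved trench, R = 15 mm (Table 1)
alpha = 2 * math.pi / P_2pi_mW        # tuning coefficient, rad/mW
P_pi_mW = math.pi / alpha             # power for a full optical switch (pi shift)

prop_loss_dB_per_cm = 0.14            # measured propagation loss (Section 2.1)
cell_curved_mm = 5.92                 # curved trench, R = 15 mm (Table 1)
cell_rect_mm = 7.42                   # rectangular trench control (Table 1)

loss_per_cell_dB = prop_loss_dB_per_cm * cell_curved_mm / 10.0   # ~0.08 dB
length_saving = 1.0 - cell_curved_mm / cell_rect_mm              # ~20 %

print(f"alpha = {alpha:.3f} rad/mW, P_pi = {P_pi_mW:.1f} mW")
print(f"loss per cell = {loss_per_cell_dB:.3f} dB, length saving = {length_saving:.0%}")
```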
We tested the curved design for different curvature radii (from \(R~{}=~{}15\) mm to \(R~{}=~{}25\) mm) and found independence from curvature radius in terms of dissipated power, thermal crosstalk and dynamic response. Moreover, the new curved design shows performances that are equivalent to the one presented in [23] (see Table 1). This approach represents a starting point for further compactifying the UPP building-block design in which a thermal shifter (usually referred to as external phase shifter) is typically fabricated also at the input or output of the MZI [12]. Actually, Walmsley et al. have already proposed a scheme [32] in which the external phase shifter is moved on top of the second arm of the interferometers, resulting in two thermal phase shifters integrated on top of the interferometer arms. In this case, an experimental realization of this layout can be made with minor changes to the procedure presented in this manuscript, specifically by changing the photoresist masks for the fabrication of thermal phase shifters while the waveguide fabrication process and substrate microstructuring process remain unchanged. When scaling the complexity of the circuit, i.e., increasing the number of modes of the \(N\times N\) UPP mesh, a significant advantage is gained in terms of insertion losses of the whole device. By combining the design proposed in [32] and our curved phase shifter approach, it would lead to 3 mm/cell saving, resulting in a reduction in the propagation distance of some centimeters even with a few cascading MZIs. We also want to emphasize that this analysis is not limited to UPPs but can be applied to application-specific PICs with any layout. ## 5 Conclusions In this work, we presented two solutions for further scaling the integration density and complexity of programmable FLM integrated optical circuits: a two-metal planar lithography technique to fabricate thermal phase shifters and the implementation of curved deep isolation trenches. On the one hand, the new photolithographic process allows for an increased integration density of phase shifters fabricated on-chip. Moreover, using two metals allowed for a reduction in the parasitic series resistance of the contact pads while increasing the overall thermal and temporal stability of the microheaters and maintaining low \(P_{2\pi}\sim 38\) mW and thermal crosstalk of about \(\sim\)20%. On the other hand, the implementation of curved deep isolation trenches allows for up to a 20% reduction in circuit length and thus a similar reduction in propagation losses. In particular, when compared to standard rectangular trenches, they take the same amount of fabrication time while enabling significantly more compact circuit layouts. These results pave the way for increasingly more compact, stable and low-loss laser-written UPPs and PICs in general without significant compromises in terms of performance and fabrication time. Conceptualization, S.A., F.C., R.O.; optical circuit fabrication, R.A., S.A.; electrical circuit fabrication, C.P., M.G., F.C.; experimental characterization, R.A., C.P., M.G.; project administration and funding acquisition, R.O.; original draft preparation, R.A., C.P. All the authors contributed to the review and editing of the manuscript. This research was funded by the European Union's Horizon 2020 research and innovation program through the Future and Emerging Technologies (FET) project PHOQUSING (Grant Agreement No. 
899544) and through the European Research Council (ERC) project CAPABLE (Grant Agreement No. 742745). R.A. acknowledges funding of his PhD fellowship by Thales Alenia Space Italia s.p.a. This work was partially performed at PoliFAB, the micro- and nanofabrication facility of Politecnico di Milano (www.polifab.polimi.it) (visited on 18/07/2022). The authors would like to thank the PoliFAB staff for the valuable technical support. The authors would like also to thank Emanuele Urbinati for his preliminary work on dry resist photolithography. The authors declare no conflict of interest.
2302.12705
Designing and simulating realistic spatial frequency domain imaging systems using open-source 3D rendering software
Spatial frequency domain imaging (SFDI) is a low-cost imaging technique that can deliver real-time maps of absorption and reduced scattering coefficients. However, there are a wide range of imaging geometries that practical SFDI systems must cope with including imaging flat samples ex vivo, imaging inside tubular lumen in vivo such as in an endoscopy, and measuring tumours or polyps of varying shapes, sizes and optical properties. There is a need for a design and simulation tool to accelerate design and fabrication of new SFDI systems. We present such a system implemented using open-source 3D design and ray-tracing software Blender that is capable of simulating media with realistic optical properties (mimicking healthy and cancerous tissue), a wide variety of shapes and size, and in both planar and tubular imaging geometries. We first demonstrate quantitative agreement between Monte-Carlo simulated scattering and absorption coefficients and those measured from our Blender system. Next, we show the ability of the system to simulate absorption, scattering and shape for flat samples with small simulated tumours and show that the improved contrast associated with SFDI is reproduced. Finally, to demonstrate the versatility of the system as a design tool we show that it can be used to generate a custom look-up-table for mapping from modulation amplitude values to absorption and scattering values in a tubular geometry, simulating a lumen. As a demonstrative example we show that longitudinal sectioning of the tube, with separate look-up tables for each section, significantly improves accuracy of SFDI, representing an important design insight for future systems. We therefore anticipate our simulation system will significantly aid in the design and development of novel SFDI systems, especially as such systems are miniaturised for deployment in endoscopic and laparoscopic systems.
Jane Crowley, George S. D. Gordon
2023-02-24T16:06:25Z
http://arxiv.org/abs/2302.12705v1
Designing and simulating realistic spatial frequency domain imaging systems using open-source 3D rendering software ###### Abstract Spatial frequency domain imaging (SFDI) is a low-cost imaging technique that can deliver real-time maps of absorption and reduced scattering coefficients, offering improved contrast over conventional reflectance methods for imaging important tissue structures such as tumours. However, there are a wide range of imaging geometries that practical SFDI systems must cope with including imaging flat samples _ex vivo_, imaging inside tubular lumen _in vivo_ such as in an endoscopy, and measuring tumours or polyps of varying shapes, sizes and optical properties. There is a need for a design and simulation tool to accelerate design and fabrication of new SFDI systems, and to validate performance under a wide range of realistic imaging scenarios. We present such a system implemented using open-source 3D design and ray-tracing software _Blender_ that is capable of simulating turbid media with realistic optical properties (mimicking healthy and cancerous tissue), a wide variety of shapes and size, and in both planar and tubular imaging geometries. Because Blender is a full end-to-end rendering package, effects such as varying lighting, refractive index changes, non-normal incidence, specular reflections and shadows are naturally accounted for, enabling realistic simulation of new designs. We first demonstrate quantitative agreement between Monte-Carlo simulated scattering and absorption coefficients and those measured from our Blender system. Next, we show the ability of the system to simulate absorption, scattering and shape for flat samples with small simulated tumours and show that the improved contrast associated with SFDI is reproduced. Finally, to demonstrate the versatility of the system as a design tool we show that it can be used to generate a custom look-up-table for mapping from modulation amplitude values to absorption and scattering values in a tubular geometry, simulating a lumen. As a demonstrative example we show that longitudinal sectioning of the tube, with separate look-up tables for each section, significantly improves accuracy of SFDI, representing an important design insight for future systems. We therefore anticipate our simulation system will significantly aid in the design and development of novel SFDI systems, especially as such systems are miniaturised for deployment in endoscopic and laparoscopic systems. osajournal Optica Publishing Group under the terms of the Optica Publishing Group Publishing Agreement ## 1 Introduction Optical properties, specifically scattering and absorption, and shape are important potential indicators of cancer within the gastrointestinal (GI) tract [1, 2]. Conventional white light endoscopes and capsule endoscopes are the standard method of imaging the GI tract but provide limited information about tissue properties that are hallmarks of a range of potential tumours [3], leading to low five-year survival rates of oesophageal cancer (15% [4]) and colon cancer (63% [5]). SFDI is a well-established, low-cost imaging technique [6], with applications for imaging blood oxygenation [7], burn depth [8], dental caries [9], bowel ischaemia [10], and indicators of cancer [11]. A range of commercial [12] and research [13, 14, 15] SFDI systems are now available. 
However, these existing systems are almost exclusively designed for _planar_ imaging geometries, where the sample is flat and the camera and projector are located above it at near-normal incidence. However, many important clinical applications exhibit _non-planar_ geometries: for example imaging inside tubular lumen such as the GI tract, blood vessels, biliary system (Fig 1). SFDI imaging _in vivo_ in such organs is challenging due to miniaturisation needs, and because the surfaces are cylindrical, creating non-planar illumination conditions and sample geometries. This means that illumination and imaging may no longer be normal (or nearly normal) to the surface being imaged so different scattering behaviour will be observed [16], and specular reflections will be altered. To aid in the design of novel SFDI systems under these constraints, we have created an SFDI design and simulation tool in the open source 3D modelling software _Blender_ (v 2.93) using the built-in ray-tracing engine Cycles (Fig 1). Cycles is a physically based path tracer, in which rays of light are traced from each camera pixel into the scene and either absorb into the world background, reflect, refrac or reach their maximum bounce limit, typically 1024 bounces. To increase accuracy, Cycles produces randomised light rays and averages the results from a singular pixel over time, analogous to a Monte Carlo simulation [19]. Cycles simulates volume scattering inside objects using a Henyey-Greenstein Phase function, which is commonly also used in Monte-Carlo simulations of tissue [20, 21]. _Blender_ has previously been used for three-dimensional shape measurement of additive manufacturing parts with complex geometries [22], for the development anatomically accurate meshes to use in Monte Carlo light simulations [23], and for the generation of SFDI image data sets to train neural networks [24, 25]. By using Blender for both geometry specification (i.e. design) and simulation (via ray-tracing with Cycles), we are able to simulate realistic optical properties and geometries while naturally accounting for realistic features of SFDI systems such as stray light, specular reflections and shadows. Figure 1: Future SFDI systems, especially those for _in vivo_ clinical use, may require significantly different geometries from conventional SFDI: a) conventional ‘planar’ SFDI imaging geometry with projector at a small angle to flat sample, with real-world application of measuring diabetic foot shown in inset [17], b) SFDI operating in a tubular (lumen) geometry, that may be required for use in future endoscope systems where projection is no longer approximately flat, with example usage for imaging polyps in the colon shown inset [18], c) screenshot of our _Blender_ SFDI model applied to a planar geometry, with reconstructed scattering properties of tumour like sample shown inset, d) a screenshot of our _Blender_ model applied to a non-planar tubular geometry, with reconstructed scattering properties shown inset. Conventional imaging in the spatial frequency domain is demonstrated in Fig 2 and consists of projecting a known set of structured illumination patterns onto a sample at a small angle (\(\lesssim 6^{\circ}\)) to the normal to minimise specular reflections recorded by the camera [26]. The structured illumination set typically consists of 2D sinusoids at 3 equispaced phase offsets but recent work has shown the successful use of randomised speckle patterns as an alternative illumination scheme [27]. 
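For readers implementing the illumination step described above, the following is a minimal Python sketch that generates the three phase-shifted 2D sinusoidal patterns; the pattern size and spatial frequency are illustrative assumptions rather than values tied to any particular projector.

```python
import numpy as np

def sinusoid_patterns(shape=(512, 512), spatial_freq_cyc_per_px=0.02):
    """Three 2D sinusoids with 0, 120 and 240 degree spatial phase shifts, scaled to [0, 1]."""
    h, w = shape
    x = np.arange(w)
    patterns = []
    for phase in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):
        row = 0.5 + 0.5 * np.sin(2 * np.pi * spatial_freq_cyc_per_px * x + phase)
        patterns.append(np.tile(row, (h, 1)))   # constant along y, modulated along x
    return patterns

I1_proj, I2_proj, I3_proj = sinusoid_patterns()
```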
The amplitude of the projection pattern is modified by the absorbing and scattering nature of the sample. A camera placed orthogonal to the sample captures images which are processed to determine the modulation amplitude of the reflected illumination pattern as a function of spatial frequency, via the equations [28]: \[M_{AC}\left(x_{i},f_{x}\right)=\frac{\sqrt{2}}{3}\sqrt{\left(I_{1}(x_{i})-I_{2 }(x_{i})\right)^{2}+\left(I_{2}(x_{i})-I_{3}(x_{i})\right)^{2}+\left(I_{3}(x_{i })-I_{1}(x_{i})\right)^{2}} \tag{1}\] \[M_{DC}(x_{i})=\frac{1}{3}\left(I_{1}(x_{i})+I_{2}(x_{i})+I_{3}(x_{i})\right) \tag{2}\] where \(I(x_{i})\) is intensity of each pixel (position \(x_{i}\)) value in the captured images for sinusoidally modulated illuminations with a spatial phase shift of \(0^{\circ}\) (\(I_{1}\)), \(120^{\circ}\) (\(I_{2}\)) and \(240^{\circ}\) (\(I_{3}\)). These results can also be obtained from a single image, termed _single snapshot of optical properties (SSOP)_, by the use of Fourier-domain filtering [29] or convolutional neural networks [30] to separate the AC and DC components. In either approach, the AC and DC modulation amplitudes need to be calibrated with the modulation transfer function (MTF) of the system in order to produce _diffuse reflectance_ values that represent the next intermediate step toward obtaining scattering and absorption. Conventional calibration for the MTF is achieved by imaging a reference material of known optical properties and computing diffuse reflectance values for these properties using a light propagation model: Monte Carlo simulation or the Diffusion Approximation [28, 31]. The difference between the computed and measured diffuse reflectance is used to infer the MTF, which can then be applied to obtain diffuse reflectance values from the modulation amplitudes of the sample of interest. Finally, the absorption, \(\mu_{a}\), and reduced scattering, \(\mu_{s}\prime\), coefficients are then determined via a look-up table (LUT) generated from the chosen light-propagation model. An alternative method of generating this LUT is to measure a large data set of materials of known optical property values and interpolate between values to create an LUT unique to the imaging system in use. This has been achieved previously by calculating the reflectance and modulation of each material by comparison to a reflectance standard [32]. Figure 2: Conventional SFDI process: A sample is illuminated with a sinusoidal pattern then an image is captured. This is then demodulated and compared with a reference material of known optical properties. The optical properties of the material of interest are then estimated using a look-up table. Finally, maps of absorption and reduced scattering coefficient are produced. Another important property that SFDI imaging can extract is height information, i.e. shape in flat geometries. This can be done via fringe projection profilometry [33, 34]. A structured illumination pattern is projected onto a sample of interest and a camera captures how the pattern is distorted by the presence of the sample to determine sample shape [11]. This information can be useful in clinical settings for quantifying the size and shape of polyps, which is linked to their pathology [2]. Previous studies have shown that structured illumination significantly reduces error in shape measurement compared to visual assessment, and is comparable to measurement with biopsy forceps and ruled snare [35]. 
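Equations (1) and (2) translate directly into array arithmetic; the sketch below assumes the three captured images are available as floating-point NumPy arrays of equal size.

```python
import numpy as np

def demodulate(I1, I2, I3):
    """Pixel-wise AC and DC modulation amplitudes, following Eqs. (1) and (2)."""
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (I1 - I2) ** 2 + (I2 - I3) ** 2 + (I3 - I1) ** 2)
    m_dc = (I1 + I2 + I3) / 3.0
    return m_ac, m_dc
```

Dividing these amplitudes by those measured on a reference material of known optical properties then yields the diffuse reflectance values used in the look-up step described above.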
Combining shape information with optical property information would give valuable diagnostic information to the clinician to determine optimal treatment for patients. Developing an SFDI system to determine optical properties and shape in clinical environments, both _ex vivo_ and _in vivo_, has many challenges associated with it, such as determination of optimum illumination source placement and determination of optimum illumination patterns. Here, we present a design and simulation system using free, open-source 3D modelling and rendering package _Blender_, that can implement SFDI for simulation of absorption, scattering and shape. We first show how to use _Blender_ to model a customisable scattering and absorbing material with in-built _Blender_ material nodes. We then show how to construct a virtual characterisation system for the absorption density, \(A_{\rho}\), and scattering density, \(S_{\rho}\), of this material using two approaches: a double integrating sphere (DIS) [36] and an SFDI system. For both approaches, we validate the accuracy of retrieved optical properties and show how this can improved by generating an empirically derived LUT from the DIS in-situ data. Next, we present two illustrative example use cases for our system. First, we show that the simulated SFDI system enables reconstruction of scattering, absorption and shape of planar geometry samples mimicking cancerous and pre-cancerous conditions such as squamous cell carcinoma and Barrett's Oesophagus respectively. Second, we demonstrate, for the first time, a novel illumination scheme tailored for non-planar, tubular geometries (such as inside a lumen) where the spatial frequency is constant throughout the length of the tube such that the optical properties can be accurately obtained. To improve accuracy, we longitudinally section the tube and create separate look-up tables for each section, a straight-forward task in our system. We show that this customised illumination can detect changes in absorption and scattering properties within a tube of biologically relevant material, providing a potential design for future SFDI systems. ## 2 Methods ### Material simulation The use of 3D modelling or CAD software to simulate conventional optical imaging with a light source, 3D objects and a camera is well-established. The key challenges for designing an SFDI simulation system in such software is ensuring accurate, calibrated simulation of optical scattering and absorption, and designing appropriately structured illumination patterns. To achieve the most physically realistic ray-traced renderings in _Blender_, some optimisation of the render settings is required. Within the ray-tracing engine Cycles, which physically traces rays of light, the maximum number of bounces a light ray can travel before the simulation terminates can be set. We set this value to 1024, the highest allowed. The number of samples to render per pixel in the image was set to 1000. Clamping of direct and indirect light, which limits the maximum intensity a pixel can have, was disabled by setting both to 0. Colour management, which is typically used to make visually appealing images but introduces unwanted artefacts such as gamma correction, was disabled by setting the display device to _'None'_. View transform was set to _'Standard'_ to ensure no extra conversions were applied to the resulting images. The sequencer, which sets the the colour space, was set to _'Raw'_ to avoid unwanted colour balancing or further gamma correction. 
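For reference, the render settings listed above can also be applied programmatically through Blender's Python API (bpy); the property names below assume Blender 2.93's Cycles settings and should be checked against the version in use.

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Path-tracing settings described in the text
scene.cycles.max_bounces = 1024            # maximum number of light bounces
scene.cycles.samples = 1000                # samples rendered per pixel
scene.cycles.sample_clamp_direct = 0.0     # 0 disables clamping of direct light
scene.cycles.sample_clamp_indirect = 0.0   # 0 disables clamping of indirect light

# Disable colour management so pixel values remain physically meaningful
scene.display_settings.display_device = 'None'
scene.view_settings.view_transform = 'Standard'
scene.sequencer_colorspace_settings.name = 'Raw'
```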
For all images rendered, the camera exposure is adjusted to avoid saturation while maximising power of detected signal, but the images must then have their intensities corrected by following the equation: \[I_{output}(x,y)=I_{render}(x,y)\times 2^{t_{exposure}} \tag{3}\] where \(I_{output}\) is the exposure-corrected intensity we required, \(I_{render}\) is the raw value obtained following the render, and \(t_{exposure}\) is the exposure setting. Previous work has used a weighted mixture between transparent, sub-surface scattering and absorbing materials to create a composite material with the desired optical properties [24]. Though this approach works in many realistic operating regimes, it is limited because the sub-surface approximation applies only at surfaces and not in the entire material volume. Here, we therefore model the material more accurately using a volume shader, instead of surface shader, exploiting Blender's built-in volume absorption and volume scattering shaders. The absorption and scattering was varied by changing the density parameters of the nodes, \(A_{\rho}\) and \(S_{\rho}\) respectively. The anisotropy, \(g\), in the volume scatter node was set to 0.8, in line with the anisotropy values measured of the tissue within the GI junction [1] and with the value of \(g\) set in the Virtual Photonics Monte Carlo simulation software [37], which was used to generate a LUT. When capturing image data, we extract the red channel of the RGB colour images. However, because Blender supports tri-colour operation it can also provide physically realistic scattering at green and blue wavelengths if desired. In order to use a LUT generated from a Monte-Carlo simulation or the Diffusion Approximation, the semi-infinite thickness requirement must be met [38]. To set an appropriate thickness for the material to meet this property, a red sphere was placed behind the material and the parameters were varied until the sphere was not visible, i.e. the material was not transparent for a fixed thickness. For a material of 2 m thickness, this red sphere was not seen for \(A_{\rho}\geq 2\) when \(S_{\rho}=0\), and for \(S_{\rho}\geq 3\) when \(A_{\rho}=0\). These are therefore the limitations of the material in our system when used for SFDI. However, it is noted that this limitation could be circumvented by using an empirically derived LUT that is calibrated to a particular physical thickness. Our aim is to create a simulation of an SFDI system with biologically relevant samples, and so we have identified two disease states relevant for detection of cancer in the upper GI tract that have distinctive scattering and absorption properties: squamous cell carcinoma (SCC) and Barrett's Oesophagus (BO). SCC is formed of neoplastic cells invading the submucosal layer of tissue [39]. For simplicity, we model SCC with a spheroid but the flexibility of Blender allows for generation of arbitrary shapes. We modelled tumour spheroids using sphere meshes scaled to be 40 mm diameter. We note that the _'scale'_ parameter of the object in Blender should be reset when the desired size is reached to ensure proper behaviour with regard to scattering length scales along different dimensions. BO can be a pre-cursor to oesophageal adenocarcinoma, and occurs when the epithelium of the oesophagus begins transforming into a structure mimicking the the lining of the stomach [40]. 
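The volume material described above can be assembled through bpy roughly as follows, combining a Volume Absorption and a Volume Scatter node with an Add Shader feeding the material output's volume socket. The helper name is our own and the example densities are those used later for healthy oesophageal tissue; this is a sketch under the assumption of Blender 2.93 node identifiers, not code from the original work.

```python
import bpy

def make_turbid_material(name, absorption_density, scatter_density, anisotropy=0.8):
    """Scattering/absorbing volume material with densities A_rho and S_rho (see Sect. 2.1)."""
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()

    absorb = nodes.new('ShaderNodeVolumeAbsorption')
    absorb.inputs['Density'].default_value = absorption_density      # A_rho

    scatter = nodes.new('ShaderNodeVolumeScatter')
    scatter.inputs['Density'].default_value = scatter_density        # S_rho
    scatter.inputs['Anisotropy'].default_value = anisotropy          # g = 0.8

    add = nodes.new('ShaderNodeAddShader')
    out = nodes.new('ShaderNodeOutputMaterial')
    links.new(absorb.outputs['Volume'], add.inputs[0])
    links.new(scatter.outputs['Volume'], add.inputs[1])
    links.new(add.outputs['Shader'], out.inputs['Volume'])
    return mat

healthy = make_turbid_material("healthy_tissue", absorption_density=25, scatter_density=4236)
```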
To examine contrast, two materials were placed adjacent to one another: one with the optical properties of healthy oesophageal tissue and the other with the optical properties of BO. To simulate realistic gastrointestinal imaging, we consider two imaging geometries. The first simulates an 'up-close' view of a tumour on the wall of a large lumen and can be approximated by a flat geometry. However, to identify such structures during a typical endoscopy or to examine such structures in a smaller lumen, it is also necessary to consider a tubular geometry with a wide field-of-view. We therefore also consider the scenario of an SFDI system pointing down a tube, shown in Figure 1b. ### Calibration of material optical properties For SFDI measurements, a reference material of known optical properties is required to correctly calibrate the system response (as discussed in Sect 1). This requires determining the relationship between the material parameters in _Blender_ and the recovered absorption and reduced scattering coefficients. This can be done directly with an SFDI system through a 'trial and error' approach - the material parameters are adjusted until plots of diffuse reflectance vs. spatial frequency agree with theoretically derived curves [24]. However, this approach is imprecise and laborious. We therefore developed a more accurate approach which involves simulating a double integrating sphere (DIS) system in _Blender_[36]. This system consists of two hollow spheres, termed the'reflectance' sphere and 'transmittance' sphere, each with 100 mm diameter and 10 mm wall thickness. The material of these spheres is set to be highly reflective using the diffuse bi-directional scattering distribution shader with 0 roughness and colour of pure white. The reflectance sphere has an entry port and an exit port, with the sample located at the exit port, which are rectangular in shape with a 10 mm side length. The transmittance sphere has only an entry port, where the sample is located, of the same shape and size as the reflectance sphere exit port. The sample placed at the exit (sample) port of the reflectance sphere and the entry (sample) port of the transmittance sphere has a thickness of 1 mm. The material of the sample is that of the material described in Sect. 2.1. The input light source is a spot light of power 5 W, with a beam radius of 0.5 mm and a spot size of 6\({}^{\circ}\). The light is placed at the entry port of the reflectance sphere. Cameras were placed at the base of each of the spheres to act as detectors, with all pixels summed together (i.e. integrated over the detector area) to give a power value. For our initial tests, only the red channel is considered. A baffle is placed between the sample ports and the cameras to block specularly reflected light from the sample entering the camera detector. To perform normalisation, it is necessary to collect transmission and reflectance measures with a reflectance standard sample, no sample and with the light beam blocked. The reflectance standard sample is simulated using the diffuse bi-directional scattering distribution function (BSDF) shader in _Blender_ with a purely white colour and roughness set to 0. It is assumed that the normalised reflectance of this material is 0.9. For each captured image, the camera exposure was varied until the average intensity was approximately in the middle of the 0-255 range (i.e. 8-bit colour). This exposure was noted and corrected for using Eqn 3. 
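As a small utility, the exposure correction of Eq. (3) and the red-channel extraction mentioned in Section 2.1 can be combined as follows; the function name is ours and the equation is applied exactly as written above.

```python
import numpy as np

def corrected_red_channel(rgb_render, exposure_setting):
    """Red channel of a rendered RGB image with the exposure correction of Eq. (3) applied."""
    red = np.asarray(rgb_render, dtype=float)[..., 0]
    return red * 2.0 ** exposure_setting
```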
To determine the reduced scattering and absorption coefficients, a series of images is taken in the reflectance sphere and the transmittance sphere, and the normalised reflectance and transmittance are calculated respectively for varying sample material properties using the equations: \[M_{R}=r_{std}\frac{R_{2}(r_{s}^{direct},r_{s},t_{s}^{direct},t_{s})-R_{2}(0,0,0,0)}{R_{2}(r_{std},r_{std},0,0)-R_{2}(0,0,0,0)} \tag{4}\] \[M_{T}=\frac{T_{2}(r_{s}^{direct},r_{s},t_{s}^{direct},t_{s})-T_{2}(0,0,0,0)}{T_ {2}(0,0,1,1)-T_{2}(0,0,0,0)} \tag{5}\] where \(r_{std}\) is the normalised reflectance of the reflectance standard, \(R_{2}(r_{s}^{direct},r_{s},t_{s}^{direct},t_{s})\) and \(T_{2}(r_{s}^{direct},r_{s},t_{s}^{direct},t_{s})\) are reflectance and transmittance measurements respectively when the sample material is in place, \(R_{2}(r_{std},r_{std},0,0)\) is a reflectance measurement when the standard reflectance sample previously described is in place of the material, \(R_{2}(0,0,0,0)\) is a reflectance measurement when there is no sample present and the transmittance sphere is removed, \(T_{2}(0,0,1,1)\) is a transmittance measurement when light passes straight through the reflectance sphere when no sample is present into the transmittance sphere and \(T_{2}(0,0,0,0)\) is a transmittance measurement when the incident beam is blocked and there is no sample in the port. These normalised values were then input into an inverse adding doubling (IAD) algorithm to determine the optical properties [41]. We used nine data points in the ranges \(A_{\rho}:1-100\) and \(S_{\rho}:5000-20000\). These values were chosen as when input into the IAD algorithm, they gave optical properties within our range of interest. We captured images in our SFDI set-up for these same material parameters, with a camera placed 0.5 m above the sample of interest and a 5 W spot light source, acting as the projector, placed at a \(4^{\circ}\) offset to the camera to reduce any specular reflections. The camera and projector were placed at the same height from the sample, at \(0.035\) m apart. The optical properties in the up-close flat geometry were calculated using two different LUTs: a Monte Carlo generated LUT and an empirically-derived LUT. #### 2.2.1 Monte Carlo LUT The Monte Carlo (MC) LUT was generated using Virtual Photonics Monte Carlo simulation software [37], with absorption coefficients ranging from \(0-0.5\) mm\({}^{-1}\), and reduced scattering coefficients ranging from \(0-5\) mm\({}^{-1}\). The optical properties of the nine material values were calculated using a reference material of \(A_{\rho}=1\) and \(S_{\rho}=20000\) with the corresponding reference optical properties determined from the IAD algorithm for this specific material. #### 2.2.2 Empirically-derived LUT The empirically-derived LUT is able to correct for discrepancies between the SFDI and IAD measurements, which arise from the way anisotropy is implemented when mixing the absorption and scattering materials, leading to slight deviations in effective anisotropy constant (\(g\)). For our first simulation of an 'up-close' view of the wall of a large lumen, approximated as a flat geometry, we generated a modulation vs reflectance LUT as described by _Erickson et. al._[32]. 
We started with the same nine data points as before, captured the modulation and reflectance of these densities, and then did a first linear interpolation between these data points to increase the LUT from 9 data points to \(100\times 100\) data points, improving granularity of final optical properties. A second interpolation step, this time using bicubic interpolation, was then carried out to determine the optical properties of a sample of interest. We also performed an inverse calculation, such that if one has desired optical properties of interest, they may determine what \(A_{\rho}\) and \(S_{\rho}\) are required to produce these optical properties. For our second simulated situation of a wide field-of-view tubular geometry, we generated a LUT of AC modulation amplitude (normalised to reference) vs DC modulation amplitude (normalised to reference). This type of LUT that uses normalisation was chosen because in the tubular geometry there is a difference in intensity from the distal to proximal end of the tube, and therefore large intensity variation within the material. Dividing by the reference material with similar variation in intensity reduces this intensity variation in the resultant image and makes for a more accurate optical property calculation. ### Robust shape determination In addition to measuring optical properties, we would also like to exploit the fringe-projection approach to reconstruct 3D shape via fringe profilometry. To do this, we will consider a fringe projection pattern of the form: \[\psi(x,y)=\sin(\omega y+\phi) \tag{6}\] where \(\omega=2\pi f\) is the angular frequency of the projected pattern with spatial frequency \(f\). The sinusoidal pattern must be rotated \(90^{\circ}\) from the optical property measurements (which are of the form \(\psi(x,y)=\sin(\omega x+\phi)\)) such that the fringes show maximum sensitivity to surface variations [42]. This is because a change in vertical height now corresponds to a displacement in one axis from the centre of the projector, which in turn results in a phase shift of the sinusoid. This is a consequence of the small angle of the projector relative to the camera. If this displacement were instead along the fringes there would be no shift in phase observed. Typically, a single fringe image may be used to reconstruct height but, as with SSOP, this may require spatial filtering and hence incur a reduction in resolution. While single-shot methods may offer a speed advantage, their noise performance is typically worse. We therefore use a generalised approach for using \(N\) phase-shifted images to reconstruct height maps [43]. If the geometry of the system is precisely known, this phase can then be converted to height for each pixel in the image via the equation [33]: \[h(x,y)=\frac{l_{0}\Delta\phi(x,y)}{\Delta\phi(x,y)-2\pi f_{0}d} \tag{7}\] where \(l_{0}\) is the distance from the projector to the reference material, \(\Delta\phi\) is the phase difference between the actual phase (calculated) and the phase of the background reference plane, \(f_{0}\) is the spatial frequency of the projected pattern and \(d\) is the separation distance of the projector and camera. Because of the geometrical assumptions made in mapping phase to height, this approach cannot be straightforwardly applied to non-planar geometries for shape reconstruction. In non-planar geometries, reconstruction of exact physical height could instead be approximately deduced by comparison with a reference phantom, e.g. 
perfectly straight tube for a lumen geometry, or by applying advanced techniques such as deep-learning [44]. ### Development of projection pattern for tubular geometry Conventional SFDI systems with planar sinusoidal projections are suitable for planar samples, such as _ex vivo_ resected tissue specimens. However for _in vivo_ use, an SFDI system would typically need to be operated inside a tubular lumen, e.g. the gastrointestinal tract if in an endoscope. Using our _Blender_ simulation it is very simple to explore such a situation. We began by simulating a tube of length 250 mm with an outer diameter of 80 mm and an inner diameter of 20 mm. A 120 mW spot light source was placed at a distance of 100 mm from the top of the tube and projected a flat sinusoidal pattern down the tube. This naive approach creates a non-uniform spatial frequency pattern throughout the length of the tube which makes reconstructing accurate optical properties challenging (see Fig 3a). Therefore, we developed a process to create a more suitable illumination pattern for other imaging geometries and demonstrated for the test case of a tube. First, the material of the tube was set to be highly reflective using a pre-existing material node of diffuse BSDF with a roughness of 0 and a shade of pure white. Next, the surface of the tube was 'unwrapped' within _Blender_ using the UV mapping tool, resulting in a flattened map of the inside of the tube. A sinusoidal pattern of the desired phase and spatial frequency was then applied to this flat surface. Once applied, the material is then wrapped, such that the inside of the tube now has a uniform spatial frequency throughout its length. 1 W light sources were placed equally throughout the tube such that the illumination intensity is uniform looking down the tube at the top. Here, we evenly distributed 40 point sources down the 250 mm tube. A camera placed 110 mm above the top of the tube then captured an image of the concentric circle illumination pattern. This image was then exported to _Python_ where a normalisation was applied to ensure that the sinusoid pixel values vary across the maximum range for projection (\(0-255\)). This process was carried out for sinusoidal patterns of a fixed spatial frequency at 3 different phase shifts. Then, the material of the tube can be reverted to the original scattering/absorbing material described in Sect 2.1. Finally, the normalised images of the patterned tube are then used as the new projection patterns, which are projected onto the tube with a 5 W light source. The new projection pattern and resultant tube cross section is shown in Fig 3b. This process can be considered a 'pre-distortion' of the projected pattern to produce more uniform spatial frequencies and could alternatively be computed using analytically-derived formulae, or by direct inverse computation using a ray-tracing engine. However, our approach here is very simple to implement and is highly effective. These modified projection patterns can then be used for SFDI imaging as there is a now a uniform spatial frequency pattern within the geometry length. However, the tubular geometry inherently allows less light to reach the distal end of the tube and also less light to be reflected back as only a small range of angles can escape the tube via the opening. The projector placement, at a large angle to the normal of the tube surface, also creates different incidence angles along the length of the tube. 
The empirically-derived LUT approach still therefore produced substantially different optical properties along the length of the tube. As the LUT is constructed by taking the mean of pixels on the tube wall, this is an inevitable result of using a single LUT for the whole tube. To improve these results, we developed a sectioning approach. Instead of using a single LUT as before, we sectioned the tube (lengthwise) into five different longitudinal subsections and used a different LUT for each section. The five sections were selected as regions that showed a mean intensity difference \(>10\) relative to other sections. Figure 3: Comparing tubes with (a) planar sinusoidal pattern and (b) our novel illumination pattern. Top left shows the image being projected from the projector (highlighted yellow in complementary images). Bottom left shows the view of the projection pattern down the tube, with the red dashed line indicating where the tube is cut to view the cross section in the right panel. ## 3 Results and Discussion ### Material simulation In the DIS simulation, \(A_{\rho}\) and \(S_{\rho}\) were varied, the reflectance and transmission values were obtained, and these were input into the IAD algorithm, which computes the optical properties of the material. This was repeated until optical properties within our range of interest were obtained. Following this process, it was found that realistic ranges for these parameters were \(1\leq A_{\rho}\leq 100\) and \(5000\leq S_{\rho}\leq 20000\). We selected the material with \(A_{\rho}=1\) and \(S_{\rho}=20000\), with optical properties \(\mu_{a}=0.026\) mm\({}^{-1}\) and \(\mu_{s}'=4.75\) mm\({}^{-1}\), to be the reference material for the SFDI measurements using a Monte Carlo generated LUT as described in Sect 2.2.1. The results from the SFDI measurements are shown in Fig 4. We note from Fig 4(a) that there is a discrepancy between the results from the IAD and the SFDI Monte Carlo LUT calculations, particularly at low \(S_{\rho}\) values. We also note a small cross-coupling between absorption and scattering: increasing the scattering density reduces the recovered absorption coefficient. The reason for the first of these two irregularities can be seen by examining the implementation of the material in Blender. The scattering and absorbing materials are combined using an 'add' shader which equally weights both materials and adds them together. Because the absorbing component of the material does not display anisotropy, unlike the scattering component, the combination of the two components slightly modifies the 'effective' anisotropy of the whole material to a value different from the nominal \(g\) of the scattering component. This is the cause of the small discrepancy between the two methods of measuring scattering. The cross-coupling between scattering and absorption may arise in part because increased absorption reduces the accuracy of scattering measurements, as there will be fewer 'scattering' events simulated for each ray before it is absorbed. The effect observed here is comparatively small and so may be neglected for the purposes of designing SFDI systems. However, we note that such cross-coupling is observed, often more strongly, when imaging experimental phantoms. To account for these discrepancies, we introduce the empirically-derived LUT described in Sect 2.2.2, with the resultant calculated optical properties displayed in Fig 4. Using this we are able to select the optical properties we wish to simulate and input the associated \(A_{\rho}\) and \(S_{\rho}\) values.
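To make the empirically derived LUT concrete, the following sketch (ours, not the authors' code) implements the two-step scheme of Sect 2.2.2 using SciPy's scattered-data interpolators: a linear upsampling of the nine calibration points onto a dense grid, followed by a cubic lookup standing in for the bicubic step. Array and function names are placeholders.

```python
import numpy as np
from scipy.interpolate import griddata

def build_dense_lut(meas, prop, n=100):
    """Linearly upsample one optical property (e.g. mu_a), known at the nine
    calibration points, onto a dense n x n grid of (AC, DC) amplitudes."""
    ac = np.linspace(meas[:, 0].min(), meas[:, 0].max(), n)
    dc = np.linspace(meas[:, 1].min(), meas[:, 1].max(), n)
    grid_ac, grid_dc = np.meshgrid(ac, dc)
    dense = griddata(meas, prop, (grid_ac, grid_dc), method="linear")
    return grid_ac, grid_dc, dense

def lookup(grid_ac, grid_dc, dense, sample_ac, sample_dc):
    """Cubic lookup of the dense LUT at the measured sample amplitudes."""
    pts = np.column_stack([grid_ac.ravel(), grid_dc.ravel()])
    vals = dense.ravel()
    ok = ~np.isnan(vals)                 # drop points outside the convex hull
    return griddata(pts[ok], vals[ok], (sample_ac, sample_dc), method="cubic")

# The inverse calculation simply swaps inputs and outputs, e.g.
#   A_rho = griddata(props, a_rho_values, (target_mu_a, target_mu_s), method="cubic")
# so that desired optical properties map back to the densities producing them.
```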
We can then simulate materials of various shapes and give them specific optical properties, which is ultimately the most relevant feature required for designing and testing new SFDI systems. ### Simulation of typical gastrointestinal conditions in up-close planar geometry Fig 5 shows the optical property and height maps generated for a 40 mm diameter simulated polyp, with an absorption coefficient higher and a reduced scattering coefficient lower than those of the surrounding healthy tissue, simulating squamous cell carcinoma. At 635 nm, the absorption coefficient of squamous cell carcinoma, 0.12 mm\({}^{-1}\), is much greater than that of healthy oesophageal tissue, 0.058 mm\({}^{-1}\), while its reduced scattering coefficient, 0.64 mm\({}^{-1}\), is less than that of healthy oesophageal tissue, which is typically 0.75 mm\({}^{-1}\) [39]. To recreate this behaviour we therefore simulated healthy background tissue with \(A_{\rho}=25\) and \(S_{\rho}=4236\), and we simulated a polyp with \(A_{\rho}=96\) and \(S_{\rho}=3721\). The simulated polyp has a height (from the base material) of 40 mm. Fig 5e shows a successful height map generation from fringe profilometry measurements. Fig 5c,d,g,h show successful recovery of optical properties, demonstrating the ability of this method to simulate and image such objects. Figure 4: (a) Absorption and (b) reduced scattering coefficient vs scattering density, \(S_{\rho}\), calculated for varying absorption densities, \(A_{\rho}\), via the IAD algorithm (solid line), the SFDI Monte Carlo LUT (dashed line) and the SFDI empirically derived LUT (dotted line). The error bars represent the standard deviation across the calculated \(500\times 500\) pixel optical property map. We note that the empirically derived LUT produces results closer to the expected values, which is because it accounts for some of the discrepancies in our tissue simulation as described earlier. However, the Monte Carlo LUT still provides high contrast between the squamous cell carcinoma and the background, which is arguably more important for wide-field diagnostic applications. We note that because the surface profile information is available, the optical property accuracy may be improved by applying surface profile correction during optical property determination [42]. Fig 6 shows the optical property maps generated for a segment of Barrett's oesophagus next to a segment of healthy oesophageal tissue. The tissue properties are designed to exhibit similar absorption coefficients, while the reduced scattering coefficient of the simulated BO is less than that of the adjacent healthy oesophageal tissue. At 635 nm, the absorption coefficient of Barrett's oesophagus with mild chronic inflammation, 0.057 mm\({}^{-1}\), is similar to that of healthy oesophageal tissue, while its reduced scattering coefficient, 0.51 mm\({}^{-1}\), is much less than that of healthy oesophageal tissue [39]. We simulated healthy oesophageal tissue as before with \(A_{\rho}\) = 25 and \(S_{\rho}\) = 4236, and Barrett's oesophagus with \(A_{\rho}\) = 22 and \(S_{\rho}\) = 3215. Fig 6c,d,f and g show these optical properties are recovered as expected, demonstrating the capability of the simulation system to differentiate between tissue types.
We observe a close match between the Monte Carlo and empirically derived LUT results, which may arise due to the ideal geometric condition in which the samples are totally flat, thereby reducing artefacts. We note that at the intersection region of the two simulated tissue types there is a spike in both optical properties, which results from effects at the interface and a small air gap that is present. Figure 5: Simulated squamous cell carcinoma as a spheroid on a background of healthy oesophageal tissue showing (a) white light image and (e) reconstructed height map, with (b) expected absorption coefficient, \(\mu_{a}\), (c) \(\mu_{a}\) recovered with MC LUT, (d) \(\mu_{a}\) recovered with empirically derived LUT, (f) expected reduced scattering coefficient, \(\mu_{s}'\), (g) \(\mu_{s}'\) recovered with MC LUT and (h) \(\mu_{s}'\) recovered with empirically derived LUT. Scale bar = 20 mm. ### Simulation of optical property variation in tubular geometry The results presented here were taken using our modified projection pattern, such that there was a uniform spatial frequency at the material surface, enabling accurate optical property determination. This approach allows a geometry of any size to be simulated and the desired illumination patterns of interest to be obtained for demodulation at the material surface. We initially recovered the optical properties of the tube material via nearest-neighbour interpolation using our Monte Carlo generated LUT described in Sect 2.2. We use nearest-neighbour interpolation in this situation for robustness to points outside the convex hull of the LUT. This gives the results of Figures 7 and 8 a quantized appearance. However, because it is relatively easy to generate more sample images using _Blender_, the sample set could be expanded, given sufficient simulation time, such that cubic interpolation could be used reliably. For the measurements here, the measured AC modulation amplitude was higher than expected, resulting in overestimation of the scattering coefficients for low \(S_{\rho}\) value materials. This difference arises because of the different geometry in the tubular case compared to the planar case. Here, the camera and projector are placed directly at the proximal end of the tube at no offset angle to one another, meaning the camera may be more prone to detecting specular reflections. This results in a higher AC modulation amplitude and an offset in the recovered reduced scattering coefficient. To overcome this issue, we introduced the empirically derived LUT described for the tubular geometry in Sect 2.2. Nearest-neighbour interpolation was again used in this case. This was somewhat successful; however, it was not applicable along the whole length of the tube. Significant improvement was achieved when sectioning the LUT as described in Sect 2.4, shown in Fig 7. We calculated, over six different material values, that the sectioned LUT method reduced the relative error in the calculated absorption coefficient by 50% and reduced the relative error in the calculated reduced scattering coefficient from 8% to 3%.
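A possible implementation of the sectioned nearest-neighbour lookup is sketched below; it is only an illustration under the assumption that each pixel has already been assigned a longitudinal section label, and all names are ours.

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

def sectioned_lookup(ac_map, dc_map, section_id, luts):
    """Recover an optical-property map for a tube image using one
    nearest-neighbour LUT per longitudinal section.

    ac_map, dc_map : 2-D arrays of normalised AC and DC modulation amplitudes
    section_id     : integer 2-D array labelling the longitudinal section (0..4)
    luts           : list of (calibration_points, property_values) per section
    """
    out = np.full(ac_map.shape, np.nan)
    for s, (pts, vals) in enumerate(luts):
        interp = NearestNDInterpolator(pts, vals)  # robust outside the convex hull
        mask = section_id == s
        out[mask] = interp(ac_map[mask], dc_map[mask])
    return out
```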
Differentiating between optical properties inside a lumen can be useful for detecting different diseases. To show the capability of the method to differentiate between optical properties inside a lumen, we simulated a tube with two different materials: three quarters had material properties \(A_{\rho}=100\) and \(S_{\rho}=5000\), for expected optical properties of \(\mu_{a}=0.121\) mm\({}^{-1}\) and \(\mu_{s}'=0.93\) mm\({}^{-1}\), and the top right quarter had material properties \(A_{\rho}=50\) and \(S_{\rho}=20000\), for expected optical properties \(\mu_{a}=0.066\) mm\({}^{-1}\) and \(\mu_{s}'=4.69\) mm\({}^{-1}\). The results are shown in Fig 8. We note that there is a distinct difference in material properties between the top right quarter and the rest of the tube. However, there are artefacts present in the main part of the tube that do not match the optical properties. Due to the geometry of the system, it is likely that light is reflecting off multiple surfaces before reaching the camera, which may contribute to these artefacts. Figure 6: Simulated Barrett's oesophagus with mild chronic inflammation (left half of sample) adjacent to healthy oesophageal tissue (right half of sample) showing (a) white light image, (b) expected absorption coefficient, \(\mu_{a}\), (c) \(\mu_{a}\) recovered with MC LUT and (d) \(\mu_{a}\) recovered with empirically derived LUT, (e) expected reduced scattering coefficient, \(\mu_{s}'\), (f) \(\mu_{s}'\) recovered with MC LUT and (g) \(\mu_{s}'\) recovered with empirically derived LUT. Scale bar = 20 mm. Figure 8: Imaging different material types within a complex geometry, analogous to a lumen, showing (a) white light image of the tube, (b) expected absorption coefficient of the tube material, (c) measured absorption coefficient, (d) expected reduced scattering coefficient of the tube material and (e) measured reduced scattering coefficient. Tube inner diameter = 20 mm. Figure 7: Comparison of sectioned and un-sectioned empirically derived LUTs for a tube wall material of \(\mu_{a}\) = 0.107 mm\({}^{-1}\) and \(\mu_{s}'\) = 4.64 mm\({}^{-1}\): (a) white light image of the tube, (b) expected absorption coefficient, \(\mu_{a}\), (c) measured \(\mu_{a}\) using un-sectioned LUT, (d) measured \(\mu_{a}\) using _sectioned_ LUT, (e) expected reduced scattering coefficient, \(\mu_{s}'\), (f) \(\mu_{s}'\) measured using un-sectioned LUT, (g) \(\mu_{s}'\) measured from _sectioned_ LUT. Tube inner diameter = 20 mm. ## 4 Discussion These results demonstrate the capability of the _Blender_ system to simulate different tissue types of various shapes, in different imaging geometries, and the capability of the SFDI imaging system to successfully determine their optical properties. This can be useful for many applications. Software such as _OptogenSIM_[45], _FullMonte_[46] and _ValoMC_[47] perform Monte Carlo simulations in biologically relevant samples; however, they suffer from a variety of limitations, such as the inability to generate realistic, complex sample geometries within the software and the lack of full consideration of lighting conditions or camera positions. The presented SFDI simulation model can overcome many of the limitations of existing software by enabling custom configuration of the illumination source, the camera position and orientation, the spatial frequency, and the illumination pattern. This allows the introduction of real-world artefacts that help to test the limitations of a new system design.
SFDI can have various sources of error [48], such as errors arising from assumptions made in the selected light propagation model, depth-dependent differences in optical properties, divergence of the projection beam, and changes in spatial frequency with distance from the projector to the sample. These sources of error can be simulated in _Blender_ and their optimum solutions determined in a cost-effective, timely manner. Once solutions are found, they can be applied to current systems to improve system accuracy. We envisage this framework being used to simulate both novel and current systems, as a means to optimise an SFDI system, speed up the progression of system development, and test the possibilities and limitations of the technique in different situations. Another potential application of this system could be to generate large data sets using SFDI, which may then be used in lieu of experimental data, eliminating the time and cost of data collection. Such data sets could be used to improve optical property uncertainty estimates by building large look-up tables for specific system setups [49], to correct optical property profile measurements using deep learning [50], and have previously shown promise for training neural networks for SFDI demodulation [25]. ## 5 Conclusion We have further developed an SFDI and fringe profilometry imaging system in the open-source graphics software _Blender_, which enables the simulation of typical gastrointestinal conditions with specific absorption and reduced scattering coefficients in tubular imaging geometries relevant to the gastrointestinal tract. We have shown the simulation of objects of specific shape, size and optical properties, and the successful imaging of these objects to recover maps of height, absorption and scattering. We anticipate our results will aid in the design of future SFDI systems, e.g. miniaturised systems, by enabling the testing of different illumination geometries and patterns. Funding. The authors acknowledge support from a UKRI Future Leaders Fellowship (MR/T041951/1) and an EPSRC Ph.D. Studentship (2268555). Acknowledgments. We acknowledge open-source software resources offered by the Virtual Photonics Technology Initiative ([https://virtualphotonics.org](https://virtualphotonics.org)), at the Beckman Laser Institute, University of California, Irvine. ## Disclosures. The authors declare no conflicts of interest. ## Data Availability Statement. The data presented in this study are available from the following source: [DOI to be inserted later].
2310.12514
Phases of 4He and H2 adsorbed on doped graphene
The influence of attractive boron impurities, embedded on a graphene sheet, on the phase diagrams of $^4$He and H$_2$ adsorbed on top was studied using the diffusion Monte Carlo method. The doping of graphene was performed by distributing the boron atoms following the same pattern found in an experimentally synthesized substrate. Our results show that while the different incommensurate solid phases of both adsorbates remain largely unchanged after doping, the liquid/gas equations of state are significantly different from the ones on pristine graphene. Doping graphene produces new translationally invariant stable phases for $^4$He, depending on the concentration of boron impurities, but leaves the H$_2$ ground state solid. In addition, several new registered phases appear for both adsorbates.
M. C. Gordillo, J. Boronat
2023-10-19T06:28:48Z
http://arxiv.org/abs/2310.12514v1
# Phases of \({}^{4}\)He and H\({}_{2}\) adsorbed on doped graphene ###### Abstract The influence of attractive boron impurities, embedded on a graphene sheet, on the phase diagrams of \({}^{4}\)He and H\({}_{2}\) adsorbed on top was studied using the diffusion Monte Carlo method. The doping of graphene was performed by distributing the boron atoms following the same pattern found in an experimentally synthesized substrate. Our results show that while the different incommensurate solid phases of both adsorbates remain largely unchanged after doping, the liquid/gas equations of state are significantly different from the ones on pristine graphene. Doping graphene produces new translationally invariant stable phases for \({}^{4}\)He, depending on the concentration of boron impurities, but leaves the H\({}_{2}\) ground state solid. In addition, several new registered phases appear for both adsorbates. ## I Introduction First and second layers of \({}^{4}\)He and H\({}_{2}\) adsorbed on graphite have been profusely studied both from the experimental [1; 2; 3; 4; 5; 6; 7; 8] and theoretical [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] points of view, the main motivation being that both are archetypal two-dimensional (2D) many-body quantum systems. Both species have registered \(\sqrt{3}\times\sqrt{3}\) solids as ground states, which change upon further loading into incommensurate lattices before promotion to a second layer. In the limit \(T=0\), those second layers are different for both adsorbates: \({}^{4}\)He forms a 2D liquid that solidifies upon loading, while H\({}_{2}\) always remains solid. Both second layer solids could present supersolidity in a narrow density range [21; 22]. The same is true for molecular hydrogen adsorbed on a second layer of a narrow carbon nanotube. On the other hand, the second layer of \({}^{4}\)He on a nanotube is always a liquid [23]. Reducing the dimensionality of the system has been suggested as a possible way of getting a liquid H\({}_{2}\) phase due to a decrease in the interaction strength. However, two-dimensional molecular hydrogen has always been shown to be a solid because of the relatively large H\({}_{2}\)-H\({}_{2}\) attractive interaction strength, which makes bulk H\({}_{2}\) solidify at \(T>0\). Nevertheless, its low mass and associated large zero point motion would eventually make H\({}_{2}\) a superfluid if a liquid could be supercooled down to low enough temperatures. One possibility to frustrate the first-layer solid phase would be to introduce some kind of defect into the substrate with respect to standard graphene. For instance, one can deposit H\({}_{2}\) on a carbon glass surface [24]. Diffusion Monte Carlo calculations of molecular hydrogen adsorbed on such an environment indicate that the registered phase is substituted by a superglass [25] with a small but still finite superfluid fraction. Alternatively, one can think about what would happen on novel substrates such as graphyne [26] or biphenylene [27] sheets, where some new commensurate phases were found for \({}^{4}\)He but which have not yet been explored as H\({}_{2}\) adsorbents. With this previous work in mind, one realizes that an unexplored possible way to obtain stable liquid H\({}_{2}\) could be to consider its adsorption on a surface with impurities embedded in the substrate, in particular, to load H\({}_{2}\) on a graphene sheet in which part of the carbon atoms has been substituted by another species. 
In this work, we will use the experimentally synthesized substrate of Ref. [28], which includes boron impurities in a graphene layer. In order to disentangle the effect of the impurity-H\({}_{2}\) interactions from that of their location within the surface, we considered both the substrate of Ref. [28] and another setup that includes a pair of boron impurities located at opposite vertices of a graphene hexagon. We used a pair of impurities instead of a single one due to the particular structure of the boron substrate (see below). Our study was extended to \({}^{4}\)He to compare the influence of the adsorbate-adsorbate interaction on the final results. ## II Method Our method is fully microscopic and starts with the Hamiltonian of the system, written as \[H=\sum_{i=1}^{N}\left[-\frac{\hbar^{2}}{2m}\nabla_{i}^{2}+V_{\rm ext}(x_{i},y_{i},z_{i})\right]+\sum_{i<j}^{N}V_{\rm pair}(r_{ij})\, \tag{1}\] where \(x_{i}\), \(y_{i}\), and \(z_{i}\) are the coordinates of each of the \(N\) adsorbate particles with mass \(m\). The potential \(V_{\rm ext}(x_{i},y_{i},z_{i})\) accounts for the interaction between each atom or molecule and all the individual atoms in the rigid graphene-with-impurities layer. Those potentials are of Lennard-Jones (LJ) type, with standard parameters taken from Ref. [29] in the case of He-C, and from
In this work, we will use the experimentally synthesized substrate of Ref. [28], that includes boron impurities in a graphene layer. In order to disentangle the effect of the impurity-H\({}_{2}\) interactions from that of their location within the surface, we considered both the substrate of Ref. [28] and another setup that includes a pair of boron impurities located in opposite vertices of a graphene hexagon. We used a pair of impurities instead of a single one due to the particular structure of the boron substrate (see below). Our study was extended to \({}^{4}\)He to compare the influence of the adsorbate-adsorbate interaction in the final results. ## II Method Our method is fully microscopic and starts with the Hamiltonian of the system, written as \[H=\sum_{i=1}^{N}\left[-\frac{\hbar^{2}}{2m}\nabla_{i}^{2}+V_{\rm ext}(x_{i},y_ {i},z_{i})\right]+\sum_{i<j}^{N}V_{\rm pair}(r_{ij})\, \tag{1}\] where \(x_{i}\), \(y_{i}\), and \(z_{i}\) are the coordinates of the each of the \(N\) adsorbate particles with mass \(m\). The potential \(V_{\rm ext}(x_{i},y_{i},z_{i})\) accounts for the interaction between each atom or molecule and all the individual atoms in the rigid graphene-with-impurities layer. Those potentials are of Lennard-Jones (LJ) type, with standard parameters taken from Ref. [29] in the case of He-C, and from
2308.08706
Bures geodesics and quantum metrology
We study the geodesics on the manifold of mixed quantum states for the Bures metric. It is shown that these geodesics correspond to physical non-Markovian evolutions of the system coupled to an ancilla. Furthermore, we argue that geodesics lead to optimal precision in single-parameter estimation in quantum metrology. More precisely, if the unknown parameter is a phase shift proportional to the time parametrizing the geodesic, the estimation error obtained by processing the data of measurements on the system is equal to the smallest error that can be achieved from joint detections on the system and ancilla, meaning that the ancilla does not carry any information on this parameter. The error can saturate the Heisenberg bound. In addition, the measurement on the system bringing most information on the parameter is parameter-independent and can be determined in terms of the intersections of the geodesic with the boundary of quantum states. These results show that geodesic evolutions are of interest for high-precision detections in systems coupled to an ancilla in the absence of measurements on the ancilla.
Dominique Spehner
2023-08-17T00:00:39Z
http://arxiv.org/abs/2308.08706v1
# Bures geodesics and quantum metrology ###### Abstract We study the geodesics on the manifold of mixed quantum states for the Bures metric. It is shown that these geodesics correspond to physical non-Markovian evolutions of the system coupled to an ancilla. Furthermore, we argue that geodesics lead to optimal precision in single-parameter estimation in quantum metrology. More precisely, if the unknown parameter is a phase shift proportional to the time parametrizing the geodesic, the estimation error obtained by processing the data of measurements on the system is equal to the smallest error that can be achieved from joint detections on the system and ancilla, meaning that the ancilla does not carry any information on this parameter. The error can saturate the Heisenberg bound. In addition, the measurement on the system bringing most information on the parameter is parameter-independent and can be determined in terms of the intersections of the geodesic with the boundary of quantum states. These results show that geodesic evolutions are of interest for high-precision detections in systems coupled to an ancilla in the absence of measurements on the ancilla. ## I Introduction. Geodesics play a prominent role in classical mechanics and general relativity as they describe the trajectories of free particles and light. In contrast, at first sight they are not relevant in quantum mechanics. In the quantum theory, the notion of trajectories in space or space-time has to be abandoned. Instead, quantum dynamics are described by time evolutions of quantum states. Such states are given by density matrices \(\rho\) forming a manifold of dimension \(n^{2}-1\), where \(n\) is the dimension of the system Hilbert space \(\mathcal{H}\) (which we assume here to be finite). Different distances can be defined on this manifold. A distance appearing naturally in various contexts in quantum information theory is the Bures arccos distance \(d_{\text{B}}\) [1]. This distance is a good measure of the distinguishability of quantum states, being a simple function of the fidelity [2]. Furthermore, it has a clear information content; in particular, it satisfies the data-processing inequality [1] and it is closely related to the quantum Fisher information quantifying the maximal amount of information on a parameter in a quantum state [3; 4]. Unlike the trace distance, \(d_{\text{B}}\) is a Riemannian distance, i.e., it has an associated metric \(g\) giving the square infinitesimal distance \(\text{d}s^{2}=(g_{\rho})_{\alpha\beta}\,\partial_{\alpha}\rho\,\partial_{\beta}\rho\), where \(\partial_{\alpha}\rho\) is the derivative of \(\rho\) with respect to the coordinate \(\alpha\) and we make use of Einstein's summation convention. The manifold of quantum states \(\mathcal{E}_{\mathcal{H}}\) equipped with the Bures metric is a Riemannian manifold, on which one can define geodesics. In this work, we study these geodesics and analyze their usefulness in quantum metrology. Metrology is the science of devising schemes that extract estimates of the parameters associated with a system that are as precise as possible. In quantum metrology, the estimation of the unknown parameters (for instance, a phase shift in an interferometer) is obtained from the detection outcomes on quantum probes undergoing a parameter-dependent transformation process (for instance, the propagation in the two arms of the interferometer). 
It has been recognized that estimation errors scaling like the inverse of the number \(N\) of probes (Heisenberg limit) can be achieved using entangled probes, yielding an improvement by a factor of \(1/\sqrt{N}\) with respect to the error for classical probes [5; 6; 7; 8; 9]. Quantum-enhanced precisions have been observed experimentally in optical systems [10; 11; 12], trapped ions [13; 14], and Bose-Einstein condensates [15; 16]. In these experiments, noise and losses lead to dephasing and entanglement losses, thus limiting the precision. In order to account for such limiting effects, several authors have studied parameter estimation in quantum systems coupled to their environment [17; 18; 19; 20; 21; 22; 23; 24; 25]. It has been argued that for certain dephasing and photon loss processes, the \(\sqrt{N}\)-improvement can be lost, the best precision having the classical scaling for large \(N\) albeit with a larger prefactor [17; 18; 19; 20; 21; 22]. On a general ground, one expects that the environment coupling should increase the estimation error since information on the parameter can be lost in the environment and measurements on the latter are not possible. The results of the aforementioned references are, however, model-dependent. The characterization of which couplings are more detrimental to the estimation precision is an open issue, even in single parameter estimation. We establish in this paper that (i) Bures geodesics are not purely mathematical objects but correspond to physical non-Markovian evolutions of the system coupled to its environment; (ii) if the transformation process on a probe is given by a geodesic and the estimated parameter is a phase shift \(x\) proportional to the time \(\tau\) parametrizing this geodesic then, in contrast to the aforementioned general expectation, the environment does not carry any information on \(x\). More precisely, the estimation precision is equal to the best precision which can be achieved from joint measurements on the probes and their environments, even if one can measure only the probes. We show that this precision can reach the Heisenberg limit for a large number of probes. We also show that (iii) there is an optimal measurement on the probes yielding the smallest error which is independent of the estimated parameter \(x\) and given in terms of the intersection states of the geodesic with the boundary \(\partial\mathcal{E}_{\mathcal{H}}\) of the manifold of quantum states. As a consequence of (i), the geodesics are physical processes that can be simulated in a quantum system coupled to an ancilla (playing the role of the environment). Actually, we prove that it is possible to engineer a system-ancilla coupling Hamiltonian such that a state on the geodesic corresponds to the system state after an interaction with the ancilla during a lapse of time \(\tau\). We give examples of quantum circuits implementing this Hamiltonian and the corresponding geodesic. Our results (ii) and (iii) show that geodesics are of practical interest in quantum metrology. Note that (ii) does not contradict the works [17; 18; 19; 20; 21; 22] because the detrimental effect of the coupling with the environment is demonstrated in these works for particular dephasing or loss processes. Our analysis relies on an application to the manifold of quantum states \(\mathcal{E}_{\mathcal{H}}\) of the concept of Riemannian submersions in Riemannian geometry [26]. 
On the way, previous results in the literature [27; 28] on the explicit form of the Bures geodesics for arbitrary Hilbert space dimensions \(n\) are revisited. We show that these results are incomplete as they miss the geodesic curves joining two quantum states along paths which are not the shortest ones. We derive the explicit forms of all geodesics joining two invertible states \(\rho\) and \(\sigma\) at arbitrary dimensions \(n\) and study the intersections of these geodesics with the boundary \(\partial\mathcal{E}_{\mathcal{H}}\). The rest of the paper is organized as follows. A summary of our main results is presented in Sec. II after a brief introduction to quantum parameter estimation. The mathematical background on Riemannian geometry and submersions is given in Sec. III. In Sec. IV, the explicit form of the Bures geodesics is derived and we study their intersections with the boundary \(\partial\mathcal{E}_{\mathcal{H}}\). In Sec. V, we show that the geodesics correspond to physical evolutions of the system coupled to an ancilla. The optimality of geodesics in quantum metrology is investigated in Sec. VI. Finally, our main conclusions and perspectives are drawn in Sec. VII. Two appendices contain some technical properties and proofs. ## II Main results In this section we describe our main results and orient the reader to the subsequent sections, where these results are presented with more mathematical details. ### Determination of the geodesics and their intersections with the boundary of quantum states The explicit form of the Bures geodesics has been derived in Refs. [27; 28] for arbitrary Hilbert space dimensions \(n<\infty\). The geodesic joining two invertible states \(\rho\) and \(\sigma\) determined in these references is given by \[\gamma_{\mathrm{g}}(\tau)=\frac{1}{\sin^{2}\theta}\bigg{(}\sin^{ 2}(\theta-\tau)\,\rho+\sin^{2}(\tau)\,\sigma \tag{1}\] \[+\sin(\theta-\tau)\sin(\tau)\big{(}\rho^{-1/2}|\sqrt{\sigma} \sqrt{\rho}|\rho^{1/2}+\mathrm{h.c.}\big{)}\bigg{)}\] with \(0\leq\tau\leq\theta=\arccos\sqrt{F(\rho,\sigma)}\), where \(F(\rho,\sigma)=(\mathrm{tr}\,|\sqrt{\sigma}\sqrt{\rho}|)^{2}\) is the fidelity between \(\rho\) and \(\sigma\), \(|O|=\sqrt{O^{\dagger}O}\) stands for the modulus of the operator \(O\), and \(\mathrm{h.c.}\) refers to the Hermitian conjugate. The geodesic (1) has a length \(\theta\) equal to the arccos Bures distance \(d_{\mathrm{B}}(\rho,\sigma)\), it is thus the shortest geodesic arc joining \(\rho\) and \(\sigma\). Recall that a curve is a geodesic if it has constant velocity and minimizes _locally_ the length of curves between two points. In Riemannian manifolds, there exists in general geodesics joining two points which do not follow the shortest path from one point to the other. For instance, there are two geodesic arcs joining two non-diametrically opposite points on a sphere, namely the two arcs of the great circle passing through them; the smallest arc is the shortest geodesic, which minimizes the length globally, and the largest arc is another geodesic with a length strictly larger than the distance between its two extremities. On the other hand, if the two points on the sphere are diametrically opposite, there are infinitely many geodesics joining them, which have all the same length. In a similar way, we show in Sec. IV that, depending on the two invertible mixed states \(\rho\) and \(\sigma\), there is either a finite or an infinite number of Bures geodesics joining \(\rho\) and \(\sigma\). 
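Although not part of the original derivation, a small numerical sketch of the shortest geodesic (1) may help the reader; it evaluates the formula exactly as printed (using `scipy.linalg.sqrtm`) and checks the endpoints for randomly drawn invertible states. All helper names are ours.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def random_state(n, rng):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def shortest_geodesic(rho, sigma, tau):
    """Evaluate the shortest Bures geodesic, Eq. (1), at parameter tau."""
    sr = sqrtm(rho)
    m = sqrtm(sr @ sigma @ sr)            # equals |sqrt(sigma) sqrt(rho)| up to conjugation
    theta = np.arccos(min(1.0, np.real(np.trace(m))))   # arccos of sqrt(fidelity)
    x = inv(sr) @ m @ sr                  # rho^{-1/2} |sqrt(sigma) sqrt(rho)| rho^{1/2}
    cross = x + x.conj().T
    g = (np.sin(theta - tau) ** 2 * rho + np.sin(tau) ** 2 * sigma
         + np.sin(theta - tau) * np.sin(tau) * cross) / np.sin(theta) ** 2
    return g, theta

rng = np.random.default_rng(0)
rho, sigma = random_state(3, rng), random_state(3, rng)
g0, theta = shortest_geodesic(rho, sigma, 0.0)
gth, _ = shortest_geodesic(rho, sigma, theta)
print(np.allclose(g0, rho), np.allclose(gth, sigma))   # both True up to numerics
```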
The explicit form of these geodesics is given by a formula generalizing (1) in Theorem 1 below. For generic invertible states \(\rho\) and \(\sigma\in\mathcal{E}_{\mathcal{H}}\), the number of geodesics is finite and equal to \(2^{n}\) (recall that \(n=\dim\mathcal{H}\)). The geodesics can be classified according to the number of times they bounce on the boundary of quantum states \(\partial\mathcal{E}_{\mathcal{H}}\) between \(\rho\) and \(\sigma\). The shortest geodesic (1) is the only geodesic starting at \(\rho\) and ending at \(\sigma\) without intersecting the boundary (but it does so if one extends it after \(\sigma\), as shown in [27]). Although these results are not the most original contribution of the paper, they form the starting point of the subsequent analysis. The method to determine the Bures geodesics is similar, albeit technically more involved, to textbook derivations of the Fubini-Study geodesics on the complex projective space \(\mathbf{CP}^{n}\) (manifold of pure quantum states) [26]. It relies on the notion of Riemannian submersions. The main observation is that the manifold of (mixed) quantum states can be viewed as the projection of a pure state manifold on an enlarged Hilbert space \(\mathcal{H}\otimes\mathcal{H}_{\mathrm{A}}\) describing the system coupled to an ancilla \(\mathsf{A}\), where the projection is the partial trace over the ancilla and \(\dim\mathcal{H}_{\mathrm{A}}\geq n\). The set of all purifications \(|\Psi\rangle\in\mathcal{H}\otimes\mathcal{H}_{\mathrm{A}}\) of \(\rho\) projecting out to the same density matrix \(\rho\) forms an orbit under the action of local unitaries on the ancilla. As noted by Uhlmann [2; 29], the Bures distance between \(\rho\) and \(\sigma\) is the norm distance between the corresponding orbits, that is, \(d_{\mathrm{Bures}}(\rho,\sigma)=\min\left\|\left|\Psi\right\rangle-\left|\Phi\right\rangle\right\|\) where the minimum is over all purifications \(|\Psi\rangle\) and \(|\Phi\rangle\) on the orbits of \(\rho\) and \(\sigma\), respectively. For such a distance, the geodesics \(\gamma_{\rm g}(\tau)\) joining \(\rho\) and \(\sigma\) are obtained by projecting onto \(\mathcal{E}_{\mathcal{H}}\) geodesics on the purification manifold having horizontal tangent vectors, as illustrated in Fig. 3. The latter geodesics are easy to determine since the metric on this manifold is the Euclidean metric (given by the scalar product) restricted to a unit hypersphere (since purifications are normalized vectors). More details on Riemannian submersions are given in Sec. III below. ### Geodesics correspond to physical evolutions One of the purposes of this paper is to show that the Bures geodesics are not only mathematical objects but correspond to physical dynamical evolutions that could in principle be realized in the laboratory. Let \(\gamma_{\rm g}(\tau)\) be a geodesic on \(\mathcal{E}_{\mathcal{H}}\) starting at \(\rho=\gamma_{\rm g}(0)\). Consider an ancilla system \(\mathsf{A}\) with Hilbert space \(\mathcal{H}_{\rm A}\), as described in the previous subsection. Let \(|\Psi\rangle\) be a purification of \(\rho\) on \(\mathcal{H}\otimes\mathcal{H}_{\rm A}\), i.e., \(\rho=\operatorname{tr}_{\mathsf{A}}|\Psi\rangle\langle\Psi|\), where \(\operatorname{tr}_{\mathsf{A}}\) is the partial trace over the ancilla. We show in Sec. V (see Theorem 2) that there exists a system-ancilla Hamiltonian \(H_{\rm g}\) such that \[\gamma_{\rm g}(\tau)=\operatorname{tr}_{\mathsf{A}}e^{-\mathrm{i}\tau H_{\rm g}}|\Psi\rangle\langle\Psi|\,e^{\mathrm{i}\tau H_{\rm g}}. 
\tag{2}\] In other words, \(\gamma_{\rm g}(\tau)\) is the system state at the (dimensionless) time \(\tau\), given that the system is coupled to the ancilla at time \(0\) and interacts with it up to time \(\tau\) with the Hamiltonian \(H_{\rm g}\). This Hamiltonian reads \[H_{\rm g}=-\mathrm{i}\big{(}|\Psi\rangle\langle\dot{\Psi}|-|\dot{\Psi} \rangle\langle\Psi|\big{)}\, \tag{3}\] where \(|\dot{\Psi}\rangle\) is a normalized vector satisfying the horizontality condition \[|\dot{\Psi}\rangle=H_{\rm S}\otimes\mathds{1}_{\mathsf{A}}|\Psi\rangle \tag{4}\] for some self-adjoint operator \(H_{\rm S}\) acting on the system such that \(\langle H_{\rm S}\otimes\mathds{1}_{\mathsf{A}}\rangle_{\Psi}=0\). Condition (4) can be interpreted geometrically as follows: \(|\dot{\Psi}\rangle\) is a vector in the tangent space at \(|\Psi\rangle\) which is orthogonal to the orbit \(\{\mathds{1}\otimes U_{\mathsf{A}}|\Psi\rangle\,;\,U_{\mathsf{A}}\) unitary on \(\mathcal{H}_{\mathsf{A}}\}\) of \(\rho\) under the unitary group on the ancilla. Note that \(\langle\Psi|\dot{\Psi}\rangle=0\). Since \(\rho\) can be chosen arbitrarily on \(\gamma_{\rm g}\) and all geodesics extended as closed curves intersect the boundary of quantum states \(\partial\mathcal{E}_{\mathcal{H}}\) (see Appendix B), one can without loss of generality assume that \(\rho\in\partial\mathcal{E}_{\mathcal{H}}\). If \(\gamma_{\rm g}\) has an intersection with \(\partial\mathcal{E}_{\mathcal{H}}\) given by a pure state \(\rho_{\psi}=|\psi\rangle\langle\psi|\), one can choose \(\rho=\rho_{\psi}\). Then the purifications of \(\rho\) are product states \(|\Psi\rangle=|\psi\rangle|\alpha_{0}\rangle\) and, as a consequence of (2), there is a smooth family of Completely Positive Trace Preserving (CPTP) maps \(\mathcal{M}_{\rm g,\tau}\) (quantum channels) such that \[\gamma_{\rm g}(\tau)=\mathcal{M}_{\rm g,\tau}(\rho)\, \tag{5}\] i.e., arbitrary states on the geodesic are obtained by applying \(\mathcal{M}_{\rm g,\tau}\) to \(\rho\) for some time \(\tau\). The quantum evolution \(\{\mathcal{M}_{\rm g,\tau}\}_{\tau\geq 0}\) is strongly non-Markovian. Actually, we show in Sec. V that this evolution is periodic in time, \[\mathcal{M}_{\rm g,\tau+2\pi}=\mathcal{M}_{\rm g,\tau}. \tag{6}\] For a system formed by \(d\) qubits coupled to \(d\) ancilla qubits, the geodesics can be implemented by the quantum circuit of Fig. 1(a), where \(U_{\mathsf{SA}}\) is a unitary operator on \(\mathcal{H}\otimes\mathcal{H}_{\mathsf{A}}\) such that \[|\Psi\rangle=U_{\mathsf{SA}}|0\rangle|0\rangle_{\mathsf{A}}\quad,\quad|\dot{ \Psi}\rangle=U_{\mathsf{SA}}|1\rangle|0\rangle_{\mathsf{A}}. \tag{7}\] Here, \(\{|k\rangle\}_{k=0}^{n-1}\) and \(\{|k\rangle\}_{k=0}^{n-1}\) denotes the computational bases of \(\mathcal{H}\simeq\mathbb{C}^{n}\) and \(\mathcal{H}_{\mathsf{A}}\simeq\mathbb{C}^{n}\) (with \(n=2^{d}\)). Indeed, denoting by \(\sigma_{y}^{(1)}\) the \(y\)-Pauli matrix acting on the first qubit and by \(\mathds{1}^{(2\ldots N)}\) the identity operator on the other qubits of the system, the Hamiltonian \[\widetilde{H}_{\rm g}=U_{\mathsf{SA}}\,\sigma_{y}^{(1)}\otimes\mathds{1}^{(2 \ldots N)}\otimes\mathds{1}_{\mathsf{A}}\,U_{\mathsf{SA}}^{\dagger} \tag{8}\] leaves the subspace \(\operatorname{span}\{|\Psi\rangle,|\dot{\Psi}\rangle\}\) invariant and coincides with the geodesic Hamiltonian \(H_{\rm g}\) on this subspace, see (3). 
Thus \[e^{-i\tau H_{\rm g}}|\Psi\rangle=U_{\mathsf{SA}}\,e^{-\mathrm{i}\tau\sigma_{y}^ {(1)}}\otimes\mathds{1}^{(2\ldots N)}\otimes\mathds{1}_{\mathsf{A}}\,|0 \rangle|0\rangle_{\mathsf{A}}. \tag{9}\] By (2), the state of the system at the output of the circuit is \(\gamma_{\rm g}(\tau)\). The two circuits of Fig. 1(b) and (c) give examples of entangling unitaries \(U_{\mathsf{SA}}\) implementing a geodesic \(\gamma_{\rm g}(\tau)\) through an arbitrary invertible state \(\rho=\gamma_{\rm g}(0)\). Actually, introducing its spectral decomposition \(\rho=\sum_{k}p_{k}|w_{k}\rangle\langle w_{k}|\), it is easy to check that (7) holds for both circuits, with \(|\Psi\rangle=\sum_{k}\sqrt{p_{k}}|w_{k}\rangle|k\rangle_{\mathsf{A}}\) a purification of \(\rho\) and \(|\dot{\Psi}\rangle\) a horizontal tangent vector having the form (4) for some self-adjoint operator \(H_{\rm S}\) on \(\mathcal{H}\). Note that the system-ancilla entangling operation in these circuits is quite simple, as it obtained by means of \(d\) C-NOT gates. The geodesic implemented by the unitary \(U_{\mathsf{SA}}\) of Fig. 1(b) is a geodesic joining two commuting states (see Sec. IV). ### Optimality of geodesics for parameter estimation in open quantum systems Before explaining our results, let us introduce some basic background on quantum metrology for readers not familiar with this field. _Parameter estimation in closed and open quantum systems._ The goal of parameter estimation is to estimate an unknown real parameter \(x\) pertaining to an interval \(X\subset\mathbb{R}\) by performing measurements on a quantum system (probe). Before each measurement, the system undergoes a \(x\)-dependent process transforming the input state \(\rho_{\rm in}\) into an output state \(\rho_{x}\). The estimator \(x_{\rm est}\in X\) is a function of the measurement outcomes. The precision of the estimation is characterized by the variance \((\Delta x)^{2}=\langle(x_{\rm est}-x)^{2}\rangle_{x}\), where \(\langle\cdot\rangle_{x}\) refers to the average over the outcomes conditioned to the parameter value \(x\). Hereafter, we assume that the estimator is unbiased, i.e., \(\langle x_{\rm est}\rangle_{x}=x\) and \(\partial_{x}\langle x_{\rm est}\rangle_{x}=1\). If one performs \(N_{\rm meas}\) independent identical measurements on identical probes prepared in state \(\rho_{x}\), then the estimation error satisfies the quantum Cramer-Rao bound [30; 31] \[\Delta x\geq(\Delta x)_{\rm best}=\frac{1}{\sqrt{N_{\rm meas}\, \mathcal{F}_{Q}(x,\{\rho_{x}\}_{x\in X})}}\;, \tag{10}\] where \(\mathcal{F}_{Q}(x,\{\rho_{x}\}_{x\in X})\) is the quantum Fisher information (QFI). The QFI is obtained by maximizing over all measurements the classical Fisher information (CFI) \[\mathcal{F}_{\rm clas}(x,\{p_{j|x}\}_{x\in X})=\sum_{j,p_{j|x}>0} \frac{(\partial_{x}p_{j|x})^{2}}{p_{j|x}}\;, \tag{11}\] where \(p_{j|x}\) is the probability of the measurement outcome \(j\) given that the system is in state \(\rho_{x}\). Note that the QFI and CFI depend on general on the parameter value \(x\). For clarity we write this dependence on \(x\) explicitly. The bound (10) is saturated asymptotically (in the limit \(N_{\rm meas}\gg 1\)) by choosing: (i) the maximum likelihood estimator \(x_{\rm est}\) (ii) the optimal measurement maximizing the CFI, for which \(\mathcal{F}_{\rm clas}=\mathcal{F}_{Q}\). This means that the right-hand side of (10) gives the smallest error that can be achieved in the estimation. 
Using classical resources, this error scales with the number of probes \(N\) like \(1/\sqrt{N}\) (shot noise limit). Entanglement among the quantum probes can enhance the precision by a factor \(1/\sqrt{N}\), leading to errors \((\Delta x)_{\rm best}\) scaling like \(1/N\) (Heisenberg limit). For a closed system in a pure state undergoing the \(x\)-dependent unitary transformation \(|\Psi_{x}\rangle=e^{-{\rm i}xH}|\Psi_{\rm in}\rangle\), where \(H\) is a given observable, the QFI is given by [4; 31] \[\mathcal{F}_{Q}(\{|\Psi_{x}\}_{x\in X})) = 4\big{(}\|\dot{\Psi}_{x}\|^{2}-\big{|}\langle\Psi_{x}|\dot{\Psi} _{x}\rangle\big{|}^{2}\big{)} \tag{12}\] \[= 4\langle(\Delta H)^{2}\rangle_{\Psi_{\rm in}}\;,\] where \(|\dot{\Psi}_{x}\rangle\) is the derivative of \(|\Psi_{x}\rangle\) with respect to \(x\) and \(\langle(\Delta H)^{2}\rangle_{\Psi}=\langle\Psi|H^{2}|\Psi\rangle-\langle\Psi |H|\Psi\rangle^{2}\) is the square quantum fluctuation of \(H\) in state \(|\Psi\rangle\). The input states maximizing this fluctuation are the superpositions \[|\Psi_{\rm in}\rangle=\frac{1}{\sqrt{2}}\Big{(}|\epsilon_{\rm max }\rangle+e^{{\rm i}\varphi}|\epsilon_{\rm min}\rangle\Big{)}\;, \tag{13}\] where \(\varphi\) is a real phase and \(|\epsilon_{\rm max}\rangle\) (respectively \(|\epsilon_{\rm min}\rangle\)) is an eigenstate of \(H\) with maximal (minimal) eigenvalue \(\epsilon_{\rm max}\) (\(\epsilon_{\rm min}\)). (We assume here that these eigenvalues are non-degenerated.) According to (10) and (12), the states (13) are the optimal input states minimizing the best estimation error \((\Delta x)_{\rm best}\). Indeed, by convexity of the QFI, using mixed input states \(\rho_{\rm in}\) can not lead to smaller errors. Note that the QFI (12) is independent of \(x\), as clear from the last expression. For this reason, we omit here \(x\) in the argument of \(\mathcal{F}_{Q}\). In contrast, for non-unitary evolutions the QFI depends in general upon the value \(x\) of the estimated parameter. For \(N\) probes undergoing a unitary "parallel" transformation with the observable \(H_{N}=\sum_{i=1}^{N}H_{i}\), where \(H_{i}\) stands for the action of \(H\) on the \(i\)th probe, the error has the Heisenberg scaling \((\Delta x)_{\rm best}\propto 1/N\) (more precisely, \((\Delta x)_{\rm best}=(2N\Delta\sqrt{N_{\rm meas}})^{-1}\) with \(\Delta=(\epsilon_{\rm max}-\epsilon_{\rm min})/2\)). The optimal input states, given by replacing \(|\epsilon_{\rm max}\rangle\) and \(|\epsilon_{\rm min}\rangle\) in (13) by the eigenvectors of \(H_{N}\) associated to the maximal and minimal eigenvalues \(N\epsilon_{\rm max}\) and \(N\epsilon_{\rm min}\), are highly entangled. In experimental setups, the coupling of the probes with their environment can not be neglected. A general description of the state transformation process is given by a family \(\{\mathcal{M}_{x}\}_{x\in X}\) of \(x\)-dependent quantum channels, which account for the joint effects of the free evolutions of the probe P and environment E and the coupling between them. The probe output state is related to the input state \(\rho_{\rm in}\) by \[\rho_{x}=\mathcal{M}_{x}(\rho_{\rm in})\;. \tag{14}\] In a realistic scenario, measurements can be performed on the probe only, i.e., one can not extract information from the environment. An issue of current interest is to find optimal families of quantum channels \(\{\mathcal{M}_{x}\}_{x\in X}\) and input states \(\rho_{\rm in}\) which lead to the smallest error, i.e., to the largest possible QFI. 
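As a small self-contained illustration of (12) and (13) (ours, not from the paper), the following sketch computes the QFI of a pure-state family \(|\Psi_{x}\rangle=e^{-\mathrm{i}xH}|\Psi_{\rm in}\rangle\) and checks that the balanced superposition of the extremal eigenvectors of a randomly drawn observable attains \(4\Delta^{2}\).

```python
import numpy as np

def pure_state_qfi(h, psi_in):
    """QFI of |psi_x> = exp(-i x H)|psi_in>, Eq. (12): 4 * Var_psi(H)."""
    h_mean = np.vdot(psi_in, h @ psi_in).real
    h2_mean = np.vdot(psi_in, h @ h @ psi_in).real
    return 4 * (h2_mean - h_mean ** 2)

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
h = (a + a.conj().T) / 2                              # a random observable H
eigvals, eigvecs = np.linalg.eigh(h)
opt = (eigvecs[:, 0] + eigvecs[:, -1]) / np.sqrt(2)   # superposition (13), phi = 0
print(pure_state_qfi(h, opt))            # equals (e_max - e_min)**2 = 4 * Delta**2
print((eigvals[-1] - eigvals[0]) ** 2)
```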
The precision \((\Delta x)_{\rm best}\) can clearly not be smaller than the best precision \((\Delta x)_{\rm best,PE}\) which would be obtained from joint measurements on the probe and environment. A natural question is whether there exists a family of quantum channels such that \((\Delta x)_{\rm best}=(\Delta x)_{\rm best,PE}\) for any value of \(x\). This means that the environment does not carry any information about the parameter \(x\). _Optimality of geodesics in parameter estimation._ We will show in Sec. VI below that it is possible to find such a family of quantum channels and input state, which are moreover such that the error \((\Delta x)_{\rm best}\) is equal to the smallest possible error \[(\Delta x)_{\rm best}=(\Delta x)_{\rm best,PE}=(2\Delta\sqrt{N_{\rm meas}})^{-1}\;, \tag{15}\] where \(\Delta=(\epsilon_{\rm max}-\epsilon_{\rm min})/2\) is as before the maximal quantum fluctuation of the observable \(H\). Such a family is given by the CPTP maps associated to the Bures geodesics described in the previous subsection. The corresponding state transformation is \[\rho_{x}=\gamma_{\rm g}(x\Delta)\;, \tag{16}\] where \(\gamma_{\rm g}(\tau)\) is a geodesic starting at \[\gamma_{\rm g}(0)=\rho_{\rm in}={\rm tr}_{\mathsf{A}}\,|\Psi_{\rm in}\rangle\langle\Psi_{\rm in}| \tag{17}\] Note that the estimated parameter \(x\) appears in (16) as a phase shift proportional to the dimensionless time \(\tau\). In other words, for the state transformation and phase shift \(x\) given in (16), one has: (i) the precision on the estimation of \(x\) obtained from measurements on the probe only is the same as that obtained by performing joint measurements on the probe and environment; (ii) the error \((\Delta x)_{\rm best}=(\Delta x)_{\rm best,PE}\) is the smallest achievable among all probe-environment initial states. Furthermore, as illustrated in Fig. 2, the error can reach the Heisenberg bound for a large number \(N\) of entangled probes. To justify properties (i)-(ii), let us look at the transformation (14) as resulting from the coupling of the probe with an ancilla \(\mathsf{A}\), the probe and ancilla being initially in a pure state and undergoing a unitary transformation with some Hamiltonian \(H\) acting on the probe and ancilla Hilbert space \(\mathcal{K}=\mathcal{H}\otimes\mathcal{H}_{\mathsf{A}}\). Note that this is always possible according to Stinespring's dilation theorem. Thus \[\rho_{x}={\rm tr}_{\mathsf{A}}\,|\Psi_{x}\rangle\langle\Psi_{x}|\quad,\quad|\Psi_{x}\rangle=e^{-{\rm i}xH}|\Psi_{\rm in}\rangle\;. \tag{18}\] Let us decompose the tangent vector \(|\dot{\Psi}_{x}\rangle=\partial_{x}|\Psi_{x}\rangle\) into the sum of its horizontal part \(|\dot{\Psi}_{x}^{\rm h}\rangle\) and its vertical part \(|\dot{\Psi}_{x}^{\rm v}\rangle\) with \(\langle\dot{\Psi}_{x}^{\rm h}|\dot{\Psi}_{x}^{\rm v}\rangle=0\), where horizontality means orthogonality to the orbit of \(\rho_{x}\), see Fig. 3. According to the theory of Riemannian submersions, the square norm of the horizontal part coincides with the square norm \((g_{\rm B})_{\rho_{x}}(\dot{\rho}_{x},\dot{\rho}_{x})\) of \(\dot{\rho}_{x}=\partial_{x}\rho_{x}\), the latter norm being given by the Bures metric \(g_{\rm B}\) at \(\rho_{x}\) (see Sec. III). It is known that this square norm is equal to the QFI up to a factor of one fourth [3; 4]. 
Thus, thanks to the Pythagorean theorem \(\|\dot{\Psi}_{x}\|^{2}=\|\dot{\Psi}_{x}^{\rm h}\|^{2}+\|\dot{\Psi}_{x}^{\rm v}\|^{2}\), one obtains the following formula for the QFI of the probe \[\mathcal{F}_{Q}(x,\{\rho_{x}\}_{x\in X})=\mathcal{F}_{Q}(\{|\Psi_{x}\rangle\}_{x\in X})-4\|\dot{\Psi}_{x}^{\rm v}\|^{2}\;, \tag{19}\] where the QFI \(\mathcal{F}(\{|\Psi_{x}\rangle\}_{x\in X})\) of the total system (probe and ancilla) is given by (12) and \(\|\dot{\Psi}_{x}^{\rm v}\|^{2}\) quantifies the amount of information on \(x\) in the ancilla. It follows that the equality \(\mathcal{F}_{Q}(x,\{\rho_{x}\}_{x\in X})=\mathcal{F}_{Q}(\{|\Psi_{x}\rangle\}_{x\in X})\) holds whenever \(|\dot{\Psi}_{x}^{\rm v}\rangle=0\), that is, for horizontal tangent vectors \(|\dot{\Psi}_{x}\rangle\). Let \(|\Psi_{\rm g}(\tau)\rangle\) be a pure state geodesic for the norm distance on \(\mathcal{K}\). A general result on Riemannian submersions tells us that if the tangent vector \(|\dot{\Psi}_{\rm g}(\tau)\rangle\) is horizontal at \(\tau=0\) then it remains horizontal at all times \(\tau\) and \(|\Psi_{\rm g}(\tau)\rangle\) projects out to a geodesic \(\gamma_{\rm g}(\tau)\) on \(\mathcal{E}_{\mathcal{H}}\), where the projection corresponds here to the partial trace over the ancilla (see Sec. III.3 for more details). Therefore, for the state transformation (16) one has \(|\dot{\Psi}_{x}^{\rm v}\rangle=0\) for all parameter values \(x\). Thanks to (10) and (19), \((\Delta x)_{\rm best}\) is then equal to the best error obtained from measurements on the probe and ancilla. This justifies property (i) above. Furthermore, it is easy to show that \(|\Psi_{\rm in}\rangle\) is the superposition (13) for the Hamiltonian \(H=\Delta\,H_{\rm g}\), \(H_{\rm g}\) being given by (3) with \(|\Psi\rangle=|\Psi_{\rm in}\rangle\) and \(|\dot{\Psi}\rangle=|\dot{\Psi}_{\rm in}\rangle/\|\dot{\Psi}_{\rm in}\|\). This justifies (ii). We show in Sec. VI that, conversely, if (i) and (ii) hold then the state transformation has the form (16) for some geodesic \(\gamma_{\rm g}\). Theorem 3 in this section characterizes all Hamiltonians \(H\) and input states \(|\Psi_{\rm in}\rangle\) satisfying the two following conditions, which are equivalent to (i) and (ii): (I) \(|\dot{\Psi}_{x}\rangle\) is horizontal for all \(x\in X\); (II) \(\mathcal{F}(x,\{|\Psi_{x}\rangle\}_{x\in X})\) is maximal. We prove that these conditions hold if and only if \(|\Psi_{x}\rangle=|\Psi_{\rm g}(x\Delta)\rangle=e^{-{\rm i}(x\Delta)H_{\rm g}}|\Psi_{\rm in}\rangle\) is a pure state geodesic of the probe and ancilla with a horizontal initial tangent vector \(|\dot{\Psi}_{\rm in}\rangle\) satisfying (4). By the aforementioned result on Riemannian submersions, this means that (16) and (17) hold. Figure 2: Quantum circuit implementing a geodesic transformation for the estimation of a phase shift \(x\), with an error reaching the Heisenberg scaling. The single qubit unitaries \(R_{y}\), \(U_{\mathsf{A}}\) and \(W\) are as in Fig. 1 with \(d=1\). In spite of the presence of the C-NOT gates entangling the probe qubits with the ancilla qubits, the maximal information on \(x\) can be recovered from measurements on the probe qubits only, with a minimal error \((\Delta x)_{\rm best}=(2N\Delta\sqrt{N_{\rm meas}})^{-1}\).
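To make the decomposition (19) concrete, the sketch below (our illustration, not code from the paper) builds a geodesic Hamiltonian of the form (3)-(4) for one qubit coupled to one ancilla qubit, evolves a purification, and compares the probe QFI (computed with the standard spectral formula) with the total pure-state QFI; the two agree up to numerical error, i.e. the vertical contribution vanishes as claimed.

```python
import numpy as np
from scipy.linalg import expm

def partial_trace_ancilla(op, d):
    """Partial trace over the ancilla of an operator on C^d (x) C^d."""
    return op.reshape(d, d, d, d).trace(axis1=1, axis2=3)

def mixed_qfi(rho, drho, tol=1e-12):
    """Spectral formula: F_Q = 2 * sum_{k,l} |<k|drho|l>|^2 / (p_k + p_l)."""
    p, v = np.linalg.eigh(rho)
    dm = v.conj().T @ drho @ v
    return sum(2 * abs(dm[k, l]) ** 2 / (p[k] + p[l])
               for k in range(len(p)) for l in range(len(p))
               if p[k] + p[l] > tol)

d = 2
p = np.array([0.7, 0.3])
psi = np.zeros((d, d), complex)
np.fill_diagonal(psi, np.sqrt(p))
psi = psi.reshape(-1)                                   # purification of diag(0.7, 0.3)
sx = np.array([[0, 1], [1, 0]], complex)
dpsi = np.kron(sx, np.eye(d)) @ psi                     # horizontal tangent, Eq. (4)
hg = -1j * (np.outer(psi, dpsi.conj()) - np.outer(dpsi, psi.conj()))   # Eq. (3)

x = 0.4
psix = expm(-1j * x * hg) @ psi
dpsix = -1j * hg @ psix
proj = np.outer(psix, psix.conj())
drho_probe = partial_trace_ancilla(-1j * (hg @ proj - proj @ hg), d)
rhox = partial_trace_ancilla(proj, d)

total_qfi = 4 * (np.vdot(dpsix, dpsix).real - abs(np.vdot(psix, dpsix)) ** 2)
probe_qfi = mixed_qfi(rhox, drho_probe)
print(total_qfi, probe_qfi)   # both equal 4 up to numerical precision
```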
Equivalently, (I) and (II) are satisfied if and only if \(|\dot{\Psi}_{\rm in}\rangle=-{\rm i}(H-\langle H\rangle\psi_{\rm in})|\Psi_{\rm in}\rangle\) is horizontal and the restriction of \(H\) to the two-dimensional subspace spanned by its eigenvectors associated to the maximal and minimal eigenvalues coincides up to a factor of \(\Delta\) with the geodesic Hamiltonian (3) with \(|\Psi\rangle=|\Psi_{\rm in}\rangle\) and \(|\dot{\Psi}\rangle=|\dot{\Psi}_{\rm in}\rangle/|\dot{\Psi}_{\rm in}\|\). Theorem 4 in Sec. VI shows that these eigenvectors must be related by \(|\epsilon_{\rm min}\rangle=U\otimes\mathds{1}|\epsilon_{\rm max}\rangle\) for some local unitary \(U\) acting on the probe. Another nice property of the state transformation (16) is related to the optimal measurement. Recall that a measurement is given by a POVM \(\{M_{j}\}\), that is, a set of non-negative operators \(M_{j}\geq 0\) on \(\mathcal{H}\) such that \(\sum_{j}M_{j}=\mathds{1}\). The outcome probabilities when the system is in state \(\rho_{x}\) are given by \(p_{j|x}={\rm tr}\,M_{j}\rho_{x}\). An optimal measurement is a POVM \(\{M_{j}^{\rm opt}\}\) for which the CFI (11) coincides with the QFI. Such a measurement leads to the smallest error \(\Delta x=(\Delta x)_{\rm best}\) in (10). In general, \(\{M_{j}^{\rm opt}\}\) depends on the parameter \(x\)[3]. Since \(x\) is unknown _a priori_, it is then in practice impossible to implement directly an optimal measurement strategy. A notable exception for which the optimal measurement is independent of \(x\) is a closed system undergoing a unitary transformation with an input state given by the superposition (13) minimizing the error. Theorem 5 in Sec. VI shows that the geodesic transformation (16) enjoys the same property. More precisely, one has: (iii) there is an optimal POVM \(\{M_{j}^{\rm opt}\}\) maximizing the CFI which is independent of \(x\) and given by the von Neumann measurement with projectors onto \(\ker\rho_{j}\), where \(\rho_{j}\) are the states at which the geodesic \(\gamma_{\rm g}\) intersects the boundary \(\partial\mathcal{E}_{\mathcal{H}}\) of quantum states. This property gives a practical way to determine an optimal measurement in numerical simulations or experiments: first determine the smallest eigenvalue \(p_{n}(x)=\min_{\|\psi\|=1}\langle\psi|\rho_{x}|\psi\rangle\) of \(\rho_{x}\) for different values of \(x\) until finding a parameter value \(x_{j}\) such that \(p_{n}(x_{j})\simeq 0\) (if \(x\) is random and can not be tuned, just repeat the transformation many times); then determine all almost vanishing eigenvalues and the associated eigenvectors of \(\rho_{j}=\rho_{x_{j}}\), e.g. by minimization of \(\langle\psi|\rho_{j}|\psi\rangle\) in orthogonal subspaces or by using quantum state tomography; repeat this procedure for different values of \(x\) until obtaining an orthonormal basis of \(\mathcal{H}\) formed by eigenvectors of the states \(\rho_{j}\) with vanishing eigenvalues (Theorem 6 in Appendix B shows that these eigenvectors indeed form an orthonormal basis). Then such a basis is an optimal measurement basis for the estimation of \(x\). Let us stress that the geometric approach described in the following sections is not restricted to geodesic evolutions and provides a new method for studying parameter estimation in open quantum systems. Actually, the above arguments show that the estimation error is smaller when the tangent vectors \(|\dot{\Psi}_{x}\rangle\) have smaller vertical components, see (19). 
The observation that the estimation error decreases with the vertical components of the tangent vectors may be useful for designing new reservoir engineering techniques in order to increase the precision of quantum metrology in the presence of losses and dephasing.

## III Mathematical preliminaries

In this section we describe the geometrical properties of the manifold of mixed states of a quantum system equipped with the Bures distance and introduce the notion of Riemannian submersions.

Figure 3: The manifold of quantum states \(\mathcal{E}=\mathcal{E}_{\mathcal{H}}\) is the projection \(\pi(\mathcal{S})\) of the manifold \(\mathcal{S}\) of pure states on an enlarged Hilbert space \(\mathcal{H}\otimes\mathcal{H}_{\mathsf{A}}\), where \(\pi\) is the partial trace over \(\mathcal{H}_{\mathsf{A}}\). The horizontal subspaces at \(|\Psi\rangle\) and \(|\Phi_{V}\rangle\) are orthogonal to the orbits \(\pi^{-1}(\rho)\) and \(\pi^{-1}(\sigma)\) (red lines). A geodesic in \(\mathcal{S}\) joining \(|\Psi\rangle\) to \(|\Phi_{V}\rangle\) (solid black curve) with a horizontal initial tangent vector \(|\dot{\Psi}^{\rm h}\rangle\) projects out to a geodesic \(\gamma_{\rm g,V}\) on \(\mathcal{E}\) (solid green curve). In contrast, if the geodesic in \(\mathcal{S}\) (blue dashed curve) has a non-horizontal initial tangent vector, its projection (green dashed line) is not a geodesic on \(\mathcal{E}\). The differential \(\mathrm{d}\pi\) maps the horizontal tangent vector \(|\dot{\Psi}^{\rm h}\rangle\) to a tangent vector \(\dot{\rho}\) of \(\gamma_{\rm g,V}\) having the same length \(\|\dot{\rho}\|=\|\dot{\Psi}^{\rm h}\|\). A non-horizontal vector \(|\dot{\Psi}\rangle\) is mapped by \(\mathrm{d}\pi\) to a vector \(\dot{\rho}\) with a smaller length, given by \(\|\dot{\rho}\|^{2}=\|\dot{\Psi}\|^{2}-\|\dot{\Psi}^{\rm v}\|^{2}\), where \(|\dot{\Psi}^{\rm v}\rangle\) is the vertical component of \(|\dot{\Psi}\rangle\) (Pythagorean theorem).

### Riemannian geometry for quantum states and Bures distance.

Let us first recall some basic notions of Riemannian geometry. A metric on a smooth manifold \(\mathcal{E}\) is a smooth map \(g\) associating to each point \(\rho\in\mathcal{E}\) a scalar product \(g_{\rho}\) on the tangent space \(T_{\rho}\mathcal{E}\) at \(\rho\). A curve \(\gamma\) on \(\mathcal{E}\) joining two points \(\rho\) and \(\sigma\) is parametrized by a piecewise \(C^{1}\) map \(\gamma:t\in[t_{0},t_{1}]\mapsto\gamma(t)\in\mathcal{E}\) such that \(\gamma(t_{0})=\rho\) and \(\gamma(t_{1})=\sigma\). Its length \(\ell(\gamma)\) is \[\ell(\gamma)=\int_{\gamma}\mathrm{d}s=\int_{t_{0}}^{t_{1}}\mathrm{d}t\,\sqrt{g_{\gamma(t)}(\dot{\gamma}(t),\dot{\gamma}(t))}\;, \tag{20}\] where \(\dot{\gamma}(t)\) stands for the time derivative \(\mathrm{d}\gamma/\mathrm{d}t\). A Riemannian distance \(d\) on \(\mathcal{E}\) can be associated to any metric \(g\), defined as the infimum \(d(\rho,\sigma)=\inf_{\gamma}\ell(\gamma)\) of the lengths of all curves \(\gamma\) joining \(\rho\) and \(\sigma\). Such a distance is called the geodesic distance on \((\mathcal{E},g)\). Curves \(\gamma_{\rm g}\) with constant velocity minimizing the length _locally_ are called geodesics. More precisely, \(\gamma_{\rm g}:[t_{0},t_{1}]\to\mathcal{E}\) is a geodesic if (i) \(g_{\gamma_{\rm g}(t)}(\dot{\gamma}_{\rm g}(t),\dot{\gamma}_{\rm g}(t))={\rm const.}\) and (ii) \(\forall\ t\in(t_{0},t_{1})\), \(\exists\ \delta>0\) such that \(\ell(\gamma_{\rm g}|_{[t,t+\delta]})=d(\gamma_{\rm g}(t),\gamma_{\rm g}(t+\delta))\).
In particular, if there is a geodesic \(\gamma_{\rm g}\) with length \(\ell(\gamma_{\rm g})=d(\rho,\sigma)\) minimizing the length globally, one says that \(\gamma_{\rm g}\) is the shortest geodesic joining \(\rho\) and \(\sigma\). Conversely, one can associate to a distance \(d\) on \(\mathcal{E}\) a metric \(g\) if \(d\) satisfies the following condition (we ignore here regularity assumptions): for any \(\rho\in\mathcal{E}\) and \(\dot{\rho}\in T_{\rho}\mathcal{E}\), the square distance between \(\rho\) and \(\rho+t\dot{\rho}\) behaves as \(t\to 0\) as \[{\rm d}s^{2}=d(\rho,\rho+t\dot{\rho})^{2}=g_{\rho}(\dot{\rho},\dot{\rho})t^{2} +\mathcal{O}(t^{3})\;. \tag{21}\] In quantum mechanics, states are represented by non-negative operators \(\rho\) with unit trace from the Hilbert space \(\mathcal{H}\) of the system into itself. We assume hereafter that \(\mathcal{H}\) has finite dimension \(n=\dim(\mathcal{H})<\infty\) and denote by \(\mathcal{E}_{\mathcal{H}}\) the set of all quantum states of a given system. The arccos Bures distance between two states \(\rho\) and \(\sigma\) is defined by [1] \[d_{\rm B}(\rho,\sigma)=\arccos\sqrt{F(\rho,\sigma)}\;, \tag{22}\] where \[F(\rho,\sigma)=\left({\rm tr}\,|\sqrt{\sigma}\sqrt{\rho}|\right)^{2}=\left({ \rm tr}(\sqrt{\rho}\,\sigma\sqrt{\rho})^{\frac{1}{2}}\right)^{2} \tag{23}\] is the fidelity. The set of all invertible states, \[\mathcal{E}_{\mathcal{H}}^{\,\rm inv}=\left\{\rho:\mathcal{H}\to\mathcal{H}, \;;\;\rho>0,{\rm tr}\,\rho=1\right\}\,, \tag{24}\] equipped with the distance \(d_{\rm B}\) forms a smooth open Riemannian manifold. Its boundary \(\partial\mathcal{E}_{\mathcal{H}}\) consists of density matrices \(\rho\) having at least one vanishing eigenvalue; for instance, pure states \(\rho_{\psi}=|\psi\rangle\langle\psi|\) are on the boundary. The tangent space at \(\rho\in\mathcal{E}_{\mathcal{H}}^{\,\rm inv}\) can be identified with the (real) vector space of self-adjoint traceless operators on \(\mathcal{H}\), \[T_{\rho}\,\mathcal{E}_{\mathcal{H}}=\left\{\dot{\rho}:\mathcal{H}\to\mathcal{H }\;,\;\dot{\rho}^{\dagger}=\dot{\rho}\;,\;{\rm tr}\,\dot{\rho}=0\right\}\,. \tag{25}\] The metric \(g_{\rm B}\) associated to the distance \(d_{\rm B}\) is given explicitly by [34; 4] \[(g_{\rm B})_{\rho}(\dot{\rho},\dot{\sigma})=\frac{1}{2}{\rm Re}\sum_{k,l=1}^{ n}\frac{\overline{\langle k|\dot{\rho}|l\rangle}\langle k|\dot{\sigma}|l \rangle}{p_{k}+p_{l}}\;,\;\dot{\rho},\dot{\sigma}\in T_{\rho}\,\mathcal{E}_{ \mathcal{H}}\;, \tag{26}\] where \(\{|k\rangle\}_{k=1}^{n}\) is an orthonormal basis of eigenvectors of \(\rho\) with eigenvalues \(p_{k}\). Note that \((g_{B})_{\rho}\) is well defined for invertible states \(\rho\) only, i.e., when \(p_{k}>0\) for all \(k=1,\ldots,n\). ### Purifications and smooth submersions Mixed quantum states of a given system can be described by introducing an auxiliary system \(\mathsf{A}\), called the ancilla, and viewing the system state \(\rho\) as the reduced state of the system \(+\) ancilla. The dimension of the ancilla Hilbert space \(\mathcal{H}_{\mathsf{A}}\) is assumed to fulfill \(n_{\mathsf{A}}\geq n\). We denote by \(\mathcal{K}=\mathcal{H}\otimes\mathcal{H}_{\mathsf{A}}\) the Hilbert space of the composite system. 
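Before introducing purifications, it may help to see the Bures quantities defined above in code. The sketch below is a minimal illustration with arbitrary test data (not part of the original text): it implements the fidelity (23), the arccos Bures distance (22), and the Bures metric (26), and checks the small-displacement expansion (21) by finite differences.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    # F(rho, sigma) = (tr (sqrt(rho) sigma sqrt(rho))^(1/2))^2, Eq. (23)
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

def d_bures_arccos(rho, sigma):
    # arccos Bures distance, Eq. (22)
    return np.arccos(np.sqrt(min(fidelity(rho, sigma), 1.0)))

def bures_metric(rho, a, b):
    # g_B(a, b) at rho, Eq. (26), evaluated in an eigenbasis of rho (rho > 0 assumed)
    p, U = np.linalg.eigh(rho)
    A, B = U.conj().T @ a @ U, U.conj().T @ b @ U
    return 0.5 * np.real(np.sum(np.conj(A) * B / (p[:, None] + p[None, :])))

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = X @ X.conj().T
rho /= np.trace(rho).real
drho = rng.normal(size=(3, 3))
drho = drho + drho.T
drho -= (np.trace(drho) / 3) * np.eye(3)      # self-adjoint and traceless, Eq. (25)

t = 1e-4
print(d_bures_arccos(rho, rho + t * drho) ** 2 / t**2)   # ~ g_B(drho, drho), Eq. (21)
print(bures_metric(rho, drho, drho))
```

For a full-rank \(\rho\) the two printed numbers agree up to corrections vanishing with \(t\).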
A purification of \(\rho\) on \(\mathcal{K}\) is a pure state \(|\Psi\rangle\langle\Psi|\) such that \(|\Psi\rangle\in\mathcal{K}\) and \(\rho=\pi(|\Psi\rangle)\), with \[\pi(|\Psi\rangle)={\rm tr}_{\mathsf{A}}\,|\Psi\rangle\langle\Psi|\;, \tag{27}\] where \({\rm tr}_{\mathsf{A}}\) stands for the partial trace over the ancilla space \(\mathcal{H}_{\mathsf{A}}\). For our purpose, it is convenient to consider purifications given by normalized vectors \(|\Psi\rangle\in\mathcal{K}\), instead of pure states (recall that a pure state is a normalized vector modulo a phase factor and can be represented as a rank-one projector \(|\Psi\rangle\langle\Psi|\) in the projective space \(P\mathcal{K}\)). The condition \(\rho>0\) is satisfied if and only if \(|\Psi\rangle\) has Schmidt decomposition \[|\Psi\rangle=\sum_{k=1}^{n}\sqrt{p_{k}}|k\rangle|\alpha_{k}\rangle \tag{28}\] with \(n\) positive Schmidt coefficients \(\sqrt{p_{k}}>0\), \(k=1,\ldots,n\). Here, \(\{|\alpha_{k}\rangle\}_{k=1}^{n_{\mathsf{A}}}\) is an arbitrary orthonormal basis of \(\mathcal{H}_{\mathsf{A}}\). The set of purifications of invertible states is thus the subset \[\mathcal{S}_{\mathcal{K}}^{\,\rm inv}=\big\{|\Psi\rangle\in\mathcal{K}\;;\;\||\Psi\rangle\|=1\;,\;|\Psi\rangle\ {\rm has}\ n\ {\rm positive\ Schmidt\ coefficients}\big\}\;. \tag{29}\] Its tangent space at \(|\Psi\rangle\) is \[T_{|\Psi\rangle}\mathcal{S}_{\mathcal{K}}=\big\{|\dot{\Psi}\rangle\in\mathcal{K}\;;\;{\rm Re}\,\langle\Psi|\dot{\Psi}\rangle=0\big\}\;, \tag{30}\] and the scalar product of \(\mathcal{K}\) equips \(\mathcal{S}_{\mathcal{K}}^{\,\rm inv}\) with the metric \[(g_{\mathcal{S}})_{|\Psi\rangle}(|\dot{\Psi}\rangle,|\dot{\Phi}\rangle)={\rm Re}\,\langle\dot{\Psi}|\dot{\Phi}\rangle\;. \tag{31}\] Two purifications of the same invertible state \(\rho\) differ by a unitary acting on the ancilla, so that the orbit of \(\rho\) under the map (27) is \[\pi^{-1}(\rho)=\big\{\mathds{1}\otimes U_{\mathsf{A}}|\Psi\rangle\;;\;U_{\mathsf{A}}\ {\rm unitary\ on}\ \mathcal{H}_{\mathsf{A}}\big\}\;. \tag{32}\] We show below that the differential of the map (27) is surjective at any point of \(\mathcal{S}^{\rm inv}_{\mathcal{K}}\). Such quotient maps with surjective differentials are called smooth submersions. The differential of the map (27) is given by \[{\rm d}\pi|_{|\Psi\rangle}(|\dot{\Psi}\rangle)={\rm tr}_{\mathsf{A}}(|\Psi\rangle\langle\dot{\Psi}|+|\dot{\Psi}\rangle\langle\Psi|)\;. \tag{33}\] To prove that \(\pi:\mathcal{S}^{\rm inv}_{\mathcal{K}}\to\mathcal{E}^{\rm inv}_{\mathcal{H}}\) is a smooth submersion, let us first note that for any fixed \(\dot{\rho}\in T_{\rho}\mathcal{E}_{\mathcal{H}}\), if \(|\dot{\Psi}\rangle\in\mathcal{K}\) satisfies \[{\rm d}\pi|_{|\Psi\rangle}(|\dot{\Psi}\rangle)=\dot{\rho} \tag{34}\] then \(|\dot{\Psi}\rangle\in T_{|\Psi\rangle}\mathcal{S}_{\mathcal{K}}\). In fact, by taking the trace of the right-hand sides of (33) and (34) one gets \(2{\rm Re}\,\langle\Psi|\dot{\Psi}\rangle={\rm tr}\,\dot{\rho}=0\). Setting \(\rho=\pi(|\Psi\rangle)\), it is easy to check that \(|\dot{\Psi}\rangle=\frac{1}{2}\dot{\rho}\rho^{-1}\otimes\mathds{1}|\Psi\rangle\) is a solution of (34). Hence \({\rm d}\pi|_{|\Psi\rangle}\) is surjective. Let us point out that this is not true if \(\pi\) is defined on the whole unit sphere \(\mathcal{S}_{\mathcal{K}}\) of \(\mathcal{K}\), instead of \(\mathcal{S}^{\rm inv}_{\mathcal{K}}\), i.e., if one adds to \(\mathcal{E}^{\rm inv}_{\mathcal{H}}\) its boundary \(\partial\mathcal{E}_{\mathcal{H}}\).
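As a concrete illustration of Eqs. (27) and (28), here is a minimal sketch (with an arbitrary test state) that represents a purification by its \(n\times n_{\mathsf{A}}\) coefficient matrix \(\Psi_{ia}=\langle i|\langle\alpha_{a}|\Psi\rangle\), builds it from the spectral decomposition of \(\rho\), and checks that the partial trace over the ancilla gives back \(\rho\).

```python
import numpy as np

def purify(rho, n_anc=None):
    """|Psi> = sum_k sqrt(p_k) |k>|alpha_k>, Eq. (28), as an (n x n_anc) coefficient matrix."""
    n = rho.shape[0]
    n_anc = n if n_anc is None else n_anc
    p, U = np.linalg.eigh(rho)                  # columns of U are the eigenvectors |k>
    Psi = np.zeros((n, n_anc), dtype=complex)
    for k in range(n):
        Psi[:, k] = np.sqrt(max(p[k], 0.0)) * U[:, k]   # ancilla basis vector |alpha_k> = e_k
    return Psi

def partial_trace_ancilla(Psi):
    # Eq. (27): (tr_A |Psi><Psi|)_{ij} = sum_a Psi_{ia} conj(Psi_{ja})
    return Psi @ Psi.conj().T

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = X @ X.conj().T
rho /= np.trace(rho).real
print(np.allclose(partial_trace_ancilla(purify(rho)), rho))    # True
```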
### Riemannian submersions Our results in this paper rely on the notion of Riemannian submersion. In this subsection we review the properties of such submersions (see e.g. [26] for more details) and show that the partial trace (27) is an instance of Riemannian submersion \(\mathcal{S}^{\rm inv}_{\mathcal{K}}\to\mathcal{E}^{\rm inv}_{\mathcal{H}}\). Let \(\pi:\mathcal{X}\to\mathcal{E}\) be a smooth submersion, where \(\mathcal{X}\) is a Riemannian manifold with metric \(g_{\mathcal{X}}\). The tangent space \(T_{\psi}\mathcal{X}\) at \(\psi\in\mathcal{X}\) can be decomposed into a direct sum of two orthogonal subspaces \(\mathfrak{v}_{\psi}=\ker({\rm d}\pi|_{\psi})\) and \(\mathfrak{h}_{\psi}=\mathfrak{v}_{\psi}^{\perp}\), called respectively the vertical and horizontal subspaces (orthogonality is for the scalar product \((g_{\mathcal{X}})_{\psi}\)). It can be shown that there exists a unique Riemannian metric \(g_{\mathcal{E}}\) on the quotient manifold \(\mathcal{E}\) such that for all \(\psi\in\mathcal{X}\), the restriction of \({\rm d}\pi|_{\psi}\) to the horizontal subspace \(\mathfrak{h}_{\psi}\) is an isometry from \((\mathfrak{h}_{\psi},(g_{\mathcal{X}})_{\psi})\) to \((T_{\rho}\mathcal{E},(g_{\mathcal{E}})_{\rho})\), with \(\rho=\pi(\psi)\). One then says that \(\pi:(\mathcal{X},g_{\mathcal{X}})\to(\mathcal{E},g_{\mathcal{E}})\) is a Riemannian submersion. It is not difficult to prove that \(g_{\mathcal{E}}\) is associated to the distance \(d_{\mathcal{E}}\) on \(\mathcal{E}\) defined by \[d_{\mathcal{E}}(\rho,\sigma)=\inf_{\phi\in\pi^{-1}(\sigma)}d_{\mathcal{X}}( \psi_{1},\phi)\;, \tag{35}\] where \(d_{\mathcal{X}}\) is a distance having the metric \(g_{\mathcal{X}}\) and \(\psi_{1}\) is an arbitrary (fixed) point on the orbit of \(\rho\). A nice property of Riemannian submersions is that the geodesics on the quotient space \(\mathcal{E}\) can be obtained by projecting certain geodesics on \(\mathcal{X}\). More precisely, for any geodesic \(\Gamma:[t_{0},t_{1}]\to\mathcal{X}\) on \((\mathcal{X},g_{\mathcal{X}})\) such that \(\dot{\Gamma}(0)\in\mathfrak{h}_{\Gamma(0)}\), one has [26]: 1. \(\dot{\Gamma}(t)\in\mathfrak{h}_{\Gamma(t)}\) for any \(t\in[t_{0},t_{1}]\); 2. \(\gamma=\pi\circ\Gamma\) is a geodesic on \((\mathcal{E},g_{\mathcal{E}})\). Conversely, any geodesic \(\gamma\) on \((\mathcal{E},g_{\mathcal{E}})\) with \(\gamma(0)=\rho\) can be lifted locally to an arc of geodesic \(\Gamma\) on \((\mathcal{X},g_{\mathcal{X}})\) with horizontal tangent vectors such that \(\Gamma(0)=\psi\), for any \(\psi\in\pi^{-1}(\rho)\). This property is illustrated in Fig. 3. We will call horizontal geodesics the geodesics \(\Gamma\) on \(\mathcal{X}\) such that \(\dot{\Gamma}(0)\in\mathfrak{h}_{\Gamma(0)}\). Let us apply this formalism to the smooth submersion \(\pi\) given by (27). A natural distance on \(\mathcal{S}^{\rm inv}_{\mathcal{K}}\) having the metric \(g_{\mathcal{S}}\) is the norm distance \(d_{\mathcal{S}}(|\Psi\rangle,|\Phi\rangle)=\||\Psi\rangle-|\Phi\rangle\|\). The metric on \(\mathcal{E}^{\rm inv}_{\mathcal{H}}\) which makes \(\pi\) a Riemannian submersion turns out to be the Bures metric \(g_{\mathcal{B}}\). This can be seen by invoking Uhlmann's theorem, which states that [1; 2] \[d_{\rm B}(\rho,\sigma)=\min_{|\Psi\rangle\in\pi^{-1}(\rho),|\Phi\rangle\in\pi^{ -1}(\sigma)}\arccos\big{|}\langle\Psi|\Phi\rangle\big{|}\;, \tag{36}\] where in the right-hand side the arccos distance between pure states is minimized. 
Equivalently, the Bures distance \(d_{\rm Bures}(\rho,\sigma)=2\sin(d_{\rm B}(\rho,\sigma)/2)\) is given by [29; 4] \[d_{\rm Bures}(\rho,\sigma)=\big(2-2\sqrt{F(\rho,\sigma)}\big)^{\frac{1}{2}}=\min_{|\Psi\rangle\in\pi^{-1}(\rho),|\Phi\rangle\in\pi^{-1}(\sigma)}\big\||\Psi\rangle-|\Phi\rangle\big\|\;. \tag{37}\] Note that the two distances \(d_{\rm B}\) and \(d_{\rm Bures}\) have the same metric \(g_{\rm B}\), given by (26). Eqs. (36) and (37) tell us that the arccos and Bures distances between \(\rho\) and \(\sigma\) are the minimal distances between the two orbits of \(\rho\) and \(\sigma\). By (32) and the unitary invariance of the scalar product in \(\mathcal{K}\), the minimizations in these equations can be carried out over all \(|\Phi\rangle\in\pi^{-1}(\sigma)\) for some fixed \(|\Psi_{1}\rangle\in\pi^{-1}(\rho)\). Thus \(d_{\rm Bures}\) has the form (35) with \(d_{\mathcal{X}}=d_{\mathcal{S}}\). Let us end this subsection by determining the vertical and horizontal subspaces in the case of the purification manifold \(\mathcal{X}=\mathcal{S}^{\rm inv}_{\mathcal{K}}\) and quotient map (27). To determine \(\mathfrak{v}_{|\Psi\rangle}=\ker({\rm d}\pi|_{|\Psi\rangle})\), we observe that curves contained in the orbit \(\pi^{-1}(\rho)\) have by definition vertical tangent vectors. Conversely, for a smooth curve \(t\mapsto|\Psi(t)\rangle\) with a vertical tangent vector \(|\dot{\Psi}\rangle\) at \(t=0\), it holds that \(|\Psi(t)\rangle=|\Psi_{\rm v}(t)\rangle+\mathcal{O}(t^{2})\) with \(t\mapsto|\Psi_{\rm v}(t)\rangle=\mathds{1}\otimes U_{\mathsf{A}}(t)|\Psi\rangle\) a curve in \(\pi^{-1}(\rho)\) and \(U_{\mathsf{A}}(t)\) a time-dependent unitary ancilla operator. Thus, using \(\dot{U}_{\mathsf{A}}U_{\mathsf{A}}^{\dagger}=-U_{\mathsf{A}}\dot{U}_{\mathsf{A}}^{\dagger}\), one has \[\mathfrak{v}_{|\Psi\rangle}=\big\{\mathds{1}\otimes K_{\mathsf{A}}|\Psi\rangle\;;\,K_{\mathsf{A}}\;\text{skew Hermitian}\big\}\;. \tag{38}\] In order to obtain the horizontal subspace \(\mathfrak{h}_{|\Psi\rangle}=\mathfrak{v}_{|\Psi\rangle}^{\perp}\), we use the Schmidt decomposition (28) and expand an arbitrary horizontal tangent vector \(|\dot{\Psi}^{\rm h}\rangle=\sum_{k,l}c_{kl}|k\rangle|\alpha_{l}\rangle\) in the product basis \(\{|k\rangle|\alpha_{l}\rangle\}_{k,l=1}^{n,n_{\mathsf{A}}}\). Since \(|\dot{\Psi}^{\rm h}\rangle\) is orthogonal to \(\mathfrak{v}_{|\Psi\rangle}\), one finds \[0={\rm Re}\,\langle\dot{\Psi}^{\rm h}|\mathds{1}\otimes K_{\mathsf{A}}|\Psi\rangle=\sum_{k=1}^{n}\sum_{l=1}^{n_{\mathsf{A}}}\big(\overline{c}_{kl}\sqrt{p_{k}}\langle\alpha_{l}|K_{\mathsf{A}}|\alpha_{k}\rangle-c_{kl}\sqrt{p_{k}}\langle\alpha_{k}|K_{\mathsf{A}}|\alpha_{l}\rangle\big) \tag{39}\] for any skew Hermitian ancilla operator \(K_{\mathsf{A}}\). Choosing \(K_{\mathsf{A}}={\rm i}^{\nu}|\alpha_{l}\rangle\langle\alpha_{k}|-(-{\rm i})^{\nu}|\alpha_{k}\rangle\langle\alpha_{l}|\) with \(\nu=0,1\), one obtains the conditions \[c_{kl}\sqrt{p_{k}}=\overline{c}_{lk}\sqrt{p_{l}}\ \ \text{for}\ 1\leq k,l\leq n\quad,\quad c_{kl}=0\ \ \text{for}\ l>n\;. \tag{40}\] Let us set \(H_{\rm S}=\sum_{k,l=1}^{n}h_{kl}|k\rangle\langle l|\) with \(h_{kl}=c_{kl}p_{l}^{-\frac{1}{2}}\). It follows from (40) that \(H_{\rm S}=H_{\rm S}^{\dagger}\). Furthermore, \[|\dot{\Psi}^{\rm h}\rangle=\sum_{k,l=1}^{n}h_{kl}\sqrt{p_{l}}|k\rangle|\alpha_{l}\rangle=H_{\rm S}\otimes\mathds{1}_{\rm A}|\Psi\rangle\;. \tag{41}\] Reciprocally, if \(|\dot{\Psi}^{\rm h}\rangle=H_{\rm S}\otimes\mathds{1}_{\rm A}|\Psi\rangle\) with \(H_{\rm S}\) self-adjoint then \({\rm Re}\,\langle\dot{\Psi}^{\rm h}|\mathds{1}\otimes K_{\rm A}|\Psi\rangle=0\).
The condition \({\rm Re}\,\langle\dot{\Psi}^{\rm h}|\Psi\rangle=0\) coming from the requirement that \(|\dot{\Psi}^{\rm h}\rangle\) is in the tangent space (30) yields the additional constraint \(\langle H_{\rm S}\otimes\mathds{1}_{\rm A}\rangle_{\Psi}=0\). Thus \[\mathfrak{h}_{|\Psi\rangle}=\left\{H_{\rm S}\otimes\mathds{1}_{\rm A}|\Psi \rangle\,;\,H_{\rm S}\,\,\text{self-adjoint,}\,\,\langle H_{\rm S}\otimes \mathds{1}_{\rm A}\rangle_{\Psi}=0\right\}. \tag{42}\] In conclusion, we have shown that the partial trace map (27) defines a Riemannian submersion from \(\mathcal{S}_{\mathcal{K}}^{\rm inv}\) equipped with the metric \(g_{\mathcal{S}}\) to the manifold \(\mathcal{E}_{\mathcal{H}}^{\rm inv}\) equipped with the Bures metric \(g_{\rm B}\). This means that \(\mathrm{d}\pi\) is an isometry from \((\mathfrak{h}_{|\Psi\rangle},g_{\mathcal{S}})\) to \((T_{\rho}\,\mathcal{E}_{\mathcal{H}},g_{\rm B})\), namely, \[(g_{\rm B})_{\rho}\big{(}\mathrm{d}\pi|_{|\Psi\rangle}(|\dot{\Psi}^{\rm h} \rangle),\mathrm{d}\pi|_{|\Psi\rangle}(|\dot{\Phi}^{\rm h}\rangle)\big{)}={\rm Re }\,\langle\dot{\Psi}^{\rm h}|\dot{\Phi}^{\rm h}\rangle \tag{43}\] for any purification \(|\Psi\rangle\) of \(\rho\) and any horizontal tangent vectors \(|\dot{\Psi}^{\rm h}\rangle,|\dot{\Phi}^{\rm h}\rangle\in\mathfrak{h}_{|\Psi\rangle}\). ## IV Bures geodesics ### Determination of the geodesics We determine in this subsection the Bures geodesics by applying the mathematical framework of the preceding section (see [35] for a similar approach in the case of the space of positive definite matrices, i.e., unnormalized quantum states). The shortest geodesic joining two vectors \(|\Psi\rangle\) and \(|\Phi\rangle\) on the unit sphere \(\mathcal{S}_{\mathcal{K}}^{\rm inv}\) equipped with the metric \(g_{\mathcal{S}}\) is the arc of great circle \[|\Psi_{\rm g}(\tau)\rangle=\cos\tau\,|\Psi\rangle+\sin\tau\,|\dot{\Psi}\rangle \quad,\quad 0\leq\tau\leq\theta\;, \tag{44}\] where \(|\dot{\Psi}\rangle\in T_{|\Psi\rangle}\mathcal{S}_{\mathcal{K}}^{\rm inv}\) and \(\theta=\arccos({\rm Re}\,\langle\Psi|\Phi\rangle)\) is the angle between \(|\Psi\rangle\) and \(|\Phi\rangle\). We may assume without loss of generality that \(\theta\neq 0,\pi\), since otherwise \(|\Psi\rangle\) and \(|\Phi\rangle=\pm|\Psi\rangle\) project out to the same state \(\rho=\sigma\). The longest arc of great circle joining \(|\Psi\rangle\) and \(|\Phi\rangle\) needs not to be considered here, because it is the extension of the shortest geodesic joining \(|\Psi\rangle\) and \(-|\Phi\rangle\) and the latter vector belongs to the same orbit as \(|\Phi\rangle\). The geodesic tangent vector at \(\tau=0\) is given by \[|\dot{\Psi}_{\rm g}(0)\rangle=|\dot{\Psi}\rangle=\frac{1}{\sin\theta}\big{(} |\Phi\rangle-\cos\theta|\Psi\rangle\big{)}\;. \tag{45}\] It is easy to check on this formula that \(\|\dot{\Psi}\|=1\), i.e., the geodesic (44) has unit velocity. According to the properties of Riemannian submersions (Sec. III.3), the Bures geodesics joining the invertible states \(\rho=\pi(|\Psi\rangle)\) and \(\sigma=\pi(|\Phi\rangle)\) are obtained by projecting the arcs of big circle (44) having horizontal tangent vectors \(|\dot{\Psi}\rangle\in\mathfrak{h}_{\Psi}\). Let us consider the purification of \(\rho\) given by \[|\Psi\rangle=\sqrt{\rho}\otimes\mathds{1}_{\rm A}\sum_{k=1}^{n}|k\rangle| \alpha_{k}\rangle\;, \tag{46}\] where we have used the Schmidt decomposition (28). The last sum is an (unnormalized) maximally entangled state of the system and ancilla. 
Similarly, by (32) any purification of \(\sigma\) has the form \(|\Phi\rangle=\sqrt{\sigma}\,W\rho^{-1/2}\otimes U_{\rm A}|\Psi\rangle\), where \(W\) and \(U_{\rm A}\) are two unitaries on \(\mathcal{H}\) and \(\mathcal{H}_{\rm A}\), respectively, with \(\{W|k\rangle\}\) an eigenbasis of \(\sigma\). We have to determine the purifications \(|\Phi\rangle\) of \(\sigma\) such that the horizontality condition \(|\dot{\Psi}\rangle\in\mathfrak{h}_{|\Psi\rangle}\) holds. In view of (42), (45), and (46), this condition can be written as \[\frac{1}{\sin\theta} \bigg{(}\sqrt{\sigma}\,W\otimes U_{\rm A}-\cos\theta\sqrt{\rho} \otimes\mathds{1}_{\rm A}\bigg{)}\sum_{k=1}^{n}|k\rangle|\alpha_{k}\rangle\] \[=H_{\rm S}\sqrt{\rho}\otimes\mathds{1}_{\rm A}\sum_{k=1}^{n}|k \rangle|\alpha_{k}\rangle\;. \tag{47}\] for some self-adjoint operator \(H_{\rm S}\) such that \(\langle H_{\rm S}\otimes\mathds{1}_{\rm A}\rangle_{\Psi}=0\). In particular, one has \(\langle\alpha_{l}|U_{\rm A}|\alpha_{k}\rangle=0\) for \(n<l\leq n_{\rm A}\) and \(1\leq k\leq n\). We now use the identity \[\mathds{1}\otimes U_{\rm A}\sum_{k=1}^{n}|k\rangle|\alpha_{k}\rangle=U_{\rm A }^{\rm T}\otimes\mathds{1}_{\rm A}\sum_{k=1}^{n}|k\rangle|\alpha_{k}\rangle\;, \tag{48}\] where \(U_{\rm A}^{\rm T}=\sum_{k,l=1}^{n}\langle\alpha_{l}|U_{\rm A}|\alpha_{k} \rangle|k\rangle\langle l|\) (when \(n_{\rm A}>n\), (48) is true provided that \({\rm span}\{|\alpha_{1}\rangle,\ldots,|\alpha_{n}\rangle\}\) is invariant under \(U_{\rm A}\), which is indeed the case here). Observe that \(U_{\rm A}^{\rm T}\) and thus \(U=WU_{\rm A}^{\rm T}\) are unitaries. Multiplying both members of (47) by \(\sqrt{\rho}\), one deduces that this equation is equivalent to \[\frac{1}{\sin\theta}\bigg{(}\sqrt{\rho}\sqrt{\sigma}\,U-\cos\theta\ \rho \bigg{)}=\sqrt{\rho}\,H_{\rm S}\sqrt{\rho}\;. \tag{49}\] Therefore, (47) is equivalent to \(\sqrt{\rho}\sqrt{\sigma}\,U\) being self-adjoint for some unitary \(U\) on \(\mathcal{H}\). It is convenient at this point to introduce the polar decomposition \[\sqrt{\sigma}\sqrt{\rho}=U_{\sigma\rho}\Lambda_{\sigma\rho}\;\;,\;\;\Lambda_{ \sigma\rho}=|\sqrt{\sigma}\sqrt{\rho}|>0\;, \tag{50}\] where \(U_{\sigma\rho}\) is unitary. Let \(V\) be the unitary operator given by \(V=U_{\sigma\rho}^{\dagger}U\). Then \(\sqrt{\rho}\sqrt{\sigma}\,U\) is self-adjoint if and only if \(\Lambda_{\sigma\rho}V=V^{\dagger}\Lambda_{\sigma\rho}\). This implies \(V^{\dagger}\Lambda_{\sigma\rho}^{2}V=\Lambda_{\sigma\rho}^{2}\), i.e., \(V\) commutes with \(\Lambda_{\sigma\rho}^{2}\) and thus with \(\Lambda_{\sigma\rho}\). As a result, \(V^{\dagger}\Lambda_{\sigma\rho}=V\Lambda_{\sigma\rho}\), which entails \(V^{\dagger}=V\) (since \(\Lambda_{\sigma\rho}>0\)). Thus the horizontally condition (47) holds if and only if \(U=U_{\sigma\rho}V\) with \(V\) unitary and self-adjoint, \(V^{2}=\mathds{1}\), and \(V\) commutes with \(\Lambda_{\sigma\rho}\). The corresponding purifications of \(\sigma\) are \[|\Phi_{V}\rangle = \sqrt{\sigma}\,U_{\sigma\rho}V\rho^{-1/2}\otimes\mathds{1}_{\rm A} |\Psi\rangle \tag{51}\] \[= \rho^{-1/2}\Lambda_{\sigma\rho}V\rho^{-1/2}\otimes\mathds{1}_{\rm A }|\Psi\rangle\;.\] Note that \(\langle\Psi|\Phi_{V}\rangle\) is real. 
By (44) and (45), the horizontal geodesics on \(\mathcal{S}^{\rm inv}_{\mathcal{K}}\) are given by \[|\Psi_{\mathrm{g},V}(\tau)\rangle=\frac{1}{\sin\theta_{V}}\Big{(}\sin(\theta_{V} -\tau)|\Psi\rangle+\sin\tau|\Phi_{V}\rangle\Big{)} \tag{52}\] with \(\cos\theta_{V}=\langle\Psi|\Phi_{V}\rangle=\operatorname{tr}\Lambda_{\sigma \rho}V\). By using the identity \[\operatorname{tr}_{\mathbf{A}}|\Phi_{V}\rangle\langle\Psi|=\sqrt{\sigma}\,U_{ \sigma\rho}V\sqrt{\rho}=\rho^{-1/2}\Lambda_{\sigma\rho}V\sqrt{\rho}\;, \tag{53}\] we obtain the geodesics \(\gamma_{\mathrm{g},V}(\tau)=\pi(|\Psi_{\mathrm{g},V}(\tau)\rangle)\) on \(\mathcal{E}_{\mathcal{H}}\), where \(\pi\) is the quotient map (27), \[\gamma_{\mathrm{g},V}(\tau)=\frac{1}{\sin^{2}\theta_{V}}\bigg{(} \sin^{2}(\theta_{V}-\tau)\,\rho+\sin^{2}(\tau)\,\sigma\] \[+\sin(\theta_{V}-\tau)\sin(\tau)\Big{(}\rho^{-1/2}\Lambda_{\sigma \rho}V\rho^{1/2}+\mathrm{h.c.}\Big{)}\bigg{)} \tag{54}\] with \(0\leq\tau\leq\theta_{V}\). Eq. (54) generalizes formula (1) of Sec. II. It coincides with this formula for \(V=\mathds{1}\). Thanks to (43) and to the horizontality of \(|\hat{\Psi}_{V}\rangle\), \(\gamma_{\mathrm{g},V}\) has unit square velocity \((g_{\mathrm{B}})_{\rho}(\dot{\gamma}_{\mathrm{g},V},\dot{\gamma}_{\mathrm{g}, V})=\|\dot{\Psi}_{V}\|^{2}=1\). Thus \(\theta_{V}=\ell(\gamma_{\mathrm{g},V})\) is the geodesic length. The geodesic with the smallest length is obtained for \(V=\mathds{1}\). In fact, denoting by \(\lambda_{k}>0\) and \(v_{k}\in\{-1,1\}\) the eigenvalues of \(\Lambda_{\rho\sigma}\) and \(V\), one has \[\cos\theta_{V}=\operatorname{tr}\Lambda_{\sigma\rho}V=\sum_{k=1} ^{n}\lambda_{k}v_{k}\] \[\leq\sum_{k=1}^{n}\lambda_{k}=\operatorname{tr}\Lambda_{\sigma \rho}=\sqrt{F(\rho,\sigma)}=\cos\theta_{1}\;, \tag{55}\] where we have set \(\theta_{1}=\theta_{V=\mathds{1}}\) and used \([V,\Lambda_{\sigma\rho}]=0\). Similarly, the longest geodesic joining \(\rho\) to \(\sigma\) is obtained by choosing \(V=-\mathds{1}\) and has length \(\pi-\theta_{1}\). In view of (51), such a geodesic is the projection of the arc of great circle joining \(|\Psi\rangle\) and the vector \(|\Phi_{-\mathds{1}}\rangle=-|\Phi_{1}\rangle\) diametrically opposite to \(|\Phi_{\mathds{1}}\rangle\) on the sphere \(\mathcal{S}^{\rm inv}_{\mathcal{K}}\). The latter is obtained by inverting time on the great circle through \(|\Psi\rangle\) and \(|\Phi_{\mathds{1}}\rangle\) and by replacing the arc length \(\theta_{1}\) by \(\pi-\theta_{1}\). Thus, by extending the shortest and longest geodesics \(\gamma_{\mathrm{g},\mathds{1}}\) and \(\gamma_{\mathrm{g},-\mathds{1}}\) joining \(\rho\) and \(\sigma\) to the interval \([0,\pi]\), one obtains the same closed curve albeit with opposite orientations. More generally, the pair of geodesics \((\gamma_{\mathrm{g},V},\gamma_{\mathrm{g},-V})\) enjoys the same property. 
We have proven:

**Theorem 1**.: _The Bures geodesics joining the two distinct invertible states \(\rho\) and \(\sigma\in\mathcal{E}^{\rm inv}_{\mathcal{H}}\) are given by_ \[\gamma_{\mathrm{g},V}(\tau)=X_{\rho\sigma,V}(\tau)\,\rho\,X_{\rho\sigma,V}(\tau)\quad,\quad 0\leq\tau\leq\theta_{V}\;, \tag{56}\] _where the geodesic length \(\theta_{V}\) is given by (55), \(X_{\rho\sigma,V}(\tau)\) is the operator defined by_ \[X_{\rho\sigma,V}(\tau)=\frac{1}{\sin\theta_{V}}\Big(\sin(\tau)M_{\rho\sigma,V}+\sin(\theta_{V}-\tau)\mathds{1}\Big) \tag{57}\] _with_ \[M_{\rho\sigma,V}=\rho^{-1/2}|\sqrt{\sigma}\sqrt{\rho}|V\rho^{-1/2}\;, \tag{58}\] _and \(V\) is an arbitrary unitary self-adjoint operator commuting with \(\Lambda_{\sigma\rho}=|\sqrt{\sigma}\sqrt{\rho}|\). Furthermore, the geodesic with the smallest length, denoted hereafter by \(\gamma_{\mathrm{g}}\), is obtained by choosing \(V=\mathds{1}\) in (56)-(58) and has length_ \[\ell(\gamma_{\mathrm{g}})=\theta_{1}=d_{\mathrm{B}}(\rho,\sigma)\;\in\;(0,\frac{\pi}{2}]\;. \tag{59}\]

The explicit form (56) of the Bures geodesics has been obtained in Refs. [27; 28] in the special case \(V=\mathds{1}\). Our derivation shows that, in addition to this shortest geodesic, there are other geodesics having larger lengths \(\theta_{V}\), corresponding to \(V\neq\mathds{1}\). These geodesics will be classified below according to the number of times they intersect the boundary \(\partial\mathcal{E}_{\mathcal{H}}\) between \(\rho\) and \(\sigma\). More precisely, there are \(2^{n}\) geodesics joining two invertible states \(\rho\) and \(\sigma\) if \(\Lambda_{\sigma\rho}\) has a non-degenerate spectrum. Indeed, there are then \(2^{n}\) choices for \(V\), because \(V\) is diagonal in an eigenbasis of \(\Lambda_{\sigma\rho}\) (since \([\Lambda_{\sigma\rho},V]=0\)) and thus it is fully characterized by its eigenvalues \(v_{k}\in\{1,-1\}\). In contrast, if \(\Lambda_{\sigma\rho}\) has a degenerate eigenvalue \(\lambda_{k}\) then there are infinitely many geodesics \(\gamma_{\mathrm{g},V}\) joining \(\rho\) and \(\sigma\), in analogy with what happens for diametrically opposite points on a sphere. In fact, in that case there are infinitely many choices for \(V\), as \(V_{k}=\Pi_{k}V\Pi_{k}\) can be any \(r_{k}\times r_{k}\) self-adjoint unitary matrix, where \(\Pi_{k}\) and \(r_{k}\) are the eigenprojector and multiplicity of \(\lambda_{k}\). Note that in all cases there are at most \(2^{n}\) distinct geodesic lengths \(\theta_{V}=\ell(\gamma_{\mathrm{g},V})\), since \(\theta_{V}\) only depends on the spectrum of \(V\), see (55). The geodesic with the shortest length is always unique and obtained for \(V=\mathds{1}\). The second shortest geodesic length \(\theta_{2}\) is given by \(\cos\theta_{2}=\cos\theta_{1}-2\lambda_{\rm min}\), \(\lambda_{\rm min}\) being the smallest eigenvalue of \(\Lambda_{\sigma\rho}\). If \(\lambda_{\rm min}\) is non-degenerate, the corresponding geodesic is unique and obtained by choosing \(V=V_{i_{m}}\) with a single negative eigenvalue \(v_{i_{m}}=-1\), where \(\lambda_{i_{m}}=\lambda_{\rm min}\). One infers from (59) that the geodesic distance (obtained as the minimal length of curves joining \(\rho\) and \(\sigma\)) is the arccos Bures distance \(d_{\mathrm{B}}\), see (22).
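The closed form of Theorem 1 is easy to evaluate numerically. The following sketch (an illustration with randomly generated full-rank states, not taken from the original text) builds the shortest geodesic \(V=\mathds{1}\) from Eqs. (56)–(58), using \(\Lambda_{\sigma\rho}=|\sqrt{\sigma}\sqrt{\rho}|=(\sqrt{\rho}\,\sigma\sqrt{\rho})^{1/2}\) as in Eq. (23), and checks that it interpolates between \(\rho\) and \(\sigma\).

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def random_state(n, rng):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = X @ X.conj().T
    return rho / np.trace(rho).real

def shortest_geodesic(rho, sigma):
    """gamma_g(tau) for V = 1, Eqs. (56)-(58); returns (gamma, theta_1)."""
    s_rho = sqrtm(rho)
    Lam = sqrtm(s_rho @ sigma @ s_rho)          # Lambda_{sigma rho} = |sqrt(sigma) sqrt(rho)|
    M = inv(s_rho) @ Lam @ inv(s_rho)           # M_{rho sigma, 1}, Eq. (58)
    theta1 = np.arccos(min(np.real(np.trace(Lam)), 1.0))   # Eq. (55) with V = 1
    n = rho.shape[0]
    def gamma(tau):
        X_tau = (np.sin(tau) * M + np.sin(theta1 - tau) * np.eye(n)) / np.sin(theta1)  # Eq. (57)
        return X_tau @ rho @ X_tau                                                     # Eq. (56)
    return gamma, theta1

rng = np.random.default_rng(2)
rho, sigma = random_state(3, rng), random_state(3, rng)
gamma, theta1 = shortest_geodesic(rho, sigma)
print(np.allclose(gamma(0.0), rho), np.allclose(gamma(theta1), sigma))
print([round(float(np.real(np.trace(gamma(t)))), 8) for t in np.linspace(0.0, theta1, 5)])  # unit traces
```

Other geodesics \(\gamma_{\mathrm{g},V}\) are obtained by inserting a sign operator \(V\) commuting with \(\Lambda_{\sigma\rho}\) into the definition of \(M_{\rho\sigma,V}\); note also that the value \(\theta_{1}\) returned by the sketch is precisely the arccos Bures distance \(d_{\rm B}(\rho,\sigma)\) of Eq. (59).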
This is the main reason for using here that distance instead of the Bures distance (37); \(d_{\mathrm{B}}(\rho,\sigma)\) is the angle \(\theta_{1}=\arccos\langle\Psi|\Phi_{\mathds{1}}\rangle\) between the purification vectors \(|\Psi\rangle\) and \(|\Phi_{\mathds{1}}\rangle\) of \(\rho\) and \(\sigma\) (recall that \(\langle\Psi|\Phi_{V}\rangle\in\mathbb{R}\)). Hence \(|\Phi_{1}\rangle\) (and similarly \(|\Phi_{-\mathds{1}}\rangle=-|\Phi_{\mathds{1}}\rangle\)) is a purification \(|\Phi\rangle\in\pi^{-1}(\sigma)\) of \(\sigma\) maximizing the pure state fidelity in Uhlmann's theorem \(\sqrt{F(\rho,\sigma)}=\max_{|\Psi\rangle}|\langle\Psi|\Phi\rangle|\)[1; 2]. This gives a way to compute numerically \(|\Phi_{\mathds{\pm 1}}\rangle\), and thus the horizontal geodesic (52), by using an optimization algorithm, instead of relying on the formula \(|\Phi_{V}\rangle=M_{\rho\sigma,V}\otimes\mathds{1}_{\mathds{A}}|\Psi\rangle\), which requires diagonalizing \(\rho\) and \(\sqrt{\sigma}\sqrt{\rho}\) to compute \(M_{\rho\sigma,V}\). The purifications \(|\Phi_{V}\rangle\) for \(V\neq\pm\mathds{1}\) correspond to relative maxima of \(|\langle\Psi|\Phi\rangle|\), which are smaller than the global maximum, see (55). The properties of the self-adjoint operators (58) are given in Appendix A. As pointed out in [27], for \(V=\mathds{1}\) this operator is related to the optimal measurement to discriminate the distributions of measurement outcomes in states \(\rho\) and \(\sigma\) (more precisely, the Hellinger distance between these two distributions is maximum for a von Neumann measurement of the observable \(M_{\rho\sigma}\)[4]). In the special case of commuting states \(\rho\) and \(\sigma\), the geodesics have the form \[\gamma_{\mathrm{g},V}(\tau) = \sum_{k}p_{k,V}(\tau)|k\rangle\langle k| \tag{60}\] \[p_{k,V}(\tau) = \left(\frac{\sin(\theta_{V}-\tau)}{\sin\theta_{V}}\sqrt{p_{k}}+v _{k}\frac{\sin\tau}{\sin\theta_{V}}\sqrt{q_{k}}\right)^{2}\,,\] where \(\{|k\rangle\}\) is an orthonormal basis of common eigenvectors of \(\rho\) and \(\sigma\) and \(p_{k}\), \(q_{k}\) are the corresponding eigenvalues. Here, we have assumed that \(\Lambda_{\sigma\rho}\) has a non-degenerated spectrum, i.e., \(p_{k}q_{k}\neq p_{l}q_{l}\) if \(k\neq l\) (otherwise there are infinitely many other geodesics from \(\rho\) to \(\sigma\), which are not diagonal in the \(\{|k\rangle\}\)-basis). In fact, in that case one finds \(M_{\rho\sigma,V}=\sum_{k}v_{k}\sqrt{q_{k}/p_{k}}|k\rangle\langle k|\) and the purification (51) of \(\sigma\) is given, choosing \(|\Psi\rangle=\sum_{k}\sqrt{p_{k}}|k\rangle|k\rangle_{\mathrm{A}}\), by \[|\Phi_{V}\rangle=M_{\rho\sigma,V}|\Psi\rangle=\sum_{k}v_{k}\sqrt{q_{k}}|k \rangle|k\rangle_{\mathrm{A}}\;. \tag{61}\] Replacing this expression into (44) and (45) yields the horizontal geodesic \(|\Psi_{\mathrm{g},V}(\tau)\rangle(\tau)=\sum_{k}\sqrt{p_{k,V}(\tau)}|k\rangle|k\rangle _{\mathrm{A}}\), showing that \(\gamma_{\mathrm{g},V}(\tau)\) commutes with \(\rho\) and \(\sigma\) at all times and is given by (60). The circuit of Fig. 1(a) with the unitary \(U_{\mathrm{SA}}\) shown in Fig. 1(b) implements such a geodesic joining commuting states (indeed, using \(|\Psi\rangle=\sum_{k}\sqrt{p_{k}}|w_{k}\rangle|k\rangle_{\mathrm{A}}\) and \(|\dot{\Psi}\rangle=U_{\mathrm{SA}}|1\rangle|0\rangle_{\mathrm{A}}=\sum_{k} \alpha_{k}|w_{k}\rangle|k\rangle_{\mathrm{A}}\) one finds that \(|\Phi_{V}\rangle\) has the form (61) with \(|k\rangle\hookrightarrow|w_{k}\rangle\)). 
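As a quick numerical check of the statements earlier in this subsection (again a sketch with arbitrary test states, representing purifications by coefficient matrices as in the sketch after Eq. (34)), one can verify that \(|\Phi_{\mathds{1}}\rangle=M_{\rho\sigma,\mathds{1}}\otimes\mathds{1}_{\mathsf{A}}|\Psi\rangle\) built from Eq. (58) indeed purifies \(\sigma\) and saturates Uhlmann's bound, \(|\langle\Psi|\Phi_{\mathds{1}}\rangle|=\sqrt{F(\rho,\sigma)}\).

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(3)
def rand_state(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    r = X @ X.conj().T
    return r / np.trace(r).real

rho, sigma = rand_state(3), rand_state(3)
s_rho = sqrtm(rho)
Lam = sqrtm(s_rho @ sigma @ s_rho)      # Lambda_{sigma rho}
M = inv(s_rho) @ Lam @ inv(s_rho)       # M_{rho sigma, 1}, Eq. (58)

p, U = np.linalg.eigh(rho)
Psi = s_rho @ U                         # coefficient matrix of |Psi> in Eq. (46); tr_A |Psi><Psi| = rho
Phi1 = M @ Psi                          # coefficient matrix of |Phi_1> = (M ⊗ 1)|Psi>, Eq. (51) with V = 1

print(np.allclose(Phi1 @ Phi1.conj().T, sigma))           # |Phi_1> purifies sigma
print(np.isclose(abs(np.trace(Psi.conj().T @ Phi1)),      # |<Psi|Phi_1>| ...
                 np.real(np.trace(Lam))))                 # ... equals tr Lambda = sqrt(F), Eqs. (36), (55)
```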
### Intersections with the boundary of quantum states; geodesics through a pure state We have so far determined the geodesics on the open manifold \(\mathcal{E}_{\mathcal{H}}^{\mathrm{inv}}\) but have not discussed whether such geodesics can bounce on its boundary \(\partial\mathcal{E}_{\mathcal{H}}\). Let us consider the extension of the geodesic (56) joining the two invertible states \(\rho\) and \(\sigma\) to the time interval \([0,\pi]\). This extension is a closed curve, which we still denote by \(\gamma_{\mathrm{g},V}\). Generalizing a result obtained in [27], we show in Appendix B that this curve intersects \(q_{V}\) times the boundary, where \(q_{V}\) is the number of distinct eigenvalues of the observable \(M_{\rho\sigma,V}\) in (58). Furthermore, the number of intersections on the part of \(\gamma_{\mathrm{g},V}\) joining \(\rho\) and \(\sigma\) is equal to the multiplicity of the eigenvalue \(-1\) of \(V\). In particular, the shortest geodesic \(\gamma_{\mathrm{g}}\) does not intersect \(\partial\mathcal{E}_{\mathcal{H}}\) between \(\rho\) and \(\sigma\), while all other geodesics with \(V\neq\mathds{1}\) do so at least once. Let \(\rho_{i}\), \(i=1,\ldots,q_{V}\), be the intersection points of \(\gamma_{\mathrm{g},V}\) with \(\partial\mathcal{E}_{\mathcal{H}}\). Theorem 6 in Appendix B shows that the states \(\rho_{i}\) have ranks \(n-m_{i,V}\) and supports \((\mathds{1}-P_{i,V})\mathcal{H}\), where \(m_{i,V}\) and \(P_{i,V}\) are the multiplicity and spectral projector of the \(i\)th eigenvalue of \(M_{\rho\sigma,V}\). An interpretation of the states \(\rho_{i}\) and their kernel \(P_{i,V}\mathcal{H}\) in quantum metrology is given in Sec VI.3 below. Let us end this discussion by studying the geodesics passing through a given mixed state and a pure state. Consider an invertible state \(\rho>0\) and a pure state \(\rho_{1}=|\phi_{1}\rangle\langle\phi_{1}|\) such that \(\langle\rho\rangle_{\phi_{1}}=\langle\phi_{1}|\rho|\phi_{1}\rangle>0\). As shown in Theorem 6, there is up to time reversal only one geodesic joining \(\rho\) and \(\rho_{1}\). This geodesic is given by \[\gamma_{\mathrm{g},\rho\rightarrow\rho_{1}}(\tau) = \frac{1}{\sin^{2}\theta_{1}}\bigg{(}\sin^{2}(\theta_{1}-\tau)\,\rho +\sin^{2}(\tau)\,|\phi_{1}\rangle\langle\phi_{1}| \tag{62}\] \[+\frac{\sin(\theta_{1}-\tau)\sin(\tau)}{\cos\theta_{1}}\left\{ \rho\,,\,|\phi_{1}\rangle\langle\phi_{1}|\right\}\bigg{)}\] with \(\theta_{1}=\arccos(\langle\rho\rangle_{\phi_{1}^{1/2}}^{1/2})\). This geodesic intersects twice the boundary, at \(\rho_{1}\) and at another state \(\rho_{2}\) of rank \(n-1\) and support orthogonal to \(|\phi_{1}\rangle\). Eq. (62) can be proven from (54) by taking \(\sigma=(1-\varepsilon)\rho_{1}+(\varepsilon/n)\mathds{1}>0\) and letting \(\varepsilon\to 0\). If \(|\phi_{1}\rangle\) is an eigenvector of \(\rho\) with eigenvalue \(p_{1}=\cos^{2}(\theta_{1})>0\) then \(\gamma_{\mathrm{g},\rho\rightarrow\rho_{1}}\) is a segment of straight line. In fact, in that case (62) simplifies to \[\gamma_{\mathrm{g},\rho\rightarrow\rho_{1}}(\tau)=\sin^{2}(\theta_{1}\!-\!\tau)\, \rho_{\perp}\!+\!\cos^{2}(\theta_{1}\!-\!\tau)\,|\phi_{1}\rangle\langle\phi_{1}|\;, \tag{63}\] where \(\rho_{\perp}=\Pi_{\perp}\rho\,\Pi_{\perp}/\sin^{2}(\theta_{1})\) and \(\Pi_{\perp}=1-|\phi_{1}\rangle\langle\phi_{1}|\) is the projector onto the subspace orthogonal to \(|\phi_{1}\rangle\). Note that this agrees with the general form (60) of geodesics between commuting states. 
In particular, the shortest path joining \(\rho\) to its closest pure state is a segment of straight line intersecting \(\partial\mathcal{E}_{\mathcal{H}}\) transversally (in fact, the closest pure state to \(\rho\), i.e., the state \(|\phi_{1}\rangle\) maximizing the fidelity \(F(\rho,\rho_{1})=\langle\phi_{1}|\rho|\phi_{1}\rangle\), is an eigenvector of \(\rho\) associated to the maximal eigenvalue). ## V Geodesics as physical evolutions We show in this section that the Bures geodesics correspond to physical evolutions of the system coupled to an ancilla and that such evolutions are non-Markovian. Let us recall that the dynamics of a quantum system coupled to its environment is obtained by letting the total system (system and environment) evolve unitarily under some Hamiltonian \(H\) and then tracing out over the environment (referred to as the ancilla A in what follows). The system state at time \(t\geq 0\) is given by \[\rho(t)=\mathrm{tr}_{\mathrm{A}}\,e^{-\mathrm{i}tH}|\Psi\rangle\langle\Psi|\,e^{ \mathrm{i}tH}\;, \tag{64}\] where \(|\Psi\rangle\) is the system-ancilla initial state. Although one usually assumes that the system starts interacting with the ancilla at \(t=0\), so that \(|\Psi\rangle=|\psi\rangle|\alpha\rangle\) is a product state, in general the system and ancilla can be initially entangled. We have seen in Sec. IV.1 that the Bures geodesics \(\gamma_{\mathrm{g},V}(\tau)\) joining two states \(\rho\) and \(\sigma\in\mathcal{E}_{\mathcal{H}}^{\mathrm{inv}}\) are the projection of horizontal pure state geodesics \(|\Psi_{\mathrm{g},V}(\tau)\rangle\) on an enlarged system with Hilbert space \(\mathcal{K}=\mathcal{H}\otimes\mathcal{H}_{\mathrm{A}}\), i.e., \[\gamma_{\mathrm{g},V}(\tau)=\mathrm{tr}_{\mathrm{A}}\,|\Psi_{\mathrm{g},V}(\tau) \rangle\langle\Psi_{\mathrm{g},V}(\tau)|\;, \tag{65}\] where \(|\Psi_{\mathrm{g},V}(\tau)\rangle\) is given by (52). The following theorem shows that one can associate a Hamiltonian to the latter geodesic. **Theorem 2**.: _Consider the system-ancilla Hamiltonian_ \[H_{\mathrm{g},V} = -\mathrm{i}\big{(}|\Psi\rangle\langle\hat{\Psi}_{V}|-|\hat{\Psi}_{V} \rangle\langle\Psi|\big{)} \tag{66}\] \[= \frac{-\mathrm{i}}{\sin\theta}\big{(}|\Psi\rangle\langle\Phi_{V}| -|\Phi_{V}\rangle\langle\Psi|\big{)}\;,\] _where \(|\Psi\rangle\in\mathcal{K}\) is a fixed purification of \(\rho\), \(|\hat{\Psi}_{V}\rangle\) is the horizontal tangent vector to \(|\Psi_{\mathrm{g},V}(\tau)\rangle\) at \(\tau=0\), and \(|\Phi_{V}\rangle\in\mathcal{K}\) is the corresponding purification of \(\sigma\), see (51). Then_ \[|\Psi_{\mathrm{g},V}(\tau)\rangle=e^{-\mathrm{i}\tau H_{\mathrm{g},V}}|\Psi\rangle \tag{67}\] _for any \(\tau\geq 0\). As a result, the geodesic \(\gamma_{\mathrm{g},V}\) coincides with the open quantum system time evolution_ \[\gamma_{\mathrm{g},V}(\tau)=\mathrm{tr}_{\mathrm{A}}\,e^{-\mathrm{i}\tau H_{ \mathrm{g},V}}|\Psi\rangle\langle\Psi|\,e^{\mathrm{i}\tau H_{\mathrm{g},V}}\;. \tag{68}\] Proof.: The horizontality condition \(|\hat{\Psi}_{V}\rangle\in\mathfrak{h}_{|\Psi\rangle}\) entails \(\langle\Psi|\hat{\Psi}_{V}\rangle=0\), see (42) (note that general tangent vectors \(|\Psi\rangle\) at \(|\Psi\rangle\) satisfy a weaker condition \(\mathrm{Re}\,\langle\Psi|\hat{\Psi}\rangle=0\)). Furthermore, one has \(\|\hat{\Psi}_{V}\|=1\), see the statement following (45). Let \(\{|\Psi_{k}\rangle\}_{k=0}^{nn-1}\) be an orthonormal basis of \(\mathcal{K}\) such that \(|\Psi_{0}\rangle=|\Psi\rangle\) and \(|\Psi_{1}\rangle=|\dot{\Psi}_{V}\rangle\). 
The matrix of \(H_{\mathrm{g},V}\) in this basis has a left upper corner given by the Pauli matrix \(\sigma_{y}\), the other matrix elements being equal to zero. Such a matrix is easy to exponentiate, yielding \[e^{-\mathrm{i}\tau H_{\mathrm{g},V}} = \mathds{1}+(\cos\tau-1)\big{(}|\Psi\rangle\langle\Psi|+|\hat{ \Psi}_{V}\rangle\langle\hat{\Psi}_{V}|\big{)} \tag{69}\] \[-\sin\tau\big{(}|\Psi\rangle\langle\hat{\Psi}_{V}|-|\hat{\Psi}_{ V}\rangle\langle\Psi|\big{)}\;.\] Applying this operator to \(|\Psi\rangle\) and comparing with (44) yields the identity (67). The second equality in (66) follows from the relation (45) between \(|\dot{\Psi}_{V}\rangle\) and \(|\Phi_{V}\rangle\). Note that the Hamiltonian \(H_{\mathrm{g},V}\) does not depend on the choice of the state \(|\Psi\rangle\) on the horizontal geodesic \(|\Psi_{\mathrm{g},V}(\tau)\rangle\). Actually, let us fix another state \(|\Psi_{1}\rangle=|\Psi_{\mathrm{g},V}(\tau_{1})\rangle=\cos\tau_{1}|\Psi \rangle+\sin\tau_{1}|\dot{\Psi}_{V}\rangle\) on this geodesic. Since \(|\dot{\Psi}_{\mathrm{g},V}(0)\rangle\in\mathfrak{h}_{|\Psi\rangle}\), by property (i) of Sec. III.3 the tangent vector \(|\dot{\Psi}_{\mathrm{g},V}(\tau_{1})\rangle=-\sin\tau_{1}|\Psi\rangle+\cos \tau_{1}|\dot{\Psi}_{V}\rangle\) is in the horizontal subspace at \(|\Psi_{1}\rangle\). One easily checks that the expression of \(H_{\mathrm{g},V}\) in the first line of (66) is invariant under the substitutions \(|\Psi\rangle\hookrightarrow|\Psi_{1}\rangle\) and \(|\dot{\Psi}_{V}\rangle\hookrightarrow|\dot{\Psi}_{\mathrm{g},V}(\tau_{1})\rangle\). The geodesics \(\gamma_{\mathrm{g},V}\) in Theorem 1 are given in terms of two invertible states \(\rho\) and \(\sigma\). The purifications \(|\Psi\rangle\in\mathcal{K}\) of \(\rho\) are entangled system-ancilla states. However, since one can choose \(\rho\) to be any state on \(\gamma_{\mathrm{g},V}\), \(\rho\) can be taken to lie on the boundary \(\partial\mathcal{E}_{\mathcal{H}}\). Recall that all geodesics intersect \(\partial\mathcal{E}_{\mathcal{H}}\), see Sec. IV.2. Let us discuss the special case for which \(\gamma_{\mathrm{g},V}\) has an intersection with \(\partial\mathcal{E}_{\mathcal{H}}\) given by a pure state. For instance, all geodesics of a qubit satisfy this hypothesis (since \(\partial\mathcal{E}_{\mathbb{C}^{2}}\) is the set of pure qubit states). One may then choose the initial state \(\rho\) on \(\gamma_{\mathrm{g},V}\) to be a pure state \(\rho_{\psi}=|\psi\rangle\langle\psi|\) having purifications \(|\Psi\rangle=|\psi\rangle|\alpha\rangle\) given by product states, where \(|\alpha\rangle\) is an arbitrary ancilla pure state. This means that the system and ancilla are initially uncorrelated. As pointed out in Sec. IV.2, there is up to time reversal only one geodesics \(\gamma_{\mathrm{g}}\) passing through \(\rho_{\psi}\) and \(\sigma\), corresponding to \(V=\mathds{1}\). In this setting, one can extend the quantum evolution (68) to arbitrary (pure or mixed) initial states \(\nu_{\mathrm{S}}\in\mathcal{E}_{\mathcal{H}}\) of the system, by defining \[\mathcal{M}_{\mathrm{g},\tau}(\nu_{\mathrm{S}})=\mathrm{tr}_{\mathrm{A}}\,e^{- \mathrm{i}\tau H_{\mathrm{g}}}\,\nu_{\mathrm{S}}\otimes|\alpha\rangle\langle \alpha|\,e^{\mathrm{i}\tau H_{\mathrm{g}}}\;, \tag{70}\] where we have set \(H_{\mathrm{g}}=H_{\mathrm{g},\mathds{1}}\). For all times \(\tau\), \(\mathcal{M}_{\mathrm{g},\tau}\) is a quantum channel (CPTP map). 
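To see Theorem 2 at work numerically, here is a minimal sketch (with a randomly chosen purification and tangent vector, both placeholders): it builds the Hamiltonian (66) from a unit tangent vector obeying \(\langle\Psi|\dot{\Psi}\rangle=0\) and checks that \(e^{-{\rm i}\tau H_{\rm g,V}}|\Psi\rangle\) reproduces the great circle (44), in agreement with Eqs. (67) and (69).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
dim = 6                                        # placeholder dimension of K = H ⊗ H_A
Psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
Psi /= np.linalg.norm(Psi)
v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
v -= np.vdot(Psi, v) * Psi                     # enforce <Psi|Psidot> = 0, as for horizontal tangent vectors
Psidot = v / np.linalg.norm(v)

Hg = -1j * (np.outer(Psi, Psidot.conj()) - np.outer(Psidot, Psi.conj()))   # Eq. (66)

tau = 0.37
lhs = expm(-1j * tau * Hg) @ Psi                         # Eq. (67)
rhs = np.cos(tau) * Psi + np.sin(tau) * Psidot           # great circle, Eq. (44)
print(np.allclose(lhs, rhs))                             # True
```

The same check goes through with \(|\Psi\rangle\) replaced by any other point of the geodesic, reflecting the invariance of \(H_{\mathrm{g},V}\) noted above.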
The geodesic \(\gamma_{\mathrm{g}}(\tau)=\mathcal{M}_{\mathrm{g},\tau}(\rho_{\psi})\) is obtained by taking the initial state \(\nu_{\mathrm{S}}=\rho_{\psi}\). It is clear from (69) that the system-ancilla unitary evolution operator \(e^{-\mathrm{i}\tau H_{\mathrm{g},V}}\) is periodic in time with period \(2\pi\). Thus the quantum evolution \(\{\mathcal{M}_{\mathrm{g},\tau}\}_{\tau\geq 0}\) is also periodic, more precisely, it satisfies (6). As a consequence, this evolution is strongly non-Markovian. A quantitative study of this non-markovianity and a Kraus decomposition of \(\mathcal{M}_{\mathrm{g},\tau}\) will be presented in a forthcoming paper [33]. In conclusion, the Bures geodesics are not only mathematical objects but correspond to evolutions of the system coupled to an ancilla. This opens the route to the simulation of these geodesics on quantum computers and to their experimental observation. Examples of quantum circuits simulating some geodesics have been given above (see Fig. 1). ## VI Geodesics in quantum metrology In this section we study quantum parameter estimation for a quantum system coupled to an ancilla when measurements are not possible on the ancilla. We show that the Bures geodesics features optimal state transformations for estimating a parameter in a such situation. ### Quantum Fisher information and Bures metric As explained in Sec. II.3, the best precision in the estimation of an unkown parameter \(x\) using quantum probes in the output states \(\rho_{x}\) is given by the inverse square root of the QFI, see (10). The latter is by definition the maximum over all POVMs \(\{M_{j}\}\) of the CFI (11) with probabilities \(p_{j|x}=\mathrm{tr}\,M_{j}\rho_{x}\). It is given by [30; 31] \[\mathcal{F}_{Q}(x,\{\rho_{x}\}_{x\in X})=\mathrm{tr}[\rho_{x}(L_{x})^{2}]\;, \tag{71}\] where \(L_{x}\) is a self-adjoint operator satisfying \[\frac{1}{2}\{L_{x}\,,\,\rho_{x}\}=\dot{\rho}_{x} \tag{72}\] with \(\dot{\rho}_{x}=\partial_{x}\rho_{x}\). The operator \(L_{x}\) is called the symmetric logarithmic derivative of \(\rho_{x}\). A POVM \(\{M_{j}\}\) maximizes the CFI (i.e., \(\mathcal{F}_{\mathrm{clas}}(x,\{p_{j|x}\})_{x\in X}\)) \(=\mathcal{F}_{Q}(x,\{\rho_{x}\}_{x\in X})\)) if and only if \(M_{j}^{1/2}\rho_{x}^{1/2}=c_{j}M_{j}^{1/2}L_{x}\rho_{x}^{1/2}\) for any \(j\), with \(c_{j}\in\mathbb{R}\)[31]. If \(\rho_{x}\) is invertible, this is equivalent to \(M_{j}^{1/2}=c_{j}L_{x}M_{j}^{1/2}\). Thus, the optimal POVMs \(\{M_{j}^{\rm opt}\}\) maximizing the CFI are such that for any \(j\), the support of \(M_{j}^{\rm opt}\) is contained in an eigenspace of \(L_{x}\). In particular, the von Neumann measurement given by the spectral projectors of \(L_{x}\) is optimal. In general, the optimal measurements \(\{M_{j}^{\rm opt}\}\) depend on the estimated parameter \(x\), as \(L_{x}\) depends on \(x\). It is known that for invertible states \(\rho_{x}\in{\cal E}_{\cal H}^{\rm inv}\) the QFI is equal to the Bures metric (26) up to a factor of four [4; 31], \[{\cal F}_{Q}(x,\{\rho_{x}\}_{x\in X})=4(g_{\rm B})_{\rho_{x}}(\dot{\rho}_{x}, \dot{\rho}_{x})\;. \tag{73}\] We point out that the right-hand sides of (71) and (73) are not always equal for non-invertible states \(\rho_{x}\). In fact, it is easy to show that the trace in (71) is given (up to a factor of four) by the right-hand side of (26) with \(\dot{\sigma}=\dot{\rho}\) and a sum running over all indices \(k,l\) such that \(p_{k}+p_{l}>0\). 
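Eqs. (71)–(73) translate directly into code. The sketch below is a minimal illustration (the one-qubit family is a placeholder): it computes the symmetric logarithmic derivative in an eigenbasis of \(\rho_{x}\) and evaluates the QFI, which for invertible states can be compared with four times the Bures metric (26).

```python
import numpy as np
from scipy.linalg import expm

def sld_and_qfi(rho, drho):
    """SLD L solving {L, rho}/2 = drho, Eq. (72), and QFI tr[rho L^2], Eq. (71); rho > 0 assumed."""
    p, U = np.linalg.eigh(rho)
    L = 2 * (U.conj().T @ drho @ U) / (p[:, None] + p[None, :])
    L = U @ L @ U.conj().T
    return L, np.real(np.trace(rho @ L @ L))

# placeholder one-qubit family: unitary rotation mixed with white noise (keeps rho_x invertible)
K = np.array([[1.0, 0.3], [0.3, -1.0]])
rho0 = np.array([[0.7, 0.25], [0.25, 0.3]], dtype=complex)
rho_x = lambda x: 0.9 * expm(-1j * x * K) @ rho0 @ expm(1j * x * K) + 0.05 * np.eye(2)

x, h = 0.4, 1e-5
drho = (rho_x(x + h) - rho_x(x - h)) / (2 * h)        # numerical derivative of rho_x
L, FQ = sld_and_qfi(rho_x(x), drho)
print(FQ)     # equals 4 g_B(drho, drho) for invertible states, Eq. (73)
```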
On the other hand, as shown in [36], the Bures metric (which is defined in terms of the infinitesimal distance \({\rm d}s^{2}\), see (21)) is given by the same expression as the QFI (71) plus an additional term \(2\sum_{j,\,p_{j|x}=0}\partial_{x}^{2}p_{j|x}\) involving the second derivatives of the vanishing eigenvalues of \(\rho_{x}\) (note that this term is absent when \(\rho_{x}>0\)). As a consequence, the QFI (71) is discontinuous at values \(x\) at which \(\text{rank}(\rho_{x})\) is discontinuous, while the Bures metric remains continuous. This can be illustrated by the following example [36]. Let \(\rho_{x}=\sum_{j}p_{j|x}|j\rangle\langle j|\) with \(\{|j\rangle\}\) a fixed orthonormal basis. Then the QFI (71) equals the CFI (11) and has a jump for trajectories \(x\mapsto\rho_{x}\) bouncing at \(x=x_{0}\) on the boundary \(\partial{\cal E}_{\cal H}\). More precisely, each eigenvalue with a minimum at \(x_{0}\) such that \(p_{j|x_{0}}=(\partial_{x}p_{j|x})_{x_{0}}=0\) contributes to the jump amplitude by \(-\lim_{x\to x_{0}}(\partial_{x}p_{j|x})^{2}/p_{j|x}=-2(\partial_{x}^{2}p_{j|x})_{x_{0}}\). On the other hand, \((g_{\rm B})_{\rho_{x}}(\dot{\rho}_{x},\dot{\rho}_{x})\) is continuous at \(x_{0}\) due to the aforementioned additional term canceling the discontinuity.

### Pythagorean theorem and variational formula for the QFI

According to Stinespring's theorem [32], the action of an arbitrary quantum channel \({\cal M}_{x}\) on a state \(\rho_{\rm in}\) can be obtained by coupling the system to an ancilla and letting the composite system evolve unitarily, assuming an initial system-ancilla product state \(\rho_{\rm in}\otimes|\alpha_{0}\rangle\langle\alpha_{0}|\). We suppose in what follows that this composite system is in a pure state \(|\Psi_{x}\rangle\) undergoing an \(x\)-dependent unitary evolution of the form \[|\Psi_{x}\rangle=e^{-{\rm i}xH}|\Psi_{\rm in}\rangle\;, \tag{74}\] where \(H\) is some Hamiltonian on \({\cal K}={\cal H}\otimes{\cal H}_{\sf A}\) and the input system-ancilla state \(|\Psi_{\rm in}\rangle\) may be entangled or not. The output states of the system are given by \[\rho_{x}={\cal M}_{x}(\rho_{\rm in})={\rm tr}_{\sf A}\,|\Psi_{x}\rangle\langle\Psi_{x}|\;. \tag{75}\] For concreteness, we assume that \(x\) belongs to an interval \(X\) containing \(0\). It is also convenient to replace \(H\) by \(\Delta H=H-\langle H\rangle_{\Psi_{\rm in}}\mathds{1}\) in (74). This amounts to multiplying \(|\Psi_{x}\rangle\) by an irrelevant phase factor \(e^{{\rm i}x\langle H\rangle_{\Psi_{\rm in}}}\). The QFI of the composite system reads (see (12)) \[{\cal F}_{Q}(\{|\Psi_{x}\rangle\}_{x\in X})=4\|\dot{\Psi}_{x}\|^{2}=4\langle(\Delta H)^{2}\rangle_{\Psi_{\rm in}}\;, \tag{76}\] where we have used that \(\langle(\Delta H)^{2}\rangle_{\Psi_{x}}\) is independent of \(x\). As explained in Sec. II.3, one can decompose the tangent vector \(|\dot{\Psi}_{x}\rangle\) into its horizontal and vertical parts, \[|\dot{\Psi}_{x}\rangle=-{\rm i}\Delta H|\Psi_{x}\rangle=|\dot{\Psi}_{x}^{\rm h}\rangle-{\rm i}\,\mathds{1}\otimes B_{x}^{\rm h}|\Psi_{x}\rangle \tag{77}\] with \(|\dot{\Psi}_{x}^{\rm h}\rangle\in\mathfrak{h}_{|\Psi_{x}\rangle}\) and \(B_{x}^{\rm h}\) a self-adjoint operator on \({\cal H}_{\sf A}\). Here, we have used the form (38) of the vertical subspace \(\mathfrak{v}_{|\Psi_{x}\rangle}\). Note that \(\langle\mathds{1}\otimes B_{x}^{\rm h}\rangle_{\Psi_{x}}=0\) since \(\langle\Psi_{x}|\dot{\Psi}_{x}\rangle=\langle\Psi_{x}|\dot{\Psi}_{x}^{\rm h}\rangle=0\).
As \(\rho_{x}=\pi(|\Psi_{x}\rangle)\) and \(\mathfrak{v}_{|\Psi\rangle}=\ker({\rm d}\pi|_{|\Psi_{x}\rangle})\) by definition, one has \(\dot{\rho}_{x}={\rm d}\pi|_{|\Psi_{x}\rangle}(|\dot{\Psi}_{x}^{\rm h}\rangle)\). Thanks to (43) and (73), the QFI is given by \[{\cal F}_{Q}(x,\{\rho_{x}\}_{x\in X})=4(g_{\rm B})_{\rho_{x}}\big{(}{ \rm d}\pi|_{|\Psi_{x}\rangle}(|\dot{\Psi}_{x}^{\rm h}\rangle),{\rm d}\pi|_{| \Psi_{x}\rangle}(|\dot{\Psi}_{x}^{\rm h}\rangle)\big{)}\] \[\quad=4\|\dot{\Psi}_{x}^{\rm h}\|^{2}=4\big{(}\|\dot{\Psi}_{x}\|^{2} -\|\mathds{1}\otimes B_{x}^{\rm h}|\Psi_{x}\rangle\|^{2}\big{)}\;, \tag{78}\] where the third equality follows from (77), the orthogonality of the horizontal and vertical subspaces, and the Pythagorean theorem. Using (12) and (38) one sees that (78) is equivalent to the formula (19) of Sec. II.3. It is instructive to derive from (78) the variational formula for the QFI from Ref. [21]. Consider the purifications of \(\rho_{x+\delta x}\) given by \(|\Psi_{\delta x}^{B_{x}}\rangle=\mathds{1}\otimes e^{{\rm i}\delta xB_{x}}| \Psi_{x+\delta x}\rangle\), where \(B_{x}\) is a self-adjoint operator on \({\cal H}_{\sf A}\). The tangent vector at \(\delta x=0\) is \(|\dot{\Psi}_{0}^{B_{x}}\rangle=|\dot{\Psi}_{x}\rangle+\mathds{1}\otimes B_{x} |\Psi_{x}\rangle\). Applying (77) and Pythagorean's theorem again, one has \[\|\dot{\Psi}_{0}^{B_{x}}\|^{2} = \big{\langle}(\Delta H-\mathds{1}\otimes B_{x})^{2}\big{\rangle}_{ \Psi_{x}}\] \[= \big{\|}\dot{\Psi}_{x}^{\rm h}\big{\|}^{2}+\big{\|}\mathds{1} \otimes(B_{x}-B_{x}^{\rm h})|\dot{\Psi}_{x}\big{\|}^{2}\geq\big{\|}\dot{\Psi}_{x }^{\rm h}\big{\|}^{2}\;.\] Hence the minimum of \(\|\dot{\Psi}_{x}^{B_{x}}\|^{2}\) over all \(B_{x}\)'s is equal to \(\|\dot{\Psi}_{x}^{\rm h}\|^{2}\) and the minimum is achieved for \(B_{x}=B_{x}^{\rm h}\). One deduces from (78) that [21] \[{\cal F}_{Q}(x,\{\rho_{x}\}_{x\in X})=4\min_{B_{x}=B_{x}^{\rm h}}\big{\langle}( \Delta H-\mathds{1}\otimes B_{x})^{2}\big{\rangle}_{\Psi_{x}}\;\;. \tag{80}\] Eq. (78) tells us that the QFI of the system is equal to the QFI (76) of the composite system minus a non-negative quantity \(4\|\mathds{1}\otimes B_{x}^{\rm h}|\Psi_{x}\rangle\|^{2}=4\langle(\mathds{1} \otimes B_{x}^{\rm h})^{2}\rangle_{\Psi_{x}}\) that can be interpreted as the amount of information on \(x\) in the ancilla. Eq. (80) provides a variational formula for the QFI. Both expressions (78) and (80) have been derived in [21] by using another method. We see here that they have nice geometrical interpretations in the framework of Riemannian submersions, being simple consequences of the Pythagorean theorem. ### Optimal precision in parameter estimation in open quantum systems One deduces from (76) and (78) that the QFI of the probe is equal to the QFI of the composite system when the tangent vector \(|\hat{\Psi}_{x}\rangle\) is horizontal (i.e., \(B_{x}^{\rm h}=0\)). In such a case, there is no information on the parameter \(x\) in the ancilla: joint measurements on the probe and ancilla do not lead to a better precision in the estimation than local measurements on the probe. However, in general the state transformation does not conserve the horizontality of the tangent vector. Let us assume that \(|\hat{\Psi}_{\rm in}\rangle\in\mathfrak{h}_{|\Psi_{\rm in}\rangle}\), so that the probe QFI is equal to \(4(\langle\Delta H\rangle^{2})_{\Psi_{\rm in}}\) for \(x=0\). 
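For a generic (non-horizontal) input, the loss of information to the ancilla can be made quantitative with a few lines of code. The sketch below is a toy example with a randomly drawn probe-ancilla Hamiltonian and input state, both placeholders: it evaluates the probe QFI from the reduced states (75) and compares it with the constant total QFI (76). In accordance with (78), the probe QFI never exceeds \(4\langle(\Delta H)^{2}\rangle_{\Psi_{\rm in}}\) and generically varies with \(x\), which is the situation discussed next.

```python
import numpy as np
from scipy.linalg import expm

def probe_qfi(rho, drho, eps=1e-12):
    # QFI of Eq. (71) with the sum restricted to p_k + p_l > 0
    p, U = np.linalg.eigh(rho)
    D = U.conj().T @ drho @ U
    W = p[:, None] + p[None, :]
    return np.real(np.sum(2 * np.abs(D) ** 2 / np.where(W > eps, W, np.inf)))

rng = np.random.default_rng(5)
n = nA = 2
A = rng.normal(size=(n * nA, n * nA)) + 1j * rng.normal(size=(n * nA, n * nA))
H = (A + A.conj().T) / 2                                    # placeholder probe-ancilla Hamiltonian
Psi_in = rng.normal(size=n * nA) + 1j * rng.normal(size=n * nA)
Psi_in /= np.linalg.norm(Psi_in)

dH = H - np.real(np.vdot(Psi_in, H @ Psi_in)) * np.eye(n * nA)
F_total = 4 * np.real(np.vdot(Psi_in, dH @ dH @ Psi_in))    # Eq. (76)

def rho_x(x):
    Psi = (expm(-1j * x * H) @ Psi_in).reshape(n, nA)
    return Psi @ Psi.conj().T                               # Eq. (75)

h = 1e-5
for x in (0.0, 0.3, 0.6):
    drho = (rho_x(x + h) - rho_x(x - h)) / (2 * h)
    print(x, probe_qfi(rho_x(x), drho), "<=", F_total)
```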
While the QFI of the composite system remains the same for all values of \(x\), the probe QFI (78) depends on \(x\) and is strictly smaller than \(4(\langle\Delta H\rangle^{2})_{\Psi_{\rm in}}\) for nonzero values of \(x\) at which \(|\hat{\Psi}_{x}\rangle\notin\mathfrak{h}_{|\Psi_{x}\rangle}\), implying a larger error \((\Delta x)_{\rm best}\) for \(x\neq 0\) than for \(x=0\). It is thus of interest to look for situations for which the tangent vector remains horizontal for all values of the parameter. In such cases, the minimal error \((\Delta x)_{\rm best}\) is \(x\)-independent and equal to the minimal error one would obtain from joint measurements on the probe and ancilla. In the following, we assume that both the system-ancilla coupling Hamiltonian \(H\) and the input state \(|\Psi_{\rm in}\rangle\) can be engineered at will. We look for Hamiltonians \(H\) and input states \(|\Psi_{\rm in}\rangle\) satisfying: 1. the horizontality condition \(|\hat{\Psi}_{x}\rangle\in\mathfrak{h}_{|\Psi_{x}\rangle}\) holds for all \(x\in X\); 2. for a fixed Hamiltonian \(H\) satisfying (I), \(\|\hat{\Psi}_{x}\|^{2}\) is maximum. If conditions (I) and (II) are satisfied then the QFI of the system is constant and maximal for all values of \(x\), \[\mathcal{F}_{Q}(x,\{\rho_{x}\}_{x\in X})=4\langle(\Delta H)^{2}\rangle_{\Psi_ {\rm in}}\;. \tag{81}\] We shall assume that the highest and smallest eigenvalues of \(H\), \(\epsilon_{\rm max}\) and \(\epsilon_{\rm min}\), are non-degenerated, and set \[\Delta=\frac{1}{2}(\epsilon_{\rm max}-\epsilon_{\rm min})\;. \tag{82}\] Our first result is: **Theorem 3**.: _Conditions_ (I) _and_ (II) _are fulfilled if and only if \(|\hat{\Psi}_{\rm in}\rangle=-{\rm i}(\Delta H)|\Psi_{\rm in}\rangle\in \mathfrak{h}_{|\Psi_{\rm in}\rangle}\) and one of the two equivalent conditions holds_ 1. \(|\Psi_{x}\rangle=e^{-{\rm i}(x\Delta)H_{\rm g,V}}|\Psi_{\rm in}\rangle\;,\;x \in X\)_;_ 2. \(\Delta^{-1}\,\Pi(\Delta H)\Pi=H_{\rm g,V}\)_,_ _where \(H_{\rm g,V}\) is the geodesic Hamiltonian_ (3) _with \(|\Psi\rangle\) and \(|\hat{\Psi}\rangle\) replaced by \(|\Psi_{\rm in}\rangle\) and \(|\hat{\Psi}_{\rm in}\rangle/|\hat{\Psi}_{\rm in}\|\), respectively, and \(\Pi\) is the projector onto the sum of the eigenspaces of \(H\) for the eigenvalues \(\epsilon_{\rm max}\) and \(\epsilon_{\rm min}\). Note that_ (a) _implies_ \[\rho_{x}=\gamma_{{\rm g},V}(x\Delta)\;,\;x\in X \tag{83}\] _where \(\gamma_{{\rm g},V}\) is a geodesic starting at \(\rho_{\rm in}={\rm tr}_{\rm A}|\Psi_{\rm in}\rangle\langle\Psi_{\rm in}|\)._ Proof.: Assume that (I) and (II) are fulfilled. It has been pointed out in Sec. II.3 that the states \(|\Psi_{\rm in}\rangle\) maximizing the variance \(\langle(\Delta H)^{2}\rangle_{\Psi_{\rm in}}\) are the superpositions \[|\Psi_{\rm in}\rangle=\frac{1}{\sqrt{2}}\Big{(}|\epsilon_{\rm max}\rangle+e^{ {\rm i}\varphi}|\epsilon_{\rm min}\rangle\Big{)} \tag{84}\] with \(|\epsilon_{\rm max}\rangle\) and \(|\epsilon_{\rm min}\rangle\) two eigenstates of \(H\) associated to \(\epsilon_{\rm max}\) and \(\epsilon_{\rm min}\) and \(\varphi\in\mathbb{R}\). For such input states one has \[|\hat{\Psi}_{\rm in}\rangle=-{\rm i}(\Delta H)|\Psi_{\rm in}\rangle=-{\rm i} \frac{\Delta}{\sqrt{2}}\Big{(}|\epsilon_{\rm max}\rangle-e^{{\rm i}\varphi}| \epsilon_{\rm min}\rangle\Big{)}\;. 
Furthermore, \[|\Psi_{x}\rangle = \frac{1}{\sqrt{2}}\Big{(}e^{-{\rm i}x\Delta}|\epsilon_{\rm max}\rangle+e^{{\rm i}(x\Delta+\varphi)}|\epsilon_{\rm min}\rangle\Big{)} \tag{86}\] \[= \cos(x\Delta)|\Psi_{\rm in}\rangle+\frac{1}{\Delta}\sin(x\Delta)|\dot{\Psi}_{\rm in}\rangle\;.\] Since \(\langle\Psi_{\rm in}|\dot{\Psi}_{\rm in}\rangle=0\) and \(\|\dot{\Psi}_{\rm in}\|^{2}=\langle(\Delta H)^{2}\rangle_{\Psi_{\rm in}}=\Delta^{2}\), the last expression in (86) is nothing but the arc of great circle (44) at time \(\tau=x\Delta\), i.e., \(|\Psi_{x}\rangle=|\Psi_{\rm g}(x\Delta)\rangle\) with \(|\Psi_{\rm g}(\tau)\rangle\) a geodesic on the unit sphere \(\mathcal{S}_{\mathcal{K}}\) (see Sec. IV.1). By condition (I) this geodesic is a horizontal geodesic, \(|\Psi_{\rm g}(\tau)\rangle=|\Psi_{\rm g,V}(\tau)\rangle\). In view also of (67), one deduces that (a) is true. Now, plugging (84) and (85) into the expression (3) of the geodesic Hamiltonian yields \[H_{\rm g,V}=|\epsilon_{\rm max}\rangle\langle\epsilon_{\rm max}|-|\epsilon_{\rm min}\rangle\langle\epsilon_{\rm min}|=\Pi(\Delta H)\Pi/\Delta\;. \tag{87}\] This shows that (I) & (II) \(\Rightarrow\) (a) \(\Rightarrow\) (b). Reciprocally, assume that \(|\dot{\Psi}_{\rm in}\rangle\in\mathfrak{h}_{|\Psi_{\rm in}\rangle}\) and that (b) is true. Since \(\epsilon_{\rm max}\) and \(\epsilon_{\rm min}\) are non-degenerate, the equality in (b) can be rewritten as \[\big{(}\epsilon_{\rm max}-\langle H\rangle_{\Psi_{\rm in}}\big{)}|\epsilon_{\rm max}\rangle\langle\epsilon_{\rm max}|-\big{(}\langle H\rangle_{\Psi_{\rm in}}-\epsilon_{\rm min}\big{)}|\epsilon_{\rm min}\rangle\langle\epsilon_{\rm min}| = \Delta\big{(}|\epsilon_{+}\rangle\langle\epsilon_{+}|-|\epsilon_{-}\rangle\langle\epsilon_{-}|\big{)}\;, \tag{88}\] where we have set \[|\epsilon_{\pm}\rangle=\frac{1}{\sqrt{2}}\bigg{(}|\Psi_{\rm in}\rangle\pm{\rm i}\frac{|\dot{\Psi}_{\rm in}\rangle}{\|\dot{\Psi}_{\rm in}\|}\bigg{)}\;. \tag{89}\] Since \(\langle\Psi_{\rm in}|\dot{\Psi}_{\rm in}\rangle=0\), the vectors \(|\epsilon_{\pm}\rangle\) are normalized and orthogonal. Thus (88) implies \(|\epsilon_{+}\rangle=e^{-{\rm i}\varphi_{+}}|\epsilon_{\rm max}\rangle\) and \(|\epsilon_{-}\rangle=e^{-{\rm i}\varphi_{-}}|\epsilon_{\rm min}\rangle\), where \(\varphi_{\pm}\) are real phases. It follows that \(|\Psi_{\rm in}\rangle\) is given by (84) up to an irrelevant phase factor, with \(\varphi=\varphi_{+}-\varphi_{-}\). One has \[|\Psi_{x}\rangle = e^{-{\rm i}x\Delta H}|\Psi_{\rm in}\rangle=e^{-{\rm i}x\,\Pi(\Delta H)\Pi}|\Psi_{\rm in}\rangle \tag{90}\] \[= e^{-{\rm i}(x\Delta)H_{\rm g,V}}|\Psi_{\rm in}\rangle=|\Psi_{\rm g,V}(x\Delta)\rangle\] (the second and last equalities follow from (84) and (67), respectively). Our hypothesis \(|\dot{\Psi}_{\rm in}\rangle\in\mathfrak{h}_{|\Psi_{\rm in}\rangle}\) implies that \(|\Psi_{\rm g,V}(\tau)\rangle\) is a horizontal geodesic (see Property (i) of Sec. III.3). Hence condition (I) is fulfilled. Furthermore, by (84) again, \(\|\dot{\Psi}_{\rm in}\|^{2}=\langle(\Delta H)^{2}\rangle_{\Psi_{\rm in}}=\Delta^{2}\) is the maximal squared fluctuation of \(H\). Thus condition (II) also holds. We have shown that (b) \(\Rightarrow\) (a) \(\Rightarrow\) (I) & (II). Finally, by projecting the equality (a) onto \(\mathcal{E}_{\mathcal{H}}\) one obtains (83). It is worth noting that the probe-ancilla input state \(|\Psi_{\text{in}}\rangle\) is not necessarily entangled.
To see this, let us vary the phase \(\varphi\) in (84) and observe that \(|\Psi_{\text{in}}\rangle\) then moves on the horizontal geodesic as \(|\Psi_{\text{in}}^{0}\rangle\rightarrow|\Psi_{\text{in}}^{\varphi}\rangle=e^{- \mathrm{i}\varphi H_{\text{g.}V}/2}|\Psi_{\text{in}}^{0}\rangle\) up to an irrelevant phase factor, where \(|\Psi_{\text{in}}^{0}\rangle\) is the input state corresponding to \(\varphi=0\). Recall from Sec. IV.2 that all geodesics \(\gamma_{\text{g.}V}\) intersect the boundary \(\partial\mathcal{E}_{\mathcal{H}}\) of quantum states. Therefore, if e.g. the probe is a qubit then \(\varphi\) can be chosen such that \(\rho_{\text{in}}=|\psi_{\text{in}}\rangle\langle\psi_{\text{in}}|\) is a pure state, that is, \(|\Psi_{\text{in}}\rangle=|\psi_{\text{in}}\rangle|\alpha_{\text{in}}\rangle\) is a product state. In other words, albeit the superposition (84) is in general entangled, an appropriate phase choice makes it separable. Since preparing a probe and ancilla in an entangled state is challenging experimentally, this is a relevant observation. This occurs for higher-dimensional probes as well for those geodesics with observables \(M_{\rho\sigma,V}\) in (58) having two eigenvalues of multiplicities \(n-1\) and \(1\). In fact, it is shown in Appendix B that this condition ensures that one of the intersection of \(\gamma_{\text{g.}V}\) with \(\partial\mathcal{E}_{\mathcal{H}}\) is a pure state. Let us stress that the aforementioned separability refers to a disentanglement between the probe and ancilla; if the probe consists of \(N\) qubits and \(H\) acts independently on each qubit, we shall see in Sec. VI.5 below that \(|\Psi_{\text{in}}\rangle\) has maximal entanglement between the probe qubits. The next theorem characterizes all system-ancilla Hamiltonians \(H\) generating horizontal geodesics, that is, coinciding (up to a numerical factor) with a geodesic Hamiltonian \(H_{\text{g},V}\) in a two-dimensional subspace. It shows that such Hamiltonians have two eigenvectors related to each other by a local unitary acting on the system. **Theorem 4**.: _Let \(|\epsilon_{1}\rangle\) and \(|\epsilon_{2}\rangle\) be two eigenstates of \(H\) with distinct eigenvalues \(\epsilon_{1}\) and \(\epsilon_{2}\). If_ \[|\Psi_{\text{in}}\rangle=\frac{1}{\sqrt{2}}\Big{(}|\epsilon_{1}\rangle+e^{ \mathrm{i}\varphi}|\epsilon_{2}\rangle\Big{)} \tag{91}\] _then the unitary transformation \(|\Psi_{x}\rangle=e^{-\mathrm{i}x\Delta H}|\Psi_{\text{in}}\rangle\) is a horizontal geodesic if and only if \(|\epsilon_{2}\rangle=U\otimes\mathds{1}_{\text{A}}|\epsilon_{1}\rangle\) with \(U\) a local unitary acting on the system. In such a case \(|\Psi_{x}\rangle=|\Psi_{\text{g.}V}(x\Delta)\rangle\) with \(\Delta=(\epsilon_{1}-\epsilon_{2})/2\). In particular, conditions (I) and (II) hold if and only if \(|\Psi_{\text{in}}\rangle\) is given by (84) and \(|\epsilon_{\text{min}}\rangle=U\otimes\mathds{1}|\epsilon_{\text{max}}\rangle\)._ Proof.: One deduces from (91) and \(|\Psi_{x}\rangle=e^{-\mathrm{i}x\Delta H}|\Psi_{\text{in}}\rangle\) that \(|\dot{\Psi}_{\text{in}}\rangle\) is given by (85) upon substituting \(|\epsilon_{\text{max}}\rangle\) and \(|\epsilon_{\text{min}}\rangle\) by \(|\epsilon_{1}\rangle\) and \(|\epsilon_{2}\rangle\). 
The horizontality condition \(|\dot{\Psi}_{\text{in}}\rangle=H_{\text{S}}\otimes\mathds{1}_{\text{A}}|\Psi _{\text{in}}\rangle\) for some self-adjoint operator \(H_{\text{S}}\) such that \(\langle H_{\text{S}}\otimes\mathds{1}_{\text{A}}\rangle_{\text{u}_{\text{in}}}=0\) can be rewritten as \[|\epsilon_{2}\rangle=e^{-\mathrm{i}\varphi}\frac{\Delta-\mathrm{i}H_{\text{S }}}{\Delta+\mathrm{i}H_{\text{S}}}\otimes\mathds{1}_{\text{A}}|\epsilon_{1} \rangle=U\otimes\mathds{1}_{\text{A}}|\epsilon_{1}\rangle\, \tag{92}\] where \(U\) is a unitary operator acting on the probe. Reciprocally, if \(|\epsilon_{2}\rangle=U\otimes\mathds{1}_{\text{A}}|\epsilon_{1}\rangle\) then one finds \[|\dot{\Psi}_{\text{in}}\rangle=-\mathrm{i}\Delta(1-e^{\mathrm{i}\varphi}\,U) (1+e^{\mathrm{i}\varphi}\,U)^{-1}\otimes\mathds{1}_{\text{A}}|\Psi_{\text{in} }\rangle\, \tag{93}\] where it is assumed that \(-e^{\mathrm{i}\varphi}\) is not an eigenvalue of \(U\) (in such a way that \(1+e^{\mathrm{i}\varphi}U\) is invertible). It is easy to show that the local operator in the right-hand side of (93) is self-adjoint. Hence \(|\dot{\Psi}_{\text{in}}\rangle\in\mathfrak{h}_{|\Psi_{\text{in}}\rangle}\). ### Optimal measurements We now turn to the problem of determining the optimal measurement(s) on the probe maximizing the CFI. As explained in Sec. VI.1, these measurements are given in terms of the symmetric logarithmic derivative \(L_{x}\) of the output states \(\rho_{x}=\gamma_{\text{g.}V}(x\Delta)\). Let us fix a state \(\sigma\) on \(\gamma_{\text{g.}V}\) such that \(\rho_{x}\) belongs to the geodesic arc between \(\rho_{\text{in}}\) and \(\sigma\). Since \(\gamma_{\text{g.}V}\) is the same as the geodesic starting at \(\rho_{x}\) and passing through \(\sigma\) translated in time by \(-x\Delta\) (see Appendix A for an explicit proof), by differentiating the latter geodesic at \(\tau=0\) one gets the tangent vector of \(\gamma_{\text{g.}V}\) at \(\tau_{x}=x\Delta\). Making the substitutions \(\rho\hookrightarrow\rho_{x}\) and \(\theta_{V}\hookrightarrow\theta_{V}-\tau_{x}\) in (56), this gives \[\dot{\gamma}_{\text{g.}V}(\tau_{x})=\big{\{}\dot{X}_{\rho_{x}\sigma,V}\,,\,\rho_ {x}\big{\}} \tag{94}\] with \[\dot{X}_{\rho_{x}\sigma,V}=\frac{1}{\sin(\theta_{V}-\tau_{x})}\Big{(}M_{\rho_{x }\sigma,V}-\cos(\theta_{V}-\tau_{x})\mathds{1}\Big{)}. \tag{95}\] Plugging \(\dot{\rho}_{x}=\Delta\,\dot{\gamma}_{\text{g.}V}(x\Delta)\) into (72) and using (94) yield \[L_{x}=(2\Delta)\dot{X}_{\rho_{x}\sigma,V}. \tag{96}\] In view of (95) and (96), the eigenprojectors of \(L_{x}\) are the eigenprojectors of \(M_{\rho_{x}\sigma,V}\). These eigenprojectors, denoted hereafter by \(P_{i,V}\), are related to the kernels of the intersection states \(\rho_{i}\) of \(\gamma_{\text{g.}V}\) with the boundary of quantum states (see Sec. IV.2 and Appendix B). More precisely, one has \(\ker(\rho_{i})=P_{i,V}\mathcal{H}\). But the \(\rho_{i}\)'s are independent of the state \(\rho_{x}\) on the geodesic \(\gamma_{\text{g.}V}\). Therefore, the eigenprojectors \(P_{i,V}\) do not depend on the estimated parameter \(x\). A more explicit proof that \(P_{i,V}\) only depend on the geodesic \(\gamma_{\text{g.}V}\) is given in Appendix A. 
Since the eigenprojectors of \(L_{x}\) form an optimal POVM, we conclude that **Theorem 5**.: _If conditions (I) and (II) are fulfilled, i.e., for output states given by (83), there exists an \(x\)-independent optimal POVM \(\{M_{i}^{\text{opt}}\}\) given by the von Neumann measurement with projectors \(M_{i}^{\text{opt}}=P_{i,V}\) onto the kernels of the intersection states \(\rho_{i}\) of \(\gamma_{\text{g},V}\) with \(\partial\mathcal{E}_{\mathcal{H}}\)._ As we have seen in Sec. IV.2, the number of eigenprojectors \(P_{i,V}\) is equal to the number \(q_{V}\) of distinct eigenvalues of \(M_{\rho\sigma,V}\). In particular, if \(\gamma_{\text{g},V}\) intersects \(\partial\mathcal{E}_{\mathcal{H}}\) at a pure state, then the optimal measurement is a binary measurement consisting of \(q_{V}=2\) projectors, the first one being of rank \(n-1\) and the other of rank \(1\). More generally, thanks to the argument given in Sec. VI.1, a POVM \(\{M_{j}^{\text{opt}}\}\) is optimal if and only if \(\text{supp}(M_{j}^{\text{opt}})\subset P_{i_{j},V}\mathcal{H}\) for any \(j\), with \(i_{j}\in\{1,\dots,q_{V}\}\).

### Heisenberg limit

We show in this subsection that the estimation error of the open probe undergoing a geodesic evolution can reach the Heisenberg scaling. To this end, we assume that an \(N\)-qubit probe is coupled to \(N\) ancilla qubits \(\mathsf{A}_{1},\ldots,\mathsf{A}_{N}\). The total Hilbert space \(\mathcal{K}=\mathbb{C}^{2^{N}}\otimes(\otimes_{\nu=1}^{N}\mathbb{C}_{\mathsf{A}_{\nu}}^{2})\) has dimension \(2^{2N}\). The \(\nu\)th probe qubit \(\mathsf{S}_{\nu}\) is coupled to the \(\nu\)th ancilla qubit \(\mathsf{A}_{\nu}\) by a Hamiltonian \(H_{\nu}\) having two eigenvectors \(|e_{\nu,\pm}\rangle\) with eigenvalues \(e_{\nu,+}\) and \(e_{\nu,-}<e_{\nu,+}\) satisfying \(|e_{\nu,-}\rangle=U_{\nu}\otimes\mathds{1}_{\mathsf{A}_{\nu}}|e_{\nu,+}\rangle\), where \(U_{\nu}\) is a unitary acting on the \(\nu\)th probe qubit. The total probe-ancilla Hamiltonian reads \[H^{(N)}=\sum_{\nu=1}^{N}H_{\nu}\;, \tag{97}\] where \(H_{\nu}\) acts non-trivially on the \(\nu\)th probe and ancilla qubits only. Then \(H^{(N)}\) has two eigenvectors \(|\epsilon_{\text{max}}\rangle=\otimes_{\nu=1}^{N}|e_{\nu,+}\rangle\) and \(|\epsilon_{\text{min}}\rangle=\otimes_{\nu=1}^{N}|e_{\nu,-}\rangle\) associated with the highest and smallest eigenvalues \[\epsilon_{\text{max}}=\sum_{\nu=1}^{N}e_{\nu,+}\;,\;\epsilon_{\text{min}}=\sum_{\nu=1}^{N}e_{\nu,-}\;. \tag{98}\] Let us consider the multipartite entangled input state \[|\Psi_{\text{in}}^{(N)}\rangle=\frac{1}{\sqrt{2}}\Big{(}\bigotimes_{\nu=1}^{N}|e_{\nu,+}\rangle+e^{\text{i}\varphi}\bigotimes_{\nu=1}^{N}|e_{\nu,-}\rangle\Big{)}\;, \tag{99}\] where entanglement is between the different qubit pairs \(\mathsf{S}_{\nu}\mathsf{A}_{\nu}\).
Since \(|\epsilon_{\text{min}}\rangle=U\otimes\mathds{1}_{\mathsf{A}}|\epsilon_{\text{max}}\rangle\) with \(U=\otimes_{\nu=1}^{N}U_{\nu}\), by Theorem 4 the unitary transformation \[|\Psi_{x}^{(N)}\rangle=e^{-\text{i}xH^{(N)}}|\Psi_{\text{in}}^{(N)}\rangle=\prod_{\nu=1}^{N}e^{-\text{i}xH_{\nu}}|\Psi_{\text{in}}^{(N)}\rangle \tag{100}\] defines a horizontal geodesic on \(\mathcal{S}_{\mathcal{K}}\) and the probe state \(\rho_{x}^{(N)}=\operatorname{tr}_{\mathsf{A}}|\Psi_{x}^{(N)}\rangle\langle\Psi_{x}^{(N)}|\) follows a geodesic on \(\mathcal{E}_{\mathbb{C}^{2^{N}}}\), \[\rho_{x}^{(N)}=\gamma_{\text{g},V}^{(N)}(x\Delta_{N}) \tag{101}\] with \(\Delta_{N}=(\epsilon_{\text{max}}-\epsilon_{\text{min}})/2\). According to (81), the QFI of the probe is given by \[\mathcal{F}_{Q}(\{\rho_{x}^{(N)}\}_{x\in X})=4\Delta_{N}^{2}=\left(\sum_{\nu=1}^{N}\left(e_{\nu,+}-e_{\nu,-}\right)\right)^{2}. \tag{102}\] If the eigenenergies \(e_{\nu,\pm}\) of the Hamiltonians \(H_{\nu}\) are independent of \(\nu\), the QFI scales like \(N^{2}\), implying a minimal error \((\Delta x)_{\text{best}}\sim N_{\text{meas}}^{-1/2}N^{-1}\) having the Heisenberg scaling. Even though an error twice smaller could be obtained by using the ancilla qubits as probes and taking a Hamiltonian given by a sum of \(2N\) Hamiltonians acting on single qubits, this would require measurements on the ancilla qubits. An example of quantum circuit implementing the parameter estimation is shown in Fig. 2. By Theorem 5, an optimal measurement is a joint von Neumann measurement on the \(N\) probe qubits with projectors onto the kernels of the intersection states \(\rho_{i}^{(N)}\) of the geodesic \(\gamma_{\text{g},V}^{(N)}\) with the boundary of the \(N\)-qubit state manifold.
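The \(N^{2}\) scaling in (102) can be illustrated directly. The short sketch below is our own check (the single-pair energies and eigenvectors are arbitrary choices satisfying the assumptions above, and the qubits are ordered pairwise, which does not affect expectation values): it builds the Hamiltonian (97) and the input state (99) for a few values of \(N\) and verifies that \(4\langle(\Delta H^{(N)})^{2}\rangle_{\Psi_{\text{in}}^{(N)}}=\big(\sum_{\nu}(e_{\nu,+}-e_{\nu,-})\big)^{2}\).

```python
# Sketch (illustration only) of the Heisenberg scaling (102):
# 4 <(Delta H^(N))^2>_{Psi_in^(N)} = (sum_nu (e_{nu,+} - e_{nu,-}))^2 grows like N^2.
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

e_plus, e_minus = 1.3, -0.7              # arbitrary single-pair eigenvalues, e_+ > e_-
ket_p = np.kron([1.0, 0.0], [1.0, 0.0])  # |e_+> = |0>_S |0>_A
ket_m = np.kron([0.0, 1.0], [1.0, 0.0])  # |e_-> = |1>_S |0>_A = (sigma_x (x) 1_A)|e_+>
H_pair = (e_plus * np.outer(ket_p, ket_p) + e_minus * np.outer(ket_m, ket_m)
          + 0.5 * (e_plus + e_minus) * (np.eye(4) - np.outer(ket_p, ket_p)
                                        - np.outer(ket_m, ket_m)))

for N in range(1, 5):
    # H^(N) = sum_nu H_nu, each H_nu acting on its own probe-ancilla pair
    H_N = sum(kron_all([H_pair if mu == nu else np.eye(4) for mu in range(N)])
              for nu in range(N))
    psi_in = (kron_all([ket_p] * N) + kron_all([ket_m] * N)) / np.sqrt(2)  # Eq. (99), phi = 0
    mean = psi_in @ H_N @ psi_in
    var = psi_in @ H_N @ H_N @ psi_in - mean**2
    print(N, 4 * var, (N * (e_plus - e_minus))**2)   # the two columns coincide and grow like N^2
```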
## VII Conclusions and perspectives

In this work we have studied the geodesics on the manifold of quantum states for the Bures distance. We have determined these geodesics and have shown that they are physical, as they correspond to quantum evolutions of an open system coupled to an ancilla. The corresponding system-ancilla coupling Hamiltonian has been derived explicitly. Examples of quantum circuits implementing some geodesics have been given. Furthermore, we have proven that the geodesics are optimal for single-parameter estimation in open quantum systems, where the unknown parameter is a phase shift proportional to the time parametrizing the geodesic and it is assumed that measurements cannot be performed on the ancilla. These results open the route to experimental observations of geodesics in multi-qubit quantum information platforms offering a high degree of control on the Hamiltonian. Such experimental realizations would be of interest for high-precision estimations in situations where only a part of these qubits can be measured. The methods developed in this paper, which are borrowed from Riemannian geometry, are expected to be applicable as well to non-ideal quantum metrology setups. For instance, when the coupling with the environment provokes energy losses and dephasing, additional couplings with engineered reservoirs could be tailored to modify the state transformation so that it becomes closer to a geodesic. This would increase the precision of the estimation by reducing the amount of information on the parameter lost in the environment. Another potential field of application of the Bures geodesics is incoherent quantum control. In order to efficiently steer a quantum mixed state \(\rho\) to a given desired state \(\sigma\), an idea is to adjust the control parameters in such a way as to follow as closely as possible the shortest geodesic joining \(\rho\) and \(\sigma\) [37]. Among other directions worth exploring is the relation between the geodesics and the quantum speed limit in open systems. The present work can be contextualized as belonging to an emerging broader research topic. Information geometry has been developed in the last decades by Amari and coworkers [38; 39] in an attempt to use concepts and methods from Riemannian geometry in information theory. It has been successfully applied to many fields, such as machine learning, signal processing, optimization, statistics, and neurosciences. The application of this approach to quantum information processing remains largely unexplored. It will hopefully open new challenging perspectives.

**Acknowledgments** The author acknowledges support from the ANID Fondecyt Grant No 1190134 and is grateful to Fethi Mahmoudi and Gerard Besson for useful discussions.

## Appendix A Properties of the geodesic operators \(M_{\rho\sigma,V}\)

The operators \(M_{\rho\sigma,V}\) in the expressions (56) and (57) of the Bures geodesics have the following properties: (a) \(M_{\rho\sigma,V}\,\rho\,M_{\rho\sigma,V}=\sigma\); (b) \(\mathrm{tr}[\rho M_{\rho\sigma,V}]=\cos\theta_{V},\ \mathrm{tr}[\rho M_{\rho\sigma,V}^{2}]=1\); (c) \(M_{\sigma\rho,\widetilde{V}}=M_{\rho\sigma,V}^{-1}\) and \(M_{\rho\sigma,V}\,\rho=\sigma M_{\sigma\rho,\widetilde{V}}\), where \(\widetilde{V}=U_{\sigma\rho}VU_{\sigma\rho}^{\dagger}\). Properties (a) and (b) follow from (58) and (55). The first identity in (c) follows from the equality \(M_{\rho\sigma,V}=\sqrt{\sigma}U_{\sigma\rho}V\rho^{-1/2}\) (compare (51) and (58)) and the fact that the unitary \(U_{\rho\sigma}\) in the polar decomposition of \(\sqrt{\rho}\sqrt{\sigma}\) is equal to the adjoint of \(U_{\sigma\rho}\) (recall that if \(O=U|O|\) then \(O^{\dagger}=U^{\dagger}|O^{\dagger}|\)). The second identity is then deduced from (a). Observe that (a) is equivalent to \(\gamma_{\mathrm{g},V}(\theta_{V})=\sigma\). Property (b) can be used to show that \(\mathrm{tr}\,\gamma_{\mathrm{g},V}(\tau)=1\) for all \(\tau\) (as it should be since \(\gamma_{\mathrm{g},V}(\tau)\) is a quantum state). Property (c) ensures that the geodesic joining \(\sigma\) to \(\rho\), obtained by exchanging \(\rho\) and \(\sigma\) in (56)-(58), coincides with the time-reversed geodesic \(\gamma_{\mathrm{g},\widetilde{V}}(\theta_{V}-\tau)\). By using the self-adjointness and unitarity of \(V\) and \([\Lambda_{\sigma\rho},V]=0\), it is easy to show that \(\widetilde{V}\) enjoys the same properties, the commutation being with \(\Lambda_{\rho\sigma}=|\sqrt{\rho}\sqrt{\sigma}|=U_{\sigma\rho}\Lambda_{\sigma\rho}U_{\sigma\rho}^{\dagger}\). The next property tells us how \(M_{\rho\sigma,V}\) is transformed as one moves \(\rho\) along the geodesic \(\gamma_{\mathrm{g},V}\), keeping \(\sigma\) fixed. For any invertible state \(\rho_{t}=\gamma_{\mathrm{g},V}(t)\) on \(\gamma_{\mathrm{g},V}\), with \(0\leq t\leq\theta_{V}\), one has (d) \(M_{\rho\sigma,V}=M_{\rho_{t}\sigma,V_{t}}X_{\rho\sigma,V}(t)\), where \(X_{\rho\sigma,V}(t)\) is given by (57) and \(V_{t}\) is some self-adjoint unitary operator commuting with \(\Lambda_{\sigma\rho_{t}}=|\sqrt{\sigma}\sqrt{\rho_{t}}|\).
This formula is related to the fact that the geodesics joining \(\rho_{t}\) and \(\sigma\) are the geodesics joining \(\rho\) and \(\sigma\) shifted in time, \[\gamma_{\mathrm{g},V_{t}}^{(t)}(\tau)=\gamma_{\mathrm{g},V}(t+\tau)\;,\;0 \leq\tau\leq\theta_{V}-t\;.\] (108) We will prove in Appendix B that the spectrum of \(V_{t}\) is constant in time save at the intersection times of \(\gamma_{\mathrm{g},V}\) with the boundary of quantum states \(\partial\mathcal{E}_{\mathcal{H}}\), where some eigenvalues of \(V_{t}\) may jump from \(-1\) to \(+1\). In particular, if \(V=\mathds{1}\) then \(V_{t}=\mathds{1}\) for \(0\leq t\leq\theta_{1}\). Formula (d) can be proven directly from (56) and (58), but it is simpler to derive it from the properties of horizontal geodesics on the hypersphere \(\mathcal{S}_{\mathcal{K}}^{\mathrm{inv}}\). In fact, it is clear geometrically that the arc of great circle joining \(|\Psi_{t}\rangle=|\Psi_{\mathrm{g},V}(t)\rangle\) to \(|\Phi_{V}\rangle\) has length \(\theta_{V}-t\) and is contained in the arc of great circle joining \(|\Psi\rangle\) to \(|\Phi_{V}\rangle\), so that it is parametrized by \[|\Psi_{\mathrm{g},V}^{(t)}(\tau)\rangle=|\Psi_{\mathrm{g},V}(t+\tau)\rangle\;,\;0\leq\tau\leq\theta_{V}-t\;. \tag{109}\] Since \(|\Psi_{\mathrm{g},V}(\tau)\rangle\) is a horizontal geodesic, its tangent vector \(|\hat{\Psi}_{t}\rangle\) at \(|\Psi_{t}\rangle\) is horizontal (property (i) of Sec. III.3). According to the result of Sec. IV.1, this is equivalent to \[|\Phi_{V}\rangle=M_{\rho_{t}\sigma,V_{t}}\otimes\mathds{1}_{\!A}|\Psi_{t}\rangle \tag{110}\] for some self-adjoint unitary \(V_{t}\) commuting with \(\Lambda_{\sigma\rho_{t}}\). Now by (51), (52), and (57), one has \[|\Psi_{t}\rangle=|\Psi_{\mathrm{g},V}(t)\rangle=X_{\rho\sigma,V}(t)\otimes \mathds{1}_{\!A}|\Psi\rangle\;. \tag{111}\] Plugging (111) into (110) and comparing with (51) one gets \[M_{\rho\sigma,V}\otimes\mathds{1}_{\!A}|\Psi\rangle=M_{\rho_{t}\sigma,V_{t}}X_ {\rho\sigma,V}(t)\otimes\mathds{1}_{\!A}|\Psi\rangle\;. \tag{112}\] One easily shows that this equation is equivalent to (d) (for instance, one may rely on (46)). Furthermore, (109) implies (108) since the projection on \(\mathcal{E}_{\mathcal{H}}^{\mathrm{inv}}\) of the arc of great circle \(|\Psi_{\mathrm{g},V}^{(t)}(\tau)\rangle\) is the Bures geodesic joining \(\rho_{t}\) and \(\sigma\) with unitary \(V_{t}\). An important consequence of (d) for the application to quantum metrology is the following. As shown in Appendix B, \(X_{\rho\sigma,V}(t)\) is invertible when \(\rho_{t}\) is invertible, i.e., when \(t\) is not an intersection time of \(\gamma_{\mathrm{g},V}\) with \(\partial\mathcal{E}_{\mathcal{H}}\). In such a case \(M_{\rho_{t}\sigma,V_{t}}=M_{\rho\sigma,V}X_{\rho\sigma,V}(t)^{-1}\) is a function of the self-adjoint operator \(M_{\rho\sigma,V}\), as \(X_{\rho\sigma,V}(t)\) is a function of \(M_{\rho\sigma,V}\) by (57). Thus \(M_{\rho_{t}\sigma,V_{t}}\) has \(t\)-independent eigenprojectors, given by the eigenprojectors of \(M_{\rho\sigma,V}\). ## Appendix B Intersections of the geodesics with the boundary of quantum states In this appendix we study the intersections of the Bures geodesics with the boundary of quantum states \(\partial\mathcal{E}_{\mathcal{H}}\). As explained in the main text, we consider the extensions of the geodesics \(\gamma_{\mathrm{g},V}\) joining two states \(\rho\) and \(\sigma\in\mathcal{E}_{\mathcal{H}}^{\mathrm{inv}}\) to the time interval \([0,\pi]\), given by (56) with \(0\leq\tau\leq\pi\). 
These extensions are closed geodesic curves, which are denoted by the same symbol \(\gamma_{\mathrm{g},V}\). Recall that these curves depend on a self-adjoint unitary operator \(V\) commuting with \(\Lambda_{\sigma\rho}^{2}=\sqrt{\rho}\,\sigma\sqrt{\rho}\). The arc length of \(\gamma_{\mathrm{g},V}\) between \(\rho\) and \(\sigma\) is denoted by \(\theta_{V}\) (see Theorem 1). **Theorem 6**.: _One has_ 1. \(\gamma_{\mathrm{g},V}\) _intersects_ \(q_{V}\) _times_ \(\partial\mathcal{E}_{\mathcal{H}}\)_, where_ \(q_{V}\) _is the number of distinct eigenvalues of the observable_ \(M_{\rho\sigma,V}\) _defined in (_58_). While the shortest geodesic_ \(\gamma_{\mathrm{g}}\) _does not intersect_ \(\partial\mathcal{E}_{\mathcal{H}}\) _between_ \(\rho\) _and_ \(\sigma\)_, i.e.,_ \(\gamma_{\mathrm{g}}([0,\theta_{1}])\subset\mathcal{E}_{\mathcal{H}}^{\mathrm{ inv}}\)_, the other geodesics with_ \(V\neq\mathds{1}\) _do so at least once. More precisely, the number of intersection points of_ \(\gamma_{\mathrm{g},V}([0,\theta_{V}])\) _with_ \(\partial\mathcal{E}_{\mathcal{H}}\) _is equal to the multiplicity of the eigenvalue_ \(-1\) _of_ \(V\) _._ 2. _The intersection points_ \(\rho_{i}\) _of_ \(\gamma_{\mathrm{g},V}\) _with_ \(\partial\mathcal{E}_{\mathcal{H}}\) _have ranks_ \(n-m_{i,V}\) _and supports_ \((\mathds{1}-P_{i,V})\mathcal{H}\)_, where_ \(m_{i,V}\) _and_ \(P_{i,V}\) _are the multiplicities of the eigenvalues and the spectral projectors of_ \(M_{\rho\sigma,V}\)_, respectively. In particular,_ \[\sum_{i=1}^{q_{V}}\dim(\ker\rho_{i})=n\;.\] (104) 3. _Given an invertible state_ \(\rho\in\mathcal{E}_{\mathcal{H}}^{\mathrm{inv}}\) _and a pure state_ \(|\phi_{1}\rangle\) _such that_ \(\langle\phi_{1}|\rho|\phi_{1}\rangle>0\)_, there are exactly two geodesics passing through_ \(\rho\) _and intersecting_ \(\partial\mathcal{E}_{\mathcal{H}}\) _at_ \(\rho_{1}=|\phi_{1}\rangle\langle\phi_{1}|\)_, namely the shortest geodesic_ \(\gamma_{\mathrm{g},\rho\to\rho_{1}}(\tau)\) _joining_ \(\rho\) _and_ \(\rho_{1}\) _and its time reversal_ \(\gamma_{\mathrm{g},\rho\to\rho_{1}}(\pi-\tau)\)_. Moreover,_ \(\gamma_{\mathrm{g},\rho\to\rho_{1}}\) _intersects twice_ \(\partial\mathcal{E}_{\mathcal{H}}\)_; the other intersection point_ \(\rho_{2}\) _has rank_ \(n-1\) _and support orthogonal to_ \(|\phi_{1}\rangle\)_, being therefore separated from_ \(\rho_{1}\) _by a geodesic distance_ \(\pi/2\)_._ The results (i) and (ii) have been proven in Ref. [27] in the particular case \(V=\mathds{1}\). It follows from (ii) that \(\gamma_{\mathrm{g},V}\) intersects \(\partial\mathcal{E}_{\mathcal{H}}\) at a pure state if and only if \(M_{\rho\sigma,V}\) has two eigenvalues of multiplicities \(n-1\) and \(1\). Note that this always holds for \(n=2\) (in fact, for a qubit \(\partial\mathcal{E}_{\mathcal{H}}\) is the set of pure states). For a qutrit (\(n=3\)), there are up to time-reversal four geodesics passing through two generic states \(\rho>0\) and \(\sigma>0\) (see the discussion after Theorem 1). The shortest geodesic \(\gamma_{\mathrm{g}}\) joining \(\rho\) and \(\sigma\), obtained for \(V=\mathds{1}\), does not intersect \(\partial\mathcal{E}_{\mathcal{H}}\) between these two states. The three other geodesics \(\gamma_{\mathrm{g},V}\) correspond to \(V\) having spectrum \(\{1,1,-1\}\) (or \(\{-1,-1,1\}\) for the time-reversal geodesics) and intersect the boundary once (twice) between \(\rho\) and \(\sigma\). 
If \(M_{\rho\sigma,V}\) has non-degenerated eigenvalues then \(\gamma_{\mathrm{g},V}\) has \(q_{V}=3\) intersections \(\rho_{i}\) with \(\partial\mathcal{E}_{\mathcal{H}}\), which have rank \(2\). Proof.: To simplify the notation we do not write explicitly the dependence on \(\rho\), \(\sigma\), and \(V\) of the operators \(M_{\rho\sigma,V}\), \(X_{\rho\sigma,V}\), etc. Following the arguments of [27], we observe that in view of (56), \(\gamma_{\mathrm{g},V}(\tau)\in\partial\mathcal{E}_{\mathcal{H}}\) if and only if \(\det\gamma_{\mathrm{g},V}(\tau)=\det X(\tau)^{2}\det\rho=0\), that is, \(\det X(\tau)=0\). The last determinant is the characteristic polynomial of \(M\), see (57). Thus \(\gamma_{\mathrm{g},V}\) intersects \(q\) times \(\partial\mathcal{E}_{\mathcal{H}}\) at times \(\tau_{1}<\ldots<\tau_{q}\) given by \[\frac{\sin(\theta_{V}-\tau_{i})}{\sin\tau_{i}}=-\mu_{i}\;\Leftrightarrow\; \mathrm{cotan}\,\tau_{i}=\frac{\cos\theta_{V}-\mu_{i}}{\sin\theta_{V}}\;, \tag{105}\] where \(\mu_{1}<\ldots<\mu_{q}\) are the distinct eigenvalues of \(M\). If \(V=\mathds{1}\) then \(M>0\) and thus \(\mu_{1}>0\). Hence \(\mathrm{cotan}\,\tau_{1}<\mathrm{cotan}\,\theta_{1}\), so that the first intersection time satisfies \(\tau_{1}>\theta_{1}\). This tells us that the shortest geodesic arc \(\gamma_{\mathrm{g}}([0,\theta_{1}])\) starting at \(\rho\) and ending at \(\sigma\) is contained in \(\mathcal{E}_{\mathcal{H}}^{\mathrm{inv}}\) and that \(\gamma_{\mathrm{g}}\) intersects \(q\) times the boundary on its part starting from \(\sigma\) and going back to \(\rho\). In contrast, let us show that if \(V\neq\mathds{1}\) then \(M\) has at least one negative eigenvalue \(\mu_{1}<0\). Actually, \(V\) has at least one eigenvalue \(v_{k}=-1\). Denote by \(|\varphi_{k}\rangle\) a common eigenvector of \(V\) and \(\Lambda\) for the eigenvalues \(v_{k}\) and \(\lambda_{k}\), respectively. Then \[\langle\varphi_{k}|\sqrt{\rho}\,M\sqrt{\rho}|\varphi_{k}\rangle=\langle\varphi _{k}|\Lambda V|\varphi_{k}\rangle=-\lambda_{k}<0\;. \tag{106}\] By the variational principle it follows that \(\mu_{1}<0\). Hence \(M\) has \(s\geq 1\) negative eigenvalues \(\mu_{1}<\ldots<\mu_{s}<0\). By (105), one deduces that \(\mathrm{cotan}\,\tau_{i}>\mathrm{cotan}\,\theta_{V}\) and thus \(\tau_{i}<\theta_{V}\) for \(i=1,\ldots,s\). A reversed inequality holds for \(i>s\). This shows that \(\gamma_{\mathrm{g},V}\) intersects the boundary \(s\) times on its part between \(\rho\) and \(\sigma\). The fact that \(s\) is equal to the multiplicity of the eigenvalue \(-1\) of \(V\) follows from a similar argument, using the min-max theorem for self-adjoint operators. Let us now prove that the intersection states \(\rho_{i}=\gamma_{\mathrm{g}}(\tau_{i})\in\partial\mathcal{E}_{\mathcal{H}}\) have ranks \(r_{i}=n-m_{i}\) and supports \[\mathrm{supp}(\rho_{i})=Q_{i}\mathcal{H}=[\ker(M-\mu_{i})]^{\perp}\;, \tag{107}\] where \(m_{i}\) and \(P_{i}\) are the multiplicity and spectral projector of \(M\) for the eigenvalue \(\mu_{i}\) and \(Q_{i}=1-P_{i}\). We first note that \[X_{i}=X(\tau_{i})=\frac{\sin\tau_{i}}{\sin\theta}(M-\mu_{i}) \tag{108}\] has rank \(r_{i}\) and support \(Q_{i}\mathcal{H}\), so that \(X_{i}=Q_{i}X_{i}Q_{i}\). But \[\rho_{i}=X_{i}\,\rho\,X_{i}\;, \tag{109}\] hence \(\ker(\rho_{i})\supset\ker(X_{i})\). Reciprocally, let \(|\varphi\rangle\in\ker(\rho_{i})\). Then \(\rho\,X_{i}|\varphi\rangle\in P_{i}\mathcal{H}\), that is, \(Q_{i}\,\rho\,Q_{i}X_{i}|\varphi\rangle=0\). 
Since \(Q_{i}\,\rho\,Q_{i}\) is invertible on \(Q_{i}\mathcal{H}\) (recall that \(\rho>0\)), one has \(|\varphi\rangle\in\ker(X_{i})\). This implies that \(\ker(\rho_{i})=\ker(X_{i})=P_{i}\mathcal{H}\) and thus \(\mathrm{supp}(\rho_{i})=Q_{i}\mathcal{H}\), as announced above. We now prove the last point (iii). Let \(\gamma_{\mathrm{g},V}\) be a geodesic starting at \(\rho\) and intersecting the boundary at \(\rho_{1}=|\phi_{1}\rangle\langle\phi_{1}|\). Thanks to (107) one has \[M-\mu_{1}=\langle M-\mu_{1}\rangle_{\phi_{1}}|\phi_{1}\rangle\langle\phi_{1}|\;. \tag{110}\] Using (105) and (110) one obtains \[\langle M-\mu_{1}\rangle_{\phi_{1}}\langle\rho\rangle_{\phi_{1}}= \mathrm{tr}[(M-\mu_{1})\rho] \tag{111}\] \[=\mathrm{tr}\,M\rho-\cos\theta+\sin\theta\,\mathrm{cotan}\tau_{1}\;=\;\sin\theta\,\mathrm{cotan}\tau_{1}\;,\] where the last equality follows from property (b) of Appendix A. Furthermore, equating \(X_{1}\rho X_{1}\) with \(\rho_{1}=|\phi_{1}\rangle\langle\phi_{1}|\) and using (108), (110), and (111) one gets \(\cos\tau_{1}=\pm\langle\rho\rangle_{\phi_{1}}^{1/2}\). Therefore, for any choice of the operator \(V\), \(\tau_{1}\) is either equal to \(d_{\mathrm{B}}(\rho,\rho_{1})=\arccos(\langle\rho\rangle_{\phi_{1}}^{1/2})\) or to \(\pi\) minus this distance. We can now express \(\mu_{1}\) and \(\langle M-\mu_{1}\rangle_{\phi_{1}}\) in terms of \(\cos\theta\), \(\sin\theta\), \(\cos\tau_{1}\), and \(\sin\tau_{1}\); the remaining affirmations in (iii) are direct consequences of (ii). Note that if \(\langle\rho\rangle_{\phi_{1}}=0\) then there is no geodesic joining \(\rho\) to \(\rho_{1}\) because in such a case (30) and (31) imply that \(X_{1}\rho X_{1}\) vanishes, in contradiction with \(X_{1}\rho X_{1}=\rho_{1}\). \(\Box\) Let us now apply Theorem 6 and a continuity argument to determine the self-adjoint unitary operators \(V_{t}\) appearing in property (d) and Eq. (30) of Appendix A. Recall that \(V_{t}\) is associated with the time-shifted geodesic \(\gamma_{\mathrm{g},V_{t}}^{(t)}(\tau)=\gamma_{\mathrm{g},V}(t+\tau)\) joining \(\rho_{t}\) and \(\sigma\) and that \(V_{t}\) commutes with \(\Lambda_{\sigma\rho_{t}}=|\sqrt{\sigma}\sqrt{\rho_{t}}|\). Denoting as above by \(\tau_{1}<\tau_{2}<\cdots<\tau_{q}\) the intersection times of \(\gamma_{\mathrm{g},V}\) with \(\partial\mathcal{E}_{\mathcal{H}}\), we first assume that \(0\leq t<\tau_{1}\). One deduces from property (d) that \[V_{t}=\Lambda_{\sigma\rho_{t}}^{-1}\sqrt{\rho_{t}}M_{\rho\sigma,V}X_{\rho\sigma,V}(t)^{-1}\sqrt{\rho_{t}}\;. \tag{32}\] Here, we have used that \(\Lambda_{\sigma\rho_{t}}\) and \(X_{\rho\sigma,V}(t)\) are invertible for \(0\leq t<\tau_{1}\) (in fact, \(\det\rho_{t}=(\det X_{\rho\sigma,V}(t))^{2}\det\rho\neq 0\) for \(t\neq\tau_{i}\), see the proof of Theorem 6). Furthermore, \(\Lambda_{\sigma\rho_{t}}^{-1}\) and \(X_{\rho\sigma,V}(t)^{-1}\) are continuous in time in view of the continuity of \(\rho_{t}\) and of (57). It follows that \(V_{t}\) is continuous in time on \([0,\tau_{1})\). Thus its eigenvalues \(v_{k}(t)\in\{-1,1\}\) are time-independent on this interval, \(v_{k}(t)=v_{k}\;\forall\;t\in[0,\tau_{1})\), where \(v_{k}\) are the eigenvalues of \(V=V_{0}\). Hence \(V_{t}=\sum_{k}v_{k}|\varphi_{k}(t)\rangle\langle\varphi_{k}(t)|\) for \(t\in[0,\tau_{1})\), where \(\{|\varphi_{k}(t)\rangle\}_{k=1}^{n}\) is a time-continuous orthonormal basis diagonalizing \(\Lambda_{\sigma\rho_{t}}\).
In particular, if \(V=\mathds{1}\) then \(V_{t}=\mathds{1}\) for all \(t\in[0,\theta_{1}]\) (note that in such a case \(\tau_{1}>\theta_{1}\) by Theorem 6(i)). Eq. (30) then ensures that \(\gamma_{\mathrm{g}}^{(t)}(\tau)=\gamma_{\mathrm{g}}(t+\tau)\) is the shortest geodesic joining \(\rho_{t}\) and \(\sigma\) and has length \(d_{B}(\rho_{t},\sigma)=\theta_{1}-t\) with \(t=d_{B}(\rho,\rho_{t})\). This is consistent with the additivity property of the distance, \[d_{B}(\rho,\sigma)=d_{B}(\rho,\rho_{t})+d_{B}(\rho_{t},\sigma) \tag{33}\] when \(\rho_{t}\in\gamma_{\mathrm{g}}([0,\theta_{1}])\). On the other hand, if \(\tau_{i}<t<\tau_{i+1}\) then the number of intersection points with \(\partial\mathcal{E}_{\mathcal{H}}\) of the time-shifted geodesic arc \(\gamma_{\mathrm{g},V_{t}}^{(t)}([0,\theta_{V}-t])\) is reduced by \(i\) as compared to the number of intersection points of \(\gamma_{\mathrm{g},V}([0,\theta_{V}])\). According to Theorem 6(i), the multiplicity of the eigenvalue \(-1\) of \(V_{t}\) is equal to \(s-i\), where \(s\) is the multiplicity for \(V\). By the same argument as above, the eigenvalues of \(V_{t}\) and their multiplicities are constant between \(\tau_{i}\) and \(\tau_{i+1}\), but the multiplicities jump at the intersection times. In particular, the identity (33) does not hold for \(t>\tau_{1}\).
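The geodesic construction analyzed in these appendices is easy to verify numerically. The following self-contained sketch is our own illustration for the case \(V=\mathds{1}\) and randomly generated \(3\times 3\) states: it builds \(M_{\rho\sigma,\mathds{1}}=\sqrt{\sigma}\,U_{\sigma\rho}\,\rho^{-1/2}\) from the polar decomposition \(\sqrt{\sigma}\sqrt{\rho}=U_{\sigma\rho}\Lambda_{\sigma\rho}\), checks properties (a)-(b) of Appendix A, and evaluates \(\gamma_{\mathrm{g}}(\tau)=X(\tau)\rho X(\tau)\) with \(X(\tau)=[\sin(\theta-\tau)\mathds{1}+\sin(\tau)M]/\sin\theta\), the form of \(X_{\rho\sigma,\mathds{1}}(\tau)\) consistent with (105) and (108), confirming that the shortest geodesic interpolates from \(\rho\) to \(\sigma\) while remaining a unit-trace positive state (Theorem 6(i)).

```python
# Numerical sketch (ours) of the shortest Bures geodesic, V = 1, for random 3x3 states.
import numpy as np
from scipy.linalg import sqrtm, polar, inv

rng = np.random.default_rng(3)
n = 3

def random_state(d):                      # random full-rank density matrix
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = A @ A.conj().T + 0.1 * np.eye(d)
    return R / np.trace(R).real

rho, sigma = random_state(n), random_state(n)

U_sr, Lam = polar(sqrtm(sigma) @ sqrtm(rho))        # sqrt(sigma) sqrt(rho) = U_{sigma rho} Lambda
M = sqrtm(sigma) @ U_sr @ inv(sqrtm(rho))           # M_{rho sigma, V=1}

print(np.allclose(M, M.conj().T))                   # M is self-adjoint
print(np.allclose(M @ rho @ M, sigma))              # property (a)
theta = np.arccos(np.trace(rho @ M).real)           # tr[rho M] = cos(theta)
print(np.isclose(np.trace(rho @ M @ M).real, 1.0))  # property (b)

def gamma(tau):                                     # geodesic gamma_g(tau) = X(tau) rho X(tau)
    X = (np.sin(theta - tau) * np.eye(n) + np.sin(tau) * M) / np.sin(theta)
    return X @ rho @ X

print(np.allclose(gamma(0.0), rho), np.allclose(gamma(theta), sigma))   # endpoints rho and sigma
taus = np.linspace(0.0, theta, 20)
print(all(np.isclose(np.trace(gamma(t)).real, 1.0) for t in taus))      # unit trace along the arc
print(all(np.linalg.eigvalsh(gamma(t)).min() > -1e-10 for t in taus))   # positivity (interior arc)
```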
2302.05279
Magnetic catalysis in the (2+1)-dimensional Gross-Neveu model
We study the Gross-Neveu model in $2+1$ dimensions in an external magnetic field $B$. We first summarize known mean-field results, obtained in the limit of large flavor number $N_\mathrm{f}$, before presenting lattice results using the overlap discretization to study one reducible fermion flavor, $N_\mathrm{f}=1$. Our findings indicate that the magnetic catalysis phenomenon, i.e., an increase of the chiral condensate with the magnetic field, persists beyond the mean-field limit for temperatures below the chiral phase transition and that the critical temperature grows with increasing magnetic field. This is in contrast to the situation in QCD, where the broken phase shrinks with increasing $B$ while the condensate exhibits a non-monotonic $B$-dependence close to the chiral crossover, and we comment on this discrepancy. We do not find any trace of inhomogeneous phases induced by the magnetic field.
Julian J. Lenz, Michael Mandl, Andreas Wipf
2023-02-10T14:43:35Z
http://arxiv.org/abs/2302.05279v2
# Magnetic catalysis in the (2+1)-dimensional Gross-Neveu model ###### Abstract We study the Gross-Neveu model in \(2+1\) dimensions in an external magnetic field \(B\). We first summarize known mean-field results, obtained in the limit of large flavor number \(N_{\mathrm{f}}\), before presenting lattice results using the overlap discretization to study one reducible fermion flavor, \(N_{\mathrm{f}}=1\). Our findings indicate that the magnetic catalysis phenomenon, i.e., an increase of the chiral condensate with the magnetic field, persists beyond the mean-field limit for temperatures below the chiral phase transition and that the critical temperature grows with increasing magnetic field. This is in contrast to the situation in QCD, where the broken phase shrinks with increasing \(B\) while the condensate exhibits a non-monotonic \(B\)-dependence close to the chiral crossover, and we comment on this discrepancy. We do not find any trace of inhomogeneous phases induced by the magnetic field. ## I Introduction In recent years the study of strongly-interacting quantum field theories exposed to external electromagnetic fields has received a significant amount of attention in the high-energy physics community. This is due to the fact that magnetic fields are believed to play an important role in a plethora of physical processes, such as heavy-ion collisions [1; 2; 3; 4; 5], the strong interactions within neutron stars [6; 7; 8; 9], and at several stages of the early Universe [10; 11; 12; 13] - see [14] for an extensive review. Quantum Chromodynamics (QCD) is the theoretical framework underlying the strong interactions and - as such - describes the aforementioned phenomena. Since QCD cannot be studied using perturbation theory in the parameter regime of interest, one has to resort to non-perturbative methods, of which lattice quantum field theory is the most reliable one. Lattice simulations, however, suffer from the infamous complex-action problem, rendering the use of conventional Monte-Carlo methods impossible at finite density. Necdless to say, there are countless attempts aiming at circumventing the complex-action problem (see, e.g., [15]), but none of them have fully solved it within finite-density QCD. In this work we employ a different approach altogether, using a low-dimensional toy model, the Gross-Neveu (GN) model [16], as an effective description of QCD. This is motivated by the fact that the GN model shares a number of important features with QCD, such as chiral symmetry and its spontaneous breakdown, or (in 3 dimensions or less) renormalizability [17; 18]. It should be mentioned that there exist more realistic models, bearing a closer similarity to QCD than the one considered in this work, for instance models of the Nambu-Jona-Lasinio (NJL) [19] or quark-meson (see, e.g., [20]) type. Still, the simplicity of the GN model merits its use as a starting point for the search of a description of QCD using effective models, which may then be expanded upon. Furthermore, we mention that the GN model and variants thereof are also interesting from a condensed-matter perspective as they have been used successfully to describe certain one-dimensional and planar materials, such as polymers [21; 22; 23; 24], graphene [25; 26; 27; 28], and high-temperature superconductors [29; 30; 31]. 
One should, however, take some care in translating the results because the mapping of physical (non-relativistic) degrees of freedom to the field-theoretical description with emergent Lorentz invariance is not always straightforward - see [32] for one example. Four-Fermi theories, including GN-type models, have been extensively studied in the literature with a variety of methods. A first - and often quite reasonable - approximation is given by mean-field treatments, which become exact at infinitely large flavor numbers \(N_{\mathrm{f}}\) and can be systematically improved by expanding in orders of \(1/N_{\mathrm{f}}\). Since we use the mean-field behavior as a guideline and important comparison for our lattice investigation, we summarize the most relevant known results for our system of interest in the following. Early attempts to study the GN model in three space-time dimensions including an external magnetic field were made in [33] and extended to finite temperature in [34; 35]. It was found that the magnetic field is a strong catalyst of chiral symmetry breaking, enhancing the chiral condensate at both zero and non-zero temperature. This effect, termed _magnetic catalysis_, was explained in [36] to be caused by a dimensional reduction due to the applied field, similar to the effect of the Fermi surface in superconductivity - see also the reviews [37; 38]. The goal of this work is to investigate whether the magnetic catalysis in the GN model is merely an artifact of the large -\(N_{\mathrm{f}}\) limit, where quantum fluctuations are suppressed, or is present in the full theory at finite flavor number as well. Work in this direction has already been performed using methods superior to the mean-field approximation, such as the functional renormalization group [39] or optimized perturbation theory [40], both supporting the presence of magnetic catalysis also at fi nite \(N_{\rm f}\). However, we are not aware of any lattice studies of the GN model in an external electromagnetic field to be found in the literature at the time of writing and we attempt to fill this gap. To this end, we performed extensive lattice simulations using overlap fermions at both zero and non-zero temperature and for vanishing chemical potential. The finite-density case will be studied in detail in an upcoming publication. We note that - to the best of our knowledge - this is the first lattice Monte-Carlo simulation of GN-type models that uses overlap fermions (although theoretical considerations already exist in the literature [41]). Thus, we have put considerable effort into working out the technical details and intricacies involved. However, we decided that they are better suited to be part of another planned publication with a more technical and analytical focus. As we value the reproducibility of our results according to the FAIR Guiding Principles [42] (see [43] for a recent study about its status in our community), we will provide access to our simulation results under [44]. Furthermore, the scripts used to perform our data analyses will be found under [45]. This work is structured as follows: In Sec. II we introduce the GN model in an external magnetic field and discuss its large -\(N_{\rm f}\) limit. Our lattice formalism is outlined in Sec. III and our results are presented in Sec. IV. Finally, we discuss and critically analyze our findings and put them into perspective with respect to known QCD results in Sec. V. 
## II Analytical results The GN model in its most basic form is given by the Lagrangian [16] \[{\cal L}=\bar{\psi}i\not{\partial}\psi+\frac{g^{2}}{2N_{\rm f}}(\bar{\psi} \psi)^{2}\;, \tag{1}\] featuring \(N_{\rm f}\) flavors of fermionic fields, collected implicitly in the tuple \(\psi\) and self-interacting via a scalar-scalar channel with coupling constant \(g^{2}\). The sum over flavors is implied in (1). In order to bring the model into a form amenable to our mean-field treatment as well as to lattice simulations, one introduces an auxiliary scalar field \(\sigma\) by means of a Hubbard-Stratonovich transformation. The semi-bosonized, but fully equivalent, theory then reads \[{\cal L}_{\sigma}={\rm i}\bar{\psi}(\not{\partial}+{\rm i}e\not{A}+\sigma) \psi+\frac{N_{\rm f}}{2g^{2}}\sigma^{2}\;, \tag{2}\] where we have furthermore coupled the fermions to an external vector field \(A_{\mu}\) and \(e\) denotes the elementary electric charge. For the remainder of this work we shall be concerned with a \((2+1)\)-dimensional space-time and four-component spinor fields transforming in a reducible representation of the Dirac algebra [46]. This allows one to introduce a "fifth"1 gamma matrix \(\gamma_{5}\), anti-commuting with all other \(\gamma_{\mu}\). The Lagrangian (2) is, then, invariant under a discrete \(\mathbb{Z}_{2}\) chiral transformation: Footnote 1: More precisely, the reducible representation of the Clifford algebra contains two linearly independent matrices anti-commuting with all other elements. The respective \(\mathbb{Z}_{2}\) symmetries they generate, however, are not independent. This is because the product of these matrices is non-trivial and commutes with all other elements of the Clifford algebra, giving rise to a further \(U(N_{\rm f})\) symmetry which relates the (seemingly) independent \(\mathbb{Z}_{2}\) factors. This \(U(N_{\rm f})\), however, is irrelevant for us since it persists in the presence of a chiral condensate. Further information can be found, e.g., in [47]. \[\psi\to\gamma_{5}\psi\;,\quad\bar{\psi}\to-\bar{\psi}\gamma_{5}\;,\quad\sigma \to-\sigma\;. \tag{3}\] It is this chiral symmetry and its spontaneous breaking that will be our main concern in this work. As an order parameter for chiral symmetry breaking we consider the fermion condensate \(\langle\bar{\psi}\psi\rangle\), which is proportional to the expectation value of the auxiliary field \(\sigma\) by means of a Dyson-Schwinger equation: \[\langle\bar{\psi}\psi\rangle=\frac{{\rm i}N_{\rm f}}{g^{2}}\langle\sigma \rangle\;. \tag{4}\] In the limit of an infinite flavor number, the path integral defining the partition function of the model, \[Z=\int{\cal D}\bar{\psi}{\cal D}\psi{\cal D}\sigma e^{-S[\bar{\psi},\psi,\sigma ]}\;,\;\;S[\bar{\psi},\psi,\sigma]=\int\!{\rm d}^{3}x\,{\cal L}_{\sigma}\;, \tag{5}\] is, after integrating out the fermions, reduced to the problem of minimizing the effective potential \[V_{\rm eff}=\frac{V}{2g^{2}}\sigma^{2}-\ln\det D\;, \tag{6}\] where we have assumed \(\sigma\) to be homogeneous in space and time, \(V\) denotes the space-time volume and \(D\) is the Dirac operator \[D=\not{\partial}+{\rm i}e\not{A}+\sigma\;. 
\tag{7}\] For \(A_{\mu}\) describing a constant and homogeneous (electro)magnetic field \(B\) and, without loss of generality, assuming \(B>0\), one finds the following effective potential density [48]: \[\frac{V_{\rm eff}}{V}=-\frac{\sigma^{2}}{2\pi}\sigma_{0}-\frac{ \sqrt{2}}{\pi}(eB)^{3/2}\zeta_{H}\left(-\frac{1}{2},\frac{\sigma^{2}}{2eB} \right)+\frac{|\sigma|eB}{2\pi}\] \[\quad-\frac{eB}{\pi\beta}\sum_{l=0}^{\infty}d_{l}\ln\left(1+\exp \left(-\beta\sqrt{\sigma^{2}+2eBl}\right)\right)\;, \tag{8}\] where \(\sigma_{0}\) denotes the minimum of \(V_{\rm eff}\) at vanishing temperature and magnetic field, \(\zeta_{H}\) is the Hurwitz zeta function, \(\beta=1/T\) denotes the inverse temperature and the last term is a sum over Landau levels \(l\) with degeneracies \(d_{l}=2-\delta_{l0}\). Remarkably, the volume-dependence of \(V_{\rm eff}\) is contained only in the discretization of \(eB\) in a finite volume, see Eq. (19) below. For a derivation of Eq. (8), see App. A. The global minima \(\langle\sigma\rangle\) of the effective potential for different temperatures and magnetic field strengths determine the mean-field phase structure of the GN model, which we show in Fig. 1. Evidently, chiral symmetry is spontaneously broken (i.e., \(\langle\sigma\rangle\neq 0\)) for low temperatures and \(B=0\). The magnetic field then enhances this breaking even further, causing the chiral condensate to increase. This is the magnetic catalysis phenomenon mentioned in the Introduction. We furthermore observe that the critical temperature \(T_{c}(B)\), beyond which chiral symmetry is restored (i.e., \(\langle\sigma\rangle=0\)) increases monotonically with \(B\) and, thus, the region of broken symmetry grows with the magnetic field. We remark at this point that the magnetic-field-induced dimensional reduction down to one space-time dimension, found in [36; 48; 49] to be responsible for magnetic catalysis, is not in conflict with the no-go theorem prohibiting the existence of phases in one dimension [50] (not to be confused with the Coleman-Hohenberg-Mermin-Wagner theorem [51; 52; 53] preventing the spontaneous breaking of _continuous_ symmetries in _two_ dimensions). This is due to the fact that the chiral condensate itself is electrically neutral and, thus, unaffected by the dimensional reduction. For a similar argument in the \(U(2)\)-symmetric NJL model, see [48]. It is the main purpose of this work to shed light on the fate of the results presented in this section when going beyond the mean-field limit, i.e., when considering a finite number of fermionic flavors \(N_{\rm f}\) and lifting the restriction of homogeneity on \(\sigma\). ## III Lattice setup ### Discretization We intend to study the theory with Lagrangian (2) on a three-dimensional lattice \(\Lambda\) with \(N_{\mu}\) lattice points in the \(x_{\mu}\)-direction (\(\mu=0,1,2\)) and an isotropic lattice constant \(a\). For the entirety of this work we shall always consider \(N_{1}\) and \(N_{2}\) to be equal, \(N_{1}=N_{2}=:N_{\rm s}\), such that the physical lattice extent in each spatial direction is given by \(L=aN_{\rm s}\). Furthermore, we introduce \(N_{\rm t}:=N_{0}\) to denote the number of lattice points in (Euclidean) time direction, such that the inverse temperature reads \(\beta=aN_{\rm t}\). We then denote the space-time volume as \(V=L^{2}\beta\). The bosonic field \(\sigma\) obeys periodic boundary conditions in all directions, while the fermions are periodic in space and anti-periodic in time. 
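Before turning to the choice of fermion discretization, it may be useful to make the mean-field picture of Fig. 1 concrete. The following rough sketch is our own illustration (it is not the analysis used for the figure, and all parameter values are arbitrary choices in units of \(\sigma_{0}\)): it scans the effective potential density (8) over homogeneous values of \(\sigma\), with the Landau-level sum truncated at a large cutoff, and locates the global minimum \(\langle\sigma\rangle\) for a few temperatures and magnetic fields.

```python
# Rough sketch (illustration only) of the mean-field minimization of Eq. (8).
import numpy as np
from mpmath import zeta   # two-argument zeta(s, a) is the Hurwitz zeta function

sigma0 = 1.0               # scale: everything below is in units of sigma_0

def V_eff(sig, eB, T, lmax=400):
    beta = 1.0 / T
    s2 = sig**2
    v = -s2 * sigma0 / (2 * np.pi)
    v += -np.sqrt(2) / np.pi * eB**1.5 * float(zeta(-0.5, s2 / (2 * eB)))
    v += abs(sig) * eB / (2 * np.pi)
    ls = np.arange(lmax + 1)
    degen = np.where(ls == 0, 1.0, 2.0)          # d_l = 2 - delta_{l0}
    v += -eB / (np.pi * beta) * np.sum(degen * np.log1p(np.exp(-beta * np.sqrt(s2 + 2 * eB * ls))))
    return v

sigmas = np.linspace(1e-3, 2.0, 400)             # sigma = 0 is represented by the smallest value
for T in (0.1, 0.4, 0.7):
    for eB in (0.1, 0.5, 1.0):
        vals = [V_eff(s, eB, T) for s in sigmas]
        print(f"T={T:.1f}  eB={eB:.1f}  <sigma>={sigmas[int(np.argmin(vals))]:.3f}")
# At low T the condensate grows with eB (magnetic catalysis), and the temperature
# at which it drops to zero increases with eB, cf. Fig. 1.
```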
The question of which lattice discretization to use for fermions is a non-trivial one. Studies of QCD in background magnetic fields mainly rely on the use of staggered fermions [54; 55; 56; 57] (with a few authors employing overlap fermions as well [58]). However, it has become clear that staggered fermions can be problematic in asymptotically safe theories [59; 60; 61; 62; 63], of which (2) is an example. Moreover, since we are interested in studying chiral symmetry, we refrain from using Wilson fermions and since we prefer to avoid the fermion doubling problem we cannot use the naive discretization for \(N_{\rm f}<8\), either. Finally, even though in previous works [64; 61; 65; 66] the SLAC derivative [67; 68] has proven to be the best-suited discretization for studying GN-like theories on the lattice, it fails when naively applied to theories with gauge symmetry [69]. As a matter of fact, it is not obvious how to properly formulate the GN model in a magnetic field with SLAC fermions in a gauge-invariant way in the first place. We nevertheless discuss this issue further and provide a more detailed comparison between different possible discretizations in App. B. We are left with the choice of employing Ginsparg-Wilson fermions [70], which have ideal chiral properties but come with a significantly increased cost due to their non-ultralocality [71]. For our lattice studies we use Neuberger's formulation [72] of the overlap operator [73; 74], reading2 Footnote 2: We remark that this expression does not make use of \(\gamma_{5}\) and could thus be used in an irreducible representation of gamma matrices in \((2+1)\) dimensions as well [75; 76]. \[D_{\rm ov}=\frac{1}{a}\left(\mathds{1}+A/\sqrt{A^{\dagger}A}\right). \tag{9}\] Here, the kernel \(A\) is given by the Wilson operator \(D_{W}\) with a negative mass parameter \(m=-1\): \[A=aD_{W}-\mathds{1}\, \tag{10}\] \[D_{W}=\frac{1}{2}\big{[}\gamma_{\mu}\big{(}\nabla_{\mu}^{*}+ \nabla_{\mu}\big{)}-a\nabla_{\mu}^{*}\nabla_{\mu}\big{]}\, \tag{11}\] where the action of the covariant forward and backward difference operators on \(\psi(x)\) is defined as \[\begin{split}\nabla_{\mu}\psi(x)&=\frac{1}{a}\left[U_{ \mu}(x)\psi\left(x+a\hat{\mu}\right)-\psi(x)\right]\;,\\ \nabla^{*}_{\mu}\psi(x)&=\frac{1}{a}\left[\psi(x)- U^{\dagger}_{\mu}(x-a\hat{\mu})\psi\left(x-a\hat{\mu}\right)\right]\;.\end{split} \tag{12}\] In (12), \(\hat{\mu}\) denotes the unit vector in \(x_{\mu}\)-direction and \[U_{\mu}(x)=e^{\mathrm{i}aeA_{\mu}(x)} \tag{13}\] are \(U(1)\) link variables. Guided by the Lagrangian (2), where the Yukawa term \(\sigma\bar{\psi}\psi\) would reduce to a fermionic mass term if \(\sigma\) was constant, one can introduce the scalar field into the overlap formalism by the definition [41] \[D=D_{\mathrm{ov}}+\sigma\left(\mathds{1}-\frac{a}{2}D_{\mathrm{ov}}\right) \tag{14}\] for the full Dirac operator.3 For constant \(\sigma\) the second term in Eq. (14) is just a mass term in Ginsparg-Wilson language [78] (see [79] for a similar argument in the domain-wall formalism). By this definition one ensures that the identity (4), relating the expectation value of \(\sigma\) to the chiral condensate, is preserved4 on the lattice, i.e., Footnote 3: Note that this definition differs from the one given in [77]. Footnote 4: The fact that a factor of i is missing when comparing Eq. (15) to Eq. (4) is purely conventional and has no influence on any of the results or their interpretation. 
\[-\frac{N_{\mathrm{f}}}{g^{2}}\langle\sigma\rangle=\langle\bar{\psi}\psi\rangle_ {\mathrm{ov}}:=\left\langle\bar{\psi}\left(1-\frac{a}{2}D_{\mathrm{ov}}\right) \psi\right\rangle\;, \tag{15}\] facilitating the numerical study of chiral symmetry breaking considerably. The full action of our lattice theory thus reads \[S=\bar{\psi}D\psi+\frac{N_{\mathrm{f}}}{2g^{2}}\sigma^{2}\;, \tag{16}\] where summation over space-time and internal indices is implied. The discrete symmetry (3) of the continuum theory has an exact lattice counterpart in the overlap formalism, much like it is the case for theories with the more common \(U(1)\) chiral symmetry [80]. Namely, introducing \(\hat{\gamma}_{5}=\gamma_{5}(\mathds{1}-aD_{\mathrm{ov}})\), we find that the action (16) is invariant under \[\psi\to\hat{\gamma}_{5}\psi\;,\quad\bar{\psi}\to-\bar{\psi}\gamma_{5}\;,\quad \sigma\to-\sigma\;, \tag{17}\] by using the Ginsparg-Wilson relation [70] \[\{D_{\mathrm{ov}},\gamma_{5}\}=aD_{\mathrm{ov}}\gamma_{5}D_{\mathrm{ov}}\;. \tag{18}\] It should be noted that the additional symmetries in the continuum theory that arise due to ambiguity in the choice of the "fifth" gamma matrix (see footnote 1) can also be exactly translated to the lattice [81], but shall not be of interest in this work. From App. B we know that (massive) overlap fermions suffer from discretization effects that quantitatively change the chiral condensate in a theory of free fermions. One should, thus, investigate the interacting theory with a particular emphasis on its behavior towards the continuum limit to see if the discretization effects persist. ### Magnetic field on the lattice It is well known that the magnetic flux through a torus with an area \(L^{2}\), orthogonal to an applied magnetic field \(B\), is necessarily quantized [82; 83]. One finds the following quantization condition for the magnetic field: \[eB=\frac{2\pi}{L^{2}}b\;,\quad b\in\mathbb{Z}\;. \tag{19}\] Let us now outline how to implement an external magnetic field perpendicular to the spatial plane using the gauge links (13) in our lattice formulation (16). In the continuum one could represent such a magnetic field by, e.g., the following choice of vector potential: \[A_{0}(x)=0\;,\quad A_{1}(x)=0\;,\quad A_{2}(x)=Bx_{1}\;. \tag{20}\] On a lattice with periodic boundary conditions, however, this definition does not lead to a constant magnetic flux \[\Phi_{\mathcal{P}}=\oint_{\mathcal{P}}A_{\mu}ds_{\mu} \tag{21}\] through every lattice plaquette \(\mathcal{P}(x_{1},x_{2})\) in the spatial plane at position \((x_{1},x_{2})\) - see Fig. 2 for the definition of such a plaquette and the integration path in Eq. (21). In fact, one finds \[\Phi_{\mathcal{P}}=\begin{cases}a^{2}B&\mathrm{if}\quad 0\leq x_{2}<L-a\\ a^{2}B-aBL&\mathrm{if}\quad x_{2}=L-a\end{cases}\;, \tag{22}\] i.e., the flux through the lattice boundary in \(x_{2}\)-direction is large and opposite to the flux through the bulk, such Figure 2: Plaquette at position \((x_{1},x_{2})\) in the spatial plane. that the total magnetic flux through the lattice vanishes: \[\Phi_{\rm tot}=\sum_{\cal P}\Phi_{\cal P}=0\;. \tag{23}\] The solution is to introduce correction terms in \(A_{\mu}\) on the lattice boundary in a way that shifts all the negative (assuming \(B>0\)) flux to the single plaquette at the combined boundary \(x_{1}=x_{2}=L-a\). This can be achieved by the following definition [84]: \[A_{1}(x)=-\frac{BL}{a}x_{2}\delta_{x_{1},L-a}\;,\;A_{2}(x)=Bx_{1}\;, \tag{24}\] with \(A_{0}\) set to zero. 
The flux through \({\cal P}(L-a,L-a)\) is now given by \[\Phi_{\cal P}|_{x_{1}=x_{2}=L-a}=a^{2}B-BL^{2}=a^{2}B-\frac{2\pi}{e}b\;, \tag{25}\] where we have used (19), and \(\Phi_{\cal P}=a^{2}B\) everywhere else. Since in our lattice formulation \(A_{\mu}\) only appears in exponentials due to (13), the only way \(\Phi_{\cal P}\) contributes is via the plaquette terms \[U_{\mu\nu}(x)=U_{\mu}(x)U_{\nu}(x+a\hat{\mu})U_{\mu}^{\dagger}(x+a\hat{\nu})U_ {\nu}^{\dagger}(x)\;, \tag{26}\] as we have \[U_{12}(x)=e^{{\rm i}e\Phi_{\cal P}}\;. \tag{27}\] In this last expression the term proportional to \(2\pi\) in (25) cancels out. We thus end up with a situation that is physically indistinguishable from one with a constant magnetic flux \(\Phi_{\cal P}=a^{2}B\) through every plaquette and a non-vanishing total flux \[\Phi_{\rm tot}=\frac{2\pi}{e}b\;, \tag{28}\] as desired. We therefore use the following definition of the \(U(1)\) gauge links \(U_{\mu}(x)\) (13), entering the Wilson operator (11) via (12): \[\begin{split} U_{0}(x)&=1\;,\\ U_{1}(x)&=\begin{cases}e^{-2\pi{\rm i}bx_{2}/L}& \text{if }x_{1}=L-a\\ 1&\text{else}\end{cases}\;,\\ U_{2}(x)&=e^{2\pi{\rm i}bx_{1}/L^{2}}\;.\end{split} \tag{29}\] We see that the compactness of the gauge links introduces a periodicity in the magnetic field and hence an effective upper bound for the flux quantum number \(b\), i.e., \[0\leq b\leq N_{\rm s}^{2}\;. \tag{30}\] In practice, one restricts \(b\) even further in order to avoid discretization artifacts [85; 55] and we shall do the same in this work, performing simulations only up to \(b\lesssim N_{\rm s}^{2}/16\). We will provide a more detailed analysis on these discretization artifacts and similar effects in a forthcoming publication. ### Computational details Our lattice setup of the GN model in a magnetic field, using the overlap Dirac operator (14), has a significant computational advantage compared to the use of overlap fermions in gauge theories. This is due to the fact that in our case the gauge links are not dynamical, depending only on the constant magnetic field. This allows for an _exact_ computation of the massless overlap operator \(D_{\rm ov}\) in (9) that we perform once, at the beginning of a simulation. We then re-use \(D_{\rm ov}\) in every update step for the now straightforward computation of the full operator (14). Needless to say, computing the overlap operator exactly, i.e., without using approximations (see, e.g., [86]), would be unthinkable in realistic QCD simulations. For this work we have performed simulations at various temperatures and magnetic fields using a standard rHMC algorithm. We change the temperature by varying \(N_{\rm t}\) at constant \(N_{\rm s}\) and we study different lattice spacings by changing the coupling \(g^{2}\) while simultaneously adjusting \(N_{\rm s}\) such that the physical lattice volume remains constant. We furthermore approach larger physical volumes by increasing \(N_{\rm s}\) at fixed \(g^{2}\). Finally, we mention that our theory does not suffer from a complex-action problem, as is shown in App. C. ### Observables As the order parameter for chiral symmetry breaking, the main observable of interest is the chiral condensate \(\langle\sigma\rangle\) in (15). Assuming an ergodic simulation algorithm, however, this quantity will average to zero. 
This is because the effective potential of the GN model is known to exhibit two equivalent minima in the spontaneously broken phase, differing only in the sign of \(\sigma\), hence leading to a cancellation between those minima. In order to avoid this cancellation, we thus use the quantity \[\langle|\bar{\sigma}|\rangle\;,\quad\text{with }\bar{\sigma}=\frac{1}{V}\sum_{x \in\Lambda}\sigma(x) \tag{31}\] as an order parameter instead [87]. Here, the sum runs over the whole lattice and \(\langle\,\cdot\,\rangle\) denotes the Monte-Carlo average. While \(\langle|\bar{\sigma}|\rangle\) approaches \(\pm\langle\sigma\rangle\) in the infinite-volume limit, one should keep in mind that on finite volumes \(\langle|\bar{\sigma}|\rangle\) will never be zero exactly, even when chiral symmetry is intact, which complicates the study of phase transitions. For this reason, \(\langle|\bar{\sigma}|\rangle\) should - strictly speaking - not be referred to as an order parameter. However, for the sake of convenience we will still do so in the following. In order to find the critical temperature \(T_{\rm c}\), corresponding to the phase transition between the two respective regions of spontaneously broken and restored chiral sym metry, we study the chiral susceptibility, defined as5 Footnote 5: The factor of \(V\) is compensated by the use of space-time-averaged quantities in the expectation values such that \(\chi\) is an intensive quantity, as it should be. \[\chi=V\left(\langle\bar{\sigma}^{2}\rangle-\langle|\bar{\sigma}|\rangle^{2} \right)\, \tag{32}\] as a function of \(T\). Approaching a second-order phase transition, \(\chi(T)\) diverges rationally. This behavior is washed out by finite-volume corrections and we expect to find a sharp but smooth peak close to the transition temperature that monotonically grows, sharpens and moves towards the latter [88]. At this point we should mention that the introduction of an additional length scale and some form of imbalance in fermionic theories might induce spatial inhomogeneities in the system [89]. While this is most prominently observed in mean-field treatments at finite density [90, 32], it could also apply for external magnetic fields. In fact, it is known that in \(3+1\) dimensions magnetic fields can favor inhomogeneous condensates at finite density when they would be disfavored at \(B=0\)[91, 90, 92, 93]. This can be understood by recalling the dimensional reduction induced by the magnetic field [36] and the fact that inhomogeneous phases are more abundant in lower dimensions. Of course, our situation is qualitatively different because in our \((2+1)\)-dimensional setup the dimensional reduction in a strong magnetic field leaves us with no spatial dimension at all (and we do not expect inhomogeneities in the temporal direction in equilibrium). Since in \(2+1\) dimensions there is no conclusive evidence for the existence of inhomogeneous structures beyond mean-field (as compared to the \((1+1)\)-dimensional case [64, 65, 66, 94]) and there even exist some negative mean-field results [95, 96, 97], such inhomogeneities are not the focus of this work. Nonetheless, since the previous studies did not take into account the influence of magnetic fields, we also investigate whether an external magnetic field can induce inhomogeneities in \(2+1\) dimensions at zero density. 
To this end, we follow [64] by introducing the spatial correlation function \[C(x_{1},x_{2})=\frac{1}{N_{\rm s}^{2}N_{\rm t}}\sum_{x^{\prime}\in\Lambda}\left\langle\sigma(x^{\prime}_{0},x^{\prime}_{1},x^{\prime}_{2})\,\sigma(x^{\prime}_{0},x^{\prime}_{1}+x_{1},x^{\prime}_{2}+x_{2})\right\rangle\;. \tag{33}\] As has been outlined in [64], this correlator should capture any inhomogeneities if they exist. ### Scale setting We set the scale via the order parameter at vanishing magnetic field and the lowest temperature considered, \(T_{0}\approx 0\): \[\sigma_{0}:=\langle|\bar{\sigma}|\rangle_{B=0,\,T=T_{0}}. \tag{34}\] We keep \(T_{0}\) constant as we approach the infinite-volume (\(L^{2}\to\infty\) at fixed \(a\)) and continuum (\(a\to 0\) at fixed \(L^{2}\)) limits, respectively. However, in order to ensure reasonably low scale-setting temperatures at an affordable computational cost, we consider two different \(T_{0}\) corresponding to the two different limits. For a detailed list of the parameters we have performed simulations for, as well as their corresponding \(\sigma_{0}\) and \(T_{0}\), we refer to Tab. 1 in App. D. In App. D we also give a brief description of how the error estimates presented in this work are obtained. ## IV Results In this section we report on our results obtained in the GN model in \(2+1\) dimensions, using overlap fermions for one reducible fermionic flavor, \(N_{\rm f}=1\). ### Consistency checks As an important starting point, we test our discretization (14) and perform consistency checks with results in the existing literature. To this end, we show in Fig. 3 the dependence of the order parameter (31) on the coupling constant \(g^{2}\) for increasing lattice volumes. The dashed blue line shows, exemplarily for the smallest lattice considered, the right-hand side of the Dyson-Schwinger equation (15) for comparison. This indicates that Eq. (15) is, indeed, fulfilled. The coupling strengths we use for the bulk of this work lie in the left half of Fig. 3.

Figure 3: Coupling-dependence of the chiral condensate \(\langle|\bar{\sigma}|\rangle\) for various cubic lattice sizes, \(N_{\rm t}=N_{\rm s}\). The dashed line shows \(\frac{-ag^{2}}{N_{\rm f}}\langle\bar{\psi}\psi\rangle_{\rm ov}\), as defined in Eq. (15) (with the absolute value taken appropriately), for \(N_{\rm s}=8\) and the red band is an extrapolation to the infinite volume. All quantities are given in lattice units.

In Fig. 3 we also show an extrapolation to the infinite volume, using the finite-size scaling law \[\langle|\bar{\sigma}|\rangle=\alpha+\gamma L^{-\kappa}\;, \tag{35}\] where \(\alpha\), \(\gamma\) and \(\kappa\) are constants, for the \(L\)-dependence of the order parameter for every value of the coupling. When \(a/g^{2}\) takes values between \(0.188\) and \(0.198\) we find the offset \(\alpha\) to be consistent with zero within errors in the infinite-volume limit, which indicates the presence of a phase transition. In this case, \(\kappa\) is related to the critical exponents \(\beta\) and \(\nu\) of the order parameter and correlation length, respectively, via \[\kappa=\frac{\beta}{\nu}\;. \tag{36}\] With this crude and naive method, we find \(\beta/\nu=0.93\pm 0.29\) as a weighted average which - while not competitive in precision - is in quantitative agreement with results obtained by dedicated methods, as collated for example in [98], \(\beta/\nu=0.637\ldots 0.843\). Recovering this non-perturbative result is a strong indication that we are simulating the correct physics.
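For illustration, a fit of the finite-size scaling law (35) can be set up in a few lines. The sketch below uses scipy's curve_fit on made-up data points (they are not the measured values behind Fig. 3), purely to show how \(\alpha\) and \(\kappa=\beta/\nu\) are extracted together with their uncertainties.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of the finite-size scaling law, Eq. (35); the data below are invented
# for demonstration purposes and are NOT the measurements shown in Fig. 3.
def scaling_law(L, alpha, gamma, kappa):
    return alpha + gamma * L**(-kappa)

L_vals    = np.array([8.0, 10.0, 12.0, 16.0, 20.0])           # linear lattice extents
sigma_bar = np.array([0.196, 0.157, 0.130, 0.098, 0.078])     # <|sigma_bar|> at fixed coupling
errors    = np.array([0.005, 0.005, 0.005, 0.005, 0.005])     # statistical errors

popt, pcov = curve_fit(scaling_law, L_vals, sigma_bar, sigma=errors,
                       absolute_sigma=True, p0=(0.0, 1.0, 1.0))
alpha, gamma, kappa = popt
alpha_err, gamma_err, kappa_err = np.sqrt(np.diag(pcov))

# An offset alpha consistent with zero signals a transition; kappa then estimates beta/nu, Eq. (36).
print(f"alpha = {alpha:.3f} +/- {alpha_err:.3f}")
print(f"kappa = {kappa:.2f} +/- {kappa_err:.2f}")
```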
### Vanishing magnetic field Having established the correctness of our method, we now present results for the order parameter at vanishing magnetic field and non-zero temperature, which allows for a comparison with results in [87; 99; 100], at least on a qualitative level. In Fig. 4 we show the \(T\)-dependence of \(\langle|\bar{\sigma}|\rangle\) for increasing physical volumes. We observe the expected spontaneous breaking of chiral symmetry at low temperatures, indicated by a non-vanishing order parameter, and a decrease of the condensate with increasing temperature, corresponding to the well-known picture of thermal fluctuations destroying long-range order and restoring chiral symmetry. Of course, as was mentioned above, \(\langle|\bar{\sigma}|\rangle\) cannot vanish exactly on finite volumes. What one can see, however, is that the phase transition becomes more pronounced as the volume increases, while the non-vanishing tail for high temperatures approaches lower and lower values. In order to locate the phase transition we show in Fig. 5 the \(T\)-dependence of the chiral susceptibility (32) for different volumes. As expected, it shows a pronounced peak at a critical temperature \(T=T_{\rm c}\), which shifts slightly to lower temperatures as the volume is increased. For large enough volumes, where the peak becomes even more pronounced, we find \(T_{\rm c}/\sigma_{0}\approx 0.145\). We furthermore computed the Binder cumulant [101], \[U_{L}:=1-\langle\bar{\sigma}^{4}\rangle/3\langle\bar{\sigma}^{2}\rangle^{2}\;, \tag{37}\] as a function of \(T\). The intersection of \(U_{L}(T)\) for different volumes provides us with another estimate for the critical temperature, \(T_{\rm c}/\sigma_{0}\approx 0.135\). We take the interval between the two values as a rough estimate of the actual critical temperature. A direct comparison to the existing literature [87; 99; 100] is, unfortunately, not straightforward, as those works either employ higher flavor numbers or use different scale settings. The observations presented so far are consistent with the GN model approaching a second-order phase transition in \(T\) in the infinite-volume limit, as one would expect based on the large-\(N_{\rm f}\) analysis of Sec. II and as has been previously observed in [87; 99; 100]. Obviously, bosonic quantum fluctuations do leave their mark on the system for flavor numbers as low as \(N_{\rm f}=1\), as can be seen by comparing the critical temperature quoted above with its large-\(N_{\rm f}\) value, \(T_{c}/\sigma_{0}=1/2\ln(2)\approx 0.72\), the latter being significantly larger. This means that the broken phase shrinks when one departs from the mean-field limit by decreasing \(N_{\rm f}\), which is not at all surprising given the tendency of quantum fluctuations to destroy any sort of long-range order. This phenomenon has also been observed in the earlier studies [87; 99; 100] and occurs in the \((1+1)\)-dimensional model as well [64]. Figure 5: Chiral susceptibility (32) as a function of temperature for different physical volumes. Figure 4: Temperature-dependence of the chiral condensate \(\langle|\bar{\sigma}|\rangle\) for different physical volumes and \(B=0\). We remark that even the largest volume considered in this work is still comparatively small. One should thus not be tempted to draw quantitative conclusions about the precise location or the order of the chiral phase transition at \(N_{\mathrm{f}}=1\). 
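The estimators used in this section are summarized in the short sketch below, applied to two made-up limiting ensembles of \(\bar{\sigma}\) rather than to real Monte-Carlo histories. It merely illustrates why the Binder cumulant approaches \(2/3\) in the broken phase and \(0\) for symmetric Gaussian fluctuations, which is what makes the crossing of \(U_{L}(T)\) for different volumes a useful estimator of \(T_{\rm c}\).

```python
import numpy as np

# Estimators of Eqs. (31), (32) and (37), applied to two toy ensembles of the space-time
# average sigma_bar (artificial stand-ins, not simulation data).
rng = np.random.default_rng(7)

def order_parameter(s):   return np.mean(np.abs(s))                              # Eq. (31)
def susceptibility(s, V): return V * (np.mean(s**2) - np.mean(np.abs(s))**2)     # Eq. (32)
def binder(s):            return 1.0 - np.mean(s**4) / (3.0 * np.mean(s**2)**2)  # Eq. (37)

V = 16 * 16 * 8
broken    = rng.choice([-0.3, 0.3], size=50000) + rng.normal(0.0, 0.01, 50000)  # two-minima ensemble
symmetric = rng.normal(0.0, 0.05, size=50000)                                   # fluctuations around zero

for name, s in [("broken-like", broken), ("symmetric", symmetric)]:
    print(f"{name:12s} <|sigma_bar|> = {order_parameter(s):.3f}  "
          f"chi = {susceptibility(s, V):.2f}  U_L = {binder(s):.3f}")
# U_L tends to 2/3 in the broken phase and to 0 for Gaussian fluctuations around zero,
# which is why the crossing of U_L(T) for different volumes locates T_c.
```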
The qualitative behavior, however, which is what we are ultimately interested in at \(B\neq 0\), is as expected, which further builds up confidence in the chosen discretization. ### Non-zero magnetic field #### iv.3.1 Temperatures close to zero We now switch on an external magnetic field and first devote our attention to the lowest available temperatures. The \(B\)-dependence of the chiral condensate for various lattice constants and volumes is shown in Figs. 6(a) and 6(b), respectively. In all data the magnetic field is found to increase the chiral condensate, which is in qualitative agreement with the large-\(N_{\mathrm{f}}\) expectation. While the latter predicts quadratic growth for our scenario (and only linear growth in the sub-critical coupling regime), our data look rather linear but might still be compatible with a weak quadratic growth. This discrepancy could also come from discretization effects. Although Fig. 6(a) suggests that one could hope for them to be small in the interacting theory, such a deviation would be the expected form of discretization artifacts in the non-interacting case as discussed in App. B. We found that such artifacts would systematically diminish the chiral condensate such that we are confident that our results are qualitatively correct even if discretization effects are larger than suggested by Fig. 6(a). Moreover, one observes a curious non-monotonic behavior of \(\langle|\bar{\sigma}|\rangle\) with \(B\), as the order parameter seems to assume a minimum at the lowest possible non-vanishing magnetic field, corresponding to \(b=1\) in Eq. (19), for all lattice spacings. For flux parameters larger than \(1\) the condensate then grows monotonically with \(B\). This non-monotonicity, however, is a finite-size effect, as becomes clear by looking at the infinite-volume extrapolation shown in Fig. 6(b), where \(b=1\) ceases to be a minimum of \(\langle|\bar{\sigma}|\rangle\) for the largest available volume (green curve). We note that the physical volume considered in Fig. 6(a), which we keep approximately constant as we decrease the lattice spacing, corresponds to the smallest volume in Fig. 6(b). The largest magnetic fields we plot in Fig. 6(b) are determined by our requirement that \(b\leq N_{\mathrm{s}}^{2}/16\). For larger magnetic fields we find unphysical saturation effects, the onset of which is already visible in the \(N_{\mathrm{s}}=8\) data of Fig. 6(b) (blue curves). We plan to present a more detailed discussion of these discretization artifacts and the aforementioned finite-size effects, as well as a thorough spectral analysis of the overlap operator for the GN model in non-zero magnetic fields in a forthcoming publication. We arrive at the conclusion that, on sufficiently large volumes and for temperatures close to zero, the magnetic field causes the order parameter to increase, thus enhancing the breaking of chiral symmetry, in accordance with the mean-field prediction of magnetic catalysis outlined in Sec. II. This is hardly surprising, given the effective one-dimensional dynamics induced by the magnetic field. In fact, as has been argued in [36], magnetic catalysis at zero temperature is a universal, i.e., model-independent feature in \(2+1\) dimensions, at least in the absence of gauge degrees of freedom [102]. #### iv.3.2 Higher temperatures Next, we study the combined influence of finite temperature and magnetic field on the order parameter. We show phase diagrams in the \((B,T)\)-plane for various lattice sizes in Fig. 7.
Evidently, magnetic catalysis takes place not only for the lowest temperatures, but for all \(T\) below \(T_{\mathrm{c}}\). We indicate the values of \(T_{\mathrm{c}}\) at \(B=0\) and \(N_{\mathrm{s}}=16\), determined above via \(\chi\) and \(U_{L}\), respectively, by the gray bands shown.

Figure 6: Magnetic-field-dependence of the chiral condensate \(\langle|\bar{\sigma}|\rangle\) for low temperatures.

For higher temperatures the magnetic field ceases to have a noticeable effect on the order parameter. This is not unexpected, as in this region we only measure the (modulus of the) fluctuations of \(\sigma\) around zero due to our definition of \(\langle|\bar{\sigma}|\rangle\) in Eq. (31). The magnetic fields we are forced to restrict to (in order to avoid discretization effects) in our lattice simulations at fixed lattice spacing are quite small, \(eB/\sigma_{0}^{2}\lesssim 0.35\). Hence, the results obtained in the large-\(N_{\rm f}\) approximation, shown in Fig. 1, suggest that one should not expect the broken region to grow in size all that much. Indeed, this expectation is confirmed by Fig. 7. To investigate larger values of \(eB/\sigma_{0}^{2}\), we consider the \((B,T)\) phase diagram for the smallest available lattice spacing in Fig. 8. One observes that for strong enough magnetic fields the region of spontaneously broken chiral symmetry indeed starts to grow, as expected from Fig. 1. We roughly indicate this by the gray band, which shows the critical temperature \(T_{\rm c}\), determined by the susceptibility (32), as a function of \(B\). When \(T_{\rm c}\) could not be determined unambiguously we took the average of the two temperatures corresponding to the competing peaks instead, and we do not show error bars for the resulting - very crude - estimate. Recall that finite-volume effects distort the behavior for weak magnetic fields. It would be interesting for future studies to consider even stronger magnetic fields in order to compare Figs. 1 and 8 on a more quantitative level. In conjunction with simulations at different flavor numbers, one could aim at finding a relation between the phase boundaries as \(N_{\rm f}\) is varied. In the simplest scenario, the critical temperature \(T_{\rm c}(B)\) could conceivably be related to its large-\(N_{\rm f}\) value by a mere \(N_{\rm f}\)-dependent scaling factor. ### Search for inhomogeneous phases Finally, we investigate the existence of inhomogeneous phases by studying the spatial correlator (33). Such a phase would likely occur at low temperatures and relatively strong magnetic fields, the former since thermal fluctuations will wash out any inhomogeneities and the latter since we know that the order parameter is homogeneous for vanishing magnetic field [95; 96; 97]. Fig. 9 shows the correlator \(C\) from Eq. (33) for a strong magnetic field along the two spatial coordinate axes and their diagonal. Each of them decays monotonically to a constant close to the contribution from the disconnected terms. In fact, the data nicely showcase the rotational invariance, and no further structure is seen in other directions or for other parameters. We conclude that the assumption of spatial homogeneity is well justified in the accessible parameter range. Whether stronger magnetic fields could induce a spatially varying order parameter, especially in combination with a finite chemical potential, is a question for further studies. Figure 8: \((B,T)\) phase diagram for \(N_{\rm s}=16\) and \(a\sigma_{0}\approx 0.460\).
The gray band shows our crude estimate for the \(B\)-dependence of the critical temperature of the phase transition, see the main text. The scale on the color bar is different from Fig. 7. Figure 7: \((B,T)\) phase diagrams for increasing volumes at constant lattice spacing. Left: \(N_{\rm s}=8\), \(a\sigma_{0}\approx 1.063\). Center: \(N_{\rm s}=12\), \(a\sigma_{0}\approx 1.004\). Right: \(N_{\rm s}=16\), \(a\sigma_{0}\approx 0.987\). The gray band indicates our estimate for the critical temperature at \(B=0\) and on the largest lattice, see the main text. ## V Discussion We have investigated the \((2+1)\)-dimensional Gross-Neveu model (2) exposed to an external magnetic field in the chiral limit using one reducible flavor of fermions and Neuberger's formulation (9) of the overlap operator. The auxiliary scalar field \(\sigma\) couples in a way so as to preserve the continuum Ward identity (4) relating the expectation value of \(\sigma\) to the chiral condensate. Our results suggest that the magnetic catalysis phenomenon, i.e., an enhancement of the order parameter for chiral symmetry breaking with the magnetic field, persists for finite flavor numbers, in accordance with the phenomenological picture of the magnetic field reducing the number of spatial dimensions, thus promoting infrared dynamics. On small volumes, however, this effect is non-monotonic for weak magnetic fields. We also remark that our lattice formulation seems to suffer from strong discretization effects in a free-theory setup, while the interacting case appears less problematic. We have furthermore investigated the fate of magnetic catalysis at finite temperature and found that it persists for all temperatures below the phase transition. Our findings are thus in qualitative agreement with mean-field [33; 34; 35] as well as beyond-mean-field [39; 40] calculations. The phase of spontaneously broken chiral symmetry grows slightly for the strongest magnetic fields considered but shrinks overall in comparison to the large-\(N_{\rm f}\) limit. It is important to stress that our results are very different from the well-known _inverse magnetic catalysis_ effect, i.e., a decrease of the order parameter with \(B\), that takes place in QCD at temperatures close to the chiral crossover [56; 57]. In QCD, the critical temperature furthermore decreases with the magnetic field [103; 104], which we also do not observe. We now comment on this issue. In QCD the aforementioned effects are likely caused by a delicate interplay between quark and gluonic degrees of freedom [105], which our model, lacking the latter, cannot reproduce. One should thus not be tempted to interpret our results as new physics. Rather, we argue that the GN model is simply, and unsurprisingly, insufficient for a proper description of QCD once gluonic effects become important. Nevertheless, as was mentioned in the Introduction, we believe that our work serves as a starting point for the ultimate goal of studying QCD in background magnetic fields from the point of view of effective models beyond the mean-field limit and on the lattice. In the following we discuss ways on how to systematically improve the GN model in order to approach QCD. To this end, one should firstly consider models in \(3+1\) dimensions that have the same continuous chiral symmetry as QCD. One may then take gluonic interactions into account, for example, by coupling the fermions to the Polyakov loop [106]. 
Most importantly, the crucial back-reaction of magnetized quarks onto the gluonic distribution can be taken into account by introducing a suitable effective \(B\)-dependent coupling. This has been shown to reproduce the desired features of QCD in [107; 108; 109; 110]. One could furthermore consider endowing the scalar fields in the NJL model with kinetic terms, thus enabling their interpretation as dynamical mesons and potentially add quartic mesonic self-interactions as well. The ensuing linear sigma model coupled to quarks (LSMq) has the added advantage that it is renormalizable in \(3+1\) dimensions, which the GN and NJL models are not. If one then incorporates the aforementioned magnetic-field-dependent couplings, while properly taking into account plasma screening effects, one also observes inverse magnetic catalysis as in QCD [111; 112; 113]. A proper understanding of such effective theories for QCD beyond the mean-field limit, e.g., from _ab initio_ lattice simulations at finite numbers of quark flavors and colors is therefore certainly desirable. For reviews on the topic of reproducing features of QCD in magnetic fields using model theories and a more complete list of references, see [114; 115; 116]. We would like to briefly comment on possible implications for condensed-matter systems that are described by Four-Fermi theories. While in this work we were only concerned with the strong-coupling regime, in which chiral symmetry is broken at zero temperature and magnetic field, we believe that the qualitative predictions of mean-field studies should also remain valid for weak couplings. This would then imply that strong enough magnetic fields are indeed capable of generating a mass gap, providing further evidence [117; 116] that magnetic catalysis could be responsible for the kink-like behavior of the thermal conductivity of superconducting cuprates exposed to a magnetic field observed in [118]. Finally, our results suggest that a small magnetic field does not seem to induce inhomogeneous phases in the GN model in \(2+1\) dimensions at zero density. A detailed study of the finite-density case is currently underway. Our simulation results as well as the tools required to reproduce the figures shown in this work will be found Figure 9: Spatial correlator (33) for \(N_{\rm s}=N_{\rm t}=16\), \(eB/\sigma_{0}^{2}\approx 1.39\) and \(a\sigma_{0}\approx 0.460\) along the two coordinate axes and their diagonal. Due to the lattice periodicity we only show the first half of the respective abscissas. online [44; 45]. ###### Acknowledgements. We thank Bjorn Wellegehausen for providing the code base used in the present work and for useful discussions regarding the implementation of the overlap operator in our setup. M.M. thanks Georg Bergner, Gergely Endrodi, Tamas Kovacs and Ivan Soler for enlightening discussions. J.J.L. thanks Ed Bennett for helpful discussions about the reproducibility and openess of this publication. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. 406116891 within the Research Training Group RTG 2522/1. The simulations were performed on resources of the Friedrich Schiller University in Jena supported in part by the DFG Grants INST 275/334-1 FUGG and INST 275/363-1 FUGG. The work of J. J. L. 
was partly supported by the UKRI Science and Technology Facilities Council (STFC) Research Software Engineering Fellowship EP/V052489/1 and by the Supercomputing Wales project, which is part-funded by the European Regional Development Fund (ERDF) via Welsh Government. This work would never have been possible without the great python ecosystem for scientific computing [119]. For our analyses, we explicitly imported the packages [120; 121; 122; 123; 124; 125] but we are also grateful for creation and maintenance of all their dependencies. ## Appendix A Data availability statement Full data underlying this work will be available at Ref. [44]. Fully automated analysis workflows will be available at Ref. [45]. Raw data and the simulation code for generating the configurations are available upon request. ## Appendix A Derivation of the effective potential in the large -\(N_{\rm f}\) limit In this appendix we outline the calculation of the effective potential in Eq. (6), see also [126]. The main difficulty is, of course, the fermionic determinant, \(\det(D)\). For the derivation we shall, in fact, consider the more general Dirac operator \[D=\not{\partial}+{\rm i}e\not{A}+\sigma+\mu\gamma_{0}\;, \tag{10}\] where we have also included a chemical potential \(\mu\) and \(A_{\mu}\) is given in Eq. (20). In the following we assume \(B>0\) without loss of generality. For the computation of \(\ln\det D\) we use the zeta-function regularization method [127]: \[\ln\det D=\frac{1}{2}\ln\det D^{2}=-\frac{1}{2}\frac{\partial}{\partial s} \zeta_{D^{2}}(s)\bigg{|}_{s=0}\;, \tag{11}\] with the zeta-function of \(D^{2}\) defined by \[\zeta_{D^{2}}(s)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\!\!\mathrm{d}t\,t^{s-1} \operatorname{tr}e^{-tD^{2}}\;, \tag{12}\] where \(\Gamma(s)\) denotes the usual gamma function. The spectrum of \(D^{2}\) is known and its eigenvalues read \[\lambda=\sigma^{2}+(\omega_{n}+{\rm i}\mu)^{2}+(2l+1+\alpha)eB\;, \tag{13}\] where \(\omega_{n}=\frac{\pi}{\beta}(2n+1)\) are the Matsubara frequencies (\(n\in\mathbb{Z}\)), \(l\in\mathbb{N}_{0}\) denotes the Landau level index and \(\alpha\pm 1\) denotes the Zeeman splitting of energy levels of fermions with opposite spin due to the Pauli term in \(D^{2}\). The eigenvalues come with a degeneracy of \(2\cdot\frac{VeB}{2\pi\beta}\), where the first factor of 2 comes from the use of a reducible representation of gamma matrices while the second factor is the standard Landau level degeneracy. We are thus left with \[\frac{\zeta_{D^{2}}(s)}{V}=\frac{1}{\Gamma(s)}\frac{eB}{\pi\beta }\bigg{[}\int_{0}^{\infty}\!\!\mathrm{d}t\,t^{s-1}e^{-t\sigma^{2}}\sum_{n=- \infty}^{\infty}e^{-t(\omega_{n}+{\rm i}\mu)^{2}}+\] \[2\int_{0}^{\infty}\!\!\mathrm{d}t\,t^{s-1}e^{-t\sigma^{2}}\sum_{n =-\infty}^{\infty}e^{-t(\omega_{n}+{\rm i}\mu)^{2}}\sum_{l=1}^{\infty}e^{-2 eBlt}\bigg{]}\;, \tag{14}\] where we have already performed the sum over \(\alpha\) and split up the summation over Landau levels into magnetic-field-independent terms (\(l=0\)) and corrections due to \(B\) (\(l>0\)). 
By performing a Poisson resummation in \(n\) and taking the integrals over \(t\), a straightforward calculation leads to an expression for the zeta function, whose derivative with respect to \(s\) at \(s=0\) simplifies to \[\frac{1}{V}\frac{\partial}{\partial s}\zeta_{D^{2}}(s)\bigg{|}_{ s=0}=\frac{eB}{\pi}|\sigma|-\frac{(2eB)^{3/2}}{\pi}\zeta_{H}\left(-\frac{1}{2}, \frac{\sigma^{2}}{2eB}\right)\] \[-\frac{eB}{\pi\beta}\sum_{l=0}^{\infty}d_{l}\bigg{[}\ln\left(1+e^ {-\beta\left(\sqrt{\sigma^{2}+2eBl}+\mu\right)}\right)+(\mu\leftrightarrow-\mu) \bigg{]}\;, \tag{15}\] where \(d_{l}=2-\delta_{l0}\). After setting \(\mu=0\) and inserting this expression into (11) and (6), we obtain \[\frac{V_{\rm eff}}{V}= \frac{\sigma^{2}}{2g_{R}^{2}}-\frac{\sqrt{2}}{\pi}(eB)^{3/2} \zeta_{H}\left(-\frac{1}{2},\frac{\sigma^{2}}{2eB}\right)+\frac{|\sigma|eB}{2\pi}\] \[-\frac{eB}{\pi\beta}\sum_{l=0}^{\infty}d_{l}\ln\left(1+\exp\left(- \beta\sqrt{\sigma^{2}+2eBl}\right)\right)\;, \tag{16}\] where we have replaced \(g^{2}\) by the renormalized coupling \(g_{R}^{2}\) as dictated by the zeta function formalism. Finally, we introduce the minimum of the effective potential at vanishing temperature, density and magnetic field, \(\sigma_{0}=-\pi/g_{R}^{2}\), to recover (8). ## Appendix B Comparison of fermion discretizations We discuss and compare three different fermion discretizations one could employ when attempting to study the GN model exposed to magnetic fields: naive, SLAC and overlap fermions. For the comparison we consider a theory of massive non-interacting fermions in an external magnetic field, characterized by the Lagrangian \[\mathcal{L}=\bar{\psi}\left(\not{\partial}+\mathrm{i}e\not{A}+m\right)\psi=: \bar{\psi}D\psi\;, \tag{10}\] and compute the chiral condensate \[\langle\bar{\psi}\psi\rangle:=-\frac{1}{V}\frac{\partial}{\partial m}\ln Z\;, \tag{11}\] where the partition function \(Z\) is given by the fermion determinant, \[Z=\det D\;. \tag{12}\] We have already computed \(\ln\det D\) in the continuum theory in App. A, allowing us to directly use the result (10) by setting \(\sigma=m>0\). Thus, with Eq. (11), the chiral condensate in the continuum at \(\mu=0\) is given by the closed-form expression \[\begin{split}&\langle\bar{\psi}\psi\rangle(B)=\\ &\frac{eB}{2\pi}-\frac{m}{\pi}\sqrt{\frac{eB}{2}}\zeta_{H}\left( \frac{1}{2},\frac{m^{2}}{2eB}\right)+\frac{eB}{\pi}\sum_{l=0}^{\infty}\frac{d (l)m}{\varepsilon_{l}}\frac{1}{1+e^{\beta\varepsilon_{l}}}\;,\end{split} \tag{13}\] where \[\varepsilon_{l}=\sqrt{m^{2}+2eBl}\;. \tag{14}\] We remark again that the volume-dependence only enters via the discretization of \(eB\), Eq. (19). This means that if one were to naively take the limit \(eB\to 0\) in a continuous manner one would simultaneously approach the infinite-volume limit. To obtain the chiral condensate for vanishing magnetic field on a finite volume, one must repeat the calculation leading up to Eq. (13), replacing the last term in Eq. (12) by \(\mathbf{p}^{2}=p_{1}^{2}+p_{2}^{2}\), with \(p_{i}=\frac{2\pi}{L}n_{i}\) and \(n_{i}\in\mathbb{Z}\), and taking the sum over momenta in the place of Landau levels. Taking the four-fold degeneracy of the eigenvalues into account and repeating the steps outlined in App. 
A leads to the expression \[\begin{split}&\langle\bar{\psi}\psi\rangle(B=0)=\\ &\frac{m^{2}}{\pi}-\frac{2m}{L^{2}}\sum_{\mathbf{p}\neq \mathbf{0}}\frac{1}{|\mathbf{p}|}e^{-\frac{L^{2}m}{2\pi}|\mathbf{p}|}+\frac{4 }{L^{2}}\sum_{\mathbf{p}}\frac{m}{\varepsilon_{\mathbf{p}}}\frac{1}{1+e^{ \beta\varepsilon_{\mathbf{p}}}}\;,\end{split} \tag{15}\] with \[\varepsilon_{\mathbf{p}}=\sqrt{m^{2}+\mathbf{p}^{2}}\;, \tag{16}\] for the chiral condensate on a finite volume and for \(B=0\). Let us now turn to the lattice computations. The basic ingredients for implementing external magnetic fields on the lattice are outlined in Sec. III.2. For naive and overlap fermions we use the formalism developed there, mainly involving the \(U(1)\) gauge links in Eq. (29), which enter the naive Dirac operator, \[D_{\mathrm{naive}}=\frac{1}{2}\gamma_{\mu}\left(\nabla_{\mu}^{*}+\nabla_{\mu} \right)+m\;, \tag{17}\] directly (\(\nabla_{\mu}\) and \(\nabla_{\mu}^{*}\) are defined in Eq. (12)) and the overlap operator via its Wilson kernel (10). When using the SLAC derivative, however, one cannot use compact gauge variables in the form of group-valued lattice links connecting neighboring lattice sites because the derivative itself is non-local and thus involves all lattice points in a given direction. We therefore briefly discuss an alternative solution: In analogy to the continuum, we define the SLAC Dirac operator as \[D_{\mathrm{SLAC}}:=\not{\partial}^{\mathrm{SLAC}}+\mathrm{i}e\not{A}+m\;, \tag{18}\] where the SLAC derivative in position space is given by the Toeplitz matrix [128] \[\partial_{\mu}^{\mathrm{SLAC}}(x,y)=(-1)^{(x_{\mu}-y_{\mu})/a}\frac{\pi/L_{\mu} }{\sin\left(\pi(x_{\mu}-y_{\mu})/L_{\mu}\right)} \tag{19}\] if \(x_{\mu}\neq y_{\mu}\) and \(x_{\nu}=y_{\nu}\) for all \(\nu\neq\mu\), and \(\partial_{\mu}^{\mathrm{SLAC}}=0\) otherwise. Obviously, the discretization (18) is not gauge invariant. One could, however, attempt to treat the \(e\not{A}\) term as a small perturbation if the magnetic field is not too large, such that (18) still describes the correct physics approximately.6 We do so in the following, reducing its numerical value as much as possible by employing the symmetric gauge Footnote 6: One should note that this assumption is hard to justify given that the gauge field is a linear function of \(x\) that (at individual sites) can have a magnitude proportional to \(L\). \[A_{0}(x)=0\;,\quad A_{1}(x)=-\frac{B}{2}x_{2}\;,\quad A_{2}(x)=\frac{B}{2}x_{1}\;, \tag{20}\] with \(x_{1,2}\) in the range \(\left[\frac{-L}{2},\frac{L}{2}\right)\). Problems will inevitably arise once the kinetic momentum \(p_{\mu}+eA_{\mu}\) crosses the boundary of the first Brillouin zone, since there the SLAC derivative is discontinuous. This is the reason SLAC fermions are not used in gauge theories and in our case such a crossing will occur for strong magnetic fields. We note that the minimal coupling prescription used in (107) makes the lattice boundary correction terms introduced in Eq. (24) obsolete, as they cannot be compensated for in the absence of compact periodic gauge variables. We have verified that their inclusion indeed gives worse results. Notice, however, that we are dealing with a different physical situation with SLAC fermions as now the total magnetic flux through the lattice vanishes, see Sec. III.2. 
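As a quick sanity check of the SLAC derivative introduced above (independent of the gauge-field complications just discussed), the following numpy snippet, written for this text with illustrative parameters, builds the free one-dimensional SLAC matrix for an odd number of sites and verifies that its spectrum consists of exactly the lattice momenta \(ip\) with \(p=2\pi k/L\) and \(|k|\leq(N-1)/2\).

```python
import numpy as np

# Spectrum check for the free SLAC derivative defined above: for odd N its eigenvalues are
# exactly i*2*pi*k/L with |k| <= (N-1)/2, i.e. all momenta up to the Brillouin-zone edge.
a, N = 1.0, 15
L = a * N
n = np.arange(N)
diff = n[:, None] - n[None, :]                     # (x - y)/a as an integer matrix
with np.errstate(divide="ignore", invalid="ignore"):
    D = np.where(diff == 0, 0.0,
                 (-1.0) ** np.abs(diff) * (np.pi / L) / np.sin(np.pi * diff * a / L))

eigs = np.linalg.eigvals(D)
exact = 2 * np.pi / L * np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
assert np.allclose(np.sort(eigs.imag), np.sort(exact), atol=1e-10)
assert np.allclose(eigs.real, 0.0, atol=1e-10)     # anti-Hermitian: purely imaginary spectrum
print("SLAC derivative reproduces all", N, "lattice momenta exactly.")
```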
Let us now compare the continuum result (102) with the lattice chiral condensate, defined by \[\langle\bar{\psi}\psi\rangle_{\rm latt}=-\frac{1}{V}\operatorname{tr}\left[D _{\rm latt}^{-1}\right]\;, \tag{110}\] where \(D_{\rm latt}\) stands for \[D_{\rm latt}=\begin{cases}D_{\rm naive}&\text{for naive fermions}\;,\\ D_{\rm SLAC}&\text{for SLAC fermions}\;,\\ D\left(\mathds{1}-\frac{\alpha}{2}D_{\rm ov}\right)^{-1}&\text{for overlap fermions}\;,\end{cases} \tag{111}\] the operators \(D_{\rm ov}\) and \(D\) being defined in Eqs. (9) and (14), respectively. For naive fermions (110) has to be divided by the number of doublers, i.e., \(8\) in \(2+1\) dimensions, in order to compare with continuum results. We show in Fig. 10 the change in the chiral condensate induced by the magnetic field, \[\Delta\langle\bar{\psi}\psi\rangle=\langle\bar{\psi}\psi\rangle(B)-\langle \bar{\psi}\psi\rangle(0)\;, \tag{112}\] for the continuum result (where \(\Delta\langle\bar{\psi}\psi\rangle\) is obtained by subtracting (107) from (102)) and the three discretizations. One observes that the agreement with the continuum of doublers, which is the main reason we refrained from using the naive discretization in our study. The agreement for overlap fermions is very good for weak magnetic fields, in particular in the regime of \(eB/m^{2}\) we investigate in our simulations. For stronger magnetic fields the qualitative behavior is still the same as for the continuum result, but the quantitative deviation (which appears to be quadratic in \(B\)) is substantial. We accredit this deviation to discretization artifacts, which for massive overlap fermions are worse (\(\mathcal{O}(a)\)) than for naive fermions (\(\mathcal{O}(a^{2})\)). One should therefore be cautious when interpreting our simulation results - while we do believe in their qualitative correctness, the absolute numbers could be systematically underestimated at large magnetic fields. In future studies one could employ an improvement program, such as the one suggested in [129], to reduce discretization effects. For SLAC fermions, perhaps unsurprisingly, the agreement with the continuum result is rather poor, as the SLAC condensate does not even reproduce the qualitative features of the continuum one, for instance, the dip for the lowest allowed magnetic field. We mention a number of (ultimately futile) attempts we experimented with in order to improve the SLAC derivative in a magnetic field given in Eq. (107). First, we tried out different gauges instead of (110), the latter leading to the best agreement, however. Next, we considered a physical situation where the magnetic field is constant and positive in one half of the lattice and constant and negative (with the same absolute value) in the other half. This avoids the need of introducing the lattice boundary terms in Eq. (24) entirely, which for the SLAC formulation were quite awkward in the first place. We then only considered the chiral condensate on a single lattice point \(x\), lying in the center of the region with positive magnetic field. This was motivated by the intuition that at such a point the influence from the region with negative magnetic field should be negligible for large enough lattices. However, the agreement with continuum results we found was still poor. We conclude that more work is necessary if one aims at making SLAC fermions work for a background magnetic field. 
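To make the lattice side of this comparison concrete, the sketch below (written for this text and deliberately tiny; the parameters are illustrative) assembles the naive operator with the background-field links of Eq. (29) and evaluates the lattice condensate \(-\frac{1}{V}\operatorname{tr}D_{\rm latt}^{-1}\), divided by the number of doublers as described above. Two assumptions are made explicit in the code: the lattice spacing is included in the exponent of \(U_{2}\) so that it is dimensionless, and only one irreducible (Pauli) block of the reducible representation is built, with the Hermitian-conjugate block accounted for by taking twice the real part of the trace, in the spirit of App. C.

```python
import numpy as np

# Toy evaluation of the naive-fermion condensate on a tiny lattice (illustrative parameters).
a, Nt, Ns, m, b = 1.0, 4, 4, 0.1, 1
L = a * Ns
sigma = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
sites = [(t, x1, x2) for t in range(Nt) for x1 in range(Ns) for x2 in range(Ns)]
idx = {s: i for i, s in enumerate(sites)}
Vol = len(sites)

def link(mu, t, x1, x2):
    """U(1) links in the spirit of Eq. (29); U_0 = 1, twist in U_1 at x1 = L-a, constant field in U_2."""
    if mu == 1 and x1 == Ns - 1:
        return np.exp(-2j * np.pi * b * (a * x2) / L)
    if mu == 2:
        return np.exp(2j * np.pi * b * a * (a * x1) / L**2)
    return 1.0

hop = [np.zeros((Vol, Vol), dtype=complex) for _ in range(3)]   # forward covariant hoppings T_mu
for (t, x1, x2), i in idx.items():
    neighbours = [((t + 1) % Nt, x1, x2), (t, (x1 + 1) % Ns, x2), (t, x1, (x2 + 1) % Ns)]
    for mu, nbr in enumerate(neighbours):
        sign = -1.0 if (mu == 0 and t == Nt - 1) else 1.0       # antiperiodic in time
        hop[mu][i, idx[nbr]] = sign * link(mu, t, x1, x2)

D1 = m * np.eye(2 * Vol, dtype=complex)                         # one irreducible block of D_naive
for mu in range(3):
    D1 += 0.5 / a * np.kron(hop[mu] - hop[mu].conj().T, sigma[mu])

# Reducible operator trace = 2 * Re tr(D1^{-1}); divide by V and by the 8 naive doublers.
cond = -2.0 * np.trace(np.linalg.inv(D1)).real / (a**3 * Vol) / 8
print(f"naive <psibar psi> on a {Nt}x{Ns}x{Ns} lattice at am = {m}, b = {b}: {cond:.5f}")
```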
## Appendix C Proof there is no sign problem We show that there is no complex-action problem in the overlap formalism (14) by showing that \(\det D\) is real and non-negative. To this end, we work with the following representation of gamma matrices: \[\gamma_{\mu}=\begin{pmatrix}\sigma_{\mu}&0\\ 0&-\sigma_{\mu}\end{pmatrix}\;, \tag{113}\] where the \(\sigma_{\mu}\) can be chosen as the usual Pauli matrices. This decomposition makes clear how the reducible representation we use in this work is made up from the two inequivalent irreducible representations in three space-time dimensions, \(\sigma_{\mu}\) and \(-\sigma_{\mu}\). Figure 10: Comparison of \(\Delta\langle\bar{\psi}\psi\rangle\) in Eq. (112) between continuum and lattice results for \(N_{\rm s}=16\) (for the SLAC result we use \(N_{\rm s}=15\)), \(N_{\rm t}=16\) and \(am=0.1024\). Notice that, since we work in a finite volume, the magnetic field is discrete even in the continuum. It is then straightforward to convince oneself that the overlap operator (14) also assumes a block form: \[D=\begin{pmatrix}D_{1}&0\\ 0&D_{2}\end{pmatrix}\;, \tag{16}\] where \((i=1,2)\) \[D_{i} =D_{\text{ov},i}+\sigma\left(1-\frac{a}{2}D_{\text{ov},i}\right)\;, \tag{17}\] \[D_{\text{ov},i} =\frac{1}{a}\left(\mathds{1}+A_{i}\big{/}\sqrt{A_{i}^{\dagger}A_ {i}}\right)\;,\] (18) \[A_{i} =D_{W,i}-\mathds{1}\;, \tag{19}\] and the irreducible components of the Wilson operator read (see Eq. (12) for the definitions of \(\nabla_{\mu}\) and \(\nabla_{\mu}^{*}\)) \[D_{W,1} =\frac{1}{2}\left[\sigma_{\mu}(\nabla_{\mu}^{*}+\nabla_{\mu})-a \nabla_{\mu}^{*}\nabla_{\mu}\right]\;, \tag{20}\] \[D_{W,2} =\frac{1}{2}\left[-\sigma_{\mu}(\nabla_{\mu}^{*}+\nabla_{\mu})-a \nabla_{\mu}^{*}\nabla_{\mu}\right]\;.\] We emphasize that the diagonal elements \(D_{1,2}\) in (16) are precisely the expressions one would obtain for the overlap operator when working in one of the two irreducible representations. Hence, \(D\) decomposes in complete analogy to the continuum Dirac operator. Now, obviously, \[\det D=\det D_{1}\det D_{2}\;. \tag{21}\] Furthermore, we note that the symmetric difference operator \(\nabla_{\mu}^{*}+\nabla_{\mu}\) in (20) is anti-Hermitian, while the discretized Laplacian \(\nabla_{\mu}^{*}\nabla_{\mu}\) is Hermitian, such that \(D_{W,1}\) and \(D_{W,2}\) are Hermitian conjugates of one another. By using the spectral representation of the inverse square root in the definition of \(D_{\text{ov},2}\), one can then show that the same holds for \(D_{1}\) and \(D_{2}\), such that, using Eq. (21), \[\det D=\det D_{1}\det D_{1}^{\dagger}=|\det D_{1}|^{2}\geq 0\;, \tag{22}\] i.e., there is no complex-action problem since the determinant is real and non-negative. We emphasize that the crucial ingredient for this proof was the use of a reducible representation of gamma matrices. ## Appendix D Parameters In our simulations we generated \(\mathcal{O}(10^{3})-\mathcal{O}(10^{4})\) configurations per parameter set. We performed binned jackknife resamplings for our error analyses, making sure that each bin contained at least \(\tau_{\text{int}}\) configurations (but most commonly multiples thereof), where \(\tau_{\text{int}}\) refers to the integrated auto-correlation time corresponding to the order parameter \(\langle|\bar{\sigma}|\rangle\). We found \(\tau_{\text{int}}\lesssim 50\) in all cases. We list the relevant parameters for which we have obtained simulation data as well as the respective scales \(\sigma_{0}\) and scale-setting temperatures \(T_{0}\) in Tab. 1. 
Notice that the scale-setting temperatures are different between the infinite-volume and continuum limits, see Sec. III.5. Since the errors in \(\sigma_{0}\) are negligible, we do not quote them here and refrain from taking their influence on error propagation into account in the entirety of this work.
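For completeness, a minimal implementation of the binned jackknife described in App. D is sketched below; the Monte-Carlo history it analyzes is an artificial, autocorrelated stand-in rather than actual simulation data, and the bin size plays the role of a multiple of \(\tau_{\text{int}}\).

```python
import numpy as np

# Binned jackknife error estimate for <|sigma_bar|>, as described in App. D.
def binned_jackknife(history, bin_size):
    nbins = len(history) // bin_size
    bins = np.abs(history[: nbins * bin_size]).reshape(nbins, bin_size).mean(axis=1)
    full = bins.mean()
    leave_one_out = (bins.sum() - bins) / (nbins - 1)          # jackknife replicas
    err = np.sqrt((nbins - 1) / nbins * np.sum((leave_one_out - full) ** 2))
    return full, err

rng = np.random.default_rng(0)
# Toy, autocorrelated stand-in for a Monte-Carlo history of sigma_bar (not real data):
noise = rng.normal(size=5000)
history = 0.15 + 0.02 * np.convolve(noise, np.exp(-np.arange(60) / 20.0), mode="same")

mean, err = binned_jackknife(history, bin_size=50)
print(f"<|sigma_bar|> = {mean:.4f} +/- {err:.4f}")
```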
2310.18987
Path Analysis for Effective Fault Localization in Deep Neural Networks
Despite deep learning's transformative impact on various domains, the reliability of Deep Neural Networks (DNNs) is still a pressing concern due to their complexity and data dependency. Traditional software fault localization techniques, such as Spectrum-based Fault Localization (SBFL), have been adapted to DNNs with limited success. Existing methods like DeepFault utilize SBFL measures but fail to account for fault propagation across neural pathways, leading to suboptimal fault detection. Addressing this gap, we propose the NP-SBFL method, leveraging Layer-wise Relevance Propagation (LRP) to identify and verify critical neural pathways. Our innovative multi-stage gradient ascent (MGA) technique, an extension of gradient ascent (GA), activates neurons sequentially, enhancing fault detection efficacy. We evaluated the effectiveness of our method, i.e. NP-SBFL-MGA, on two commonly used datasets, MNIST and CIFAR-10, two baselines DeepFault and NP- SBFL-GA, and three suspicious neuron measures, Tarantula, Ochiai, and Barinel. The empirical results showed that NP-SBFL-MGA is statistically more effective than the baselines at identifying suspicious paths and synthesizing adversarial inputs. Particularly, Tarantula on NP-SBFL-MGA had the highest fault detection rate at 96.75%, surpassing DeepFault on Ochiai (89.90%) and NP-SBFL-GA on Ochiai (60.61%). Our approach also yielded results comparable to those of the baselines in synthesizing naturalness inputs, and we found a positive correlation between the coverage of critical paths and the number of failed tests in DNN fault localization.
Soroush Hashemifar, Saeed Parsa, Akram Kalaee
2023-10-29T12:01:15Z
http://arxiv.org/abs/2310.18987v3
**Path Analysis for Effective Fault Localization in Deep Neural Networks** ###### Abstract Deep learning has revolutionized various real-world applications, but the quality of Deep Neural Networks (DNNs) remains a concern. DNNs are complex and have millions of parameters, making it difficult to determine their contributions to fulfilling a task. Moreover, the behavior of a DNN is highly influenced by the data used during training, making it challenging to collect enough data to exercise all potential DNN behavior under all possible scenarios. This paper proposes NP-SBFL method to locate faulty neural pathways (NP) using spectrum-based fault localization (SBFL). Our method identifies critical neurons using the layer-wise relevance propagation (LRP) technique and determines which critical neurons are faulty. Moreover, we propose a multi-stage gradient ascent (MGA), an extension of gradient ascent (GA), to effectively activate a sequence of neurons one at a time while maintaining the activation of previous neurons, so we are able to test the reported faulty pathways. We evaluated the effectiveness of our method, i.e. NP-SBFL-MGA, on two commonly used datasets, MNIST and CIFAR-10, two baselines DeepFault and NP-SBFL-GA, and three suspicious neuron measures, Tarantula, Ochiai, and Barinel. The empirical results showed that NP-SBFL-MGA is statistically more effective than the baselines at identifying suspicious paths and synthesizing adversarial inputs. Particularly, Tarantula on NP-SBFL-MGA had the highest fault detection rate at 96.75%, surpassing DeepFault on Ochiai (89.90%) and NP-SBFL-GA on Ochiai (60.61%). Our approach also yielded comparable results to the baselines in synthesizing naturalness inputs, and we found a positive correlation between the coverage of critical paths and the number of failed tests in DNN fault localization. **Keywords**: Fault Localization; Neural Pathway; Deep Neural Networks; Neuron Relevancy; Statistical Analysis ## 1 Introduction Deep Learning (DL) has been successful in addressing many real-world problems, such as classifying images [1], recognizing human speech [2], natural language processing [3], and software engineering tasks [4]. However, Deep Neural Networks (DNNs) have demonstrated quality issues, such as their inability to classify samples correctly in real-world applications despite achieving high accuracy. Real-world cases, such as Tesla/Uber accidents and incorrect diagnoses in healthcare, signify the need for assessment approaches in DNNs to assure their quality, particularly in safety- and security-critical systems [5]. Obviously, only DNN-based systems with high levels of trustworthiness should be approved to work in the public domain [6]. However, the complexity of DNN architectures makes it difficult to determine the contribution of each parameter to fulfilling a task. This complexity, combined with the fact that a DNN's behavior is produced according to the quality of data used to train it, makes it challenging to collect enough data to exercise all potential DNN behavior under various scenarios [7]. Therefore, effective testing approaches are needed to evaluate the quality of decisions made by DNN-based systems [8]. Although extensive research has shown that directly applying software testing methods to DNNs is not feasible, it is possible to map advanced principles behind to test DNNs [9, 10]. 
However, black-box DNN testing is limited in providing insights into activation pattern of intermediate neurons and identifying inputs that result in exposing unexpected network behavior. To address this, researchers have turned to white-box testing techniques from software engineering. For example, DeepXplore [11] and DeepGauge [12] use differential algorithms and multi-granularity coverage criteria for conducting effective tests. Other research has proposed testing criteria and techniques inspired by metamorphic testing [13], combinatorial testing [14], mutation testing [15], MC/DC [16], symbolic execution [17], and Concolic testing [18]. Software fault localization aims to identify a system's parts responsible for incorrect behavior [19]. Spectrum-based fault localization (SBFL) is a promising technique for locating faults in traditional software programs, as explained in Section 2. Only one technique, DeepFault [20], currently employs traditional software fault localization to find suspicious neurons [20]. It utilizes measures such as Tarantula and Ochiai from the SBFL domain. Along with DeepFault, several approaches have been recently proposed to locate the most suspicious neurons within the network in Deep Neural Networks (DNNs) using statistical techniques [21-24]. However, none of the existing fault localization techniques considers the propagation of faults through the neural connections in different layers of the DNN. Identifying the faulty path can be achieved through pathways of sequential neurons most responsible for incorrect behavior. However, there are challenges in considering pathways in DNNs. Firstly, it is not easy to define a pathway. Additionally, the tremendous number of pathways in a DNN makes it impossible to calculate pathways for large DNNs, which can quickly become infeasible. This paper proposes a novel approach for DNN fault localization that adapts Spectrum-based Fault Localization (SBFL) to locate faulty neural pathways. The NP-SBFL-MGA approach identifies critical neurons using the Layer-wise Relevance Propagation (LRP) technique and determines which critical neurons are faulty. A Multi-stage Gradient Ascent (MGA) technique, an extension of gradient ascent, is used to effectively activate a sequence of neurons one at a time while maintaining the activation of previous neurons. The method's effectiveness is evaluated on two commonly used datasets, MNIST and CIFAR-10, and compared with two baselines, DeepFault and NP-SBFL-GA, using three suspicious neuron measures, Tarantula, Ochiai, and Barinel. The approach outperforms the baselines in identifying faulty neural pathways and synthesizing adversarial inputs. Specifically, Tarantula on NP-SBFL-MGA had the highest fault detection rate at 96.75%, surpassing DeepFault on Ochiai (89.90%) and NP-SBFL-GA on Ochiai (60.61%). It is concluded that this approach provides a systematic and effective framework for DNN fault localization and can help improve the quality and reliability of DNN-based software. The effectiveness of the proposed method is evaluated on two commonly used datasets, MNIST [27] and CIFAR-10 [28]. The method is compared to two baselines, DeepFault and NP-SBFL-GA (a version of NP-SBFL that uses gradient ascent to activate neurons during input synthesis). Results show that NP-SBFL is statistically more effective than the baselines at identifying suspicious paths and synthesizing adversarial inputs. 
Specifically, the instance NP-SBFL-MGA using Tarantula had a 96.75% average fault detection rate compared to 89.90% for DeepFault using Ochiai and 60.61% for NP-SBFL-GA using Ochiai across all datasets and models. The proposed method also identifies unique suspicious neurons, especially in complex models, whereas the other methods rely heavily on a common neuron, making them less reliable. In addition to the fault detection rate, the naturalness of the synthesized inputs is evaluated using popular distance metrics such as Manhattan, Euclidean, Chebyshev, inception score, and Fre'chet Inception Distance. The approach yields comparable results to DeepFault. Furthermore, a positive correlation is found between the coverage of critical paths and the number of failed tests in DNN fault localization. Overall, the contributions of this paper to the field of DNN fault localization are as follows: First, spectrum-based suspiciousness measures from software engineering are used to locate faulty pathways of neurons in DNNs. Second, verifying identified faulty pathways through multi-stage gradient ascent instead of just gradient ascent. Third, the proposal of an algorithm that guides the synthesis of inputs to activate potentially suspicious neural pathways. Fourth, the thorough evaluation of NP-SBFL on two publicly available datasets (MNIST and CIFAR-10) with different models demonstrates its feasibility and effectiveness. The evaluation results show that the NP-SBFL method effectively identifies faulty neural pathways and synthesizes adversarial inputs. Compared to existing baselines, the method has demonstrated superiority in identifying unique suspicious neurons, especially in complex models. Furthermore, evaluating the synthesized inputs' naturalness shows comparable results to existing methods. The correlation analysis also suggests a positive relationship between the coverage of critical paths and the number of failed tests in DNN fault localization. The proposed NP-SBFL method offers a systematic and practical framework for DNN fault localization that can potentially improve the quality and reliability of DNN-based software. The contributions of this paper provide a foundation for future research on DNN fault localization, which could guide the development of safer and more trustworthy DNN-based systems. The subsequent sections of this paper are structured as follows. Section 2 provides a comprehensive overview, covering the background of traditional software fault localization, the fault taxonomy in DNNs, and the relevance propagation concept. Section 3 summarizes the existing literature on fault localization in DNNs. In Section 4, we introduce our proposed approach, NP-SBFL. Section 5 contains a meticulous account of the experimental setup, the evaluation performed, and the potential threats to validity. Finally, Section 6 concludes the paper. ## 2 Background The following section provides an overview of fault localization in traditional software, followed by an introduction to fault taxonomy in DNN models. Additionally, a detailed explanation of the relevancy concept is also included. ### 2.1 Fault Localization in Traditional Software Fault localization (FL) can be categorized as white box testing that aims to identify source code elements more prone to produce faults [19]. An FL technique is applied to test a program P against a test oracle T, which reports tests with passed and failed status. 
Subsequently, an FL measurement technique is employed to determine how likely each program element is to cause fault during the execution time, i.e. suspiciousness of the program element. Spectrum-based FL (SBFL) [29] belongs to a family of debugging techniques aiming to identify potentially faulty code by analyzing both passing and failing executions of a faulty program. It infers statistical properties from these executions and provides developers with a ranked list of potentially faulty statements to investigate. When given a faulty program and a set of test cases, an automated debugging tool based on SBFL generates a ranked list of potentially faulty statements. Developers can then inspect these statements one by one until they find the bug. ### 2.2 Faults Taxonomy in Deep Neural Networks There are fundamental differences in definition of fault and detecting faults between regular software programs and DNN models [30]. Regular programs express their logic through control flow, while weights among neurons and various activation functions play crucial role in the logic of DNN programs. In software programs, bugs are often identified by comparing ground-truth outputs with expected outputs. As long as there is a difference between the ground-truth output and the expected one, a bug is considered to be present. In contrast, DNN-based program learns from a training dataset and once the DNN misclassifies during inference, the input samples are termed as failure cases. However, it is essential to note that since a DNN model cannot guarantee complete accurate classifications, such failures do not necessarily imply the presence of a bug. To address this, DeepDiagnosis [30] has discussed eight types of failure symptoms and their underlying causes. Additionally, Neural Trojans represent another type of symptom that can lead to misbehavior in DNNs [31]. These failure roots and symptoms are summarized in Table 1. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline \# & **Symptom** & **Description** & **Root causes** \\ \hline \hline \end{tabular} \end{table} Table 1: Typical failure symptoms and their root causes in a DNN. ### Layer-Wise Relevance Propagation Layer-wise relevance propagation (LRP) [32][33] is an explanation technique applicable to neural network models that handle inputs like images, videos, or text [34][35]. LRP operates by propagating the prediction f(x) backward in the neural network using specifically designed local propagation rules. The propagation process employed by LRP adheres to a conservation property, where the relevance received by a neuron must be equally redistributed to the neurons in the lower layer. This behavior is akin to Kirchoff's conservation laws observed in electrical circuits. Let j and k represent neurons at two consecutive layers in the neural network. The relevance scores are propagated from a given layer onto the neurons of the lower layer using the rule depicted in Eq. 1. \[R_{j}=\sum_{k}\frac{z_{jk}}{\sum_{j}z_{jk}}R_{k} \tag{1}\] The variable z\({}_{jk}\) represents the contribution of neuron j to the relevance of neuron k. The denominator ensures the conservation property is maintained. The propagation process concludes once the input features are reached. By applying this rule to all neurons in the network, the layer-wise conservation property \(\sum_{j}R_{j}=\sum_{k}R_{k}\) can be easily verified and extended to a global conservation property \(\sum_{i}R_{i}=f(x)\). The overall LRP procedure is illustrated in Figure 1. 
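To make the propagation rule concrete, the following short numpy sketch (an illustration written for this text, using random weights, zero biases, and the basic variant with \(z_{jk}=a_{j}w_{jk}\)) applies Eq. 1 backwards through a small fully connected ReLU network and displays the layer-wise conservation property.

```python
import numpy as np

# Minimal LRP illustration of Eq. 1 with z_jk = a_j * w_jk on a tiny ReLU network.
# Weights and input are random stand-ins; biases are zero so conservation is exact.
rng = np.random.default_rng(42)
weights = [rng.normal(size=(4, 6)), rng.normal(size=(6, 5)), rng.normal(size=(5, 3))]

def forward(x):
    activations = [x]
    for i, W in enumerate(weights):
        x = x @ W
        if i < len(weights) - 1:                     # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
        activations.append(x)
    return activations

def lrp(activations, target):
    R = np.zeros_like(activations[-1])
    R[target] = activations[-1][target]              # start from the prediction f(x) for one class
    relevances = [R]
    for W, a in zip(reversed(weights), reversed(activations[:-1])):
        z = a[:, None] * W                           # z_jk = a_j * w_jk
        denom = z.sum(axis=0)
        denom = np.where(denom == 0, 1e-12, denom)   # guard against division by zero
        R = (z / denom) @ R                          # redistribution rule, Eq. 1
        relevances.append(R)
    return relevances[::-1]                          # input-layer relevance first

x = rng.normal(size=4)
acts = forward(x)
rel = lrp(acts, target=int(np.argmax(acts[-1])))
# Each layer redistributes exactly what it received from the layer above:
print("layer-wise relevance sums:", [round(float(r.sum()), 6) for r in rel])
print("input relevances:", np.round(rel[0], 3))
```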
Figure 1: Illustration of the LRP procedure. Each neuron redistributes to the lower layer as much as it has received from a higher layer.

## 3 Related work The literature discusses two distinct approaches for fault localization in DNN models. Analytical methods inspect the model analytically to identify the most suspicious neurons within the network. On the other hand, statistical methods use statistics, such as spectrum-based techniques, to identify faulty paths in the network. In the following review, we will briefly explore previous works in these two categories and investigate how FL techniques have been applied to DNNs. ### 3.1 Localizing faults in parameters DeepFault [20] is an SBFL-based method designed for DNNs, which requires access to the internals of the network and can thus be categorized as a white-box approach. DeepFault starts by identifying neurons which are more likely to cause misclassifications. To achieve this, DeepFault analyzes the behavior of intermediate neurons at inference, establishing a hit spectrum for each neuron under a specific test set. By analyzing these hit spectra, DeepFault identifies highly defective neurons and synthesizes new input samples that activate error-prone neurons. The identification of suspicious neurons is based on calculating a suspiciousness score for each neuron using various measurements of suspiciousness: Tarantula [36], Ochiai [37], and DStar [38]. These measures were first used for fault localization in the field of software engineering [19]. Neurons with higher suspiciousness scores contribute more to incorrect DNN decisions, due to their lack of an adequate training budget. Consequently, the weights of these neurons need to be further adjusted [39]. In the synthesis phase, correctly classified inputs are used to generate input samples that aim to maximize the activation values of neurons identified as suspicious. By doing so, the information produced by these neurons propagates through the network, causing shifts in the decision boundaries learned by the network. NNrepair [22] proposes separate repair techniques for both the intermediate and last layers, aiming to find potentially faulty network weights. Repairing all the output classes at once can be challenging, so NNrepair prepares a set of expert networks, each specializing in one specific label class, which is computationally more feasible. These experts are combined to create a final repaired classifier. In the fault localization step, network activation patterns are utilized as an oracle for correct behavior. An activation pattern \(\sigma\) specifies an activation state ("on" or "off") for a subset of neurons at a layer in the network. These patterns guide the identification of potentially faulty neurons. Highly supported correct-label patterns corresponding to each output class at an intermediate layer are extracted. A highly supported correct-label pattern suggests that the network would likely classify any input satisfying the pattern to the appropriate label, while misclassified inputs would not meet the correct label pattern. For each incorrect input, the activations of the neurons corresponding to the correct-label pattern are compared, and neurons with different activations are considered potentially faulty. The repair process aims to modify the outputs of these neurons for each failing input to match the correct label pattern for their respective labels.
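For reference, spectrum-based suspiciousness scores of the kind mentioned above can be computed directly from a hit spectrum. The snippet below is a generic illustration in the spirit of the DeepFault-style analysis, with a made-up hits matrix (rows are test inputs, columns are neurons) and the standard Tarantula and Ochiai formulas from the software-engineering literature.

```python
import numpy as np

# Generic Tarantula and Ochiai suspiciousness scores computed from a neuron hit spectrum.
# The hits matrix and pass/fail labels are invented for illustration only.
hits = np.array([[1, 0, 1, 1],
                 [1, 1, 0, 1],
                 [0, 1, 1, 0],
                 [1, 1, 1, 0],
                 [0, 0, 1, 1]])
failed = np.array([True, False, True, False, True])

ef = hits[failed].sum(axis=0)          # neuron activated in failing runs
ep = hits[~failed].sum(axis=0)         # neuron activated in passing runs
F, P = failed.sum(), (~failed).sum()   # total failing / passing runs

tarantula = (ef / F) / (ef / F + ep / P + 1e-12)
ochiai = ef / (np.sqrt(F * (ef + ep)) + 1e-12)

print("Tarantula:", np.round(tarantula, 3))
print("Ochiai:   ", np.round(ochiai, 3))
print("neurons ranked most to least suspicious (Ochiai):", np.argsort(-ochiai))
```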
### 3.2 Localizing faults in architecture DeepLocalize [21] presents a fault localization approach for DNNs, which requires access to the source code of the DNN model. DeepLocalize aims at fixing bugs in the architecture of the DL program. They propose two mechanisms to collect dynamic traces: (1) the first technique works by mapping the code into an intermediate representation of the DNN. This representation allows the internal states of the DNN to be monitored, enabling the insertion of probes for dynamic analysis during the training process. (2) The second technique investigates the internal state of the DNN source code by injecting a callback function. Through this dynamic analysis, DeepLocalize identifies the faulty hyper-parameters, including layers, responsible for the error. DeepLocalize aims to achieve three main objectives: (1) determine if there exist a bug inside the DL program, (2) localize the fault, and (3) provide failure information. They test DeepLocalize on buggy models collected from commits of GitHub and posts on Stack Overflow platforms, demonstrating its ability to identify bugs in 23 out of 29 buggy programs using the first technique and 34 out of 40 using the callback function. DeepDiagnosis [30], as an analytical approach, offers bug fixes in the DL program. During training, the approach monitors essential indicators like weights and gradients, analyzing the recorded values to detect symptoms and potential training issues. When a symptom is identified, a Decision Tree is utilized to diagnose it based on pre-defined rules. The process begins by taking the initial model architecture and a training dataset as input and passing a callback method to the _fit(.)_ method. This callback method captures and records key values (e.g., weights and gradients) during feed-forward and backward propagation. A dynamic detector reports various symptoms at different stages during training, based on error conditions. If a symptom is detected, the recorded key values are analyzed to determine potential locations in the input model that need correction. DeepDiagnosis then reports the type of symptom, the layers and stages where it was detected, and suggests a location for fixing the issue. Finally, Cao et al. [23] propose DeepFD, a learning-based fault diagnosis and localization framework that treats the fault localization task as a learning problem, in a statistical manner. It infers suspicious fault types by monitoring runtime features extracted during DNN model training and then locates the diagnosed faults in the architecture of the DL programs. DeepFD overcomes limitations by identifying the root causes of faults in DL programs instead of neurons and diagnoses them using a learning approach instead of a set of hard-coded rules. Table 2 summarizes the benefits of existing and our proposed NP-SBFL fault localization methods. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Method** & **Approach** & **Repair model** & **Phase** & **Detection region** & **Focus** \\ \hline DeepFault [20] & Statistical & & Post-train & whole network & Neuron \\ \hline DeepLocalize [21] & Statistical & & During train & - & Hyper-parameters \\ \hline NNrepair [22] & Statistical & & Post-train & fully connected layers & Weights \\ \hline DeepDiagnosis [30] & Analytical & & During train & - & Hyper-parameters \\ \hline \hline \end{tabular} \end{table} Table 2: An overview of the explained Fault-Localization methods. 
## 4 Proposed Approach This section introduces our NP-SBFL white-box approach, which presents a systematic testing method for DNNs. The approach aims to identify and locate highly erroneous neural pathways within a DNN. By employing a sequence of analysis, identification, and synthesis steps on a pre-trained DNN, NP-SBFL can effectively pinpoint these problematic pathways and create new inputs that activate them. A comprehensive overview of NP-SBFL is given in Section 4.1, while Sections 4.2 to 4.4 delve into the specific details of the NP-SBFL steps. ### Overview In this section, we provide an overview of NP-SBFL. Initially, we extract neurons that are critical from each layer of the neural network. This extraction process involves using LRP, as explained in Section 2.3, to unhide the internal decision process for input data. LRP measures how relevant each neuron is, starting from the output layer and tracing back to the first layer, in terms of their impact on prediction results. This relevance signifies the influence of each neuron on the way the network make decisions. Once we have the relevant neurons and their activation values for each layer, we employ fault localization methods based on spectrum, to identify suspicious neurons. Finally, we generate a synthesized dataset to activate these suspicious paths and verify whether they lead to incorrect classification results. To achieve this, we propose the use of a multi-stage gradient ascent technique to activate these faulty paths effectively. ### Neural Pathways Analysis In this section, we present the method for identifying the essential decision path for a given input. As mentioned earlier, the relevance of each neuron \(R_{i}^{l}\) signifies its contribution to the final decision. A higher relevance value indicates that neuron i at layer l has a more significant impact on the decision-making process. Since a DNN's prediction relies on the extracting high quality features at each layer, the critical neurons with high relevance reveal the logic behind the decision process of the DNN [40]. Inspired by [40], for an input x, we formulate the condition of critical neurons as shown in Eq. 2 at each layer \(l_{i}\): \[\sum_{i\in l_{i}}R_{i}^{l}\ >\ \alpha\ \cdot\ g_{f}(x) \tag{2}\] where \(\alpha\) is a hyper-parameter controlling the volume of neurons that are considered as critical at each layer. By definition, a critical neuron positively contributes to the decision, meaning it has a positive relevance value. To determine the set of critical neurons, we define this set as the minimum subset of neurons with a cumulative relevance exceeding a predefined threshold. This threshold is a fraction of \(g_{f}(x)\) denoting the cumulative relevance of the input values. By adjusting \(\alpha\), we can control the number of critical neurons located at each layer, where smaller \(\alpha\) values result in fewer critical neurons being chosen. ### Suspicious Neural Pathways Identification The next stage of NP-SBFL involves examining neural activity to pinpoint faulty neurons in each layer individually. To achieve this, NP-SBFL detects suspicious neurons by defining characteristics of a neuron's execution procedure, drawing inspiration from DeepFault [20]. However, unlike DeepFault, which considers all neurons collectively, our focus is on critical neurons in each layer, as they hold greater relevance to the DNN's decision-making process. 
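As an illustration of the selection rule in Eq. 2, the sketch below picks, for one layer, the smallest set of positively relevant neurons whose cumulative relevance exceeds the threshold \(\alpha\cdot g_f(x)\). The function name, the stand-in used for \(g_f(x)\), and the toy relevance values are assumptions for the example; the paper's implementation may differ in detail.

```python
import numpy as np

def critical_neurons(relevance, g_fx, alpha=0.7):
    """Return the indices of the minimal set of neurons in one layer whose
    cumulative relevance exceeds alpha * g_fx (Eq. 2). Only neurons with
    positive relevance are eligible, since critical neurons are defined as
    contributing positively to the decision."""
    order = np.argsort(relevance)[::-1]        # most relevant neurons first
    chosen, total = [], 0.0
    for idx in order:
        if relevance[idx] <= 0 or total > alpha * g_fx:
            break                              # threshold already exceeded, or no positive relevance left
        chosen.append(int(idx))
        total += relevance[idx]
    return chosen

# Toy example: LRP relevance of 6 neurons in one layer, alpha = 0.7 as in Section 6.1.
R_layer = np.array([0.05, 0.40, -0.10, 0.25, 0.10, 0.02])
g_fx = R_layer[R_layer > 0].sum()              # illustrative stand-in for g_f(x)
print(critical_neurons(R_layer, g_fx, alpha=0.7))
```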
The quantities \(A^{c}_{p}\) and \(A^{c}_{f}\) represent the number of times a neuron was covered by the input sample and resulted in a passed or failed decision, respectively. Similarly, the quantities \(A^{n}_{p}\) and \(A^{n}_{f}\) indicate cases where the neuron remained uncovered by the input sample. The abstract procedure for determining the suspiciousness scores is presented in Algorithm 1.

```
1: Input a neural network with L layers; test set T; activation threshold \(\beta\)
2: Output suspiciousness scores of all neurons in the network
3: \(A^{c}_{p}=\) matrix of active-passed tests with initial value of \(0\)
4: \(A^{n}_{p}=\) matrix of inactive-passed tests with initial value of \(0\)
5: \(A^{c}_{f}=\) matrix of active-failed tests with initial value of \(0\)
6: \(A^{n}_{f}=\) matrix of inactive-failed tests with initial value of \(0\)
7: \(suspiciousnessScores=\) matrix of suspiciousness scores associated with each neuron
8: for \(input_{sample}\in test\ set\ T\) do
9:   predict \(input_{sample}\) by the network
10:  get relevancy and activation values of all layers of the network
11:  \(neurons_{critical}=[]\)
12:  for \(layer\ l=1,2,\ldots,L\) do
13:    find critical neurons according to equation (2) and append them to \(neurons_{critical}\)
14:  endfor
15:  for \(layer\ l=1,2,\ldots,L\) do
16:    for \(neuron\ n\in layer\ l\) do
17:      if \(input_{sample}\) is correctly classified then
18:        if \(n_{activation}\geq\beta\) and \(n\in neurons_{critical}[l]\) then
19:          increment \(A^{c}_{p}[l][n]\)
20:        endif
21:        if \(n_{activation}<\beta\) and \(n\in neurons_{critical}[l]\) then
22:          increment \(A^{n}_{p}[l][n]\)
23:        endif
24:      else
25:        if \(n_{activation}\geq\beta\) and \(n\in neurons_{critical}[l]\) then
26:          increment \(A^{c}_{f}[l][n]\)
27:        endif
28:        if \(n_{activation}<\beta\) and \(n\in neurons_{critical}[l]\) then
29:          increment \(A^{n}_{f}[l][n]\)
30:        endif
31:      endif
32:    endfor
33:  endfor
34: for \(layer\ l=1,2,\ldots,L\) do
35:   for \(neuron\ n\in l\) do
36:     \(suspiciousnessScores[l][n]=SBFL(A^{c}_{p}[l][n],A^{n}_{p}[l][n],A^{c}_{f}[l][n],A^{n}_{f}[l][n])\)
37:   endfor
38: return \(suspiciousnessScores\)
```
**Algorithm 1** Determine Suspiciousness Scores

Under a test set T, NP-SBFL analyzes the behavior of relevant neurons in the DNN to create a hit spectrum for each neuron (lines 6-32), which describes its dynamic behavior. This involves utilizing the activation patterns of relevant neurons in each layer for the analysis (lines 16-29). After obtaining the set of hit spectra through DNN analysis, NP-SBFL proceeds to individually identify suspicious relevant neurons in each layer. These suspicious neurons are grouped from the first to the last layer, forming a sequence of neurons. Finally, NP-SBFL employs a spectrum-based suspiciousness measure, which calculates a suspiciousness score for each neuron from the spectrum-related information (lines 33-37). The higher the suspiciousness score, the more likely it is that the neuron was inadequately trained, contributing to incorrect DNN decisions. In this paper, we implement NP-SBFL using three distinct suspiciousness measures: Tarantula [36], Ochiai [41], and Barinel [29]. Their algebraic formulas are presented in Table 3. These suspiciousness measures operate on the principle that a neuron is deemed more suspicious if it is frequently covered by test inputs resulting in incorrect DNN decisions, and less frequently covered by test inputs leading to correct decisions. Upon analyzing the suspiciousness of neurons in each layer of a DNN, the neurons are arranged in descending order of suspiciousness.
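The sketch below mirrors the bookkeeping of Algorithm 1 and the measures of Table 3 in plain Python for a single layer, assuming the per-sample activations, the critical-neuron sets, and the pass/fail labels have already been computed. Function and variable names are illustrative and not taken from the paper's implementation; the small constants only guard against division by zero.

```python
import numpy as np

def tarantula(acp, anp, acf, anf, eps=1e-12):
    fail = acf / (acf + anf + eps)
    passed = acp / (acp + anp + eps)
    return fail / (fail + passed + eps)

def ochiai(acp, anp, acf, anf, eps=1e-12):
    return acf / (np.sqrt((acf + anf) * (acf + acp)) + eps)

def barinel(acp, anp, acf, anf, eps=1e-12):
    return 1.0 - acp / (acf + acp + eps)

def layer_spectrum(activations, critical_sets, passed_flags, beta=0.0):
    """Accumulate the four counters of Algorithm 1 for one layer.
    activations:   (num_samples, num_neurons) array of activation values
    critical_sets: one set of critical-neuron indices per sample (Eq. 2)
    passed_flags:  one bool per sample, True if it was correctly classified"""
    n = activations.shape[1]
    acp, anp, acf, anf = (np.zeros(n) for _ in range(4))
    for acts, crit, ok in zip(activations, critical_sets, passed_flags):
        for j in range(n):
            if j not in crit:
                continue                      # only critical neurons are tracked
            covered = acts[j] >= beta         # "active" with respect to threshold beta
            if ok:
                (acp if covered else anp)[j] += 1
            else:
                (acf if covered else anf)[j] += 1
    return acp, anp, acf, anf

# Toy usage: 3 samples, 4 neurons in the layer.
acts = np.array([[0.9, 0.0, 0.3, 0.0],
                 [0.8, 0.1, 0.0, 0.0],
                 [0.0, 0.7, 0.4, 0.0]])
crit = [{0, 2}, {0}, {1, 2}]
ok   = [True, False, True]
acp, anp, acf, anf = layer_spectrum(acts, crit, ok, beta=0.0)
print(tarantula(acp, anp, acf, anf))   # higher score = more suspicious neuron
```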
The k most likely defective neurons are then selected in each layer to form a pattern of interconnected neurons. ### Suspiciousness-Guided Input Synthesis To assess the faulty paths identified in the previous steps, it is necessary to activate these suspicious paths and check whether they lead to misclassifications by the DNN. To achieve this, we aim to maximize the activation of the set of suspicious neurons determined by NP-SBFL, using a technique called Activation Maximization. Activation Maximization is a visualization method for neural networks that seeks to maximize the activation of specific neurons. During standard training, the weights and biases of the neural network are iteratively adjusted to minimize the error (or loss) across training examples in the dataset. Activation Maximization, on the other hand, works inversely. After the classifier has been trained, we want to iteratively find the parts of the data that the model associates with a specific class. One approach used in Activation Maximization is Gradient Ascent (GA). Gradient Ascent involves using the derivative of a differentiable function, f(x): \(R^{d}\rightarrow\mathbb{R}\), to determine how much f(x) will change if we increase x slightly. This information is valuable for optimization problems, such as finding a minimum. In Gradient Descent algorithm, we start from any point and move in the direction that decreases f(x) the most, repeatedly calculating the gradient and taking steps until we reach the minimum. In the context of our model, the gradient of the function (neural network) provides information about each input dimension's effect on increasing the function's value [43]. For activating target neurons, we apply the GA to each neuron, adding the gradient of that neuron with respect to the input data to the input image pixels. However, since NP-SBFL operates based on a sequence of neurons, the \begin{table} \begin{tabular}{l c} \hline \hline **Suspiciousness Measure** & **Algebraic Formula** \\ \hline Tarantula & \(\dfrac{A_{f}^{c}}{A_{f}^{c}\ +\ A_{f}^{n}}\) \\ \(\dfrac{A_{f}^{c}}{A_{f}^{c}\ +\ A_{f}^{n}}\ +\dfrac{A_{p}^{c}}{A_{p}^{c}\ +\ A_{p}^{n}}\) \\ \hline Ochiai & \(\dfrac{A_{f}^{c}}{\sqrt{\left(A_{f}^{c}\ +\ A_{f}^{n}\right)\ *\ (A_{f}^{c}\ +\ A_{p}^{c})}}\) \\ \hline Barinel & \(1-\dfrac{A_{p}^{c}}{A_{f}^{c}\ +\ A_{p}^{c}}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Suspiciousness measures used in NP-SBFL. naive GA may not be effective in activating these paths. Activating a neuron in later layers could inadvertently deactivate previous neurons in earlier layers along the path, or vice-versa. To address this issue, we developed a method called multi-stage Gradient Ascent (MGA), an extension of GA. MGA activates a sequence of neurons one at a time while maintaining the activation of previous neurons. To achieve this, we minimize a loss function for all layers sequentially. The intended loss function is represented by Eq. 3. \[\mathit{loss}_{L}=\mathit{-activation}_{target}^{L}+\sum_{l=1}^{L-1}(- \mathit{activation}_{target}^{l}+\mathit{\left\lfloor activation_{target}^ {L}\right\rfloor}\mathit{-activation}_{target}^{l}) \tag{3}\] The loss function is minimized layer-by-layer. For each layer L, the target activations in the current layer (first term) and all previous target activations are kept activated (second term). As described in Algorithm 2, the gradient of this loss function is then added to the input image to synthesize it and activate the suspicious paths (line 11). 
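A minimal PyTorch-style sketch of this stage-wise objective is given below. It implements only the intent of Eq. 3 (maximise the suspicious activations of the current layer while keeping the suspicious activations of all earlier layers high); the exact form of the bracketed term in Eq. 3 is simplified, and the toy tensors stand in for activations that would normally be captured during a forward pass, so everything here is illustrative rather than the paper's code.

```python
import torch

def mga_stage_loss(layer_acts, target_idx, stage):
    """Stage-`stage` objective of the multi-stage gradient ascent (cf. Eq. 3):
    minimising this loss pushes up the suspicious-neuron activations of layer
    `stage` (first term) while keeping the suspicious activations of all
    earlier layers active (summation term), i.e. the whole pathway."""
    loss = -layer_acts[stage][target_idx[stage]].sum()
    for l in range(stage):
        loss = loss - layer_acts[l][target_idx[l]].sum()
    return loss

# Toy usage with random "activations" of a 3-layer network for one input.
# In the real procedure these come from a forward pass and the gradient is
# taken with respect to the input image, which is then updated and clamped.
torch.manual_seed(0)
layer_acts = [torch.rand(30, requires_grad=True),
              torch.rand(30, requires_grad=True),
              torch.rand(10, requires_grad=True)]
target_idx = [torch.tensor([1, 5]), torch.tensor([2]), torch.tensor([7])]

loss = mga_stage_loss(layer_acts, target_idx, stage=2)
loss.backward()          # Algorithm 2 subtracts lr * gradient from the input image
print(loss.item())
```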
Finally, the domain constraints are applied to the synthesized image (line 12), which consists of normalizing the image's new pixel values in range of [0, 1]. The effectiveness of MGA is explored in later sections. ``` 1:Input a neural network with L layers; test set T; list of suspicious neurons in each layer \(\mathit{suspicious_{neurons}}\); learning rate \(lr\) 2:Output a set of synthesized images to activate faulty paths 3:\(synthesizedImages\) = [] 4:for\(input_{sample}\in tests\,set\,\,T\)do 5:for\(iteration=1,2,\,\ldots,\,I\)do 6: predict \(input_{sample}\) by the network 7:if\(input_{sample}\) is correctly classified then 8:for\(layer\,l=1,2,\ldots,L\)do 9:for\(neuron\,n\in\mathit{suspicious_{neurons}}[l]\)do 10: calculate loss objective according to equation (3) 11:endfor 12:\(gradients=\frac{\partial loss}{\partial input_{sample}}\) 13:\(input_{sample}=input_{sample}-lr*gradients\) 14: apply domain constraints on \(input_{sample}\) 15:endfor 16: append synthesized \(input_{sample}\) to \(synthesizedImages\) 17:endif 18:endfor 19:endfor 20:return\(synthesizedImages\) ``` **Algorithm 2** Suspiciousness-Guided Input Synthesis ## 5 Implementation The prototype tool we have implemented on top of PyTorch v2.0 has been designed to streamline the evaluation and adoption of the NP-SBFL approach. By providing intuitive functionality, we aim to make it easier for researchers and practitioners to incorporate NP-SBFL into their work. Additionally, we have made the full experimental results available on the NP-SBFL project page at [https://github.com/soroushhashemifar/NP-SBFL](https://github.com/soroushhashemifar/NP-SBFL), allowing for easy access and reference. ## 6 Evaluation In this section, we describe our experiments to evaluate the effectiveness of our fault localization technique, NP-SBFL, in localizing faulty sequences of neurons within a neural network. Firstly, we describe our experimental setup, including the dataset used, the neural network architecture, and the evaluation metrics employed. Next, we outline the research questions we aim to answer through our experimental evaluation. These questions include comparing the performance of different fault localization approaches, evaluating our technique's effectiveness using different suspiciousness measures, and examining the correlation between critical path coverage and the number of failed tests. Subsequently, we present the results of our experiments and discuss their implications. We provide a detailed analysis of the performance of different fault localization approaches and evaluate our technique's effectiveness using different suspiciousness measures. Finally, we discuss potential threats to the validity of our experimental evaluation and outline steps taken to mitigate these threats. ### Experimental Setup We evaluate NP-SBFL using two widely-used datasets. The first dataset, MNIST [27], contains handwritten digit samples comprising 60,000 training and 10,000 testing samples. Each sample comprises a 28x28 pixel image with a class label ranging from 0 to 9. The second dataset, CIFAR-10 [28], is an image dataset containing 50,000 training samples and 10,000 testing samples. This dataset includes 32x32 images of ten distinct classes, such as dogs, birds, and cars. In our analysis of both datasets, we examine six DNNs implemented through the PyTorch framework, with the specific configurations outlined in Table 4. Each DNN has distinct architectural features and varying numbers of trainable parameters. 
To obtain a minimum accuracy of 95% on the MNIST dataset, we employ three deep Fully-connected NNs. Conversely, for the CIFAR-10 dataset, we experiment with three convolutional NNs enhanced by max-pooling and ReLU activation functions to achieve a minimum accuracy of 70%. We have implemented NP-SBFL using the suspiciousness measures Tarantula, Ochiai, and Barinel, as shown in Table 5. To assess the effectiveness of NP-SBFL, we conducted experiments using different numbers of suspicious neurons, specifically k@{1,5,10} for MNIST models and k@{10,30,50} for CIFAR models. In addition, we conducted extensive experiments for models of Table 4 to optimize the hyper-parameters of Algorithm 2 and facilitate the reproducibility of our results. Through empirical analysis, we determined that the values listed in Table 5 are suitable for perturbing inputs in MNIST and CIFAR-10 models. The NP-SBFL method employs two gradient ascent synthesizers - NP-SBFL-GA, a simple one, and NP-SBFL-MGA, a multi-stage synthesizer. For DeepFault, we utilized the step size values recommended by the original paper. Furthermore, we determine an upper-bound for allowed distance (d) to ensure it does not exceed the distances specified in Table 5, considering the range of values provided in each dimension of input sample and also its highest pixel value. For DeepFault, we again employed the distance values suggested by the original paper. Exploring alternative step and parameter d is an area of future investigation for our research. The evaluations were conducted on an Ubuntu desktop with a memory capacity of 16 GB and an Intel(r) Core(tm) i7-4790K CPU running at 4.00GHz with eight cores. Throughout the experiments, a criticality coefficient \(\alpha\) value of 0.7 was employed. Given that all activation functions were ReLU, neurons with an activation value greater than 0.0 were considered to be activated. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **Model** & \begin{tabular}{c} **\# Trainable** \\ **Params** \\ \end{tabular} & **Architecture** & **Accuracy** & \begin{tabular}{c} **\#Correctly** \\ **Classified** \\ **Samples** \\ \end{tabular} \\ \hline \multirow{4}{*}{MNIST} & MNIST\_1 & 27,420 & 5 * \textless{}30\textgreater{}, \textless{}10\textgreater{} & 96.6 \% & 9631 \\ \cline{2-5} & MNIST\_2 & 22,975 & 6 * \textless{}25\textgreater{}, \textless{}10\textgreater{} & 95.8 \% & 9581 \\ \cline{2-5} & MNIST\_3 & 18,680 & 8 * \textless{}20\textgreater{}, \textless{}10\textgreater{} & 95 \% & 9512 \\ \hline \multirow{4}{*}{ \begin{tabular}{c} CIFAR-10 \\ \end{tabular} } & CIFAR\_1 & 411,434 & 2 * \textless{}32@3x3\textgreater{}, 2 * \textless{}64@3x3\textgreater{}, 4 * \textless{}128\textgreater{}, \textless{}10\textgreater{} & 70.1 \% & 7066 \\ \cline{1-1} \cline{2-5} & CIFAR\_2 & 724,010 & 2 * \textless{}32@3x3\textgreater{}, 2 * \textless{}64@3x3\textgreater{}, 2 * \textless{}64@3x3\textgreater{}, 2 * \textless{}256\textgreater{}, \textless{}10\textgreater{} & 72.6 \% & 7413 \\ \hline \hline \end{tabular} \end{table} Table 4: Configuration of all DNN models used in NP-SBFL ### Research Questions Our experimental evaluation aims to address the following research questions. * RQ1 (Validation): Does empirical evaluation validate the effectiveness of NP-SBFL in identifying suspicious paths and its ability to outperform DeepFault, a neuron-based suspiciousness selection strategy, in synthesizing adversarial inputs that lead to misclassification of previously correctly classified inputs by DNNs? 
* RQ2 (Comparison): How do NP-SBFL-GA, NP-SBFL-MGA, and DeepFault instances with different measurements of suspiciousness compare against each other? The evaluation is conducted by analyzing the results produced by different instances using Tarantula [36], Ochiai [41], and Barinel [29]. * RQ3 (Fault Detection Rate): Which of the selected approaches exhibits a greater fault detection rate? * RQ4 (Correlation) Is there a correlation between the coverage of critical paths and the number of failed tests in DNN fault localization? * RQ5 (Locating Unique Suspicious Neurons): Which approach is more effective in locating unique suspicious neurons among the different instances? * RQ6 (Quality of Synthesized Inputs): How well can NP-SBFL synthesize inputs of acceptable quality? ### Results and Discussion In this section, we design and conduct specific experiments to address each research question and provide insights into the impact of activating suspicious paths on the model's performance and potential vulnerabilities. Our experiment process involves several steps. Firstly, we choose a group of accurately classified samples from the test set based on the pre-trained model's predictions. Next, we utilize an input synthesis technique to activate suspicious paths in the model. This technique involves modifying the input samples to highlight certain patterns that could trigger misclassifications. Then, we measure the misclassification rate by inputting the synthesized samples into the model and comparing the predicted and actual labels. After that, we calculate the fraction of samples that activate the suspicious paths and lead to misclassification by dividing the number of misclassified samples by the total number of synthesized samples. Lastly, we analyze the ratios obtained to conclude the effectiveness of activating suspicious paths in producing faulty results. #### RQ1 (Validation) We use the NP-SBFL workflow to analyze DNNs in Table 4. This analysis involves identifying the K neurons with the highest scores using a suspiciousness measure and synthesizing new input samples from those input samples classified correctly, which exercise the identified neurons. The last column in Table 4 shows the number of correctly \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{DeepFault} & \multicolumn{2}{c}{NP-SBFL-GA} & \multicolumn{2}{c}{NP-SBFL-MGA} \\ \cline{2-7} & step size & distance & step size & distance & step size & distance \\ \hline \hline MNIST-1 & 1 & 0.1 & 1 & 0.5 & 5 & 0.006 \\ \hline MNIST-2 & 1 & 0.1 & 1 & 0.5 & 5 & 0.006 \\ \hline MNIST-3 & 1 & 0.1 & 1 & 0.5 & 5 & 0.006 \\ \hline CIFAR-1 & 10 & 0.1 & 10 & 0.1 & 5 & 0.01 \\ \hline CIFAR-2 & 10 & 0.1 & 10 & 0.1 & 5 & 0.02 \\ \hline CIFAR-3 & 10 & 0.1 & 10 & 0.1 & 5 & 0.04 \\ \hline \hline \end{tabular} \end{table} Table 5: The hyper-parameters of different approaches and models. classified inputs. We evaluate the DNN's prediction performance using standard metrics like cross-entropy loss and accuracy. The analysis is done per class since inputs from the same class have similar activation patterns. Tables 6 to 8 show the average loss and accuracy of inputs synthesized by different methods, including DeepFault, NP-SBFL-GA, and NP-SBFL-MGA. They use Tarantula, Ochiai, and Barinel to identify suspicious neurons on MNIST (top) and CIFAR-10 (bottom) models from Table 4. In Tables 6 to 8, each cell value represents an average over the synthesized inputs. 
The number of synthesized inputs for each model is reported in the last column of Table 4. The loss and accuracy of the synthesized samples demonstrate the effectiveness of the gradient ascent method in generating improved samples to deceive the network. Table 6 reveals that DeepFault using Tarantula and Barinel achieved significantly lower prediction performance than Ochiai on all models. The performance between Tarantula and Barinel is similar. These results suggest that the identified neurons are suspicious, so their weights are insufficiently trained. In NP-SBFL-GA and NP-SBFL-MGA, Ochiai and Barinel have similar effectiveness on the models and obtain lower performance than Tarantula. These results demonstrate that the identified paths are suspicious, so their weights are insufficiently trained. Therefore, slightly perturbing the inputs correctly classified by the DNN could transform them into adversarial by increasing the activation value of suspicious paths. \begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline & & K=1 & K=5 & K=10 & K=1 & K=5 & K=10 & K=1 & K=5 & K=10 \\ \hline \multirow{2}{*}{MNIST-1} & Loss & **0.1553** & 0.0280 & 0.0113 & 0.1340 & **0.0383** & **0.0201** & 0.1340 & **0.0383** & **0.0201** \\ & Accuracy & **25.07** & 66.80 & 81.58 & 34.39 & **64.45** & **75.28** & 34.39 & **64.45** & **75.28** \\ \hline \multirow{2}{*}{MNIST-2} & Loss & 0.0274 & 0.0271 & 0.0208 & **0.0548** & **0.0392** & **0.0273** & **0.0548** & **0.0392** & **0.0273** \\ & Accuracy & 65.41 & 61.51 & 68.40 & **46.65** & **59.38** & **64.46** & **46.65** & **59.38** & **64.46** \\ \hline \multirow{2}{*}{MNIST-3} & Loss & **0.1297** & 0.0275 & 0.0271 & 0.0819 & **0.0574** & **0.0290** & 0.0819 & **0.0574** & 0.0275 \\ & Accuracy & **40.65** & 72.93 & 74.20 & 41.61 & **59.85** & **73.08** & 41.61 & **59.85** & 74.40 \\ \hline \hline \multirow{2}{*}{CIFAR-1} & \multirow{2}{*}{K=10} & K=30 & K=50 & K=10 & K=30 & K=50 & K=10 & K=30 & K=50 \\ & & & & & & & & & \\ \hline \multirow{2}{*}{CIFAR-1} & Loss & 0.0106 & 0.0065 & **0.0052** & **0.0107** & **0.0093** & 0.0049 & **0.0107** & **0.0093** & 0.0051 \\ & Accuracy & **59.97** & 72.95 & **77.73** & 61.47 & **65.49** & 79.63 & 61.47 & **65.49** & 79.06 \\ \hline \multirow{2}{*}{CIFAR-2} & Loss & **0.0146** & 0.0122 & 0.0119 & 0.0132 & **0.0158** & **0.0151** & 0.0132 & **0.0158** & **0.0151** \\ & Accuracy & **47.38** & 51.73 & 52.39 & 48.18 & **45.01** & **45.74** & 48.18 & **45.01** & **45.74** \\ \hline \multirow{2}{*}{CIFAR-3} & Loss & **0.0102** & **0.0074** & **0.0076** & 0.0083 & 0.0062 & 0.0070 & 0.0083 & 0.0062 & 0.0070 \\ & Accuracy & **56.06** & **65.65** & **65.20** & 63.03 & 71.69 & 67.89 & 63.03 & 71.9 & 67.89 \\ \hline \hline \end{tabular} \end{table} Table 8: Loss and accuracy of all models on the synthesized data for NP-SBFL-MGA on all the selected models. The best results per suspiciousness measure are shown in bold. K represents the number of suspicious neurons. 
\begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Measure**} & \multicolumn{4}{c}{**Tarantula**} & \multicolumn{4}{c}{**Ochiai**} & \multicolumn{4}{c}{**Barinel**} \\ \cline{3-10} & & K=1 & K=5 & K=10 & K=1 & K=5 & K=10 & K=1 & K=5 & K=10 \\ \hline \multirow{2}{*}{MNIST-1} & Loss & **0.0442** & 0.0317 & **0.0679** & 0.0293 & **0.0713** & 0.0392 & 0.0293 & **0.0713** & 0.0392 \\ & Accuracy & 47.52 & 51.68 & **34.18** & **46.53** & **28.43** & 41.0 & **46.53** & **28.43** & 41.0 \\ \hline \multirow{2}{*}{MNIST-2} & Loss & **0.0437** & **0.0450** & **0.0452** & 0.0393 & 0.0394 & 0.0429 & 0.0393 & 0.0394 & 0.0429 \\ & Accuracy & 33.99 & **30.04** & 35.06 & **33.04** & 35.01 & **34.51** & **33.04** & 35.01 & **34.51** \\ \hline \multirow{2}{*}{MNIST-3} & Loss & **0.0423** & 0.0896 & **0.0665** & 0.0201 & **0.0911** & 0.0652 & 0.0201 & **0.0911** & 0.0652 \\ & Accuracy & **26.04** & 25.47 & 30.52 & 38.19 & 19.50 & **24.17** & 38.19 & 19.50 & **24.17** \\ \hline \hline \multirow{2}{*}{CIFAR-1} & \multirow{2}{*}{K=10} & K=30 & K=50 & K=10 & K=30 & K=50 & K=10 & K=30 & K=50 \\ & Accuracy & **0.0160** & **0.0234** & **0.0209** & 0.0158 & 0.0182 & 0.0205 & 0.0158 & 0.0182 & 0.0205 \\ & Accuracy & **30.76** & **23.06** & 21.27 & 31.61 & 26.80 & 21.22 & 31.61 & 26.80 & **19.17** \\ \hline \multirow{2}{*}{CIFAR-2} & Loss & 0.0196 & 0.0233 & 0.0236 & **0.0320** & **0.0469** & **0.0547** & **0.0320** & **0.0469** & **0.0547** \\ & Accuracy & 38.01 & 32.22 & 33.91 & **20.41** & **17.65** & **15.28** & **20.41** & **17.65** & **15.28** \\ \hline \multirow{2}{*}{CIFAR-3} & Loss & 0.0183 & 0.0222 & 0.0255 & **0.0231** & **0.0242** & **0.0363** & **0.0231** & 0.0234 & **0.0363** \\ & Accuracy & 41.19 & 36.57 & 30.59 & **25.50** & **19.57** & **15.03** & **25.50** & 19.83 & **15.03** \\ \hline \hline \end{tabular} \end{table} Table 9: Loss and accuracy of all models on the synthesized data for NP-SBFL-MGA on all the selected models. The best results per suspiciousness measure are shown in bold. K represents the number of suspicious neurons. Figures 2 and 3 show the loss and accuracy of MNIST and CIFAR on the synthesized data for different approaches and suspicious measures. The figures reveal that among the different approaches, the suspicious paths reported by NP-SBFL-MGA are more responsible for insufficient DNN performance because it obtains the lowest accuracy and higher loss, except for MNIST_1, in all variant K. This observation suggests two points: 1) neural pathway fault localization is more successful than neuron-based fault localization in identifying a DNN low performance, and 2) a gradient ascent synthesizer technique is not suitable for activating sequences of neurons. The last point will be further discussed in RQ2. We used the Wilcoxon rank-sum test [44] for statistical significance at a 95% confidence level and the Vargha and Delaney's A 12 statistics [45] for the effect size measure to compare the performance of NP-SBFL-MGA instances with other methods. Table 9 reports the results. According to the table, NP-SBFL-MGA had a statistically significant difference (p-value \(<\) 0.05) compared to other approaches for all CIFAR-10 models. However, for all MNIST models, NP-SBFL-MGA achieved significantly lower accuracy than other methods. We will further investigate this observation in our future work. Tables 6 to 8 show that there is a performance difference for all instances when using different K values. 
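For reference, the statistical comparison used above (the Wilcoxon rank-sum test at a 95% confidence level together with the Vargha-Delaney A12 effect size) can be reproduced with a few lines of SciPy. The accuracy arrays below are placeholders rather than the paper's measurements, and the helper function name is an assumption.

```python
import numpy as np
from scipy.stats import ranksums, rankdata

def vargha_delaney_a12(x, y):
    """A12 = probability that a random draw from x exceeds one from y
    (ties count as 0.5); 0.5 indicates no effect."""
    m, n = len(x), len(y)
    ranks = rankdata(np.concatenate([x, y]))   # ranks over the pooled sample
    r1 = ranks[:m].sum()                       # rank sum of the first group
    return (r1 - m * (m + 1) / 2) / (m * n)

# Placeholder accuracy values for two approaches over the same set of models.
acc_a = np.array([34.4, 46.7, 41.6, 61.5, 48.2, 63.0])
acc_b = np.array([25.1, 65.4, 40.7, 60.0, 47.4, 56.1])

stat, p = ranksums(acc_a, acc_b)               # Wilcoxon rank-sum test
print(f"p-value = {p:.3f}, A12 = {vargha_delaney_a12(acc_a, acc_b):.2f}")
```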
Further analysis of the trend of accuracy and loss for each instance, respectively depicted in Figures 4 and 5, revealed that increasing K values during fault localization results in the synthesized inputs increasing accuracy and decreasing loss for all instances except for NP-SBFL-MGA. This trend is reversed for all instances of NP-SBFL-MGA except for MNIST with K = 10 when using Ochiai and Barinel. This achievement is because NP-SBFL-MGA uses a multi-stage gradient ascent technique, a layer-wise paradigm, to maximize the activation values produced by suspicious neural pathways, unlike DeepFault and NP-SBFL-GA. Figure 2: Loss and accuracy of MNIST on the synthesized data for different approaches and suspicious measures. Figure 3: Loss and accuracy of CIFAR on the synthesized data for different approaches and suspicious measures. \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Approach} & \multirow{2}{*}{Model} & \multirow{2}{*}{Measure} & \multicolumn{2}{c}{Tarantula} & \multicolumn{2}{c}{Ochiai} & \multicolumn{2}{c}{Barinel} \\ \cline{3-8} & & & A12 & P-value & A12 & P-value & A12 & P-value \\ \hline \hline \multirow{4}{*}{DeepFault} & \multirow{2}{*}{MNIST} & Accuracy & **0.87** & \(<\)**0.001** & **1** & \(<\)**0.001** & **0.92** & \(<\)**0.001** \\ \cline{3-8} & & Loss & 0.62 & 0.01 & 0.48 & 0.04 & 0.35 & 0.10 \\ \cline{2-8} & \multirow{2}{*}{CIFAR} & Accuracy & **1** & \(<\)**0.001** & **1** & \(<\)**0.001** & **1** & \(<\)**0.001** \\ \cline{3-8} & & Loss & **0.97** & \(<\)**0.001** & **0.97** & \(<\)**0.001** & **0.97** & \(<\)**0.001** \\ \hline \hline \end{tabular} \end{table} Table 9: The statistical test compares the performance of NP-SBFL-MGA instances with other methods. Figure 4: Accuracy of MNIST and CIFAR on the synthesized data using different approaches and suspicious measures for a variant number of suspicious neurons, K: MNIST(CIFAR). _RQ1 (Validation)._ There is empirical evidence of suspicious neural pathways that could be causing inadequate DNN performance. NP-SBFL-MGA instances with different suspiciousness measures significantly improve localizing low performance compared to other approaches for most DNN models (except for the loss case on MNIST models). Figure 5: Loss of MNIST and CIFAR on the synthesized data using different approaches and suspicious measures for a variant number of suspicious neurons, K: MNIST(CIFAR). #### RQ2 (Comparison) We compared suspiciousness measures in all instances and performed pair-wise comparisons using the Wilcoxon rank-sum test [44] for statistical significance at a 95% confidence level and the Vargha and Delaney's A'12 statistics [45] for the effect size measure to explore whether there are significant differences between Tarantula, Ochiai, and Barinel. Tables 11 and 12 present the results of these comparisons for MNIST and CIFAR10, respectively. Our study has concluded that NP-SBFG-MGA delivers superior results on all models when using any suspicious measure. This achievement suggests that the neural pathways identified as suspicious by NP-SBFG-MGA have a greater impact on DNN performance than other methods. Therefore, there is no single best spectrum-based suspiciousness measure. This finding is consistent with traditional fault localization techniques used in software development. \begin{table} \begin{tabular}{c _RQ2 (Comparison)._ NP-SBFG-MGA with any suspiciousness measure is statistically superior to other instances in uncovering the low performance of models. 
These findings have significant implications for the field of neural network development and suggest that focusing on specific pathways rather than single neurons can lead to more accurate fault localization. #### RQ3 (Fault Detection Rate) Table 13 displays the proportions of synthesized samples that activate the critical pathways (C) and the proportions of failed synthesized samples that activate the faulty pathways (F) for each instance of all models. In the table, DF stands for DeepFault, NG for NP-SBFL-GA, and NM represents NP-SBFL-MGA. It can be observed that NP-SBFL-MGA using Tarantula achieves higher values in terms of both criteria C and F for all the models compared to the other instances. The second-best result is obtained by DeepFault using Ochiai. The effectiveness of the generated synthesized samples in activating faulty pathways for all approaches is represented by Figures 6 to 8, using Tarantula, Ochiai, and Barinel, respectively. These figures confirm that higher values of the number of covered critical pathways are associated with higher values of the number of failed tests. This point will be statistically proven in RQ4. Figure 7 further demonstrates that Ochiai effectively detects faulty neurons for all models. The results of Figure 6 indicate that Tarantula performs effectively in detecting faulty pathways across all models. The results of DeepFault using Tarantula and Barinel are not as satisfactory for CIFAR10, while NP-SBFL-GA and NP-SBFL-MGA show similar levels of effectiveness for all models. Figure 6: A comparison between the averaged ratios of failed synthesized tests and the synthesized samples activating faulty pathways for Tarantula in different models and approaches. _RQ3 (Fault Detection Rate)._ NP-SBFL-MGA using Tarantula is the most effective approach regarding the ratio of failed synthesized test samples activating faulty pathways for all models. Moreover, Ochiai is an acceptable option for detecting faulty neurons, while Tarantula is a good alternative for detecting faulty pathways. #### RQ4 (Correlation) Figure 9 displays the scatter plots illustrating the correlation between the rate of covered critical pathways and failed tests across all instances and models. Except for DeeFault using Ochiai and NP-SBFL-GA using Tarantula, it is observed that an increase in the rate of covered critical pathways leads to a corresponding increase in the rate of failed tests. We utilized the Spearman correlation coefficient to investigate the statistical correlation between these variables [46]. The reason for selecting the Spearman correlation is its ability to quantify the strength of a monotonic relationship between two variables while remaining agnostic about the relationship's form or the distribution of the data [46]. Our findings indicate a significant positive correlation between the variables. The obtained correlation results are presented in Table 14, encompassing various instances of all DNN models. Bold values within the table denote statistically significant correlations (p-value \(<=\) 0.05). Consequently, our results showcase statistically significant correlations between the rate of covered critical pathways and the percentage of failed tests in all instances. Figure 8: A comparison between the averaged ratios of failed synthesized tests and the synthesized samples activating faulty pathways for Barinel in different models and approaches. 
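The correlation analysis above relies on the Spearman rank correlation; a minimal SciPy sketch is shown below, with placeholder data standing in for the per-instance coverage and failure rates.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder data: rate of covered critical pathways vs. rate of failed tests.
covered = np.array([0.12, 0.25, 0.33, 0.41, 0.58, 0.73])
failed  = np.array([0.10, 0.22, 0.35, 0.38, 0.61, 0.70])

rho, p_value = spearmanr(covered, failed)   # monotonic, distribution-free association
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```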
Figure 7: A comparison between the averaged ratios of failed synthesized tests and the synthesized samples activating faulty pathways for Ochiai in different models and approaches. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Approach} & \multirow{2}{*}{Measure} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{CIFAR} \\ \cline{3-5} & & Spearman & P-value & Spearman & P-value \\ \hline \multirow{3}{*}{DeepFault} & Tarantula & **0.99** & **< 0.001** & **1** & **0** \\ \cline{2-5} & Ochiai & **0.93** & **< 0.001** & **0.91** & **< 0.001** \\ \cline{2-5} & Barinel & **0.99** & **< 0.001** & **1** & **0** \\ \hline \hline \end{tabular} \end{table} Table 13: Correlation results between the number of covered critical pathways and the detected faults by different approaches. The bolds refer to statistically significant correlations (p-value \(<\)= 0:05). Figure 9: The correlation between the rate of covered critical pathways and failed tests across all instances and models. #### RQ5 (Locating Unique Suspicious Neurons) Figure 10 displays the ratios of common neurons in inputs synthesized by NP-SBFL-MGA using Tarantula, Ochiai, and Barinel for various K and different models. Notably, from the left part of Figure 10, Ochiai and Barinel detected the same suspicious neurons, albeit with varying levels of suspicion. Conversely, the right part of Figure 10 reveals that Tarantula identified entirely distinct neurons, particularly in convolutional networks. It shows that the suspicious neurons detected by Tarantula yielded higher reliability levels than those detected by Ochiai and Barinel. Figure 11 illustrates the ratios of common neurons between the best results of DeepFault using Ochiai for K = 10 (50) and the best results of NP-SBFL-MGA using Tarantula for K = 10 (5). As observed in Figure 5, fully-connected networks exhibit more common neurons between NP-SBFL-MGA and DeepFault. However, it is worth noting that suspicious neurons detected by NP-SBFL-MGA differ significantly for larger models such as convolutional networks. Figure 10: A comparison between the Ratios of common neurons in each layer of a DNN model for NP-SBFL-MGA using Ochiai against Barinel (left) and Tarantula against Ochiai/Barinel (right). _RQ5 (Locating Unique Suspicious Neurons)._ Ochiai and Barinel detect common suspicious neurons but with varying suspicion levels among the different suspicious measures. However, Tarantula identifies distinct neurons, particularly in convolutional networks, and detects the neurons with higher reliability than those detected by Ochiai and Barinel. Besides, more common neurons between DeepFault and NP-SBFL-MGA are observed in fully-connected networks. The significant differences are in suspicious neurons detected by NP-SBFL-MGA for complex models like convolutional networks. #### RQ6 (Quality of Synthesized Inputs) This section examines the quality of the generated inputs for NP-SBFL-MGA and the baseline approach DeepFault. We analyze the distance between the original and synthesized images using various distance metrics, such as \(L_{1}\) Manhattan, \(L_{2}\) Euclidean, and \(L_{\infty}\) Chebyshev. Additionally, we consider the naturalness scores, including inception score (IS) [47] and Frechet Inception Distance (FID) [48], for different values of K (# suspicious neurons) in both approaches. Table 15 presents the results for the quality of synthesized inputs over different K values. 
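The distance metrics used in this comparison can be computed directly from the flattened image arrays; the snippet below shows the three norms for a single original/synthesized pair (IS and FID require a pretrained Inception network and are not reproduced here). The example images are random placeholders, not samples from the experiments.

```python
import numpy as np

def image_distances(original, synthesized):
    """L1 (Manhattan), L2 (Euclidean) and L-infinity (Chebyshev) distances
    between an original and a synthesized image, both flattened to 1-D."""
    d = (synthesized - original).ravel()
    return {"L1": np.abs(d).sum(),
            "L2": np.sqrt((d ** 2).sum()),
            "Linf": np.abs(d).max()}

# Placeholder 28x28 grayscale images with pixel values in [0, 1].
rng = np.random.default_rng(1)
orig = rng.random((28, 28))
synth = np.clip(orig + rng.normal(scale=0.05, size=(28, 28)), 0.0, 1.0)
print(image_distances(orig, synth))
```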
According to the table, for MNIST models, DeepFault exhibits a consistent degree of perturbation regardless of the value of K. On the other hand, NP-SBFL illustrates an increase in distances as K increases. Both approaches yield comparable IS scores. However, DeepFault outperforms NP-SBFL-MGA regarding FID values for CIFAR models, indicating a higher level of naturalness. Table 16 compares the distances between the original and synthesized images based on different suspiciousness measures employed by various approaches. The scores for these measures are relatively close for all models in a specific approach, except for DeepFault using Ochiai over the CIFAR model, where a notable difference is observed. Specifically, the inputs synthesized by DeepFault instances using Ochiai achieve the best FID value, while those synthesized by NP-SBFL instances using Tarantula demonstrate the best FID value. In the case of MNIST, Tarantula, and Barinel exhibit the same distance for DeepFault and are lower than Ochiai. These findings provide insights into the quality of the generated inputs and the performance of different approaches in terms of perturbation and naturalness scores. The results highlight the strengths and weaknesses of NP-SBFL-MGA and DeepFault, shedding light on their suitability for specific models and metrics. _Table 14. A comparison between the quality of synthesized input over different K values. K: MNIST(CIFAR)._ \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{K} & \multirow{2}{*}{Approach} & \multicolumn{3}{c|}{MNIST} & \multicolumn{3}{c|}{CIFAR} \\ \cline{3-8} & & \(L_{1}\) & \(L_{2}\) & \(L_{\infty}\) & IS (mean) & IS (std) & FID \\ \hline \end{tabular} Figure 11: A comparison between the ratio of common neurons in each layer of a DNN model for the best results of NP-SBFL-MGA and DeepFault. ### Threats to Validity **Construct Validity**: When conducting experiments, there is a possibility of challenges to the accuracy of the results. Poor accuracy can occur due to various factors, including the selection of datasets and DNN models. To ensure the reliability of our research, we have taken steps to address these potential concerns. Firstly, we have used well-known and widely studied public datasets, such as MNIST and CIFAR-10. Furthermore, we have applied our NP-SBFL method to multiple DNN models with distinct architectures, which have all shown competitive prediction accuracies, as shown in Table 4. Additionally, we have incorporated established suspiciousness measures from the field of fault localization in software engineering, outlined in Algorithm 1, to mitigate any potential threats related to identifying suspicious neural pathways. These efforts contribute to the overall robustness and validity of our research findings. **Internal Validity**: To ensure the accuracy of NP-SBFL-MGA's ability to produce new inputs that trigger suspicious neural pathways, we addressed potential threats to internal validity. We utilized various distance metrics to verify that the generated inputs closely resemble the original inputs and are comparable to those generated by DeepFault. We also took measures to prevent NP-SBFL-MGA's suspiciousness measures from accidentally outperforming the baselines. 
We performed a non-parametric statistical test, specifically the Wilcoxon rank-sum test [44] for statistical significance at a 95% confidence level and Vargha and Delaney's A'12 statistics [45] for the effect size measure to compare the performance of NP-SBFL-MGA and the baselines, and assess any significant differences. **External Validity:** To address potential issues with external validity, NP-SBFL needs to be able to examine the internal structure of DNN and gather data on the activation patterns of neurons to assess their level of suspicion accurately. To achieve this, we utilized PyTorch in developing NP-SBFL, enabling comprehensive white-box analysis of DNNs. While we have examined various measures of suspicion, we acknowledge the potential existence of other measures. Furthermore, we have validated NP-SBFL against multiple instances of DNNs trained on widely-used datasets to ensure its practicality. However, further experiments are needed to assess the efficacy of NP-SBFL in different domains and networks [48]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Measure} & \multirow{2}{*}{Approach} & \multicolumn{4}{c|}{MNIST} & \multicolumn{3}{c|}{CIFAR} \\ \cline{3-8} & & \(L_{1}\) & \(L_{2}\) & \(L_{\infty}\) & IS (mean) & IS (std) & FID \\ \hline \multirow{2}{*}{Tarantula} & DeepFault & **6022.74** & **334.31** & **25.37** & 1.00 & 0.00 & **46.90** \\ \cline{2-8} & NP-SBFL-MGA & 12314.54 & 619.14 & 46.27 & 1.00 & 0.00 & 48.22 \\ \hline \multirow{2}{*}{Ochiai} & DeepFault & **7500.54** & **395.60** & **25.49** & 1.01 & 0.00 & **32.47** \\ \cline{2-8} & NP-SBFL-MGA & 12276.75 & 618.33 & 45.80 & 1.00 & 0.00 & 49.95 \\ \hline \multirow{2}{*}{Barinel} & DeepFault & **6022.74** & **334.31** & **25.37** & 1.00 & 0.00 & **46.90** \\ \cline{2-8} & NP-SBFL-MGA & 12261.40 & 617.59 & 45.78 & 1.00 & 0.00 & 49.96 \\ \hline \end{tabular} \end{table} Table 15: A comparison between the quality of synthesized input over different suspicious measures. Conclusion Deep Neural Networks have successfully solved complex tasks in many real-world applications, such as image classification, recognizing human speech, natural language processing, and software engineering. However, despite their high accuracy, DNNs have quality issues and cannot classify samples correctly in real-world applications. Therefore, there is a need for an effective assessment of the quality of DNNs, especially in safety- and security-critical systems. In this paper, we propose a new fault localization method called NP-SBFL that identifies critical neurons using the Layer-wise Relevance Propagation technique and then determines which critical neurons are faulty. The method's effectiveness is demonstrated on two benchmark datasets, MNIST and CIFAR-10, showing its high accuracy and efficiency in fault localization. We also propose a novel methodology to verify the detected faulty paths based on gradient ascent. Our results show that NP-SBFL is highly effective and can achieve an average 96.75% fault detection rate for all models under test. It substantially outperforms the DeepFault technique that analyzes neurons as the root cause of faults in the neural network. Moreover, the multi-stage gradient ascent used in NP-SBFL is more effective than the simple gradient ascent. In the future, we plan to evaluate NP-SBFL on various other DNNs and datasets. Some examples of these include the steering wheel control of a self-driving car and the Udacity dataset. 
Additionally, we aim to enhance the suspiciousness-guided synthesis algorithm, scale up the generation of synthesized inputs, and explore methods for fixing faulty neural pathways. Furthermore, we intend to design explicit criteria to confine critical pathways, improving the naturalness of the synthesized inputs. These endeavors are crucial in assessing the resilience of DNNs and facilitating the development of safety cases.

#### Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work, the authors used ChatGPT-3.5 in order to improve the language and readability of the work. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
2304.02698
The Runaway Greenhouse Effect on Hycean Worlds
Hycean worlds are a proposed subset of sub-Neptune exoplanets with substantial water inventories, liquid surface oceans and extended hydrogen-dominated atmospheres that could be favourable for habitability. In this work, we aim to quantitatively define the inner edge of the Hycean habitable zone using a 1D radiative-convective model. As a limiting case, we model a dry hydrogen-helium envelope above a surface ocean. We find that 10 to 20 bars of atmosphere produces enough greenhouse effect to drive a liquid surface ocean supercritical when forced with current Earth-like instellation. Introducing water vapour into the atmosphere, we show the runaway greenhouse instellation limit is greatly reduced due to the presence of superadiabatic layers where convection is inhibited. This moves the inner edge of the habitable zone from $\approx$ 1 AU for a G-star to 1.6 AU (3.85 AU) for a Hycean world with a H$_2$-He inventory of 1 bar (10 bar). For an M-star, the inner edge is equivalently moved from 0.17 AU to 0.28 AU (0.54 AU). Our results suggest that most of the current Hycean world observational targets are not likely to sustain a liquid water ocean. We present an analytical framework for interpreting our results, finding that the maximum possible OLR scales approximately inversely with the dry mass inventory of the atmosphere. We discuss the possible limitations of our 1D modelling and recommend the use of 3D convection-resolving models to explore the robustness of superadiabatic layers.
Hamish Innes, Shang-Min Tsai, Raymond T. Pierrehumbert
2023-04-05T18:45:58Z
http://arxiv.org/abs/2304.02698v1
# The Runaway Greenhouse Effect on Hycean Worlds ###### Abstract Hycean worlds are a proposed subset of sub-Neptune exoplanets with substantial water inventories, liquid surface oceans and extended hydrogen-dominated atmospheres that could be favourable for habitability. In this work, we aim to quantitatively define the inner edge of the Hycean habitable zone using a 1D radiative-convective model. As a limiting case, we model a dry hydrogen-helium envelope above a surface ocean. We find that 10 to 20 bars of atmosphere produces enough greenhouse effect to drive a liquid surface ocean supercritical when forced with current Earth-like instellation. Introducing water vapour into the atmosphere, we show the runaway greenhouse installation limit is greatly reduced due to the presence of superadiabatic layers where convection is inhibited. This moves the inner edge of the habitable zone from \(\approx\) 1 AU for a G-star to 1.6 AU (3.85 AU) for a Hycean world with a H\({}_{2}\)-He inventory of 1 bar (10 bar). For an M-star, the inner edge is equivalently moved from 0.17 AU to 0.28 AU (0.54 AU). Our results suggest that most of the current Hycean world observational targets are not likely to sustain a liquid water ocean. We present an analytical framework for interpreting our results, finding that the maximum possible OLR scales approximately inversely with the dry mass inventory of the atmosphere. We discuss the possible limitations of our 1D modelling and recommend the use of 3D convection-resolving models to explore the robustness of superadiabatic layers. + Footnote †: journal: ApJ ## 1 Introduction ### Sub-Neptunes and Hycean Worlds Theorists have long predicted the existence of small, water-rich planets forming outside the ice line with no solar system analogue (Kuchner, 2003; Leger et al., 2004). With the launch of the Kepler space telescope, the population of detected sub-Neptunes increased by orders of magnitude (Batalha, 2014). They are considered one of the most abundant types of planet in the galaxy (alongside super-Earths) (Petigura et al., 2013; Marcy et al., 2014; Winn & Fabrycky, 2015). Limited data leaves their inner compositions largely unconstrained, with structures ranging from almost pure water to rocky/iron cores producing the same mass and radius when given a hydrogen-dominated envelope (Rogers & Seager, 2010; Rogers et al., 2011; Marcy et al., 2014; Lopez et al., 2012; Rogers, 2015; Madhusudhan et al., 2020). Population demographics provide insights into the composition of smaller planets. The discovery of the radius valley (Fulton et al., 2017) separating the population of smaller super-Earths and larger sub-Neptunes provides a way to distinguish these two types of planet. By studying how the position of the valley varies with instellation, there is consensus that it is shaped by atmospheric loss, either via core-powered mass loss (Gupta & Schlichting, 2019) or photoevaporative loss (Owen & Wu, 2013). "Hycean Worlds" (Madhusudhan et al., 2021) are a proposed subset of water-rich sub-Neptunes with hydrogen-dominated atmospheres. The sub-Neptune K2-18 b (Cloutier et al., 2017, 2019) has been the subject of multiple studies owing to the claimed detection of water vapour in its atmosphere (Benneke et al., 2019; Tsiaras et al., 2019), with Madhusudhan et al. (2020) showing that its mass and radius were consistent with being a Hycean world. 
Further work on the habitability (Madhusudhan et al., 2021) and interior structure (Nixon & Madhusudhan, 2021) of Hycean worlds confirm there is a wide range of mass-radius parameter space potentially occupied by this type of planet. Their larger radii and masses compared to terrestrial-type habitable planets makes them more amenable to observational characterisation, with several temperate candidates having already been identified (see Table 1 of Madhusudhan et al., 2021). ### Planetary Habitability and the Runaway Greenhouse Effect #### 1.2.1 Background Most models of planetary habitability focus on atmospheres where the mean molecular weight (MMW) of the background atmosphere is greater than the condensible component (and for good reason - such is the case on Earth). Pioneering work related to Venus's atmosphere (Simpson, 1929; Komabayasi, 1967; Ingersoll, 1969; Nakajima et al., 1992) established a maximum installation above which a planet can no longer increase its infrared cooling into space. This is known as the "runaway greenhouse" limit (Ingersoll, 1969). As high surface temperatures make water vapour a non-negligible component of the atmosphere, there comes a point where the atmosphere becomes optically thick to infrared radiation, decoupling the surface temperature from a fixed photospheric temperature. If a planet's instellation exceeds this value, its water inventory evaporates into the atmosphere, with photodissociation and atmospheric escape eventually driving it into space. The inner edge of the habitable zone (HZ) is typically calculated assuming the runaway greenhouse effect is the limiting factor influencing habitability (Kasting et al., 1993; Kopparapu, 2013). #### 1.2.2 The Impact of an H\({}_{2}\)-He Background and Convective Inhibition As interest around hydrogen-dominated habitable worlds has grown, it is important to assess the impact of hydrogen as a background gas as opposed to nitrogen or other high MMW gas. Hydrogen gas's main source of opacity comes from its collision-induced absorption (CIA) spectrum. Since the strength of absorption scales as the density of the atmosphere squared, CIA becomes an important factor in H\({}_{2}\) atmospheres of around 1 bar or thicker, with 40 bars of pure H\({}_{2}\) allowing habitable surface temperatures out to 1.5 AU for an M star and 10 AU for a G star (Pierrehumbert & Gaidos, 2011). Accounting for absorption and water vapour feedbacks is necessary to further constrain the inner edge of the habitable zone. In Madhusudhan et al. (2021), inner edge calculations are, for the most part, made assuming a constant water mixing ratio of 10% (or less, if condensation occurs). Calculations with a water-saturated atmosphere have also been performed (e.g., Figure 9 of Piette & Madhusudhan, 2020), however their model neglects compositional effects (see below). When hydrogen is used as the background gas in classical runaway greenhouse calculations (Koll and Cronin, 2019), the presence of H\({}_{2}\) introduces novel behaviour on account of its low MMW. Unlike for high MMW background gases which tend to raise the runaway greenhouse limit and lead to non-monotonic changes in outgoing longwave radiation (OLR) with surface temperature, H\({}_{2}\) atmospheres monotonically approach the pure steam limit with increasing surface temperature. Our work aims to build on Koll and Cronin (2019) by introducing the effect of MMW-induced convective inhibition on the temperature profiles of hydrogen-dominated planets. 
For atmospheres with a condensing component heavier than the background gas, decreasing temperatures with altitude lead to a sharp decrease in MMW between the lower atmosphere and the upper atmosphere. If the concentration of the condensible is high enough, compositional gradients stabilise the atmosphere to convection (Guillot, 1995; Li and Ingersoll, 2015; Leconte et al., 2017) and double-diffusive instabilities (Leconte et al., 2017) even if the lapse rate is super-adiabatic. Molecular gradient-induced convective inhibition has been invoked to explain periodic storms in Saturn's atmosphere (Li and Ingersoll, 2015), stable wave ducts for gravity waves in Jupiter's atmosphere (Ingersoll and Kanamori, 1995) and the step-like behaviour of methane's abundance in Uranus's condensation layer (Irwin et al., 2022). Neglecting this effect results in an underestimation of the deep atmospheric temperature in gas giant planets (Leconte et al., 2017). Recent studies have investigated convective inhibition due to condensing silicates in a H\({}_{2}\)-He envelope, which alters the radius and cooling times of young super-Earths and sub-Neptunes (Misener and Schlichting, 2022; Markham et al., 2022).

The atmospheric structure of runaway atmospheres is often assumed to be on a moist adiabat integrated upwards from the surface. However, if the surface temperature is high enough, high moisture contents will inhibit convection and lead to radiative layers in the lower atmosphere. We will discuss this effect in further detail in Section 3. Our aim is to calculate new limiting instellations for sub-Neptunes hosting a liquid water ocean with a hydrogen-dominated atmosphere.

#### 1.2.3 Super-Runaway States

Even a liquid water ocean can be too hot to be habitable for life as we know it, but from the standpoint of planetary structure a second important transition occurs when the surface temperature of the water layer reaches the critical point. At this point, the liquid-gas phase transition of water disappears. More importantly for our purposes, H\({}_{2}\) is completely miscible with supercritical water (Soubiran and Militzer, 2015), so that gaseous solubility is no longer limited by Henry's Law (miscibility is expected for helium and other gases as well). In consequence, a hydrogen envelope could not remain distinct from the supercritical water interior, but instead would mix into it and be diluted into the supercritical ocean. It is an interesting theoretical question how long it would take for such mixing to occur if the planet started out with a distinct hydrogen envelope, but the more plausible scenario for a planet with an initial H\({}_{2}\)-H\({}_{2}\)O composition is that the system would start out with a supercritical H\({}_{2}\)-H\({}_{2}\)O mixture. The H\({}_{2}\) would never phase-separate into a distinct layer if the equilibrium radiation balance maintains supercritical conditions with respect to water. For this reason, we put particular emphasis on defining planetary parameters for which the ocean surface temperature approaches the critical point of water. The chief goal of this paper is to identify conditions in which a subcritical liquid water ocean can coexist with a hydrogen-rich atmosphere, though we do offer some speculations on the state a sub-Neptune settles into when those conditions are not met. The paper is structured as follows.
In Section 2 we will discuss the H\({}_{2}\)-He inventory required to produce a strong enough greenhouse effect to drive surface temperatures to the critical point of water. This will give an upper bound on the instellation a planet can receive from its host star before entering a runaway state. In Section 3 we extend our model to deal with atmospheres with a water vapour component. We will then discuss our results in the context of the literature in Section 4.

## 2 Hydrogen-helium atmosphere above a water ocean

### Model

To find an upper bound on the hydrogen inventory of a planet that maintains a liquid water surface, we first model a hydrogen-helium atmosphere above a water ocean. We assume the temperature structure of the atmosphere is the dry adiabat for a hydrogen-helium mixture with solar abundances (Asplund et al., 2009), neglecting the contribution of water vapour to the atmosphere (similar to the calculation in Pierrehumbert & Gaidos, 2011). This is an unphysical assumption, given that warm temperatures will naturally lead to water evaporating from the surface ocean and mixing with the hydrogen-helium gas. However, it serves as an easily calculated upper limit on the hydrogen-helium content of a planet with temperate instellation, since we expect the addition of water vapour to act as a greenhouse gas, reducing the ability of the planet to cool. Our choice of a purely adiabatic atmosphere is justified by test runs using a full radiative-convective iteration (see Section 3). We found that the tropopause (at \(\approx 0.05\) bars) was typically at lower pressures than the infra-red photosphere (at \(>0.1\) bar). Radiative fluxes were calculated in the longwave (LW) and shortwave (SW) regions of the spectrum using the SOCRATES radiative transfer code (Edwards & Slingo, 1996), the details of which can be found in Appendix A. In the SW calculation, a surface albedo of 0.12 is specified (Goldblatt et al., 2013). Since we are interested in calculating the maximum hydrogen inventory possible before an ocean is driven supercritical, we calculate fluxes for an atmosphere on a dry adiabat with surface temperature equal to the critical temperature of water, \(647\,\mathrm{K}\). We vary the surface pressure logarithmically between 0.5 bar and the critical pressure of water, 220 bar. The SW fluxes were calculated using a solar instellation value of \(1361\,\mathrm{W}\,\mathrm{m}^{-2}\); however, we note that since the temperature structure of the atmosphere is fixed, we can simply multiply our results by a constant factor to retrieve the SW fluxes for any arbitrary instellation. The surface gravity was kept constant at the Earth value, \(9.81\,\mathrm{m}\,\mathrm{s}^{-2}\), throughout. We consider cases where the incoming stellar spectrum is that of a G star or an M star (details of which are found in Appendix A).

### Results

Only moderate pressures of hydrogen and helium are required to force surface temperatures to supercritical values. Figure 1 shows the OLR and SW absorption of these atmospheres as a function of surface pressure. The intersection points between the OLR curve (green) and the SW absorption curves (blue and orange depending on stellar type) signify atmospheric configurations in global equilibrium. For a pure H\({}_{2}\)-He atmosphere with solar instellation, \(S_{0}\), approximately 10 bars of atmosphere will cause a large enough greenhouse effect due to CIA to drive the surface ocean supercritical.
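The equilibrium points in Figure 1 can be located by interpolating the two flux curves and finding their crossing. The sketch below is illustrative only: the curves are synthetic placeholders standing in for the tabulated SOCRATES output, not model results, and the function names are ours rather than part of any existing code.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

# Purely illustrative placeholder curves standing in for the radiative transfer output:
# the OLR falls off with surface pressure (stronger CIA greenhouse), while the absorbed
# shortwave flux (1 - albedo) * S / 4 is comparatively flat.  Replace these with the
# tabulated fluxes from the actual calculation.
logp = np.linspace(np.log10(0.5), np.log10(220.0), 60)   # log10(surface pressure / bar)
olr  = 400.0 * 10.0 ** (-0.9 * (logp - logp[0]))          # W m^-2, toy fall-off
asr  = np.full_like(logp, 60.0)                           # W m^-2, toy absorbed SW

# The equilibrium surface pressure is where OLR equals the absorbed SW flux, i.e. the
# largest H2-He inventory consistent with a 647 K (critical-temperature) surface.
imbalance = interp1d(logp, olr - asr, kind="cubic")
logp_eq = brentq(imbalance, logp[0], logp[-1])
print(f"equilibrium surface pressure ~ {10 ** logp_eq:.1f} bar")
```

Because the temperature structure is held fixed, repeating the exercise at another instellation only requires rescaling the absorbed-SW curve by a constant factor, as noted above.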
The SW absorption in the G-star experiments was reduced relative to the M-star experiments owing to the enhanced Rayleigh scattering cross sections at short wavelengths. However, the change in stellar type causes minimal differences in the qualitative behaviour of our model. Naturally, lower instellation values require larger atmospheric masses to warm the surface to the critical temperature, and atmospheres irradiated with less than 1% of solar radiation would require a surface pressure greater than the critical pressure of water to reach the critical temperature. We note the surface gravity in these experiments was held constant at Earth's value. As noted in Pierrehumbert & Gaidos (2011), the OLR depends on the surface gravity in the combination \(p_{s}^{2}/g\), so changing \(g\) can be accounted for simply by rescaling \(p_{s}\). From Figure 1 we see that increasing the surface gravity from its terrestrial value would be equivalent to lowering the surface pressure, i.e. increasing the OLR (assuming the change in Rayleigh scattering, which scales as \(p_{s}/g\), does not cause the planetary albedo to change significantly). Therefore, for the same level of instellation, a planet with a higher surface gravity can host a higher pressure atmosphere before going supercritical. Nevertheless, we expect the surface gravity of temperate sub-Neptunes to be of a similar order of magnitude to the Earth. The four temperate sub-Neptunes with constrained mass and radius (Table 2 of Pierrehumbert, 2023) have surface gravities spanning the range \(4\,\mathrm{m\,s^{-2}}\) to \(24\,\mathrm{m\,s^{-2}}\).

Figure 1: OLR and SW absorption as a function of surface pressure. The green curve represents the OLR of an H\({}_{2}\)-He atmosphere on a dry adiabat with surface temperature equal to the critical temperature of water. The blue and orange curves show the SW absorption at different instellations relative to solar (\(S_{0}\)) for an M star and G star respectively.

## 3 Adding Water Vapour to the Atmosphere

### Model Setup

#### 3.1.1 Thermodynamics

Water is added to each layer at its saturation pressure value (i.e. the relative humidity is assumed to be 100%). We use the analytic expressions for the saturation vapour pressure given by Wagner & Pruss (2002), generating a lookup table of values for each temperature for faster on-the-fly calculation.

#### 3.1.2 Radiative Procedures

As in Section 2, radiative fluxes are calculated with the SOCRATES radiative transfer code. Unlike in Section 2, where an inverse modelling approach was taken, we now timestep the model by iterating the temperature according to: \[\frac{\partial T}{\partial t}=A(p)\frac{\partial F_{\mathrm{net}}}{\partial p} \tag{1}\] where \(T\) is the temperature and \(F_{\mathrm{net}}\) is the net two-stream flux calculated at the edge of each model layer (defined such that \(F_{\mathrm{net}}\) is positive if the net flux is upwards, in the \(-p\) direction). The coefficient \(A(p)\) varies in each model layer in order to approach radiative equilibrium (where \(\partial_{p}F_{\mathrm{net}}=0\)) as fast as possible. We use the method described in Malik et al. (2017), where \(A(p)\) is decreased if temperature oscillations are detected in that layer and increased on each iteration otherwise. For our purposes we only need to compute the correct equilibrium profile, and do not need to compute the actual time series as the system adjusts to equilibrium.
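A schematic version of this relaxation loop is sketched below. The `net_flux` callable is a generic stand-in for the SOCRATES two-stream call (plus the convective adjustment described in the next subsection), and the oscillation test is only a simplified illustration in the spirit of Malik et al. (2017), not their exact prescription.

```python
import numpy as np

def relax_to_radiative_equilibrium(T, p_edges, net_flux, n_iter=50000, tol=1e-4):
    """Iterate dT/dt = A(p) * dF_net/dp (Equation 1) towards dF_net/dp = 0.

    T        : layer temperatures [K], shape (n,)
    p_edges  : layer-edge pressures [Pa], shape (n+1,), increasing downwards
    net_flux : callable T -> net upward flux at the layer edges [W m^-2], shape (n+1,)
    """
    A = np.full_like(T, 1e-3)            # per-layer acceleration coefficients
    last_dT = np.zeros_like(T)
    for _ in range(n_iter):
        F = net_flux(T)
        dF_dp = np.diff(F) / np.diff(p_edges)
        dT = A * dF_dp                   # layer heating rates, Equation (1)
        # Shrink A where the temperature tendency has flipped sign (oscillation)
        # and grow it elsewhere, so the profile relaxes as quickly as possible.
        oscillating = dT * last_dT < 0.0
        A[oscillating] *= 0.5
        A[~oscillating] *= 1.05
        last_dT = dT
        T = T + dT
        if np.max(np.abs(dF_dp)) < tol:  # local radiative equilibrium reached
            return T
    return T
```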
Using this approach, the independent variable is the instellation, \(S\), as opposed to the surface temperature, \(T_{s}\), which is permitted to vary until the atmosphere reaches local and global radiative equilibrium.

#### 3.1.3 Convective Adjustment

After each iteration, the temperature-pressure profile is checked for instability to convection. We use the instability criterion modified for use in atmospheres where water vapour is at saturation with a lower mean molecular weight background gas (Leconte et al., 2017):

\[(\nabla_{\mathrm{am}}-\nabla_{\mathrm{ad}})(1-\beta q\varpi)>0 \tag{2a}\]
\[\mathrm{with}\quad\nabla_{x}\equiv\frac{\mathrm{d}\ln T_{x}}{\mathrm{d}\ln p}, \tag{2b}\]
\[\beta\equiv\frac{\mathrm{d}\ln p_{\mathrm{sat}}}{\mathrm{d}\ln T}, \tag{2c}\]
\[\varpi\equiv(1-\mu_{d}/\mu_{v}). \tag{2d}\]

The "am" and "ad" suffixes denote "ambient" and "adiabatic" respectively, \(p_{\mathrm{sat}}\) is the saturation vapour pressure and \(\mu_{d}\) and \(\mu_{v}\) are the MMWs of the dry and condensing vapour gases respectively. The first term in Equation 2a is the Schwarzschild criterion (Schwarzschild, 1906) for convective instability when the ambient lapse rate is greater than the moist adiabatic lapse rate. The second term represents the effect of mean molecular weight gradients in an environment where water vapour is held at its saturation vapour pressure value. When the water vapour concentration, \(q\), exceeds a critical value, \(q_{c}\equiv 1/\beta\varpi\), the parcel is no longer unstable to convection despite any super-adiabatic lapse rates. The value of the moist adiabatic lapse rate, \(\nabla_{\mathrm{ad}}\), is calculated using the expression in Ding and Pierrehumbert (2016), which accounts for water vapour being a non-negligible component of the atmosphere but neglects the effects of retained condensates. We replace occurrences of \(L/R_{v}T\) in the lapse rate formula with the more appropriate \(\beta\) factor obtained directly from a lookup table, and any other factors of \(L\) are calculated using a lookup table using fits from Wagner and Pruss (2002). This accounts for vanishing \(L\) as \(T\) approaches the critical temperature of water (647 K). However, we note that at temperatures approaching the critical point of water, the non-ideal behaviour of water vapour will become important; it is neglected in the current work for simplicity. Convective adjustment is performed pairwise on layers working upwards from the bottom of the atmosphere. If Equation 2a is satisfied, we adjust the temperature of both layers to the moist adiabat. This procedure is repeated until convergence is reached. Note that when \(q>q_{c}\), a saturated layer with lapse rate \(\nabla_{\mathrm{am}}\) _less than_ the adiabatic lapse rate - even an isothermal layer - is unstable, and in our scheme is convectively adjusted to the saturated adiabat; radiative cooling may then cause further steepening of the lapse rate. The implementation of Equation 2a leads to two radiative zones forming in the atmosphere when temperatures near the surface are high enough - a traditional stratosphere in the upper atmosphere with low lapse rates and a moisture-inhibited radiative zone below the first convective region. Since the atmosphere is optically thick in this region, the radiative lapse rate is usually high and requires higher vertical resolution. We ran the model with 200 layers, with the bottom 100 layers dedicated to resolving this radiative region of the atmosphere.
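In code, the stability test of Equation 2a and the associated critical concentration reduce to a few lines. The mean molecular weights below are illustrative values for a solar H\({}_{2}\)-He mixture and pure water vapour, and the \(\beta\) value is assumed to come from the saturation-vapour-pressure lookup described above; none of this is the model's actual interface.

```python
def varpi(mu_d=2.35, mu_v=18.0):
    """Equation (2d): compositional factor for a condensible heavier than the background."""
    return 1.0 - mu_d / mu_v

def critical_concentration(beta, mu_d=2.35, mu_v=18.0):
    """q_c = 1 / (beta * varpi): above this concentration, convection is inhibited."""
    return 1.0 / (beta * varpi(mu_d, mu_v))

def unstable_to_convection(grad_ambient, grad_moist_adiabat, q, beta,
                           mu_d=2.35, mu_v=18.0):
    """Leconte et al. (2017) criterion, Equation (2a), for a saturated layer.

    grad_* are d ln T / d ln p for the ambient profile and the moist adiabat,
    q is the saturation water vapour mass concentration, and
    beta = d ln p_sat / d ln T at the layer temperature.
    """
    return (grad_ambient - grad_moist_adiabat) * (1.0 - beta * q * varpi(mu_d, mu_v)) > 0.0

# Illustrative number only: with beta ~ 20 (roughly L / (R_v T) near 273 K) and a solar
# H2-He background, q_c ~ 0.06, so only a few percent of water vapour by mass is enough
# to inhibit convection.
print(critical_concentration(beta=20.0))
```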
In the traditional stratosphere we implement a cold trap for moisture, setting the moisture concentration to its minimum value at pressures lower than the moisture minimum.

#### 3.1.4 Surface Physics

At the surface we implement a rudimentary heat transfer scheme which transfers sensible heat from the surface to the lowest model layer. This prevents the surface temperature from becoming unphysically hotter than the lowest model layer and is a \(0^{\mathrm{th}}\)-order attempt to represent some of the heat exchange processes in the turbulent boundary layer. The surface energy equation is: \[\rho_{s}c_{s}h\frac{\mathrm{d}T_{s}}{\mathrm{d}t}=-F_{\mathrm{net}}(p_{s})+c_{p}\rho_{\mathrm{air}}C_{D}U(T(p_{s})-T_{s}) \tag{3}\] where values for the surface density (\(\rho_{s}\)) and heat capacity (\(c_{s}\)) are taken to be those of liquid water and \(h\) (the bucket depth) is taken as \(1\,\mathrm{m}\). The drag coefficient \(C_{D}\) and the characteristic drag velocity \(U\) are taken to be 0.001 and \(10\,\mathrm{m}\,\mathrm{s}^{-1}\) respectively, from Pierrehumbert (2010). Varying these parameters had a negligible effect on the overall temperature structure of the atmosphere and the final conclusions of this work.

In traditional calculations of the runaway greenhouse limit using an inverse modelling approach (e.g. Kasting et al., 1993), the surface pressure \(p_{s}\) is taken to be the sum of a constant dry component, \(p_{0}\), and the saturation vapour pressure at the specified surface temperature \(T_{s}\). Surface pressure can then increase with temperature, allowing an increase in total atmospheric mass as more water vapour is added to the atmosphere. However, holding \(p_{0}\) fixed as temperature increases leads to an implied change in the dry mass of the atmosphere as temperature changes, because the mean molecular weight of the atmosphere varies and the concentration of the dry mass is not uniform over the profile. For inverse climate modelling, it is straightforward to allow \(p_{s}\) to vary with temperature, but with radiative-convective calculations that compute equilibria using time-stepping or related iterations, the re-gridding needed to allow \(p_{s}\) to vary becomes unwieldy and can lead to numerical issues. For that reason, we make the additional simplification of holding \(p_{s}\) itself fixed, until conditions make such an assumption physically inconsistent. The expedient of holding \(p_{s}\) fixed was also used in the calculations of Figure 9 in Piette and Madhusudhan (2020), which were carried out with a water-saturated atmosphere. In such calculations, there is an additional reduction in implied dry mass as temperature increases, as \(p_{0}\) needs to be reduced in order to compensate for the increase with temperature of the surface water vapour partial pressure. As long as this change is not drastic, it is of little consequence for our purposes, as we are not attempting to track the actual time evolution of an atmosphere. It only means that the dry air mass in the equilibrium state is somewhat different from what was specified for the initial condition of the calculation. The magnitude of the adjustment will be quantified in Section 4.3. When the surface saturation vapour pressure approaches the specified \(p_{s}\), \(p_{0}\to 0\) and it is no longer possible to keep the surface pressure fixed as the surface temperature is further increased.
To deal with this case, we introduce a pure-steam layer for \(p>p_{s}\), extending to a greater surface pressure \(p_{s}^{\prime}\); we then time-step the radiative-convective model only for \(p<p_{s}\), computing the radiation from the pure steam layer assuming it to lie on the pure-steam (i.e. dewpoint) moist adiabat, as discussed in Pierrehumbert and Ding (2016). Since water vapour is heavy and non-buoyant in a hydrogen-dominated background atmosphere, it will remain at the bottom of the atmosphere. A radiative layer in the lower levels of the atmosphere tends to dry out the upper atmosphere (due to its steep lapse rates), leading to a layered structure with pure water vapour at the bottom, nearly pure hydrogen-helium at the top, and a sharp transition layer between the two. All of the mass of hydrogen and helium resides in the upper layer, and the opacity of this layer (which in turn depends on the dry hydrogen-helium mass) strongly affects the conditions for the surface temperature to enter a runaway state. In cases which require a pure steam layer at the bottom, we set the bottom of the atmosphere to a fixed surface temperature (which fixes the surface pressure as the saturation vapour pressure at this temperature). The approach in this case is to determine the instellation which is compatible with the specified surface temperature, requiring multiple runs of the time-stepped model. Unlike in the usual inverse climate modelling approach, the required instellation cannot be determined by just computing the OLR corresponding to a given \(T(p)\) profile, since the profile is affected by the instellation through stellar absorption within the atmosphere. Instead, we need to guess an instellation, time-step the model (subject to a lower boundary condition provided by the steam layer) until \(T(p)\) reaches equilibrium, and then check the top-of-atmosphere balance. The instellation is then adjusted until top-of-atmosphere balance is achieved. When the surface temperature is too high, the OLR becomes decoupled from the surface temperature owing to the optically thick steam layer, and so equilibrium cannot be reached when the instellation exceeds a threshold value, which defines the runaway condition. For instellation above the runaway threshold, the temperature increases until some process intervenes to allow the OLR to increase again, as discussed in Boukrouche et al. (2021) and Pierrehumbert (2023). In this paper, we do not compute the super-runaway equilibrated state, but for a water-rich sub-Neptune with a deep water layer it is certain to be above the critical point of water.

### Experimental Procedure

To model the effect of different dry mass paths, we run the model with two surface pressures - 1 bar and 10 bar. As described above, this surface pressure is held constant until \(q(T_{s},p_{s})=1\) is reached at the bottom of the atmosphere, after which the surface temperature and pressure are increased along the pure steam adiabat. Since steep radiative lapse rates keep the atmosphere relatively dry at all pressures except very close to the surface, this acts to keep the dry mass path of the atmosphere relatively constant at \(\approx\)10\({}^{4}\) kg m\({}^{-2}\) for the 1 bar initial condition and \(\approx\)10\({}^{5}\) kg m\({}^{-2}\) for the 10 bar case (we will quantify this in Section 4.3). Extending the atmosphere on the pure steam adiabat does not add any dry mass to the atmosphere (and instead assumes that the extra mass comes from evaporation from the surface ocean).
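Written out, the outer loop described above for the pure-steam regime amounts to a one-dimensional search over instellation for a fixed surface temperature. The sketch below assumes a placeholder `equilibrium_profile` function that wraps the time-stepped radiative-convective model (with the pure-steam lower boundary) and returns the equilibrated OLR and planetary albedo; it is not the actual model interface.

```python
def toa_imbalance(S, T_surf, equilibrium_profile):
    """Run the time-stepped model at instellation S (W m^-2) with the lower boundary
    set by the pure-steam layer at T_surf, then return OLR - (1 - albedo) * S / 4."""
    olr, albedo = equilibrium_profile(S, T_surf)
    return olr - (1.0 - albedo) * S / 4.0

def instellation_for_surface_temperature(T_surf, equilibrium_profile,
                                         S_lo=10.0, S_hi=2000.0, tol=0.5):
    """Bisect on S until the equilibrated profile is also in global (TOA) balance.
    If the imbalance never changes sign in [S_lo, S_hi], no balanced state exists
    for this T_surf, i.e. the requested state lies in the runaway regime."""
    f_lo = toa_imbalance(S_lo, T_surf, equilibrium_profile)
    f_hi = toa_imbalance(S_hi, T_surf, equilibrium_profile)
    if f_lo * f_hi > 0.0:
        return None
    while S_hi - S_lo > tol:
        S_mid = 0.5 * (S_lo + S_hi)
        if toa_imbalance(S_mid, T_surf, equilibrium_profile) * f_lo > 0.0:
            S_lo = S_mid
        else:
            S_hi = S_mid
    return 0.5 * (S_lo + S_hi)
```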
We will refer to these two cases as the "1 bar" and "10 bar" cases interchangeably with the "\(10^{4}\,\mathrm{kg}\,\mathrm{m}^{-2}\)" and "\(10^{5}\,\mathrm{kg}\,\mathrm{m}^{-2}\)" cases. For both the 1 bar and 10 bar atmospheres, we perform separate runs with the G-star and M-star spectra described in Appendix A. For each set of runs, we begin with a low instellation that gives surface temperatures between 270 K and 300 K and run the model to radiative-convective equilibrium. After the equilibrium state is found, we reinitialise the model with an incrementally higher instellation, chosen as a balance between numerical stability and computational efficiency. Increments of 5 W m\({}^{-2}\) and 1 W m\({}^{-2}\) were used for the 1 bar and 10 bar runs respectively. The initial temperature profile of the first run is a dry adiabat with an isothermal stratosphere - runs at higher instellations are then initialised on the final temperature profile of the previous run. When \(q(T_{s},p_{s})=1\) is reached at the bottom of the atmosphere (i.e. \(p_{\mathrm{sat}}(T_{s})=p_{s}\)), we switch to the procedure described above where the surface temperature is held fixed and the instellation is iterated until both local and global radiative equilibrium are attained. We run the model until the surface temperature reaches 600 K, giving a range of surface temperatures from roughly 270-300 K up to 600 K.

### Results

#### 3.3.1 Temperature and Humidity Profiles

Figures 2 and 3 show sample temperature-pressure profiles and specific humidity profiles for the M star and G star experiments. At pressure levels where \(q>q_{c}\), there is a sharp increase in lapse rate, which corresponds to increased surface temperatures with respect to convecting lower atmospheres. Once the atmosphere reaches the pure steam limit in the lower atmosphere (\(q=1\)), the temperature profile follows a pure steam adiabat. In the M star 1 bar experiment and both G star experiments, enough SW radiation penetrates the lower atmosphere to make the radiative layers have an extremely steep lapse rate compared to both the moist adiabat and the pure steam adiabat. However, in the M star 10 bar experiment (bottom row of Figure 2), attenuation of SW radiation results in a smoother transition between adiabatic and radiative regions.

#### 3.3.2 Surface Temperature vs. Instellation

Figure 4 shows the surface temperature, \(T_{s}\), as a function of the incoming instellation. We divide our graphs into three regions. In the region where \(q(T_{s},p_{s})<q_{c}\) (green in Figure 4), the atmospheric \(T\)-\(p\) structure is a moist adiabat in the lower atmosphere with a radiative stratosphere at low pressures. Once \(q(T_{s},p_{s})>q_{c}\) (orange in Figure 4), there is a radiative layer in the lower atmosphere. The steep lapse rates in this region tend to increase the surface temperature more sharply with instellation than in the lower-temperature cases. Lastly, when \(q(T_{s},p_{s})=1\) (pink in Figure 4), the bottom of the atmosphere is pure steam and lies on a pure steam adiabat. At this point the atmosphere becomes optically thick at all wavelengths and the surface temperature decouples from the OLR. Further increases in the instellation cannot be compensated by additional cooling, and the surface temperature increases until the ocean reservoir is depleted or the critical point of water is reached. In the latter case (applicable to sub-Neptunes with a significant water inventory), the hydrogen atmosphere is miscible with the supercritical water envelope.
The possible equilibrated states of a super-runaway pure steam atmosphere are discussed in Pierrehumbert (2023). The addition of hydrogen in this scenario is beyond the scope of this work. When our initial condition is a 1 bar hydrogen-helium atmosphere, the M-star runaway limit instellation is 435 W m\({}^{-2}\), and our G-star runaway limit is 530 W m\({}^{-2}\). The higher runaway limit for G-stars is due to the Rayleigh scattering cross-section being larger at shorter wavelengths, which leads to an increased SW albedo for the G-star experiment, where more of the instellation is at short wavelengths.

Figure 2: Sample of temperature-pressure profiles (left column) and specific humidity profiles (right column) for the M star experiments with dry mass paths of \(10^{4}\) kg m\({}^{-2}\) (top row) and \(10^{5}\) kg m\({}^{-2}\) (bottom row). The introduction of radiative layers in the lower atmosphere causes a sharp increase in the surface temperature before the lower atmosphere becomes pure steam. The radiative layers have lower lapse rates in the \(10^{5}\) kg m\({}^{-2}\) case because less SW radiation penetrates to the lower atmosphere.

Interesting behaviour is caused by the sudden drop in the maximum possible instellation received in both cases when the atmosphere becomes pure steam (at the boundary between the pink and orange regions in Figure 4). For global equilibrium, we require: \[S=\frac{4\mathrm{OLR}}{1-\alpha} \tag{4}\] where \(\alpha\) is the albedo. Inspecting the relevant terms, the drop in the equilibrium instellation at this boundary is caused by a sudden decrease in the albedo with the introduction of the steam layer. This initial drop is caused by a sudden increase in SW absorption from the sharp increase in water vapour at the bottom of the atmosphere. Less radiation reflects from the surface, decreasing \(\alpha\) and therefore \(S\) in Equation 4 (assuming the OLR remains approximately constant). Adding more steam at the bottom of the atmosphere (increasing surface temperature) eventually causes the albedo to increase again, since the albedo of a thick pure steam layer is greater than the surface albedo, 0.12.

Figure 3: Same as Figure 2 but for the G star experiments.

In the G-star case, more SW radiation penetrates into the lower atmosphere than in the M-star case, leading to a larger increase in albedo and hence a larger increase in instellation. The drop in absorbed SW radiation leads to a narrow range of instellations with multiple equilibrium surface temperatures, some of which are unstable (depending on the sign of \(\,\mathrm{d}S/\mathrm{d}T_{s}\,\); Koll & Cronin, 2019). In the M-star case with 1 bar H\({}_{2}\)-He, the drop in absorbed radiation leads to the atmosphere abruptly entering a runaway state at any instellation above 0.32 \(S_{0}\). With 10 bars of H\({}_{2}\)-He as the initial condition, we see similar overall behaviour to the 1 bar case. The boundaries between the three regimes identified above have shifted to higher surface temperatures due to the higher surface pressure. Due to the much higher CIA optical depth of the dry gas inventory compared to the 1 bar case, there is more SW absorption in these atmospheres. This mutes the varying-albedo effect at the pure steam boundary described above, and leads to the surface temperature increasing monotonically with instellation towards the runaway limit. This limit is \(92\,\mathrm{W}\,\mathrm{m}^{-2}\) for the G-star and \(116\,\mathrm{W}\,\mathrm{m}^{-2}\) for the M-star case.
We note that the M-star limit is now higher than the G-star limit, despite the higher albedos of the G-star-irradiated atmospheres. In the M-star radiative regions at \(p\approx 10\) bar, there is very little SW radiation penetrating the radiative layer. In the optically thick limit, the radiative flux can be approximated as radiative diffusion (Pierrehumbert, 2010; Heng et al., 2014), with the corresponding radiative equilibrium lapse rate: \[\frac{\partial T}{\partial p}=\frac{3}{16}\frac{\kappa}{g\sigma T^{3}}S_{\mathrm{net}} \tag{5}\] where \(\kappa\) is the Rosseland mean opacity and \(S_{\rm net}\) is the net SW flux penetrating the region, which is also the flux that must be carried upward through the region by radiative transfer. If \(S_{\rm net}\) is small in the radiative region (as is the case with the M star 10 bar atmospheres), then the lapse rate is also relatively small, meaning increasing \(S\) has less of an effect on the surface temperature than in cases where more radiation penetrates into the deeper layers.

Figure 4: Surface temperature as a function of incoming instellation for different hydrogen-helium inventories. The introduction of layers with convective inhibition when the moisture content at the bottom of the atmosphere (BOA) is high enough (orange region) causes a rapid increase in surface temperature (and moisture content). Once the bottom of the atmosphere is pure steam (pink region), the atmosphere becomes optically thick and increasing surface temperature is no longer linked with an increase in cooling, causing a runaway state.

The curves in Figure 4 exhibit discrete stepping in regions where there are superadiabatic radiative layers in the atmosphere. This numerical artefact arises from the sensitivity of the surface temperature to the structure of the radiative layer. Discrete stepping is worse if the vertical resolution is low and there are few model levels in the radiative region. In this case, there is a large jump in surface temperature when a new model level becomes inhibited to convection. This behaviour motivated the use of higher vertical resolution in the lower atmosphere (discussed in Section 3.1.3), which reduces the size of the jumps in \(T_{s}\) but does not completely remove them. The structure of the stepping also changes as the instellation increment is increased or decreased, suggesting the surface temperature may be somewhat dependent on the initial state of the model (which is initialised from the final temperature profile of the previous run). We do not believe the numerical artefacts affect the conclusions of our work.

## 4 Discussion

### Comparison of Runaway Limits to Canonical Values

Table 1 summarizes the runaway greenhouse instellations found in the previous section. We can compare these to the classical runaway greenhouse limit calculated for hydrogen atmospheres assumed to be on a simple moist adiabat (e.g. Koll and Cronin, 2019). Since at high \(T_{s}\) the moist adiabat approaches the pure steam limit smoothly (Koll and Cronin, 2019), this is the same as asking the maximum instellation a pure steam atmosphere can receive and remain in global radiative equilibrium. The maximal OLR (sometimes called the Simpson-Nakajima limit) is approximately \(280\,{\rm W\,m^{-2}}\) for Earth's surface gravity. In global equilibrium, this must be equal to \((1-\alpha)S/4\) by Equation 4. Our estimate of the maximum instellation therefore depends on the calculated albedo, \(\alpha\), which varies by spectral type.
We calculate the runaway limit for a pure steam atmosphere to be \(1410\,{\rm W\,m^{-2}}\) and \(1150\,{\rm W\,m^{-2}}\) for our G-star and M-star cases respectively. Comparing these numbers to our model results, for 1 bar of solar H\({}_{2}\)-He mixture the maximum instellation is less than half of the Simpson-Nakajima limit, and it is less than 10% of the classical limit with 10 bars of H\({}_{2}\)-He mixture. This affects the placement of the inner edge of the habitable zone, which is given by: \[d=\bigg{(}\frac{L/L_{\odot}}{S/S_{0}}\bigg{)}^{1/2}{\rm AU} \tag{6}\] where \(L/L_{\odot}\) is the luminosity of the star normalized by the solar value. We take \(L/L_{\odot}=1\) for the G-star and \(\log_{10}(L/L_{\odot})=-1.6\) for the M-star, the same as K2-18's luminosity (Benneke et al., 2019). Our new inner-edge estimates are presented in Table 1 and are further from the host star than previous calculations. For a 10 bar dry mass inventory, our values lie outside the traditional outer edge of the habitable zone (instellations between 0.2 and 0.4 \(S_{0}\); Kopparapu et al., 2013). As a reality check, we note that the low runaway thresholds are compatible with the observed H\({}_{2}\)-dominated outer atmosphere of Uranus, even if the interior is primarily composed of water, since the instellation of Uranus is only 0.0027 that of Earth - more than an order of magnitude lower than the runaway threshold. Neptune is even further below the threshold.

### Why Convective Inhibition Lowers the Runaway Limit

In this section we explore why, for a given instellation, the surface temperature is much hotter in our experiments than in traditional calculations of the inner edge of the habitable zone. Consider a planet with a given instellation, \(S\). The instellation roughly sets the stratospheric temperature as \(T_{\rm strat}\sim(S(1-\alpha)/4\sigma)^{1/4}\) and the temperature of the radiating layer where the characteristic LW optical depth, \(\tau\), is unity. With increasing pressure, the \(T\)-\(p\) profile will follow a radiative layer, followed by a moist adiabatic layer until it reaches the level where \(q=q_{c}\). In our simulations, the atmosphere then follows a radiative lapse rate, which in general is much steeper than the equivalent moist adiabatic lapse rate so long as the atmosphere is opaque enough and has enough SW radiation penetrating to that level. This increased lapse rate leads to much higher surface temperatures for equivalent levels of instellation. This effect is illustrated in Figure 5(a). Once the bottom of the atmosphere becomes pure steam, the surface temperature increases steeply with small increases in instellation, since the bottom of the atmosphere becomes optically thick and decoupled from the OLR. Equivalently, we can imagine the temperature-pressure profile in both the classical and inhibited scenarios starting from the same surface temperature, \(T_{s}\). If \(q(T_{s},p_{s})>q_{c}\), then our modelled atmospheres will follow a steep radiative lapse rate, compared to the shallower moist adiabat. Once \(q<q_{c}\), the inhibited atmosphere will again follow a moist adiabat (albeit one with a much steeper lapse rate than the one departing from \((T_{s},p_{s})\), owing to \(q\) now being more dilute). The resulting upper atmosphere temperature of the inhibited atmosphere will be much lower, leading to a much lower OLR. In global equilibrium, \({\rm OLR}=S(1-\alpha)/4\), and so the maximum allowed instellation for a given surface temperature will be much lower.
This is illustrated in Figure 5(b).

Figure 5: Three \(T\)-\(p\) profiles demonstrating the effect of superadiabatic layers on the radiative balance of the atmosphere. (a) We compare an atmosphere with convective inhibition (2) to one without (1), assuming an identical instellation. Although the radiating temperature (and OLR) is the same between the two cases, the atmosphere with the superadiabatic layer has a higher surface temperature. (b) We again consider an atmosphere with (2) and without (1) a superadiabatic layer, starting from a fixed surface temperature. In this case, the superadiabatic layer causes the radiating temperature (and therefore OLR) of the atmosphere to decrease, which in turn reduces the maximum instellation it can receive. (c) We consider two atmospheres with inhibited layers with different dry mass paths. The pressure at \(\tau=1\) is approximately constant between the two cases. The atmosphere with the greater dry mass path, (2), has a greater average lapse rate between the surface and the radiating level than (1), where the lower atmosphere is pure steam. The radiating level is therefore colder, reducing the maximum OLR of the atmosphere.

### Why Increased Dry Mass Lowers the Runaway Limit

It is clear from Figure 4 that the increased dry mass path in the 10 bar H\({}_{2}\)-He experiments lowers the maximum instellation limit. Figure 5(c) shows two different dry mass path atmospheres with the same surface temperature. The pressure level at which \(\tau=1\) is relatively constant between the two cases, because in the upper atmosphere \(q\ll 1\), so \(\tau=1\) when \(p(\tau=1)\approx g/\kappa_{d}\), where \(\kappa_{d}\) is some characteristic grey opacity of the dry gas inventory. The temperature at which \(q=q_{c}\) is approximately constant with surface temperature (see Section 4.4). Increasing the dry mass path of the atmosphere also increases the pressure level at which \(q\) becomes unity and the atmosphere transitions to a steam layer. As seen in Figure 5(c), since the pure steam layer (pink) has a much lower lapse rate than the radiative layer (orange), the average lapse rate between the surface and the \(\tau=1\) level increases with increasing mass path. A higher average lapse rate decreases the radiating temperature with increasing dry mass path, leading to a drop in the OLR (and hence maximum instellation) in the runaway limit.

\begin{table} \begin{tabular}{c c c} \hline \hline Experiment & Runaway Limit [W m\({}^{-2}\)] (\([S_{0}]\)) & HZ inner edge [AU] \\ \hline G-star, 1 bar H\({}_{2}\)-He & 530 (0.389) & 1.60 \\ M-star, 1 bar H\({}_{2}\)-He & 435 (0.320) & 0.280 \\ G-star, 10 bar H\({}_{2}\)-He & 92 (0.0676) & 3.85 \\ M-star, 10 bar H\({}_{2}\)-He & 116 (0.0852) & 0.543 \\ \hline Simpson-Nakajima, G-star & 1410 (1.04) & 0.982 \\ Simpson-Nakajima, M-star & 1150 (0.847) & 0.172 \\ \hline \end{tabular} \end{table} Table 1: Summary of runaway instellations.
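The inner-edge distances in Table 1 follow directly from Equation 6 and the runaway limits; a minimal check, using the luminosities adopted in the text:

```python
# Inner edge of the habitable zone from Equation 6: d = sqrt((L/L_sun) / (S/S0)) AU
S0 = 1361.0                                 # W m^-2, solar instellation used in the model
cases = {                                   # runaway limits from Table 1 (W m^-2)
    "G-star, 1 bar H2-He":  (530.0, 1.0),
    "M-star, 1 bar H2-He":  (435.0, 10 ** -1.6),
    "G-star, 10 bar H2-He": (92.0,  1.0),
    "M-star, 10 bar H2-He": (116.0, 10 ** -1.6),
}
for name, (S_runaway, L_ratio) in cases.items():
    d = (L_ratio / (S_runaway / S0)) ** 0.5
    print(f"{name}: inner edge ~ {d:.2f} AU")
# Reproduces the 1.60, 0.28, 3.85 and 0.54 AU values in Table 1.
```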
### Discussion of Analytic OLR

In Appendix B, we calculate the OLR from an inhibited atmosphere to be: \[\mathrm{OLR}=\Gamma(1+4\alpha)\Big{(}\frac{\kappa_{d}m_{d}}{\bar{\theta}}\Big{)}^{-4\alpha}\Big{(}q_{0}\frac{\kappa_{v}m_{d}}{\varepsilon\bar{\theta}}\Big{)}^{4/(\beta_{0}-1)}\sigma T_{0}^{4} \tag{7}\] where \(T_{0}\approx 273\) K, \(\beta_{0}\equiv L/(R_{v}T_{0})\), \(q_{0}\equiv 1/(\beta_{0}\varpi)\), \(\alpha\equiv R_{d}/c_{p}\), \(\bar{\theta}=3/5\), \(\varepsilon\equiv\mu_{v}/\mu_{d}\), \(\Gamma\) is the standard gamma function and \(\kappa_{v}\) and \(\kappa_{d}\) are the characteristic grey opacities of the moist and dry components of the atmosphere respectively. Choosing \(\kappa_{v}=0.01\)\(\mathrm{m}^{2}\,\mathrm{kg}^{-1}\) to match the Simpson-Nakajima limit for a pure steam atmosphere, this leaves the OLR as a function of the dry opacity, \(\kappa_{d}\), and the dry mass path, \(m_{d}\). Table 2 shows estimates of the analytic OLR when \(\kappa_{d}=1.6\times 10^{-4}\,\mathrm{m}^{2}\,\mathrm{kg}^{-1}\), a sensible value for the dry opacity which corresponds to an H\({}_{2}\)-He atmosphere that becomes optically thick at approximately 0.6 bars. The term in \(m_{d}^{-4\alpha}\) represents the total optical depth of the dry component of the atmosphere - as this term increases, the radiating temperature drops and the OLR decreases. The second term in \(m_{d}^{4/(\beta_{0}-1)}\) represents how increasing the dry mass of the atmosphere increases the temperature at which the atmosphere becomes radiative, which increases the OLR. In general, \(\alpha\gg(\beta_{0}-1)^{-1}\), and we estimate \(4(\beta_{0}-1)^{-1}-4\alpha\approx-8/7\), so the OLR decreases with dry mass, explaining the trend in Table 1. Moreover, Table 2 shows that the 10 bar M-star case has a reduced dry mass compared to the equivalent G-star case, explaining its higher instellation limit, since from Equation 7 this atmosphere will cool more efficiently. More care could be taken to ensure that the dry mass path of the atmosphere is conserved across our different simulations. With the current model setup, this would involve iterating the surface pressure so that the integral: \[\int_{0}^{p_{s}}(1-q)\frac{\mathrm{d}p}{g} \tag{8}\] is conserved. This was deemed too computationally expensive for the current study. Alternatively, one could implement a self-consistent moisture scheme that keeps track of the mass of the vapour phase, which would naturally conserve dry mass. Moreover, in Appendix B we show that the ratio of this OLR to the Simpson-Nakajima limit can be written as: \[\frac{\mathrm{OLR}}{\mathrm{OLR}_{\mathrm{SN}}}=\frac{\Gamma(1+4\alpha)}{\Gamma(1+4/\beta_{0})}\Big{(}q_{0}\frac{\kappa_{v}m_{d}}{\varepsilon\bar{\theta}}\Big{)}^{4/(\beta_{0}-1)}\Big{(}\frac{\kappa_{d}m_{d}}{\bar{\theta}}\Big{)}^{-4\alpha} \tag{9}\] Since the ratio of gamma functions is of order unity and \(4(\beta_{0}-1)^{-1}\ll 1\), so long as \(\kappa_{d}m_{d}/\bar{\theta}>1\), i.e. the dry inventory of the atmosphere is optically thick, we should expect the OLR to be lower than the classical Simpson-Nakajima limit. The analytic expression provides a relatively good estimate of the OLR but decreases slightly too steeply with increasing \(m_{d}\) - more experiments at different dry mass paths would need to be run to establish the limitations of the power-law formulation.

### What Does a Super-Runaway State Look Like?
Having discussed how the runaway greenhouse threshold changes for inhibited atmospheres, it is natural to wonder what a super-runaway atmosphere would look like. For Earth-like planets with a finite water reservoir, eventually the water inventory will be entirely in the atmosphere, and the lower atmosphere will not be in liquid-vapour phase equilibrium, allowing it to lie on a dry adiabat and increase its OLR (Boukrouche et al., 2021). However, for a Hycean world with an almost limitless supply of water this is not possible. As discussed in Pierrehumbert (2023), a super-runaway state is likely to consist of supercritical water vapour in the lower atmosphere. Since the supercritical phase is not constrained to lie on the phase equilibrium boundary between liquid and vapour (in contrast to the condensing layers above), a deep layer of the interior heats up until the supercritical water layer penetrates to high enough altitudes that radiation to space can increase beyond the runaway limit. In the pure water case discussed in Pierrehumbert (2023), this generally leaves a thin condensing region near the top of the atmosphere, but in the case with a substantial H\({}_{2}\) layer at the top, the radiating level is in the H\({}_{2}\) layer, so the condensing layer is eliminated entirely. The warming proceeds until the H\({}_{2}\) layer becomes hot enough to increase the OLR, but once supercritical water is in contact with the H\({}_{2}\), that layer would mix into the supercritical water and largely disappear, because H\({}_{2}\) (and presumably also He) is completely miscible in supercritical water (Soubiran and Militzer, 2015). A full thermal evolution model would be needed to determine how long this process would take. A curious possibility emerges because the runaway instellation threshold with an H\({}_{2}\) layer is considerably below the pure-steam limit. If the instellation lies between the two thresholds, once the H\({}_{2}\) layer is diluted into the supercritical water interior, a liquid ocean could form again, forcing some H\({}_{2}\) back into the atmosphere. One possibility is that the mixing between the layers results in just enough H\({}_{2}\) remaining in the outer layer for radiative balance to be achieved, predicting a self-regulation in the thickness of the H\({}_{2}\) layer. The scenario we have modelled in this paper corresponds to a cold-start, in which the planet begins in a sub-runaway state and then undergoes a runaway as the stellar luminosity increases. This is a possible scenario for an F or G star, but an alternate scenario for the evolution is a hot-start, in which the planet begins in a super-runaway state, either because of the heat of formation of the planet or because of intense illumination in the extended pre main-sequence stage of low mass stars. In a hot-start, the initial H\({}_{2}\)-H\({}_{2}\)O inventory would most likely begin in a supercritical mixed state. While the instellation remains above the runaway threshold, a significant separated H\({}_{2}\) layer would never form (unless H\({}_{2}\) is a major proportion of the initial composition). 
Once the planet cools down enough to be sub-runaway, though, a liquid ocean will form, leading an H\({}_{2}\) layer to effervesce out of the subcritical liquid ocean.

\begin{table} \begin{tabular}{c c c c} \hline \hline Experiment & OLR [W m\({}^{-2}\)] & Dry mass [kg m\({}^{-2}\)] & Analytic OLR [W m\({}^{-2}\)] \\ \hline G-star, 1 bar H\({}_{2}\)-He & 100 & \(9.92\times 10^{3}\) & 108 \\ M-star, 1 bar H\({}_{2}\)-He & 102 & \(9.01\times 10^{3}\) & 109 \\ G-star, 10 bar H\({}_{2}\)-He & 13.8 & \(9.74\times 10^{4}\) & 11.8 \\ M-star, 10 bar H\({}_{2}\)-He & 26.4 & \(5.99\times 10^{4}\) & 19.9 \\ \hline \end{tabular} \end{table} Table 2: Comparison of model OLR with analytic calculations using a dry gas opacity of \(1.6\times 10^{-4}\) m\({}^{2}\) kg\({}^{-1}\).

### Mixing of H\({}_{2}\)-He into the Pure Steam Layer

The layered structure that occurs in our model at high surface temperatures, with a pure steam layer at the bottom, is a self-consistent solution of the radiative-convective equations. It is a peculiarity of the compositional stability criterion (Equation 2a) that when \(q>q_{c}\), \(q\to 1\) appears to be a singular limit. When \(q=1\) exactly, the moist stability criterion is the usual criterion that the lapse rate be steeper than the adiabat. However, if even an infinitesimal amount of hydrogen or helium mixes into the pure steam layer, so that \(q=1-\delta\) with \(\delta\ll 1\), the compositional stability criterion then nominally applies, according to which lapse rates steeper than the moist adiabat are stable. Radiative cooling would then be expected to generate a steep radiative layer within the nearly pure steam layer, no matter how small \(\delta\) may be. However, since the stabilizing compositional buoyancy becomes exceedingly weak for small \(\delta\), many other mixing processes could intervene, so we find the generation of a radiative layer under these circumstances to be implausible. We cannot rule out the possibility, though; it is a matter that will need to be resolved by future resolved-convection modelling.

### Implications for Observations

Our main result is that the runaway limit for sub-Neptune water worlds is greatly reduced with even 1 bar of hydrogen. Due to observational biases favouring the detection of low semi-major axis planets, most of the current observational candidates for Hycean worlds have relatively high equilibrium temperatures. From Table 1, the inner edge of the habitable zone is around 0.28 AU for our formulation. Table 1 of Madhusudhan et al. (2021) lists potential Hycean world candidates, all of which lie within our inner edge estimate and therefore would only be able to sustain a liquid water ocean if hydrogen were not detected in large abundances in the atmosphere. The well-studied sub-Neptune K2-18 b, originally thought to lie right on the inner edge of the classical habitable zone, is well inside the inner edge by this measure. In this case, the planets most likely to host liquid water oceans on close-in orbits are either terrestrial planets with a higher mean-molecular-weight background gas, or pure "water worlds" with little to no H\({}_{2}\) envelope and an atmosphere predominantly composed of steam. The latter type of planet and its evolution have been studied previously in the highly-irradiated regime (Mousis et al., 2020; Aguichine et al., 2021), finding that water worlds may fit the observed mass-radius distribution of small radius planets.
The super-Earth sized planet Kepler-138 d (Piaulet et al., 2023) is a candidate volatile-rich planet. It has a radius of 1.5 \(R_{\oplus}\) and a low density, making it not dense enough to be predominantly rocky, but also too dense to sustain a significant hydrogen envelope that would not be lost through atmospheric escape. Although this particular planet is above the runaway greenhouse instellation threshold, similar cool planets may be able to host liquid water oceans at near-Earth instellations. The ability to observationally distinguish liquid water surfaces and super-runaway mixtures of H\({}_{2}\)-H\({}_{2}\)O would allow us to verify some of the predictions of this paper. Mapping the transition from sub-runaway planets to super-runaway planets as a function of instellation would give us the runaway instellation limit, which could be compared to the predictions in Table 1 to provide evidence for or against robust super-adiabatic layers. The combined non-detection of ammonia and detection of methanol in a sub-Neptune atmosphere has been proposed as a method of distinguishing a shallow water surface (Tsai et al., 2021). However, this method requires around 20 transits with the James Webb Space Telescope, which may prove unfeasible given time allocation restrictions. Our results also predict a sharp transition between sub-runaway atmospheres, where the upper atmosphere is very dry (see Figures 2 and 3), and super-runaway atmospheres that are moist due to the mixing of H\({}_{2}\) and supercritical H\({}_{2}\)O (Pierrehumbert, 2023). In contrast, the moistening of the upper atmosphere in the classical runaway greenhouse limit without convective inhibition is smoother as instellation is increased and would occur at higher instellations. Further complications arise from the presence of non-Hycean sub-Neptunes, where the presence of water in the atmosphere is not necessarily correlated with the interior structure or the presence of a surface. More work is needed to understand how we can observationally disentangle the various possible atmospheric structures of habitable zone sub-Neptunes.

### Robustness of Calculations and Caveats

#### 4.8.1 Day-Night Averaging

Our model is one-dimensional, and makes the assumption that the stellar radiation is redistributed evenly over the dayside and nightside of the planet. The superadiabatic layers in our model are sustained by the need to remove the stellar flux deposited at the surface. If day-night heat redistribution is not efficient, then the nightside of these planets may be able to sustain shallower lapse rates, which would aid radiative cooling. However, for thick hydrogen atmospheres on temperate sub-Neptunes, general circulation models (Charnay et al., 2021; Innes and Pierrehumbert, 2022) have shown that the combination of slow rotation rate and low mean molecular weight atmospheres produces globally weak temperature gradients thanks to dynamical redistribution of heat. This suggests heat deposited near the surface would be transported horizontally to the nightside, maintaining the steep lapse rates globally. Scaling relations for shallow atmospheres also suggest low MMW atmospheres should have efficient redistribution of heat (Koll, 2022).

#### 4.8.2 Assumption of Saturation

Our model also assumes 100% relative humidity in the column (except above the stratospheric cold trap). One major effect of 3D dynamics is to cause subsiding regions (e.g.
due to the descending branch of a Hadley-like circulation, or nightside subsidence on a tidally locked exoplanet). Descending dry air causes compressional heating and undersaturation, and can be responsible for regions where the OLR is locally greater than the result of a globally-averaged calculation (Pierrehumbert and Swanson, 1995; Leconte et al., 2013). If an atmosphere is undersaturated to the point that it lies below the critical water vapour mixing ratio \(q_{c}\), then the superadiabatic layers responsible for surface heating may not form. Moreover, on Earth moist convection is the main mechanism by which water vapour is transported vertically in the atmosphere. Within the inhibited superadiabatic layer, mixing by convection is suppressed and our assumption of 100% relative humidity may break down aloft. However, we note that if the near-surface layers are saturated (due to being close to the ocean surface) and the layers aloft are undersaturated, this induces an even greater mean molecular weight gradient to stabilise the atmosphere to convection than before. Moreover, since undersaturated lofted parcels would travel on the dry adiabat, which has a steeper lapse rate than the moist adiabat, this would again help stabilise the atmosphere to convection. One could argue that decreasing relative humidity with height would affect the radiative calculations. However, from Section 4.3 and Appendix B we can see that the main driver of reduced cooling is the radiative effect of the dry mass of the atmosphere. A decrease in relative humidity with height would likely decrease the thickness of the superadiabatic layer, in which case the reduction in OLR may not be as severe as in the fully saturated scenario.

#### 4.8.3 Clouds and Hazes

Cloud and haze opacities were not included in our model. We expect their introduction to affect our results in three main ways. Firstly, the increase in LW opacity due to cloud water or hazes would exacerbate the greenhouse effect and, taken independently of other cloud and haze radiative feedbacks, decrease the runaway instellation. Secondly, the SW scattering properties of clouds and hazes increase the effective planetary albedo. This opposes the greenhouse effect and increases the value of the runaway instellation. Thirdly, clouds and hazes may reduce the magnitude of shortwave radiation penetrating the lower atmosphere. This would reduce the radiative lapse rates in the inhibited layers by reducing \(S_{\rm net}\) in Equation 5. The reduction of radiative lapse rates will decrease the surface temperature for any given instellation, raising the runaway instellation. A similar effect was modelled in Piette & Madhusudhan (2020), who demonstrated that surface water oceans were possible on K2-18 b if the haze scattering opacity was high enough. In this case, the lower atmosphere becomes isothermal, allowing for temperate oceans at high pressures. However, their model neglected the effect of convective inhibition. Moreover, as discussed in Section 3.1.3, saturated subadiabatic radiative layers are unstable to convection when \(q>q_{c}\), implying that above the critical moisture threshold, the moist adiabat is the minimum possible lapse rate. If the cooling effects of clouds and hazes dominate their potential warming effect, then our runaway instellations in Table 1 are likely too pessimistic and the inner edge of the habitable zone could be at lower orbital distances.
The magnitude of the cloud and haze radiative effects is likely to be strongly dependent on the particles' microphysical properties and 3D spatial distribution (e.g., Yang et al., 2013; Turbet et al., 2021). These effects are beyond the scope of our simplified 1D model, though we encourage future efforts to quantify the impact of clouds and hazes on our results.

#### 4.8.4 Other Heat Transport Mechanisms

We also need to consider other mechanisms which may be able to transport heat through the stabilised layers. For example, how efficient is thermal conduction at transporting flux deposited in the lower atmosphere? We can compare the efficiency of thermal conduction to that of radiation by comparing the effective thermal conductivities for conduction and radiation, as in Markham et al. (2022). For an ideal gas, the thermal conductivity is: \[k_{\rm cond}=\rho\lambda c_{v}\sqrt{\frac{2k_{B}T}{\pi m}}\approx 1\,{\rm W\,m^{-1}\,K^{-1}} \tag{10}\] where \(\lambda\) is the mean free path, \(m\) is the mass of an average gas molecule, \(k_{B}\) is the Boltzmann constant and \(c_{v}\) is the specific heat capacity at constant volume. We choose characteristic values to give an upper limit on \(k_{\rm cond}\). We use \(T=300\) K, \(\lambda=k_{B}T/(\sqrt{2}\pi d^{2}p)\) with \(d=290\) pm (the kinetic diameter of a hydrogen molecule, Mehio et al., 2014) and \(m=2\) amu. Density is calculated using the ideal gas law and we note the final result is independent of pressure. The effective radiative conductivity of an optically thick gas is approximately \[k_{\rm rad}=\frac{16}{3}\frac{\sigma T^{3}}{\kappa\rho}\approx 10^{4}\,{\rm W\,m^{-1}\,K^{-1}} \tag{11}\] where we have used \(\kappa=0.01\) m\({}^{2}\) kg\({}^{-1}\) and the same \(T\) and \(\rho\) as in our calculation of \(k_{\rm cond}\). We conclude that energy transport via radiation is much more efficient than thermal conduction. Other sources of heat transport that could be considered are advective. Although the superadiabatic regions are statically stable, eddy heat transport could play a role in transporting heat vertically, especially if the vertical wind shear is high. Moreover, we have also neglected latent heat fluxes, which could play a significant role if there is substantial condensation or re-evaporation of condensates around the region of interest. Within the framework of a 1D model with no dynamics, it is very difficult to get an accurate estimate of the magnitude of these fluxes. Studying this system with a cloud-resolving model (e.g. Lefevre et al., 2021; Tan et al., 2021) is key to understanding the robustness of the superadiabatic layer against other mechanisms of heat transport. These models would also indicate the radiative effect of water clouds on the system and the derived runaway greenhouse limits.

#### 4.8.5 Possibility of Multiple Equilibria

Lastly, apart from in the region near the runaway limit (see Section 3.3), our model does not consider the possibility of multiple equilibrium states and hysteresis. Given that the phase structure of a water world can change drastically on either side of the runaway limit, from steam above a surface ocean to a supercritical envelope (Pierrehumbert, 2023), there is a possibility that much warmer surface temperatures could be achieved with a similar instellation if we modelled the atmosphere with a supercritical water layer mixed with hydrogen gas. Our models represent a "cold start", i.e. warming up a planet that initially starts with a surface water ocean.
However, realistically a sub-Neptune will form hot and cool down from a state where the water is supercritical (Misener and Schlichting, 2022; Markham et al., 2022). If there are multiple equilibrium solutions, it is possible that even when the instellation lies below the runaway threshold, the water will still be in a supercritical state. Moreover, in our model we have neglected the effect of internal heating from the residual heat of formation or tidal heating. Given that our model only requires \(\approx 1\) W m\({}^{-2}\) of solar flux penetrating the lower layers to drive steep superadiabatic lapse rates, a similar level of internal flux could equally sustain very high surface temperatures in the absence of a significant instellation.

## 5 Conclusions

The aim of this study was to determine the sustainability of a liquid water ocean on a Hycean world with a significant H\({}_{2}\)-He inventory. Our major findings are:

1. Neglecting water vapour feedbacks, 10-20 bars of solar H\({}_{2}\)-He mixture will drive a surface ocean supercritical when forced with solar instellation. A planet receiving 10 times solar instellation would have to have less than 1 bar of hydrogen to sustain a liquid water ocean.
2. Including water vapour feedbacks, the presence of superadiabatic layers where convection is inhibited in the lower atmosphere reduces the runaway greenhouse instellation limit significantly below the Simpson-Nakajima limit. For a solar H\({}_{2}\)-He inventory of around \(10^{4}\) kg m\({}^{-2}\), the runaway greenhouse limit is reduced to an instellation of approximately 530 W m\({}^{-2}\) for a G-star and 435 W m\({}^{-2}\) for an M-star. This reduces further to around 100 W m\({}^{-2}\) for an H\({}_{2}\)-He inventory of around \(10^{5}\) kg m\({}^{-2}\).
3. The reduced instellation limits correspond to moving the inner edge of the habitable zone to around 1.6 AU (3.85 AU) for a planet orbiting a G-star with 1 bar (10 bar) of H\({}_{2}\)-He, and equivalently 0.280 AU (0.543 AU) for a planet orbiting an M-star (c.f. 0.982 AU and 0.172 AU for a G-star and M-star respectively from previous models).
4. Analytical models of the OLR show that the key parameter responsible for the reduction in the OLR is the total optical depth of the dry inventory, given that steep superadiabatic lapse rates in the inhibited layers dry the atmosphere aloft. A higher dry optical depth reduces the radiating temperature of the atmosphere and caps the maximum cooling from a H\({}_{2}\)-H\({}_{2}\)O atmosphere. If we model the atmosphere as having a constant, gray opacity for the dry gases, then the limiting OLR scales roughly as the inverse of the dry mass path.
5. Our results suggest that most of the current Hycean world targets are within the inner limit of the habitable zone and unlikely to host liquid water oceans. The most promising targets for observing a liquid water ocean on a close-in orbit are therefore traditional terrestrial-like planets with a high mean-molecular-weight background atmosphere or "water worlds" with negligible H\({}_{2}\)-He envelopes.

We conclude by encouraging the use of 3D cloud-resolving models to study the robustness of the inhibited, superadiabatic radiative layers to 3D dynamics and other sources of heat flux. This paper is supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 740963). S.-M.T. thanks Matej Malik and Daniel Kitzmann for the discussion on the water continuum absorption. S.-M.T.
acknowledges support from the University of California at Riverside and NASA Exobiology grant 80NSSC20K1437.

## Appendix A Calculation of Radiative Fluxes

We calculate the radiative fluxes using the SOCRATES code, based on Edwards & Slingo (1996). SOCRATES uses the correlated-\(k\) method to efficiently calculate two-stream fluxes in both the long-wave (LW) and shortwave (SW) regions of the spectrum. Gaseous overlap is treated with the equivalent extinction method with resorting and rebinning (Lacis & Oinas, 1991). In Section 2, we calculate LW fluxes in 300 bands equally spaced in wavenumber between \(1\,\mathrm{cm}^{-1}\) and \(5000\,\mathrm{cm}^{-1}\). In the SW region, we perform two separate calculations for a G-type star (using the Lean & DeLand (2012) solar spectrum) and an M-type star (calculated using PHOENIX (Husser et al., 2013) for a 3500 K star with \(\log(g)=5.0\) and solar metallicities and alpha element abundances). In Section 2 we use 300 bands equally spaced in wavenumber in each of the LW and SW regions of the spectrum. For the calculations in Section 3 we use 30 bands in each of the LW and SW regions to speed the convergence of the numerical iteration. We found no significant differences in the temperature-pressure profiles in benchmark models run with 300 bands in each region compared to 30 bands, justifying this reduction in wavenumber resolution. For the M-star we use the range \(250\,\mathrm{cm}^{-1}\) to \(40\,000\,\mathrm{cm}^{-1}\) and for the G-star we use \(250\,\mathrm{cm}^{-1}\) to \(50\,000\,\mathrm{cm}^{-1}\). We consider the collision-induced absorption (CIA) due to H\({}_{2}\)-H\({}_{2}\) and H\({}_{2}\)-He interactions as the main sources of absorption. We calculate H\({}_{2}\)-He opacities using the HITRAN database (Karman et al., 2019) and use the HITRAN database with additional data from Borysow (2002) for the calculation of H\({}_{2}\)-H\({}_{2}\) opacities. In the SW calculation we include the effects of Rayleigh scattering by both hydrogen and helium, calculating the cross sections using fits of refractive indices taken from Peck & Huang (1977) and Cuthbertson & Cuthbertson (1932) respectively. A SW surface albedo of 0.12 was specified (Goldblatt et al., 2013) and the surface temperature was assumed to be identical to the temperature of the lowest layer of the atmosphere. Water vapour absorption \(k\)-coefficients are calculated using HITRAN data (Karman et al., 2019) and continuum absorption is calculated with the MT_CKD model (Mlawer et al., 2012). The refractive index of water used to calculate the Rayleigh scattering coefficients was taken from Ciddor (1996).

## Appendix B Analytic OLR Calculations

We can understand our results better by constructing a simple grey analytic framework for calculating the maximum OLR of our model atmospheres. In the optically thick limit (\(\tau\gg 1\)), the OLR of a grey atmosphere is given by: \[\mathrm{OLR}\approx\int_{0}^{\infty}\sigma T(\tau)^{4}e^{-\tau}\,\mathrm{d}\tau\] (B1) To calculate this integral, we require the temperature profile \(T(\tau)\) and the optical depth: \[\tau(p)=\int_{0}^{p}\frac{\mathrm{d}p^{\prime}}{\bar{\theta}g}(\kappa_{d}(1-q)+q\kappa_{v})\] (B2) where \(\kappa_{d}\) and \(\kappa_{v}\) are characteristic grey opacities for the dry and condensible phases of the atmosphere respectively and \(\bar{\theta}\) is the average zenith angle, accounting for the angular distribution of the LW radiation.
To simplify the problem, we split our atmospheres into two regions separated at \(q=q_{c}=1/(\beta\varpi)\), since we expect the lapse rates in these two regions to be very different. We assume that the moisture at this level is dilute such that we can make the approximation: \[q\approx\varepsilon\frac{p_{\mathrm{sat}}}{p}\] (B3) This approximation is valid if \(q_{c}<0.1\), i.e. \(T<0.1L\varpi/R_{v}=480\) K assuming \(\beta=L/(R_{v}T)\). We can verify that this is the case in all of our simulations. We then approximate \(p_{\mathrm{sat}}(T)\) in the same way as Koll & Cronin (2019), writing: \[p_{\mathrm{sat}}(T)=p_{0}\bigg(\frac{T}{T_{0}}\bigg)^{\beta_{0}}\] (B4) where \(\beta_{0}\equiv L/(R_{v}T_{0})\) and \((p_{0},T_{0})\) is some reference point on the vapour-liquid phase curve of water chosen to be close to our region of interest. As in Koll & Cronin (2019), we will choose \(p_{0}\) as the pressure where a pure steam atmosphere has unity optical thickness: \[p_{0}=\frac{g\bar{\theta}}{\kappa_{v}}\] (B5) This gives \((p_{0},T_{0})=(588.6\ \mathrm{Pa},272.6\ \mathrm{K})\) for \(g=9.81\ \mathrm{m\,s^{-2}}\), \(\kappa_{v}=0.01\ \mathrm{m^{2}\,kg^{-1}}\), \(\bar{\theta}=3/5\), remarkably close to the triple point of water vapour. If Equation B3 holds, then the temperature and pressure \((T_{*},p_{*})\) at which \(q=q_{c}\) satisfy: \[\varepsilon\frac{p_{0}}{p_{*}}\bigg(\frac{T_{*}}{T_{0}}\bigg)^{\beta_{0}}=\frac{R_{v}T_{*}}{L\varpi}\] (B6) \[T_{*}=T_{0}\bigg(\frac{1}{\beta_{0}\varpi}\frac{p_{*}}{\varepsilon p_{0}}\bigg)^{1/(\beta_{0}-1)}\] (B7) By noting that \(\beta_{0}\gg 1\) in our range of \(T_{0}\) (Koll and Cronin, 2019), we can readily verify our dilute approximation: \[\frac{T_{*}}{L\varpi/R_{v}}\approx\frac{1}{\beta_{0}\varpi}=0.06\] (B8) for \(T_{0}=272.6\) K. We will also define \(q_{0}=1/(\beta_{0}\varpi)\) as the moisture inhibition threshold at \(T=T_{0}\). We then want to relate our pressure \(p_{*}\) to the dry mass path. We note that since \(q_{c}\) is relatively dilute, the atmosphere will quickly dry on the moist adiabat extending upwards from this point, leaving most of the upper atmosphere dry. For a dry mass path \(m_{d}\), we can then write \(p_{*}\approx m_{d}g\). We assume that the temperature structure of the upper atmosphere (\(p<p_{*}\)) is approximately a dry adiabat emanating from \((p_{*},T_{*})\). This neglects the effect of moisture on the lapse rate, which can be significant but quickly leads to intractable solutions since the lapse rate \(\,\mathrm{d}T/\mathrm{d}p\,\) depends on \(q(p,T)\).
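As a quick numerical check of Equations B5-B8, the sketch below evaluates \(p_{*}\approx m_{d}g\) and \(T_{*}\) for a given dry mass path. Since the definitions of \(\varpi\) and \(\varepsilon\) are given earlier in the paper and not reproduced in this excerpt, the values \(\varpi\approx 0.89\) and \(\varepsilon\approx 8\) used here for an H\({}_{2}\)-H\({}_{2}\)O mixture are illustrative assumptions.

```python
import math

# Assumed parameters (varpi and eps are illustrative assumptions; the rest
# follow the values quoted in Appendix B).
L, R_v, T0 = 2.5e6, 461.5, 272.6      # [J/kg], [J/(kg K)], [K]
g, kappa_v, theta_b = 9.81, 1e-2, 3.0 / 5.0
varpi, eps = 0.89, 8.0

beta0 = L / (R_v * T0)                # ~20
p0 = g * theta_b / kappa_v            # Equation B5: ~589 Pa
q0 = 1.0 / (beta0 * varpi)            # inhibition threshold at T0

m_d = 1e4                             # dry mass path [kg m^-2]
p_star = m_d * g                      # pressure where q = q_c
T_star = T0 * (q0 * p_star / (eps * p0)) ** (1.0 / (beta0 - 1.0))  # Equation B7

# Dilute check (Equation B8): q_c at T_star should stay well below ~0.1
q_c = R_v * T_star / (L * varpi)
print(f"p* = {p_star:.3g} Pa, T* = {T_star:.1f} K, q_c(T*) = {q_c:.3f}")
```

Under these assumptions \(T_{*}\) stays within a few kelvin of \(T_{0}\) and \(q_{c}(T_{*})\approx 0.06\), consistent with the dilute approximation above.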
This dry adiabat has the form: \[T=T_{*}\bigg(\frac{p}{p_{*}}\bigg)^{\alpha},\quad\alpha\equiv\frac{R_{d}}{c_{p}}\] (B9) This allows us to write an equation for how \(\tau\) varies with \(\ln T\): \[\frac{\mathrm{d}\tau}{\mathrm{d}\ln T}=\bigg(\frac{\mathrm{d}\ln T}{\mathrm{d}\ln p}\bigg)^{-1}p\frac{\mathrm{d}\tau}{\mathrm{d}p}\] (B10) \[=\alpha^{-1}\bigg[\frac{\kappa_{d}p}{\bar{\theta}g}+(\kappa_{v}-\kappa_{d})\frac{\varepsilon p_{\mathrm{sat}}}{\bar{\theta}g}\bigg]\] (B11) \[=\alpha^{-1}\bigg[\frac{\kappa_{d}p_{*}}{\bar{\theta}g}\bigg(\frac{T}{T_{*}}\bigg)^{1/\alpha}+(\kappa_{v}-\kappa_{d})\frac{\varepsilon p_{0}}{\bar{\theta}g}\bigg(\frac{T}{T_{0}}\bigg)^{\beta_{0}}\bigg]\] (B12) Integrating this relation and letting \(\tau=0\) when \((T,p)=(0,0)\) yields: \[\tau=\alpha^{-1}\bigg[\frac{\alpha\kappa_{d}p}{\bar{\theta}g}+(\kappa_{v}-\kappa_{d})\frac{p}{\beta_{0}\bar{\theta}g}q\bigg]\] (B13) To calculate the OLR, ideally we would invert Equation B13 to find \(T(\tau)\) for use in Equation B1. However, due to the mixed powers of \(T\) in Equation B13 (one term in \(T^{1/\alpha}\) and one in \(T^{\beta_{0}}\)), this cannot be done analytically. To proceed, we compare the magnitude of the two terms and argue that the second term can be neglected so long as: \[q\ll\frac{\beta_{0}\alpha}{\kappa_{v}/\kappa_{d}-1}\] (B14) Let us take characteristic values of \(\beta_{0}\) at \(T_{0}=273\) K, \(\alpha\approx 2/7\), and \(\kappa_{v}=0.01\) m\({}^{2}\) kg\({}^{-1}\) and \(\kappa_{d}=1.6\times 10^{-4}\) m\({}^{2}\) kg\({}^{-1}\) (this value of \(\kappa_{d}\) yields good agreement between our final analytical OLR and simulations and is consistent with H\({}_{2}\) becoming optically thick in the infra-red between 0.1 and 1 bar). In this case, Equation B14 gives: \[q\ll 0.1\] (B15) We assert this to be true around the \(\tau\sim 1\) region of the atmosphere, even if it doesn't strictly hold at \(p=p_{*}\). We then have: \[T=T_{*}\bigg(\frac{\tau}{\tau_{*}}\bigg)^{\alpha},\quad\tau_{*}\equiv\frac{\kappa_{d}p_{*}}{\bar{\theta}g}=\frac{\kappa_{d}m_{d}}{\bar{\theta}}\] (B16) which when inserted into Equation B1 yields \[\text{OLR}=\Gamma(1+4\alpha)\tau_{*}^{-4\alpha}\sigma T_{*}^{4}\] (B17) where \(\Gamma\) is the standard gamma function. Expanding \(T_{*}\) from Equation B7 and \(\tau_{*}\) from Equation B16 yields: \[\text{OLR}=\Gamma(1+4\alpha)\Big(\frac{\kappa_{d}m_{d}}{\bar{\theta}}\Big)^{-4\alpha}\Big(q_{0}\frac{\kappa_{v}m_{d}}{\varepsilon\bar{\theta}}\Big)^{4/(\beta_{0}-1)}\sigma T_{0}^{4}\] (B18) We immediately see that this OLR limit does not depend on the surface temperature, which is characteristic of a runaway greenhouse atmosphere. The first bracket, corresponding to \(\tau_{*}\), represents the effect of increasing the dry opacity of the atmosphere. This shifts the radiating temperature up the adiabat with exponent \(\alpha\), and therefore reduces the OLR of the atmosphere. The second bracket traces back to Equation B7 and represents how increasing the dry mass of the atmosphere increases the temperature at which the atmosphere becomes inhibited, \(T_{*}\). Increasing the base temperature of the adiabat is associated with an increase in the OLR, albeit with a weak dependence of \(m_{d}^{4/(\beta_{0}-1)}\). We can see that the OLR depends on the dry mass path \(m_{d}\) with an exponent of: \[4(\beta_{0}-1)^{-1}-4\alpha\approx-4\alpha\] (B19) since \(\beta_{0}\gg 1\).
For a diatomic ideal gas \(-4\alpha\approx-8/7\), so we would expect the OLR to drop off sharply with dry mass path. We can compare Equation B18 with Equation 24 in Koll and Cronin (2019), for a pure steam atmosphere, which was an estimate of the classical Simpson-Nakajima limit: \[\text{OLR}_{\text{SN}}=\Gamma(1+4/\beta_{0})\bigg(\frac{\kappa_{v}p_{0}}{\bar{\theta}g}\bigg)^{-4/\beta_{0}}\sigma T_{0}^{4}=\Gamma(1+4/\beta_{0})\sigma T_{0}^{4}\] (B20) Taking the ratio of this equation and Equation B18 gives: \[\frac{\text{OLR}}{\text{OLR}_{\text{SN}}}=\frac{\Gamma(1+4\alpha)}{\Gamma(1+4/\beta_{0})}\Big(q_{0}\frac{\kappa_{v}m_{d}}{\varepsilon\bar{\theta}}\Big)^{4/(\beta_{0}-1)}\Big(\frac{\kappa_{d}m_{d}}{\bar{\theta}}\Big)^{-4\alpha}\] (B21) In general, since \(4/(\beta_{0}-1)\ll 1\) and the gamma function ratio is of order unity, so long as \(\kappa_{d}m_{d}/\bar{\theta}>1\) (i.e. the dry mass of the atmosphere is optically thick), the OLR limit will be lower than the classical Simpson-Nakajima limit. Lastly, we note that if one were to take the opposite limit of Equation B14 (i.e. the moist optical depth at \(\tau\sim 1\) dominates the OLR), we would find: \[\mathrm{OLR}=\Gamma(1+4/\beta_{0})\bigg(\frac{c_{p}}{LT_{0}}\bigg)^{-4/\beta_{0}}\sigma T_{0}^{4}\] (B22) which is identical to the "dilute limit" found in Koll and Cronin (2019). This limit does not depend on the dry mass path and is only moderately lower than the Simpson-Nakajima limit. Since our results vary greatly with dry mass path and are much lower than this limit, this should reassure us that Equation B14 is a good assumption.
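To make the scaling concrete, the sketch below evaluates Equation B18 and the ratio in Equation B21 for a range of dry mass paths. It uses the parameter values quoted above (\(\kappa_{d}=1.6\times 10^{-4}\) m\({}^{2}\) kg\({}^{-1}\), \(\kappa_{v}=0.01\) m\({}^{2}\) kg\({}^{-1}\), \(\bar{\theta}=3/5\), \(T_{0}=272.6\) K, \(\alpha=2/7\)) and, as in the previous sketch, takes \(\varpi\approx 0.89\) and \(\varepsilon\approx 8\) as illustrative assumptions, so the printed numbers are indicative only.

```python
import math

# Assumed physical constants and parameters (varpi and eps are assumptions).
sigma   = 5.670e-8        # Stefan-Boltzmann constant [W m^-2 K^-4]
L, R_v  = 2.5e6, 461.5    # latent heat, water-vapour gas constant
T0      = 272.6           # reference temperature [K]
alpha   = 2.0 / 7.0       # R_d / c_p for a diatomic dry gas
kappa_d, kappa_v = 1.6e-4, 1.0e-2
theta_b = 3.0 / 5.0
varpi, eps = 0.89, 8.0

beta0 = L / (R_v * T0)
q0    = 1.0 / (beta0 * varpi)

def olr_limit(m_d):
    """Equation B18: limiting OLR for a dry mass path m_d [kg m^-2]."""
    tau_star = kappa_d * m_d / theta_b
    warming  = (q0 * kappa_v * m_d / (eps * theta_b)) ** (4.0 / (beta0 - 1.0))
    return math.gamma(1 + 4 * alpha) * tau_star ** (-4 * alpha) * warming * sigma * T0 ** 4

def olr_simpson_nakajima():
    """Equation B20: grey estimate of the Simpson-Nakajima limit."""
    return math.gamma(1 + 4.0 / beta0) * sigma * T0 ** 4

olr_sn = olr_simpson_nakajima()
for m_d in (1e3, 1e4, 1e5):
    olr = olr_limit(m_d)
    print(f"m_d = {m_d:8.0f} kg/m^2 : OLR ~ {olr:7.1f} W/m^2, OLR/OLR_SN ~ {olr/olr_sn:5.2f}")
```

Under these assumptions the limiting OLR falls from the grey Simpson-Nakajima value of roughly 290 W m\({}^{-2}\) to of order 100 W m\({}^{-2}\) at \(m_{d}\sim 10^{4}\) kg m\({}^{-2}\), illustrating the steep \(m_{d}^{-4\alpha}\) scaling discussed above.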
2310.09652
BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries
Machine learning security has recently become a prominent topic in the natural language processing (NLP) area. The existing black-box adversarial attack suffers prohibitively from the high model querying complexity, resulting in easily being captured by anti-attack monitors. Meanwhile, how to eliminate redundant model queries is rarely explored. In this paper, we propose a query-efficient approach BufferSearch to effectively attack general intelligent NLP systems with the minimal number of querying requests. In general, BufferSearch makes use of historical information and conducts statistical test to avoid incurring model queries frequently. Numerically, we demonstrate the effectiveness of BufferSearch on various benchmark text-classification experiments by achieving the competitive attacking performance but with a significant reduction of query quantity. Furthermore, BufferSearch performs multiple times better than competitors within restricted query budget. Our work establishes a strong benchmark for the future study of query-efficiency in NLP adversarial attacks.
Wenjie Lv, Zhen Wang, Yitao Zheng, Zhehua Zhong, Qi Xuan, Tianyi Chen
2023-10-14T19:49:02Z
http://arxiv.org/abs/2310.09652v1
# BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries ###### Abstract Machine learning security has recently become a prominent topic in the natural language processing (NLP) area. The existing black-box adversarial attack suffers prohibitively from the high model querying complexity, resulting in easily being captured by anti-attack monitors. Meanwhile, how to eliminate redundant model queries is rarely explored. In this paper, we propose a query-efficient approach BufferSearch to effectively attack general intelligent NLP systems with the minimal number of querying requests. In general, BufferSearch makes use of historical information and conducts statistical test to avoid incurring model queries frequently. Numerically, we demonstrate the effectiveness of BufferSearch on various benchmark text-classification experiments by achieving the competitive attacking performance but with a significant reduction of query quantity. Furthermore, BufferSearch performs multiple times better than competitors within restricted query budget. Our work establishes a strong benchmark for the future study of query-efficiency in NLP adversarial attacks. ## 1 Introduction In the recent years, Deep Neural Networks (DNNs) have been shown to be effective in a variety of natural language processing applications, such as machine translation, fake news detection, question answering, and sentiment analysis (Fan _et al._, 2021; Minaee _et al._, 2021; Zhong _et al._, 2023). Meanwhile, the security of DNNs is receiving increasing attentions, since they have been proved to be easily confused by adversarial samples by slightly perturbing the original input (Szegedy _et al._, 2014). In general, there are two ways to fool NLP models via adversarial attack, _i.e._, white-box and black-box attack (Zhang _et al._, 2021). The white-box attack can access almost every aspects of the target model including the training dataset, architecture and parameters, which are not transparent to the black-box settings instead (Wang _et al._, 2021). In practice, black-box attack is more applicable since such inherent underlying information may be not accessible. However, the success of the existing state-of-the-art black-box attackers highly relies on querying the victim model sufficient many times (Yoo _et al._, 2020; Hossam _et al._, 2021). Such massive model queries might be easily detected by the anti-attack security system thereby still not convenient in reality. How to achieve efficient black-box NLP attack with the minimal number of model queries now becomes an open and hot problem. In fact, the heavy model queries derive from the two main stages of NLP black-box attacks, _i.e._, one stage to calculate and rank word salience followed by another stage to employ prescribed transformation onto the important words. Both stages contribute a fair amount of model queries. To achieve high query efficiency, there are two recent works concentrating on the first stage to optimize the important word selection. In particular, E2A (Hossam _et al._, 2021) trains an interpretable agent model to learn word salience scores by using a substitute training corpora within the similar fields of the target model. (Hossam _et al._, 2021) further improves the training process in E2A (Hossam _et al._, 2021). However, to the best of our knowledge, there is no work to eliminate redundant model queries on the second stage _i.e._, perform word transformation on the important tokens. 
On the other hand, in the related computer vision (CV) field, there exist a few works that establish Bayesian attackers to reduce query complexity by leveraging historical information (Zhao _et al._, 2019; Ru _et al._, 2019), wherein they assume a Gaussian distribution for the perturbation variable and determine its type-parameters via Bayesian optimization. However, these approaches cannot be easily extended to NLP tasks because of the discreteness of the perturbation space. In this paper, we study how to optimize the model query over the second stage (word transformation stage) and propose BufferSearch to effectively attack NLP applications with a significant amount of model query reduction. Our main contributions are summarized as follows. **Algorithmic Design.** We are the first work to optimize model queries of the word transformation stage in general black-box NLP adversarial attacks. The proposed BufferSearch delicately makes use of historical attacking information and conducts a statistical test to avoid unnecessary model queries effectively. As a result, BufferSearch fools the target model effectively without incurring high query complexity. **Numerical Benchmark.** We numerically demonstrate the effectiveness of BufferSearch on benchmark text classification experiments, across different model architectures and various datasets. BufferSearch leverages each model query more effectively than state-of-the-art competitors. As a result, under an unconstrained model query budget, BufferSearch reduces the number of queries by 32.6% on average and achieves competitive adversarial attacking performance. Furthermore, under a constrained query budget, BufferSearch outperforms the others by attaining multiple times better attacking performance.

## 2 Related Work

### Two Stage Attack In NLP

Most of the existing black-box word-based attackers consist of two main stages to generate adversarial examples. Specifically, the first stage calculates salience scores for each individual word based on various criteria. For example, [1] designs four scoring functions based on model prediction, _i.e._, Replace-1, Temporal Head, Temporal Tail and Combined Score. [11], [14], [15] and [16] compute the importance of a word as the confidence score deviation after deletion. The tokens are then sorted by their importance scores and fed into the second stage, wherein prescribed word transformations are employed on the selected tokens with significant impact on the prediction. Both stages involve numerous model queries to compute either the salience score or the impact of each transformation.

### Query Efficiency in Black-Box NLP Attack

The study of query efficiency in NLP black-box adversarial attack is quite limited. To the best of our knowledge, there are only a few related works. Two of them focus on optimizing the first stage discussed in Section 2.1. In particular, E2A [17] trains an interpretable agent model to learn word importance scores on a substitute training corpus from fields similar to those of the target model. [17] follows the same procedure but improves the training process of E2A. Although both methods reduce the model query complexity, the performance of such approaches might be sensitive to the dataset selection and training quality. Furthermore, conducting such additional training requires a lot of extra effort, which is typically neither convenient nor scalable to large-scale intelligent NLP systems [1].
Transfer-based attackers can generate adversarial samples without accessing the target victim model [13, 2], but have the same drawbacks as E2A. Recently, [1] generates adversarial samples within low model queries by random sampling, but such a method fails on large-scale document sets such as IMDB, which limits its generality.

### Query Efficiency in Black-Box CV Attack

In the field of computer vision black-box adversarial attack, query efficiency has been developed to some extent. [1] partitions pixels into small groups and conducts group-wise queries. Then all pixels in one group undergo the same perturbation based on the shared query. [10, 15, 16, 17, 18] avoid querying the target model by accessing a substitute model instead, while the construction of the substitute is costly and time-consuming. [12] reduces model queries by forming a zeroth-order optimization. [11] points out that the high query cost of black-box attack is due to the high dimension of the image input, and proposes an attack method based on dimension reduction. [1] improves boundary attack by leveraging gradient estimation of the decision boundary. Because of the discreteness of the text space, these successful attack methods in CV are difficult to transfer to the NLP area directly.

## 3 Query Efficient Adversarial Attack

### Target Problem Formulation

In this section, we formulate the target problem of query-efficient adversarial attack for general natural language processing security tasks. To the best of our knowledge, such a target problem is rarely well formulated across related computer vision and natural language processing applications. For general NLP tasks, we aim at establishing an attacker \(\mathcal{A}\) such that given a victim text set \(\mathcal{X}\), for each text \(X\in\mathcal{X}\), the attacker \(\mathcal{A}\) consumes the minimal number of model queries \(\mathcal{Q}(\mathcal{A},\mathcal{M},X)\) to fool the model \(\mathcal{M}\). Considering the popular classification problem, each document \(X\in\mathcal{X}\) is associated with a ground truth label \(Y_{X}\). The target problem becomes \[\underset{\mathcal{A}}{\text{minimize}}\ \sum_{X\in\mathcal{X}}\mathcal{Q}(\mathcal{A},\mathcal{M},X),\ \text{s.t.}\ \mathcal{M}(\mathcal{A}(X))\neq Y_{X}\ \text{for}\ X\in\mathcal{X}, \tag{1}\] where the adversarial document \(\mathcal{A}(X)\) yields a wrongly predicted label by the model, _i.e._, \(\mathcal{M}(\mathcal{A}(X))\neq Y_{X}\). However, problem (1) is typically intractable in practice since the constraint is usually hard to achieve for every document. As an alternative, we consider the following more relaxed unconstrained problem, \[\underset{\mathcal{A}}{\text{minimize}}\ \sum_{X\in\mathcal{X}}\mathcal{Q}(\mathcal{A},\mathcal{M},X)-\lambda r(\mathcal{A},\mathcal{M},X), \tag{2}\] where \(r(\mathcal{A},\mathcal{M},X)\) is a regularization term [10, 11, 12] that measures the deviation of the label between the ground truth and the adversarial example, _e.g._, \(\|\mathcal{M}(\mathcal{A}(X))-Y_{X}\|_{2}\), and \(\lambda\) is some positive weighting parameter. The majority of black-box adversarial attacks in NLP tasks are two-stage attackers as discussed in Section 2.1, _i.e._, the first stage picks up important words and the second stage performs various attacks on these candidates. Both stages require numerous model queries. Consequently, the total number of queries equals the sum of the quantities from both stages.
In this paper, we concentrate on optimizing the cost of model query in the second stage, _i.e._, \(\mathcal{Q}_{2}(\mathcal{A},\mathcal{M},X)\) is free to be optimized. Therefore, the target problem becomes \[\underset{\mathcal{A}}{\text{minimize}}\ \sum_{X\in\mathcal{X}}\mathcal{Q}_{2}(\mathcal{A},\mathcal{M},X)-\lambda r(\mathcal{A},\mathcal{M},X). \tag{3}\]

### Overview

To the best of our knowledge, we are the first to study optimizing the cost of model querying in the word transformation stage of black-box NLP attacks. To solve problem (3), the proposed algorithm is stated in Algorithm 1. In general, given a set of victim texts \(\mathcal{X}\) and a target model \(\mathcal{M}\), we iteratively attack each text \(X\) in \(\mathcal{X}\) and collect the intermediate information to form a table \(\mathcal{T}\) as stated in Algorithm 2. Within each individual adversarial attack, we leverage the collected historical information \(\mathcal{T}\) to refine the search space and avoid unnecessary model queries as in Algorithm 3. As a result, we obtain the set of adversarial texts \(\mathcal{X}_{\text{adv}}\) with a much smaller number of queries than the existing methods. **Input**: A set of victim texts \(\mathcal{X}\) with the ground truth labels \(\mathcal{Y}\), target model \(\mathcal{M}\), sentence similarity Sim( ), attack ratio \(\epsilon\) and candidate list ratio \(\gamma\in(0,1]\). **Output**: A set of adversarial examples \(\mathcal{X}_{\text{adv}}\). ``` 1: Construct an empty table \(\mathcal{T}\) to store historical adversarial attack information. 2:for each text \(X\) in \(\mathcal{X}\) and label \(Y\) in \(\mathcal{Y}\)do 3: Get adversarial text \(X_{\text{adv}}\) and update \(\mathcal{T}\) via Algorithm 2. 4:\(X_{\text{adv}}\leftarrow\text{Attack}(X,Y,\mathcal{M},\mathcal{T},\texttt{Sim},\epsilon,\gamma)\). 5:\(\mathcal{X}_{\text{adv}}\leftarrow\mathcal{X}_{\text{adv}}\bigcup X_{\text{adv}}\). 6:return\(\mathcal{X}_{\text{adv}}\). ``` **Algorithm 1** Query-Efficient Adversarial Attack Similarly to other state-of-the-art black-box attackers, we design a two-stage algorithm for generating each individual adversarial text. In particular, the first stage picks up a set of words with high importance from the victim text \(X\) based on various criteria, as in lines 3-5 of Algorithm 2. Then in the second stage, the words are iterated in order of their ranked salience scores and transformed by some prescribed mechanisms to yield the ultimate adversarial example, as in lines 6-19 of Algorithm 2, wherein the transformations rely on the collected historical information \(\mathcal{T}\) as shown in Algorithm 3. All replacement information is fed into the table \(\mathcal{T}\) for further usage as line 8-7 in Algorithm 2.

### Word Importance Calculation

In stage one, we calculate the word salience score following the state-of-the-art ranker proposed in Textfooler [11]. In particular, suppose a victim text of \(n\) words is \(X\)=\(\{w_{1},w_{2},...,w_{n}\}\). To calculate the importance of \(w_{i}\), _i.e._, \(I(w_{i})\), we first delete \(w_{i}\) from \(X\) to form \(X/\{w_{i}\}\)=\(\{w_{1},...,w_{i-1},w_{i+1},...,w_{n}\}\). Following Textfooler, \(I(w_{i})\) has the following explicit forms. \(\bullet\) Suppose the predicted label remains the same after deletion, _i.e._, \(\mathcal{M}(X)=\mathcal{M}\left(X/\{w_{i}\}\right)=Y_{X}\), \[I(w_{i})=\mathcal{M}_{Y_{X}}(X)-\mathcal{M}_{Y_{X}}\left(X/\{w_{i}\}\right). \tag{4}\]
\(\bullet\) Suppose the predicted label is changed after deletion, _i.e._, \(\mathcal{M}(X)=Y_{X}\neq\hat{Y}_{X}=\mathcal{M}(X/\{w_{i}\})\), \[I(w_{i})=\mathcal{M}_{Y_{X}}(X)-\mathcal{M}_{Y_{X}}\left(X/\{w_{i}\}\right)+\mathcal{M}_{\hat{Y}_{X}}\left(X/\{w_{i}\}\right)-\mathcal{M}_{\hat{Y}_{X}}(X), \tag{5}\] where \(\mathcal{M}(X)\) represents the predicted label of \(X\) by model \(\mathcal{M}\), and \(\mathcal{M}_{Y_{X}}(X)\) represents the predicted confidence score of \(X\) by \(\mathcal{M}\) on label \(Y_{X}\).

### Query-Efficient Transformation Attack

In the second stage, given the set of words with high importance \(\mathcal{W}^{*}\) from the first stage, we iterate over the ranked words by their importance scores and perform a word replacement mechanism, wherein a proper word replacement should _(i)_ have similar semantic meaning to the original, and _(ii)_ confuse the target model into making a wrong prediction. **Word Transformation.** Without loss of generality, consider the most popular synonym transformation as an illustrative example. For \(w\in\mathcal{W}^{*}\), we first establish the candidate set \(\mathcal{C}(w)\) for the replacement of \(w\) from its synonym set \(\mathcal{S}(w)\) via Algorithm 3. Then, as in lines 18-19 of Algorithm 2, during each replacement we replace \(w\) with the candidate \(c^{*}\in\mathcal{C}(w)\) achieving the largest deviation of the prediction distribution, \[c^{*}:=\operatorname*{argmax}_{c\in\mathcal{C}(w)}\ \mathcal{M}_{Y_{X}}(X_{\text{adv}})-\mathcal{M}_{Y_{X}}(X_{\text{adv},w\to c}) \tag{6}\] Consequently, the adversarial example \(X_{\text{adv}}\) is constructed either after all words in \(\mathcal{W}^{*}\) are exhausted and replaced by their corresponding best synonyms as in (6), or as soon as the attack succeeds, as in lines 13-16 of Algorithm 2. The existing state-of-the-art methods solve problem (6) by brute-force exploration of the whole synonym space, _i.e._, \(\mathcal{C}(w)=\mathcal{S}(w)\), so that calculating the prediction deviation requires \(|\mathcal{S}(w)|\) model queries in total for each \(w\in\mathcal{W}^{*}\). Thus the overall procedure performs \(\sum_{w\in\mathcal{W}^{*}}|\mathcal{S}(w)|\) model queries in the worst case, while there may exist numerous redundant queries to be optimized. **Pruning Transformation Space.** To increase the efficiency of word replacement, we prune the feasible space of transformation by identifying redundant word candidates to avoid potential unnecessary queries. Rather than utilizing the whole synonym set \(\mathcal{S}(w)\), we establish the candidate set \(\mathcal{C}(w)\) as a subset of \(\mathcal{S}(w)\) which exhibited the largest prediction deviations in the history and is verified, via an efficient sorting procedure and a statistical test, to have high confidence of being significant again. To proceed, we first initialize the candidate list of \(w\) as \(\mathcal{C}_{\text{initial}}\) from the global table \(\mathcal{T}\) associated with the previous attack information \(\mathcal{H}(w,c)\), as in lines 5-6 of Algorithm 3, wherein \(\mathcal{H}(w,c)\) represents the history of confidence score changes of replacing \(w\) by \(c\). Given a prescribed candidate list budget \(\gamma\in(0,1]\), we pick up a pivot candidate \(c_{\text{pivot}}\in\mathcal{C}_{\text{initial}}\) which has the \(\lceil\gamma|\mathcal{C}_{\text{initial}}|\rceil\)-th largest average confidence score deterioration, as in line 8 of Algorithm 3.
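To make the pruning step concrete, the sketch below implements a hypothetical `prune_candidates` helper: it selects the pivot as the candidate with the \(\lceil\gamma|\mathcal{C}_{\text{initial}}|\rceil\)-th largest mean historical deterioration and then keeps only candidates that pass the one-sided Welch test described in the next paragraph. The function and variable names are illustrative rather than the authors' code, and the history table is assumed to map each candidate word to a list of past confidence-score drops.

```python
import math
from statistics import mean, variance

def welch_t_greater(a, b):
    """One-sided Welch t-statistic and degrees of freedom for H1: mean(a) > mean(b)."""
    va, vb = variance(a), variance(b)          # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    dof = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof

def prune_candidates(history, gamma=0.3, alpha=0.3):
    """history: dict candidate -> list of past confidence-score deteriorations H(w, c)."""
    from scipy.stats import t as student_t     # used only for the (1 - alpha) quantile

    ranked = sorted(history, key=lambda c: mean(history[c]), reverse=True)
    pivot = ranked[max(math.ceil(gamma * len(ranked)) - 1, 0)]

    kept = []
    for c in ranked:
        if c == pivot or len(history[c]) < 2 or len(history[pivot]) < 2:
            kept.append(c)                     # keep when the test is not applicable
            continue
        t_stat, dof = welch_t_greater(history[c], history[pivot])
        if t_stat >= student_t.ppf(1.0 - alpha, dof):
            kept.append(c)                     # candidate beats the pivot: reject H0
    return kept

# Toy usage with synthetic histories of confidence drops
history = {"awful": [0.30, 0.28, 0.35], "bad": [0.10, 0.12, 0.09], "poor": [0.02, 0.01, 0.03]}
print(prune_candidates(history, gamma=0.3, alpha=0.3))
```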
The pivot candidate \(c_{\text{pivot}}\) serves as the benchmark instance to filter out the candidates that are less likely to confuse the target model. We then iterate over all candidates from \(\mathcal{C}_{\text{initial}}\). For each candidate \(c\), a statistical test is employed to determine if \(c\) can affect the model more significantly than the pivot candidate \(c_{\text{pivot}}\), as in line 9. For simplicity, we represent the impact of the replacement from \(w\) to \(c\) and to \(c_{\text{pivot}}\) as \(\mu_{c}\) and \(\mu_{\text{pivot}}\), respectively. Then, we propose to use the following one-sided test, \[\text{null:}\ \mu_{c}\leq\mu_{\text{pivot}}\ \text{ vs. alternative: }\ \mu_{c}>\mu_{\text{pivot}}.\] To test these hypotheses, we make use of the histories \(\mathcal{H}(w,c)\) and \(\mathcal{H}(w,c_{\text{pivot}})\). Since the two populations to be compared may have unequal variance, Welch's two-sample one-tail t-test is performed here [15]. In particular, given the samples from \(\mathcal{H}(w,c)\) and \(\mathcal{H}(w,c_{\text{pivot}})\), we first compute the t-statistic as \[t=\frac{\bar{\mathcal{H}}(w,c)-\bar{\mathcal{H}}(w,c_{\text{pivot}})}{\sqrt{\hat{\sigma}_{c}^{2}/|\mathcal{H}(w,c)|+\hat{\sigma}_{c_{\text{pivot}}}^{2}/|\mathcal{H}(w,c_{\text{pivot}})|}}, \tag{7}\] where \((\bar{\cdot})\) and \(\hat{\sigma}^{2}\) represent the sample mean and the variance estimator, respectively. Given a prescribed significance level \(\alpha\), we then compute the \((1-\alpha)\) quantile of the Student's t-distribution with the degrees of freedom corresponding to those in the variance estimator, denoted as \(t_{\alpha}^{\text{s}}\). Next, we reject the null hypothesis if \(t\geqslant t_{\alpha}^{\text{s}}\). If we fail to reject the null hypothesis that the impact of \(c\) is no greater than that of \(c_{\text{pivot}}\), then \(c\) is excluded from further consideration, since replacing \(w\) with \(c\) largely generates negligible model confusion. Otherwise, we accept the alternative hypothesis of a higher impact of \(c\) and include it in the ultimate candidate list \(\mathcal{C}(w)\), as in lines 10-11. **Complexity Analysis.** We here compare the complexity of model queries of our proposed transformation attack. Since for each word \(w\in\mathcal{W}^{*}\) we proceed with a model-query-free statistical approach to filter out numerous negligible candidates, the remaining candidates after pruning number up to \(\gamma|\mathcal{S}(w)|\), which results in an upper bound for the overall model query quantity of \(\sum_{w\in\mathcal{W}^{*}}\gamma|\mathcal{S}(w)|\). Compared to the \(\sum_{w\in\mathcal{W}^{*}}|\mathcal{S}(w)|\) of the existing state-of-the-art methods, our method is faster by a linear factor in terms of model queries in theory, which will be further verified in Section 4. **Input:** Victim text \(X\), ground truth label \(Y\), target model \(\mathcal{M}\), table \(\mathcal{T}\), similarity function \(\text{Sim}(\cdot)\), and ratios \(\epsilon,\gamma\in(0,1]\) for word and candidate selection. **Output:** Adversarial example \(X_{\text{adv}}\). ``` 1:Initialize \(X_{\text{adv}}\gets X\). 2:Pre-process \(X\) and get target word set \(\mathcal{W}\). 3:for each word \(w\) in \(X\)do 4: Compute the importance score \(I(w)\). 5: Set \(\mathcal{W}^{*}\) as the top \(\lceil\epsilon|\mathcal{W}|\rceil\) words from \(\mathcal{W}\) via \(I(w)\). 6:for each word \(w\) in \(\mathcal{W}^{*}\)do 7: Get the candidate list \(\mathcal{C}_{w}\) from Algorithm 3.
8:\(\mathcal{C}_{w}\leftarrow\text{CANDIDATE}\_\text{LIST}(w,\mathcal{T},Y,\gamma)\). 9: Get the recent soft label \(\hat{y}_{\text{adv}}\leftarrow\mathcal{M}_{Y}(X_{\text{adv}})\). 10:for each candidate \(c\)in \(\mathcal{C}_{w}\)do 11:\(\hat{X}_{c}\leftarrow\text{Replace }w\) with \(c\) in \(X_{\text{adv}}\). 12: Get hard/soft label \(\mathcal{M}(\hat{X}_{c})\) and \(\hat{y}_{c}\leftarrow\mathcal{M}_{Y}(\hat{X}_{c})\). 13: Update \(\mathcal{T}[w,Y,c]\leftarrow\mathcal{T}[w,Y,c]\bigcup(\hat{y}_{\text{adv}}-\hat{y}_{c})\). 14:if there exists \(c\in\mathcal{C}_{w}\) such that \(\mathcal{M}(\hat{X}_{c})\neq Y\)then 15: Get the successful attack set \(\mathcal{C}_{\text{success}}\subseteq\mathcal{C}_{w}\). 16:\(c^{*}\leftarrow\operatorname*{argmax}_{c\in\mathcal{C}_{\text{success}}}\text{Sim}(X,\hat{X}_{c})\). 17:Return\(X_{\text{adv}}\leftarrow\hat{X}_{c^{*}}\). 18:else 19: Compute the best candidate \(c^{*}\) via (6). 20: Replace \(w\) with \(c^{*}\) in \(X_{\text{adv}}\). 21:Return\(X_{\text{adv}}\). ``` **Algorithm 2** Black Box Adversarial Attack

## 4 Experiments

### Datasets and Target Models

To evaluate the effectiveness of our method, we apply BufferSearch to the benchmark text classification task, which is perhaps the most popular and representative task studied in the NLP adversarial attack area [11]. In particular, the experiments cover various datasets such as MR [13], IMDB [11] and Yelp [15], as well as various model architectures ranging from WordCNN [12] and WordLSTM [16] to BERT [4]. Remark here that though we demonstrate the effectiveness via text classification, our method is generic and can be extended to other scenarios with minimal modifications.

### Baselines

To quantitatively evaluate the performance, we compare BufferSearch with two state-of-the-art adversarial attack methods, PWWS [14] and Textfooler (TF) [11], employed on the text classification tasks. We include random.CS [1] on IMDB to demonstrate the generality of BufferSearch in reducing query complexity for various text datasets. We exclude [17, 18], since their strategies focus on the first stage and are complementary to our methodology.

### Experimental Settings

In our black-box experiments, the attacker can only access the input text and the output prediction (confidence scores) by querying the victim model. The default candidate list in Algorithm 3 is set as the synonym set for each word, which is constructed similarly to TF. Specifically, given the word embeddings [10], we compute the cosine similarity across different instances, then pick the top \(N\) with sufficiently high similarity (greater than \(\delta\)) as the synonyms of each word. As in other literature, we empirically set \(N=50\) and \(\delta=0.5\) [11]. The budget hyperparameter \(\gamma\), which controls the number of candidate words, and the significance level are set to \(0.3\) by default.

### Automatic Evaluation Metrics

We evaluate the performance by four popular metrics. Adv accuracy--the accuracy of the target model on the adversarial examples. Query Num--the number of model queries during the adversarial attack. Perturbation Rate--the rate of tokens substituted in the original texts. Semantic similarity--the semantic similarity between the adversarial examples and the victim examples, measured by the popular open-source tool [3]. The first two measure the effectiveness of the adversarial attack, and the latter two measure the attack invisibility.
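These metrics can be computed directly from per-example attack records; the helper below is a minimal illustration under the assumption that every original example was correctly classified (the semantic-similarity score would come from an external sentence encoder, which is not reproduced here).

```python
def summarize_attack(records):
    """records: list of dicts with keys 'fooled' (bool), 'queries' (int),
    'n_tokens' (int), 'n_substituted' (int), one per attacked example."""
    n = len(records)
    # Adv accuracy: fraction still classified correctly, assuming originals were correct.
    adv_accuracy = 100.0 * sum(not r["fooled"] for r in records) / n
    avg_queries = sum(r["queries"] for r in records) / n
    avg_perturbed = 100.0 * sum(r["n_substituted"] / r["n_tokens"] for r in records) / n
    return {"Adv accuracy (%)": adv_accuracy,
            "Query Num": avg_queries,
            "% Perturbed": avg_perturbed}

# Toy usage with two attacked examples
records = [
    {"fooled": True,  "queries": 52, "n_tokens": 20, "n_substituted": 3},
    {"fooled": False, "queries": 90, "n_tokens": 25, "n_substituted": 4},
]
print(summarize_attack(records))
```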
### Automatic Numerical Evaluation

The evaluation results of BufferSearch and the strong baselines are reported in Tables 2 and 3, wherein the best numbers are marked in bold for ease of comparison. In general, our method achieves efficient and effective adversarial attacks in all experiments, with detailed descriptions as follows. **Efficiency.** Compared to the state-of-the-art baselines, _i.e._, TF and PWWS, BufferSearch requires significantly fewer model queries during the whole attacking procedure. In particular, BufferSearch reduces the query cost by 32.6% on average across all experiments compared to TF, and a 42.2% reduction is even achieved in the WordCNN on MR experiment. The superiority of BufferSearch in query efficiency is due to leveraging historical information to avoid unnecessary queries in the second stage, as shown in Figure 2. PWWS is not comparable to BufferSearch and TF in terms of query efficiency, being \(6\)-\(10\) times more costly than our method. As drawn in Table 3, random.CS fails to effectively attack IMDB (BERT), though it achieves the lowest query cost. **Effectiveness.** Although BufferSearch dramatically reduces the model queries, it still achieves outstanding attacking performance. Our method can greatly fool all three target models. Even for BERT, which has the best robustness, the accuracy can be reduced from about 90% to no more than 12% on all benchmark datasets. In fact, BufferSearch achieves the best (lowest) and the second best After-Attack Accuracy on two and six out of nine experiments, respectively, and competitive accuracy on the remaining test with a gap of less than \(1.1\%\). All methods perform closely in terms of perturbation rate. For example, BufferSearch is as low as 2.9% on IMDB under WordCNN. That means that for an input containing 100 tokens, BufferSearch only needs to replace three words or fewer to generate an adversarial output. Similar observations can be made for the semantic similarity, which is inversely related to the perturbation rate. **Discussion.** Throughout the experiments, there is no doubt that BufferSearch is the best in terms of query efficiency and attacking performance taken together. The results demonstrate the effectiveness of our proposed algorithm. On the other hand, although TF and PWWS consume dramatically more resources to query the victim models, the gains they receive in attacking performance are negligible. This consequently reveals the existence of substantial query redundancy. We hope that BufferSearch establishes a baseline for future studies to further push the boundary.

### Human Evaluation

To further verify the quality of the adversarial examples generated by BufferSearch, we conduct an additional human evaluation following the settings of [15]. In particular, we designed three experiments in a laboratory environment to evaluate the following aspects. (i) _Grammatical naturalness score._ We asked human judges to give a score (1-5) to evaluate the grammatical validity of a sentence. (ii) _Label prediction._ We showed human judges \(50\) samples randomly selected from the MR training data, then asked them to classify each adversarial example as either negative or positive. (iii) _Semantic similarity._ We asked judges to give a score of \(0,0.5,1\) to evaluate the semantic similarity between two sentences. To proceed, we randomly selected \(100\) adversarial samples generated on dataset MR under BERT and their corresponding original samples.
Each experiment was evaluated by 5 human judges, whose average results are shown in Table 1. As shown by the examples in Figure 1, the adversarial samples generated by BufferSearch maintain the semantic information of the original input and successfully fool the victim target model.

Table 1: Average score of human evaluation.

| | origin text | adv text |
| --- | --- | --- |
| Prediction accuracy | 72.3 | 65.2 |
| Grammatical naturalness | 3.56 | 3.42 |
| Semantic similarity | 0.869 | |

Figure 1: Adversarial samples generated by BufferSearch from the MR (BERT) dataset. Target and replacement words are resp. in red and blue. The label predicted by BERT is changed.

Figure 2: Query cost comparison between TF and BufferSearch over WordCNN, WordLSTM and BERT on MR, Yelp and IMDB.

## 5 Discussion

### Limited Query Budget Setting

Besides the setting of an unlimited query budget in Section 4.5, we study the performance of the attackers under restricted query budgets \(Q_{\text{max}}\). This constraint forces the attacker to return adversarial examples within \(Q_{\text{max}}\) queries. We test a variety of \(Q_{\text{max}}\) values from \(\{30,90,150,300,450,750\}\) for MR, and \(\{100,300,500,1000,1500,2500\}\) for IMDB and Yelp. The performance is measured by the number of adversarial examples that successfully fool the victim models. As displayed in Figure 3, our method performs the best across all models and datasets. In particular, BufferSearch performs multiple times better than TF and PWWS under low \(Q_{\text{max}}\) values. The successful attacking rates of all methods then converge as the query budget increases. These experiments serve as strong evidence that BufferSearch explores the word transformation space more effectively than the other competitors.

### Different Size of Historical Information

To explore the impact of different sizes of historical information on the attacking performance of BufferSearch, we randomly select ten subsets with sizes ranging from \(10^{3}\) to \(10^{4}\) from the Yelp and IMDB test datasets, then employ our method on these subsets to collect historical tables. These tables are then leveraged in attacking the original \(10^{3}\) test samples for evaluation. As drawn in Figure 4, there is a slight but not significant improvement when more historical information is leveraged. This reveals that BufferSearch does not rely heavily on historical accumulation after information saturation.

### Transferability

The transferability of adversarial examples refers to the property that the same input can successfully confuse different models [11]. We evaluate the transferability on \(3\) datasets across BERT, WordCNN and WordLSTM. As shown in Table 4, the adversarial samples generated by attacking one model fool other models to varying extents. BERT exhibits the best robustness to transferable attacks.

### Adversarial Training

In order to verify the effect of adversarial examples on model robustness, we retrain BERT on MR over the training set augmented by the adversarial data generated by BufferSearch. We then employ BufferSearch on the retrained BERT and report the results in Table 5. It is apparent that the model becomes more robust to the adversarial attacks after adversarial training, since the after-attack accuracy, perturbation rate and query number significantly increased.
Table 2: Results under an unlimited query budget. (Original Accuracy: the model prediction accuracy on \(1000\) original samples; Adv Accuracy: the model accuracy after attack; % Perturbed: the percentage of substitution of the original text; Semantic Similarity: the semantic similarity between original and adversarial samples; Query Num: the cost of queries in the attack; Average Text Length: the average length of the original text.)

WordCNN:

| | MR TF | MR PWWS | MR BufferSearch | Yelp TF | Yelp PWWS | Yelp BufferSearch | IMDB TF | IMDB PWWS | IMDB BufferSearch |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Original Accuracy | 78.0 | 78.0 | 78.0 | 94.0 | 94.0 | 94.0 | 89.2 | 89.2 | 89.2 |
| Adv Accuracy | **1.2** | 1.4 | 1.5 | **0.1** | 1.1 | 1.2 | **0** | **0** | **0** |
| % Perturbed | 12.6 | **10.9** | 12.7 | 6.4 | **5.7** | 7.7 | 2.5 | **2.2** | 2.9 |
| Semantic Similarity | **0.963** | 0.960 | 0.957 | **0.989** | **0.989** | 0.987 | **0.998** | **0.998** | 0.997 |
| Query Num | 97.7 | 285.1 | **52.3** | 384.3 | 1855.7 | **270.6** | 441 | 3302.7 | **345.5** |

WordLSTM:

| | MR TF | MR PWWS | MR BufferSearch | Yelp TF | Yelp PWWS | Yelp BufferSearch | IMDB TF | IMDB PWWS | IMDB BufferSearch |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Original Accuracy | 80.7 | 80.7 | 80.7 | 96.0 | 96.0 | 96.0 | 89.8 | 89.8 | 89.8 |
| Adv Accuracy | **1.1** | 2.1 | 1.2 | **0.4** | 2.1 | 1.7 | **0.3** | 0.6 | **0.3** |
| % Perturbed | 12.9 | **11.2** | 13.1 | **7.2** | 8.1 | 9 | **3.1** | **3.1** | 3.7 |
| Semantic Similarity | **0.960** | **0.960** | 0.955 | **0.987** | 0.984 | 0.983 | **0.997** | 0.996 | 0.995 |
| Query Num | 99.1 | 285.9 | **52.3** | 429.8 | 1863.6 | **313.5** | 497.0 | 3306.6 | **385.1** |

BERT:

| | MR TF | MR PWWS | MR BufferSearch | Yelp TF | Yelp PWWS | Yelp BufferSearch | IMDB TF | IMDB PWWS | IMDB BufferSearch |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Original Accuracy | 90.4 | 90.4 | 90.4 | 97.0 | 97.0 | 97.0 | 90.9 | 90.9 | 90.9 |
| Adv Accuracy | **9.5** | 18.5 | 10.5 | **0.6** | 7.8 | 1.9 | **11.2** | 13.5 | 11.7 |
| % Perturbed | 17.7 | **13.6** | 18.1 | **9.4** | 9.5 | 10.3 | **3.8** | 5.0 | 4.2 |
| Semantic Similarity | 0.939 | **0.946** | 0.933 | **0.981** | 0.978 | 0.979 | **0.995** | 0.992 | 0.994 |
| Query Num | 144.7 | 285.2 | **86.3** | 502.1 | 1811.7 | **347.4** | 693.8 | 3240.5 | **501.6** |

Figure 3: The number of adversarial samples generated under varying the query budget \(Q_{\text{max}}\) on BERT.

Figure 4: Attacking performance based on varying sizes of historical information on IMDB (BERT, panel a) and Yelp (WordLSTM, panel b).

Table 4: Transferability of adversarial examples generated by row-wise models and datasets, then evaluated on column-wise models.

| | | WordCNN | WordLSTM | BERT |
| --- | --- | --- | --- | --- |
| MR | WordCNN | 1.3 | 54.0 | 83.7 |
| MR | WordLSTM | 51.3 | 1.2 | 84.5 |
| MR | BERT | 57.7 | 99.6 | 10.5 |
| Yelp | WordCNN | 1.2 | 83.2 | 91.3 |
| Yelp | WordLSTM | 80.3 | 1.7 | 90.4 |
| Yelp | BERT | 85.0 | 85.1 | 1.9 |
| IMDB | WordCNN | 0.0 | 76.0 | 84.2 |
| IMDB | WordLSTM | 81.0 | 0.3 | 81.7 |
| IMDB | BERT | 70.5 | 83.3 | 11.7 |

Table 5: Attacking BERT with(out) adversarial training on MR.

| Adv Training | Original Accuracy (%) | Adv Accuracy (%) | # of queries | Perturbation |
| --- | --- | --- | --- | --- |
| ✗ | 87.9 | 3.5 | 63.0 | 14.7 |
| ✓ | 85.9 | 14.1 | 81.6 | 16.1 |
## 6 Conclusion And Future Work

We propose a query-efficient attack method, BufferSearch, for the text classification task in the black-box setting. Extensive experiments show that BufferSearch can generate high quality adversarial samples with significantly fewer queries. Our work establishes baselines for future query-efficient attack studies in NLP related tasks.
2303.11125
One-Bit Massive MIMO Precoding for Frequency-Selective Fading Channels
One-bit digital-to-analog converters (DACs) are a practical and promising solution for reducing cost and power consumption in massive multiple-input multiple-output (MIMO) systems. However, the one-bit precoding problem is NP-hard and even more challenging in frequency-selective fading channels compared to the flat-fading scenario. While block-wise processing (BWP) can effectively address the inter-symbol-interference (ISI) in frequency-selective fading channels, its computational complexity and processing delay can be too high for practical implementation. An alternative solution to alleviate the processing complexity and delay issues is symbol-wise processing (SWP) which sequentially designs the transmit signals. However, existing SWP work leaves unwanted interference for later signal designs. In this paper, we propose an SWP approach which can efficiently address the ISI even at the symbol rate. The idea is to design the transmit signal to not only be beneficial for its time slot, but also to provide constructive interference for subsequent symbols. We develop two active ISI processing methods that significantly outperform a conventional approach, one of which that even outperforms the BWP approach at low SNR.
Ly V. Nguyen, Lu Liu, Nguyen Linh-Trung, A. Lee Swindlehurst
2023-03-20T14:04:13Z
http://arxiv.org/abs/2303.11125v1
# One-Bit Massive MIMO Precoding for Frequency-Selective Fading Channels

###### Abstract

One-bit digital-to-analog converters (DACs) are a practical and promising solution for reducing cost and power consumption in massive multiple-input multiple-output (MIMO) systems. However, the one-bit precoding problem is NP-hard and even more challenging in frequency-selective fading channels compared to the flat-fading scenario. While block-wise processing (BWP) can effectively address the inter-symbol-interference (ISI) in frequency-selective fading channels, its computational complexity and processing delay can be too high for practical implementation. An alternative solution to alleviate the processing complexity and delay issues is symbol-wise processing (SWP) which sequentially designs the transmit signals. However, existing SWP work leaves unwanted interference for later signal designs. In this paper, we propose an SWP approach which can efficiently address the ISI even at the symbol rate. The idea is to design the transmit signal to not only be beneficial for its time slot, but also to provide constructive interference for subsequent symbols. We develop two active ISI processing methods that significantly outperform a conventional approach, one of which that even outperforms the BWP approach at low SNR.

## I Introduction

Massive multiple-input multiple-output (MIMO) technology is a key technology for 5G-and-beyond wireless networks due to the energy and spectral efficiency benefits that derive from employing very large antenna arrays at the base station (BS). However, cost and power consumption at the BS in massive MIMO systems can be prohibitively high when implemented with standard high-resolution radio-frequency hardware. The use of one-bit digital-to-analog converters (DACs) is an alternative solution that significantly reduces cost and power consumption in massive MIMO systems. Unfortunately, optimal one-bit massive MIMO precoding is an NP-hard problem because each antenna can only transmit a symbol in the set \(\{\pm 1\pm 1j\}\). This challenging but interesting problem has been studied intensively in the literature. However, the majority of existing work considers flat-fading channels, e.g., [1, 2, 3, 4, 5, 6]. For frequency-selective fading channels, there have been some results reported in [7, 8, 9, 10, 11, 12, 13], but this work is primarily focused on orthogonal frequency division multiplexing (OFDM). In this paper, we study the problem of one-bit massive MIMO precoding for frequency-selective fading channels. This problem is more challenging compared to flat-fading channels due to inter-symbol-interference (ISI), where symbols transmitted in one time slot affect the received signal at not only that time slot but those in the future. This line of research can be categorized into two groups: symbol-wise processing (SWP) and block-wise processing (BWP). In SWP, the transmit signals in different time slots of a coherence block are designed sequentially and separately [7], while in BWP they are jointly optimized [7, 8, 9, 10, 11, 12, 13]. The main benefit of BWP is that ISI can be effectively addressed thanks to the joint optimization over the entire block. However, such approaches suffer from high computational complexity and long processing delay because the design of all the transmit signals in the block must be done concurrently before the signal in the first time slot can be transmitted.
On the other hand, SWP can alleviate both the complexity and processing delay associated with BWP since it designs the transmit signals independently from one time slot to the next. For SWP, once the transmit signal in a given time slot is designed, it can be transmitted without waiting for the design of future signals. However, SWP is inferior to BWP in terms of performance since it cannot fully address the ISI. To the best of our knowledge, [7] is the only work in the literature of one-bit massive MIMO precoding for frequency-selective fading channels that has considered the SWP approach. However, the SWP algorithm in [7] does not take into account the effects of the transmitted signals on later time slots. Motivated by this observation, in this paper we propose an SWP approach that can efficiently address the ISI effect even at the symbol rate. The idea is to design the transmit signal to not only be beneficial for its time slot, but also to provide constructive interference for subsequent symbols. We propose two SWP methods based on the maximum-safety margin optimization metric, one of which outperforms the other at low signal-to-noise ratios (SNRs) and vice versa at high SNRs. Simulation results also show that the bit-error-rate (BER) of the proposed methods are significantly lower than that of the conventional SWP method in [7] and one of the proposed methods even outperforms the corresponding BWP approach at low SNRs. _Notation:_ Upper-case and lower-case boldface letters denote matrices and column vectors, respectively. \(|\cdot|\) denotes the absolute value of a number and \([\cdot]^{T}\) denotes the transpose. The notation \(\Re\{\cdot\}\) and \(\Im\{\cdot\}\) respectively denotes the real and imaginary parts of the complex argument. If \(\Re\{\cdot\}\) and \(\Im\{\cdot\}\) are applied to a matrix or vector, they are applied separately to every element of that matrix or vector. \(\mathbb{R}\) and \(\mathbb{C}\) denote the set of real and complex numbers, respectively, and \(j\) is the unit imaginary number satisfying \(j^{2}=-1\). ## II System Model and Problem Formulation ### _System Model_ We consider a downlink massive MIMO system with an \(N\)-antenna base station serving \(K\) single-antenna users, where it is assumed that \(N\geq K\). Let \(\mathbf{H}_{\ell}\in\mathbb{C}^{K\times N}\) denote the \(\ell^{\text{th}}\) channel tap, \(\ell\in\mathcal{L}=\{0,1,\ldots,L-1\}\), where \(L\) is the number of channel taps. We assume perfect channel state information (CSI) and focus on the precoding problem. Let \(\mathbf{x}_{t}\) denote the transmit signal vector at time slot \(t\). We assume that the base station employs two \(1\)-bit DACs, one for the in-phase and the other for the quadrature signal. Hence, the signal \(x_{t,n}\) transmitted by the \(n^{\text{th}}\) antenna is confined to the discrete set \(\mathcal{X}=\{\pm 1\pm 1j\}\). Let \(\mathbf{y}_{t}\in\mathbb{C}^{K}\) be the signal vector received by the users, which is given as \[\mathbf{y}_{t}=\sqrt{\frac{\rho}{2N}}\sum_{\ell=0}^{L-1}\mathbf{H}_{\ell} \mathbf{x}_{t-\ell}+\mathbf{n}_{t}, \tag{1}\] where \(\mathbf{n}_{t}\sim\mathcal{CN}(0,\sigma^{2}\mathbf{I}_{K})\) is the noise vector, \(t=1,\ldots,T_{\text{c}}\), where \(T_{\text{c}}\) is the length of the coherence block, and the normalization by \(2N\) leads to the interpretation of \(\rho\) as the total transmit power. ### _Problem Formulation_ Let \(\mathbf{s}_{t}\in\mathcal{C}^{K}\) denote the symbols we intend the users to detect at time slot \(t\). 
We consider \(D\)-PSK signaling, i.e., \(s_{t,k}=\exp{(j\pi\frac{2d_{k}+1}{D})}\) for some \(d_{k}\in\{0,\ldots,D-1\}\). The rotated noiseless received signal vector is given as \[\mathbf{z}_{t}=\gamma\operatorname{diag}{(\mathbf{s}_{t}^{*})}\sum_{\ell=0}^ {L-1}\mathbf{H}_{\ell}\mathbf{x}_{t-\ell} \tag{2}\] where \(\gamma=\sqrt{\rho/(2N)}\). The safety margin [8] of user \(k\) at time slot \(t\) is illustrated in Fig. 1 and is given by \[\delta_{t,k}=z_{t,k}^{\mathbb{R}}\sin(\theta)-|z_{t,k}^{\mathbb{I}}|\cos( \theta), \tag{3}\] where \(z_{t,k}^{\mathbb{R}}\) and \(z_{t,k}^{\mathbb{I}}\) denote the real and imaginary parts of \(z_{t,k}\), respectively, and \(\theta=\pi/D\). It is clear that the farther \(z_{t,k}\) is from the symbol decision boundaries, the more likely it is that the received signal \(y_{t,k}\) will be correctly detected, i.e., the more robust it will be against the effects of noise and interference. Therefore, we want to increase the safety margins of the users as much as possible. A common design approach is to maximize the minimum safety margin \(\min\delta_{t,k}\) over the users and over the entire coherence block. However, this approach requires block-wise processing of all the transmit signal vectors \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{T_{\text{c}}}\}\). Such a block-wise design can lead to excessive computational complexity and processing delay since the signal in the first time slot \(\mathbf{x}_{1}\) cannot be transmitted until the entire block design is completed. For example, BWP based on linear programming scales polynomially with the block size \(T_{\text{c}}\) while SWP scales only linearly with \(T_{\text{c}}\)[7]. In this paper, we focus on the SWP design perspective and propose two methods that can effectively address the ISI effect. ## III Passive ISI Processing This section presents the SWP design method in [7], which is referred to as _passive_ ISI processing. The received signal vector at time slot \(t\) can be decomposed as follows: \[\mathbf{y}_{t}=\gamma\mathbf{H}_{0}\mathbf{x}_{t}+\gamma\underbrace{\sum_{ \ell=1}^{L-1}\mathbf{H}_{\ell}\mathbf{x}_{t-\ell}}_{\boldsymbol{\eta}_{t}}+ \mathbf{n}_{t}\, \tag{4}\] where the term \(\boldsymbol{\eta}_{t}\) represents the ISI due to the delayed channel taps. The rotated noiseless received signal vector can then be written in the following form: \[\mathbf{z}_{t} =\operatorname{diag}{(\mathbf{s}_{t}^{*})}\bigg{(}\gamma\mathbf{H }_{0}\mathbf{x}_{t}+\gamma\sum_{\ell=1}^{L-1}\mathbf{H}_{\ell}\mathbf{x}_{t- \ell}\bigg{)} \tag{5}\] \[=\mathbf{W}_{t}\mathbf{x}_{t}+\mathbf{u}_{t}\, \tag{6}\] where \(\mathbf{W}_{t}=\gamma\operatorname{diag}{(\mathbf{s}_{t}^{*})}\mathbf{H}_{0}\) reflects the effect of the current channel tap \(\mathbf{H}_{0}\) and \(\mathbf{u}_{t}=\gamma\operatorname{diag}{(\mathbf{s}_{t}^{*})}\sum_{\ell=1}^{L -1}\mathbf{H}_{\ell}\mathbf{x}_{t-\ell}\) accounts for the ISI due to the delayed channel taps.
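To make the safety-margin computation concrete, the following is a minimal numpy sketch of (2)-(3) (our own illustration, not the authors' code; the function and argument names are ours). It takes the channel taps, the current and past one-bit transmit vectors, and the intended symbols, and returns the per-user margins \(\delta_{t,k}\); the decomposition in (6) simply corresponds to splitting the sum into the \(\ell=0\) term (\(\mathbf{W}_{t}\mathbf{x}_{t}\)) and the remaining taps (\(\mathbf{u}_{t}\)).

```python
import numpy as np

def safety_margins(H, x_hist, s_t, rho, D):
    """Per-user safety margins delta_{t,k} of eq. (3) at one time slot.

    H      : list of L channel taps, each a (K, N) complex array
    x_hist : list [x_t, x_{t-1}, ..., x_{t-L+1}] of length-N one-bit vectors
    s_t    : length-K vector of intended D-PSK symbols at time slot t
    rho    : total transmit power;  D : PSK order (theta = pi / D)
    """
    K, N = H[0].shape
    gamma = np.sqrt(rho / (2 * N))
    theta = np.pi / D
    # rotated noiseless received signal, eq. (2): z_t = gamma * diag(s_t^*) * sum_l H_l x_{t-l}
    z = gamma * np.conj(s_t) * sum(H[l] @ x_hist[l] for l in range(len(H)))
    # eq. (3): signed distance from the D-PSK decision boundaries
    return z.real * np.sin(theta) - np.abs(z.imag) * np.cos(theta)
```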
At a time slot \(t\), the SWP design optimizes the transmit signal vector \(\mathbf{x}_{t}\) to maximize the minimum safety margin of this time slot [7], which can be written as \[\begin{split}\underset{\mathbf{x}_{t},\ \delta^{\min}}{\mathrm{maximize}}&\delta^{\min}\\ \mathrm{subject\ to}&\delta_{t,k}\geq\delta^{\min}\ \ \forall k\in\mathcal{K},\\ &\mathbf{x}_{t}\in\{\pm 1\}^{2N},\end{split} \tag{7}\] where \(\mathcal{K}=\{1,\ldots,K\}\) is the set of users and the constraint on \(\mathbf{x}_{t}\) is expressed in terms of its stacked real and imaginary parts. The constraint \(\delta_{t,k}\geq\delta^{\min}\ \forall k\in\mathcal{K}\) can be written in the matrix form \(\mathbf{Q}_{t}\boldsymbol{\nu}_{t}\leq\mathbf{c}_{t}\), where \(\boldsymbol{\nu}_{t}=[\Re\{\mathbf{x}_{t}^{T}\},\Im\{\mathbf{x}_{t}^{T}\}, \delta^{\min}]^{T}\) is the vector variable to be optimized, \(\mathbf{c}_{t}\) is a vector accounting for the ISI and is given as \[\mathbf{c}_{t}=\begin{bmatrix}\tan(\theta)\Re\{\mathbf{u}_{t}\}-\Im\{\mathbf{ u}_{t}\}\\ \tan(\theta)\Re\{\mathbf{u}_{t}\}+\Im\{\mathbf{u}_{t}\}\end{bmatrix}, \tag{8}\] and \[\mathbf{Q}_{t}=\begin{bmatrix}\mathbf{B}_{t}-\tan(\theta)\mathbf{A}_{t}& \frac{1}{\cos(\theta)}\mathbf{1}_{K}\\ -\mathbf{B}_{t}-\tan(\theta)\mathbf{A}_{t}&\frac{1}{\cos(\theta)}\mathbf{1}_{K} \end{bmatrix}\, \tag{9}\] where \(\mathbf{A}_{t}=[\Re\{\mathbf{W}_{t}\},-\Im\{\mathbf{W}_{t}\}]\) and \(\mathbf{B}_{t}=[\Im\{\mathbf{W}_{t}\},\Re\{\mathbf{W}_{t}\}]\). Fig. 1: Illustration of the safety margin for user \(k\) at time slot \(t\). The correct symbol region includes the pink and green areas. In [7], the constraints \(x_{t,n}\in\{\pm 1\}\) are relaxed to \(-1\leq x_{t,n}\leq 1\) to obtain the following convex linear programming problem: \[\begin{split}\underset{\boldsymbol{\nu}_{t}}{\mathrm{maximize}}& [\mathbf{0}_{2N}^{T},\ 1]\boldsymbol{\nu}_{t}\\ \mathrm{subject\ to}&\mathbf{Q}_{t}\boldsymbol{\nu}_ {t}\leq\mathbf{c}_{t}\\ &-\mathbf{1}_{2N}\leq\begin{bmatrix}\Re\{\mathbf{x}_{t}\}\\ \Im\{\mathbf{x}_{t}\}\end{bmatrix}\leq\mathbf{1}_{2N}\.\end{split} \tag{10}\] If we let \(\boldsymbol{\nu}_{t}^{\star}\) be the solution of (10), the transmit signal \(\mathbf{x}_{t}\) is obtained as \(x_{t,n}=\mathrm{sign}(\boldsymbol{\nu}_{t,n}^{\star})\) for \(n=1,\ldots,2N\). _Discussion:_ In the above SWP approach, the ISI term \(\boldsymbol{\eta}_{t}\) from the past transmit signals and the effect of \(\mathbf{x}_{t}\) on time slot \(t\) are taken into account when designing the signal \(\mathbf{x}_{t}\). However, this method ignores the effect of \(\mathbf{x}_{t}\) on the future (delayed) time slots \(t+1,\ldots,t+L-1\) as illustrated in Fig. 2, and therefore unintentionally induces unwanted interference for the design of the future signals \(\mathbf{x}_{t+1},\ldots,\mathbf{x}_{t+L-1}\). In other words, the design of \(\mathbf{x}_{t}\) has to _passively_ cope with the ISI term \(\boldsymbol{\eta}_{t}\), which is unwanted interference from the design of \(\mathbf{x}_{t-1},\ldots,\mathbf{x}_{t-L+1}\). Motivated by this observation, in the following section, we propose an SWP approach that takes into account \(\boldsymbol{\eta}_{t}\) and the effect of \(\mathbf{x}_{t}\) on all time slots from \(t\) to \(t+L-1\). In this way, our proposed approach will _actively_ provide constructive interference for the future signal designs. ## IV Proposed Active ISI Processing Here, we propose an SWP approach that takes into account the interference of the past time slots while at the same time providing constructive interference for those in the future.
Since the signal \(\mathbf{x}_{t}\) affects the \(L\) time slots \(t,\ldots,t+L-1\), our idea is to take into account the safety margins at these time slots when designing \(\mathbf{x}_{t}\). This is unlike the method in [7], which considers the safety margins only at time slot \(t\) when designing \(\mathbf{x}_{t}\). In the following, we propose two relevant optimization methods; one maximizes the minimum safety margin over all the users and time slots \(t,\ldots,t+L-1\), while the other maximizes the sum of the minimum safety margins obtained over the time slots \(t,\ldots,t+L-1\). ### _Method 1: Maximizing the Minimum Safety Margin_ This method aims to maximize the minimum safety margin of all \(K\) users over the \(L\) time slots \(t,\ldots,t+L-1\) as follows: \[\begin{split}\underset{\mathbf{x}_{t},\ \delta^{\min}}{ \mathrm{maximize}}&\delta^{\min}\\ \mathrm{subject\ to}&\delta_{t+\ell,k}\geq\delta^{\min}\ \ \forall\ell\in\mathcal{L},\ k\in\mathcal{K}\\ &\mathbf{x}_{t}\in\{\pm 1\}^{2N}.\end{split} \tag{11}\] This optimization problem can also be relaxed and written as a linear programming problem: \[\begin{split}\underset{\boldsymbol{\nu}_{t}}{\mathrm{maximize}}& [\mathbf{0}_{2N}^{T},\ 1]\boldsymbol{\nu}_{t}\\ \mathrm{subject\ to}&\mathbf{Q}_{t+\ell}\boldsymbol{\nu}_ {t}\leq\mathbf{c}_{t+\ell}\ \ \forall\ell\in\mathcal{L}\\ &-\mathbf{1}_{2N}\leq\begin{bmatrix}\Re\{\mathbf{x}_{t}\}\\ \Im\{\mathbf{x}_{t}\}\end{bmatrix}\leq\mathbf{1}_{2N}.\end{split} \tag{12}\] Note that the definition of \(\mathbf{Q}_{t+\ell}\) requires \(\mathbf{A}_{t+\ell}\) and \(\mathbf{B}_{t+\ell}\), which are given by \(\mathbf{A}_{t+\ell}=[\Re\{\mathbf{W}_{t+\ell}\},-\Im\{\mathbf{W}_{t+\ell}\}]\) and \(\mathbf{B}_{t+\ell}=[\Im\{\mathbf{W}_{t+\ell}\},\Re\{\mathbf{W}_{t+\ell}\}]\), where \(\mathbf{W}_{t+\ell}=\gamma\operatorname{diag}\left(\mathbf{s}_{t+\ell}^{*}\right)\mathbf{ H}_{\ell}\). The definition of \(\mathbf{c}_{t+\ell}\) requires \(\mathbf{u}_{t+\ell}\), which is given by \(\mathbf{u}_{t+\ell}=\gamma\operatorname{diag}\left(\mathbf{s}_{t+\ell}^{*}\right)\sum_{ \ell^{\prime}=\ell+1}^{L-1}\mathbf{H}_{\ell^{\prime}}\mathbf{x}_{t+\ell-\ell^ {\prime}}\). It should be noted that the signals \(\mathbf{x}_{t+1},\ldots,\mathbf{x}_{t+L-1}\) have not been designed yet, and therefore the safety margins at time slots \(t+1,\ldots,t+L-1\) are computed using only the previously designed signals \(\mathbf{x}_{t-1},\ldots,\mathbf{x}_{t-L+2}\). This explains why the index \(\ell^{\prime}\) in the computation of \(\mathbf{u}_{t+\ell}\) starts from \(\ell+1\) instead of \(1\). Finally, we take the sign of the first \(2N\) elements of the solution of (12) to obtain the transmit signal \(\mathbf{x}_{t}\).
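As an illustration of how the relaxed problem (12) can be assembled and solved in practice, here is a short sketch using numpy and scipy.optimize.linprog (our own code written in the notation above, not the authors' implementation). Keeping only the \(\ell=0\) block of constraints recovers the passive design (10) of Section III, while Method 2 described next would instead introduce one slack variable \(\delta_{\ell}^{\min}\) per slot.

```python
import numpy as np
from scipy.optimize import linprog

def active_maxmin_precoder(H, x_past, S, rho, D):
    """Relaxed max-min design of eq. (12) for one time slot (Method 1).

    H      : list of L channel taps, each (K, N) complex
    x_past : [x_{t-1}, ..., x_{t-L+1}], the L-1 previously designed vectors
    S      : [s_t, s_{t+1}, ..., s_{t+L-1}], intended symbols of the affected slots
    Returns x_t with entries in {+-1 +-1j} after sign quantization.
    """
    L, (K, N) = len(H), H[0].shape
    gamma, theta = np.sqrt(rho / (2 * N)), np.pi / D
    A_ub, b_ub = [], []
    for ell in range(L):
        W = gamma * np.conj(S[ell])[:, None] * H[ell]              # W_{t+ell}
        A = np.hstack([W.real, -W.imag])                           # A_{t+ell}
        B = np.hstack([W.imag,  W.real])                           # B_{t+ell}
        u = np.zeros(K, dtype=complex)                             # u_{t+ell}: ISI from already-fixed signals
        for lp in range(ell + 1, L):
            u += H[lp] @ x_past[lp - ell - 1]                      # this term is x_{t+ell-lp}
        u = gamma * np.conj(S[ell]) * u
        col = np.full((K, 1), 1.0 / np.cos(theta))                 # column multiplying delta_min
        A_ub.append(np.vstack([np.hstack([ B - np.tan(theta) * A, col]),
                               np.hstack([-B - np.tan(theta) * A, col])]))
        b_ub.append(np.concatenate([np.tan(theta) * u.real - u.imag,
                                    np.tan(theta) * u.real + u.imag]))
    obj = np.zeros(2 * N + 1); obj[-1] = -1.0                      # maximize delta_min
    bounds = [(-1.0, 1.0)] * (2 * N) + [(None, None)]
    res = linprog(obj, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=bounds, method="highs")
    v = np.where(res.x[:2 * N] >= 0, 1.0, -1.0)                    # sign quantization
    return v[:N] + 1j * v[N:]
```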
### _Method 2: Maximizing the Sum of Minimum Safety Margins_ This method aims to maximize the sum of the per-time slot minimum safety margins, as follows: \[\begin{split}\underset{\mathbf{x}_{t},\ \delta^{\min}}{\mathrm{maximize}}&\sum_{\ell=0}^{L-1}\delta^{\min}_{ \ell}\\ \mathrm{subject\ to}&\delta_{t+\ell,k}\geq\delta^{\min}_{\ell}\ \ \forall\ell\in\mathcal{L},\ k\in\mathcal{K}\\ &\mathbf{x}_{t}\in\{\pm 1\}^{2N}\.\end{split} \tag{13}\] This problem can also be relaxed and written as a linear programming problem: \[\begin{split}\underset{\boldsymbol{\upsilon}_{t}}{\mathrm{maximize}}&[ \mathbf{0}_{2N}^{T},\ \mathbf{1}_{L}^{T}]\boldsymbol{\upsilon}_{t}\\ \mathrm{subject\ to}&\mathbf{G}_{t+\ell}\boldsymbol{\upsilon}_ {t}\leq\mathbf{c}_{t+\ell}\ \ \forall\ell\in\mathcal{L}\\ &-\mathbf{1}_{2N}\leq\begin{bmatrix}\Re\{\mathbf{x}_{t}\}\\ \Im\{\mathbf{x}_{t}\}\end{bmatrix}\leq\mathbf{1}_{2N}.\end{split} \tag{14}\] Here, \(\boldsymbol{\upsilon}_{t}=[\Re\{\mathbf{x}_{t}^{T}\},\Im\{\mathbf{x}_{t}^{T}\}, \delta^{\min}_{0},\cdots,\delta^{\min}_{L-1}]^{T}\) and \[\mathbf{G}_{t+\ell}=\begin{bmatrix}\mathbf{B}_{t+\ell}-\tan(\theta)\mathbf{A} _{t+\ell}&\frac{1}{\cos(\theta)}\mathbf{E}_{\ell+1}\\ -\mathbf{B}_{t+\ell}-\tan(\theta)\mathbf{A}_{t+\ell}&\frac{1}{\cos(\theta)} \mathbf{E}_{\ell+1}\end{bmatrix}\, \tag{15}\] where \(\mathbf{E}_{\ell+1}\) is a real-valued matrix of size \(K\times L\) whose \((\ell+1)^{\text{th}}\) column is a vector of all ones and whose other columns are all zeros. Similarly, we take the sign of the first \(2N\) elements of the solution of (14) to obtain the transmit signal \(\mathbf{x}_{t}\). Fig. 2: The design of \(\mathbf{x}_{t}\) in [7] only takes into account the ISI term \(\boldsymbol{\eta}_{t}\) and the effect of \(\mathbf{x}_{t}\) on the received signal at time \(t\) (blue arrows), and ignores the effect of \(\mathbf{x}_{t}\) on the future time slots (red arrows). ## V Numerical Results This section provides numerical results to show the superiority of the proposed methods. We set \(K=4\), \(N=64\), \(T_{\rm c}=256\), and \(D=8\) (i.e., 8-PSK signaling). Each channel element is generated as a \(\mathcal{CN}(0,1/L)\) random variable and the SNR is defined as \(\rho/\sigma^{2}\). In Fig. 3, we compare the proposed SWP methods 1 and 2 (referred to as 'max-min' and 'max-sum-min', respectively) with the conventional SWP method (referred to as 'passive SWP') and also the BWP method in [7]. It can be seen that the proposed methods significantly outperform the conventional passive SWP method, since the active SWP methods create constructive interference for the transmit signal design in future symbol periods to exploit. It is also interesting to note that the max-sum-min method gives the best performance at low SNRs and even outperforms the BWP method, which jointly designs the entire coherence block of 256 time slots. At high SNRs, the max-min method gives lower BERs compared to the max-sum-min method. To explain this, we provide a sample plot of the noiseless received signals for the max-min and max-sum-min methods in Fig. 4. It is observed that while the max-sum-min method moves the majority of signals far from the decision boundaries, the max-min method pushes the worst signal sample away from the boundaries and therefore the majority of signals are pulled closer to the decision thresholds as compared to the max-sum-min method. This explains why at low SNRs, when the noise is strong, the max-sum-min approach gives better performance.
However, the drawback of the max-sum-min method is that it focuses on the strongest signals and therefore may leave some received signals very near the origin, as seen in the figure. Such signals are obviously more susceptible to a noise-induced detection error. In Fig. 5, we compare the proposed active SWP methods with the conventional passive SWP method for different numbers of channel taps \(L\). It can be seen that as \(L\) increases, the improvement of the proposed active methods over the conventional passive method also increases. This is due to the fact that a channel with a longer delay spread will result in more ISI, which significantly degrades the performance of the passive method since it ignores the effect of the design in a given time slot on future time slots. On the other hand, the proposed active methods better account for this effect by actively providing constructive interference that can be exploited in the design of future transmitted signals. ## VI Conclusion In this paper, we have proposed an SWP approach that not only takes into account interference from past signals on the current time slot, but that also generates constructive interference that can be exploited by future signal designs. We proposed two active ISI precoders, one based on maximizing the minimum safety margin for all users, and the other on maximizing the sum of the minimum safety margins over the delay spread. These two methods effectively address the ISI effect even at the symbol processing rate and significantly outperform a conventional SWP method. One of the proposed SWP methods can even yield better performance compared to its BWP counterpart at low SNRs. Fig. 4: Noiseless received signals for the proposed methods. Fig. 5: BER performance comparison for different values of \(L\) at 20 dB SNR. Fig. 3: BER performance comparison with \(L=3\).
2303.06459
Optimal endorsement for network-wide distributed blockchains
Blockchains offer trust and immutability in non-trusted environments, but most are not fast enough for latency-sensitive applications. Hyperledger Fabric (HF) is a common enterprise-level platform that is being offered as Blockchain-as-a-Service (BaaS) by cloud providers. In HF, every new transaction requires a preliminary endorsement by multiple mutually untrusted parties called organizations, which contributes to the delay in storing the transaction in the blockchain. The endorsement policy is specific to each application and defines the required approvals by the endorser peers (EPs) of the involved organizations. In this paper, given an input endorsement policy, we studied the optimal choice to distribute the endorsement requests to the proper EPs. We proposed the OPEN algorithm, devised to minimize the latency due to both network delays and the processing times at the EPs. By extensive simulations, we showed that OPEN can reduce the endorsement latency up to 70% compared to the state-of-the-art solution and approximated well the introduced optimal policies while offering a negligible implementation overhead compared to them.
Iman Lotfimahyari, Paolo Giaccone
2023-03-11T17:15:49Z
http://arxiv.org/abs/2303.06459v1
# Optimal endorsement for network-wide distributed blockchains ###### Abstract Blockchains offer trust and immutability in non-trusted environments, but most are not fast enough for latency-sensitive applications. Hyperledger Fabric (HF) is a common enterprise-level platform that is being offered as Blockchain-as-a-Service (BaaS) by cloud providers. In HF, every new transaction requires a preliminary _endorsement_ by multiple mutually untrusted parties called organizations, which contributes to the delay in storing the transaction in the blockchain. The _endorsement policy_ is specific to each application and defines the required approvals by the _endorser peers_ (EPs) of the involved organizations. In this paper, given an input _endorsement policy_, we studied the optimal choice to distribute the endorsement requests to the proper EPs. We proposed the OPEN algorithm, devised to minimize the latency due to both network delays and the processing times at the EPs. By extensive simulations, we showed that OPEN can reduce the _endorsement latency_ up to 70% compared to the state-of-the-art solution and approximated well the introduced optimal policies while offering a negligible implementation overhead compared to them. Blockchains, Hyperledger Fabric, Endorsement policy ## I Introduction Nowadays, blockchains have become more and more relevant in many ICT applications. Pioneered by Bitcoin [1] and Ethereum [2], blockchains bring trust between different entities where trust is either nonexistent or unproven. They can improve security and privacy while offering a decentralized structure. The provided immutability brings visibility and traceability, beneficial for ICT applications, such as banking, supply-chain, IoT, healthcare, and energy sectors [3, 4, 5]. A blockchain is public if it is open to everyone to read otherwise it is private. But, if a node needs permission to participate in validating transactions, then the blockchain is permissioned otherwise it is permissionless [6]. In contrast to public permissionless blockchains like Bitcoin, many enterprise applications require performance that permissionless blockchains are unable to deliver. Furthermore, many use cases necessitate knowing the identity of the participants, such as in financial transactions where notary service regulations must be followed. Private permissioned blockchains, such as Hyperledger Fabric (HF) [7] and Corda [8], meet such requirements. In HF, a transaction must be endorsed (i.e., approved) by the organizations constituting the blockchain, according to a specified endorsement policy. This guarantees a mutual agreement between non-trusted parties, similarly, in the physical world, to a receipt declaring an asset transfer between two parties, signed by both parties. HF uses an architecture called Execute-Order-Validate for transactions, enabling the definition of endorsement policies. During the execution phase, the client sends the transaction to some Endorser Peers (EPs), based on the user's specified endorsement policy. Each EP processes the transaction by only simulating it without applying the results on the blockchain. The simulation result, denoted as "endorsement", is signed by the EP and returned to the client. Finally, if the endorsement policy is satisfied, the signed and endorsed proposal of the transaction will be sent to the blockchain nodes to be stored. 
The endorsement delay experienced by a client is affected mainly by two components: i) the network delay between the client and the EPs and ii) the processing delay at each EP. The network delay mainly depends on the network congestion and the propagation delays, whereas the processing delay depends on both the CPU capability and the computation load of each EP. Because network and processing delays are time-varying and hence difficult to predict, optimally selecting EPs is hard. Note that a selection algorithm choosing just the best EP based on the minimum experienced delays will concentrate the endorsement requests to the same EPs, increasing the network congestion and the processing load, thus increasing the overall endorsement delays. In this work, we propose an _optimal EP selection policy_ minimizing the endorsement delays. The main idea is to send redundant endorsement requests to multiple EPs. The adopted spatial diversity increases the chance of having the best EPs among the selected ones. The benefit of the proposed approach can be captured by a simple queueing model in which a task is sent in parallel to multiple servers, each with its queueing system, to minimize task completion time. In this paper, our novel contributions are as follows: * We highlight the role of the network and processing delays in the overall endorsement delay. * We refer to a simple analytical model, based on the classical theory of queueing systems, to evaluate the effect of redundancy in selecting the EPs and to compute the optimal number of EPs. * We propose an optimization approach denoted as _OPtimal ENdorsement (OPEN)_ based on the analytical results, leveraging the history of endorsement delays. * We demonstrate through extensive simulations that OPEN outperforms the state-of-the-art solution and accurately approximates other optimal policies while having a much lower implementation overhead compared to them. * We provide some preliminary results regarding a proof-of-concept implementation. The rest of this paper is structured as follows. Sec. II describes the HF architecture and then focuses on the endorsement phase delay by introducing the network model and the EP selection problem. Sec. III explains a simple analytical model, derived from classical results on queueing theory, to find the optimal number of EPs in a simplified scenario. In Sec. IV, we propose an EP selection algorithm based on the optimal replication factor computed analytically in Sec. III, able to operate in a generic scenario. In Sec. V, we assess by simulation the performance of our proposed approach and compare it with the alternatives proposed and with the state-of-the-art solution. In Sec. VI we show some preliminary results of a proof-of-concept implementation of OPEN. In Sec. VII we discuss the related work. Finally, we draw our conclusions in Sec. VIII. ## II Hyperledger Fabric architecture and endorsement We start by describing the HF architecture and then we focus on the endorsement phase. ### _Hyperledger Fabric architecture and protocol_ The Execute-Order-Validate approach enables the simulation of the transactions before the agreement of the participants on recording the results in the blockchain. We highlight the role of the following entities in the reported reference architecture in Fig. 1. The _client_ is responsible for preparing the transaction proposal of the users' transactions and sending it to the Endorser Peers (defined below) based on the specified endorsement policy. 
If the client receives enough endorsements before a specific time-out, it forwards them to the ordering service; otherwise, the client can re-transmit the same proposal in the hope of receiving enough endorsements in time. The _ledger_ stores transactions in a distributed manner and is composed of two distinct yet related parts. The first is the actual "blockchain," which maintains a log of transactions shared among all nodes and is immutable by design. The other is the "world state," which holds the most recent state associated with the validated transactions recorded on the blockchain log. The _Peer_ is the element responsible for the following tasks. The _Endorser Peer_ (EP) simulates/executes the transaction received from the client application, based on the current values of the world state. The _Verifier/Committer_ (VCP) receives a block of simulated transactions from the ordering service and verifies their legitimacy to mark them as validated or invalidated. Then it appends the verified block to the blockchain, comprising all the transactions (validated or invalidated). To keep its copy of the ledger updated, an EP is typically also a VCP at the same time. The peers are owned by various _organizations_ that are blockchain members. An organization can be as small as an individual or as large as a multi-national corporation. The _ordering service_ receives endorsed transactions from clients, aggregates them into blocks, and distributes them to the VCPs. The _channel_ is a private blockchain overlay that provides data confidentiality and ledger isolation. The transacting parties must be authenticated on a channel to read/write the corresponding data. The _smart contract_ is a piece of code responsible for validating the transactions and thus performing any read-set/write-set of interactions with the ledger. Its functions include simulating the transactions, validating them based on the endorsement policy and corresponding world state, and finally updating the world state based on validated transactions. The _endorsement policy_ defines the logical conditions to validate a transaction in terms of the EPs on a channel that must execute a transaction proposal. In Sec. II-B, we will describe in detail the representation of the endorsement policy. The definition of an endorsement policy is at the organizational level, which means any EP of that organization can represent that organization in the endorsement policy. Fig. 1: The main entities involved in HF architecture. A transaction should pass three phases to be stored in the blockchain, as shown in Fig. 2. In the _simulation phase_ the client prepares the proposal for the transaction and sends it to a set of EPs, which depends on the specified endorsement policy. The EPs execute the transaction proposal and return the signed results to the client. As soon as the endorsement policy is satisfied, the client sends the signed endorsement results to the ordering service. During the _ordering phase_ the endorsed transactions received from the clients are ordered and packed to create new blocks. The VCPs will receive these blocks. In the _verification/validation and commit phase_, the received blocks from the ordering service are verified to have legitimate transactions based on endorsement requirements and world state values. The block will be added to the blockchain by the VCPs, and the world state will be updated using the validated transactions.
### _Standard form of an endorsement policy_ HF provides a very flexible way to define an endorsement policy. We will show that any endorsement policy, despite its complexity, can be reduced to a standard form. In HF, the definition of an endorsement policy is based on a syntax that allows the operators "AND", "OR" and "\(k\)-OutOf" to be applied to a set of organizations and nested expressions [9]. In particular, the operator "\(k\)-OutOf-\(E\)" returns true whenever at least \(k\) expressions within set \(E\) are satisfied. Despite the complexity of the policy expression, we prove that the following proposition holds: **Proposition 1**.: _Any endorsement policy obtained by combining arbitrarily "AND", "OR", and "OutOf" operators is equivalent to the policy:_ \[\text{OR}(St_{1},St_{2},...) \tag{1}\] _where each \(St_{i}\) is either a single organization or the conjunction ("AND") of different organizations._ Proof.: In the case of expressions based on only "AND" and "OR" operators, thanks to the distribution principle in logic expressions, we can transform the original expression into the target form (1). In the case of the "\(k\)-OutOf\((e_{1},e_{2},\ldots,e_{m})\)" operator, where each \(e_{i}\) is a single expression, by definition the following holds: \[k\text{-OutOf}(e_{1},e_{2},\ldots,e_{m})=\text{OR}(\{\text{AND}(E)\}_{E\in \Omega})\] where \(\Omega\) is the set of all \(\binom{m}{k}\) combinations of \(k\) expressions from the set of \(m\). Now, since any expression with the "OutOf" operator is equivalent to one with only "AND" and "OR", by following the previous reasoning, such an expression can be reduced to the expression (1). The policy, defined at the organization level, must be mapped into a policy defined at the EP level since the endorsement requests should be sent to the proper EPs. So, getting the endorsement from a specific organization requires receiving it from _any_ of its EPs, which is equivalent to the policy 1-OutOf\((p_{1},p_{2},\ldots)\), where \(p_{i}\) are the EPs within the organization. Applying Proposition 1 to the policy expression at the EP level, we can claim: **Proposition 2**.: _Any endorsement policy defined at the organization level can be expanded into an endorsement policy defined at the EP level as follows:_ \[\text{OR}(St_{1}^{\prime},St_{2}^{\prime},...) \tag{2}\] _where each \(St_{i}^{\prime}\) is either a single EP or the conjunction ("AND") of different EPs._ The result of Proposition 1 allows investigating only one standard form of endorsement expression, independently of the complexity of the original expression. Then, by using Proposition 2, we obtain the endorsement expression extended to the EP level. At this level, the final endorsement is just the OR of conjunctions ("AND") of different EPs of different organizations, as in (2). For example, consider a scenario with three organizations and two EPs in each of them. If the endorsement policy is "2-OutOf\((o_{1},o_{2},o_{3})\)", then we can rewrite it as: \[\text{2-OutOf}(o_{1},o_{2},o_{3})=\\ \text{OR}(\{\text{AND}(p_{ij},p_{i^{\prime}j^{\prime}}),\forall i,\forall i^{\prime}\neq i,\forall j,\forall j^{\prime}\}) \tag{3}\] where \(o_{i}\) is organization \(i\), and \(p_{ij}\) is EP \(j\) of organization \(i\). The expanded version in (3) lists all the possible combinations of the EPs that can satisfy the endorsement policy according to the standard form.
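To illustrate the expansion behind Propositions 1 and 2, the following is a small Python sketch (our own illustration, not the authors' implementation; the organization and peer names are hypothetical). It rewrites a nested policy into the standard form (2) at the EP level; note that redundant, non-minimal statements produced by the expansion are not pruned here.

```python
from itertools import combinations, product

# A policy is either a string (an organization) or a tuple:
#   ("AND", e1, e2, ...), ("OR", e1, e2, ...), ("OUTOF", k, e1, e2, ...)
def expand(policy):
    """Standard form of Proposition 1: a set of statements, each a frozenset of
    organizations whose conjunction satisfies the policy."""
    if isinstance(policy, str):
        return {frozenset([policy])}
    op = policy[0]
    if op == "OR":
        out = set()
        for sub in policy[1:]:
            out |= expand(sub)
        return out
    if op == "AND":
        out = set()
        for combo in product(*(expand(sub) for sub in policy[1:])):
            out.add(frozenset().union(*combo))
        return out
    if op == "OUTOF":
        k, subs = policy[1], policy[2:]
        out = set()
        for chosen in combinations(subs, k):
            out |= expand(("AND",) + chosen)
        return out
    raise ValueError("unknown operator " + str(op))

# Example of eq. (3): 2-OutOf(o1, o2, o3), two EPs per organization
org_peers = {"o1": ["p11", "p12"], "o2": ["p21", "p22"], "o3": ["p31", "p32"]}
policy = ("OUTOF", 2, "o1", "o2", "o3")

# Lift to the EP level (Proposition 2): each organization becomes 1-OutOf of its EPs
ep_statements = set()
for orgs in expand(policy):
    for peers in product(*(org_peers[o] for o in orgs)):
        ep_statements.add(frozenset(peers))
print(len(ep_statements))   # 12 distinct AND statements, matching eq. (3)
```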
Fig. 2: Transaction processing phases in HF highlighting all the message interactions between the involved entities. ### _Endorser peer (EP) selection algorithm_ In our work we focus on the EP selection algorithm, starting from the standard form of the endorsement policy. The _endorsement delay_ is the amount of time the client waits, from sending the endorsement request until receiving the first endorsement reply that satisfies the endorsement policy. The response delay from an EP is the sum of two components: the network delay and the processing delay at the EP. The _network delay_ depends on the propagation delay and the queueing delay along the path to the EP, which is affected by the time-variant congestion conditions. The overall _processing delay_ depends on the _queuing_ at the EP before being served and the _computation time_ at the EP, which depends on the CPU speed and the instantaneous CPU load and resource contentions. Because the standard form of any endorsement policy comprises an overall "OR" operator, as in (2), the endorsement latency corresponds to the _minimum_ delay to get a valid statement. Also, each statement is based on an "AND" operator between EPs, so the delay of each statement depends on the _maximum_ response delay of all EPs included in a statement. In summary, the endorsement latency depends on the "fastest" group of EPs forming a statement, while the delay of each group depends on the "slowest" EP within the group. ### _System model for the endorsement phase_ Without loss of generality, we consider a fixed network topology connecting \(C\) clients with \(Q\) organizations, each of them with a generic network connecting the internal EPs, depicted in Fig. 3a. We assume that all nodes in the system are always available, the routing is fixed, and the links have enough bandwidth to prevent network congestion caused by the endorsement protocol. Thanks to the service discovery process in HF, we consider only the most updated EPs. ## III Background on optimal replication in queueing systems Now, we discuss an analytical model to compute the optimal number of EPs for each transaction, derived from classical results on task replication in a queueing system, as explained in Sec. VII. For the sake of readability, we report the adopted notation in Table I. We consider a simplified model as shown in Fig. 3b, with one EP in each organization. We assume \(1\)-OutOf-\(Q\) as the endorsement policy, which corresponds to \(\text{OR}(p_{1},p_{2},\ldots,p_{Q})\) in its standard form. For now, we neglect the network delays and concentrate just on processing delays. We suppose each client generates endorsement requests according to a Poisson process with rate \(\lambda\). Each client selects at random \(R\) EPs to send the endorsement request. In the following, \(R\) will be denoted as the _redundancy factor_. To model the processing time variability at the EP, we assume an exponentially distributed processing time with an average \(1/\mu\), consistently with past works [10, 11]. Thus each EP can be modeled as an M/M/1 queue with arrival rate \(\lambda RC/Q\) and service rate \(\mu\). We define the utilization factor for each EP as \(U=\lambda RC/\mu Q\). Thus, for the request traffic to be sustainable, \(U<1\) and the endorsement request arrival rate must satisfy \(\lambda<\mu Q/(RC)\). We can now claim the following: Footnote 1: In classical queueing theory, an M/M/1 queue has a single server, arrivals follow a Poisson process and service times are exponentially distributed [12].
\begin{table} \begin{tabular}{l l} \(C\) & number of clients \\ \(Q\) & number of organizations \\ \(p\) & endorser peer (EP) \\ \(\lambda\) & arrival rate of new transactions to each client \\ \(\mu\) & inverse of computation time for the EP server \\ \(U\) & utilization factor in each EP server \\ \(R\) & redundancy factor \\ \(W_{i}\) & waiting time needed for the \(i\)th request to be served \\ \(S_{i}\) & inter-arrival time between the ordered version of \(\{W_{i}\}_{i}\) \\ \(\gamma\) & normalized load factor for the worst case \(R=Q\) \\ \(\hat{R}_{k}\) & optimal \(R\) for policy \(k\)-OutOf-\(Q\) \\ \(L_{k}\) & endorsement latency for policy \(k\)-OutOf-\(Q\) \\ \(\mathcal{P}\) & set of all available EPs \\ \(\mathcal{P}_{e}\) & set of selected EPs \\ \(\mathcal{P}_{e}^{\text{old}}\) & set of previously selected EPs \\ \(T\) & probe sampling period \\ \(\text{TX}^{n}\) & transaction with local sequence number \(n\) \\ \(x^{k}\) & endorsement latency of \(\text{TX}^{k}\) for peer \(p\) \\ \(t_{p}^{\text{Exp}}\) & virtual response delay of a peer for the new TX \\ \(t_{p}^{\text{busy}}\) & virtual time at which EP \(p\) is not busy anymore \\ \(\tau_{p}^{\text{loc}}\) & processing delay for the current endorsement request \\ \(\tau_{p}^{\text{net}}\) & network delay for EP \(p\) \\ \(\tau_{p}^{\text{Queue}}\) & queueing time experienced by the TX at EP \(p\) \\ \(d_{cp}\) & the network delay between client \(c\) and EP \(p\) \\ \end{tabular} \end{table} TABLE I: Notation Fig. 3: Network model and the endorsement policy sending each request to \(R\) organizations/peers in parallel. **Proposition 3**.: _Under a sustainable arrival rate of endorsement requests and a random selection policy with \(R\) EPs, according to the endorsement policy \(1\)-OutOf-\(Q\), it holds for the endorsement latency \(L_{1}\):_ \[E[L_{1}]=\frac{1}{\mu-\frac{\lambda RC}{Q}}\left(\frac{1}{R}\right)\qquad R\in[1, \ldots,Q] \tag{4}\] Proof.: From Fig. 3b, let \(\lambda^{\prime}\) be the average incoming rate of the requests for the queue of each EP such that: \[\lambda^{\prime}=\frac{\lambda RC}{Q} \tag{5}\] We define \(W_{i}\) as the waiting time of a request to be served at the \(i\)th EP, which is the sum of the queuing time and the serving time of the request in the \(i\)th EP. From well-known M/M/1 properties [12], the \(W_{i}\) are i.i.d. and exponentially distributed with mean \(E[W_{i}]=1/(\mu-\lambda^{\prime})\). Observe that \(L_{1}=\min(W_{1},W_{2},\ldots,W_{R})\), where the \(W_{i}\) are i.i.d. From basic properties of the exponential distribution, \(L_{1}\) is exponentially distributed with mean: \[E[L_{1}]=\frac{E[W_{i}]}{R} \tag{6}\] and we finally get (4). By computing the first derivative of (4) with respect to \(R\), we can prove the following: **Proposition 4**.: _Let \(\hat{R}_{1}\) be the optimal value of \(R\) that minimizes \(E[L_{1}]\) for the policy \(1\)-OutOf-\(Q\). Then_ \[\hat{R}_{1}=\frac{\mu Q}{2\lambda C} \tag{7}\] In summary, the optimal number of EPs changes with \(\lambda\). For low arrival rates, \(R\) must be large to exploit the spatial diversity, without incurring additional overhead in the processing times. For high arrival rates, conversely, \(R\) is small to reduce the load on the EPs. Notably, for the sake of readability, we omitted from (7) the clipping to the interval \([1,Q]\) and the rounding procedure to find the optimal integer value of \(R\).
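The closed-form expressions above are easy to check numerically. The short sketch below (our own, with hypothetical parameter values) evaluates (4) by enumeration over the integer values of \(R\) and compares the best one with the clipped and rounded version of (7):

```python
def expected_latency(R, lam, mu, C, Q):
    """E[L_1] from eq. (4): redundancy R, per-client arrival rate lam."""
    lam_prime = lam * R * C / Q                  # per-EP arrival rate, eq. (5)
    return float("inf") if lam_prime >= mu else 1.0 / (R * (mu - lam_prime))

def best_redundancy(lam, mu, C, Q):
    """Optimal integer R in [1, Q] by direct enumeration."""
    return min(range(1, Q + 1), key=lambda R: expected_latency(R, lam, mu, C, Q))

# Hypothetical numbers: Q = 8 organizations, C = 8 clients, 1/mu = 10 ms
mu, C, Q = 1 / 0.010, 8, 8
for gamma in (0.1, 0.5, 0.9):                    # lam = gamma * mu / C
    lam = gamma * mu / C
    R_hat = mu * Q / (2 * lam * C)               # closed form of eq. (7)
    print(gamma, best_redundancy(lam, mu, C, Q), min(Q, max(1, round(R_hat))))
```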
We can now extend the result of Proposition 3 to a generic OutOf policy. **Proposition 5**.: _Under a sustainable arrival rate of endorsement requests and a random selection policy with \(R\) EPs, according to the endorsement policy \(k\)-OutOf-\(Q\), it holds for the endorsement latency \(L_{k}\):_ \[E[L_{k}]=\frac{1}{\mu-\frac{\lambda RC}{Q}}\left(\sum_{i=0}^{k-1}\frac{1}{R-i}\right)\qquad R\in[k,\ldots,Q] \tag{8}\] Proof.: Using the same definition of \(W_{i}\) as adopted in the proof of Proposition 3, we define \(L_{k}\) as the endorsement latency for the policy \(k\)-OutOf-\(Q\). Then \(L_{k}\) can be computed as the \(k\)th order statistic, i.e., \(L_{k}=(W_{1},W_{2},\ldots,W_{R})_{(k)}\). Recalling that the \(W_{i}\) are i.i.d. and exponentially distributed, we define \(S_{i}\) as the time interval between the ordered versions of the \(W_{i}\) (i.e., \(S_{i}=W_{(i+1)}-W_{(i)}\)). Thanks to the theory of order statistics [13], \(S_{i}\) is exponentially distributed with average: \[E[S_{i}]=\frac{E[W_{i}]}{(R-i)} \tag{9}\] By combining (6) and (9) we have: \[E[L_{k}]=\sum_{i=1}^{k-1}E[S_{i}]+E[L_{1}] \tag{10}\] which simplifies to: \[E[L_{k}]=\sum_{i=0}^{k-1}\frac{E[W_{i}]}{(R-i)}\qquad R\in[k,\ldots,Q] \tag{11}\] and we get (8). The optimal value of \(\hat{R}\) can be computed analytically as well. We impose a sustainable request arrival rate, i.e., \(U<1\), _for any_ \(R\); to guarantee sustainable arrivals also in the case \(R=Q\), it must hold that \(\lambda<\mu/C\). Thus, we can set: \[\lambda=\gamma\frac{\mu}{C} \tag{12}\] with \(\gamma\in(0,1)\) being the load factor. By substituting (12) into (7), we obtain the optimal number of EPs for the \(1\)-OutOf-\(Q\) policy as: \[\hat{R}_{1}=\frac{Q}{2\gamma} \tag{13}\] We can repeat the same derivation also for \(\hat{R}_{k}\), i.e., for a generic \(k\)-OutOf-\(Q\) policy. ### _Numerical evaluation_ In Fig. 4 we report the endorsement latency as a function of \(\gamma\) and \(R\), obtained by substituting (12) in (8). As expected, we observe a minimum endorsement latency obtained at \(R=\hat{R}_{k}\), as computed analytically, which depends on the load \(\gamma\). Due to the difficulty of estimating the load in practical scenarios (which may not be stationary), for \(k=1\), we propose heuristically choosing \(R=Q/2\) as a sub-optimal redundancy in our proposed approach, discussed in the following. This choice is robust since it is optimal at high load, while at low load the latency increase is limited. Indeed, for \(\gamma=0.5\) the increase is no more than \(8\%\) compared to the optimal value, and for \(\gamma=0.1\) no more than \(12\%\). Thus for \(k=1\), \(R=Q/2\) appears to be a practical solution, which will be exploited when devising online EP selection algorithms in Sec. IV. The redundancy effect can be limited due to the number of organizations/EPs or the applied endorsement policies, as they affect the number of statements generated by Proposition 2. With fewer final statements, there would be less space for redundancy. Indeed, systems with less restrictive and less complex endorsement policies (e.g., majority policies) benefit more from redundancy, while organizations benefit from adopting more EPs to increase reliability. ## IV Practical endorsers selection algorithms Now, we concentrate on the \(1\)-OutOf-\(Q\) policy, since it is consistent with the standard form of any endorsement policy. Without loss of generality, we assume just one client in the system (\(C=1\)).
We assume that the client is aware of all needed information, including the available/most-updated EPs, thanks to the configuration query request shown in Fig. 2, which leverages the available service discovery process. We propose an optimization procedure to select the EPs, denoted as OPEN, whose main goal is to minimize the endorsement response delay. OPEN considers the past response delays experienced by the previously selected EPs and selects the EPs with the lowest delays. This choice is motivated by the high temporal correlation between the response delays of an EP, due to queueing in the network and in the EPs. Notably, the history is meaningful only for recently selected EPs; otherwise, it is obsolete. Therefore, it is possible that a highly loaded EP which was not recently requested becomes one of the least loaded ones, making it worth sending requests to it again. To address this, OPEN probes non-selected EPs by sending gratuitous endorsement requests, which are still considered in the evaluation of the endorsement policy. Furthermore, in OPEN pending requests are considered indicators of possibly congested EPs, which are chosen at a lower priority. The pseudocode of OPEN is provided in Fig. 5. Let \(\text{TX}^{n}\) be the transaction with sequence number \(n\), evaluated locally at the client. Let \(x_{p}^{n}\) be the measured response delay of \(\text{TX}^{n}\) for any EP \(p\in\mathcal{P}\). Let \(\mathcal{P}_{e}^{n}\) be the set of selected EPs for \(\text{TX}^{n}\). For each transaction, we initialize all EPs as eligible to be selected (ln. 2). Just for the first transaction, OPEN initializes the history of response delays to a dummy value and selects all EPs as selected endorsers (ln. 3-6). For a generic transaction, all response delays are initialized to a dummy value (ln. 7-9). Then the EPs are selected based on a procedure described in the next paragraph (ln. 10). Now OPEN sends the endorsement request for \(\text{TX}^{n}\) to the computed set of EPs (ln. 11) and updates the measured delays (ln. 12). A new instance of the procedure starts when a new transaction \(\text{TX}^{n+1}\) is generated. Note that the procedure ends when all the responses are received. Fig. 4: Endorsement latency where \(Q\in[8,16,32,64]\) (left to right). The green line represents \(R=Q/2\). Fig. 5: Pseudocode of the OPEN algorithm for \(\text{TX}^{n}\). We now discuss how the Select-Endorsers function operates. Inspired by our previous result in (13), it selects \(|\mathcal{P}|/2\) EPs chosen among the ones that experienced the lowest response delays, based on the measures for the last transaction \(\text{TX}^{n-1}\). The choice is challenging when one or more responses are still pending for \(\text{TX}^{n-1}\), and the algorithm's key idea is that the corresponding EPs are considered as congested and thus should not be selected for the current transaction \(\text{TX}^{n}\). The pseudocode is reported in Fig. 6. It calculates the maximum delay measured for \(\text{TX}^{n-1}\) (ln. 2). For each EP in \(\mathcal{P}_{e}^{n-1}\) such that the response has not been received yet, we mark the corresponding EP as non-eligible (ln. 3-5). There are two cases. The first case is the special one in which no responses have been received for \(\text{TX}^{n-1}\); thus the algorithm speculates the delay to be equal to the delay of \(\text{TX}^{n-2}\) (ln. 6-7). The eligibility assigned to the EPs will lead to selecting the other \(|\mathcal{P}|/2\) EPs compared to the previous ones. The second case is the typical one in which at least some responses have been received for \(\text{TX}^{n-1}\) (ln. 8). For the EPs used in \(\text{TX}^{n-1}\) and for which no response has been received yet, the speculated delay is equal to the maximum delay \(d_{\max}\) plus some constant \(\epsilon\), chosen small enough to be negligible compared to the average network and processing delays (e.g., 1 ns) (ln. 9). This models the fact that the actual delay is unknown, but is surely strictly larger than \(d_{\max}\). Finally, for all the other EPs, not used for \(\text{TX}^{n-1}\), the delays are speculated to be equal to \(X^{n-2}\) (ln. 10-11). Now, the EPs are sorted based on these speculated delay values and the best half will be selected (ln. 12). A random EP from the non-selected ones will be chosen as the gratuitous probe EP (ln. 13). The slowest EP from \(\mathcal{P}_{e}^{n}\) will be replaced with the gratuitous probe EP (ln. 14), and \(\mathcal{P}_{e}^{n}\) will be returned to the main OPEN process (ln. 15).
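To make the selection step concrete, here is a compact Python paraphrase of the Select-Endorsers procedure (Figs. 5-6). It reflects our own reading of the pseudocode, so names, the handling of the dummy initial values, and corner cases (e.g., fewer than \(|\mathcal{P}|/2\) eligible EPs) are simplified; OPEN sends \(\text{TX}^{0}\) to every EP and then calls this function once per transaction.

```python
import random

EPS = 1e-9          # the small constant epsilon of ln. 9 (e.g., 1 ns)

def select_endorsers(peers, delays_prev, delays_prev2, selected_prev, pending_prev):
    """Sketch of SelectEndorsers (Fig. 6) for transaction TX^n.

    delays_prev / delays_prev2 : dict EP -> delay measured for TX^{n-1} / TX^{n-2}
    selected_prev              : EPs the request of TX^{n-1} was sent to
    pending_prev               : subset of selected_prev whose response is still missing
    """
    eligible = set(peers) - pending_prev                  # ln. 3-5: pending EPs are non-eligible
    if pending_prev == selected_prev:                     # ln. 6-7: no response for TX^{n-1} yet
        speculated = dict(delays_prev2)
    else:
        d_max = max(delays_prev[p] for p in selected_prev - pending_prev)   # ln. 2
        speculated = {}
        for p in peers:
            if p in pending_prev:
                speculated[p] = d_max + EPS               # ln. 9: surely slower than d_max
            elif p in selected_prev:
                speculated[p] = delays_prev[p]            # measured for TX^{n-1}
            else:
                speculated[p] = delays_prev2[p]           # ln. 10-11: last known measurement
    ranked = sorted(eligible, key=lambda p: speculated[p])
    selected = set(ranked[:len(peers) // 2])              # ln. 12: best half
    probe = random.choice(list(set(peers) - selected))    # ln. 13: gratuitous probe
    slowest = max(selected, key=lambda p: speculated[p])
    selected.discard(slowest)                             # ln. 14: swap slowest for probe
    selected.add(probe)
    return selected                                       # ln. 15
```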
## V Performance evaluation ### _Methodology_ We developed an event-driven simulator using OMNeT++ [14]. We considered a scenario with \(C=8\) clients and \(Q=8\) organizations, each of them with \(1\) EP, thus \(|\mathcal{P}|=Q\). The endorsement requests are generated according to a Poisson process at each client and we set the normalized load \(\gamma\in[0.1,0.9]\). Then, fixing \(\gamma=0.5\), we considered more scenarios by varying \(Q\in\{8,16,32,64\}\), each organization with a number of EPs in \(\{1,2,4,8\}\), and \(C\in\{8,40,125,1000,8000,32000\}\) clients; in each scenario we fixed all parameters except one. To understand the performance under non-stationary requests, we also considered a Poisson-modulated process with squared-wave cyclo-stationary load, with a period equal to 1200 ms, duty cycle 50%, and normalized load \(\gamma=0.5\). To consider the effect of different kinds of computation, we assume the computation time of each EP to be exponentially distributed, bi-modal, mixed bi-modal with exponential, or log-normally distributed, with an average equal to \(10\) ms; this value was obtained from our practical measurements on HF EPs. In the bi-modal case, we assumed that, with a given probability, the computation time is constant with the value \(1/\mu_{1}\), otherwise its value is \(1/\mu_{2}\). In the mixed bi-modal with exponential case, we assumed that, with a given probability, the computation time is exponentially distributed and has an average of \(1/\mu_{1}\), otherwise its average is \(1/\mu_{2}\). In the log-normal case, the average computation time is equal to \(10\) ms, while the standard deviation varies. Table II shows the coefficients of variation (Cv) for the adopted settings of the log-normal and both bi-modal cases. To model the heterogeneity in the computing power and resources of the EPs, we considered a _non-homogenous scenario_ in which we assigned different average computation times to different EPs (i.e., (\(2,4,6,8,12,14,16,18\)) ms), where the computation time of each EP is exponentially distributed. We considered three scenarios for the network model: two of them are synthetic and the last one is real. Let \(d_{cp}\) be the network delay between client \(c\) and EP \(p\).
In the first scenario, denoted as S1, the network delays are negligible compared to the processing times at the EP, i.e., \(d_{cp}=0\) (Fig. 7a). In the second scenario, denoted as S2, we set linearly increasing delays between any client and the EPs, similarly to a linear topology where all clients are closer to the first EP, i.e., \(d_{cp}=(p+1/2)\) ms for \(p\in[1,Q]\). This implies similar delays from each EP to any client, while on average the total network delays are comparable to the processing times at the EPs (Fig. 7b). In the third scenario, denoted as S3, we selected the _Highwinds_ network from [15], shown in Fig. 8, as a real world-wide scenario where the link delays are calculated based on the physical distance between the geographical positions of the nodes (using the Haversine formula) and the propagation speed is \(2/3\) the speed of light. The clients here can be divided into two groups: (i) _far clients_ placed in nodes \(1,7,8\), and (ii) _centered clients_ placed in nodes \(2,3,4,5,6\). \begin{table} \begin{tabular}{|c||c|c|c|c|c|} \hline Cv (Bimodal) & 0.0 & 0.5 & \(1\) & \(2\) & \(5\) \\ \hline \(P(\mu=\mu_{1})\) & 0.5 & 0.6 & 0.75 & 0.9 & 0.98 \\ \(1/\mu_{1}\) [ms] & 10.0 & 5.9 & 4.2 & 3.4 & 2.85 \\ \(1/\mu_{2}\) [ms] & 10.0 & 16.1 & 27.3 & 70.0 & 360.0 \\ \hline \hline Cv (log-normal) & 0.001 & 0.5 & \(1\) & 2 & 5 \\ \hline \(\mu_{Z}\) & \(2.3025\) & \(2.191\) & \(1.956\) & \(1.498\) & \(0.673\) \\ \(\sigma_{Z}\) & 0.0010 & 0.472 & 0.832 & 1.268 & 1.805 \\ \hline \end{tabular} \end{table} TABLE II: Settings for different distributions of the computation time with different coefficients of variation (Cv). Fig. 6: Pseudocode for SelectEndorsers. We measure the average endorsement latency as the main performance metric. The endorsement latency is calculated from the moment the endorsement request is sent out from the client until the first response is received by the client. For comparison, we considered three EP selection algorithms, namely RND, OOD, and DSLM, where the first two are proposed by us. #### Iv-B1 Random EPs (RND) RND is the policy adopted in the analytical model of Sec. III. Every endorsement request is sent to \(R\) randomly chosen EPs. If \(R=Q/2\), the policy is denoted as _RND-half_. If \(R\) adapts to the load according to the rule \(R=Q/(2\gamma)\), as suggested by (13), we denote the policy as _RND-load_. #### Iv-B2 Dynamic Stochastic Load Minimization (DSLM) Dynamic Stochastic Load Minimization (DSLM) was proposed in [16] and the pseudocode of the version adapted to our system model is shown in Fig. 9. Just for the first TX, DSLM initializes the load \(l_{p}\) and the measured response delay \(x_{p}^{0}\) of any EP \(p\) (ln. 2-3). Typically, it randomly selects half of the EPs (ln. 4) and heuristically evaluates the load on each selected EP by the product of the square root of the response delay and the corresponding queue length (ln. 5-6). The average is obtained with an exponential moving average with parameter \(\alpha\). Finally, DSLM returns the EP with the lowest estimated load among the selected ones (ln. 7). #### Iv-B3 Oracle Optimal Delays (OOD) As a reference for all the endorsement algorithms, we define an online Oracle-based Optimal Delays (OOD) EP selection policy that minimizes the endorsement latency given a fixed replication factor \(R\), denoted as OOD-\(R\). The pseudocode of OOD-\(R\) is provided in Fig. 10. We assume an oracle that knows in advance the response delay of any endorsement request if sent to a specific EP.
Thus, the oracle knows for any EP \(p\): (i) the absolute time \(t_{p}^{\text{busy}}\) at which the EP will finish (or has finished) to serve the last received endorsement request TX\({}^{n-1}\), (ii) the processing time \(\tau_{p}^{\text{proc}}\) of the endorsement request TX\({}^{n}\), and (iii) the overall network delay \(\tau_{p}^{\text{net}}\) between each client and the EP. Thus, if sent to EP \(p\), the response to TX\({}^{n}\) will be received from EP \(p\) at a predicted time \(t_{p}^{\text{resp}}\) (ln. 3) equal to: \[t_{p}^{\text{resp}}=\max\{t^{\text{now}}+\tau_{p}^{\text{net}},t_{p}^{\text {busy}}\}+\tau_{p}^{\text{proc}}+\tau_{p}^{\text{net}} \tag{14}\] since if at the time of arriving the request to an EP its queue is empty, then the request will be served at \(t^{\text{now}}+\tau_{p}^{\text{net}}\), otherwise at \(t_{p}^{\text{busy}}\). Then, the request will be processed for \(\tau_{p}^{\text{proc}}\) and the response will be sent back, experiencing \(\tau_{p}^{\text{net}}\) delay. Now OOD-\(R\) chooses the EP with the smallest predicted time to minimize the response delay (ln. 4). The remaining \((R-1)\) endorsement requests (if any) will be sent to the EPs in decreasing order of predicted time (ln. 5). This allows to load the "slowest" EPs with requests whose responses will be received late and thus reduces the load on the "fastest" EPs, for the sake of future endorsement requests. Fig. 8: Real network topology (S3) showing the EPs and clients placement with interconnecting network topology Fig. 7: The two synthetic network topologies adopted for test scenarios in our simulations. It should be noted that, in the case of RND-load, the request arrival is assumed to be stationary, thus, the system load can be estimated with high accuracy. Also, OOD is implementable with enough control information, but obtaining this information would need instantaneous communication with the EPs, which is challenging to accomplish in a practical situation. So, both algorithms are not practical in a real scenario. ### _Simulations results_ For a fair comparison between OOD and other approaches, in all test scenarios we selected OOD-half, i.e., with the same \(R\) as OPEN, and RND-half, and slightly smaller \(R\) than RND-load, for which \(R\in[Q/2,Q]\). Only DSLM has a completely different redundancy factor (\(R=1\)). #### Iv-B1 Homogenous scenario The left graphs in Fig. 11 show the simulation results for a homogenous scenario with all EPs with computation times that are exponentially distributed with the same average. In scenario S1 (left-up), all delays are purely due to processing in the EPs. Since DSLM does not exploit redundancy and for \(\gamma=0.1\) the queuing at the EP is negligible, its response delay is around 10 ms, equal to the computation time. By increasing the load and hence the queueing, the average delay increases slightly. Instead, by exploiting the redundancy all the other approaches can get smaller delays, by a factor of \(2\) to \(6\). As expected, OOD-half achieves the best average endorsement latency among all solutions. At low loads, due to the maximum redundancy factor (i.e., \(R=8\)), RND-load performs closer to OOD-half by always having the fastest EP among its selection. By increasing the arrival rate, for both RND-half and RND-load, the endorsement latency increases as the selection of EPs is not efficient as OOD-half, which knows in advance the best EP. At high loads, for RND-load, \(R\) is almost 4, hence RND-load shows similar results to RND-half. 
OPEN has a redundancy factor \(R=4\), as RND-half, but selects EPs with smaller estimated delays. At low loads, OPEN has a small advantage over RND-half, as the queueing is almost negligible. As the offered load increases, the higher queueing makes OPEN more efficient, also thanks to the higher frequency by which the response delays are estimated. In scenario S2 (left-middle), as expected, OOD-half is the best algorithm, and DSLM is outperformed by all other solutions by a factor of \(2\) to \(3\). Due to the linearly increasing network delays, the effect of redundancy in EPs becomes less dominant, so the delay's improvement in scenario S2 is less than in S1. On the other hand, at low loads, OPEN acts slightly better than RND-half compared to S1, thanks to being aware of the network delays. At high loads, OPEN behaves close to RND-half since the queueing delays become dominant to network delays. In scenario S3 (left-down), again OOD-half is the best approach, DSLM is outperformed by all other solutions by at least a factor of \(2\). Due to the different network delays, on average much larger than the computation times, the redundancy is less effective, thus a lower delay improvement is experienced in S3 compared to S2, and S1. OPEN performs quite similarly to RND-load in low loads even with a half number of selected EPs, and much better in high loads. OPEN completely outperforms RND-half in all loads since it exploits mainly the EP with lower network delays. #### Iv-B2 Non-homogenous scenario The simulation results for a non-homogenous scenario are reported in the middle graphs of Fig. 11. As a reminder, now the average computation times for the EPs are different, but the overall average is the same as in the homogenous scenario. In all three scenarios S1, S2, and S3, as DSLM is not able to exploit redundancy, it is not able to reduce its average latency. On the other hand, by exploiting redundancy, all other approaches can reduce their average latency, where OOD-half achieves the lowest latency thanks to its global knowledge of the system. In scenario S1, RND-load reduces the latency more than RND-half, since the higher redundancy factor increases the chance of selecting EPs with lower average computation times. With the same redundancy factor as RND-half, OPEN reduces the most the delays for all loads, as it employs the latency history to select EPs with lower average computation times. In scenario S2, a similar behavior as in S1 is observed for all the algorithms. OPEN, by exploiting the delay history comprising both the average computation times and the network delays, achieves the best performance by almost approaching OOD-half. In scenario S3, we observe almost similar results as in scenario S3 of the homogeneous case, as the variation in the average computation times is still negligible to the average network delays. Fig. 10: Pseudocode of OOD-\(R\) #### V-A3 Scaling the number of organizations, EPs, and clients. The simulation results for larger scenarios are shown in Fig. 11 (right). We consider the S2 scenario, to get a heterogeneous system in terms of network delays, and we fixed \(\gamma=0.5\). By increasing the number of organizations, the overall number of EPs increases, thus the endorsement latency is reduced for all the algorithms exploiting redundancy, as shown in Fig. 11 (right-up). The same behavior is observed when the number of EPs in each organization increases (see Fig. 11 (right-middle)). 
The similarity with the previous graph is that we are considering the \(1\)-OutOf-\(N\) policy here, which by recalling (2), for this endorsement policy there is no difference between two EPs of the same organization or different EPs of different organizations. According to Fig. 11 (right-down), changing the number of clients has no effect on the approaches. Note that increasing the number of clients will reduce the efficiency of the information gained by OPEN and it will converge to the RND-half results for homogeneous cases with less dominant network delays. #### V-A4 Bi-modal computation times The simulation results for constant bi-modal are shown in Fig. 12 and Fig. 13. In scenario S1, for constant computation time (Cv=0), the redundancy is not beneficial for delay reduction, while at high loads it can increase the EPs' queue length and thus the delay. For larger Cv, all the algorithms, except DSLM, decrease the average endorsement delay. This is because the average of the minimum between a sequence of i.i.d. random variables is smaller when the variance is larger. All the solutions, except for DSLM, behave similarly for low and high loads. In scenario S2, also for Cv=0, the redundancy reduces the average delay. The reason is that DSLM considers the computation load at the EPs obliviously of the network delays, which are dominating the computation times. But, in the other approaches, redundancy increases the chance of selecting the EP with lower network delays. By increasing Cv, redundancy can reduce the latency even more, by benefiting from the variability in the computation times. At low load (\(\gamma=0.2\)), OPEN performs Fig. 11: Average endorsement delay for i) average computation time of 10 ms for each EP: S1 (left-up), S2 (left-middle), S3 (left-down), ii) different average computation times from [2 to 18] ms for each EP: S1 (middle-up), S2 (middle-middle), S3 (middle-down), and iii) the different number of organizations, EPs, and clients in scenario S2, for \(\gamma=0.5\) (right-most graphs). quite well as it also selects EPs with lower network delays. RND-load is performing slightly better as it sends to all EPs. OOD-half is even better than RND-load with a small margin, thanks to the lower load guaranteed by setting \(R=4\). At high load (\(\gamma=0.8\)), RND-load adopts \(R=5.7\) (on average) and the corresponding queueing penalizes the overall response delay. OPEN acts slightly better thanks to the smaller value of \(R\). In the S3 scenario, all approaches are not affected by Cv, as the variability in the computation times is compensated by the network delays which vary between \(0\) ms and \(7\) times the average computation time. At low load (\(\gamma=0.2\)), OPEN selects closer EPs in terms of network delays and outperforms RND-half by a factor greater than \(2\), while being very close to OOD-half. RND-load achieves the same results as OPEN by selecting all the EPs (i.e., \(R=8\)), which include the closest EP as well. At high load (\(\gamma=0.8\)), as in scenario S2, RND-load is penalized by the queueing. OPEN reduces the endorsement delay up to \(70\%\) compared to DSLM. Notably, differently from OPEN, RND-load may not select the closest EPs. As expected, for both loads OOD-half performs the best, since it always selects the minimum combination of the network delay and the processing delay. ### _Mixed bi-modal with exponential computation times_ The simulation results for constant bi-modal are shown in Fig. 14 and Fig. 15. 
For all scenarios, when Cv=0, all EPs have one computation time which is exponentially distributed with an average of \(10\) ms. So, the experienced latency in low loads and high loads is similar to what is reported in Fig. 11 (left) when \(\gamma=0.2\) and \(\gamma=0.8\) respectively. For all scenarios, the redundancy reduces the average delay. In scenario S1, by increasing the CV, all the algorithms, except DSLM, decrease the average endorsement delay. This is because the average of the minimum between a sequence of i.i.d. random variables is smaller when the variance is larger. At low load (\(\gamma=0.2\)), RND-load is performing a bit better than OPEN and RND-half, as it sends to all EPs. OOD-half is slightly better than RND-load thanks to the lower load guaranteed by setting \(R=Q/2\). At high load (\(\gamma=0.8\)), RND-load adopts \(R>Q/2\) and the corresponding queueing penalizes the overall response delay. By increasing the CV, for DSLM in both low loads and high loads, the average endorsement delay increases since even a small possibility of selecting an EP with very high computation time can increase the average endorsement delay. In scenario S2, similar to scenario S1, in both low loads and high loads by increasing the CV, the average endorsement delay for DSLM increases. In the other approaches, redundancy increases the chance of selecting the EP with lower network delays. By increasing Cv, redundancy can benefit from more variability in the computation times, and reduce the latency even more. At low load (\(\gamma=0.2\)), OPEN performs slightly better than RND-half as it also selects EPs with overall lower network delays and processing delays. RND-load is performing even better as it sends to all EPs. OOD-half is even better with a small margin, thanks to the lower load guaranteed by setting \(R=Q/2\). At high load (\(\gamma=0.8\)), RND-load adopts \(R>Q/2\) and the corresponding queueing penalizes the overall response delay. OPEN acts slightly better thanks to the smaller value of \(R\). In the S3 scenario, all algorithms benefiting from the redundancy, are not affected by Cv, as the variability in the computation times is compensated (i.e., equalized) by the network delays which vary between \(0\) ms and \(10\) times the average computation time. At low load (\(\gamma=0.2\)), OPEN being very close to OOD-half, outperforms RND-half by a factor around \(2\) due to selecting closer EPs in terms of network delays. RND-load achieves the same results as OPEN by containing the closest EP as it selects all the EPs (i.e., \(R=Q\)). At high load (\(\gamma=0.8\)), same as scenario S2, OPEN reduces the endorsement latency up to 70% compared to DSLM. RND-load is penalized by both the queueing and lower number of selected EPs (i.e., \(R<Q\)), as differently from OPEN, RND-load may not select the closest EPs. Again, for both loads OOD-half performs the best, since it always selects the minimum combination of the network delay, the queueing delay, and the processing delay. #### Iv-C1 Log-normal computation times The simulation results are shown in Fig. 16. All the results are similar to the results of high loads (\(\gamma=0.8\)) in Bi-modal computation times, except for scenario S1, in which for constant computation time (Cv=0), redundancy is not beneficial for delay reduction. For larger Cv, all the algorithms, except DSLM, decrease the average endorsement delay as the average of the minimum between a sequence of i.i.d. random variables is smaller when the variance is larger. 
#### Iv-C2 Cyclo-stationary request process We compared OPEN with other approaches under Poisson-modulated cycle-stationary load. We evaluated the average endorsement delays by using an exponential moving average. The results are provided in Table III. In S1 and S2, OPEN, RND-half, and RND-load showed almost constant average endorsement latency, while DSLM and ODD results are the highest and the lowest respectively. Interestingly, all the results for different approaches in S1 and S2 are very close to the results gained from Fig. 11 (left) for \(\gamma=0.5\), even if the load was changing periodically. This means that all of them are robust to load change in homogeneous scenarios. In S3, as a non-homogenous real scenario, OPEN shows a small difference of the average endorsement latency between centered and far clients (recall their definition in Sec. V-A). This difference (\(6\) ms) is negligible compared to the average network delays in S3 (\(50\) ms). The same behavior is observed for RND-load. On the other hand, in RND-half the performance depends \begin{table} \begin{tabular}{|c||c c c c|} \hline Scenario & S1 [ms] & S2 [ms] & S3 [ms] & S3 [ms] \\ & & & centered clients & far clients \\ \hline OOD-half & \(1.2\) & \(7.1\) & \(13.6\) & \(16.9\) \\ OPEN & \(2.9\) & \(11.7\) & \(17.9\) & \(24.2\) \\ RND-load & \(3.1\) & \(11.2\) & \(19.2\) & \(23.7\) \\ RND-half & \(3.2\) & \(12.2\) & \(25.9\) & \(43.6\) \\ DSLM & \(10.8\) & \(22.8\) & \(65.9\) & \(110.3\) \\ \hline \end{tabular} \end{table} TABLE III: Average endorsement delays for cyclostationary input rates with different scenarios. Fig. 12: Average endorsement latency under bimodal computation times for normalized load \(\gamma=0.2\) and for different scenarios: S1 (left), S2 (middle), S3 (right). Fig. 13: Average endorsement latency under bimodal computation times for normalized load \(\gamma=0.8\) and for different scenarios: S1 (left), S2 (middle), S3 (right). heavily on the client's position; even in the case of centered clients, RND-half experiences more endorsement latency than OPEN with far clients. As expected, OOD-half achieves the best endorsement latency with minimum difference regardless of the client's position. DSLM performs the worst with latencies about 4 times larger than OPEN. These results show that OPEN adapts to load changes even in the presence of unbalanced network delays. Also, OPEN outperforms RND-half and DSLM, while it shows similar results to RND-load and to OOD-half. ## VII Related works Different works modeled analytically the endorsement process in HF. [11] modeled the EPs as M/M/1 queues and considered the propagation delays in the network model, coherently with our work. It showed that using a pure "AND" endorsement policy, compared to "OR" or "\(k\)-OutOf-\(Q\)" policies, significantly increases the endorsement delay by increasing the number of organizations. Similarly, [17] showed the same results by modeling HF using stochastic reward networks. They also observed that for "OR" and "\(k\)-OutOf-\(Q\)" policies the latency decreases by increasing the number of EPs within the same organization, similar to the effect of increasing \(R\) in our work. [10] modeled HF using Generalized Stochastic Petri Nets (GSPN) and showed that for high request arrival rates, the endorsement phase is a performance bottleneck of HF. This is coherent with the motivation of our work, focusing on optimizing the endorsement phase. 
[18] considered four organizations and showed that simple endorsement policies based on "AND", "OR" and "\(k\)-OutOf-\(Q\)" operators, experience the minimum latency. [19] showed that using "\(k\)-OutOf-\(Q\)" policy, increasing \(k\) decreases the throughput and increases the latency. This is coherent with our system model since the endorsement latency will be the maximum among \(k\) request delays. [20] optimized the HF configurations to improve the throughput and reduce the delays. Coherently with our results, they showed the equivalence between the "\(1\)-OutOf-\(Q\)" policy and the "OR" among all organizations. Our results in Sec. II-B generalize such property. Some works tried to improve endorsement phase of HF. [16] proposed a way to select the best EP for "\(1\)-OutOf-\(Q\)" endorsement policy in HF \(\mathrm{v}1.4\). They introduced an algorithm running in each EP, called DSLM, to calculate the EP's load by considering multiple resource metrics within an EP. For each request, only half of the EPs are probed to get their actual load, coherently with \(R=Q/2\) adopted in OPEN. A version of DSLM tailored to our system model has been considered in Sec. V as an alternative approach to be compared with OPEN. [21] showed that the failed transactions due to timeouts are affected by the number of statements within the "AND" operator defined in the endorsement policy. Such failures increase the latency and waste of resources due to re-transmissions at the application level. [22] suggested a way to reduce the possibility of endorsing conflicting transactions. They proposed a cache mechanism inside the EPs to record some data of the recently endorsed transactions and drop the conflicting proposal before execution. Recall that, in the endorsement phase, no execution results will update the world state, so transactions with similar initial world states can propose different updates for the world state. This early drop of the proposal before execution will reduce the computing and network resources by reducing the chance of transaction failure at the validation phase. [23] removed unnecessary operations for pure read requests, by modifying the EPs algorithm to differentiate the process of pure read transactions from mixed read/write ones. This reduced the latency and resource consumption in the endorsement phase. The main idea of OPEN is to send multiple replicas of the same request to multiple peers. This approach has been deeply investigated in the literature on queueing theory, motivated by the problem of optimal job assign Fig. 17: Experimental testbed architecture Fig. 18: Measured processing delay at each EP. ment to servers. As the literature is huge, we focus just on a few papers for the sake of space. In the generic literature about distributed systems, several works [24, 25, 26] investigated the effect of sending replicas of a job to more than one randomly selected server and waiting for the first response to exploit redundancy, as in OPEN. These works introduced redundancy to reduce the job completion time and overcome server-side variability, where a server might be temporarily slow, due to many factors like garbage collection, background load, or even network interrupts. [27] showed that, besides its simplicity, in many cases, redundancy outperforms other techniques for overall response time. 
[28], by decoupling the inherent job size from the server-side slowdown, described a more realistic model of redundancy and showed that increasing the level of redundancy can degrade the performance, coherently with our observations in Sec. III. [29] showed that a major improvement results from having each job replicated to only two servers, coherently with our Fig. 4 which shows that for the 1-OutOf-\(k\) policy, the endorsement latency decreases mostly when varying \(R\) from 1 to 2. On the contrary, in our work, we have considered the optimal value of \(R\) that minimizes the endorsement latency, which may be greater than 2. [30] showed the reverse relation between the incoming load and the optimal number of replicas, coherently with (13), and experimentally obtained the optimal redundancy factor in different job arrival rates and for different service times. Also, [31] theoretically demonstrated that, when replicating the job to multiple servers, the best choice in case of low (or, high) loads is to replicate to all (or, only \(1\)) servers, coherently with (13) and with the operations of OPEN, which adapts the replication factor to the instantaneous load. ## VIII Conclusions We addressed the problem of minimizing the endorsement latency in HF. Leveraging some results obtained in a simplified queueing model, we proposed the OPEN algorithm to choose multiple EPs for each transaction by taking into account the measurements from the past requests, in a realistic scenario. Through simulations with OMNeT++, we showed that independently from the scenario, OPEN is robust and achieves performance remarkably close to the optimal oracle-based approach (OOD) and outperforms state-of-the-art solutions. Due to the key role of endorsement policies, we expect that our results will inspire new research directions and implementation efforts in optimizing the performance of HF and other blockchain platforms. Given the complexity of the addressed problem, new solutions based on machine learning and meta-heuristics could be devised. OPEN has been validated only by extensive simulations. Beyond the scope of this work, we implemented OPEN in HF to validate the proposed approach in a realistic setting. The experimental results of the first version of the proof-of-concept are very promising. We leave the optimization of the design of the client-based OPEN solution and its extensive experimental validation for future work.
2310.19441
Dynamic Gaussian Splatting from Markerless Motion Capture can Reconstruct Infants Movements
Easy access to precise 3D tracking of movement could benefit many aspects of rehabilitation. A challenge to achieving this goal is that while there are many datasets and pretrained algorithms for able-bodied adults, algorithms trained on these datasets often fail to generalize to clinical populations including people with disabilities, infants, and neonates. Reliable movement analysis of infants and neonates is important as spontaneous movement behavior is an important indicator of neurological function and neurodevelopmental disability, which can help guide early interventions. We explored the application of dynamic Gaussian splatting to sparse markerless motion capture (MMC) data. Our approach leverages semantic segmentation masks to focus on the infant, significantly improving the initialization of the scene. Our results demonstrate the potential of this method in rendering novel views of scenes and tracking infant movements. This work paves the way for advanced movement analysis tools that can be applied to diverse clinical populations, with a particular emphasis on early detection in infants.
R. James Cotton, Colleen Peyton
2023-10-30T11:09:39Z
http://arxiv.org/abs/2310.19441v1
# Dynamic Gaussian Splatting from Markerless Motion Capture can Reconstruct Infants Movements ###### Abstract Easy access to precise 3D tracking of movement could benefit many aspects of rehabilitation. A challenge to achieving this goal is that while there are many datasets and pretrained algorithms for able-bodied adults, algorithms trained on these datasets often fail to generalize to clinical populations including people with disabilities, infants, and neonates. Reliable movement analysis of infants and neonates is important as spontaneous movement behavior is an important indicator of neurological function and neurodevelopmental disability, which can help guide early interventions. We explored the application of dynamic Gaussian splatting to sparse markerless motion capture (MMC) data. Our approach leverages semantic segmentation masks to focus on the infant, significantly improving the initialization of the scene. Our results demonstrate the potential of this method in rendering novel views of scenes and tracking infant movements. This work paves the way for advanced movement analysis tools that can be applied to diverse clinical populations, with a particular emphasis on early detection in infants. rehabilitation, human pose estimation, infants, dynamic gaussians ## 1 Introduction "What I cannot create, I do not understand" - Richard Feynman There is a pressing need for high-quality movement analysis in rehabilitation and advances in computer vision and human pose estimation are moving closer to filling this gap. A common challenge in applying human pose estimation to rehabilitation populations is that algorithms trained on able-bodied adult populations fail to generalize to clinical populations [1]. Tracking infants raises additional challenges, as they have different anthropomorphic body portions and movement dynamics than adults, and there is very limited training data. Spontaneous infant movement behavior, that is endogenously generated, is an important indicator of neurological function and neurodevelopmental disability [2]. Cerebral palsy, the most common physical disability of childhood [3], can be predicted with high accuracy from clinical assessment of spontaneous movement behavior in infants [4]. This clinical assessment called the General Movement Assessment (GMA) [2], is used by trained clinicians who use their gestalt perception to distinguish and identify various movement patterns in the young infant from the preterm period to 5 months corrected age. The high accuracy of this assessment highlights the importance of movement behavior in understanding the nervous system that generates it. On the other hand, there is a lack of objective and quantitative knowledge surrounding early movement generation in young infancy. As the field of neonatology has advanced, infants born preterm are surviving at younger gestational ages, providing a unique opportunity to study the development of joint kinematics prior to term age. These preterm movement kinematics are likely to provide an early prognostic biomarker to identify infants at risk of neurodevelopmental disability, which could enable early intervention programs. However, this opportunity has not yet been realized as unobtrusive methods to reliably measure movement during the preterm period have not often been employed. 
During the preterm period, infants have fragile skin, small limbs, and sensitivity to touch which can cause autonomic dysregulation, prohibiting the use of traditional measurement approaches that require sensors or markers to be placed on the body. Therefore, human pose estimation is a potential solution to study infant movements at younger ages. Several studies have used this approach and trained infant movements to predict clinical assessments such as the GMA. For example, using a small public dataset (n=12), a complexity score of infant limb movement was created to predict GMA rating [5], which was then tested prospectively on a larger cohort of 47 infants with high specificity for detecting a normal GMA finding [6]. As an alternative to the GMA, the Computer-based Infant Movement Assessment (CIMA) model performs a time-frequency decomposition of movements estimated from video with both manual annotation and optical flow from 3-month-old infant data to predict a diagnosis of cerebral palsy at \(\geq 2\) years of age in a sample of 377 infants with comparable sensitivity and specificity to the GMA [7]. Acquiring high-quality training data on infant movements is also challenging. The markers used for marker-based motion capture are quite large compared to babies and will often be knocked off. Similarly, most wearable sensors are quite large compared to infants and neonates and are time-consuming to place, making them difficult to use in clinical practice, particularly for premature babies in the neonatal intensive care unit (NICU). There are a few existing small datasets including the MINI-RGBD dataset [8] which contains 12 movement sequences computed with synthetic textures on the Skinned-Multi Infant Linear Model (SMIL) [9] model to preserve privacy. This model was learned from data acquired with RGB and depth imaging from a Kinect sensor. Clinicians assessing the GMA from SMIL reconstructions of movement from RBG-D images showed moderate-good agreement for movement complexity and substantial-good agreement for fidgety movements, compared to scoring the real videos. There is also the SyRIP dataset, which contains a small sample of synthetic and real infants [10]. Groos et al. [11] collected a diverse dataset of 20k frames, including in-hospital and at-home video, and found that an EfficientHourglass architecture trained on this dataset showed only a slightly greater error than the spread between human annotators. Advances in 3D scene representations allow scenes to be learned from a variety of data and allow rendering of novel views. This includes methods using implicit representations of the volumes [12]. This is most often applied to static scenes from a large number of images taken in different positions, with the camera calibration identified as an initial step. Very recently, a similar visual performance at much faster rendering rates was achieved by replacing the implicit representation with a set of 3D Gaussians [13]. This was enabled through a high-performance differential renderer that performs splatting of the Gaussians to camera views, allowing the parameters of Gaussians to be optimized through gradient descent. This approach was then extended to dynamic scenes [14] that were recorded with 27 RGB cameras and additional depth cameras, by allowing the Gaussians to move over time. In addition, to enable synthesizing novel views of the subject in the scene, the Gaussians tracked meaningful body parts. 
Thus this approach is promising for movement analysis, both to track movements of all body parts and for synthesizing novel views of the scene that could serve as training data. We note in the last month several other works on dynamic Gaussians, including using 4D Gaussian representation that includes a time component [15, 16] and using an implicit function to model the temporal deformations of the points to reconstruct scenes from monocular views using structure from motion to initialize the scene [17]. Older work has also show that a similar differentiable rendering of 72 gaussians can be coupled to an anatomical model over configurations to perform human pose estimation [18]. The goal of this work was to test whether dynamic Gaussian tracking could be applied to a sparse set of RGB cameras, a configuration commonly used for markerless motion capture (MMC) [19, 20, 21]. MMC is seeing increasing use in rehabilitation, with applications to date predominantly focusing on gait analysis. However, achieving accurate scene reconstruction and novel field synthesis from MMC data could enable a wider range of applications. However, this is challenging, because the Gaussians locations are typically initialized using either very dense image capture or additional depth imaging. With only sparse RGB images, the initial scene reconstruction is underconstrained. We show that using a semantic segmentation mask to only reconstruct the infant results in substantial improvements in the reconstruction of unseen views. With this improved initialization, we find the dynamic Gaussians can pick up spontaneous movements of the babies and visualize this movement from novel views. In short, our contributions are: * We apply dynamic gaussian splatting to sparse, markerless motion capture data * We optimize this reconstruction method and perform ablations to demonstrate the importance of semantic information and masking * Show this allows us to track 3D movement of babies ## 2 Methods ### Participants This study was approved by our Institutional Review Board. Two infants born at term age were recruited. One infant was filmed at 3 weeks of age and again at 5 weeks of age. The second infant was filmed at 12 weeks of age. Synchronized videos were recorded as the infants were positioned in a calm, alert state, without a pacifier, for short periods on the mats, in order to observe spontaneous behavior. ### Video acquisition Multicamera data was collected with a custom system using 8 FLIR BlackFly S GigE cameras with F1.4/6mm lens. They were synchronized using the IEEE1558 protocol and acquired data at 30 fps, with a typical spread between timestamps of less than 100ubs. The images produced have a width of 2048 and a height of 1536. The cameras were mounted on tripods which were placed in a circle around a padded mat on the floor. The acquisition software was implemented in Python using the PySpin interface. For each experiment, calibration videos were acquired with a checkerboard (\(7\times 5\) grid of 38mm squares). Extrinsic and intrinsic calibration was performed using the ani-pose library [22]. The intrinsic calibration included only the first distortion parameter. ### 3D Gaussian Splatting Our method is built upon Luiten et al. [14], which is a dynamic extension of 3D Gaussian Splatting [13]. Gaussian splatting directly optimizes the parameters of a set of 3D Gaussian kernels to reconstruct a scene observed from multiple cameras. It is powered by a custom CUDA kernel for fast, differential rastering engine. 
Each Gaussian is described by opacity, color, location, scale, and rotation. The opacity, \(\alpha\), is a scalar value and the color, \(c\), is a 3-vector, both of which are between 0 and 1 after passing through a sigmoid transformation. The location, \(\mu\), is a 3-vector for the center of the Gaussian in Euclidean space, where we used meters as the units. The scale is a 3-vector describing the spatial extent of the Gaussian in each dimension and has an exponential non-linearity to ensure positivity. The rotation is a quaternion, which includes a non-linearity to ensure it is normalized to have a unit norm. The potential influence of each Gaussian on any location in space is computed from:

\[G(\mathbf{x})=\sigma(o)\,e^{-\frac{1}{2}(\mathbf{x}-\mu)^{\top}\Sigma^{-1}(\mathbf{x}-\mu)} \tag{1}\]

where the spatial covariance is determined by the spatial scale, \(S\in\mathbb{R}^{3}\), and the rotation, \(R\in\mathbb{R}^{3\times 3}\), which is computed from the quaternion representation:

\[\Sigma=RSS^{\top}R^{\top} \tag{2}\]

Combined with the opacity and color, the ensemble of Gaussians is efficiently ray-traced with the differentiable renderer. Specifically, the color of each pixel is computed as:

\[C=\sum_{i\in N}T_{i}\alpha_{i}c_{i} \tag{3}\]

where the transmittance, \(T_{i}=\prod_{j=1}^{i-1}(1-\alpha_{j})\), is computed based on the opacities of the Gaussians traced along the ray. The renderer takes in the intrinsic parameters (without any distortion coefficients) and the extrinsic parameters of the calibrated cameras. We refer to [13] for further details about the rendering engine, which includes many features for depth-sorting and spatially culling Gaussians into patches to allow it to quickly render millions of elements, and also includes explicit derivations of the derivatives in the handwritten CUDA kernels. Following Kerbl et al. [13] and Luiten et al. [14], a loss is computed between the reconstructed images and the observed images that includes both an L1 term and a D-SSIM term, where we also use a relative weighting of \(\lambda=0.2\).

\[\mathcal{L}_{\text{im}}=(1-\lambda)\mathcal{L}_{1}+\lambda\mathcal{L}_{\text{D-SSIM}} \tag{4}\]

Luiten et al. [14] extended this approach to include an additional color component for each Gaussian, which corresponds to a segmentation map between foreground and background, although it can flexibly correspond to any secondary color information. This uses the same loss function as the regular image, \(\mathcal{L}_{\text{seg}}\). We discuss the segmentation mask further below. Luiten et al. [14] also replaced the view-dependent spherical harmonic representation of color with isotropic colors, together with learnable color scales and means for each camera, and we retained this feature. Because the differentiable renderer does not account for camera distortions, we applied the OpenCV [23] undistort method to our raw images.
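To make the parameterization in (1)–(2) concrete, a minimal PyTorch sketch is given below. The function names are ours, and this is not the CUDA rasterizer, which additionally projects, depth-sorts, and alpha-composites the Gaussians per (3); the sketch only evaluates a single Gaussian's influence at a 3D point.

```python
import torch
import torch.nn.functional as F

def quat_to_rotmat(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q
    return torch.stack([
        torch.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)]),
        torch.stack([2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)]),
        torch.stack([2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]),
    ])

def gaussian_influence(x, mu, log_scale, quat, opacity_logit):
    """Evaluate G(x) of Eq. (1) for one Gaussian; x, mu, log_scale, quat are 1-D tensors,
    opacity_logit a scalar tensor."""
    R = quat_to_rotmat(F.normalize(quat, dim=0))   # normalization non-linearity on the quaternion
    S = torch.diag(torch.exp(log_scale))           # exponential non-linearity keeps scales positive
    Sigma = R @ S @ S.T @ R.T                      # Eq. (2)
    d = x - mu
    return torch.sigmoid(opacity_logit) * torch.exp(-0.5 * d @ torch.linalg.inv(Sigma) @ d)
```

Because every step is differentiable, gradients from the image loss in (4) can flow back to all of the per-Gaussian parameters.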
### Dynamic Losses

Luiten et al. [14] includes several additional losses for the Gaussians between timesteps. Each of these is applied to a local region of Gaussians identified using KNN clustering based on the initial scene reconstruction.

\[\mathcal{L}_{i,j}^{\text{rigid}}=w_{i,j}\left\lVert(\mu_{j,t-1}-\mu_{i,t-1})-R_{i,t-1}R_{i,t}^{-1}(\mu_{j,t}-\mu_{i,t})\right\rVert_{2} \tag{5}\]

\[\mathcal{L}_{\text{rigid}}=\frac{1}{k|S|}\sum_{i\in S}\sum_{j\in\text{knn}_{i;k}}\mathcal{L}_{i,j}^{\text{rigid}} \tag{6}\]

\[\mathcal{L}_{\text{rot}}=\frac{1}{k|S|}\sum_{i\in S}\sum_{j\in\text{knn}_{i;k}}w_{i,j}\left\lVert\hat{q}_{j,t}\hat{q}_{j,t-1}^{-1}-\hat{q}_{i,t}\hat{q}_{i,t-1}^{-1}\right\rVert_{2} \tag{7}\]

\[\mathcal{L}_{\text{iso}}=\frac{1}{k|S|}\sum_{i\in S}\sum_{j\in\text{knn}_{i;k}}w_{i,j}\left(\left\lVert\mu_{j,0}-\mu_{i,0}\right\rVert_{2}-\left\lVert\mu_{j,t}-\mu_{i,t}\right\rVert_{2}\right) \tag{8}\]

Here \(t-1\) and \(t\) denote consecutive timesteps, \(S\) the set of Gaussians, and \(w_{i,j}\) the fixed weights assigned to the \(k\) nearest neighbors of Gaussian \(i\) in the initial reconstruction.

### Optimization

We followed prior work and used the Adam optimizer with different learning rates for each of the parameters. The learning rate for the mean Gaussian locations was 0.00016 times the scale of the scene. The colors had a learning rate of 0.0025. The segmentation map had a learning rate of 0.001. The unnormalized quaternion representation had a learning rate of 0.001. The logit for the opacities had a learning rate of 0.05. The logarithm of the scales had a learning rate of 0.001. The camera means and scales (pre-exponential) had a learning rate of 1e-4. The optimization process also includes additional heuristics during training that use the accumulated derivatives applied to each Gaussian to determine regions that should either be pruned or have their density increased. Points with very low opacities are also pruned, as are points with spatial scales that exceed a threshold size (typically 10% of the scene volume). We refer to the prior work for these details. For dynamic fits, parameters from the previous frame were used to initialize the next frames, as in [14], with the velocity over the prior two frames used to predictively adjust the Gaussian positions prior to the optimization. We used 2000 iterations of optimization for subsequent frames. We rendered the scenes at the full 2048x1536 resolution, which ran at approximately 20 iterations per second on an A6000 (with two renderings per iteration to produce the segmentation mask). Similarly to [13, 14], we sampled the training views in a random order in blocks without replacement.

### Initialization

In Kerbl et al. [13] and Luiten et al. [14], the Gaussians are initialized from a precomputed point cloud. In the case of Luiten et al. [14], which performed dynamic reconstruction of human movement recorded with 27 cameras, this point cloud was initialized with an additional set of depth cameras. We found that with random initialization, optimization of the first frame from a random point cloud did not converge to an accurate reconstruction of the scene, as reflected both by poor generalization to camera views not used during reconstruction and by visually inspecting the optimized point cloud. In particular, we noted a lot of background Gaussians obscuring the validation view, and initial experiments adjusting the pruning and density-increasing heuristics did not seem to resolve this. Visualizing interpolation between camera views made it apparent how pieces of floating geometry would align into specific places for the training views to reconstruct the correct images with an incorrect geometry, highlighting the challenges of reconstruction from these underconstrained data. To improve convergence to a reconstruction that matches the underlying geometry, we explored using additional visual cues to guide the scene reconstruction.
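Before turning to those additional cues, we note that the temporal regularizers in (5)–(8) reduce to a few lines of tensor code. The sketch below uses illustrative names, folds the \(1/(k|S|)\) normalization into a mean, and omits the rotation term (7) for brevity; it is not the implementation used in our experiments.

```python
import torch

def dynamic_regularizers(mu_t, mu_prev, R_t, R_prev, mu_0, knn_idx, w):
    """Sketch of the rigidity (5)-(6) and isometry (8) terms; rotation term omitted.
    mu_*: (N, 3) centers, R_*: (N, 3, 3) rotations, knn_idx: (N, k) neighbor indices,
    w: (N, k) fixed weights from the initial reconstruction."""
    # offsets to each neighbor at the previous and current timestep
    d_prev = mu_prev[knn_idx] - mu_prev[:, None, :]                   # (N, k, 3)
    d_t = mu_t[knn_idx] - mu_t[:, None, :]
    # rotate current offsets back into the previous local frame: R_{t-1} R_t^{-1}
    back = torch.einsum('nij,njk->nik', R_prev, R_t.transpose(1, 2))  # (N, 3, 3)
    d_t_in_prev = torch.einsum('nij,nkj->nki', back, d_t)             # (N, k, 3)
    l_rigid = (w * (d_prev - d_t_in_prev).norm(dim=-1)).mean()        # mean = 1/(k N) sum
    # isometry: neighbor distances should stay close to their frame-0 values
    d0 = (mu_0[knn_idx] - mu_0[:, None, :]).norm(dim=-1)
    l_iso = (w * (d0 - d_t.norm(dim=-1)).abs()).mean()                # abs keeps the penalty symmetric
    return l_rigid, l_iso
```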
### Depth We attempted to provide additional supervision from inferred depth maps to remove the sparse, noisy, background Gaussians. This was motivated by recent work showing that depth was a necessary supervisory signal when using implicit representations to reconstruct dynamic scenes from sparse RGB-D images [24]. Because our cameras only produce RBG data, we used DistDepth [25] to infer the depth images, which produced plausible results. The official CUDA implementation of the differentiable renderer does not support backpropagation through the depth image, so we used a fork that implemented this functionality [26]. We followed Luiten et al. [14] by including a learnable per-camera scale and offset when comparing the rendered depth maps to the inferred maps, using an L1 loss. However, even with this additional supervision, we were unable to achieve an accurate inital reconstruction of the underlying geometry. We found comparing the depth images to the rendered depth images was still a useful diagnostic tool, and include these images in our results. ### Segmentation and Masking We then tried to use semantic segmentation to provide additional supervision. We used the Mask2Former implementation available through the Huggingfaces library [27]. Specifically, we used the large model with a Swin background, trained on the ADE20K dataset, which outputs 150 class labels [28, 29]. We mapped these 150 classes to different RGB colors, with a black background to produce the segmentation target image for each view. We did include code to map the plaything, toy class to the person class as we saw a few instances where the infant was misclassified as a toy (perhaps reflecting dolls in the training dataset). In addition to providing an additional semantic segmentation image, which was used to compute the segmentation loss, \(\mathcal{L}_{\text{seg}}\), we also supported masking the image to only include the infant. This was done by first identifying the largest contiguous mask, and only including this (the legs of experimenters and parents were sometimes visible). Then the raw image was set to zero outside the identified mask, and this masked image was used to compute the image loss, \(\mathcal{L}_{\text{im}}\). When masking, we also added the ability to prune Gaussians outside the volume where the infant was placed (within 2 meters of the calibration center near the mat). ### Sparsity pruning Even with masking during the initial reconstruction, we still found some Gaussian scattered through open areas that would obscure novel views. During the densification and pruning stages already implemented, we added an addition pruning step that would remove these points. Specifically, we would remove any Gaussian that had a minimum distance to their nearest other Gaussian that was greater than 0.1m. Like this other steps of density adjustment, this occurred for every 100th iteration between 500 and 15000. In initial experiments, we attempted making this a differentiable term in the loss function, but found that computing the pairwise distances was very slow and still was not producing the desired results. 
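The non-differentiable pruning rule described above amounts to a nearest-neighbor query. A minimal sketch is given below, using the 0.1 m threshold from the text and a KD-tree, which may differ from the data structure used in our actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def sparsity_prune(means, keep_mask, max_nn_dist=0.1):
    """Drop Gaussians whose nearest neighbor is farther than max_nn_dist (in meters).
    means: (N, 3) Gaussian centers; keep_mask: (N,) boolean mask of currently kept points."""
    pts = means[keep_mask]
    tree = cKDTree(pts)
    # k=2 because the closest point to each Gaussian is itself (distance 0)
    dists, _ = tree.query(pts, k=2)
    isolated = dists[:, 1] > max_nn_dist
    new_mask = keep_mask.copy()
    new_mask[np.flatnonzero(keep_mask)[isolated]] = False
    return new_mask
```

In the training loop this runs alongside the existing densification heuristics, i.e., every 100th iteration between 500 and 15,000.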
### Training losses

This gave us a total loss for initialization of:

\[\mathcal{L}_{\text{init}}=\lambda_{\text{im}}\mathcal{L}_{\text{im}}+\lambda_{\text{seg}}\mathcal{L}_{\text{seg}} \tag{9}\]

and for the subsequent frames of

\[\mathcal{L}_{\text{dyn}}=\lambda_{\text{im}}\mathcal{L}_{\text{im}}+\lambda_{\text{seg}}\mathcal{L}_{\text{seg}}+\lambda_{\text{rigid}}\mathcal{L}_{\text{rigid}}+\lambda_{\text{rot}}\mathcal{L}_{\text{rot}}+\lambda_{\text{iso}}\mathcal{L}_{\text{iso}} \tag{10}\]

with parameters \(\lambda_{\text{im}}=1.0\), \(\lambda_{\text{seg}}=3.0\), \(\lambda_{\text{rigid}}=4.0\), \(\lambda_{\text{rot}}=4.0\), and \(\lambda_{\text{iso}}=2.0\).

### Metrics

To quantify the performance of the reconstruction, we used 7 cameras to reconstruct the scene and measured the accuracy of the reconstructed images against this validation view. The PSNR is computed from the L2 loss and is a measure of the peak signal-to-noise ratio. In all cases, this was only applied to the region of the image with the segmentation mask corresponding to the baby.

\[\text{PSNR}=20\log_{10}\left(\frac{1}{\sqrt{\text{MSE}}}\right) \tag{11}\]

### Optimizing initial scene reconstruction

We explored a range of parameters to determine their impact on the initial scene reconstruction, as this was essential to good dynamic performance. To quantify this, we took a set of frames from two sessions, one from each baby. For each scene, we repeated the reconstruction using one of four cameras as the validation camera and used the remaining 7 cameras to perform the initial scene reconstruction. For each of the 8 fits, we computed the metrics described above for the validation frame.

### Dynamics through deformation fields

We also implemented an alternative dynamic tracking method using deformation fields, based on Yang et al. [17]. Instead of iteratively updating the location of Gaussians for each frame with a regularizer based on the prior frame location, we trained a deformation field that models the change in location, rotation, and scale based on the initial 3D locations and the desired time point. This was implemented as:

\[(\delta\mu,\delta r,\delta s)=F_{\theta}(\gamma(x),\gamma(y),\gamma(z),\gamma(t)) \tag{12}\]

\[\gamma(p)=\left(\sin(2^{k}\pi p),\cos(2^{k}\pi p)\right)_{k=0}^{L-1} \tag{13}\]

where \(\gamma(p)\) is a sinusoidal positional encoding applied to each of the space-time coordinates, with \(L=10\), and \(F_{\theta}\) is an 8-layer MLP with a hidden dimension of 512. We performed optimization of the initial scene as above for 9000 iterations, after which we performed iterations over the entire sequence for another 40k iterations. When optimizing the entire sequence, we only used the image loss \(\mathcal{L}_{\text{im}}\). Images from the training views were randomly sampled from different timepoints. Because this required the entire sequence to be loaded into memory, the sequence length is limited by system memory. As in Wu et al. [15], we added Gaussian noise to the positionally encoded value of time with a standard deviation of 0.1 that linearly reduced to 0 after 20k iterations. We found this served to anneal the solution during training and improve the temporal performance. Because the deformation field does not use an explicit map onto the Gaussians, but rather takes in their coordinates, it was possible to continue performing the density adjustments while training the entire sequence. We kept this component unaltered from above.
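A compact sketch of this deformation field (illustrative PyTorch, not our released code) makes the encoding in (13) and the network in (12) concrete; the time-noise annealing and density control are omitted.

```python
import math
import torch
import torch.nn as nn

def posenc(p, L=10):
    """Sinusoidal encoding of Eq. (13), applied elementwise to (..., D) coordinates."""
    freqs = 2.0 ** torch.arange(L) * math.pi                     # 2^k * pi, k = 0..L-1
    angles = p[..., None] * freqs                                # (..., D, L)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

class DeformationField(nn.Module):
    """MLP F_theta of Eq. (12): maps encoded (x, y, z, t) to (delta mu, delta rot, delta scale)."""
    def __init__(self, L=10, hidden=512, layers=8):
        super().__init__()
        in_dim = 4 * 2 * L                                       # 4 coords, sin+cos per frequency
        blocks, d = [], in_dim
        for _ in range(layers):
            blocks += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        self.mlp = nn.Sequential(*blocks)
        self.head = nn.Linear(hidden, 3 + 4 + 3)                 # offsets for mu, quaternion, scale

    def forward(self, xyz, t):
        # xyz: (N, 3) canonical (frame-0) centers, t: scalar tensor for the timepoint
        x = torch.cat([xyz, t.expand(xyz.shape[0], 1)], dim=-1)  # (N, 4)
        out = self.head(self.mlp(posenc(x)))
        return out[:, :3], out[:, 3:7], out[:, 7:]
```

At render time, the Gaussian means, rotations, and scales for a given frame are obtained by adding these predicted offsets to the canonical frame-0 parameters before rasterization.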
## 3 Results ### Initial static reconstructions First we tested the impact of our additional supervision sources on the initial scene reconstruction, showing that using the mask from the semantic segmentation was critical. #### 3.1.1 Novel view synthesis with masking With masking enabled and discarding all Gaussians outside 2m from the center of the mat, we got visually compelling results for novel views Figure 1 (second row). The PSNR was for the validation view (computed over the region of the baby mask, only) averaged over our 8 training conditions was about half of the PSNR for the training views Table 1. #### 3.1.2 Ablations We originally attempted to reconstruct without additional constraints. In this case, we did not prune any Gaussians based on their spatial location and initialized 100k points uniformly over a 10m area (instead of 10k for the smaller area). We found that while this was able to match the training views, the validation views were heavily obscured by random floating Gaussians Figure 1 (fourth row). This was true even when reconstruction of the training images performed well Figure 2 (fourth row). \begin{table} \begin{tabular}{l c c c c c} \hline \hline Mask & \(\lambda_{\text{avg}}\) & Spare & \# Init & Init & Val & Train \\ & & Points & Range & PSNR & PSNR \\ \hline True & 3.0 & False & 10000 & 1.5 & 13.24 & 21.09 \\ True & 3.0 & False & 20000 & 1.5 & 13.31 & 20.94 \\ False & 3.0 & False & 100000 & 10.0 & 7.79 & 25.94 \\ False & 0.0 & False & 100000 & 10.0 & 7.96 & 34.29 \\ True & 3.0 & True & 10000 & 1.5 & **13.41** & 20.71 \\ \hline \hline \end{tabular} \end{table} Table 1: Settings and results of ablation studies showing dramatic decline in validation PSNR when not including the segmentation mask. Figure 1: Example visualizations of the validation view reconstructed with the different supervision signals. The top row shows the original image with the depth image and semantic segmentation mask (the later two both inferred with algorithms described above). The next row shows the image, depth image, and semantic map reconstructed when the baby mask was applied to all views. The next row uses the semantic segmentation as a supervision signal, but is unable to reconstruct novel views other than hints of the semantic map. The final row shows that with no additional supervision, the reconstruction is very poor. Figure 2: The same as Figure 1, but for the training views. In all settings, Gaussians are optimized that can reconstruct the training views. However, the depth maps already show inconsistencies reflecting the floating geometry aligned to the camera perspective. Note that on the third row, the model is able to recreate the segmentation map. We also attempted initialization with the semantic segmentation loss. However, the views were still very obscured Figure 1 (third row). Note however that in this case the segmentation map was also reconstructed well for training views Figure 2 (third row), with some hints showing through on the validation views. Additionally, we noted the rendered depth images showed a clear discrepancy between the depth maps estimated with DistDepth, but as noted were unable to use this as a supervision signal while the depth channel is not backpropagated through the differential renderers CUDA kernel. #### 3.1.3 Hyperparameter tests For the masked reconstruction, we also explored several other hyperparameter values. 
For example, we found that increasing the iterations from 6000 to 16000 steps did not substantially improve the performance and by 30000 steps showed signs of overfitting. After reducing to 2000 iterations, performance degraded. For most other adjustments, we used 6000 iterations to reduce the computation time. Within the range of parameters we adjusted, their impact was relatively small compared to the impact of the masking. ### Dynamic reconstructions Based on these results, we tested the performance of dynamic tracking with both the iterative tracking approach and the deformation field approach. Quantitatively, we saw that both approaches produced comparable PSNR values for the training data (Figure 3). The iterative approach achieved higher PSNR values on the training views. Qualitatively, it appeared to produce sharper images in general. However, we also saw some Gaussians would drift away from the infant in the dynamic reconstruction, which create floaters in the rendered views. In a few frames, these would also create substantial visual artifacts. In contrast, the deformation field lost some of this higher resolution detail but more reliably captured the movements. Loading the entire sequence into memory used 300GB of system memory for 400 frames. The fitting was also substantially faster than the iterative approach, with the 40k iterations taking about 40 minutes for 400 frames, compared to nearly 12 hours for this many frames updated iteratively with 2k steps per frame. We wanted to determine whether the Gaussians track specific body parts of the infant, as was seen in Luiten et al. [14]. We created videos that show the location history of 2% of the Gaussians over the last 10 frames (interpolated up to 10x temporal resolution). We observed that as expected, individual color tracks persisted in alignment with specific body locations using both dynamic methods. For example Figure 5 shows consistent traces over the leg during these movements, as well as movement over the arm, with the markers over the head showing much less movement. We did not quantify this consistency against any existing pretrained keypoint detectors. Figure 4: Frames from iterative fit. The top row shows frames reconstructed from the validation cameras. The bottom row shows an interpolated camera view between two of the training views, with the middle three images not being from a training view. Timesteps were selected to capture different body postures. Notice the fourth interpolated frame shows substantial floaters obscuring the view that drifted during dynamic fitting. Figure 5: Snapshots of movement with 2% of Gaussians showing a trajectory of their history, with each point assigned a unique color. This shows how individual Gaussians track specific anatomic locations. The top row used the iterative dynamic tracking and the middle row used the deformation field. The bottom traces show a subset of displacements for Gaussians, showing groupings of coherent movement. Figure 3: Masked PSNR over dynamic tracking. The left column shows the training and validation PSNR for two videos fit with the iterative method. The right column shows the same but for the deformation field method. ## 4 Discussion We find that dynamic Gaussian splatting to reconstruct markerless motion capture data shows promise for rendering novel views of scenes and tracking movements within the scene. However, a good initialization is critical and challenging. With only sparse views, many unrealistic geometries can reconstruct the training views. 
This becomes apparent when interpolating between views, as artifactual floating geometries obscure the interpolated views and then snap into place when viewed from the training image perspectives. Applying a segmentation mask and only reconstructing the infant in the MMC view drastically reduces this problem and improves the realism of novel views. However, artifacts were still visible. Some were due to imperfect segmentation masks, causing Gaussians that capture the mat to attach to the infant. Sparsifying the geometry to remove any Gaussians that are more than 10cm from their nearest neighbor lessened these floating artifacts and improved the validation PSNR. In dynamic scenes, some Gaussians also drifted away from the infant. This suggests future opportunities to improve the priors and heuristics during initialization and on the dynamics between frames. We expected that combining segmentation masks and depth as additional supervision signals would allow us to optimize realistic geometries for the entire scene. However, even combined with additional sparsification, the result was a cluttered reconstruction that generalized poorly to new views. It is possible even with the per-camera scale and offset, the depth images were not geometrically consistent, and we saw some artifacts in the inferred depth images that would suggest this such as offsets when the leg of a tripod passed through a view. We suspect there are other opportunities to use structure-from-motion algorithms to obtain a better initialization that would improve the whole-view reconstruction, and that this will also improve the reconstruction of the subject. Iteratively updating the Gaussian locations versus using a deformation field seem to have different strengths. The iterative solution often produced sharper reconstructions, but was more prone to artifacts. This perhaps could refined through tuning the regularization losses between frames. The deformation field allows directly optimizing the entire dynamic scene, potentially allowing visual information from later in the sequence to improve earlier Gaussian locations. For example, the distribution of Gaussians in the initial scene over the surface seemed better distributed when optimizing with the deformation field. It also and takes less time to train. The memory limitations currently limit the sequence length that can be optimized, but a more efficient pipeline or caching to disk could remove this limitation. Because the deformation field was decoupled from the specific set of Gaussians, it was also possible to continue adjusting the density over the entire sequence. It also provides a natural framework to interpolate between frames and to compute the velocity of areas by differentiating with respect to time, which could potentially allow accounting for motion blurred and tracking faster movements. In this work, we focused on infants. Our masking approach is particularly effective in this situation, where only a single infant is in view. However, we anticipate this approach will be extensible to room-scale tracking of all movements, allowing us to collect diverse training data to generalize to a wide clinical population. We anticipate this can be further enhanced by using a traditional approach with many images to create an initial set of Gaussians for the background, allowing full scene tracking. 
Reconstructing the geometry underlying a scene to render novel views shows the amount of information extracted from a scene, but is not the most clinically useful representation of movement. This system could be further augmented with keypoint detectors or other algorithms to explicitly track semantically meaningful body components. In future work, we also anticipate using keypoint detectors to quantify the automatic tracking accuracy, as in [14]. Additionally, we could fit the SMIL model [9] to these points to produce a parametric representation that is comparable between participants. Ultimately, we hope to fit biconnechanically valid models to this movement to provide a representation comparable to the language of clinicians and biomechanics. We anticipate using this approach to synthetically render novel views from MMC datasets from a larger cohort of infants will allow us to train 2D and 3D keypoint detectors that generalize well to a wide range of infants, including those in the NICU. If this enables algorithms that can predict clinical measurements of movement quality in infants, this will create a powerful approach to early detection and interventions for infants at risk for neurodevelopmental problems. ## 5 Conclusion Dynamic gaussians can be used to reconstruct the underlying time-varying geometry of scenes from data collected with synchronous videos from only sparse views, such as from markerless motion capture systems. This allows rendering novel views and tracking the underlying geometries. This is a promising approach to tracking kinematics for individuals where there is limited training data for traditional pose estimation algorithms, such as the movements of infants, and could also produce training data for training these pose estimation algorithms. ## Acknowledgements This work was generously supported by the Research Accelerator Program of the Shirley Ryan AbilityLab and with funding from the Restore Center P2C (NIH P2CHD101913). We thank the participants and staff of the Shirley Ryan AbilityLab for their contributions to this work.
2306.04490
Detecting Nonclassicality and quantum non-Gaussianity of photon subtracted displaced Fock state
In this paper, a quantitative investigation of the non-classical and quantum non-Gaussian characters of the photon-subtracted displaced Fock state $|{\psi}\rangle=a^kD(\alpha)|{n}\rangle$, where $k$ is number of photons subtracted, $n$ is Fock parameter, is performed by using a collection of measures like Wigner logarithmic negativity, linear entropy potential, skew information based measure, and relative entropy of quantum non-Gaussianity. It is noticed that the number of photons subtracted ($k$) changes the nonclassicality and quantum non-Gaussianity in a significant amount in the regime of small values of the displacement parameter whereas Fock parameter ($n$) presents a notable change in the large regime of the displacement parameter. In this respect, the role of the Fock parameter is found to be stronger as compared to the photon subtraction number. Finally, the Wigner function dynamics considering the effects of photon loss channel is used to show that the Wigner negativity can only be exposed by highly efficient detectors.
Deepak, Arpita Chatterjee
2023-06-07T15:01:12Z
http://arxiv.org/abs/2306.04490v1
# Nonclassicality versus quantum non-Gaussianity of photon subtracted displaced Fock state ###### Abstract In this paper, a quantitative investigation of the non-classical and quantum non-Gaussian characters of the photon-subtracted displaced Fock state \(\left|\psi\right\rangle=a^{\ast}D(\alpha)\left|n\right\rangle\), where \(k\) is the number of photons subtracted, \(n\) is Fock parameter, is performed by using a collection of measures like Wigner logarithmic negativity, linear entropy potential, skew information based measure, and relative entropy of quantum non-Gaussianity. It is noticed that the number of photons subtracted (\(k\)) changes the nonclassicality and quantum non-Gaussianity in a significant amount in the regime of small values of the displacement parameter whereas the Fock parameter (\(n\)) presents a notable change in the large regime of the displacement parameter. In this respect, the role of the Fock parameter is found to be stronger as compared to the photon subtraction number. Finally, the Wigner function dynamics considering the effects of photon loss channel is used to show that the Wigner negativity can only be detected by highly efficient detectors. ## I Introduction Quantification of quantum non-Gaussianity and non-classicality of a radiation field has been considered extensively in recent days. This attention is justified as with the appearance of quantum information science, it is appreciated that nonclassicality and quantum non-Gaussianity are two main features of the quantum world, which can direct to quantum supremacy [1]. A theoretical work observed that the photon-subtracted Gaussian state is non-classical if and only if the initial Gaussian state is non-classical, whereas the scenario is different for the photon-added, photon-added-then-subtracted, and photon-subtracted-then-added Gaussian states, they are always nonclassical independent of the initial Gaussian state [2]. This investigation motivates us to study the non-classical as well as quantum non-Gaussian characteristics of the photon-subtracted displaced Fock state. Quantification of nonclassicality and quantum non-Gaussianity can be approached in different ways. In any case, no particular measure of nonclassicality or quantum non-Gaussianity can be constructed. Here we consider a measure that can estimate the quantum non-Gaussianity of a field state which is an indispensable resource for quantum information processing. Genoni et. al. [3] addressed two entanglement-distillation protocols of quantum non-Gaussianity depending on the Hilbert-Schmidt distance and the quantum relative entropy (QRE) between the state undergoing the experiment and a reference Gaussian state. They found that the quantum non-Gaussianity appears to be dependent on the protocol itself, and in either of the two distillation protocols, the amount of the gained entanglement at each step is monotonous with the quantum non-Gaussianity of the initial low-entangled state. They also illustrated that in the bipartite setting, there is a connection between correlations and quantum non-Gaussianity, that is, at fixed covariance matrix quantum non-Gaussian states have more correlations and this excess of correlations is proportional to the amount of quantum non-Gaussianity that the quantum state loses during partial trace operation or decoupling. 
These results on the robustness of quantum non-Gaussian entanglement in noisy Markovian channels suggest that there are regimes where quantum non-Gaussian resources can be exploited to improve quantum communication protocols. The findings of [3] pave the way for further research and indicate that a detailed understanding of the geometrical and analytical structures underlying the quantum non-Gaussian features of states and operations could be an extremely useful tool for the successful implementation of continuous-variable (CV) quantum information processing. Since the advancement of continuous-variable quantum technology, most of the protocols designed for finite-dimensional Hilbert spaces have been first implemented in the CV setting by using Gaussian states for a number of reasons. Gaussian states are experimentally produced with a high degree of control, especially in quantum optics, and Gaussian measurements may be effectively implemented in different situations. Moreover, Gaussian states, being a member of an infinite-dimensional Hilbert space, are easy to handle from a theoretical point of view as these states are fully described by the first and second moments of the canonical operators [4; 5; 6; 7; 8; 9]. However, it has recently come to light that there are situations where quantum non-Gaussianity, in the form of quantum non-Gaussian states or quantum non-Gaussian operations, is required to complete some relevant tasks in quantum information processing. For example, quantum non-Gaussianity is essential for realizing entanglement distillation [10; 11; 12], quantum error correction [13], and cluster-state quantum computation [14, 15]. Additionally, quantum non-Gaussian measurements and/or quantum non-Gaussian states are crucial for detecting violations in CV loophole-free Bell tests [16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. Also, quantum non-Gaussian states or quantum non-Gaussian operations can be used to improve quantum teleportation [26, 27, 28] and quantum cloning of coherent states, respectively. Any quantum state \(\rho\) can be described in respect of the well-known Glauber-Sudarshan \(P\) function as [29, 30] \[\rho=\int d(P(\alpha))\left|\alpha\right\rangle\left\langle\alpha\right|. \tag{1}\] If \(P(\alpha)\) skips to be a probability distribution function, the state is non-classical. In other words, a state that cannot be precisely written in terms of a statistical mixture of coherent states is known as a non-classical state. Recently, the significance of non-classical states has been improved considerably as in the dominion of meteorology and information processing [31], enhancing the performance of the devices is of high demand and it is found that the desired enhancement is possible only using quantum resources. Non-classical states are very important for acquiring quantum advantage and thus, they are considered as the basic building blocks for quantum-enhanced devices having advantages over their corresponding classical variants. It can be mentioned that the non-classical states not only detect gravitational waves in LIGO experiment [32], they are found to be essential for quantum teleportation [33, 34, 35], quantum key distribution [36, 37, 38], quantum computation [39, 40] and so on. As nonclassicality directs to quantum advantage, it is time to quantify the amount of nonclassicality and the quantum advantages it can provide. In 1987, Hillery presented a distance-based measure [41], however, there were numerous computational challenges related to that. 
Following that, Mari et. al. [42] introduced a measure of nonclassicality in terms of the operational distinguishability of a quantum state with a positive Wigner function, given by \(\eta(\rho)=\min\limits_{\omega\in\mathbb{C}}\left\|\rho-\omega\right\|\) where \(\mathcal{C}\) denotes the set of all quantum states with positive Wigner function and \(||.||_{1}\) is the trace norm. In 1991, Lee proposed a measure for the quantification of nonclassicality known as nonclassical depth [43] (for a brief review see [44]). These reviews unveiled that quantum non-Gaussianity inducing operations, e.g., photon addition, photon subtraction, and their combinations [45, 46] can introduce and/or increase the amount of nonclassicality in an arbitrary quantum state. Filip et. al. [47] proposed a novel criterion for uncovering quantum states of a harmonic oscillator with a positive Wigner function which could not be expressed as a convex mixture of Gaussian states. Quantum non-Gaussian states can be defined as the states which cannot be defined in terms of a probabilistic mixture of Gaussian states [48, 49]. In a recent year, Chabaud et. al. [50] introduced the so-called stellar formalism that characterizes non-Gaussian quantum states by the distribution of the zeros of their Husimi \(Q\) function in phase space. They have studied the topological properties of the stellar hierarchy with respect to the trace norm in detail. In addition, all the Gaussian states can be defined by the first- and second-order moments, which means that the mean and covariance matrix of the Gaussian states provide their complete information. We can define a complex hull \(\mathcal{G}\) with the classical probability distribution \(P_{cl}(\lambda)\) \[\rho=\int d(P_{cl}(\lambda))\left|\psi_{G}(\lambda)\right\rangle\left\langle \psi_{G}(\lambda)\right|. \tag{2}\] in the Hilbert space \(\mathcal{H}\). This set includes all the Gaussian states and some quantum non-Gaussian states. Interestingly, the quantum non-Gaussian states obtained as the statistical mixtures of Gaussian states in the form (2) have minimal applications due to their origin in classical noise [48, 49]. On the other hand, quantum non-Gaussian states \(\rho\) in \(\mathcal{H}\) are the states which do not belong to the complex hull \(\rho\notin\mathcal{G}\). It is worth studying the quantification of quantum non-Gaussianity in such a state as these states can be used as more robust resources compared to the Gaussian states ([51, 52, 53] and references therein). Specifically, no-go theorems are limiting the applications of Gaussian operations and Gaussian states in entanglement distillation, quantum error correction, quantum computing, and quantum bit commitment. The use of quantum non-Gaussian operations is known to provide advantages in quantum computation, quantum communication, quantum metrology, etc. Although, in what follows, these exciting features of nonclassicality and quantum non-Gaussianity instigated us to study how the non-classicality and quantum non-Gaussianity change with different state parameters of photon subtracted displaced Fock state (PSDFS). There are several reasons for choosing PSDFS as the state of interest to study the measures of nonclassicality and quantum non-Gaussianity. Firstly, in a different limiting context, this state provides a set of well-known quantum states having many applications. The PSDFS is the well-known state obtained by subtracting photons from the displaced Fock state (DFS). 
Different features of DFS have been studied extensively in the literature [54, 55, 56]. Further, it is observed that quantum non-Gaussianity-inducing operators significantly affect the signatures of nonclassicality of DFS viewed through different witnesses [57]. Therefore, it is appropriate to consider the PSDFS for studying the nonclassicality and quantum non-Gaussianity features, using the Wigner negativity, linear entropy potential, skew information-based measure, Wigner logarithmic negativity, and relative entropy measure of quantum non-Gaussianity. Various limiting cases of PSDFS are described in Fig. 1. The PSDFS reduces to the displaced Fock state if \(k=0\) and to the photon subtracted coherent state if \(n=0\). Further, the displaced Fock state can be converted to the coherent and Fock states by taking \(n=0\) and \(\alpha=0\), respectively. Again, the photon subtracted coherent state reduces to the coherent state by taking \(k=0\), and the coherent and Fock states are further reduced to the vacuum state \(\left|0\right\rangle\) by putting \(\alpha=0\) and \(n=0\), respectively. It is worth noting that the DFS has been generated experimentally by superposing a Fock state with a coherent state on a beam-splitter [58]. Along these lines, a schematic diagram for generating the PADFS and PSDFS using single-mode and two-mode squeezed vacuum states was proposed in [46]. Using three (two) highly transmitting beam-splitters, a conditional measurement of single photons at both detectors \(D_{1}\) and \(D_{2}\) as in Fig. 2(a) (Fig. 2(b)) would result in a single photon subtracted DFS as output from a single-mode (two-mode) squeezed vacuum state. Repeating the process \(k\) times, a \(k\)-PSDFS can be built in the lab. The structure of the paper is as follows. In Section II, we find the analytical expression for the PSDFS, and using it we further calculate the Wigner function. In Section III, we discuss different measures of nonclassicality and quantum non-Gaussianity of the PSDFS and perform a comparison. In Section IV, the dynamics of the Wigner function for the PSDFS over the photon loss channel is described. The paper ends with a conclusion in Section V.
## II Photon subtracted displaced Fock state and its Wigner function
A beam splitter whose second port is fed by a highly excited coherent state can effectively approximate the displacement operator \(D(\alpha)\) on any quantum state of the radiation field [59]. Thus, a displaced Fock state (DFS) is analytically defined as \(\left|\alpha,n\right\rangle=D(\alpha)\left|n\right\rangle\), where \(D(\alpha)\) is the displacement operator and \(\left|n\right\rangle\) is the Fock state with \(n\) photons. This is a nonclassical state, whereas \(D(\alpha)\) operating on the vacuum state \(\left|0\right\rangle\) provides a classical coherent state \(\left|\alpha\right\rangle\). In the present work, we deal with the photon-subtracted displaced Fock state (PSDFS) obtained by applying the quantum non-Gaussianity introducing annihilation operator \(k\) times on the DFS. We attempt here to quantify the amount of nonclassicality present in the PSDFS and then discuss its relation with quantum non-Gaussianity in detail.
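Although all results below follow from the closed-form expansion derived next, the defining operation \(a^{k}D(\alpha)\left|n\right\rangle\) is also straightforward to reproduce numerically. The short Python sketch below is an illustration added for the reader (the truncated Fock dimension and helper names are choices of this example, not part of the original work) and can be used to cross-check the coefficients \(C_{m}\) given next.

```python
import numpy as np
from scipy.linalg import expm

def annihilation(dim):
    """Truncated Fock-basis matrix of the annihilation operator a."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def psdfs(alpha, n, k, dim=60):
    """Normalized photon-subtracted displaced Fock state a^k D(alpha)|n> as a Fock-basis vector."""
    a = annihilation(dim)
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)   # displacement operator D(alpha)
    e_n = np.zeros(dim, dtype=complex)
    e_n[n] = 1.0                                        # Fock state |n>
    psi = np.linalg.matrix_power(a, k) @ (D @ e_n)      # subtract k photons
    return psi / np.linalg.norm(psi)

if __name__ == "__main__":
    psi = psdfs(alpha=0.5, n=3, k=1)
    a = annihilation(len(psi))
    nbar = np.real(psi.conj() @ (a.conj().T @ a @ psi))
    print("norm =", np.linalg.norm(psi), " <a^dag a> =", round(float(nbar), 4))
```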
The \(k\) photon subtracted displaced Fock state can be written as follows (see Appendix A) \[\left|\psi\right\rangle=Na^{k}D(\alpha)\left|n\right\rangle=\sum_{m=0}^{\infty}C_{m}(n,\alpha,k)\left|m\right\rangle \tag{3}\] where \[C_{m}(n,\alpha,k)=Ne^{-\frac{\left|\alpha\right|^{2}}{2}}\alpha^{m-n+k}\sqrt{\frac{n!}{m!}}L_{n}^{m+k-n}(\left|\alpha\right|^{2})\] with normalization factor \(N\) given by \[N=\left[\sum_{r=0}^{k}\binom{k}{r}^{2}\frac{n!}{(n-r)!}|\alpha|^{2(k-r)}\right]^{-1/2}\] and \(L_{n}^{m}(x)\) is the associated Laguerre polynomial. The Wigner function for an arbitrary quantum state with density matrix \(\rho\) can be described as [60] \[W(\gamma,\gamma^{*})=\frac{2}{\pi^{2}}e^{-2\left|\gamma\right|^{2}}\int d^{2}\lambda\left\langle-\lambda|\rho|\lambda\right\rangle\,\exp[-2(\gamma^{*}\lambda-\gamma\lambda^{*})]. \tag{4}\] Using this representation, the Wigner function of the PSDFS is calculated as \[W(\gamma,\gamma^{*})=\frac{2N^{2}\exp(-2|\eta|^{2})}{\pi}\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p}\alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{(-2)^{p-q+r}\eta^{*p-q}}{r!(p-q)!(n-p-r)!}\;{}_{1}F_{1}(-r;p-q+1;2|\eta|^{2}), \tag{5}\] where \(\eta=\alpha-\gamma\) and \({}_{1}F_{1}(a;b;x)\) is the confluent hypergeometric function (see Appendix B for details).
Figure 1: (color online) Some special cases of photon subtracted displaced Fock state.
Figure 2: (color online) Generation of PSDFS using different quantum states and beam-splitters.
The Wigner function is plotted in Fig. 3 for different state parameter values. The existence of non-classical and quantum non-Gaussian features in the photon-subtracted displaced Fock state can be studied in terms of the Wigner function. The negative region of the Wigner function indicates the existence of nonclassicality. In addition to that, Hudson's theorem proved [61] that any pure quantum state having a non-negative Wigner function is necessarily Gaussian. The quantum state in which we have complete knowledge of the quantum system is known as a pure state. Different distributions of pure states can produce equivalent mixed states, and a mixed state is a probabilistic combination of the information about the quantum state [62]. PSDFS are by definition pure states. This implies that the negative values of the Wigner function witness the quantum non-Gaussianity in the state. Moreover, the "quantum non-Gaussianity criteria" [48] derived from Hudson's theorem set out a lower bound on the Wigner function at the origin of phase space for a pure Gaussian state. It certifies that for any given pure single-mode Gaussian state \(|\psi_{G}\rangle\), the value of the Wigner function at the origin of the phase space is bounded from below by \(\frac{2}{\pi}\exp\{-2n(1+n)\}\) where \(n=\langle\psi_{G}|\,a^{\dagger}a\,|\psi_{G}\rangle\). Here, the Wigner function (5) satisfies the lower bound condition for the parametric values taken in Fig. 3. It may be noted from (5) that the Wigner function is of Gaussian form for \(k=n=0\) as it corresponds to the Wigner function of the coherent state. It is clear from Fig. 3 that the non-zero value of the displacement parameter can influence the quantum non-Gaussian behavior of the state if either \(k\neq 0\) or \(n\neq 0\). Moreover, the non-zero Fock parameter and photon subtraction lead to the quantum non-Gaussianity of the Wigner function having a Gaussian factor. The negative region of the Wigner function in Fig.
3 clearly exhibits the non-classical as well as the quantum non-Gaussian features of the PSDFS. It is also observed that with an increase in values of \(k\), \(n\), and \(\alpha\), the negative values of the Wigner function decrease. ### Reconstruction algorithm for Wigner function To view the applicability of the results in a practical experiment, we have considered an optical state reconstruction algorithm using the Wigner function description [63]. In the case of homodyne tomography, it is convenient to represent the reconstructed state in the form of the phase-space quasiprobability density, the Wigner function as \[W_{\rho}(Q,P)\] \[= \frac{1}{2\pi}\int_{-\infty}^{\infty}\langle Q+\frac{1}{2}Q^{ \prime}|\rho|Q-\frac{1}{2}Q^{\prime}\rangle e^{-iPQ^{\prime}}dQ^{\prime}\] The experimentally measured histogram \(Pr(Q_{\theta},\theta)\) is the integral projection of the Wigner function onto a vertical plane oriented at angle \(\theta\) to the \(Q\) axis as \[Pr(Q_{\theta},\theta)\] \[= \int_{-\infty}^{\infty}W_{\rm det}(Q_{\theta}\cos\theta-P_{\theta }\sin\theta,Q_{\theta}\sin\theta+P_{\theta}\cos\theta)dP_{\theta}\] The "detected" Wigner function \(W_{\rm det}\) corresponds to the ideal Wigner function (II) for a loss-free detector, and for a detector with quantum efficiency \(\eta\), it is obtained from the latter via a convolution as \[W_{\rm det}(Q,P)\] \[= \frac{1}{\pi(1-\eta)}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}W(Q^{\prime},P^{\prime})\] \[\times\exp\Big{[}-\frac{(Q-Q^{\prime}\sqrt{\eta})^{2}+(P-P^{ \prime}\sqrt{\eta})^{2}}{1-\eta}\Big{]}dQ^{\prime}dP^{\prime}\] In Figs. 4(a)-(c), we have plotted the detected Wigner function \(W_{\rm det}(Q,P)\) with the same parametric values as in Fig. 3. In a practical experiment, the photodiodes in the homodyne detector are not 100% efficient, i.e., they do not transform every incident photon into a photoelectron. This leads to a distortion of the quadrature noise behavior which needs to be compensated for in the reconstructed state. The reconstructed Wigner function corresponds to the detectors with a nonunitary quantum efficiency \(\eta\) (\(=0.5\)). It is clear from Fig. 4 that the Wigner function restored using tomography is adequate to confirm the nonclassical and quantum non-Gaussian features of the presented optical state. The field quadrature probability density \(Pr(Q_{\theta},\theta)\) observing the field quadrature equal to \(Q_{\theta}\) is described in Figs. 4(d)-(f). This figure signifies the error while the Wigner function of the photon subtracted displaced Fock state is reconstructed by performing a full quantum state tomography. The integral projection of the ideal Wigner function \(Pr(Q_{\theta},\theta)\) quantifies the error when the results are applied to an actual physical system. ## III Measures of nonclassicality and quantum non-Gaussianity Photon subtraction is an extensively used operation to design quantum states and is employed repeatedly to transform a classical and Gaussian state to a non-classical and quantum non-Gaussian one. Consequently, this operation can be treated as a quantum non-Gaussianity injecting operator. It can also be used for introducing nonclassicality into a quantum state by hole burning process [64; 65]. In this section, we find out the variation of nonclassicality and quantum non-Gaussianity present in the PSDFS while different state parameters (number of photons subtracted (\(k\)), displacement parameter (\(\alpha\)), Fock parameter (\(n\))) are changed. 
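To make the detector-efficiency convolution of the reconstruction algorithm above concrete, the following sketch evaluates the smeared Wigner function \(W_{\rm det}\) on a grid by direct quadrature. It uses the well-known closed-form Wigner function of the Fock state \(\left|1\right\rangle\) as a stand-in for the ideal PSDFS Wigner function; the quadrature convention, the grid, and \(\eta=0.5\) are assumptions of this example only.

```python
import numpy as np
from scipy.special import eval_laguerre

def wigner_fock(n, Q, P):
    """Closed-form Wigner function of the Fock state |n> (quadratures with [q, p] = i)."""
    r2 = Q**2 + P**2
    return ((-1) ** n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2.0 * r2)

def detected_wigner(W, Q, P, eta):
    """Smear an ideal Wigner function with the Gaussian kernel of a detector of efficiency eta."""
    dQ = Q[1] - Q[0]
    Qg, Pg = np.meshgrid(Q, P, indexing="ij")
    Wdet = np.zeros_like(W)
    for i, q in enumerate(Q):
        for j, p in enumerate(P):
            kern = np.exp(-((q - np.sqrt(eta) * Qg) ** 2
                            + (p - np.sqrt(eta) * Pg) ** 2) / (1.0 - eta))
            Wdet[i, j] = np.sum(W * kern) * dQ * dQ / (np.pi * (1.0 - eta))
    return Wdet

if __name__ == "__main__":
    Q = np.linspace(-4.0, 4.0, 61)
    P = np.linspace(-4.0, 4.0, 61)
    Qg, Pg = np.meshgrid(Q, P, indexing="ij")
    W = wigner_fock(1, Qg, Pg)                 # ideal Wigner function, negative at the origin
    Wdet = detected_wigner(W, Q, P, eta=0.5)
    print("min W     =", round(float(W.min()), 4))
    print("min W_det =", round(float(Wdet.min()), 4))   # negativity nearly washed out at eta = 0.5
```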
The current study is of utmost importance as it may help to identify the appropriate state parameter values for an engineered quantum state which is to be used for executing a quantum computation, communication, or metrology task that requires non-classical and additionally quantum non-Gaussian states. For the measurement of non-classicality [66], the closed-form analytic expressions of an entanglement potential (referred to as the linear entropy potential), a skew information-based measure, and the Wigner logarithmic negativity are obtained in this section. Further, the Wigner logarithmic negativity and the relative entropy of quantum non-Gaussianity are calculated to estimate the quantum non-Gaussianity. In the following subsections, we briefly present these measures and then explicitly describe how the amount of nonclassicality and quantum non-Gaussianity in PSDFS quantified by these measures changes with the state parameters.
Figure 4: (Color online) Variation of reconstructed Wigner function \(W_{\rm det}(Q,P)\) with respect to \(Q\) and \(P\) with \(\eta=0.5\), \(\theta=\pi/4\) and (a) \(k=1\), \(n=3\), \(\alpha=0.5\), (b) \(k=2\), \(n=4\), \(\alpha=1\), (c) \(k=3\), \(n=5\); (d)-(f) are the corresponding projection curve \(Pr(Q_{\theta},\theta)\) and quadrature line \(Q_{\theta}\). The experimentally measured field quadrature probability density \(Pr(Q_{\theta},\theta)\) is the integral projection of the ideal Wigner function \(W(Q,P)\) onto a vertical plane defined by the phase of the local oscillator.
### Linear entropy potential
Asboth [67] proposed a new measure of nonclassicality based on the fact that if a single-mode non-classical (classical) state is injected into one input port of a beam-splitter (BS) and a vacuum state enters the other port, the output two-mode state should be entangled (separable). Therefore, the nonclassicality of the input single-mode state (other than the vacuum state inserted in the BS), either classical or non-classical, can be indirectly measured in terms of an entanglement measure. A variety of quantitative entanglement measures are available which can be used to estimate the nonclassicality of the input single-mode state. When an entanglement measure following Asboth's approach is used to assess the single-mode nonclassicality, it is referred to as an entanglement potential, in correspondence with Asboth's terminology. In particular, if the concurrence (linear entropy) is used to quantify the nonclassicality of the single-mode input state by measuring the entanglement of the two-mode output state leaving the BS, the nonclassicality criterion is named the concurrence potential (linear entropy potential) [68]. The linear entropy for a bipartite state \(\rho_{AB}\) is defined in terms of the reduced subsystem as \[L_{E}=1-\text{Tr}(\rho_{B}^{2}) \tag{9}\] where \(\rho_{B}\) is the partial trace of \(\rho_{AB}\) over the subsystem \(A\). Therefore, \(L_{E}\) equals 1 (0) for a maximally entangled (separable) state, and takes a non-zero value for an entangled state in general [69; 70]. To compute the linear entropy potential, we require the post-beam-splitter state \(\rho_{AB}\) which originates when the PSDFS and a vacuum state are mixed at a beam-splitter.
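A brute-force evaluation of the linear entropy potential in a truncated Fock basis provides a useful cross-check of the closed form derived below. The sketch that follows is such an illustration; the truncation dimension and the 50:50 beam-splitter phase convention are assumptions of this example rather than choices made in the paper.

```python
import numpy as np
from scipy.linalg import expm

def annihilation(dim):
    """Truncated Fock-basis matrix of the annihilation operator."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def psdfs(alpha, n, k, dim):
    """Normalized a^k D(alpha)|n> in a truncated Fock basis."""
    a = annihilation(dim)
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)
    e_n = np.zeros(dim, dtype=complex); e_n[n] = 1.0
    psi = np.linalg.matrix_power(a, k) @ (D @ e_n)
    return psi / np.linalg.norm(psi)

def linear_entropy_potential(psi, dim):
    """Mix `psi` with vacuum on a 50:50 beam splitter; return L_E = 1 - Tr(rho_B^2)."""
    a = annihilation(dim)
    I = np.eye(dim)
    A, B = np.kron(a, I), np.kron(I, a)
    U = expm((np.pi / 4.0) * (A @ B.conj().T - A.conj().T @ B))   # 50:50 beam-splitter unitary
    vac = np.zeros(dim, dtype=complex); vac[0] = 1.0
    M = (U @ np.kron(psi, vac)).reshape(dim, dim)   # M[i, j] = amplitude of |i>_A |j>_B
    rho_B = M.T @ M.conj()                          # partial trace over mode A
    return float(np.real(1.0 - np.trace(rho_B @ rho_B)))

if __name__ == "__main__":
    dim = 18
    for k in (0, 1, 2):
        psi = psdfs(alpha=0.5, n=3, k=k, dim=dim)
        print("k =", k, " L_E =", round(linear_entropy_potential(psi, dim), 4))
```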
The two-mode state exiting from the beam-splitter when a Fock state is injected at one port and vacuum at the other can be expressed as follows \[\ket{n}\otimes\ket{0}=\ket{n,0}\xrightarrow{BS}\frac{1}{2^{n/2}}\sum_{j=0}^{n}\sqrt{\binom{n}{j}}\ket{j,n-j}\,. \tag{10}\] This relation can be used to find the post-beam-splitter state and the corresponding density matrix \(\rho_{AB}\) as follows \[\ket{\phi}_{AB}=\sum_{m=k}^{\infty}C_{m}(n,\alpha,k)\,\left(\frac{1}{\sqrt{2}}\right)^{m}\sum_{j=0}^{m-k}\sqrt{\binom{m-k}{j}}\ket{j,m-k-j} \tag{11}\] and then \[\rho_{AB}=\sum_{m,m^{\prime}=k}^{\infty}C_{m}(n,\alpha,k)C_{m^{\prime}}^{*}(n,\alpha,k)\left(\frac{1}{\sqrt{2}}\right)^{m+m^{\prime}}\sum_{j,l=0}^{m-k,m^{\prime}-k}\sqrt{\binom{m-k}{j}\binom{m^{\prime}-k}{l}}\,\ket{j,m-k-j}\bra{l,m^{\prime}-k-l}\,. \tag{12}\] The partial trace of \(\rho_{AB}\) with respect to the subsystem \(A\) yields the analytic expression for the linear entropy potential as \[L_{E}=1-N^{4}e^{-2\abs{\alpha}^{2}}n!^{2}\sum_{m,m_{1},m_{2}=0}^{\infty}\frac{\abs{\alpha}^{2(m+m_{2}-2n+2k)}}{2^{m+m_{2}}m!m_{2}!}\binom{m+m_{2}}{m_{1}}L_{n}^{m+k-n}(\abs{\alpha}^{2})L_{n}^{m_{1}+k-n}(\abs{\alpha}^{2})L_{n}^{m_{2}+k-n}(\abs{\alpha}^{2})L_{n}^{m-m_{1}+m_{2}+k-n}(\abs{\alpha}^{2})\,. \tag{13}\] The variation in the amount of nonclassicality with respect to the state parameters is shown in Fig. 5. The nonclassicality of PSDFS is detected through the linear entropy potential. Here, the nonclassicality decreases with an increase in the displacement parameter since, for \(\alpha=0\), the state reduces to the Fock state, the most nonclassical state. The value of the displacement parameter is significant in controlling the effects of photon subtraction. In a certain range of \(\alpha\), we can see that the amount of nonclassicality quantified by the linear entropy potential increases with the photon subtraction number for states having the same initial Fock parameter. Interestingly, it is easy to observe from Fig. 5(b) that, with an increase in the Fock parameter, the nonclassicality of PSDFS is found to decrease. That means photon subtraction is an effective tool for enhancing nonclassicality in a certain range of \(\alpha\), while higher values of the Fock parameter are less beneficial in that range of the displacement parameter.
Figure 5: (color online) Linear entropy \(L_{E}\) for PSDFS as a function of displacement parameter \(\alpha\) and for (a) different number of photon subtractions \(k\) and \(n=3\), (b) different values of Fock parameter \(n\) and \(k=1\).
### Skew information-based measure
In 2019, a skew information-based measure was proposed by Luo et al. [71] in the context of Wigner-Yanase skew information [72]. For a pure state \(\rho\), it can be defined as \[N(\rho)=\frac{1}{2}+\langle a^{\dagger}a\rangle-\langle a^{\dagger}\rangle\langle a\rangle\,. \tag{14}\] \(N(\rho)\) represents the quantum coherence of \(\rho\) with respect to the annihilation and creation operators, and is relatively easy to calculate. This measure is based on averages; it takes the numerical value \(\frac{1}{2}\) for a classical coherent state and \(n+\frac{1}{2}\) for the highly non-classical \(n\)-photon Fock state \(|n\rangle\). Thus, any state \(\rho\) with \(N(\rho)>\frac{1}{2}\) is nonclassical. However, this criterion is a one-sided condition as it fails for some Gaussian mixed states [71].
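Equation (14) is particularly easy to evaluate numerically. The following sketch (an illustration with an assumed Fock truncation) reproduces the benchmark values \(N=1/2\) for a coherent state and \(N=n+1/2\) for a Fock state, and then evaluates the measure for a PSDFS.

```python
import numpy as np
from scipy.linalg import expm

def annihilation(dim):
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def n_measure(psi):
    """Skew-information-based quantifier N = 1/2 + <a^dag a> - <a^dag><a> for a pure state."""
    a = annihilation(len(psi))
    mean_a = psi.conj() @ (a @ psi)
    mean_n = psi.conj() @ (a.conj().T @ a @ psi)
    return float(np.real(0.5 + mean_n - np.conj(mean_a) * mean_a))

def psdfs(alpha, n, k, dim=40):
    """Normalized a^k D(alpha)|n> in a truncated Fock basis."""
    a = annihilation(dim)
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)
    e_n = np.zeros(dim, dtype=complex); e_n[n] = 1.0
    psi = np.linalg.matrix_power(a, k) @ (D @ e_n)
    return psi / np.linalg.norm(psi)

if __name__ == "__main__":
    coherent = psdfs(alpha=1.0, n=0, k=0)   # coherent state: N should be 0.5
    fock3 = psdfs(alpha=0.0, n=3, k=0)      # Fock |3>:       N should be 3.5
    print(round(n_measure(coherent), 3), round(n_measure(fock3), 3))
    print("PSDFS (k=1, n=3, alpha=0.5):", round(n_measure(psdfs(0.5, 3, 1)), 3))
```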
In the case of PSDFS, \(N(\rho)\) can be calculated using the general expectation value of \(a^{\dagger p}a^{q}\), which reads \[\langle a^{\dagger p}a^{q}\rangle=\sum_{m,m^{\prime}=0}^{\infty}C_{m}(n,\alpha,k)C_{m^{\prime}}^{\ast}(n,\alpha,k)\,\langle m^{\prime}|\,a^{\dagger p}a^{q}\,|m\rangle=N^{2}e^{-|\alpha|^{2}}n!\,\alpha^{*p-q}|\alpha|^{2(k-n+q)}\sum_{m=0}^{\infty}\frac{|\alpha|^{2m}}{m!}\,L_{n}^{m+q+k-n}(|\alpha|^{2})L_{n}^{m+p+k-n}(|\alpha|^{2}) \tag{15}\] where \(L_{n}^{m}(\cdot)\) is the associated Laguerre polynomial. Figure 6(a) shows that the nonclassicality expressed by the skew information-based measure cannot be enhanced by increasing the displacement parameter. It can also be observed that, with an increase in photon subtraction, the nonclassicality remains almost the same up to a certain value of the displacement parameter and decreases thereafter. Moreover, Fig. 6(b) shows that the nonclassicality increases with an increase in the Fock parameter, which is the opposite of what was noted earlier in the case of the linear entropy potential (see Fig. 5(b)), indicating that the validity of any such "amount of nonclassicality" based ordering is restricted to the specific measure involved.
Figure 6: (color online) A nonclassicality quantifier \(N(\rho)\) with respect to the displacement parameter \(\alpha\) and for (a) different photon subtraction \(k\) and \(n=3\) (b) different Fock parameter \(n\) and \(k=1\).
### Wigner logarithmic negativity
We have seen in Fig. 3 that the Wigner function diagnoses the non-classical as well as the quantum non-Gaussian nature of the PSDFS. This inspired us to quantify the amount of nonclassicality and quantum non-Gaussianity using the negative volume of the Wigner function. A simple measure of nonclassicality, named the Wigner logarithmic negativity, is introduced as [53] \[W=\log_{2}\left(\int d^{2}\gamma|W(\gamma,\gamma^{\ast})|\right) \tag{16}\] where the integration is performed over the complete region \(\mathbb{R}^{2n}\) of phase space. More interestingly, in the resource theory of quantum information, it is noted that \(W\) also estimates the amount of quantum non-Gaussianity present in the PSDFS, as the negative values of the Wigner function also witness the quantum non-Gaussianity of the field state. The integration in (16) is executed numerically using the Wigner function (5) to study the effect of different parameters on the Wigner logarithmic negativity of PSDFS. When the photon subtraction number \(k\) is increased from \(1\) to \(3\), the Wigner logarithmic negativity decreases. Specifically, any increase in the displacement parameter only raises the nonclassicality and thus the quantum non-Gaussianity. Among the photon subtraction and the Fock parameter, the second one is more effective in enhancing the nonclassicality and quantum non-Gaussianity of PSDFS when a single photon is subtracted.
### Relative entropy measure of quantum non-Gaussianity
Different measures to inspect the quantum non-Gaussianity of a quantum state have been reported in the literature [73; 74; 75; 53; 76]. In the previous section, we have observed that, in the case of PSDFS, the variation in the quantum non-Gaussianity with respect to different state parameters is similar to that of the nonclassicality as revealed by the Wigner logarithmic negativity. This is justified because states with negative Wigner function are a subset of quantum non-Gaussian states [53].
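For reference, the Wigner logarithmic negativity of Eq. (16) above can be approximated by simple grid quadrature once a Wigner function is tabulated. The sketch below uses the closed-form Fock-state Wigner function as a convenient test input; for the PSDFS itself one would tabulate Eq. (5) on the same grid. The grid extent and resolution are assumptions of this example.

```python
import numpy as np
from scipy.special import eval_laguerre

def wigner_fock(n, X, Y):
    """Closed-form Wigner function of |n> on a phase-space grid (normalized to integrate to 1)."""
    r2 = X**2 + Y**2
    return ((-1) ** n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2.0 * r2)

def wigner_log_negativity(W, dx):
    """W_LN = log2( integral of |W| over phase space ), evaluated by simple grid quadrature."""
    return np.log2(np.sum(np.abs(W)) * dx * dx)

if __name__ == "__main__":
    x = np.linspace(-6.0, 6.0, 401)
    X, Y = np.meshgrid(x, x, indexing="ij")
    dx = x[1] - x[0]
    for n in (0, 1, 3):   # vacuum gives ~0; Fock states give positive values
        print("n =", n, " WLN =", round(wigner_log_negativity(wigner_fock(n, X, Y), dx), 4))
```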
Hence we further investigate the quantum non-Gaussianity of the considered state using the relative entropy of quantum non-Gaussianity. This important measure can be defined with respect to the set of all Gaussian states as follows [77] \[\delta[\rho]=S(\rho||\tau_{G}) \tag{17}\] where the relative entropy is \(S(\rho||\Lambda)=\mathrm{Tr}[\rho(\log(\rho)-\log(\Lambda))]\), and the reference mixed Gaussian state \(\tau_{G}\) is selected in such a way that its first- and second-order moments are the same as those of \(\rho\). Further, for a pure state, \(S(\rho=|\psi\rangle\left\langle\psi\right|)\) is zero, and hence \(\delta[|\psi\rangle]=S(\tau_{G})\), which is the von Neumann entropy of the reference state. A Gaussian state is fully characterized by its first-order moments and its covariance matrix, which can be written as [53] \[\sigma=\begin{bmatrix}\sigma_{qq}&\sigma_{qp}\\ \sigma_{qp}&\sigma_{pp}\end{bmatrix} \tag{18}\] where \(\sigma_{pq}=\langle pq+qp\rangle-2\langle p\rangle\langle q\rangle\) for position \(q=\frac{a+a^{\dagger}}{\sqrt{2}}\) and momentum \(p=\frac{a-a^{\dagger}}{i\sqrt{2}}\). Using (15), all the elements of the covariance matrix \(\sigma\) for PSDFS can be obtained, and thus the relative entropy measure of quantum non-Gaussianity reduces to [66] \[\delta[|\psi\rangle]=S(\tau_{G})=h\big(\mathrm{det}\sqrt{\sigma}\big) \tag{19}\] with \(h(x)=\frac{x+1}{2}\log_{2}\big(\frac{x+1}{2}\big)-\frac{x-1}{2}\log_{2}\big(\frac{x-1}{2}\big)\). It can be seen from Fig. 8(a) that the plots are increasing with \(\alpha\) for \(k=1\), \(2\), and become zero for \(k=3\). Also, the plots are gradually approaching the horizontal axis. Consequently, we can conclude that the subtraction of photons reduces the relative entropy to zero or, equivalently, the application of the quantum non-Gaussianity inducing annihilation operator thrice increases the quantum non-Gaussianity and nonclassicality of PSDFS for relatively small values of \(\alpha\). Also, Fig. 8(b) shows that the relative entropy increases with increasing Fock state parameter \(n\).
Figure 7: (color online) Changes of Wigner logarithmic negativity \(W\) with respect to the displacement parameter \(\alpha\) for (a) different values of \(k\) and \(n=3\) (b) different values of \(n\) and \(k=1\).
Figure 8: (color online) Quantum non-Gaussianity as a function of \(\alpha\) for (a) variation in \(k\) and \(n=3\) (b) different Fock parameter \(n\) values and \(k=1\).
## IV Wigner function of PSDFS evolving under photon loss channel
The interaction between a quantum system and its environment instigates the quantum-to-classical transition. So, the observed non-classical and quantum non-Gaussian features of the PSDFS are expected to deteriorate due to its evolution under the lossy channel. In particular, the temporal evolution of a quantum state \(\rho\) over the lossy channel can be studied using the LGKS master equation [78] given by \[\frac{\partial\rho}{\partial t}=\kappa(2a\rho a^{\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a) \tag{20}\] where \(\kappa\) is the rate of decay.
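An equivalent, purely Fock-basis route to the dynamics generated by Eq. (20) is to apply the standard amplitude-damping Kraus operators with transmissivity \(e^{-2\kappa t}\); this is an illustrative alternative added here, whereas the paper itself works with the phase-space propagator of Eq. (21) below. The sketch tracks how the most negative value of the Wigner function shrinks as \(\kappa t\) grows; grid and truncation parameters are assumptions of this example.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import comb

def annihilation(dim):
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def photon_loss(rho, kappa_t):
    """Photon-loss channel of Eq. (20) for rescaled time kappa*t, written with the
    standard amplitude-damping Kraus operators (transmissivity exp(-2 kappa t))."""
    dim = rho.shape[0]
    eta = np.exp(-2.0 * kappa_t)
    out = np.zeros_like(rho)
    for l in range(dim):
        E = np.zeros((dim, dim))
        for m in range(l, dim):
            E[m - l, m] = np.sqrt(comb(m, l) * eta ** (m - l) * (1.0 - eta) ** l)
        out = out + E @ rho @ E.conj().T
    return out

def min_wigner(rho, extent=3.0, npts=31):
    """Minimum of W(gamma) = (2/pi) Tr[D(gamma)^dag rho D(gamma) Pi] on a square grid."""
    dim = rho.shape[0]
    a = annihilation(dim)
    parity = (-1.0) ** np.arange(dim)
    wmin = np.inf
    for x in np.linspace(-extent, extent, npts):
        for y in np.linspace(-extent, extent, npts):
            g = x + 1j * y
            D = expm(g * a.conj().T - np.conj(g) * a)
            rho_d = D.conj().T @ rho @ D
            wmin = min(wmin, (2.0 / np.pi) * float(np.real(np.sum(parity * np.diag(rho_d)))))
    return wmin

if __name__ == "__main__":
    dim = 30
    a = annihilation(dim)
    e3 = np.zeros(dim, dtype=complex); e3[3] = 1.0
    psi = a @ (expm(0.5 * a.conj().T - 0.5 * a) @ e3)      # PSDFS with k = 1, n = 3, alpha = 0.5
    rho = np.outer(psi, psi.conj()) / np.linalg.norm(psi) ** 2
    for kt in (0.0, 0.1, 0.3, 0.5):
        print("kappa*t =", kt, "  min W =", round(min_wigner(photon_loss(rho, kt)), 4))
```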
Analogously, the time evolution of the Wigner function at time \(t\) in terms of the initial Wigner function of the state which is evolving under the lossy channel [79] is defined as \[W(\zeta,t)=\frac{2}{T}\int\frac{\partial^{2}\gamma}{\pi}\exp\biggl{[}-\frac{2} {T}|\zeta-\gamma e^{-\kappa t}|^{2}\biggr{]}W(\gamma,\gamma^{*},0) \tag{21}\] with \(T=1-\exp(-2\kappa t)\) and \(W(\gamma,\gamma^{*},0)\) is the Wigner function at initial time \(t=0\) which is calculated in (5). The time evolution of Wigner function (21) models dissipation due to interaction with a vacuum reservoir as well as inefficient detectors with efficiency \(\eta=1-T\). The detailed calculation of \(W(\zeta,t)\) is given in Appendix C. A compact expression of the Wigner function evolving under the photon loss channel can be obtained from (21) by using the description of the Wigner function in an ideal situation. One can easily notice that both the quantum non-Gaussianity and nonclassicality of PSDFS cannot increase due to its temporal evolution over a photon loss channel. This is clearly illustrated by Fig. 9(a)-(c) that the negative region is shrinking with increasing values of rescaled time \(\kappa t\) for a single photon subtracted DFS with \(n=3\) and displacement parameter \(\alpha=0.5\). That means increasing \(\kappa t\) diminishes the quantum features of the PSDFS. Moreover, it is clear from Fig. 9(d)-(f) that the displacement parameter \(\alpha\) is not a dominating factor for the nonclassicality and quantum non-Gaussianity of PSDFS under the effect of photon loss channel. Interestingly, the Wigner function for PSDFS evolving under photon loss channel behaves unlikely as the variation of the Wigner function in the absence of noise with the Fock parameter. Both the nonclassicality and quantum non-Gaussianity decrease as \(n\) increases. ## V Conclusion The description as well as the quantification of nonclassicality and quantum non-Gaussianity of quantum states are problems of interest in their merit. These problems are addressed here by considering PSDFS as a test case. Specifically, PSDFS is chosen as it can be reduced to various quantum states having important applications in different limits. In this work, we have used linear entropy potential, skew information-based criteria, and Wigner logarithmic negativity as measures of nonclassicality to compare the amount of nonclassicality present in a set of different quantum non-Gaussian pure states. The nonclassicality and quantum non-Gaussianity present in the PSDFS are quantified, which shows that both photon subtraction and Fock parameters enhance these quantum features whereas the latter is a more effective tool at small displacement parameters. In contrast, the displacement parameter is observed to reduce the quantum features. In view of Hudson's theorem and resource theory of quantum non-Gaussianity based on Wigner negativity, the Gaussian operations are free, and thus, displacement operation and photon loss channels are not expected to enhance quantum non-Gaussianity and/or Wigner negativity. Although, in the case of pure PSDFS, the Wigner negativity captures all the quantum non-Gaussianity but cannot predict conclusively the feasibility of quantum non-Gaussianity in PSDFS measured through inefficient detectors (or evolved through lossy channels) with the positive Wigner function. Similarly, skew information-based and linear entropy potential measures show that the Wigner function succeeds in detecting all the nonclassicality of the pure PSDFS. 
In this context, it can be noted that the non-Gaussianity of PSDFS for non-zero values of \(n,\,\alpha\) and \(k\) is detected by the non-Gaussianity identifying Wigner function and it decreases with the increasing values of \(n,\,\alpha\) and \(k\). The amount of non-Gaussianity is quantified by relative entropy \(\delta\) which increases with \(\alpha\) but decreases with \(n\) and \(k\). However, nonclassicality recognized by skew information-based measure is decreasing with \(k\) and increasing with \(n\) and \(\alpha\). The amount of nonclassicality present in PSDFS can be estimated by using different measures like linear entropy and Wigner logarithmic negativity but they are not consistent with respect to different parameters. The amount of nonclassicality assessed by the linear entropy is increasing with \(k\) and decreasing with \(n\) and \(\alpha\) while Wigner logarithmic negativity behaves just opposite that means it decreases with \(k\) and increases with \(n\) and \(\alpha\). Following the recent progress in quantum state engineering and quantum information processing, we conclude this article with a belief that PSDFSs will soon be constructed in laboratories and will be used for performing quantum optical, metrological, and computing tasks. ## Appendix A Detailed Simplification of state PSDFS We have, \[\left|\psi\right\rangle =Na^{k}D(\alpha)\left|n\right\rangle=Ne^{-\frac{\left|\alpha\right|^ {2}}{2}}a^{k}e^{a^{\dagger}\alpha}e^{-a\alpha^{*}}\left|n\right\rangle\] \[=Ne^{-\frac{\left|\alpha\right|^{2}}{2}}a^{k}e^{a^{\dagger} \alpha}\sum_{p=0}^{n}\frac{(-\alpha^{*})^{p}}{p!}\sqrt{\frac{n!}{(n-p)!}} \left|n-p\right\rangle\] \[=Ne^{-\frac{\left|\alpha\right|^{2}}{2}}a^{k}\sum_{q=0}^{\infty} \frac{\alpha^{q}}{q!}a^{\dagger q}\sum_{p=0}^{n}\frac{(-\alpha^{*})^{p}}{p!} \sqrt{\frac{n!}{(n-p)!}}\left|n-p\right\rangle\] \[=Ne^{-\frac{\left|\alpha\right|^{2}}{2}}\sum_{q=0}^{\infty}\frac {\alpha^{q}}{q!}\sum_{p=0}^{n}\frac{(-\alpha^{*})^{p}}{p!}\sqrt{\frac{n!}{(n- p)!}}a^{k}a^{\dagger q}\left|n-p\right\rangle\] Figure 9: (Color online) The dynamics of Wigner function evolving under photon loss channel for \(k=1\), \(n=3\), \(\alpha=0.5\) and with different values of the rescaled time (a) \(\kappa t=0.1\), (b) \(\kappa t=0.3\), (c) \(\kappa t=0.5\); \(k=1\), \(n=3\), \(\kappa t=0.1\) and with different values of displacement parameter (d) \(\alpha=1\), (e) \(\alpha=1.5\), (f) \(\alpha=2\); \(k=1\), \(\kappa t=0.1\), \(\alpha=0.5\) and with different values of Fock state parameter (g) \(n=4\), (h) \(n=5\), (i) \(n=6\); \(n=3\), \(\kappa t=0.1\), \(\alpha=0.5\) and with different values of photon subtraction parameter (j) \(k=2\), (k) \(k=4\), (l) \(k=6\), respectively. \[=Ne^{-\frac{|\alpha|^{2}}{2}}\sum_{q=0}^{\infty}\frac{\alpha^{q}}{q!} \sum_{p=0}^{n}\frac{(-\alpha^{*})^{p}}{p!}\sqrt{\frac{n!}{(n-p)!}}\sqrt{\frac{(n- p+q)!}{(n-p)!}}a^{k}\left|n-p+q\right>\] \[=Ne^{-\frac{|\alpha|^{2}}{2}}\sum_{q=0}^{\infty}\frac{\alpha^{q}}{q! }\sum_{p=0}^{n}\frac{(-\alpha^{*})^{p}}{p!}\frac{\sqrt{n!(n-p+q)!}}{(n-p)!} \sqrt{\frac{(n-p+q-k)!}{(n-p+q-k)!}}\left|n-p+q-k\right>\] \[=Ne^{-\frac{|\alpha|^{2}}{2}}\sum_{q=0}^{\infty}\frac{\alpha^{q}}{q! 
}\sum_{p=0}^{\min(n,n-k+q)}\frac{(-\alpha^{*})^{p}}{p!}\frac{(n-p+q)!}{(n-p)!} \sqrt{\frac{n!}{(n-p+q-k)!}}\left|n-p+q-k\right>\] \[=Ne^{-\frac{|\alpha|^{2}}{2}}\sum_{m=k}^{\infty}\sum_{p=0}^{n} \frac{\alpha^{m-n+p}}{(m-n+p)!}\frac{(-\alpha^{*})^{p}}{p!}\frac{m!}{(n-p)!} \sqrt{\frac{n!}{(m-k)!}}\left|m-k\right>\] \[=Ne^{-\frac{|\alpha|^{2}}{2}}\sum_{m=k}^{\infty}\alpha^{m-n}\sqrt {\frac{n!}{(m-k)!}}L_{n}^{m-n}(\left|\alpha\right|^{2})\left|m-k\right>\] \[=\sum_{m=k}^{\infty}C_{m}(n,\alpha,k)\left|m-k\right>\] where, \(C_{m}(n,\alpha,k)=Ne^{-\frac{|\alpha|^{2}}{2}}\alpha^{m-n}\sqrt{\frac{n!}{(m- k)!}}L_{n}^{m-n}(\left|\alpha\right|^{2})\). ## Appendix B Wigner function of PSDFS We have \(\left|\psi\right>=Na^{k}D(\alpha)\left|n\right>\). Using this \[C(\lambda)\] \[=\operatorname{Tr}(\rho D(\lambda))\] \[=N^{2}\left<n\right|D^{\dagger}(\alpha)a^{\dagger k}D(\lambda)a^{ k}D(\alpha)\left|n\right>\] \[=N^{2}\left<n\right|(a^{\dagger}+\alpha^{*})^{k}D^{\dagger}( \alpha)D(\lambda)D(\alpha)(a+\alpha)^{k}\left|n\right>\] \[=N^{2}\exp(\lambda\alpha^{*}-\lambda^{*}\alpha)\sum_{p,q=0}^{k} \binom{k}{p}\binom{k}{q}\alpha^{*k-p}\alpha^{k-q}\left<n\right|a^{\dagger p}D( \lambda)a^{q}\left|n\right>\] \[=N^{2}\exp(\lambda\alpha^{*}-\lambda^{*}\alpha-\frac{|\lambda|^ {2}}{2})\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p}\alpha^{k-q}n! \sum_{r=0}^{n-p}\frac{\lambda^{r}(-\lambda^{*})^{p-q+r}}{r!(p-q+r)!(n-p-r)!}\] Now the Wigner function is given by \[W(\gamma,\gamma^{*})\] \[=\frac{1}{\pi^{2}}\int d^{2}\lambda\,C(\lambda)e^{\gamma\lambda^{ *}-\gamma^{*}\lambda}\] \[=N^{2}\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p} \alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{1}{r!(p-q+r)!(n-p-r)!}\] \[\times\frac{1}{\pi^{2}}\int d^{2}\lambda\,\lambda^{r}(-\lambda^{* })^{p-q+r}\exp(\lambda\alpha^{*}-\lambda^{*}\alpha-\frac{|\lambda|^{2}}{2}+ \gamma\lambda^{*}-\gamma^{*}\lambda)\] \[=N^{2}\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p} \alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{1}{r!(p-q+r)!(n-p-r)!}\] \[\times\frac{1}{\pi^{2}}\int d^{2}\lambda\,\lambda^{r}(-\lambda^{* })^{p-q+r}\exp(\lambda\eta^{*}-\lambda^{*}\eta-\frac{|\lambda|^{2}}{2})\quad \text{ substituting }\eta=\alpha-\gamma\] \[=N^{2}\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p} \alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{1}{r!(p-q+r)!(n-p-r)!}\] \[\times\frac{\partial^{r}}{\partial\eta^{*r}}\frac{\partial^{p-q+r}}{ \partial p^{p-q+r}}\frac{1}{\pi^{2}}\int d^{2}\lambda\,\exp\biggl{(}\lambda\eta^ {*}-\lambda^{*}\eta-\frac{|\lambda|^{2}}{2}\biggr{)}\] \[=N^{2}\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p} \alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{1}{r!(p-q+r)!(n-p-r)!}\frac{\partial^{r}}{ \partial\eta^{*r}}\frac{\partial^{p-q+r}}{\partial\eta^{p-q+r}}\frac{2}{\pi} \exp\bigl{(}-2|\eta|^{2}\bigr{)}\] \[=\frac{2N^{2}}{\pi}\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha ^{*k-p}\alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{(-2)^{p-q+r}\eta^{*p-q}}{r!(p-q+r)! 
(n-p-r)!}\] \[\times\sum_{s=0}^{r}\binom{r}{s}\frac{(p-q+r)!}{(p-q+r-s)!}|\eta |^{2(r-s)}(-2)^{r-s}\exp\bigl{(}-2|\eta|^{2}\bigr{)}\] \[=\frac{2N^{2}\exp\bigl{(}-2|\eta|^{2}\bigr{)}}{\pi}\sum_{p,q=0}^{ k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p}\alpha^{k-q}n!\] \[\times\sum_{r=0}^{n-p}\frac{(-2)^{p-q+r}\eta^{*p-q}}{r!(p-q)!(n-p -r)!}\,\,_{1}F_{1}(-r;p-q+1;2|\eta|^{2})\] ## Appendix C Wigner function evolving under photon loss channel \[W(\zeta,t)\] \[=\frac{2}{T}\int\frac{d^{2}\gamma}{\pi}\exp\biggl{[}-\frac{2}{T}| \zeta-\gamma e^{-\kappa t}|^{2}\biggr{]}W(\gamma,\gamma^{*},0)\] \[=\frac{2}{T}\frac{2N^{2}}{\pi}\sum_{p,q=0}^{k}\binom{k}{p}\binom{ k}{q}\alpha^{*k-p}\alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{(-2)^{p-q+r}}{r!(n-p-r)!} \sum_{s=0}^{r}\binom{r}{s}\frac{(-2)^{r-s}}{(p-q+r-s)!}\] \[\times\int\frac{d^{2}\gamma}{\pi}\exp\biggl{[}-\frac{2}{T}\left[ |\beta|^{2}e^{2\kappa t}+\beta\eta^{*}+\beta^{*}\eta+|\eta|^{2}(T+e^{-2\kappa t })\right]\biggr{]}\eta^{*p-q+r-s}\eta^{r-s}\text{ assuming }\zeta-\alpha e^{-\kappa t}=\beta e^{\kappa t}\] \[=\frac{2}{T}\frac{2N^{2}}{\pi}\sum_{p,q=0}^{k}\binom{k}{p}\binom{ k}{q}\alpha^{*k-p}\alpha^{k-q}n!\sum_{r=0}^{n-p}\frac{(-2)^{p-q+r}}{r!(n-p-r)!} \sum_{s=0}^{r}\binom{r}{s}\frac{(-2)^{r-s}}{(p-q+r-s)!}\exp\biggl{(}-\frac{2} {T}|\beta|^{2}e^{2\kappa t}\biggr{)}\] \[\times\left(-\frac{T}{2}\right)^{p-q+2(r-s)}\frac{\partial^{p-q+r -s}}{\partial\beta^{p-q+r-s}}\frac{\partial^{r-s}}{\partial\beta^{*r-s}} \int\frac{d^{2}\eta}{\pi}\exp\biggl{[}-\frac{2}{T}(\beta\eta^{*}+\beta^{*}\eta +|\eta|^{2})\biggr{]}\quad\text{using }T=1-e^{-2\kappa t}\] \[=\frac{2N^{2}}{\pi}n!\exp\biggl{[}\frac{2}{T}|\beta|^{2}(1-e^{2 \kappa t})\biggr{]}\sum_{p,q=0}^{k}\binom{k}{p}\binom{k}{q}\alpha^{*k-p} \alpha^{k-q}\,(2\beta^{*})^{p-q}\sum_{r=0}^{n-p}\frac{(-2)^{r}}{(n-p-r)!}\] \[\times\sum_{s=0}^{r}\frac{(-2)^{r-s}}{s!}\sum_{u=0}^{p-q+r-s} \frac{|\beta|^{2(r-s-u)}}{u!(p-q+r-s-u)!(r-s-u)!}\left(\frac{2}{T}\right)^{-u}\] ## Acknowledgement Deepak's work is supported by the Council of Scientific and Industrial Research (CSIR), Govt. of India (Award no. 09/1256(0006)/2019-EMR-1). A. C. acknowledges SERB, DST for the support provided through the project number SUR/2022/000899.
2301.00651
Influence of Different Subgrid Scale Models in LES of Supersonic Jet Flows
Current design constraints have encouraged the studies of aeroacoustics fields around compressible jet flows. The present work addresses the numerical study of subgrid scale modeling for unsteady turbulent jet flows as a preliminary step for future aeroacoustic analyses of main engine rocket plumes. An in-house large eddy simulation (LES) tool is developed in order to reproduce high fidelity results of compressible jet flows. In the present study, perfectly expanded jets are considered because the authors want to emphasize the effects of the jet mixing phenomena. The large eddy simulation formulation is written using the finite difference approach, with an explicit time integration and using a second order spatial discretization. The energy equation is carefully discretized in order to model the energy equation of the filtered Navier-Stokes formulation. The classical Smagorinsky model, the dynamic Smagorinsky model and the Vreman models are the chosen subgrid scale closures for the present work. Numerical simulations of perfectly expanded jets are performed and compared with the literature in order to validate and compare the performance of each subgrid closure in the solver.
Carlos Junqueira-Junior, Sami Yamouni, Joao Luiz F. Azevedo, William Wolf
2023-01-02T13:13:30Z
http://arxiv.org/abs/2301.00651v1
# Influence of Different Subgrid Scale Models in LES of Supersonic Jet Flows ###### Abstract Current design constraints have encouraged the studies of aeroacoustics fields around compressible jet flows. The present work addresses the numerical study of subgrid scale modeling for unsteady turbulent jet flows as a preliminary step for future aeroacoustic analyses of main engine rocket plumes. An in-house large eddy simulation (LES) tool is developed in order to reproduce high fidelity results of compressible jet flows. In the present study, perfectly expanded jets are considered because the authors want to emphasize the effects of the jet mixing phenomena. The large eddy simulation formulation is written using the finite difference approach, with an explicit time integration and using a second order spatial discretization. The energy equation is carefully discretized in order to model the energy equation of the filtered Navier-Stokes formulation. The classical Smagorinsky model, the dynamic Smagorinsky model and the Vreman models are the chosen subgrid scale closures for the present work. Numerical simulations of perfectly expanded jets are performed and compared with the literature in order to validate and compare the performance of each subgrid closure in the solver. ## 1 Introduction One of the main design issues related to launch vehicles lies on noise emission originated from the complex interaction between the high-temperature/high-velocity exhaustion gases and the atmospheric air. These emissions yield very high noise levels, which must be minimized due to several design constraints. For instance, the resulting pressure fluctuations can damage the solid structure of different parts of the launcher by vibrational acoustic stress. Therefore, it is a design constraint to consider the loads resulting from acoustic sources in the structural dimensioning of large launch vehicles during the take off and also during the transonic flight. Moreover, one cannot neglect the energy dissipation effect caused by the acoustic waves generated even if the vehicles is far from the ground. Theoretically, all chemical energy should be converted into kinetic energy. However, in reallity, the noise generation consumes part of the chemical energy. The acoustic design constraints have encouraged the studies of aeroacoustic fields around compressible jet flows. Instituto de Aeronautica e Espaco (IAE) in Brazil is interested in this flow configuration for rocket design applications. Unsteady property fields of the flow are necessary for the aeroacoustic studies. Therefore, the present work addresses the numerical study of unsteady turbulent compressible jet flows for such aeroacoustic applications. More precisely, on the effects of subgrid scale modeling using second order centered schemes for compressible LES. An in-house computational tool is developed regarding the study of unsteady turbulent compressible flow. JAZxY is a novel large eddy simulation tool which is developed in order to reproduce high fidelity results of compressible jet flows which are used for aeroacoustic studies using the Ffowcs Williams and Hawkings approach [1]. The LES formulation is written using the finite difference approach. Inviscid numerical fluxes are calculated using a second order accurate centered scheme with the explicit addition of artificial dissipation. A five steps second order accurate Runge-Kutta is the chosen time marching method. 
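The solver itself is not reproduced here, but the ingredients just listed, a second-order centered scheme, explicitly added artificial dissipation, and a five-stage Runge-Kutta time march, can be illustrated on a one-dimensional linear advection problem. The sketch below is only an analogue under assumed parameters: the Jameson-type stage coefficients and the dissipation constant are common illustrative choices, not values taken from the solver described in this work.

```python
import numpy as np

def residual(u, c, dx, eps4=1.0 / 64.0):
    """du/dt for linear advection: 2nd-order centered convection plus a scalar
    fourth-difference artificial dissipation term, on a periodic grid."""
    conv = -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    d4 = (np.roll(u, -2) - 4.0 * np.roll(u, -1) + 6.0 * u
          - 4.0 * np.roll(u, 1) + np.roll(u, 2))
    return conv - eps4 * abs(c) / dx * d4

def rk5_step(u, dt, c, dx):
    """Five-stage, low-storage Runge-Kutta step (Jameson-type stage coefficients assumed here)."""
    u0, uk = u, u
    for alpha in (0.25, 1.0 / 6.0, 0.375, 0.5, 1.0):
        uk = u0 + alpha * dt * residual(uk, c, dx)
    return uk

if __name__ == "__main__":
    nx, c = 200, 1.0
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.exp(-200.0 * (x - 0.3) ** 2)      # smooth initial pulse
    dt = 0.5 * dx / c                         # CFL-limited time step
    for _ in range(400):                      # advect the pulse over one period
        u = rk5_step(u, dt, c, dx)
    print("peak after one period:", round(float(u.max()), 3))
```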
A formulation based on the System I set of equations [2] is used here in order to model the filtered terms of the energy equation. The classical Smagorinsky model [3, 4, 5], the dynamic Smagorinsky model [6, 7] and the Vreman model [8] are the subgrid scale (SGS) turbulence closures used in the present work. Numerical simulation of perfectly expanded jets are performed and compared with numerical [9] and experimental [10] data. ## 2 Large Eddy Simulation Filtering The large eddy simulation is based on the principle of scale separation, which is addressed as a filtering procedure in a mathematical formalism. A modified version of the the System I filtering approach [2] is used in present work which is given by \[\begin{array}{c}\frac{\partial\overline{\rho}}{\partial t}+\frac{\partial}{ \partial x_{j}}\left(\overline{\rho}\widetilde{u}_{j}\right)=0\,,\\ \frac{\partial}{\partial t}\left(\overline{\rho}\widetilde{u}_{i}\right)+ \frac{\partial}{\partial x_{j}}\left(\overline{\rho}\widetilde{u}_{i} \widetilde{u}_{j}\right)+\frac{\partial\overline{\rho}}{\partial x_{i}}- \frac{\partial\tau_{ij}}{\partial x_{j}}+\frac{1}{3}\frac{\partial}{\partial x _{j}}\left(\delta_{ij}\sigma_{ii}\right)=0\,,\\ \frac{\partial\overline{e}}{\partial t}+\frac{\partial}{\partial x_{j}}\left[ \left(\overline{e}+\overline{p}\right)\widetilde{u}_{j}\right]-\frac{ \partial}{\partial x_{j}}\left(\tau_{ij}\widetilde{u}_{i}\right)+\frac{1}{3} \frac{\partial}{\partial x_{j}}\left[\left(\delta_{ij}\sigma_{ii}\right) \widetilde{u}_{i}\right]+\frac{\partial q_{j}}{\partial x_{j}}=0\,,\end{array} \tag{1}\] in which \(t\) and \(x_{i}\) are independent variables representing time and spatial coordinates of a Cartesian coordinate system \(\mathbf{x}\), respectively. The components of the velocity vector \(\mathbf{u}\) are written as \(u_{i}\), and \(i=1,2,3\). Density, pressure and total energy per mass unit are denoted by \(\rho\), \(p\) and \(e\), respectively. The \(\left(\overline{\cdot}\right)\) and \(\left(\overline{\cdot}\right)\) operators are used in order to represent filtered and Favre averaged properties, respectively. The System I formulation neglects the double correlation term and the total energy per mass unit is written as \[\overline{e}=\frac{\overline{p}}{\gamma-1}+\frac{1}{2}\rho\widetilde{u}_{i} \widetilde{u}_{i}\,. \tag{2}\] The heat flux, \(q_{j}\), is given by \[q_{j}=\left(\kappa+\kappa_{ggs}\right)\frac{\partial\widetilde{T}}{\partial x _{j}}\,. \tag{3}\] where \(T\) is the static temperature and \(\kappa\) is the thermal conductivity, which can by expressed by \[\kappa=\frac{\mu C_{p}}{Pr}\,, \tag{4}\] The thermal conductivity is a function of the specific heat at constant pressure, \(Cp\), of the Prandtl number, \(Pr\), which is equal to \(0.72\) for air, and of the dynamic viscosity, \(\mu\). The SGS thermal conductivity, \(\kappa_{ggs}\), is written as \[\kappa_{ggs}=\frac{\mu_{ggs}C_{p}}{Pr_{ggs}}\,, \tag{5}\] where \(Pr_{sgs}\) is the SGS Prandtl number, which is equal to 0.9 for static SGS models and \(\mu_{sgs}\) is the eddy viscosity which is calculated by the SGS closure. The dynamic viscosity, \(\mu\) can be calculated using the Sutherland Law, \[\mu\left(\widetilde{T}\right)=\mu_{\infty}\left(\frac{\widetilde{T}}{\widetilde{ T}_{\infty}}\right)^{\frac{3}{2}}\frac{\widetilde{T}_{0}+S_{1}}{\widetilde{T}+S_{1 }}\quad\text{with}\,S_{1}=110.4K\,. 
\tag{6}\] Density, static pressure and static temperature are correlated by the equation of state given by \[\overline{p}=\rho R\widetilde{T}\,, \tag{7}\] where \(R\) is the gas constant, written as \[R=C_{p}-C_{v}\,, \tag{8}\] and \(C_{v}\) is the specific heat at constant volume. The shear-stress tensor, \(\tau_{ij}\), is written according to the Stokes hypothesis and includes the eddy viscosity, \(\mu_{sgs}\), \[\tau_{ij}=2\left(\mu+\mu_{sgs}\right)\left(\tilde{S}_{ij}-\frac{1}{3}\delta_{ij}\tilde{S}_{kk}\right) \tag{9}\] in which \(\tilde{S}_{ij}\), the components of the rate-of-strain tensor, are given by \[\tilde{S}_{ij}=\frac{1}{2}\left(\frac{\partial\tilde{u}_{i}}{\partial x_{j}}+\frac{\partial\tilde{u}_{j}}{\partial x_{i}}\right)\,. \tag{10}\] The SGS stress tensor components are written using the eddy viscosity [11], \[\sigma_{ij}=-2\mu_{sgs}\left(\tilde{S}_{ij}-\frac{1}{3}\tilde{S}_{kk}\right)+\frac{1}{3}\delta_{ij}\sigma_{kk}\,. \tag{11}\] The eddy viscosity, \(\mu_{sgs}\), and the components of the isotropic part of the SGS stress tensor, \(\sigma_{kk}\), are modeled by the SGS closure.
## 3 Subgrid Scale Modeling
The present section is devoted to the description of the turbulence modeling and the theoretical formulation of the subgrid scale closures included in the present work. The closure models presented here are founded on the homogeneous turbulence theory, which is usually developed in the spectral space as an attempt to quantify the interaction between the different scales of turbulence.
### Smagorinsky Model
The Smagorinsky model [3] is one of the simplest algebraic models for the deviatoric part of the SGS tensor used in large-eddy simulations. The isotropic part of the SGS tensor is neglected for the Smagorinsky model in the current work. This SGS closure is a classical model based on the large-scale properties and is written as \[\mu_{sgs}=\overline{\rho}\left(C_{s}\Delta\right)^{2}|\widetilde{S}|\,, \tag{12}\] where \[|\tilde{S}|=\left(2\tilde{S}_{ij}\tilde{S}_{ij}\right)^{\frac{1}{2}}\,, \tag{13}\] \(\Delta\) is the filter size and \(C_{s}\) is the Smagorinsky constant. Several attempts can be found in the literature regarding the evaluation of the Smagorinsky constant. The value of this constant is adjusted to improve the results for different flow configurations. In practical terms, the Smagorinsky subgrid model has a flow dependency of the constant, which takes values ranging from 0.1 to 0.2 depending on the flow. The suggestion of Lilly [5], \(C_{s}=0.148\), is used in the current work. This model is generally over-dissipative in regions of large mean strain. This is particularly true in the transitional region between laminar and turbulent flows. Moreover, the limiting behavior near the wall is not correct, and the model predictions correlate poorly with the exact subgrid scale tensor [12]. However, it is a very simple model and, with the use of damping functions and good calibration, can be successfully applied in large-eddy simulations.
### Vreman Model
Vreman [8] proposed a turbulence model that can correctly predict inhomogeneous turbulent flows. For such flows, the eddy viscosity should become small in laminar and transitional regions.
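Before turning to the Vreman closure, the Smagorinsky eddy viscosity of Eq. (12) above can be made concrete with a few lines of code. The sketch below assumes a uniform grid with the filter size equal to the grid spacing and a synthetic velocity field; these are choices of this example, not data from the present simulations.

```python
import numpy as np

def smagorinsky_mu_sgs(u, v, w, rho, dx, cs=0.148):
    """Smagorinsky eddy viscosity mu_sgs = rho * (Cs*Delta)^2 * |S| on a uniform grid.
    u, v, w, rho are 3-D arrays; dx is the (isotropic) grid spacing used as filter size."""
    grads = [np.gradient(f, dx, dx, dx) for f in (u, v, w)]   # grads[i][j] = d u_i / d x_j
    s_mag2 = 0.0
    for i in range(3):
        for j in range(3):
            s_ij = 0.5 * (grads[i][j] + grads[j][i])          # rate-of-strain components
            s_mag2 = s_mag2 + 2.0 * s_ij * s_ij               # accumulate 2 S_ij S_ij
    return rho * (cs * dx) ** 2 * np.sqrt(s_mag2)

if __name__ == "__main__":
    n = 32
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    u, v, w = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y), np.zeros_like(X)   # Taylor-Green-like field
    rho = np.ones_like(X)
    mu = smagorinsky_mu_sgs(u, v, w, rho, dx=x[1] - x[0])
    print("max mu_sgs =", round(float(mu.max()), 4))
```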
This requirement is unfortunately not satisfied by existing simple eddy-viscosity closures such as the classic Smagorinsky model.[3, 4, 13] The Vreman SGS model is very simple and is given by \[\mu_{sgs}=\rho\,\mathbf{c}\,\sqrt{\frac{B_{\beta}}{\alpha_{ij}\alpha_{ij}}}\,, \tag{14}\] with \[\alpha_{ij}=\frac{\partial\tilde{u}_{j}}{\partial x_{i}}\,, \tag{15}\] \[\beta_{ij}=\Delta_{m}^{2}\alpha_{mi}\alpha_{mj} \tag{16}\] and \[B_{\beta}=\beta_{11}\beta_{22}-\beta_{12}^{2}+\beta_{11}\beta_{33}-\beta_{13} ^{2}+\beta_{22}\beta_{33}-\beta_{23}^{2}\,. \tag{17}\] The constant \(\mathbf{c}\) is related to the Smagorinsky constant, \(C_{s}\), and it is given by \[\mathbf{c}=2.5\,C_{s}^{2}\,, \tag{18}\] and \(\Delta_{m}\) is the filter width in each direction. In the present work, the isotropic part of the SGS tensor is neglected for the Vreman model. The \(\alpha\) symbol represents the matrix of first oder derivatives of the filtered components of velocity, \(\tilde{u}_{i}\). The SGS eddy-viscosity is defined as zero when \(\alpha_{ij}\alpha_{ij}\) equals zero. Vreman[8] affirms that the tensor \(\beta\) is proportional to the gradient model[14, 15] in its general anisotropic form.[16] The Vreman model can be classified as very simple model because it is expressed in first-order derivatives and it dos not involves explicit filtering, averaging, clipping procedures and is rotationally invariant for isotropic filter widths. The model is originally created for incompressible flows and it has presented good results for two incompressible flows configurations: the transitional and turbulent mixing layer at high Reynolds number and the turbulent channel flow.[16] In both cases, the Vreman model is found to be more accurate than the classical Smagorinsky model and as good as the dynamic Smagorinsky model. ### Dynamic Smagorinsky Model Germano _et al.[17]_ developed a dynamic SGS model in order to overcome the issues of the classical Smagorinsky closure. The model uses the strain rate fields at two different scales and thus extracts spectral information in the large-scale field to extrapolate the small stresses.[7] The coefficients of the model are computed instantaneously in the dynamic model. They are function of the positioning in space and time rather than being specified a priori. Moin _et al.[7]_ extended the work of Germano for compressible flows. The dynamic Smagorinsky model for compressible flow configurations is detailed in the present section. The Dynamic model introduces the test filter, \(\widehat{(}\widehat{)}\), which has a larger filter width, \(\widehat{\Delta}\), than the one of the resolved grid filter, \(\overline{(}\cdot\widehat{)}\). The use of test filters generates a second field with larger scales than the resolved field. The Yoshizawa model[18] is used for the isotropic portion of the SGS tensor and it is written as \[\sigma_{ll}=2C_{I}\overline{\rho}\Delta^{2}|\widehat{S}|^{2}\,, \tag{19}\] where \(C_{I}\) is defined by \[C_{I}=\frac{\left\langle\widehat{p}\widehat{u}\widehat{u}_{l}-\left(\widehat {p}\widehat{u}\widehat{p}\widehat{u}_{l}/\widehat{p}\right)\right\rangle}{ \left\langle 2\widehat{\Delta}^{2}\widehat{\rho}\widehat{|}\widehat{S}|^{2}-2 \Delta^{2}\overline{\rho}\widehat{|}\widehat{S}|^{2}\right\rangle}\,. \tag{20}\] A volume averaging, here indicated by \(\langle\ \rangle\), is suggest by Moin _et al[7]_ and by Garnier _et al_ in order to avoid numerical issues. 
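Equations (14)-(18) above can be transcribed directly into code. The sketch below does so for a uniform grid with an isotropic filter width equal to the grid spacing (an assumption of this example), returning zero eddy viscosity wherever \(\alpha_{ij}\alpha_{ij}\) vanishes, as the model requires.

```python
import numpy as np

def vreman_mu_sgs(u, v, w, rho, dx, cs=0.148):
    """Vreman eddy viscosity: alpha_ij = d(u_j)/d(x_i), beta_ij = Delta^2 alpha_mi alpha_mj,
    mu_sgs = rho * c * sqrt(B_beta / (alpha_ij alpha_ij)) with c = 2.5 Cs^2."""
    c = 2.5 * cs**2
    grads = [np.gradient(f, dx, dx, dx) for f in (u, v, w)]       # grads[j][i] = d u_j / d x_i
    alpha = np.empty((3, 3) + u.shape)
    for i in range(3):
        for j in range(3):
            alpha[i, j] = grads[j][i]                             # alpha_ij = d u_j / d x_i
    beta = np.einsum("mi...,mj...->ij...", alpha, alpha) * dx**2  # isotropic filter width
    b_beta = (beta[0, 0] * beta[1, 1] - beta[0, 1] ** 2
              + beta[0, 0] * beta[2, 2] - beta[0, 2] ** 2
              + beta[1, 1] * beta[2, 2] - beta[1, 2] ** 2)
    aa = np.einsum("ij...,ij...->...", alpha, alpha)              # alpha_ij alpha_ij
    mu = np.zeros_like(u)
    mask = aa > 1.0e-12
    mu[mask] = rho[mask] * c * np.sqrt(np.maximum(b_beta[mask], 0.0) / aa[mask])
    return mu

if __name__ == "__main__":
    n = 32
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    u, v, w = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y), 0.1 * np.sin(Z)
    rho = np.ones_like(X)
    mu = vreman_mu_sgs(u, v, w, rho, dx=x[1] - x[0])
    print("max mu_sgs =", round(float(mu.max()), 5), " mean mu_sgs =", round(float(mu.mean()), 5))
```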
The eddy viscosity, \(\mu_{sgs}\), is calculated using the same approach as the static Smagorinsky model, \[\mu_{sgs}=\left(\rho C_{ds}\Delta\right)^{2}|\widetilde{S}|\,, \tag{21}\] where \[|\tilde{S}|=\left(2\tilde{S}_{ij}\tilde{S}_{ij}\right)^{\frac{1}{2}}\,, \tag{22}\] and \(C_{ds}\) is the dynamic constant of the model, which is given by \[C_{ds}=\frac{\left\langle\left[\widehat{\overline{\rho}\tilde{u}_{i}\tilde{u}_{j}}-\left(\widehat{\overline{\rho}\tilde{u}_{i}}\,\widehat{\overline{\rho}\tilde{u}_{j}}/\widehat{\overline{\rho}}\right)\right]\tilde{S}_{ij}-\frac{1}{3}\tilde{S}_{mm}\left(\mathscr{T}_{ll}-\widehat{\sigma}_{ll}\right)\right\rangle}{\left\langle 2\Delta^{2}\left[\widehat{\overline{\rho}}|\widehat{S}|\widehat{S}_{ij}\tilde{S}_{ij}-\frac{1}{3}\left(\widehat{\overline{\rho}}|\widehat{S}|\widehat{S}_{mm}\right)\widehat{S}_{ll}\right]-2\widehat{\Delta}^{2}\left(\widehat{\overline{\rho}}|\widehat{S}|\widehat{S}_{ij}\widehat{S}_{ij}-\frac{1}{3}\widehat{\overline{\rho}}|\widehat{S}|\widehat{S}_{mm}\widehat{S}_{ll}\right)\right\rangle}\,. \tag{23}\] The SGS Prandtl number, \(Pr_{sgs}\), which closes the SGS heat flux, is also computed dynamically as a function of the constant \(C_{ds}\) and of volume-averaged terms involving \(\overline{\rho}\), \(|\tilde{S}|\) and the filtered temperature gradient, \(\partial\widetilde{T}/\partial x_{j}\), following the procedure of Moin _et al._[7]
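To make the algebraic closures above concrete, the sketch below evaluates the Smagorinsky and Vreman eddy viscosities from a given filtered velocity-gradient tensor at a single grid point. It is a minimal illustration of Eqs. (12)-(18) as written, for an isotropic filter width; the function and variable names are assumptions and do not correspond to the solver implementation.

```python
import numpy as np

def smagorinsky_mu_sgs(rho, grad_u, delta, c_s=0.148):
    """Eddy viscosity of Eq. (12), with grad_u[i, j] = d u_i / d x_j."""
    s = 0.5 * (grad_u + grad_u.T)                  # rate-of-strain tensor, Eq. (10)
    s_mag = np.sqrt(2.0 * np.sum(s * s))           # |S| = (2 S_ij S_ij)^(1/2), Eq. (13)
    return (rho * c_s * delta) ** 2 * s_mag

def vreman_mu_sgs(rho, grad_u, delta, c_s=0.148):
    """Eddy viscosity of Eqs. (14)-(18) for an isotropic filter width delta."""
    c = 2.5 * c_s ** 2                             # Eq. (18)
    alpha = grad_u.T                               # alpha_ij = d u_j / d x_i, Eq. (15)
    beta = delta ** 2 * alpha.T @ alpha            # beta_ij = Delta^2 alpha_mi alpha_mj, Eq. (16)
    b_beta = (beta[0, 0] * beta[1, 1] - beta[0, 1] ** 2
              + beta[0, 0] * beta[2, 2] - beta[0, 2] ** 2
              + beta[1, 1] * beta[2, 2] - beta[1, 2] ** 2)   # Eq. (17)
    denom = np.sum(alpha * alpha)
    if denom < 1.0e-14:                            # mu_sgs defined as zero when alpha_ij alpha_ij = 0
        return 0.0
    return rho * c * np.sqrt(max(b_beta, 0.0) / denom)

# Simple shear, d u / d y = 1, as a test input:
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(smagorinsky_mu_sgs(1.0, grad_u, 0.01), vreman_mu_sgs(1.0, grad_u, 0.01))
```

For the simple-shear input used in this example the Vreman viscosity vanishes while the Smagorinsky one does not, which illustrates the different behavior of the two closures in laminar shear regions discussed above.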
The contravariant velocity components, \(U\), \(V\) and \(W\), are calculated as \[U=\xi_{x}\overline{u}+\xi_{y}\overline{v}+\xi_{z}\overline{w}\,, \tag{31}\] \[V=\eta_{x}\overline{u}+\eta_{y}\overline{v}+\eta_{z}\overline{w}\,,\] \[W=\zeta_{x}\overline{u}+\zeta_{y}\overline{v}+\zeta_{z}\overline {w}\,.\] The metric terms are given by \[\begin{array}{l}\xi_{x}=J\left(y_{\eta}z_{\zeta}-y_{\zeta}z_{\eta}\right)\,, \quad\xi_{y}=J\left(z_{\eta}x_{\zeta}-z_{\zeta}x_{\eta}\right)\,,\quad\xi_{z}=J \left(x_{\eta}y_{\zeta}-x_{\zeta}y_{\eta}\right)\,,\\ \eta_{x}=J\left(y_{\eta}z_{\xi}-y_{\xi}z_{\eta}\right)\,,\quad\eta_{y}=J \left(z_{\eta}x_{\xi}-z_{\xi}x_{\eta}\right)\,,\quad\eta_{z}=J\left(x_{\eta}y _{\xi}-x_{\xi}y_{\eta}\right)\,,\\ \zeta_{x}=J\left(y_{\xi}z_{\eta}-y_{\eta}z_{\xi}\right)\,,\quad\zeta_{y}=J \left(z_{\xi}x_{\eta}-z_{\eta}x_{\xi}\right)\,,\quad\zeta_{z}=J\left(x_{\xi}y _{\eta}-x_{\eta}y_{\xi}\right)\,.\end{array} \tag{32}\] The viscous flux vectors, \(\hat{\mathbf{E}}_{v}\), \(\hat{\mathbf{F}}_{v}\) and \(\hat{\mathbf{G}}_{v}\), are written as \[\hat{\mathbf{E}}_{v}=J^{-1}\left\{\begin{array}{c}0\\ \xi_{x}\tau_{xx}+\xi_{y}\tau_{xy}+\xi_{z}\tau_{xz}\\ \xi_{x}\tau_{xy}+\xi_{y}\tau_{yy}+\xi_{z}\tau_{yz}\\ \xi_{x}\tau_{xz}+\xi_{y}\tau_{yz}+\xi_{z}\tau_{zz}\\ \xi_{x}\beta_{x}+\xi_{y}\beta_{y}+\xi_{z}\beta_{z}\end{array}\right\}\,, \tag{34}\] \[\hat{\mathbf{F}}_{v}=J^{-1}\left\{\begin{array}{c}0\\ \eta_{x}\tau_{xx}+\eta_{y}\tau_{xy}+\eta_{z}\tau_{xz}\\ \eta_{x}\tau_{xy}+\eta_{y}\tau_{yy}+\eta_{z}\tau_{yz}\\ \eta_{x}\tau_{xz}+\eta_{y}\tau_{yz}+\eta_{z}\tau_{zz}\\ \eta_{x}\beta_{x}+\eta_{y}\beta_{y}+\eta_{z}\beta_{z}\end{array}\right\}\,, \tag{35}\] \[\hat{\mathbf{G}}_{v}=J^{-1}\left\{\begin{array}{c}0\\ \zeta_{x}\tau_{xx}+\zeta_{y}\tau_{xy}+\zeta_{z}\tau_{xz}\\ \zeta_{x}\tau_{xy}+\zeta_{y}\tau_{yy}+\zeta_{z}\tau_{yz}\\ \zeta_{x}\tau_{xz}+\zeta_{y}\tau_{yz}+\zeta_{z}\tau_{zz}\\ \zeta_{x}\beta_{x}+\zeta_{y}\beta_{y}+\zeta_{z}\beta_{z}\end{array}\right\}\,, \tag{36}\] where \(\beta_{x}\), \(\beta_{y}\) and \(\beta_{z}\) are defined as \[\begin{array}{l}\beta_{x}=\tau_{xx}\tilde{u}+\tau_{xy}\tilde{v}+\tau_{xz} \tilde{w}-\overline{q}_{x}\,,\\ \beta_{y}=\tau_{xy}\tilde{u}+\tau_{yy}\tilde{v}+\tau_{yz}\tilde{w}-\overline{ q}_{y}\,,\\ \beta_{z}=\tau_{xz}\tilde{u}+\tau_{yz}\tilde{v}+\tau_{zz}\tilde{w}-\overline{ q}_{z}.\end{array} \tag{37}\] ## 5 Dimensionless Formulation A convenient nondimensionalization is necessary in to order to achieve a consistent implementation of the governing equations of motion. Dimensionless formulation yields to a more general numerical tool. There is no need to change the formulation for each configuration intended to be simulated. Moreover, dimensionless formulation scales all the necessary properties to the same order of magnitude which is a computational advantage.[19] Dimensionless variables are presented in the present section in order perform the nondimensionalization of Eq. (25) The dimensionless time, \(\underline{t}\), is written as function of the speed of sound of the jet at the inlet, \(a_{j}\), and of a reference lenght, \(l\), \[\underline{t}=t\frac{a_{j}}{l}\,. \tag{38}\] The dimensionless velocity components are obtained using the speed of sound of the jet at the inlet, \[\underline{\mathbf{u}}=\frac{\mathbf{u}}{a_{j}}\,. \tag{39}\] Dimensionless pressure and energy are calculated using density and speed of the sound of the jet at the inlet as \[\underline{\rho}=\frac{p}{\rho_{j}a_{j}^{2}}\,, \tag{40}\] \[\underline{E}=\frac{E}{\rho_{j}a_{j}^{2}}\,. 
\tag{41}\] Dimensionless density, \(\underline{\rho}\), temperature, \(\underline{T}\) and viscosity, \(\underline{\mu}\), are calculated using freestream properties \[\underline{\rho}=\frac{\rho}{\rho_{j}}\,. \tag{42}\] One can use the dimensionless properties described above in order to write the dimensionless form of the RANS equations as \[\frac{\partial\underline{Q}}{\partial t}+\frac{\partial\underline{\mathbf{E} }_{\epsilon}}{\partial\xi}+\frac{\partial\underline{\mathbf{E}}_{\epsilon}}{ \partial\eta}+\frac{\partial\underline{\mathbf{G}}_{\epsilon}}{\partial\zeta }=\frac{M_{j}}{Re}\left(\frac{\partial\underline{\mathbf{E}}_{\epsilon}}{ \partial\xi}+\frac{\partial\underline{\mathbf{F}}_{\epsilon}}{\partial\eta}+ \frac{\partial\underline{\mathbf{G}}_{\epsilon}}{\partial\zeta}\right)\,, \tag{43}\] where the underlined terms are calculated using dimensionless properties. The Mach number of the jet, \(M_{j}\), and the Reynolds number are based on the mean inlet velocity of the jet, \(U_{j}\), diamenter of the inlet, \(D\), and freestream properties such as speed of sound, \(a_{\infty}\), density, \(\rho_{\infty}\) and viscosity, \(\mu_{\infty}\), \[M_{j}=\frac{U_{j}}{a_{\infty}}\quad\text{and}\quad Re=\frac{\rho_{j}U_{j}D}{ \mu_{j}}\,. \tag{44}\] ## VI Numerical Formulation The governing equations previously described are discretized in a structured finite difference context for general curvilinear coordinate system [19]. The numerical flux is calculated through a central difference scheme with the explicit addition of the anisotropic scalar artificial dissipation of Turkel and Vatsa [20]. The time integration is performed by an explicit, 2nd-order, 5-stage Runge-Kutta scheme [21, 22]. Conserved properties and artificial dissipation terms are properly treated near boundaries in order to assure the physical correctness of the numerical formulation. ### Spatial Discretization For the sake of simplicity the formulation discussed in the present section is no longer written using bars. However, the reader should notice that the equations are dimensionless and filtered. The Navier-Stokes equations, presented in Eq. (43), are discretized in space in a finite difference fashion and, then, rewritten as \[\left(\frac{\partial Q}{\partial t}\right)_{i,j,k}\;=\;-RHS_{i,j,k}\,, \tag{45}\] where \(RHS\) is the right hand side of the equation and it is written as function of the numerical flux vectors at the interfaces between grid points, \[\begin{array}{rcl}RHS_{i,j,k}&=&\frac{1}{\Delta\zeta}\left( \mathbf{E}_{e(i+\frac{1}{2},j,k)}-\mathbf{E}_{e(i-\frac{1}{2},j,k)}-\mathbf{E }_{e(i+\frac{1}{2},j,k)}+\mathbf{E}_{v(i-\frac{1}{2},j,k)}\right)\\ &&\frac{1}{\Delta\eta}\left(\mathbf{F}_{e(i,j+\frac{1}{2},k)}-\mathbf{F}_{e( i,j-\frac{1}{2},k)}-\mathbf{F}_{v(i,j+\frac{1}{2},k)}+\mathbf{F}_{v(i,j-\frac{1}{2},k)} \right)\\ &&\frac{1}{\Delta\zeta}\left(\mathbf{G}_{e(i,j,k+\frac{1}{2})}-\mathbf{G}_{e( i,j,k-\frac{1}{2})}-\mathbf{G}_{v(i,j,k+\frac{1}{2})}+\mathbf{G}_{v(i,j,k-\frac{1}{2})} \right)\,.\end{array} \tag{46}\] For the general curvilinear coordinate case \(\Delta\xi=\Delta\eta=\Delta\zeta=1\). The anisotropic scalar artificial dissipation method of Turkel and Vatsa [20] is implemented through the modification of the inviscid flux vectors, \(\mathbf{E}_{\epsilon}\), \(\mathbf{F}_{\epsilon}\) and \(\mathbf{G}_{\epsilon}\). 
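As a schematic illustration of the flux balance of Eqs. (45) and (46), the sketch below assembles the right-hand-side operator for a single conserved variable, assuming that cell-centered flux arrays are available, using a plain arithmetic average at the interfaces and omitting the artificial dissipation contribution; array and function names are illustrative only.

```python
import numpy as np

def central_rhs(E, F, G):
    """Flux balance of Eqs. (45)-(46) at interior points of a curvilinear grid.

    E, F and G are cell-centered flux arrays of shape (ni, nj, nk) for one
    conserved variable. Interface values are simple arithmetic averages of
    the two neighboring points, the artificial dissipation term is omitted,
    and Delta_xi = Delta_eta = Delta_zeta = 1.
    """
    E_half = 0.5 * (E[:-1, :, :] + E[1:, :, :])    # E at (i+1/2, j, k)
    F_half = 0.5 * (F[:, :-1, :] + F[:, 1:, :])    # F at (i, j+1/2, k)
    G_half = 0.5 * (G[:, :, :-1] + G[:, :, 1:])    # G at (i, j, k+1/2)
    rhs = E_half[1:, 1:-1, 1:-1] - E_half[:-1, 1:-1, 1:-1]
    rhs += F_half[1:-1, 1:, 1:-1] - F_half[1:-1, :-1, 1:-1]
    rhs += G_half[1:-1, 1:-1, 1:] - G_half[1:-1, 1:-1, :-1]
    return rhs   # same shape as the interior block of the solution array

E = F = G = np.random.rand(8, 8, 8)
print(central_rhs(E, F, G).shape)   # (6, 6, 6)
```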
The numerical scheme is nonlinear and allows the selection between artificial dissipation terms of second and fourth differences, which is very important for capturing discontinuities in the flow. The numerical fluxes are calculated at interfaces in order to reduce the size of the calculation cell and, therefore, facilitate the implementation of second derivatives since the the concept of numerical fluxes vectors is used for flux ifferencing. Only internal interfaces receive the corresponding artificial dissipation terms, and differences of the viscous flux vectors use two neighboring points of the interface. The inviscid flux vectors, with the addition of the artificial dissipation contribution, can be written as \[\mathbf{E}_{e(i\pm\frac{1}{2},j,k)}=\frac{1}{2}\left(\mathbf{E}_{e(i,j,k)}+\mathbf{E}_{e(i\pm 1,j,k)}\right)-J^{-1}\mathbf{d}_{(i\pm\frac{1}{2},j,k)}\,,\] \[\mathbf{F}_{e(i,j\pm\frac{1}{2},k)}=\frac{1}{2}\left(\mathbf{F}_ {e(i,j,k)}+\mathbf{F}_{e(i,j\pm 1,k)}\right)-J^{-1}\mathbf{d}_{(i,j\pm\frac{1}{2},k)}\,, \tag{47}\] \[\mathbf{G}_{e(i,j,k\pm\frac{1}{2})}=\frac{1}{2}\left(\mathbf{G}_ {e(i,j,k)}+\mathbf{G}_{e(i,j,k\pm 1)}\right)-J^{-1}\mathbf{d}_{(i,j,k\pm\frac{1}{2})}\,,\] in which the \(\mathbf{d}_{(i\pm 1,j,k)}\),\(\mathbf{d}_{(i,j\pm 1,k)}\) and \(\mathbf{d}_{(i,j,k\pm 1)}\) terms are the Turkel and Vatsa [20] artificial dissipation terms in the \(i\), \(j\), and \(k\) directions respectively. The scaling of the artificial dissipation operator in each coordinate direction is weighted by its own spectral radius of the corresponding flux Jacobian matrix, which gives the non-isotropic characteristics of the method [19]. The artificial dissipation contribution in the \(\xi\) direction is given by \[\mathbf{d}_{(i+\frac{1}{2},j,k)} = \lambda_{(i+\frac{1}{2},j,k)}\left[\epsilon_{(i+\frac{1}{2},j,k)} ^{(2)}\left(\mathcal{W}_{(i+1,j,k)}-\mathcal{W}_{(i,j,k)}\right)\right.\] \[\left.\epsilon_{(i+\frac{1}{2},j,k)}^{(4)}\left(\mathcal{W}_{(i+2,j,k)}-3\mathcal{W}_{(i+1,j,k)}+3\mathcal{W}_{(i,j,k)}-\mathcal{W}_{(i-1,j,k) }\right)\,\right]\,,\] in which \[\epsilon_{(i+\frac{1}{2},j,k)}^{(2)} = k^{(2)}{\rm max}\left(\nu_{(i+1,j,k)}^{d},\nu_{(i,j,k)}^{d} \right)\,, \tag{49}\] \[\epsilon_{(i+\frac{1}{2},j,k)}^{(4)} = {\rm max}\left[0,k^{(4)}-\epsilon_{(i+\frac{1}{2},j,k)}^{(2)} \right]\,. \tag{50}\] The original article [20] recommends using \(k^{(2)}=0.25\) and \(k^{(4)}=0.016\) for the dissipation artificial constants. The pressure gradient sensor, \(\nu_{(i,j,k)}^{d}\), for the \(\xi\) direction is written as \[\nu_{(i,j,k)}^{d}=\frac{|p_{(i+1,j,k)}-2p_{(i,j,k)}+p_{(i-1,j,k)}|}{p_{(i+1,j, k)}-2p_{(i,j,k)}+p_{(i-1,j,k)}}\,. \tag{51}\] The \(\mathcal{W}\) vector from Eq. (48) is calculated as a function of the conserved variable vector, \(\hat{Q}\), written in Eq. (27). The formulation intends to keep the total enthalpy constant in the final converged solution, which is the correct result for the Navier-Stokes equations with \(Re\to\infty\). This approach is also valid for the viscous formulation because the dissipation terms are added to the inviscid flux terms, in which they are really necessary to avoid nonlinear instabilities of the numerical formulation. The \(\mathcal{W}\) vector is given by \[\mathcal{W}=\hat{Q}+[0\ 0\ 0\ 0\ p]^{T}. 
\tag{52}\] The spectral radius-based scaling factor, \(\lambda\), for the \(i\)-th direction is written as \[\lambda_{(i+\frac{1}{2},j,k)}=\frac{1}{2}\left[\left(\overline{\lambda_{\xi}}\right)_{(i,j,k)}+\left(\overline{\lambda_{\xi}}\right)_{(i+1,j,k)}\right]\,, \tag{53}\] where \[\left(\overline{\lambda_{\xi}}\right)_{(i,j,k)}=\lambda_{\xi}\left[1+\left(\frac{\lambda_{\eta}}{\lambda_{\xi}}\right)^{0.5}+\left(\frac{\lambda_{\zeta}}{\lambda_{\xi}}\right)^{0.5}\right]\,. \tag{54}\] The spectral radii, \(\lambda_{\xi}\), \(\lambda_{\eta}\) and \(\lambda_{\zeta}\), are given by \[\begin{array}{rcl}\lambda_{\xi}&=&|U|+a\sqrt{\xi_{x}^{2}+\xi_{y}^{2}+\xi_{z}^{2}}\,,\\ \lambda_{\eta}&=&|V|+a\sqrt{\eta_{x}^{2}+\eta_{y}^{2}+\eta_{z}^{2}}\,,\\ \lambda_{\zeta}&=&|W|+a\sqrt{\zeta_{x}^{2}+\zeta_{y}^{2}+\zeta_{z}^{2}}\,,\end{array} \tag{55}\] in which \(U\), \(V\) and \(W\) are the contravariant velocities in the \(\xi\), \(\eta\) and \(\zeta\) directions, previously written in Eq. (31), and \(a\) is the local speed of sound, which can be written as \[a=\sqrt{\frac{\gamma p}{\rho}}\,. \tag{56}\] The calculation of the artificial dissipation terms for the other coordinate directions is completely analogous and, therefore, is not repeated in the present work.

### Time Marching Method

The time marching method used in the present work is a 2nd-order, 5-stage Runge-Kutta scheme based on the work of Jameson.[21, 22] The time integration can be written as \[\begin{array}{rcl}Q^{(0)}_{(i,j,k)}&=&Q^{(n)}_{(i,j,k)}\,,\\ Q^{(l)}_{(i,j,k)}&=&Q^{(0)}_{(i,j,k)}-\alpha_{l}\,\Delta t_{(i,j,k)}\,RHS^{(l-1)}_{(i,j,k)}\,,\qquad l=1,2,\cdots,5\,,\\ Q^{(n+1)}_{(i,j,k)}&=&Q^{(5)}_{(i,j,k)}\,,\end{array} \tag{57}\] in which \(\alpha_{l}\) are the stage coefficients of the scheme and \(\Delta t_{(i,j,k)}\) is the local time step.

## VII Boundary Conditions

Boundary conditions must be imposed at the limits of the computational domain in order to close the problem. Far field, entrance, exit, centerline and periodic boundary conditions are used in the present work and are described in the sequence.

### Far Field Boundary Condition

The far field boundary condition is based on the one-dimensional characteristic relations for the three-dimensional Euler equations, written along the \(\eta\) direction. The Riemann invariants are defined as \[\mathbf{R}^{+}=q_{n_{e}}+\frac{2\,a_{e}}{\gamma-1} \tag{59}\] and \[\mathbf{R}^{-}=q_{n_{\infty}}-\frac{2\,a_{\infty}}{\gamma-1}\,, \tag{60}\] where the \(\infty\) and \(e\) indices stand for the property in the freestream and in the internal region, respectively. \(q_{n}\) is the velocity component normal to the outer surface, defined as \[q_{n}=\mathbf{u}\cdot\vec{n}\,, \tag{61}\] and \(\vec{n}\) is the unit outward normal vector \[\vec{n}=\frac{1}{\sqrt{\eta_{x}^{2}+\eta_{y}^{2}+\eta_{z}^{2}}}[\eta_{x}\ \eta_{y}\ \eta_{z}]^{T}\,. \tag{62}\] Equation (61) assumes that the \(\eta\) direction is pointing from the jet to the external boundary. Solving for \(q_{n}\) and \(a\), one can obtain \[q_{nf}=\frac{\mathbf{R}^{+}+\mathbf{R}^{-}}{2}\,,\qquad a_{f}=\frac{\gamma-1}{4}(\mathbf{R}^{+}-\mathbf{R}^{-})\,. \tag{63}\] The index \(f\) is linked to the property at the boundary surface and will be used to update the solution at this boundary.
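A minimal sketch of this characteristic treatment at a far-field face is given below, following Eqs. (59), (60) and (63). The inputs are the interior and freestream normal velocities and speeds of sound; the function name, the quiescent-freestream example values and the choice \(\gamma=1.4\) are assumptions used only for illustration.

```python
GAMMA = 1.4   # ratio of specific heats, assumed here for air

def farfield_normal_state(qn_e, a_e, qn_inf, a_inf):
    """Riemann-invariant blending at a far-field face, Eqs. (59), (60) and (63).

    qn_* is the velocity component normal to the outer boundary (Eq. (61))
    and a_* the speed of sound; 'e' denotes the interior state and 'inf'
    the freestream state.
    """
    r_plus = qn_e + 2.0 * a_e / (GAMMA - 1.0)       # outgoing invariant, from the interior
    r_minus = qn_inf - 2.0 * a_inf / (GAMMA - 1.0)  # incoming invariant, from the freestream
    qn_f = 0.5 * (r_plus + r_minus)                 # Eq. (63)
    a_f = 0.25 * (GAMMA - 1.0) * (r_plus - r_minus)
    return qn_f, a_f

# Quiescent freestream and a subsonic outgoing interior state:
qn_f, a_f = farfield_normal_state(qn_e=0.3, a_e=1.0, qn_inf=0.0, a_inf=1.0)
print(qn_f, a_f)   # boundary-face values used to update the solution there
```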
For a subsonic exit boundary, \(0<q_{n_{e}}/a_{e}<1\), the velocity components are derived from internal properties as \[\begin{array}{rcl}u_{f}&=&u_{e}+(q_{nf}-q_{n_{e}})\eta_{x}\,,\\ v_{f}&=&v_{e}+(q_{nf}-q_{n_{e}})\eta_{y}\,,\\ w_{f}&=&w_{e}+(q_{nf}-q_{n_{e}})\eta_{z}\,.\end{array} \tag{64}\] Density and pressure properties are obtained by extrapolating the entropy from the adjacent grid node, \[\rho_{f}=\left(\frac{\rho_{e}^{\gamma}a_{f}^{2}}{\gamma p_{e}}\right)^{\frac{ 1}{\gamma-1}}\,,\qquad p_{f}=\frac{\rho_{f}a_{f}^{2}}{\gamma}\,.\] For a subsonic entrance, \(-1<q_{n_{e}}/a_{e}<0\), properties are obtained similarly from the freestream variables as \[\begin{array}{rcl}u_{f}&=&u_{\infty}+(q_{nf}-q_{n_{\infty}})\eta_{x}\,,\\ v_{f}&=&v_{\infty}+(q_{nf}-q_{n_{\infty}})\eta_{y}\,,\\ w_{f}&=&w_{\infty}+(q_{nf}-q_{n_{\infty}})\eta_{z}\,,\end{array} \tag{65}\] \[\rho_{f}=\left(\frac{\rho_{\infty}^{\gamma}a_{f}^{2}}{\gamma p_{\infty}} \right)^{\frac{1}{\gamma-1}}\,. \tag{66}\] For a supersonic exit boundary, \(q_{n_{e}}/a_{e}>1\), the properties are extrapolated from the interior of the domain as \[\begin{array}{rcl}\rho_{f}&=&\rho_{e}\,,\\ u_{f}&=&u_{e}\,,\\ v_{f}&=&v_{e}\,,\\ w_{f}&=&w_{e}\,,\\ e_{f}&=&e_{e}\,,\end{array} \tag{67}\] and for a supersonic entrance, \(q_{n_{e}}/a_{e}<-1\), the properties are extrapolated from the freestream variables as \[\begin{array}{rcl}\rho_{f}&=&\rho_{\infty}\,,\\ u_{f}&=&u_{\infty}\,,\\ v_{f}&=&v_{\infty}\,,\\ w_{f}&=&w_{\infty}\,,\\ e_{f}&=&e_{\infty}\,.\end{array} \tag{68}\] ### Entrance Boundary For a jet-like configuration, the entrance boundary is divided in two areas: the jet and the area above it. The jet entrance boundary condition is implemented through the use of the 1-D characteristic relations for the 3-D Euler equations for a flat velocity profile. The set of properties then determined is computed from within and from outside the computational domain. For the subsonic entrance, the \(v\) and \(w\) components of the velocity are extrapolated by a zero-order extrapolation from inside the computational domain and the angle of flow entrance is assumed fixed. The rest of the properties are obtained as a function of the jet Mach number, which is a known variable. \[\begin{array}{rcl}\left(u\right)_{1,j,k}&=&u_{j}\,,\\ \left(v\right)_{1,j,k}&=&\left(v\right)_{2,j,k}\,,\\ \left(w\right)_{1,j,k}&=&\left(w\right)_{2,j,k}\,.\end{array} \tag{69}\] The dimensionless total temperature and total pressure are defined with the isentropic relations: \[T_{t}=1+\frac{1}{2}(\gamma-1)M_{\infty}^{2}\quad\text{and}\quad P_{t}=\frac{1 }{\gamma}(T_{t})^{\frac{\gamma}{\gamma-1}}\,. \tag{70}\] The dimensionless static temperature and pressure are deduced from Eq. (70), resulting in \[\left(T\right)_{1,j,k}=\frac{T_{t}}{1+\frac{1}{2}(\gamma-1)(u^{2}+v^{2}+w^{2} )_{1,j,k}}\quad\text{and}\quad\left(p\right)_{1,j,k}=\frac{1}{\gamma}(T)_{1,j,k}^{\frac{\gamma}{\gamma-1}}\,. \tag{71}\] For the supersonic case, all conserved variables receive jet property values. The far field boundary conditions are implemented outside of the jet area in order to correctly propagate information comming from the inner domain of the flow to the outer region of the simulation. However, in the present case, \(\xi\), instead of \(\eta\), as presented in the previous subsection, is the normal direction used to define the Riemann invariants. ### Exit Boundary Condition At the exit plane, the same reasoning of the jet entrance boundary is applied. 
This time, for a subsonic exit, the pressure is obtained from the outside and all other variables are extrapolated from the interior of the computational domain by a zero-order extrapolation. The conserved variables are obtained as \[\left(\rho\right)_{I_{MAX},j,k} = \frac{(p)_{I_{MAX},j,k}}{(\gamma-1)(e)_{I_{MAX}-1,j,k}}, \tag{72}\] \[\left(\vec{u}\right)_{I_{MAX},j,k} = \left(\vec{u}\right)_{I_{MAX}-1,j,k},\] (73) \[\left(e_{i}\right)_{I_{MAX},j,k} = \left(\rho\right)_{I_{MAX},j,k}\left[\left(e\right)_{I_{MAX}-1,j, k}+\frac{1}{2}(\vec{u})_{I_{MAX},j,k}\cdot(\vec{u})_{I_{MAX},j,k}\right]\,, \tag{74}\] in which \(I_{MAX}\) stands for the last point of the mesh in the axial direction. For the supersonic exit, all properties are extrapolated from the interior domain. ### Centerline Boundary Condition The centerline boundary is a singularity of the coordinate transformation, and, hence, an adequate treatment of this boundary must be provided. The conserved properties are extrapolated from the ajacent longitudinal plane and are averaged in the azimuthal direction in order to define the updated properties at the centerline of the jet. The fourth-difference terms of the artificial dissipation scheme, used in the present work, are carefully treated in order to avoid the five-point difference stencils at the centerline singularity. If one considers the flux balance at one grid point near the centerline boundary in a certain coordinate direction, let \(w_{j}\) denote a component of the \(\mathcal{W}\) vector from Eq. (52) and \(\mathbf{d}_{j}\) denote the corresponding artificial dissipation term at the mesh point \(j\). In the present example, \(\left(\Delta w\right)_{j+\frac{1}{2}}\) stands for the difference between the solution at the interface for the points \(j+1\) and \(j\). The fourth-difference of the dissipative fluxes from Eq. (48) can be written as \[\mathbf{d}_{j+\frac{1}{2}}=\left(\Delta w\right)_{j+\frac{3}{2}}-2\left( \Delta w\right)_{j+\frac{1}{2}}+\left(\Delta w\right)_{j-\frac{1}{2}}\,. \tag{75}\] Considering the centerline and the point \(j=1\), as presented in Fig. 2, the calculation of \(\mathbf{d}_{1+\frac{1}{2}}\) demands the \(\left(\Delta w\right)_{\frac{1}{2}}\) term, which is unknown since it is outside the computation domain. In the present work a extrapolation is performed and given by \[\left(\Delta w\right)_{\frac{1}{2}}=-\left(\Delta w\right)_{1+\frac{1}{2}}\,. \tag{76}\] This extrapolation modifies the calculation of \(\mathbf{d}_{1+\frac{1}{2}}\) that can be written as \[\mathbf{d}_{j+\frac{1}{2}}=\left(\Delta w\right)_{j+\frac{3}{2}}-3\left(\Delta w \right)_{j+\frac{1}{2}}\,. \tag{77}\] The approach is plausible since the centerline region is smooth and does not have high gradient of properties. ### Periodic Boundary Condition A periodic condition is implemented between the first (\(K=1\)) and the last point in the azimutal direction (\(K=K_{MAX}\)) in order to close the 3-D computational domain. There are no boundaries in this direction, since all the points are inside the domain. 
The first and the last points, in the azimuthal direction, are superposed in order to facilitate the boundary condition implementation which is given by \[\begin{array}{rcl}(\rho)_{i,j,K_{MAX}}&=&(\rho)_{i,j,1}\,,\\ (u)_{i,j,K_{MAX}}&=&(u)_{i,j,1}\,,\\ (v)_{i,j,K_{MAX}}&=&(v)_{i,j,1}\,,\\ (w)_{i,j,K_{MAX}}&=&(w)_{i,j,1}\,,\\ (e)_{i,j,K_{MAX}}&=&(e)_{i,j,1}\,.\end{array} \tag{78}\] ## 8 Study of Supersonic Jet Flow Four numerical studies are performed in the present research in order to study the use of 2nd-order spatial discretization on large eddy simulations of a perfectly expanded jet flow configuration. The effects of mesh refinement and SGS models are compared in the present work. Two different meshes are created for the refinement study. The three SGS models implemented in the code, classic Smagorinsky, dynamic Smagorinsky and Vreman, are compared in the current section. Results are compared with analytical, numerical and experimental data from the literature.[9, 10, 24] Figure 2: Boundary points dissipation.[19] ### Geometry Characteristics Two different geometries are created for the simulations discussed in the current work. One geometry presents a cylindrical shape and the other one presents a divergent conical shape. For the sake of simplicity, the round geometry is named geometry A and the other one is named geometry B in present text. The computational domains are created in two steps. First, a 2-D region is generated. In the sequence, this region is rotated in order to generate a fully 3-D geometry. An in-house code is used for the generation of the 2-D domain of geometry A. The commercial mesh generator ANSYS(r) ICEM CFD[25] is used for the 2-D domain of geometry B. The geometry A is a cylindrical domain with radius of \(20D\) and and length of \(50D\). Geometry B presents a divergent form whose axis length is \(40D\). The minimum and maximum heights of geometry B are \(\approx 16D\) and \(25D\), respectively. The zones of this geometry are created based on results from simulations using geometry A in order to refine the mesh in the shear layer region of the flow. Geometry A and geometry B are illustrated in Fig. 3 which presents a 3-D view of the two computational domains used in the current work. The geometries are colored by a time solution of the axial component of velocity of the flow. ### Mesh Configurations One grid is generated for each geometry used in the present study. These computational grids are named mesh A and mesh B. The second mesh is created based on results using mesh A. One illustration of the computational grids is presented in Fig. 4. Mesh A is created using a mesh generator developed by the research group for the cylindrical shape configuration. This computational mesh is composed by 400 points in the axial direction, 200 points in the radial direction and 180 points in the azimuthal direction, which originates 14.4 million grid points. Hyperbolic tangent functions are used for the points distribution in the radial and axial directions. Grid points are clustered near the shear layer of the jet. The mesh is coarsened towards the outer regions of the domain in order to dissipate properties of the flow far from the jet. Such mesh refinement approach can avoid reflection of information into the domain. The radial and longitudinal dimensions of the smallest distance between mesh points of the computational grid are given by \((\Delta\underline{r})_{min}=0.002D\) and \((\Delta\underline{x})_{min}=0.0126D\), respectively. 
This minimal spacing occurs at the shear layer of the jet and at the entrance of the computational domain. Mesh A is created based on a reference grid of Mendez _et al.[9, 24]_. The refined computational grid is composed by 343 points in the axial direction, 398 points in the radial direction and 360 points in the azimuthal direction, which yields approximately 50 million grid points. The Figure 3: 3-D view of geometries used for the LES. 2-D mesh is generated with ANSYS(r) ICEM CFD.[25] The points are allocated using different distributions in eight edges of the 2-D domain. The same coarsening approach used for mesh A is also applied for mesh B. The distance between mesh points increase towards the outer region of the domain. This procedure force the dissipation of properties far from the jet in order to avoid reflection of data into the domain. The reader can find more details about the mesh generation on the work of Junqueira-Junior.[26] ### Flow Configuration and Boundary Conditions An unheated perfectly expanded jet flow is chosen to validate the LES tool. The flow is characterized by an unheated perfectly expanded inlet jet with a Mach number of 1.4 at the domain entrance. Therefore, the pressure ratio, \(PR=P_{j}/P_{\infty}\), and the temperature ratio, \(TR=T_{j}/T_{\infty}\), between the jet exit and the ambient freestream conditions, are equal to one, _i.e._, \(PR=1\) and \(TR=1\). The Reynolds number of the jet is \(Re=1.57\times 10^{6}\), based on the jet exit diameter. This flow configuration is chosen due to the absence of strong shocks waves. Strong discontinuities must be carefully treated using numerical approaches which are not yet implemented into the solver. Moreover, numerical and experimental data of this flow configuration are available in the literature such as the work of Mendez _et al.[9, 24]_ and the work of Bridges and Wernet.[10] The boundary conditions discussed in section VII are used in the simulations performed in the current thesis. Figure 1 presented a lateral view and a frontal view of the computational domain used by the simulation in where the positioning of each boundary condition is indicated. A flat-hat velocity profile, with \(M=1.4\), is used at the entrance boundary. Riemann invariants are used at the farfield regions. A special singularity treatment is performed at the centerline. A periodicity is imposed in the azimuthal direction in order to create a transparency for the flow. Properties of flow at the inlet and at the farfield regions have to be provided to the code in order to impose the boundary conditions. Density, \(\rho\), temperature, \(T\), velocity, \(U\), Reynolds number \(Re\), and specific heat at constant volume, \(C_{v}\), are provided in the dimensionless form to the simulation. These properties are given by \[\begin{split}\rho_{j}=1.00\,,&\quad\rho_{\infty}=1. 00\,,\\ T_{j}=1.00\,,&\quad T_{\infty}=1.00\,,\\ U_{j}=1.4\,,&\quad U_{\infty}=0.00\,,\\ Re_{j}=1.57\times 10^{6}&\quad C_{v}=1.786\,,\end{split} \tag{79}\] where the \(j\) subscript stands for property at the jet entrance and the \(\infty\) subscript stands for property at the farfield region. Figure 4: 2-D view of the computational meshes used in the current work. ### Large Eddy Simulations Four simulations are performed in the present. The objective is to study the effects of mesh refinement and to evaluate the three different SGS models included into the code. The calculations are performed in two steps. 
First a preliminary simulation is performed in order to achieve a statistically steady state condition. In the sequence, the simulations are run for another period in order to collect enough data for the calculation of time averaged properties of the flow and its fluctuations. The configurations of all simulations are discussed in the current section, towards the description of the preliminary calculations which are performed in order to drive the flow to a statistically steady flow condition. Table 1 presents the operating conditions of all four numerical studies performed in the current research. Mesh A is only used on S1. The other calculations are performed using the refined grid, Mesh B. The stagnated flow condition is used as initial condition for all simulations but S3, which uses the solution of S2 after 10.15 flow through times (FTT). One flow through time is the necessary time for a particle to cross all the domain considering the inlet velocity of the jet. The dimensionless time increment used for all configurations is the biggest one which the solver can handle without diverging the solution. The static Smagorinsky model [3, 4, 5] is used on S1 and S2. The dynamic Smagorinsky model [6, 7] and the Vreman model [8] are used on S3 and S4, respectively. The last column of Tab. 1 represents the period simulated by all numerical studies in order to achieve the statistically steady state flow condition. The choice of this period is related to the computational cost of each study. S1 is the least expensive test case studied. It uses a 14 million point mesh while the other simulations use the 50 million point grid. Therefore, S1 has been run for a longer period in order to achieve the statistically steady state condition. On the other hand, S3 is the most expensive numerical test case. The dynamic Smagorinsky SGS model, which is used by S3, needs more time per iteration when compared with the other SGS models implemented in the code. Hence, S3 has only been run for 5.86 FTT for this preliminary simulation. The simulations are restarted and run for another period in which data of the flow are extracted and recorded in a fixed frequency after the preliminary study. The collected data are time averaged in order to calculate mean properties of the flow and compare with the results of the numerical and experimental references. In the present work, time averaged properties are notated as \(\langle\cdot\rangle\). Table 2 presents the configuration of simulations performed in order to calculate mean flow properties. The second column presents the number of extractions performed during the simulations. Data are extracted each 0.02 dimensionless time in the present work which is equivalent to a dimensionless frequency of 50. The choice of this frequency is based on the numerical work reported in Refs. [9] and [24]. The last two columns of Tab. 2 present the total dimensionless time simulated to calculate the mean properties. A power spectral density (PSD) of the time fluctuation of the axial component of velocity, \(u^{*}\), is calculated in order to study the transient part of the flow. 
The PSD computation is performed using the following \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Simulation & Mesh & SGS & \(\Delta t\) & Initial condition & FTT \\ \hline S1 & A & Static Smagorinsky & \(2.5\times 10^{-5}\) & Stagnated flow & 37.8 \\ S2 & B & Static Smagorinsky & \(1\times 10^{-4}\) & Stagnated flow & 10.15 \\ S3 & B & Dynamic Smagorinsky & \(5\times 10^{-5}\) & Stagnated flow & 5.86 \\ S4 & B & Vreman & \(1\times 10^{-4}\) & S2 – 10.15 FTT & 13.65 \\ \hline \end{tabular} \end{table} Table 1: Configuration of large eddy simulations performed in the present work \begin{table} \begin{tabular}{|c|c|c|c|} \hline Simulation & Nb. Extractions & Frequency & Total time \\ \hline S1 & 2048 & 50 & 40.96 (1.14 FTT) \\ S2 & 3365 & 50 & 67.3 (2.36 FTT) \\ S3 & 2841 & 50 & 56.6 (1.98 FTT) \\ S4 & 1543 & 50 & 30.86 (1.08 FTT) \\ \hline \end{tabular} \end{table} Table 2: Time average configuration methodology: first, sensors are included at three different positions along the lipline of the jet (\(r/D=0.5\)). For each position along the lipline, 120 sensors are allocated in the azimuthal direction. Information at this direction are averaged in order eliminate azimuthal dependence. Table 3 presents the positioning of the sensors in the axial and radial directions. The choice of the positioning is based on the numerical reference of Mendez _et. al._[9, 24] In the sequence, data are extracted from the sensors in order to generate a time-dependent signal. The signal is partitioned into three equal parts and \(u^{*}\) is calculated for all three partitioned signals. In the next step of the methodology, the time fluctuation signals are multiplied by a window function in order to create periodic distribution. The Hamming window function [27] is used in the present work and it is written as \[w(n)=\alpha-\beta cos\left(\frac{2\pi n}{N-1}\right)\,, \tag{80}\] where \(\alpha=0.54\), \(\beta=0.46\), \(n\) stands for the time index and \(N\) stands for the size of the sample. After applying the fast Fourier transformation (FFT) on the signals one can calculate the PSD of \(u^{*}\). In the end, a simple average is applied on the three signals in order to have a final PSD of \(u^{*}\) distribution. In the present work the transient part is studied by the PSD of \(u^{*}\) distribution as function of the number of Strouhal which is given by \[St(t)=\frac{f(t)D}{U_{j}}\,, \tag{81}\] where \(f\) stands for the frequency as a function of the time, \(D\) stands for the inlet diameter and \(U_{j}\) stands for the velocity of the jet at the entrance of the domain. Data are collected from the sensors using the same informations provided by Tab. 2. The minimum and maximum values of the Strouhal number for all simulations are presented in Tab. 4. ### Study of Mesh Refinement Effects Effects of mesh refinement on compressible LES using the JAZzY solver are discussed in the present section. 2-D distribution of properties and profiles of S1 and S2 are collected and compared with numerical and experimental results from the literature.[9, 10, 24] Both simulations use the same SGS model, the static Smagorinsky model.[3, 4, 5] Mesh A is used on S1 and Mesh B is used on S2. Time averaged distributions of the axial component of velocity, density and eddy viscosity are presented in the subsection along with the RMS distribution of all three components of velocity, distributions of the \(\langle u^{*}v^{*}\rangle\) component of the Reynolds stress tensor and distributions of the turbulent kinetic energy, \(k\). 
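A condensed sketch of the PSD methodology described above is given below: the fluctuation signal is split into three segments, each segment is multiplied by the Hamming window of Eq. (80) and Fourier transformed, and the averaged spectrum is expressed as a function of the Strouhal number of Eq. (81). The use of `numpy.fft`, the periodogram normalization and the dB reference are assumptions and do not reproduce the exact post-processing chain of the references.

```python
import numpy as np

def psd_strouhal(u_prime, dt, D=1.0, U_j=1.4, n_parts=3):
    """PSD of a velocity-fluctuation signal following Eqs. (80)-(81).

    The signal is split into n_parts equal segments, each segment is
    multiplied by a Hamming window and Fourier transformed, and the
    resulting spectra are averaged; frequencies are returned as Strouhal
    numbers based on the inlet diameter D and jet velocity U_j.
    """
    n = len(u_prime) // n_parts
    idx = np.arange(n)
    window = 0.54 - 0.46 * np.cos(2.0 * np.pi * idx / (n - 1))   # Eq. (80)
    psd = np.zeros(n // 2 + 1)
    for p in range(n_parts):
        segment = u_prime[p * n:(p + 1) * n] * window
        spectrum = np.fft.rfft(segment)
        psd += np.abs(spectrum) ** 2 * dt / n                    # one-sided periodogram
    psd /= n_parts
    freq = np.fft.rfftfreq(n, d=dt)
    strouhal = freq * D / U_j                                    # Eq. (81)
    return strouhal, 10.0 * np.log10(np.maximum(psd, 1.0e-30))   # PSD in dB

# Example with the dimensionless sampling interval of 0.02 used in the present work:
t = np.arange(3000) * 0.02
u_prime = np.sin(2.0 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)
St, psd_db = psd_strouhal(u_prime, dt=0.02)
```

With the sampling interval of 0.02 listed in Tab. 2, the highest resolvable dimensionless frequency is 25, which corresponds to the maximum Strouhal number of \(25/1.4\approx 17.86\) reported in Tab. 4.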
Figure 5 illustrates the positioning of surfaces and profiles extracted for all simulations performed in the current work. \begin{table} \begin{tabular}{|c|c|c|} \hline Simulation & \(St_{min}\) & \(St_{max}\) \\ \hline S1 & 1.74 & 17.86 \\ S2 & \(1.06\times 10^{-2}\) & 17.86 \\ S3 & \(1.26\times 10^{-2}\) & 17.86 \\ S4 & \(2.31\times 10^{-2}\) & 17.86 \\ \hline \end{tabular} \end{table} Table 4: Strouhal limits for all simulations \begin{table} \begin{tabular}{|c|c|} \hline Signal & Positioning \\ \hline (a) & \((X/D=0.10,r/D=0.5)\) \\ (b) & \((X/D=0.25,r/D=0.5)\) \\ (c) & \((X/D=1.25,r/D=0.5)\) \\ \hline \end{tabular} \end{table} Table 3: Positionig of the sensors used to collect fluctuation data #### Time Averaged Axial Component of Velocity One important characteristic of a round jet flow configurations is the potential core length, \(\delta_{j}^{95\%}\). The potential core, \(U_{j}^{95\%}\), is defined as 95% of the velocity of the jet at the inlet, \[U_{j}^{95\%}=0.95\cdot U_{j}\,. \tag{82}\] Therefore, the potential core length can be defined as the positioning in the centerline where \(U_{j}^{95\%}\) is located. Time averaged results of the axial component of velocity are presented in the subsection. A lateral view of \(\langle U\rangle\) for S1 and S2, side by side, are presented in Fig. 6, where \(U_{j}^{95\%}\) is indicated by the solid line. The positioning of surfaces is indicated in Fig. 5. Table 5 presents the size of the potential core of S1, S2 and the numerical results from Refs. [9] and [24], along with the relative error compared with the experimental data.[10] Comparing the results, one can observe the difference in the potential core length between S1 and S2. The results of the first case present a smaller \(\delta_{j}^{95\%}\) when compared to results of S2, _i.e._, 5,57 and 6.84, respectively. One can say that the S1 solution is over dissipative when compared to the S2 results. The jet vanishes earlier in S1. The mesh which is used in the S1 test case is very coarse when compared with the grid used for S2. This lack of resolution can generate very dissipative solutions which yield the under prediction of the potential core length. The mesh refinement reduced in 14% the relative error of S2 when compared to the experimental data. Figure 5: Positioning in the computational domain of surfaces studied in the present work \begin{table} \begin{tabular}{|c|c|c|} \hline Simulation & \(\delta_{j}^{95\%}\) & Relative error \\ \hline S1 & 5.57 & 40\% \\ S2 & 6.84 & 26\% \\ Mendez _et al._ & 8.35 & 8\% \\ \hline \end{tabular} \end{table} Table 5: Potential core length and relative error of S1 and S2. Profiles of \(\langle U\rangle\) from S1 and S2, along the mainstream direction, and the evolution of \(\langle U\rangle\) along the centerline and along the lipline are compared with numerical and experimental results in Fig. 7. The centerline and the lipline are indicated as (E) and (F) in Fig. 5. The dash-point line and the solid line stand for the results of the S1 and S2 test cases, respectively, in Fig. 7. The square symbols stand for the LES results of Mendez _et al._, [9, 24] while the triangular symbols stand for the experimental data of Bridges and Wernet [10]. The comparison of profiles indicates that distributions of \(\langle U\rangle\) calculated on S1 and S2 correlates well with the references until \(X=5.0D\). The \(\langle U\rangle\) profile calculated with S2 at \(X=10.0D\) is under predicted when compared with the reference profiles. 
However, it is closer to the reference when compared with the S1 results. One can notice that S1 and S2 \(\langle U\rangle\) distributions along the centerline correlates with the references in the regions which the grid presents a good resolution. When the mesh spacing increases, due to the mesh coarsening in the streamwise direction, the time average axial component of velocity start to correlate poorly with the reference. The time averaged axial component of velocity calculated by S1, along the lipline, correlates better with the reference than the same property calculated on S2. The second case overestimates the magnitude of \(\langle U\rangle\) until \(X\approx 6.0D\). #### Root Mean Square Distribution of Time Fluctuations of Axial Velocity Component The time fluctuation part of the flow is also important to be studied. The present work evaluates the axial and radial velocity components using the root mean square. A lateral view of \(u_{RMS}^{*}\) computed by S1 and S2 simulations are presented in Figs. 8(a) and 8(b), respectively. The figures indicate that the property calculated by S1 is more spread when compared with the same property computed by S2. The mesh A refinement along with the spatial discretization can generate a more dissipative solution which creates the spread effect of \(u_{RMS}^{*}\) calculated by S1 when compared to the same property calculated by S2. The same strategy used to compare the mean profiles of velocity is used here for the study of \(u_{RMS}^{*}\). Figure 9 presents the comparison of root mean square profiles of \(u^{*}\) calculated by S1 and S2 with reference results. The profile of \(u_{RMS}^{*}\) calculated by S2 fits perfectly the reference profiles at \(X=2.5D\). The profile calculated by S1, at the same position, presents a good correlation with numerical and experimental data. However, it does not correctly represent the two peaks of the profile. For \(X=5.0D\) and \(X=10.0D\) the profiles start to diverge from the reference results. At \(X=15.0D\), the \(u_{RMS}^{*}\) profile, calculated by S1, present a different shape and different magnitude from the reference profiles. At the same position the fluctuation profile computed by S2 reproduce the same peaks of the reference data. However, the shape of the profile is completely different from the shape of profiles calculated by the references. Figure 6: Lateral view of the averaged axial component of velocity, \(\langle U\rangle\), for S1 and S2. \(\langle\bullet\rangle\) indicates the potential core of the jet, \(U_{j}^{95\%}\). Figure 7: Profiles of averaged axial component of velocity at different positions within the computational domain. 
Figure 8: Lateral view of RMS of time fluctuation of axial component of velocity, \(u^{*}_{RMS}\), RMS of time fluctuation of radial component of velocity, \(v^{*}_{RMS}\), and \(\langle u^{*}v^{*}\rangle\) Reynolds shear stress tensor component, for S1 and S2. Figure 9: Profiles of RMS of time fluctuation of axial component of velocity, \(u^{*}_{RMS}\), for S1 and S2, at different positions within the computational domain, compared with the numerical and experimental reference data. Figures 9(e) and 9(f) present the distribution of \(u^{*}_{RMS}\) along the centerline and lipline of the jet. The distributions calculated by S1 and S2 are somewhat different from the results of the references.
However, one can notice an upgrade on the solution when comparing S1 and S2 results. The results achieved using the more refined mesh are closer to the reference than the results obtained using mesh A. #### Root Mean Square Distribution of Time Fluctuations of Radial Velocity Component The time fluctuation of the radial component of velocity is also compared with the reference data. Distributions of root mean square of \(v^{*}_{RMS}\) are presented in the subsection. Figures 8(c) and 8(d) illustrate a lateral view of the distribution of \(v^{*}_{RMS}\) computed by S1 and S2, respectively. A significant divergence between the results can be easily noticed on the lateral view of the \(v^{*}_{RMS}\) distribution. From \(X=2.5D\) towards the exit boundary the magnitude of fluctuation calculated by S1 is much higher than the magnitude of \(v^{*}_{RMS}\) computed by S2. Four profiles of \(v^{*}_{RMS}\) in the radial direction at \(X=2.5D\), \(X=5.0D\), \(X=10.0D\) and \(X=15.0D\) are presented in Fig. 10. S1 results presented a good correlation with the reference at \(X=2.5D\), where only the peaks of the profile are not well represented. For all other positions on the axial direction studied in the present research the \(v^{*}_{rms}\) profiles of S1 are overestimated and poorly correlates with the reference. On the other hand, \(v^{*}_{RMS}\) profiles calculated using a refined grid fits very well with the results of the numerical reference at \(X=2.5D\) and \(X=5.0D\). At \(X=10.0D\) the fluctuation profile calculated by S2 presents a better correlation with the experimental data than the numerical reference. At \(X=15.0D\) the S2 does not present a good profile of \(v^{*}_{RMS}\). #### Component of Reynolds Stress tensor Figures 8(e), 8(f) and 11 present lateral views and profiles of \(\langle u^{*}v^{*}\rangle\) component of the Reynolds stress tensor. One can observe that the distributions of the property obtained by S1 is over dissipated when compared with results collected from S2. Comparing the profiles with the reference, one can notice that the profiles achived in S1 and S2 are really far from the numerical and experimental data. The solver has produced with succes the shape of \(\langle u^{*}v^{*}\rangle\) profile. However, it fails to represent the peak of \(\langle u^{*}v^{*}\rangle\) for all profiles compared. #### Time Averaged Eddy Viscosity The effects of the mesh on the SGS modeling are also studied in the present subsection. Figure 12 presents distributions of time averaged eddy viscosity, \(\langle\mu_{t}\rangle\), calculated on S1 and S2. The Smagorinsky model [3, 4, 5] is used on both simulations. This SGS closure is highly dependent on the local mesh size. One can notice that \(\langle\mu_{t}\rangle\) presents higher values on the distributions obtained by S1. On the other hand, the eddy viscosity is only acting on the regions where the mesh is no longer very refined for the S2 study. The \(\langle\mu_{t}\rangle\) is very low in the region where the grid spacing is small. The eddy viscosity can contribute to the dissipative characteristic of the simulations. Specially for meshes with low point resolution. The divergence observed on the distribution of \(\langle\mu_{t}\rangle\) calculated by S1 and S2 is an example of such effect. 
However, it is important to notice that, even in regions where the mesh is not very refined, yet, not coarse, and where the eddy viscosity can be neglected, some distributions of properties, calculated by S2, have shown to be very dissipative when compared with the LES reference and with the experimental data. Therefore, one can state that the truncation errors originated from the second order spatial discretization, used on the simulations here performed, can easily overcome the effects of SGS modeling if the grid spacing is not small enough. The issue is very important for the structured mesh approach. Increasing mesh resolution in the region of interest expressively rises up the number of points all over the computational domain. Local refinement for structured mesh is not straight forward and the code used in the current work does not have such approach available. #### Power Spectral Density The power spectral density of time fluctuation of the axial component of velocity is studied in the present work in order to better understand the transient portion of the solution. Figure 13 presents the PSD of \(u^{*}\), in \(dB\), as function of the Strouhal number for S1 and S2. The signals are collected from the sensors allocated at the positions presented in Tab. 3. The PSD of \(u^{*}\) are shifted of -150**dB** and -300**dB** for \(X=0.25D\) and \(X=1.25\), respectively, in order to separate plots. One can observe that PSD signals obtained using S1 and S2 present a similar behavior at \(X=1.25D\) on the ilpline. On the other hand, it is possible to notice significant differences on the shape and on the peaks positioning for \(St>1.0\) at \(X=0.1D\) and \(X=0.25D\). The divergence indicates that the dissipative characteristic of S1 have changed the positioning of the turbulent transition when compared with the S2 study. ### Subgrid Scale Modeling Study After the mesh refinement study the three SGS models added to the solver are compared. S2, S3 and S4 simulations are performed using the static Smagorinsky model [3, 4, 5], the dynamic Smagorinsky model [7, 17] and the Vreman model [8], respectively. The same mesh with 50 million points is used for all three simulations. The stagnated flow condition is used as intial condition for S2 and S3. A restart of S2 is used as initial condition for the S4 simulation. The configuration of the numerical studies is presented at Tab. 1. The same comparisons performed on the study of mesh refinement effects, Sec. VIII.E, are performed for the SGS modeling study. Figure 10: Profiles of RMS of time fluctuation of radial component of velocity, \(v^{*}_{RMS}\), for S1 nd S2, at different positions within the computational domain. (\(\lx@math@degree\)\(\lx@math@degree\)\(\lx@math@degree\)), S1; (\(\lx@math@degree\)), S2; (\(\square\)), numerical data; (\(\triangle\)), experimental data. ### Time Averaged Axial Component of Velocity Effects of the SGS modeling on the time averaged results of the axial component of velocity are presented in the subsection. A lateral view of \(\langle U\rangle\) for S2, S3 and S4, side by side, are presented in Fig. 14, where \(U_{j}^{95\%}\) is indicated by the solid line. Table 6 presents the size of the potetial core of S2, S3 and S4 and the numerical reference [9, 24] along with the relative error compared with the experimental data [10]. Comparing the results, one cannot observe significant differences on the potential core length between S2, S3 and S4. 
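The potential-core lengths reported in Tab. 5, and later in Tab. 6, follow from the definition of Eq. (82) applied to the time-averaged centerline velocity. A minimal sketch of this extraction is given below; the linear interpolation between grid stations and the synthetic decay profile are assumptions for illustration only.

```python
import numpy as np

def potential_core_length(x_over_d, u_centerline, u_jet=1.4):
    """Potential core length per Eq. (82): first centerline station where
    the time-averaged axial velocity drops below 0.95 * U_j.

    Linear interpolation between the two neighboring grid points is used;
    this is an illustrative post-processing step, not the solver's own tool.
    """
    u_core = 0.95 * u_jet
    below = np.where(u_centerline < u_core)[0]
    if below.size == 0:
        return np.nan                        # core longer than the extracted profile
    i = below[0]
    if i == 0:
        return x_over_d[0]
    x0, x1 = x_over_d[i - 1], x_over_d[i]
    u0, u1 = u_centerline[i - 1], u_centerline[i]
    return x0 + (u_core - u0) * (x1 - x0) / (u1 - u0)

# Synthetic centerline profile that stays at U_j and then decays:
x = np.linspace(0.0, 20.0, 401)
u = np.where(x < 6.0, 1.4, 1.4 * np.exp(-0.1 * (x - 6.0)))
print(potential_core_length(x, u))           # close to x/D = 6.5 for this profile
```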
The distribution of \(\langle U\rangle\) calculated using the dynamic Smagorinky model has shown to be \begin{table} \begin{tabular}{|c|c|c|} \hline Simulation & \(\delta_{j}^{95\%}\) & Relative error \\ \hline S2 & 6.84 & 26\% \\ S3 & 6.84 & 26\% \\ S4 & 6.28 & 32\% \\ Mendez _et al._ & 8.35 & 8\% \\ \hline \end{tabular} \end{table} Table 6: Potential core length and relative error of S2, S3 and S4. Figure 11: Profiles of the \(\langle u^{*}v^{*}\rangle\) Reynolds shear stress tensor component, for S1 and S2, at different positions within the computational domain. (\(\boldsymbol{\hat{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{ \boldsymbol{ }}}}}}}}}}}}}}}\)), S1; (\(\boldsymbol{\hat{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ }}}}}}}}}}}}}}}\)), S2; (\(\Box\)), numerical data; (\(\triangle\)), experimental data. slightly more concentrated at the centerline region. S2 and S4 time averaged distribution of \(U\) are, on some small scale, more spread than the distribution obtained by S3. Profiles of \(\langle U\rangle\) from S2, S3 and S4, along the mainstream direction, and the evolution of \(\langle U\rangle\) along the centerline and, also along the lipline, are compared with numerical and experimental results in Fig. 15. The solid line, the dashed line and the circular symbol stand for the profiles of \(\langle U\rangle\) computed by S2, S3 and S4, respectively. Figure 12. Lateral view and detailed view of time averaged eddy viscosity, \(\langle\mu_{t}\rangle\), for S1 and S2. Figure 13. Power spectral density of \(u^{*}\) as function of the Strouhal number along the lipline of the jet. (),, S1, (), S2. A shift of -150 dB and -300 dB has been added to the PSD in order to separate plots for \(X=0.25D\) and \(X=1.25D\), respectively. S4, respectively. The reference data are represented by the same symbols presented in the mesh refinement study. The comparison of profiles indicates that distributions of \(\langle U\rangle\) calculated on S2, S3 and S4 correlates well with the references until \(X=5.0D\). For \(X>10.0D\) all SGS models fail to predict the correct profile. One can notice that the evolution of \(\langle U\rangle\) along the centerline, calculated by all three simulations, are in good agreement with the numerical and experimental reference data at the region where the mesh presents a good resolution. Moreover, the three distributions calculated using different SGS closures have presented the very similar behavior. The dynamic Smagorinsky model and the Vreman model correlates better with the experimental data for \(X<5.0D\) than the classic Smagorinsky closure does. However, all simulations tend to not predict well the magnitude of \(\langle U\rangle\) on the lipline when the mesh size increases, \(X>5.0D\). #### Root Mean Square Distribution of Time Fluctuations of Axial Velocity Component A lateral view of \(u^{*}_{RMS}\) computed by S2, S3 and S4 simulations are presented in Figs. 16(a), 16(b) and 16(c), respectively. The profiles of \(u^{*}_{RMS}\) at \(X=2.5D\) obtained by S2, S3 and S4 are in good agreement with the numerical reference, as one can observe in Fig. 
However, all simulations, including the LES reference, fail to predict the peaks of \(u^{*}_{RMS}\). At \(X=5.0D\) all three simulations have difficulties in predicting the peaks of the profile. Nonetheless, the results are still in good agreement with the literature. In the sequence, the profile of \(u^{*}_{RMS}\) at \(X=10.0D\) calculated by S2, S3 and S4 starts to diverge from the reference results. Finally, at \(X=15.0D\), all SGS closures, but the dynamic Smagorinsky model, fail to predict the correct profile. The S3 simulation has produced a profile of \(u^{*}_{RMS}\) at \(X=15.0D\) that is closer to the experimental data than the numerical reference. All three simulations have presented overestimated distributions of \(u^{*}_{RMS}\) along the centerline. However, for \(10D<X<15D\), the Vreman model correctly reproduces the magnitude of \(u^{*}_{RMS}\). All simulations performed in the present work have produced noisy distributions that diverge from the experimental data along the lipline. One can notice that the numerical reference has also produced an overestimated distribution of \(u^{*}_{RMS}\) at \(X<10D\).

#### Root Mean Square Distribution of Time Fluctuations of Radial Velocity Component

Effects of SGS modeling on the time fluctuation of the radial component of velocity are also compared with the reference data. Figures 16(d), 16(e) and 16(f) illustrate a lateral view of the distribution of \(v^{*}_{RMS}\) computed by S2, S3 and S4, respectively. The SGS models do not significantly affect the distribution of \(v^{*}_{RMS}\). All distributions calculated by S2, S3 and S4 have shown similar behavior. Four profiles of \(v^{*}_{RMS}\) in the radial direction at \(X=2.5D\), \(X=5.0D\), \(X=10.0D\) and \(X=15.0D\) are presented in Fig. 18. One can observe that, for \(X\leq 10.0D\), all the profiles calculated in S2, S3 and S4 are close to the reference. Moreover, the results of the static and the dynamic Smagorinsky models are in better agreement with the experimental data than the LES reference. At \(X=15.0D\), all simulations performed in the current work fail to predict the correct \(v^{*}_{RMS}\) profile.

Figure 14: Lateral view of the averaged axial component of velocity, \(\langle U\rangle\), for S2, S3 and S4. The solid line indicates the potential core of the jet, \(U^{95\%}_{j}\).

Figure 15: Profiles of the averaged axial component of velocity at different positions within the computational domain. The solid line, the dashed line and the circular symbol stand for S2, S3 and S4, respectively; (\(\square\)), numerical data; (\(\triangle\)), experimental data.

Figure 16: Lateral view of RMS of time fluctuation of the axial component of velocity, \(u^{*}_{RMS}\), RMS of time fluctuation of the radial component of velocity, \(v^{*}_{RMS}\), and the \(\langle u^{*}v^{*}\rangle\) Reynolds shear stress tensor component, for S2, S3 and S4.

Figure 17: Profiles of RMS of time fluctuation of the axial component of velocity, \(u^{*}_{RMS}\), for S2, S3 and S4, at different positions within the computational domain, compared with the numerical and experimental data.

#### Component of Reynolds Stress Tensor

Figures 16(g), 16(h) and 16(i) present a lateral view and profiles of the \(\langle u^{*}v^{*}\rangle\) component of the Reynolds stress tensor computed using the three different SGS models, respectively. One can observe that the simulations performed using different SGS models have produced very similar distributions of \(\langle u^{*}v^{*}\rangle\) for the region where the mesh is refined.
However, for \(X>8.0D\) the properties calculated by the different SGS closures present different behavior. The spreading is not the same for S2, S3 and S4 where \(X>8.0D\). Therefore, one can state that the static Smagorinsky, the dynamic Smagorinsky and the Vreman models react differently to the coarsening of the grid. All numerical simulations performed in the present work have failed to correctly predict the profiles of \(\langle u^{*}v^{*}\rangle\) presented in Fig. 19. The peaks of this component of the Reynolds stress tensor do not correlate with the reference results. However, one should notice that the LES performed by the reference has also presented difficulties in calculating the same peaks. The cause of the issue could be related to an eventual lack of grid points in the radial direction. In spite of that, more studies on the subject are necessary in order to understand such behavior.

### Time Averaged Eddy Viscosity

The distribution of the eddy viscosity, \(\mu_{t}\), is discussed in the current subsection. Figure 20 presents distributions of time averaged eddy viscosity calculated using the different SGS models. All subgrid scale closures used in the present work, the static Smagorinsky,[3, 4, 5] the dynamic Smagorinsky[6, 7] and the Vreman[8] models, are dependent on the local mesh size by design. This characteristic is exposed in the lateral view of the flow presented in Fig. 20. The SGS models are only acting in the region where the mesh presents a low resolution. Near the entrance domain, where the computational grid is very refined, the eddy viscosity can be neglected. The remark goes in the same direction as the work of Li and Wang,[28] which indicates that SGS closures introduce numerical dissipation that can be used as a stabilizing mechanism. However, this numerical dissipation does not necessarily add more physics of the turbulence to the LES solution. Therefore, in the present work, the numerical truncation, which generates the dissipative characteristic of JAZzY solutions, has shown to overcome the effects of the SGS modeling. The mesh needs to be very fine in order to achieve good results with second-order spatial discretizations. The grid refinement generates very small grid spacing. Consequently, the SGS models, which are strongly dependent on the filter width, do not affect the solution much. An LES of compressible flow configurations without the use of an SGS closure would be welcome in order to complete such a discussion.

#### Power Spectral Density

The power spectral density of \(u^{*}\) is studied in the comparison of SGS modeling. Figure 21 presents the PSD of \(u^{*}\), in dB, as a function of the Strouhal number for S2, S3 and S4. The same methodology used in the mesh refinement study is applied here. The signals of \(u^{*}\) are collected from the sensors allocated in the computational domain. The signals of Fig. 21 are shifted by -150 dB and -300 dB for \(X=0.25D\) and \(X=1.25D\), respectively, in order to separate the plots. One can observe that the PSD signals along the lipline obtained using S2, S3 and S4 have shown the same behavior. A small difference can be noticed at higher Strouhal numbers for the first two sensors, located at \(X=0.1D\) and \(X=0.25D\). This remark is aligned with the discussion about the eddy viscosity for the different SGS models. The sensors are located in the region where the mesh presents excellent resolution.
Therefore, the effects of the static Smagorinsky, the dynamic Smagorinsky and the Vreman models, which are strongly dependent on the filter width, can be neglected on \(u^{*}\) for \(X<1.25D\).

Figure 21: Power spectral density of \(u^{*}\) as a function of the Strouhal number along the lipline of the jet, for S2, S3 and S4. A shift of -150 dB and -300 dB has been added to the PSD in order to separate the plots for \(X=0.25D\) and \(X=1.25D\), respectively.

Figure 20: Lateral view of time averaged eddy viscosity, \(\langle\mu_{t}\rangle\), for S2, S3 and S4.

## Concluding Remarks

The current work is a study of the effects of different subgrid scale models on perfectly expanded supersonic jet flow configurations using a centered second-order spatial discretization. A formulation based on the System I set of equations is used in the present work. The time integration is performed using a five-step second-order Runge-Kutta scheme. Four large eddy simulations of compressible jet flows are performed in the present research using two different mesh configurations and three different subgrid scale models. Their effects on the large eddy simulation solution are compared and discussed. The mesh refinement study has indicated that in the region where the grid presents high resolution, the simulations are in good agreement with the experimental and numerical references. For the mesh with 14 million points the simulation has produced good results for \(X<2.5D\) and \(-1.5D<Y<0.5D\). For the other mesh, with 50 million points, the simulations provided good agreement with the literature for \(X<5.0D\) and \(-1.5D<Y<0.5D\). The eddy viscosity, calculated by the static Smagorinsky model, presents very low levels in the region where the results have good correlation with the results of the literature. The refined grid used in the mesh refinement study, mesh B, is used for the comparison of the effects of the SGS models on the results of the large eddy simulations. Three compressible jet flow simulations are performed using the classic Smagorinsky model,[3, 4, 5] the dynamic Smagorinsky model[6, 7] and the Vreman model.[8] All three simulations presented similar behavior. The results presented good agreement with the reference for \(X<5.0D\). In the region where the grid is very fine and the results correlate well with the literature, the eddy viscosity provided by the SGS model presents very low values. The reason is related to the fact that the SGS closures used in the current work are strongly dependent on the filter width, which is proportional to the local mesh size. The numerical results indicated that it is possible to achieve good results using a second-order spatial discretization. The mesh ought to be well resolved in order to overcome the truncation errors from the low-order numerical scheme. Very fine meshes originate very small filter widths. Consequently, the effects of the eddy viscosity calculated by the SGS models on the solution become unimportant. The work of Li and Wang[28] has presented similar conclusions for simplified problems. The authors indicate that SGS closures introduce numerical dissipation that can be used as a stabilizing mechanism. However, this numerical dissipation does not necessarily add more physics of the turbulence to the LES solution. Simulations without the use of any SGS model are welcome and could reinforce the argument.
## Acknowledgments The authors gratefully acknowledge the partial support for this research provided by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico, CNPq, under the Research Grants No. 309985/2013-7, No. 400844/2014-1 and No. 443839/2014-0. The authors are also indebted to the partial financial support received from Fundacao de Amparo a Pesquisa do Estado de Sao Paulo, FAPESP, under the Research Grants No. 2008/57866-1, No. 2013/07375-0 and No. 2013/21535-0.
2305.03877
Semantically Optimized End-to-End Learning for Positional Telemetry in Vehicular Scenarios
End-to-end learning for wireless communications has recently attracted much interest in the community, owing to the emergence of deep learning-based architectures for the physical layer. Neural network-based autoencoders have been proposed as potential replacements of traditional model-based transmitter and receiver structures. Such a replacement primarily provides an unprecedented level of flexibility, allowing to tune such emerging physical layer network stacks in many different directions. The semantic relevance of the transmitted messages is one of those directions. In this paper, we leverage a specific semantic relationship between the occurrence of a message (the source), and the channel statistics. Such a scenario could be illustrated for instance, in vehicular communications where the distance is to be conveyed between a leader and a follower. We study two autoencoder approaches where these special circumstances are exploited. We then evaluate our autoencoders, showing through the simulations that the semantic optimization can achieve significant improvements in the BLERs (up till 93.6%) and RMSEs (up till 87.3%) for vehicular communications leading to considerably reduced risks and needs for message re-transmissions.
Neelabhro Roy, Samie Mostafavi, James Gross
2023-05-05T23:35:42Z
http://arxiv.org/abs/2305.03877v2
# Semantically Optimized End-to-End Learning for Positional Telemetry in Vehicular Scenarios ###### Abstract End-to-end learning for wireless communications has recently attracted much interest in the community, owing to the emergence of deep learning-based architectures for the physical layer. Neural network-based autoencoders have been proposed as potential replacements of traditional model-based transmitter and receiver structures. Such a replacement primarily provides an unprecedented level of flexibility, allowing to tune such emerging physical layer network stacks in many different directions. The semantic relevance of the transmitted messages is one of those directions. In this paper, we leverage a specific semantic relationship between the occurrence of a message (the source), and the channel statistics. Such a scenario could be illustrated for instance, in vehicular communications where the distance is to be conveyed between a leader and a follower. We study two autoencoder approaches where these special circumstances are exploited. We then evaluate our autoencoders, showing through the simulations that the semantic optimization can achieve significant improvements in the BLERs (up till 93.6%) and RMSEs (up till 87.3%) for vehicular communications leading to considerably reduced risks and needs for message transmissions. End-to-end learning, semantic optimization, deep learning, autoencoder, wireless communications, vehicular communications ## I Introduction Softwarization of the wireless network stack has for a long time been an important research goal pursued by the community. On the research side, this materialized over a decade ago with the introduction of Software-Defined Radios, which despite their different architectures represented a big step forward in introducing more flexible applications by leveraging software-based implementations. Since then, the quest for more and more flexible wireless transceiver structures has steadily evolved and has lately been strongly influenced by machine learning. Especially the significant breakthroughs in neural networks for classification and pattern recognition associated with the advances in training acceleration through the use of GPUs about ten years ago, have led to an enormous interest in using machine learning for various problems in communication systems. In particular, deep learning (DL) has recently shown great potential to become a powerful tool to design, optimize, adapt, and secure wireless communications and in doing so, introducing an unprecedented level of flexibility in the transceiver structures. Machine learning applications to communication systems' design have thus received significant research attention recently. With respect to the implemented architectures, works either focus on substituting individual functions in the transceiver chains, or more progressively substituting larger blocks, primarily in the physical layer. Examples of the first category comprise for instance works on signal detection [1], channel estimation [2], or signal demapping in broadband wireless communication systems [3]. In all cases, it can be shown that deep learning, given sufficient training data, is either at par with legacy (model-based) approaches, or even outperforms them. Depending on the application, this can come with a lower complexity of the learning approach. In contrast to learning individual transceiver functions, substituting larger functions can achieve more flexibility [4]. 
The most flexible approach to date is to substitute entire blocks of transceivers by so called end-to-end approaches [5]. Here, in contrast to [4], the entire transmitter and receiver are substituted through DNNs, which allows for highly flexible signalling schemes. In detail, variational autoencoders are utilized for the joint training of transmitter and receiver. Such end-to-end approaches are most consequent in moving away from model-based transceivers, potentially jeopardising traditional system standardization. End-to-end data-driven communication methods provide principally new ways to refine communication systems due to the high degree of flexibility that they introduce. Among other directions, semantic optimization of end-to-end learning systems is a promising one. This idea goes back to Shannon and Weaver [6] in which the primary focus is how precisely transmitted symbols over an erroneous channel convey some desired meaning. While classical information theory concerns only successful transmission of symbols from transmitter to receiver and improving bit error rate (BER), semantic communication systems are tuned towards minimizing the semantic error instead of BER. Leveraging these ideas in the context of end-to-end learning for communication systems has been recently addressed by several works [7]. Authors in [8] and [9] devised a semantic communication scheme for the text transmission scenario over channels with various signal-to-noise ratios (SNRs). Utilizing natural language processing (NLP) techniques combined with DL, their system essentially encodes and decodes the meaning of each input sentence similar to a source codec. In [10], authors consider the scenario of correlated sources and devise a deep joint source-channel codec for AWGN channels. They model the source by the Gauss-Markov process and the temporal information is extracted by utilizing recurrent neural networks (RNNs). In contrast to these works, in this paper we assume a different scenario where there is a semantic relation between the source message and the channel statistics. Such a relation can be illustrated in positional telemetry or position-related transmission scenarios where there is a relation between the transmitted information and the channel quality. For example, in Vehicle-to-vehicle (V2V) communications, it becomes all the more important to ensure correct reception of messages between the vehicles, as they get closer to each other. To exemplify our proposed approach, we consider a V2V communication based set-up with a leader-follower scenario where the distance between them is estimated by the follower and subsequently transmitted to the leader. In this set-up, the transmitted message directly correlates to the channel statistic, since for higher distances the SNR is generally lower. We show that in such a set-up a large optimization potential results from building standard end-to-end autoencoders from training samples that exhibit a relationship between the message semantic and the channel state. This advantage persists over a substantial parameter range, and can be further tuned by modifications of the loss function. The rest of the paper is structured in the following manner. In Section II we first introduce the detailed system model, before we discuss in Section III our main approach. Numerical results are presented in Section IV, while the paper is concluded in Section V. 
## II System Model

Our work considers a simple set-up with a single-antenna transmitter, a channel and a single-antenna receiver as described in Fig. 1. The transmitter communicates a message: \[s\in\mathcal{M}=\{1,2,\ldots,M\}\] to the receiver, across the channel. To realize this, \(n\) complex baseband symbols are transmitted forming a vector \(\mathbf{x}\in\mathbb{C}^{n}\) under a power constraint. Each message \(s\) can be represented by a sequence of bits of length \(k=\log_{2}(M)\). Hence, the resulting communication rate can be obtained as \(R=k/n\) in bits/channel use. The channel \(h\) between transmitter and receiver introduces path loss and shadow fading effects, leading to a substantial distortion of the originally sent message. Path loss and shadow fading effects are assumed to impact all baseband symbols of a sent message equally, but vary from message to message. At the receiver, additive white Gaussian noise \(\mathbf{n}\) further corrupts the transmitted symbols, such that the receiver ends up with the received vector \(\mathbf{y}\in\mathbb{C}^{n}\). The resulting signal model is thus given as: \[\mathbf{y}=h\cdot\mathbf{x}+\mathbf{n}.\] From this receive statistic, an estimate \(\tilde{s}\) of \(s\) is obtained, where the corresponding block-error rate (BLER) is obtained as: \[P_{e}=\frac{1}{M}\sum_{s}\Pr(\tilde{s}\neq s\,|\,s).\] Assuming the transmit power to be denoted by \(P_{tx}\) while the noise variance is given by \(\sigma^{2}\), the SNR of a received message results in \[\gamma=\frac{P_{tx}\,h^{2}}{\sigma^{2}}.\] In this work we consider circumstances that map the occurrence of a message \(s\) to a corresponding channel attenuation \(h\). As an example scenario, consider a leader-follower set-up in a vehicular environment, where the follower may wish to convey its distance \(d\) towards the leader through a wireless system, as shown in Fig. 1. Given a certain distance estimate, the approximate channel gain \(h\) is specific to the distance estimate. Likewise, for a given attenuation \(h\) due to path loss and shadow fading, only a certain set of messages occurs. The scenario can be extended towards the communication of position estimates or positional telemetry to fixed infrastructure nodes (V2I communication), or other communication occurrences where for a spatial region only subsets of the message set might occur. In the chosen set-up, beyond the BLER as the standard performance metric, the root mean squared error between the transmitted and estimated message is relevant due to the semantics of the distance between vehicles. Our interest is thus in finding complex symbol representations that exploit the specific channel-message relationship, while the semantic error between sent and estimated message is more relevant than a pure BLER metric. This motivates us to come up with a loss function incorporating the RMSEs between the messages. Additionally, we choose the Root Mean Squared Error (RMSE) as the loss metric to analyse the performance of our proposed model. The RMSE can be defined as: \[P_{RMSE}=\sqrt{\frac{1}{N}\sum_{s}(\hat{s}-s)^{2}}\]

## III Architecture and Approach

In this section we present our system architecture and the approaches adopted to realize it.

### _Baseline Approach_

Dorner _et al._ in [5] proposed an end-to-end learning based communication system, built, trained and run solely on deep neural networks, using unsynchronized off-the-shelf software defined radios (SDRs), which acts as our baseline approach.
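For reference, the quantities defined in the system model above (the signal model, the SNR \(\gamma\), the BLER \(P_{e}\) and the RMSE \(P_{RMSE}\)) could be evaluated numerically along the lines of the NumPy sketch below. It is only a minimal illustration under our own assumptions; in particular, the function names and the way the transmit power is applied to the unit-power symbol vector are not taken from the paper.

```python
import numpy as np

def transmit(x, h, p_tx, noise_var, rng):
    """Send one block of n unit-power complex symbols over the flat channel.

    Implements y = h * x + n, with the transmit power applied as a scaling of
    the symbol vector (an assumption) and complex AWGN of variance noise_var.
    """
    noise = rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)
    noise *= np.sqrt(noise_var / 2.0)
    y = h * np.sqrt(p_tx) * x + noise
    snr = p_tx * h ** 2 / noise_var      # gamma = P_tx * h^2 / sigma^2
    return y, snr

def block_error_rate(s_hat, s):
    """Empirical BLER: fraction of wrongly decoded messages."""
    return np.mean(np.asarray(s_hat) != np.asarray(s))

def rmse(s_hat, s):
    """Root mean squared error between estimated and transmitted messages."""
    s_hat, s = np.asarray(s_hat, float), np.asarray(s, float)
    return np.sqrt(np.mean((s_hat - s) ** 2))
```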
The baseline approach aims to reproduce the input at the transmitter as the output at the receiver end, reinforcing the underlying idea of end-to-end learning. The transmitter in their proposed autoencoder architecture comprises an embedding layer with an exponential linear unit (ELU) activation, succeeded by a feedforward neural network layer whose \(2n\)-dimensional output is cast to an \(n\)-dimensional complex-valued vector by considering real and imaginary halves. A normalization layer forms the last layer of the transmitter, ensuring that the power constraints on the encoder output \(x\) are met. For the channel characteristics, an additive white Gaussian noise (AWGN) channel trained at a constant Signal-to-Noise Ratio (SNR) is chosen. Their proposed receiver concatenates the real and imaginary parts of the preceding channel output to a real output, which is succeeded by a 'softmax'-activated feedforward neural network. The output of this layer: \[b\in(0,1)^{M}\] is a probability vector containing the corresponding probabilities of all possible messages. The final estimate \(\hat{s}\) of \(s\) is derived from the largest index amongst the elements of \(b\). The autoencoder is consequently trained using stochastic gradient descent (SGD). They use the cross-entropy loss function, as the task at hand is one of classification. The resulting loss function is thus: \[L_{Loss-Baseline}=-\log(b_{i})\] where \(b_{i}\) corresponds to the \(i^{th}\) element of the vector \(b\).

### _Semantically Optimized System_

The idea behind our first proposed system is to study the impact for cases where messages occur only in conjunction with certain channel statistics, as described in Section II for transmitting the distance in a leader-follower scenario. For a given distance, the transmission is distorted by the distance-dependent path loss as well as a random shadow fading component (in addition to the noise at the receiver). The resulting receive statistic is specific to the distance. Assuming \(M\) to encode a certain number of distance settings, we obtain a semantically optimized system by training the autoencoder from Section III-A with the specific channel for message \(s\). Fig. 1 exemplifies this semantically optimized architecture, where the input message sequence \(s\) is fed into the transmitter, leading to an output of \(x\) from the transmitter. The output vector then passes through the channel \(h\), which is dependent on the input \(s\). The vector \(h\) creates a mapping of each message comprising \(s\) to the corresponding channel conditions emanating out of the path loss model of our choice. The output of the channel \(y\) is then fed to the receiver where, after passing through the DNNs and the subsequent softmax activation, we obtain our estimate \(\hat{s}\) of the original input signal \(s\).

### _Weighted-semantically Optimized System_

As described before, the neural network architecture is designed to minimize the cross-entropy loss incurred during the encoding and decoding process. Our weighted-semantically optimized system builds on top of the aforementioned semantically optimized system by bringing in an additional layer of semantic awareness when computing the loss. We propose a combination of the cross entropy loss function and a modified root-mean-squared-loss function as our loss function, incorporating the labels \(s\).
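To make the baseline structure just described concrete, the following is a minimal Keras sketch of such an autoencoder, assuming TensorFlow (the framework the authors name in Sec. IV). The hidden-layer sizes, the ReLU layer at the receiver, the exact power normalization, and the use of sparse categorical cross-entropy are our own assumptions, not details taken from [5]. The semantically optimized variants would replace the fixed-SNR AWGN block with the distance-dependent channel \(h(s)\), and the weighted variant would add the distance-aware term introduced below to this loss.

```python
import tensorflow as tf

M, n = 256, 2                      # message-set size and complex channel uses
norm_gain = float(n) ** 0.5        # unit average power over the n complex symbols
snr_lin = 10.0 ** (7.0 / 10.0)     # baseline training SNR of 7 dB
noise_std = (1.0 / (2.0 * snr_lin)) ** 0.5

# Transmitter: embedding + ELU, dense layer to 2n reals, power normalization.
msg_in = tf.keras.Input(shape=(1,), dtype="int32")
t = tf.keras.layers.Embedding(M, M)(msg_in)
t = tf.keras.layers.Flatten()(t)
t = tf.keras.layers.Activation("elu")(t)
t = tf.keras.layers.Dense(2 * n)(t)
x = tf.keras.layers.Lambda(
    lambda z: norm_gain * tf.math.l2_normalize(z, axis=-1))(t)

# Channel: AWGN at a fixed SNR, acting jointly on the real/imaginary halves of x.
y = tf.keras.layers.Lambda(
    lambda z: z + tf.random.normal(tf.shape(z), stddev=noise_std))(x)

# Receiver: feedforward network ending in a softmax over the M messages.
r = tf.keras.layers.Dense(M, activation="relu")(y)
b = tf.keras.layers.Dense(M, activation="softmax")(r)

autoencoder = tf.keras.Model(msg_in, b)
autoencoder.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                    loss="sparse_categorical_crossentropy")
# Training would use integer labels 0..M-1 standing in for the messages 1..M.
```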
This combination is used since the end-to-end learning communication problem has been framed as a classification task (and hence, the use of the cross-entropy loss), but at the same time, it also aims to minimize the loss between the end-to-end metrics. The resulting function is thus: \[L_{Loss-SPL}=-\log(b_{i})-\frac{1}{s}\sqrt{\sum_{i=0}^{M}b_{i}(s-i)^{2}}\] where \(b_{i}\) corresponds to the \(i^{th}\) element of the vector \(b\). We will explore how introducing the newly proposed loss function, i.e., computing the cross entropy loss in combination with our modified RMSE loss function, pays dividends in terms of the associated RMSEs when compared to the semantically optimized approach and the baseline approach.

## IV Results

In this section, we present our simulation results evaluating our proposed model by analysing different loss metrics such as the Root Mean Squared Errors (RMSEs), the Block Error Rates (BLERs) and the associated signal constellations in various experimental settings.

### _Methodology_

In this subsection, we describe our experimental methodology and evaluate these approaches:

* For our baseline implementation we use the end-to-end learning model as proposed in [5], incorporating additive white Gaussian Noise (AWGN) at a constant SNR of 7 dB.
* We then proceed to evaluate the proposed semantic path loss model trained at adaptive SNRs, varying according to the distance between the transmitter and receiver as mentioned in Section III-B. This scheme is referred to as 'SPL' in the forthcoming plots.
* Finally, we evaluate the proposed weighted semantic path loss model, building on top of the semantic pathloss model by weighting the transmitted sequence in the manner described in Section III-C. We refer to this approach as 'Weighted SPL' in the plots.

Fig. 1: Autoencoder architecture.

Fig. 2: Semantically Optimized Autoencoder simulation scenario.

We utilize Python's open source libraries such as Tensorflow [11] and NumPy [12] to realize our approach, and our simulation scenario is shown in Fig. 2. For the pathloss, we utilize the standard model \[10\log_{10}\!\left(\frac{\lambda}{(4\pi d)^{\phi}}\right)\] setting \(\lambda=0.05\) m in the formula mentioned for the SNR, for an IEEE 802.11p scenario with \(\phi=2.8\). We implement a log-normal shadowing model in the channel with a standard deviation of 3 dB. Thus, our results show a comparison of our proposed models and other legacy approaches over varying channel conditions. To train the baseline approach and our proposed model, we employ multi-batch training, encompassing 3 batches of 200,000 samples each, with learning rates for each batch successively decreasing from 0.1 to 0.01 and finally to 0.001. Each batch also iterates 10,000 times while training. For the validation set, we use similar settings. We evaluate these models with a random uniform sequence of messages \(s\), where: \[s\in\{1,2,3,\ldots,256\}\] corresponding to \(k=8\). The length of the transmitted message sequence equals the batch size of the experimental run. We begin by studying the transmission of sequences where one message is composed of \(n=2\) complex symbols with \(M=256\), as also evidenced by \(k=8\) (\(k=\log_{2}(M)\)). These parameters are summarized in Table I. To understand the impact of the transmit power and also the number of complex channel uses a message sequence can utilize, we vary them too. We summarize the various testing scenarios in Table II.

### _Discussion_

The RMSEs for Scenario 1, Scenario 2 and Scenario 3 are shown in Fig.
2(a), 2(b) and 2(c) respectively which individually compare the RMSEs for the baseline approach, our proposed semantic pathloss model and the proposed weighted semantic pathloss model. These plots are thus indicative of the variation of RMSEs with varying distances between the point of transmission and reception. Across all the subplots we can observe that the RMSEs in the baseline approach increase after a certain distance and maintain that trend, while for the semantic pathloss model and the proposed weighted semantic pathloss model the RMSEs are significantly less. Fig. 3(a), 3(b) & 3(c) similarly show the associated BLERs under evaluation Scenario 1, Scenario 2 and Scenario 3 respectively, comparing the associated BLERs for the baseline approach, our proposed semantic pathloss model and the proposed weighted semantic pathloss model. These plots thus showcase the variation of the BLERs with varying distances between the transmitter and the receiver. Similar to the RMSE plots, in all the subplots we can observe that the BLERs in the baseline approach increase after a certain distance and maintain that trend, while for the semantic pathloss model and the proposed weighted semantic pathloss model the BLERs are much less. Fig. 4(a), 4(b) and 4(c) showcase the scatter plot of the signal constellations for the baseline model, the semantic pathloss model and the weighted semantic pathloss model respectively, under evaluation scenario 1. Though there are two complex symbols in this scenario, we present the constellations for one of them, as both of the constellations for each case are largely similar. We choose a spectral gradient to represent the messages in the constellations, to help understand the placement of the symbols by the autoencoder. As we move from the blue part of the spectrum (following VIBGYOR-Violet, Indigo, Blue, Green, Yellow, Orange and Red) to the red part, the messages \(s\) these constellation points represent, decrease in terms of the distance. This means that the points in the blue part of the spectrum represent higher distances, and the points in the red part represent lower distances. Table III provides the average RMSEs across all distances, under various testing schemes and also provides the percentage improvements of our schemes over the baseline approach. Similarly Table IV provides the average BLER values over all distances when tested with our evaluation schemes. As we can observe in Fig. 2(a), our proposed autoencoders offer a significant improvement in the RMSEs over the baseline approach, with the weighted semantic path loss model slightly outperforming the semantic path loss model, in terms of the overall RMSEs incurred across the distances. As we move from Fig. 
2(a) to 2(b), we observe a reduction in the RMSEs, which can be attributed to the increase in transmit power allowing for a stronger signal to be transmitted, consequently leading to the availability of higher SNRs. On comparing Fig. 2(a) and 2(c) we again observe a drop in the RMSEs incurred, especially in the baseline approach, largely due to the availability of more complex symbols to represent the messages \(s\) spanning \(M\), as \(n\) increases from 2 to 4 across these plots. Hence, it can be concluded that increasing the transmit power offers a significant advantage in terms of higher SNR availability, consequently leading to lower RMSEs. Similar advantages can be obtained by allowing for more complex symbols to represent a transmitted sequence, compared with our approach with 100 mW transmit power and \(n=2\). The RMSEs in Fig. 2(b) are still more advantageous than those shown in Fig. 2(c) for our proposed approaches, if their summation across all distances is to be compared. From a practical standpoint, this drop in RMSEs leads to a much lower number of retransmissions in order to ensure the correct reception of the messages, which we believe is a significant advantage of our proposed model over the baseline approach.

Across Fig. 3(a), 3(b) and 3(c) we can observe that our proposed models outperform the baseline approach by a significant margin. On comparing Fig. 3(a) and 3(c) we observe a drop in BLERs across all models being tested, which can be attributed to the availability of more channel uses for message transmission. Between Fig. 3(a) and 3(b) as well, we can observe a drop because of the availability of higher SNRs owing to the increase in the transmit power. Between the Weighted SPL approach and the SPL approach, we can observe a tradeoff between dropping RMSEs and increasing BLERs with the introduction of our proposed loss function. This happens since our proposed loss function incorporates a modified RMSE in the loss function itself, leading to consistently lower RMSEs, but increased BLERs.

\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Parameters** & **Scenario 1** & **Scenario 2** & **Scenario 3** \\ \hline Tx Power & 100 mW & 200 mW & 100 mW \\ \hline Complex symbols \(n\) & 2 & 2 & 4 \\ \hline \end{tabular} \end{table} TABLE II: Evaluation schemes

\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **S.No.** & **Parameters** & **Values** \\ \hline 1 & Batch Size & 200,000 \\ \hline 2 & \(M\) & 256 \\ \hline 3 & \(k\) & 8 \\ \hline 4 & Learning rate & 0.01 \\ \hline 5 & Iterations/Batch & 10,000 \\ \hline 6 & AWGN SNR for Baseline & 7 dB \\ \hline 7 & Log Normal Shadowing standard deviation & 3 dB \\ \hline 8 & Frequency of transmission & 5.9 GHz \\ \hline 9 & Wavelength of transmission & 5 cm \\ \hline 10 & Pathloss exponent & 2.8 \\ \hline 11 & Noise Figure & -95 dBm \\ \hline 12 & Transmitted power & 20-23 dBm \\ \hline \end{tabular} \end{table} TABLE I: Experimental Parameters

\begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **Evaluation Scenario** & **Baseline Approach** & **SPL** & **Weighted SPL** \\ \hline Scenario 1 & 0.07 & 0.0129 (81.5\%) & 0.016 (77.1\%) \\ \hline Scenario 2 & 0.0282 & 0.0018 (93.6\%) & 0.0023 (91.8\%) \\ \hline Scenario 3 & 0.0239 & 0.0015 (93.7\%) & 0.0016 (93.3\%) \\ \hline \end{tabular} \end{table} TABLE IV: Average BLER summary

\begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **Evaluation Scenario** & **Baseline Approach** & **SPL** & **Weighted SPL** \\ \hline Scenario 1 & 20.04 & 4.79 (76\%) & 4.26 (78.7\%) \\ \hline Scenario 2 & 11.42 & 1.54 (86.5\%) & 1.45 (87.3\%) \\ \hline Scenario 3 & 10.63 & 1.68 (84.1\%) & 1.51 (85.7\%) \\ \hline \end{tabular} \end{table} TABLE III: Average RMSE summary

Fig. 3: RMSEs under various evaluation scenarios.

Fig. 4: BLERs under various evaluation scenarios.

In Fig. 4(a), the constellations for the baseline approach are shown. We observe a generous intermixing of symbols belonging to both higher and lower distance messages \(s\) spanning \(M\), indicative of the largely equal importance the autoencoder provides to the symbols in order to reduce the average BLER irrespective of their magnitudes. Moving on from Fig. 4(a) to Fig. 4(b), we observe rather segregated signal constellations, as the autoencoder is now semantically empowered and places the messages indicative of lower distances (and having comparatively higher SNRs) closely towards the centre in a somewhat circular manner, and the messages indicative of longer distances mostly outside the aforementioned circle. On moving from Fig. 4(b) to Fig. 4(c), a contraction in the radii of the set of constellation points indicative of smaller messages can be observed; as a result, the points representing smaller distances are more concentrated towards the centre because of the introduction of the weighted loss function.

## V Conclusions & Future Work

The aim of this work was to examine end-to-end data-driven communication methods when the semantic knowledge of the channel conditions can be utilized. It can be motivated that in a V2V communication scenario, it is critical that the communication between the vehicles or potentially infrastructural components remains error and risk free as they get closer, in order to prevent accidents. Thus, through the course of the work, a distance transmission scenario has been considered for a vehicular communications use case, where the message directly correlates to the channel statistic, as for higher distances the SNR is generally lower. Compared to state-of-the-art learning-based communication systems that are trained over constant channel conditions, we identified a large optimization potential to reduce RMSEs by only considering training samples where the message semantics correlates with the channel statistic. Moreover, by proposing a new loss function in addition to already showing gains with the cross entropy loss function (up till 86.5%), we observe a further improvement in the corresponding RMSEs (up till 87.3%). Considering a practical scenario, this drop in RMSEs leads to a much lower number of re-transmissions needed in order to ensure the correct reception of the transmitted messages, where we believe our proposed model is significantly advantageous over the baseline approach. Future works include utilizing this approach for positional telemetry amongst vehicles and towards fixed infrastructure, and a study of the effect of various loss functions on the performance of such systems. The impact of the Doppler effect could be considered as additional semantic information to extend the proposed methods.
2304.13795
Extragalactic FXT Candidates Discovered by Chandra (2014-2022)
Extragalactic fast X-ray transients (FXTs) are short flashes of X-ray photons of unknown origin that last a few minutes to hours. We extend the search for extragalactic FXTs from Quirola et al. 2022 (Paper I; based on sources in the Chandra Source Catalog 2.0, CSC2) to further Chandra archival data between 2014-2022. We extract X-ray data using a method similar to that employed by CSC2 and apply identical search criteria as in Paper I. We report the detection of eight FXT candidates, with peak 0.3-10 keV fluxes between 1$\times$10$^{-13}$ to 1$\times$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ and $T_{90}$ values from 0.3 to 12.1 ks. This sample of FXTs has likely redshifts between 0.7 to 1.8. Three FXT candidates exhibit light curves with a plateau (${\approx}$1-3 ks duration) followed by a power-law decay and X-ray spectral softening, similar to what was observed for a few previously reported FXTs in Paper I. In light of the new, expanded source lists (eight FXTs with known redshifts from Paper I and this work), we update the event sky rates derived in Paper I, finding 36.9$_{-8.3}^{+9.7}$ deg$^{-2}$ yr$^{-1}$ for the extragalactic samples for a limiting flux of ${\gtrsim}$1${\times}$10$^{-13}$ erg cm$^{-2}$ s$^{-1}$, calculate the first FXT X-ray luminosity function, and compare the volumetric density rate between FXTs and other transient classes. Our latest Chandra-detected extragalactic FXT candidates boost the total Chandra sample by $\sim$50 %, and appear to have a similar diversity of possible progenitors.
J. Quirola-Vásquez, F. E. Bauer, P. G. Jonker, W. N. Brandt, G. Yang, A. J. Levan, Y. Q. Xue, D. Eappachen, E. Camacho, M. E. Ravasio, X. C. Zheng, B. Luo
2023-04-26T19:25:32Z
http://arxiv.org/abs/2304.13795v1
# Extragalactic Fast X-ray Transient Candidates Discovered by _Chandra_ (2014-2022)

###### Abstract

Context:Extragalactic fast X-ray transients (FXTs) are short flashes of X-ray photons of unknown origin that last a few minutes to hours. Aims:We extend the search for extragalactic FXTs from Quirola et al. 2022 (Paper I; based on sources in the _Chandra_ Source Catalog 2.0, CSC2, using data taken between 2000-2014) to further _Chandra_ archival data between 2014-2022. Methods:We extract X-ray data using a method similar to that employed by CSC2 and apply identical search criteria as in Paper I. Results:We report the detection of eight FXT candidates, with peak 0.3-10 keV fluxes between 1\(\times 10^{-13}\) and 1\(\times 10^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\) and \(T_{90}\) values from 0.3 to 12.1 ks. This sample of FXTs has likely redshifts between 0.7 and 1.8. Three FXT candidates exhibit light curves with a plateau (\(\approx\)1-3 ks duration) followed by a power-law decay and X-ray spectral softening, similar to what was observed for a few previously reported FXTs in Paper I. In light of the new, expanded source lists (eight FXTs with known redshifts from Paper I and this work), we update the event sky rates derived in Paper I, finding 36.9\({}^{+9.7}_{-8.3}\) deg\({}^{-2}\) yr\({}^{-1}\) for the extragalactic samples for a limiting flux of \(\gtrsim\)1\(\times 10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\), calculate the first FXT X-ray luminosity function, and compare the volumetric density rate between FXTs and other transient classes. Conclusions:Our latest _Chandra_-detected extragalactic FXT candidates boost the total _Chandra_ sample by \(\sim\)50%, and appear to have a similar diversity of possible progenitors.

## 1 Introduction

The last decades have seen remarkable progress in understanding the time-resolved sky. Wide-field optical and near-infrared (NIR) surveys have identified thousands of supernovae (SNe) and related sources. In the gamma-ray regime, the progenitors of both long- and short-duration gamma-ray bursts (LGRBs and SGRBs, respectively) have been identified, while in the radio bands decisive inroads have been made into the nature of the fast radio bursts (FRBs). Perhaps surprisingly, our understanding of sources with similar behavior observed in soft X-rays with _Chandra_, _XMM-Newton_ and _Swift_-XRT remains relatively poor. Phenomenologically, we define extragalactic fast X-ray transients (FXTs) as non-Galactic sources that manifest as non-repeating flashes of X-ray photons in the soft X-ray regime \(\sim\)0.3-10 keV, with durations from minutes to hours (e.g., Alp & Larsson 2020; Quirola-Vasquez et al. 2022). Unfortunately, they still lack a concise or singular physical explanation (e.g., Soderberg et al. 2008; Jonker et al. 2013; Glennie et al. 2015; Irwin et al. 2016; Bauer et al. 2017; Lin et al. 2018, 2019; Xue et al. 2019; Yang et al. 2019; Alp & Larsson 2020; Novara et al. 2020; Lin et al. 2020; Ide et al. 2020; Pastor-Marazuela et al. 2020; Lin et al. 2021, 2022; Quirola-Vasquez et al. 2022). Critically, while of order 30 FXTs have been identified to date, both serendipitously and through careful searches, only in one case, XRT 080109/SN 2008D (Soderberg et al. 2008; Mazzali et al. 2008; Modjaz et al. 2009), has there been a detection of a multi-wavelength counterpart after the outburst. This is because, in the vast majority of cases, the transients themselves have only been identified long after the outburst via archival data mining (e.g., Alp & Larsson 2020; De Luca et al.
2021; Quirola-Vasquez et al. 2022), so that timely follow-up observations were not possible. Notably, the most stringent limits come from deep optical VLT imaging serendipitously acquired 80 minutes after the onset of XRT 141001 (\(m_{\rm B}\)\(>\)25.7 AB mag; Bauer et al. 2017). Moreover, only a handful of FXTs have had clear host-galaxy associations and even fewer have firm distance constraints (e.g., Soderberg et al. 2008; Irwin et al. 2016; Bauer et al. 2017; Xue et al. 2019; Novara et al. 2020; Lin et al. 2022; Eappachen et al. 2022; Quirola-Vasquez et al. 2022; Eappachen et al. 2023). Hence, it is not trivial to discern their energetics and distance scale and, by extension, their physical origin. A variety of different physical mechanisms have been proposed for the origin of FXTs, such as: _i_) stochastic outbursts associated with X-ray binaries in nearby galaxies [XRBs; including subclasses such as Ultra-luminous X-ray sources (ULXs), soft gamma repeaters (SGRs), and anomalous X-ray pulsars (AXPs) providing possible explanations of FXTs with \(L_{\rm X,peak}\)\(\lesssim\)10\({}^{42}\) erg s\({}^{-1}\)(see Colbert & Mushotzky 1999; Kaaret et al. 2006; Woods & Thompson 2006; Miniutti et al. 2019; and references therein); _ii_) X-ray emission generated from the shock breakout (SBO; \(L_{\rm X,peak}\)\(\sim\)10\({}^{42}\)-10\({}^{45}\) erg s\({}^{-1}\)) of a core-collapse supernova (CC-SN) once it crosses the surface of the exploding star (e.g., Soderberg et al. 2008; Nakar & Sari 2010; Waxman & Katz 2017; Novara et al. 2020; Alp & Larsson 2020); _iii_) off-axis GRBs could explain FXTs (\(L_{\rm X,peak}\)\(\lesssim\)10\({}^{45}\) erg s\({}^{-1}\)) where the X-ray emission is produced by a wider, mildly relativistic cocoon jet (Lorentz factor of \(\lesssim\)100; Zhang et al. 2004), once it breaks through the surface of a massive progenitor star (Ramirez-Ruiz et al. 2002; Zhang et al. 2004; Nakar 2015; Zhang 2018; D'Elia et al. 2018); _iv_) Tidal disruption events (TDEs; \(L_{\rm X,peak}^{\rm-16}\)\(\lesssim\)10\({}^{43}\) and \(L_{\rm X,peak}^{\rm let}\)\(\sim\)10\({}^{43}\)-10\({}^{50}\) erg s\({}^{-1}\) considering non- and jetted emission, respectively) involving a white dwarf (WD) and an intermediate-mass black hole (IMBH), whereby X-rays are produced by the tidal disruption and subsequent accretion of part of the WD in the gravitational field of the IMBH (e.g., Jonker et al. 2013; Glennie et al. 2015); and _v_) Mergers of binary neutron stars, (BNS; \(L_{\rm X,peak}\)\(\sim\)10\({}^{44}\)-10\({}^{51}\) erg s\({}^{-1}\) considering jetted and line-of-sight obscured emission; e.g., Dai et al. 2018; Jonker et al. 2013; Fong et al. 2015; Sun et al. 2017; Bauer et al. 2017; Xue et al. 2019), whereby the X-rays are created by the accretion of fallback material onto the remnant black hole (BH), a wider and mildly relativistic cocoon, or the spin-down magnetar emission (Metzger & Piro 2014; Sun et al. 2017, 2019; Metzger et al. 2018). In previous work, Quirola-Vasquez et al. (2022, hereafter Paper I) conducted a systematic search for FXTs in the _Chandra_ Source Catalog (Data Release 2.0; 169.6 Ms over 592.4 deg\({}^{2}\) using only observations with \(|\)\(b\)\(|\)\(>\)10\({}^{\circ}\) and until 2014; Evans et al. 2010, 2019, 2020a), using an X-ray flare search algorithm and incorporating various multi-wavelength constraints to rule out Galactic contamination. 
Paper I reported the detection of 14 FXT candidates (recovering five sources previously identified and classified as FXTs by Jonker et al. 2013; Glennie et al. 2015; Bauer et al. 2017; Lin et al. 2019) with peak fluxes (\(F_{\rm peak}\)) from 1\(\times\)10\({}^{-13}\) to 2\(\times\)10\({}^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\) (at energies of 0.5-7 keV) and \(T_{90}\) (measured as the time over which the source emits the central 90%, i.e. from 5% to 95% of its total measured counts) values from 4 to 48 ks. Intriguingly, the sample was sub-classified into two groups: six _"nearby"_ FXTs that occurred within \(d\)\(\lesssim\)100 Mpc and eight _"distant"_ FXTs with likely redshifts \(\gtrsim\)0.1. Moreover, after applying completeness corrections, the event rates for the _nearby_ and _distant_ samples are 53.7\({}^{+22.6}_{-15.1}\) and 28.2\({}^{+9.8}_{-6.9}\) deg\({}^{-2}\) yr\({}^{-1}\), respectively. However, Paper I does not analyze _Chandra_ observations beyond 2014, implying that several intriguing FXTs likely remain undiscovered. In this paper, we extend the selection of Paper I to public _Chandra_ observations between 2014-2022 using a nearly identical methodology. As in Paper I, this work focuses only on the non-repeating FXTs, to help reduce sample contamination. We further caution that the sparse nature of repeat X-ray observations means that we cannot rule out that some current FXTs could be repeating FXTs. The study of repeating FXTs is beyond the scope of this manuscript. The manuscript is organized as follows. We explain the methodology and selection criteria in Sect. 2. We present the results of a search and cross-match with other catalogs in Sect. 2.8, a spectral and timing analysis of our final candidates in Sect. 3, and the properties of the identified potential host galaxies in Sect. 4. In Sect. 6, we discuss possible interpretations of some FXTs. We derive local and volumetric rates for the FXTs in Sect. 5, and compare them to those of other X-ray transients. Finally, we present comments and conclusions in Sect. 8. Throughout the paper, a concordance cosmology with parameters \(H_{0}\)=70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}\)=0.30, and \(\Omega_{\Lambda}\)=0.70 is adopted. Magnitudes are quoted in the AB system. Unless otherwise stated, all errors are at 1\(\sigma\) confidence level. ## 2 Methodology and Sample Selection ### Identification of X-ray sources Paper I used as an input catalog of the X-ray sources detected by the CSC2. This is not available for _Chandra_ observations beyond the end of 2014, so a crucial first step is to generate a comparable source detection catalog for the _Chandra_ observations used in this work (see Sect. 2.3), upon which we will apply our FXT candidate selection algorithm (Sect. 2.2). To generate robust X-ray source catalogs, we use the clao source detection tool wavdetect (Freeman et al. 2002). It detects possible source pixels using a series of "Mexican Hat" wavelet functions with different pixel bin sizes to account for the varying PSF size across the detector. The wavdetect tool identifies all point sources above a threshold significance of 10\({}^{-6}\) (which corresponds to one spurious source in a 1000\(\times\)1000 pixel map) and a list of radii in image pixels from 1 to 32 (to avoid missing detections at large off-axis angles). To avoid erroneous detections, we create exposure and PSF maps, which enable refinement of the source properties. 
The exposure maps are created by running the fluximage script with the 0.5-7 keV band (Fruscione et al. 2006), while the PSF map, which provides information on the size of the PSF at each pixel in the image, is made using the mkpsfmap task; the PSF size corresponds to the 1\(\sigma\) integrated volume of a 2D Gaussian (Fruscione et al. 2006). The output of the clao tool wavdetect is a catalog with essential information about the X-ray sources such as the positions (RA and DEC), positional uncertainty, and significance. ### Transient-Candidate Selection Algorithm We adopt the same algorithm as presented in Paper I, which augments somewhat the initial version presented in Yang et al. (2019, see their sect. 2.1 for more details). This method depends on the total (\(N_{\rm{tot}}\)) and background (\(N_{\rm{big}}\)) counts of the source, working on an unbinned _Chandra_ light curve, which is advantageous because it does not depend on the light curve shapes. The algorithm splits the light curves into different segments in two passes: _i_) in two halves and _ii_) in three regions, covering the entire _Chandra_ observation. FXT candidates are selected when: _i_) \(N_{\rm{tot}}\)\(>\)5-\(\sigma\) Poisson upper limit of \(N_{\rm{big}}\) (to exclude low signal-to-noise ratio, S/N, sources); _ii_) the counts in the different segments (\(N_{i}\)) are statistically different at a \(>4\sigma\) significance level (to select robust detections of short-duration variable sources); and _iii_) \(N_{i}\)\(>\)5 \(\times\)\(N_{j}\) or \(N_{j}\)\(>\)5 \(\times\)\(N_{i}\) (to select large-amplitude number of counts-variations). Finally, to mitigate the effect of background (especially for sources with long exposure times and large instrumental off-axis angles), we additionally chop each light curve into 20 ks segments (or time windows \(T_{\rm{window}}\)\(=\) 20 ks), and re-apply the conditions explained above. This reduces the integrated number of background counts per PSF element and thus enables identification of fainter sources at larger instrumental off-axis angles. To maintain an efficient selection of transients across the gaps between these arbitrary windows, we sequence through the entire light curve in three iterations: a forward division in 20 ks intervals, a backward division in 20 ks intervals, and finally, a forward division with a 10 ks shift in 20 ks intervals to cover gaps. Based on simulations of the CDF-S XT1 and XT2 fiducial light curves (Bauer et al., 2017; Xue et al., 2019), Paper I derived an efficiency of the method of \(\gtrsim\) 90% for sources with \(\log(F_{\rm{peak}})>-\) 12.6 located at off-axis angles \(<\) 11\(\aas@@fstack{\prime\prime}\)0, with a relatively sharp decline in efficiency for FXTs with lower fluxes; e.g., \(\approx\)50% and \(\approx\)5% efficiencies for \(\log(F_{\rm{peak}})=-\) 12.8 and \(\log(F_{\rm{peak}})=-\) 13.0, respectively, at \(\approx\) 11\(\aas@@fstack{\prime\prime}\)0. This instrumental off-axis angle limit is enforced because _Chandra_'s detection sensitivity (as measured by, e.g., effective area and PSF size) drops significantly beyond this limit (Vito et al., 2016; Yang et al., 2016). Importantly, this algorithm successfully recovered all previously reported sources (XRT 000519, XRT 030511, XRT 110103, XRT 110919, and XRT 141001; Jonker et al., 2013; Glennie et al., 2015; Bauer et al., 2017; Lin et al., 2019; Quirola-Vasquez et al., 2022), and thus is flexible enough to recognize FXTs with different light-curve shapes. 
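As an illustration only, the three selection conditions above could be coded along the following lines. This is a hedged sketch rather than the actual pipeline of Paper I or of this work: the Gaussian-equivalent Poisson upper limit and the normal approximation used for the 4\(\sigma\) segment comparison are our own simplifying assumptions, and the helper names are invented for the example.

```python
import numpy as np
from scipy import stats

def poisson_upper_limit(mean_bkg, nsigma=5.0):
    """Counts level whose exceedance probability under the expected background
    falls below the n-sigma Gaussian tail probability (a simplifying choice)."""
    tail = stats.norm.sf(nsigma)
    return stats.poisson.isf(tail, mean_bkg)

def segment_significance(n_i, n_j):
    """Approximate significance (in sigma) that two segment counts differ.

    A simple normal approximation; the exact test of Paper I may differ."""
    return abs(n_i - n_j) / np.sqrt(max(n_i + n_j, 1.0))

def is_fxt_candidate(segment_counts, n_bkg):
    """Apply the three conditions of Sect. 2.2 to one split of a light curve."""
    n_tot = float(np.sum(segment_counts))
    if n_tot <= poisson_upper_limit(n_bkg, nsigma=5.0):        # condition (i)
        return False
    for i in range(len(segment_counts)):
        for j in range(i + 1, len(segment_counts)):
            n_i, n_j = segment_counts[i], segment_counts[j]
            if (segment_significance(n_i, n_j) > 4.0            # condition (ii)
                    and (n_i > 5.0 * n_j or n_j > 5.0 * n_i)):  # condition (iii)
                return True
    return False
```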
We stress that this is a key advantage compared to matched filter techniques that assume an underlying light curve model profile. ### Data selection To extend the previous search for extragalactic FXTs in Paper I beyond the _Chandra Source Catalog 2.0_ (CSC2) limit of 2014, we conducted a search through all _Chandra_ ACIS imaging observations (science and calibration observations) made publicly available between 2015-01-01 and 2022-04-01. This includes 3899 individual _Chandra_-ACIS observations, outside the Galactic plane at \(|b|\)\(>\)10 deg, or \(\approx\)88.8 Ms, 264.4 deg\({}^{2}\) conforming to the following criteria. For uniformity, we consider only ACIS observations in the energy range 0.5-7.0 keV, noting that HRC-I observations comprise only a few percent of the overall observations and have poorer/softer response and limited energy resolution compared with the ACIS detector. The _Chandra_ observations target a wide variety of astronomical objects, from galaxy clusters to stellar objects. Based on the nature of the extragalactic FXTs identified systematically in Paper I and the potential sources of contamination, we limit our initial light-curve search to sources with Galactic latitudes \(|b|\)\(>\) 10 deg to reduce the expectedly high contamination rate from flaring stars. An additional benefit of considering objects outside the Galactic plane is that it helps to minimize the effects of Galactic extinction in characterizing the spectral properties of our candidates. To facilitate our search, we use the full-field per-observation event files available from the _Chandra_ Data Archive products.1 Figure 1 shows the cumulative and histogram distributions of exposure time of the _Chandra_ observations used in this work. Footnote 1: [https://cda.harvard.edu/chaser/](https://cda.harvard.edu/chaser/) ### Generation of light curves The event file contains the relevant stored photon event data, such as photon arrival time, energy, position on the detector, sky coordinates, observing conditions, and the good time interval (GTI) tables listing the start and stop times. To generate light curves, we take X-ray photons in the 0.5-7.0 keV range from each event file using an aperture of 1.5\(\times\)\(R_{90}\), where \(R_{90}\) is the radius encircling 90% of the X-ray counts. Based on simulations developed by Yang et al. (2019), the aperture of 1.5\(\times\)\(R_{90}\) encircles \(\gtrsim\)98% of X-ray counts and depends on the instrumental off-axis angle (and depends on the photon energy; for more details, see Vito et al., 2016; Hickox & Markevitch, 2006). We compute \(N_{\rm{big}}\) taking into account an annulus with inner and outer aperture radius of 1.5\(\times\)\(R_{90}\) and 1.5\(\times\)\(R_{90}\)+20 pixels, respectively. In the particular case where the background region overlaps with a nearby X-ray source, we mask the nearby source (using a radius of 1.5\(\times\)\(R_{90}\)), and do not include the masked area to estimate \(N_{\rm{big}}\). Also, we weigh \(N_{\rm{big}}\) by the source-to-background area ratio to correct the light curve of the sources. Figure 1: Histogram (_red_; _left_ Y-axis) and cumulative (_black_; _right_ Y-axis) distributions of the exposure time of the 3899 _Chandra_ observations used in this work. The inset provides a zoom-in to show the high-exposure time tail of the distribution. The _dashed vertical blue line_ indicates the median exposure time (= 19.7 ks) of the total sample. 
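The aperture photometry just described (a source circle of 1.5\(\times R_{90}\), a surrounding background annulus, masking of neighbouring sources, and rescaling of the background counts by the source-to-background area ratio) can be summarised by a short sketch like the one below. It is only an illustration under our own assumptions about how the event list and the masked area are represented; it is not the extraction code used for this work.

```python
import numpy as np

def net_source_counts(evt_x, evt_y, src_x, src_y, r90, pix_masked=0.0):
    """Counts in a 1.5*R90 aperture and the area-scaled background expectation.

    evt_x, evt_y : event coordinates (pixels) from the 0.5-7 keV event list
    src_x, src_y : source position (pixels)
    r90          : 90% encircled-energy radius at this off-axis angle (pixels)
    pix_masked   : annulus area (pixels^2) removed because of nearby sources
                   (an assumption for how the masking is bookkept)
    """
    r_src = 1.5 * r90
    r_out = r_src + 20.0                      # annulus outer radius
    dist = np.hypot(evt_x - src_x, evt_y - src_y)

    n_src = np.count_nonzero(dist <= r_src)
    n_ann = np.count_nonzero((dist > r_src) & (dist <= r_out))

    area_src = np.pi * r_src**2
    area_ann = np.pi * (r_out**2 - r_src**2) - pix_masked
    n_bkg_scaled = n_ann * area_src / area_ann  # background expected in aperture
    return n_src, n_bkg_scaled
```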
### Astrometry of X-ray sources To improve upon the nominal absolute astrometric accuracy of _Chandra_ [0\(\aas@@fstack{\prime\prime}\)8 (1\(\aas@@fstack{\prime\prime}\)4) at 90% (99% uncertainty)]2, we cross-match the detected X-ray sources to optical sources from either the _Gaia_ Early Data Release 3 (_Gaia_-EDR3; Gaia Collaboration et al., 2021) or Sloan Digital Sky Survey Data Release 16 (SDSS-DR16; Ahumada et al., 2020) catalogues, using the wcs_match script in ciao. wcs_match compares two sets of source lists from the same sky region and provides translation, rotation and plate-scale corrections to improve the X-ray astrometric reference frame. We adopt a 2\(\aas@@fstack{\prime\prime}\)0 matching radius (i.e., \(\leq\)8 image pixels), eliminating any source pairs beyond this limit. We typically achieve an accuracy of 0\(\aas@@fstack{\prime\prime}\)08-1\(\aas@@fstack{\prime\prime}\)64 (90% quantile range). This improves our ability to discard contaminants (stellar flares, essentially) and eventually measure projected offsets between X-ray sources and host galaxies (in the case of the final sample of FXT candidates). We combine in quadrature all astrometric errors into the X-ray source positional uncertainty. Footnote 2: [https://cxc.harvard.edu/cal/ASPECT/celmon/](https://cxc.harvard.edu/cal/ASPECT/celmon/) ### Initial candidate results As a summary, we apply the FXT detection algorithm to the 0.5-7.0 keV light curves of X-ray sources outside of the Galactic plane (\(|b|>\)10 deg), resulting in 151 FXT candidates. This parent sample has total net counts and instrumental off-axis angles spanning \(\approx\)15-33,000 (mean value of 590) and \(\approx\)0.12-14.0 (mean value of 5.2) arcmin, respectively. As expected, our selection method identifies FXTs with a wide range of light curve shapes. ### Initial Purity Criteria As highlighted in both Yang et al. (2019) and Paper I, our search method does not ensure the unique identification of real extragalactic FXTs. Therefore, it is mandatory to adopt additional criteria considering archival X-ray data and multiwavelength counterparts to differentiate real extragalactic FXTs from Galactic variables and transients among the sample of 151 FXT candidates. We describe and report these additional criteria in Sect. 2.7.1-2.7.5 and summarize the number and percentage, relative to the total, of sources that pass criteria (_column 5_), as well as ignoring all previous steps (_column 4_) in Table 1. Finally, we discuss the completeness of our search and selection criteria in Sect. 2.7.6. #### 2.7.1 Criterion 1: Archival X-ray data To confirm the transient nature of the FXT candidates, a non-detection in prior and/or subsequent X-ray observations is important. In this way, we consider different observations from _Chandra_, based on other observations in the CSC2 and individual observations (Evans et al., 2010); _XMM-Newton_, based on individual observations of sources in the Serendipitous Source (4XMM-DR11; Webb et al., 2020) and Slew Survey Source Catalogues (XMMSL2; Saxton et al., 2008); and the Living _Swift_ XRT Point Source Catalogue (LSXPS) based on observations between 2005-01-01 and 2023-02-12 (Evans et al., 2023). We impose that the FXT candidate remain undetected (i.e., consistent with zero net counts) at 3\(\sigma\) confidence in all X-ray observations, aside from the _Chandra_ observation in which the FXT candidate is detected. 
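One possible implementation of the "consistent with zero net counts at 3\(\sigma\)" requirement is sketched below. The statistic and names are our simplification; the actual analysis uses source and background regions consistent with Sect. 2.4.

```python
# Hedged sketch of the Criterion 1 non-detection check: are the observed
# counts in the source aperture compatible with the expected background?
from scipy.stats import norm, poisson

P_3SIGMA = norm.sf(3.0)   # one-sided 3-sigma tail probability


def consistent_with_background(n_obs, mu_bkg):
    """True if n_obs counts are compatible with background expectation mu_bkg."""
    # probability of observing >= n_obs counts from background alone
    p = poisson.sf(n_obs - 1, mu=mu_bkg)
    return p > P_3SIGMA
```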
This requirement is useful especially to exclude a large number of Galactic stellar flares, but it also may discard FXTs associated with hosts with AGNs, as well as long-lived or recurring X-ray transients (e.g., from SNe in strongly star-forming galaxies). The success of this criterion is related to the number of times a particular field is visited by X-ray facilities. To discard candidates with prior and subsequent X-ray observations with _Chandra_, we used the CSC2 or in the cases of candidates with more recent archival observations we downloaded and extracted photometry for these sources, adopting consistent source and background regions and aperture corrections compared to those used in Sect. 2.4. In total, 127 FXT candidates were observed in multiple _Chandra_ observation IDs, while 24 candidates have no additional _Chandra_ observations. To identify additional _XMM-Newton_ and _Swift_-XRT detections, we adopt a search cone radius equivalent to the 3\(\sigma\) combined positional errors of the _Chandra_ detection and tentative _XMM-Newton_ or _Swift_-XRT matches from the 4XMM-DR11, XMMSL2 and LSXPS catalogs, respectively. We additionally search the X-ray upper limit servers: Flux limits from Images from _XMM-Newton_ using DR74 (FLX),3 LSXPS,4 and the HIgh-energy Light curve GeneraTor (HILIGT) upper limit servers.5 It is important to mention that HILIGT provides upper limits for several X-ray observatory archives (including _XMM-Newton_ pointed observations and slew surveys; Rontgen Satellite (_ROSAT_) pointed observations and all-sky survey; _Einstein_ pointed observations), while LSXPS generates _Swift_-XRT upper limits.6 Footnote 3: [https://www.ledas.ac.uk/lis/flix.html](https://www.ledas.ac.uk/lis/flix.html) Footnote 4: [https://www.swift.ac.uk/LSXPS/](https://www.swift.ac.uk/LSXPS/) Footnote 5: [http://xmmuls.esac.esa.in/upperlimitserver/](http://xmmuls.esac.esa.in/upperlimitserver/) Footnote 6: We used the 0.2–12 keV energy band, which we then converted to 0.5–7.0 keV assuming the default spectral parameters \(\Gamma\)=2 and \(N_{H}\)=3\(\times\)10\({}^{20}\) cm\({}^{-2}\). We found that the reported detections are not always reliable (e.g., inconsistencies between catalogs using the same observations or failure to confirm upon visual inspection), and hence we require detections to be \(\geq\)5\(\sigma\). We found that: 72 candidates are observed in _XMM-Newton_ 4XMM-DR11, with 12 candidates detected; 65 candidates are observed in _Swift_-XRT LSXPS, with 4 candidates detected; 1 candidate is observed in _ROSAT_ pointed observations, with a clear detection; finally, all candidates are observed in the _ROSAT_ All-Sky Survey, with 5 candidates detected. Also, zero candidates are observed in _XMM-Newton_ XMMSL2 and the _Einstein_ pointed observations. The upper limits from _Chandra_ and _XMM-Newton_ pointed observations are similar to or lower than our FXT candidate peak fluxes. So, we can conclude that similar transient episodes would have been detectable in such observations if present. In total, 134 candidates have multiple hard X-ray observations by _Chandra_, _XMM-Newton_, and/or _Swift_-XRT, of which 127 candidates have been visited more than once by _Chandra_. This implies re-observed fractions of at least \(\approx\)84% among the candidate sample (a large fraction of these 84% of sources lie in fields intentionally observed multiple times; for instance, in the vicinity of the Orion Nebula or M101). 
The high X-ray re-detection fraction indicates that this is a very effective criterion if additional _Chandra_, _XMM-Newton_ or _Swift_ observations are available. In summary, 98 candidates pass this criterion (see Table 1), albeit largely because they lack multiple sensitive X-ray observations. We note that 20 candidates are discarded by this criterion but not by the others (see Table 1). The _left panel_ of Fig. 2 shows the net-count distribution for all the sources that pass this criterion. To conclude, this criterion appears to be an extremely effective means to identify persistent or repeat transients, when data are available. #### 2.7.2 Criterion 2: Optical detections in Gaia In previous works (e.g., Paper I and Yang et al. 2019), an important fraction of FXT candidates had a Galactic origin, especially related to relatively bright stars. To identify these, we cross-match with the _Gaia_ Early Data Release 3 (_Gaia_ EDR3; employing the VizieR package; Gaia Collaboration et al. 2021) catalog, which contains photometric and astrometric constraints for sources in the magnitude range \(G\)=3-21 mag including accurate positions, parallaxes, and proper motions throughout the Local Group (Lindegren et al. 2018; Gaia Collaboration et al. 2018). We adopt the 3\(\sigma\) positional uncertainty (obtained by the CIAO wavdetect task) associated with each candidate as our cone search radius. In general, this radius is sufficiently small to find a unique counterpart given the high spatial resolution and astrometric precision of _Chandra_(Rots & Budavari 2011); 9 candidates show multiple _Gaia_ sources in their cone search area, for which we adopt the nearest _Gaia_ source. From our initial sample of 151 FXT candidates, 107 sources have cross-matches in _Gaia EDR3_. Nevertheless, we only discard FXT candidates matched to "stellar" _Gaia_ EDR3 optical detections, where stellar is taken to mean those with nonzero proper motion and/or parallax detected at \(>\)3\(\sigma\) significance; this amounts to 83 candidates from the initial sample. These likely stellar sources cover a wide range in magnitude \(G\)=9.2-20.1 mag (\(\overline{G}\)\(\approx\)16.4 mag) and proper motion \(\mu\)=0.7-154.5 mas yr\({}^{-1}\) (\(\overline{\mu}\)\(\approx\)22.1 mas yr\({}^{-1}\)). The _middle panel_ of Fig. 2 shows the net-count distribution of the 68 sources that pass this criterion. Among the total sample, \(\approx\)55% are associated with stellar flares of bright stars. Moreover, this criterion discards 42 FXT candidates that the additional criteria do not (see Table 1). However, because of the magnitude limit and optical window of _Gaia_, this criterion may not identify all persistent or recurring transient Galactic objects, which we return to in the next subsection. As a running total, only 42 candidates successfully pass both this and the previous criterion (see Table 1). #### 2.7.3 Criterion 3: NED, SIMBAD, and VizieR Search To identify known Galactic and Local Group contaminating objects not detected by _Gaia_, we search for counterparts (or host galaxies) in large databases using the astroquery package: the NASA/IPAC Extragalactic Database (NED; Helou et al. 1991), the Set of Identifications, Measurements, and Bibliography for Astronomical Data (SIMBAD; Wenger et al. 2000), and VizieR (which provides the most complete library of published astronomical catalogs; Ochsenbein et al. 2000). 
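A minimal astroquery-based sketch of the _Gaia_ cross-match of Criterion 2 and of the cone searches used for Criterion 3 is given below. The example position and 3\(\sigma\) radius are placeholders, the "stellar" test simply mirrors the \(>\)3\(\sigma\) parallax/proper-motion definition above, only one of the three databases is shown, and the default Gaia table queried by astroquery may differ from the EDR3 release used in this work.

```python
# Illustrative only: Gaia cone search with a simple stellar flag, plus a
# SIMBAD cone search; not the scripts actually used for Criteria 2 and 3.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.gaia import Gaia
from astroquery.simbad import Simbad

pos = SkyCoord(233.73496 * u.deg, 23.46849 * u.deg)   # example FXT position
r3sig = 2.0 * u.arcsec                                # placeholder 3-sigma radius

gaia = Gaia.cone_search_async(pos, radius=r3sig).get_results()
plx_sig = np.abs(gaia['parallax'] / gaia['parallax_error'])
pm_sig = np.hypot(gaia['pmra'] / gaia['pmra_error'],
                  gaia['pmdec'] / gaia['pmdec_error'])
is_stellar = (plx_sig > 3) | (pm_sig > 3)   # Criterion 2: discard if any match is True

simbad_matches = Simbad.query_region(pos, radius=r3sig)   # Criterion 3 (one database)
```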
We perform a cone search per FXT candidate, using a circular region with a radius of 3\(\sigma\) based on the X-ray positional uncertainty from the CIAO wavdetect task to find associated sources. These three databases contain many catalogs across the electromagnetic (EM) spectrum, which permit us to rule out candidates in our sample associated with previously classified stars, \begin{table} \begin{tabular}{l l l l l} \hline \hline Criterion & \multicolumn{5}{c}{Candidates} \\ & \# constrained & \# total removed & \# uniquely removed & \# remaining \\ \hline (\(a\)) & (\(b\)) & (\(c\)) & (\(d\)) & (\(e\)) \\ \hline 1) Archival X-ray data & 134\({}^{*}\) & 53 & 20 & 98 \\ 2) Cross-match with stars/_Gaia_ & 151 & 83 & 42 & 42 \\ 3) NED + SIMBAD + VizieR & 151 & 75 & 33 & 9 \\ \hline 4) Archival images\({}^{\dagger}\) & – & – & – & 9 \\ 5) Instrumental and variability effects\({}^{\dagger}\) & – & 1 & 1 & 8 \\ \hline \end{tabular} \end{table} Table 1: Breakdown of FXT candidates as a function of the selection criteria proposed in Sect. 2.7. _Column (\(a\))_: Criterion. _Column (\(b\))_: Number of candidates constrained by this criterion. _Column (\(c\))_: Number of candidates removed that would be cut at this stage if we disregard all previous stages. _Column (\(d\))_: Number of candidates that are solely removed by this criterion, and not any other. _Column (\(e\))_: Running total number of candidates that remain after applying this criterion. \({}^{*}\)Candidates with additional _Chandra_-ACIS, _XMM-Newton_, or _Swift_-XRT observations. \({}^{\dagger}\)Note that criteria 4 and 5 are only applied to the sources that remain after the first three criteria are applied. Figure 2: Comparison of 0.5–7.0 keV net-count distributions for the initial (_filled blue histograms_) and final (_filled black histograms_) FXT samples, as well as subsets covered by various purity criteria (_colour non-filled histograms_) for the sample. Net counts are provided by the same regions defined in Sect. 2.4. young stellar objects (YSOs) embedded inside nebulae (where the absorption and obscuration do not permit _Gaia_ detections), globular clusters, or high-mass X-ray binaries (HMXBs) in either our Galaxy or the Local Group. This criterion is important in our analysis because \(\approx\)80% (i.e., 121 FXT candidates) of the initial sample show associated sources with the SIMBAD and NED databases. We uniquely identify 33 objects, either as YSOs embedded in nebulae or stars identified by other catalogs, for instance, the VISTA Hemisphere Survey (VHS), the United Kingdom InfraRed Telescope (UKIRT) Infrared Deep Sky Survey, the Sloan Digital Sky Survey (SDSS), or the all-sky Wide-field Infrared Survey Explorer (WISE) CatWISE source catalog at 3.4 and 4.6 \(\mu\)m (McMahon et al., 2013; Dye et al., 2018; Marocco et al., 2021). It is important to mention that 33 FXT candidates are discarded solely by this criterion (see Table 1). The _right panel_ of Fig. 2 shows the net-count distribution for the 76 FXT candidates that pass this criterion. Applying all criteria thus far, the sample is reduced to nine candidates. #### 2.7.4 Archival Image Search To rule out still fainter stellar counterparts, we carried out a search of ultraviolet (UV), optical, NIR, and mid-infrared (MIR) image archives. 
We perform a cone search within a radius equivalent to the 3\(\sigma\)_Chandra_ positional uncertainty of the respective FXTs for the following archives: the Hubble Legacy Archive;7 the Pan-STARRS archive (Flewelling et al., 2020);8 the National Science Foundation's National Optical-Infrared Astronomy Research (NOIR) Astro Data Lab archive,9 which includes images from the Dark Energy Survey (DES; Dark Energy Survey Collaboration et al., 2016) and the Legacy Survey (DR8); the Gemini Observatory Archive;10 the National Optical Astronomy Observatory (NOAO) science archive;11 the ESO archive science portal;12 the VISTA Science Archive;13 the Spitzer Enhanced Imaging Products archive (Teplitz et al., 2010);14 the UKIRT/Wide Field Camera (WFCAM) Science Archive;15 and the WISE archive (Wright et al., 2010). Footnote 7: [https://hla.stsci.edu/hlaview.html](https://hla.stsci.edu/hlaview.html) Footnote 8: [http://ps1images.stsci.edu/cgi-bin/ps1cutouts](http://ps1images.stsci.edu/cgi-bin/ps1cutouts) Footnote 9: [https://datalab.noao.edu/isa.php](https://datalab.noao.edu/isa.php) Footnote 10: [https://archive.gemini.edu/searchform](https://archive.gemini.edu/searchform) Footnote 11: [http://archive.ldm.noao.edu/search/query/](http://archive.ldm.noao.edu/search/query/) Footnote 12: [http://archive.eso.org/scienceportal](http://archive.eso.org/scienceportal) Footnote 13: [http://horus.roe.ac.uk/vss/](http://horus.roe.ac.uk/vss/) Footnote 14: [https://irsa.ipac.caltech.edu/data/SPITZER/Enhanced/SEIP/](https://irsa.ipac.caltech.edu/data/SPITZER/Enhanced/SEIP/) Footnote 15: [http://wsa.roe.ac.uk/](http://wsa.roe.ac.uk/) For images obtained under good seeing (\(<\) 1") and weather conditions, we visually inspect for counterparts or host galaxies within the 3\(\sigma\) X-ray uncertainty region of the FXT. We only apply this additional criterion to the FXT candidates that remain after the previous three criteria (see Sect. 2.7.1-2.7.3). If a source is found, we quantify its significance and visually assess its extent and radial profile, identifying it as a star if it is unresolved at the spatial resolution of the imaging. We confirm that none of the nine candidates is associated with stellar sources, leaving the number of candidates unchanged. #### 2.7.5 Instrumental and variability effects Finally, we visually check the X-ray data to rule out false-positive candidates that may arise from background flares, bad pixels or columns, or cosmic-ray afterglows rather than intrinsic variability. Again, we only undertake this last criterion for the nine candidates remaining after Sect. 2.7.4. First, we use the \(\tt{glvary}\) tool to confirm variability using the Gregory-Loredo (G-L) algorithm. The Gregory-Loredo variability algorithm is a commonly used test to detect time variability in sources (Gregory & Loredo, 1992).16 This adds a second criterion for variability, increasing the probability that the light curves of our candidate FXTs show strong variability during the observation. Applying the G-L task to our sample of nine FXT candidates, we found that one of them (identified in the _Chandra_ ObsId 16302 at \(\alpha\)=13\({}^{\rm h}\)56\({}^{\rm m}\)01\({}^{\rm s}\).10, \(\delta\)=-32\({}^{\circ}\)35'15.95'') has a low probability (\(\approx\)0.1) of being a variable source, with a variability index of 2.17 These results indicate that this source is inconsistent with flux variability throughout the observation.
The remaining eight sources show clear variability throughout the _Chandra_ observation according to their variability probability (\(\gtrsim\)0.99) and variability index (\(\gtrsim\)8) (see Table 2 for more details). Footnote 16: The G-L method splits the sources into multiple time bins and looks for significant deviations between them. The tool assigns a variability index based on the total odds ratio and the corresponding probability of a variable signal. Footnote 17: Although our algorithm is designed to select only sources with large amplitude variations in the number of counts (see Sect. 2.2 or sect. 2.1 in Paper I), this source does not vary. The peculiar light curve of this source erroneously allows it to pass our initial method: when the light curve is split into different time windows, our method erroneously selects this source because one window contains one of the two peaks and a quiescent phase (mimicking the light curve of a transient source). Thus, the G-L test is necessary to rule out any such source. Finally, to reject possible strong background flaring episodes in the 0.5-7 keV band, we employ the \(\tt{dmextract}\) and \(\tt{deflare}\) tools to examine the evolution of the background count rate during the observations. None of the FXT candidates is affected by background flares. Furthermore, we confirm visually that the counts from all sources are detected in dozens to hundreds of individual pixels (discarding association with bad columns or hot pixels) tracing out portions of _Chandra_'s Lissajous dither pattern (appearing as a sinusoidal-like evolution of \(x\) and \(y\) detector coordinates as a function of time; see Appendix Fig. A.2) over their duration, reinforcing that they are astrophysical sources. Therefore, we have a final sample of eight FXTs. \begin{table} \begin{tabular}{l c c c} \hline \hline FXT & Odds ratio & Prob. & Var. Index \\ \hline (1) & (2) & (3) & (4) \\ \hline 15 & 10.18 & 0.99 & 9 \\ 16 & 93.73 & 1.0 & 10 \\ 17 & 9.09 & 0.99 & 8 \\ 18 & 8.67 & 0.99 & 8 \\ 19 & 167.0 & 1.0 & 10 \\ 20 & 34.47 & 1.0 & 10 \\ 21 & 6.19 & 0.99 & 8 \\ 22 & 29.56 & 1.0 & 9 \\ \hline \end{tabular} \end{table} Table 2: Variability properties of the extragalactic FXT candidates detected and/or discussed in this work obtained with the G-L method, ordered by subsample and date. _Column 1:_ shorthand identifier (FXT #) used throughout this work. _Column 2:_ logarithmic odds ratio (ratio of obtaining the observed distribution versus obtaining a flat distribution) for a variability signal. _Column 3:_ variable signal probability (the probability that the flux calculated from the source region is not constant throughout the observation). _Column 4:_ variability index. #### 2.7.6 Completeness It is important to keep in mind that real FXTs may have been ruled out erroneously by the criteria above. To roughly estimate this, we compute the probability that a FXT candidate overlaps with another X-ray source and/or star by chance. Assuming Poisson statistics (i.e., \(P(k,\lambda)\)), the probability of one source (\(k\)=1) being found by chance inside the 3\(\sigma\) localization uncertainty region of another is given by \[P(k,\lambda)=\frac{e^{-\lambda}\lambda^{k}}{k!}, \tag{1}\] where \(\lambda\) is the source density of X-ray sources and/or stars on the sky multiplied by the 3\(\sigma\)_Chandra_ localization uncertainty area.
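For illustration, Eq. (1) can be evaluated as follows; the source density and error radius in the example call are representative values, not measurements from our sample.

```python
# Small worked example of Eq. (1): chance-alignment probability.
import numpy as np
from math import factorial


def p_chance(density_per_arcmin2, r_err_arcsec, k=1):
    """Probability of k unrelated sources inside a circle of radius r_err."""
    lam = density_per_arcmin2 * np.pi * (r_err_arcsec / 60.0) ** 2
    return np.exp(-lam) * lam ** k / factorial(k)


p_chance(2.0, 1.3)   # ~3e-3 for a Gaia-like density of 2 arcmin^-2 and a 1.3" radius
```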
As a reference, the mean density of X-ray sources detected by _Chandra_, _XMM-Newton_ and _Swift_-XRT is 0.36, 1.68, and 0.07 arcmin\({}^{-2}\), respectively, while the mean density of optical sources detected by _Gaia_ is 2.0 arcmin\({}^{-2}\). We use the X-ray detections from the CSC2, 4XMM-DR11 and 2SXPS catalogs (Evans et al., 2010; Webb et al., 2020; Evans et al., 2020), and the _Gaia EDR3_ catalog for stars (Gaia Collaboration et al., 2021) to determine the X-ray or optical source densities, respectively. The probability is 0.0024 and 0.0029 for X-ray and optical sources, respectively. Taking into account just the X-ray sources discarded solely by Criteria 1 or 2, 20 and 42 X-ray sources (see Table 1), respectively, we expect \(\ll\)1 of these to be ruled out wrongly. If we consider only the 109 X-ray sources which were discarded by both Criteria 1 and 2, the combined probability is 1\(\times\)10\({}^{-5}\), and thus the expected number of erroneously dismissed sources is also \(\ll\)1. Considering the densities of X-ray sources in individual _Chandra_ fields, they span a minimum-maximum density range between 0.0042-2.302 arcmin\({}^{-2}\), yielding a probability range of 1.4\(\times\)10\({}^{-6}\) to 0.0505. Thus, under these extreme density conditions, the number of X-ray sources discarded wrongly by Criteria 1 is \(\approx\)0.1. This value is relatively low, and thus reinforces the idea that an erroneous rejection is unlikely even in extreme conditions. As an extreme example, we can consider the X-ray positions of CDF-S XT2 and source XID\({}_{\rm{MS}}\)256 (\(\approx\)30 photons detected during the 4 Ms exposure, classified as a normal galaxy; Xue et al., 2011; Luo et al., 2017), which differ by only \(\approx\)2\(\aas@@fstack{\prime\prime}\)0. Upon further investigation of the flux and position of the X-ray variability, it was realized that XID\({}_{\rm{MS}}\)256 and CDF-S XT2 are distinct sources (Xue et al., 2019). The X-ray source density (at \(\sim\)2\(\aas@@fstack{\prime\prime}\)0 off-axis angle) in the _Chandra Deep Field South_ at 7 Ms is \(\approx\)5.6 arcmin\({}^{-2}\)(Luo et al., 2017), leading to a chance alignment probability (using Eq. 1) of 0.019 between CDF-S XT2 and ID\({}_{\rm{MS}}\)256. Although this value is low, it is non-zero, and thus care should be given to the spatial/temporal alignment of X-ray sources, so as to not discard candidates erroneously. It is not easy to assess the contribution by Criterion 3 to the completeness given the highly disjoint nature of the databases. Similar to Paper I, we assume that Criterion 3 does not disproportionately discard real FXTs. In aggregate, we conclude that our rejection criteria do not apparently impact on the completeness of our FXT candidate sample. #### 2.7.7 Summary We identify eight FXT candidates, three of them have been previously discovered and classified as FXTs by Xue et al. (2019), Lin et al. (2021) and Lin et al. (2022); see Sect. 2.8 for more details. Table 3 shows important information of the final sample: the coordinates, duration (\(T_{90}\)), instrumental off-axis angle, positional uncertainty, hardness ratio (HR; computed following Park et al., 2006), and S/N ratio (computed using the wavdetect tool). 
Figure 3 shows the background-subtracted 0.5-7.0 keV light curves of our final sample of FXT candidates: short-term, in units of counts (_first column_) and logarithmic count rates (_second column_); long-term, in units of net counts for _Chandra_ only (_third column_) and in flux to compare uniformly _Chandra_, _XMM-Newton_ and _Swift_-XRT data (_fourth column_). It is important to mention that the first three criteria considered (X-ray archival data, _Gaia_ detection cross-match, and NED/SIMBAD/VizieR catalogs) contribute significantly and in complementary ways to clean the sample (especially for discarding stellar contamination). Finally, we label each FXT candidate by "XRT" followed by the date, where the first two numbers correspond to the year, the second two numbers to the month, and the last two numbers to the day (see Table 3, _second column_). Nevertheless, similar to Paper I, to identify each source quickly throughout this manuscript we also denominate them by "FXT"+# (ordered by date; see Table 3, _first column_) from 15 to 22, because this work is a sequel to Paper I, where FXTs were labeled up to FXT 14. Furthermore, FXT 18 does not have additional _Chandra_, _XMM-Newton_ or _Swift_-XRT observations to ensure its transient nature; however, we keep it to be consistent with the selection criteria of this work. ### Fainter Electromagnetic detections We now focus on a detailed multiwavelength search (in Sects. 2.8.1 to 2.8.4, from radio to gamma rays) of each candidate for a contemporaneous counterpart18 and host galaxy, using several archival datasets to understand their origin. Footnote 18: Hereafter, "counterpart" refers to the emission relating to the FXT candidate, not its host galaxy, during epochs close to the X-ray trigger. #### 2.8.1 Radio Emission We search for any possible radio emission associated with our FXT candidates using the _RADIO-Master Radio Catalog_, which is a revised master catalog with select parameters from a number of the HEASARC database tables. It holds information on radio sources across a wide range of telescopes/surveys [e.g., the Very Long Baseline Array, the Very Large Array (VLA), and the Australia Telescope Compact Array] and frequencies (from 34 MHz to 857 GHz). Because of the poor angular resolution of some associated radio catalogs, we perform an initial cone search for radio sources with a radius of 60 arcseconds. Following this initial 60 arcsecond cut, we repeat the search using limiting radii consistent with the combined radio + X-ray 3\(\sigma\) positional errors. The current version of the master catalog does not yet contain the recent VLA Sky Survey (VLASS)19 and Rapid ASKAP Continuum Survey (RACS; Hale et al., 2021) catalogs, so we additionally query these using resources from the Canadian Astronomy Data Centre20 interface. Unfortunately, our search returns no matches, indicating that none of the final sample of FXT candidates or host sites is unambiguously detected at radio wavelengths. #### 2.8.2 Optical and Mid-Infrared Counterpart Emission In order to explore possible optical and MIR contemporaneous counterparts of our final sample, we examine forced differential photometry taken from the Zwicky Transient Facility (ZTF; Bellm et al. 2019; Graham et al. 2019; Masci et al. 2019) and the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al. 2018; Smith et al. 2020), and we visually inspect images obtained during epochs around the X-ray trigger from the unWISE time-domain (Meisner et al. 2022)
and the Legacy Surveys DR10 catalogs (Dey et al. 2019). ZTF is a wide-field (field of view of 55.0 deg\({}^{2}\)) time-domain survey, mapping the sky every few nights with custom \(g\), \(r\), and \(i\) filters to 5-\(\sigma\) limiting magnitudes of \(\approx\)20.8, 20.6, 19.9 AB mag, respectively (Bellm et al. 2019). ATLAS is a four-telescope asteroid impact early warning system, which observes the sky several times every night in custom _cyan_ (\(c\); 420-650 nm) and _orange_ (\(o\); 560-820 nm) filters to 5-\(\sigma\) limiting magnitudes of \(\approx\)19.7 AB mag (Tonry et al. 2018; Smith et al. 2020). Figures 4 and 5 show the differential photometry light curves taken from ATLAS (\(co\)) and ZTF (\(gri\)), respectively, for the FXT candidates. For visual clarity, the ZTF and ATLAS photometry are binned by 50 days, with the errors added in quadrature. The locations of all eight FXT candidates have been observed by ATLAS (see Fig. 4), although FXT 15 and FXT 16 were not visited by ATLAS around the time of the X-ray detection (highlighted by the vertical blue lines). ZTF, on the other hand, has only observed six FXT candidate locations (see Fig. 5), with the FXT 15, FXT 17 and FXT 18 fields not observed around the time of the _Chandra_ detection. Notably, the most recent FXTs (FXTs 20, 21 and 22) have forced differential photometry light curves from ZTF and ATLAS, covering a wide epoch range both before and after the _Chandra_ X-ray detections. Overall, none of the FXT candidates exhibits significant (\(>\)5\(\sigma\)) detections of optical variability or flares by ZTF and ATLAS around the time of the FXT candidate X-ray trigger, nor are there any robust detections in any previous or subsequent epochs. We derive 3\(\sigma\) upper limits from the closest observation taken by ZTF and ATLAS (for the available filters and FXTs), as listed in Table B.1. To check if the forced photometry is consistent with zero flux (around the X-ray trigger), we use the statistical test Constrained (Stoppa et al. 2023), developed explicitly to compare the consistency between the observations and a constant zero-flux model. We adopt a methodology identical to that discussed in Eappachen et al. (2023, submitted). We applied this test considering two time windows, [-10;20] and [-10;100] days, with respect to the X-ray trigger, because possible optical counterparts have timescales from days (e.g., the afterglow of GRBs) to weeks/months (e.g., CC-SNe emission). We conclude that, for all the sources, the observations in both time windows are consistent with a model of zero flux density. Furthermore, the DESI Legacy Imaging Surveys (DR10) combine three major public projects plus additional archival data to provide imaging over a large portion of the extragalactic sky visible from the Northern and Southern Hemispheres in at least three optical bands (\(g\), \(r\), and \(z\)). The sky coverage (\(\sim\)30,000 deg\({}^{2}\)) is approximately bound by \(-90^{\circ}\)\(<\)\(\delta\)\(<\)\(+84^{\circ}\) in celestial coordinates and \(|b|\)\(>\)15\({}^{\circ}\) in Galactic coordinates (Dey et al. 2019). Thus, the Legacy Imaging survey observes most FXT locations (except for FXTs 17 and 21). We visually explore each individual imaging epoch provided by the Legacy survey in the \(g\), \(r\), \(i\), and \(z\) bands to identify potential contemporaneous optical counterparts of the FXTs. However, no contemporaneous optical counterparts are identified around the X-ray trigger time after a visual inspection.
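For reference, the 50-day binning with errors added in quadrature used for Figs. 4 and 5, together with a simple \(>\)5\(\sigma\) detection flag, amounts to the following sketch; the array names are placeholders for the ZTF/ATLAS forced-photometry columns.

```python
# Illustrative binning of forced differential photometry (not the actual script).
import numpy as np


def bin_photometry(mjd, flux, flux_err, width=50.0):
    """Return (bin centre, mean flux, error, >5-sigma?) tuples for each bin."""
    edges = np.arange(mjd.min(), mjd.max() + width, width)
    idx = np.digitize(mjd, edges) - 1
    out = []
    for b in range(len(edges) - 1):
        sel = idx == b
        if not sel.any():
            continue
        f = flux[sel].mean()
        e = np.sqrt(np.sum(flux_err[sel] ** 2)) / sel.sum()   # quadrature / N
        out.append((edges[b] + width / 2, f, e, abs(f) / e > 5.0))
    return out
```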
The Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010) provides an unprecedented time-domain view of the MIR sky at W1=3.4 \(\mu m\) and W2=4.6 \(\mu m\) due to the NEOWISE mission extension (Mainzer et al. 2011, 2014). WISE has completed more than 19 full-sky epochs over a \(>\)12.5 year baseline, with each location having been observed for \(\gtrsim\)12 single exposures (Meisner et al. 2022). In order to search for a potential counterpart inside the WISE and NEOWISE images, we use the time-domain API tools provided by the _unTimely Catalog_, which considers data from 2010 to 2020 (Meisner et al. 2022). We visually inspect each single-epoch image of each FXT field (for FXT 22, only up to \(\sim\)1.5 years before the X-ray trigger). Unfortunately, none of the FXT candidates shows significant detections of variability or flares around the time of the X-ray trigger. Table 3: Main properties of the final sample of FXT candidates, ordered by date: shorthand identifier (FXT #), XRT name, _Chandra_ ObsId, exposure time (ks), date, \(T_{90}\) (ks), RA and DEC (deg), instrumental off-axis angle, positional uncertainty, hardness ratio (HR; computed following Park et al., 2006), and S/N (computed using the wavdetect tool).
Figure 3: 0.5–7 keV light curves for each FXT candidate: (_1st column_) full exposure, in units of counts; (_2nd column_) zoom in, from the detection of the first photon to the end of the exposure, in units of count rate (cts s\({}^{-1}\)), with log-log scaling and 5 counts per bin. The _gray dashed lines_ show the stop-time per observation regarding the beginning of the transient; (_3rd column_) long-term light curve, with each point representing individual _Chandra_ exposures (cyan circles with 1-\(\sigma\) error bars) to highlight the significance of detections and non-detections, in units of counts; (_4th column_) long-term light curve, with each point representing individual _Chandra_ (_cyan_), _XMM-Newton_ (_orange_) and _Swift-XRT_ (_green_) exposures in units of flux (erg s\({}^{-1}\) cm\({}^{-2}\)). For the long-term light curves, the observation including the transient is denoted by a _large red star_ (1-\(\sigma\) error bars), while triangles denote observations with (3-\(\sigma\)) upper limits. All fluxes are reported in the 0.5–7 keV band in the observer’s frame. #### 2.8.3 Ultraviolet, Optical, and Infrared Host Galaxy Identification To search for UV, optical, NIR and MIR emission associated with any possible host galaxy in the vicinity of each FXT candidate, we perform a cone search within a radius equivalent to the 3\(\sigma\)_Chandra_ error position (see Table 3) in the following catalogs: GALEX Data Release 5 (GR5; Bianchi et al. 2011), Figure 4: ATLAS differential photometry of the _cyan_ (\(c\)) and _orange_ (\(o\))-bands light curves performed at the position of the FXT candidates.
\(<5\sigma\) data points are shown in hollow circles, while \(>5\sigma\) data points are shown in solid circles. The _blue vertical lines_ show the epochs when the FXT candidates were detected by _Chandra_, while the _dashed gray lines_ represent the zero flux. Figure 5: ZTF differential photometry of the \(g\) (_green points_), \(r\) (_red points_) and \(i\)-bands (_orange points_) light curves performed at the position of the FXT candidates. \(<5\sigma\) data points are shown in hollow circles, while \(>5\sigma\) data points are shown in solid circles. The _blue vertical lines_ shows the epochs when the FXT candidates were detected by _Chandra_, while the _dashed gray lines_ represents the zero flux. Pan-STARRS Data Release 2 (Pan-STARRS-DR2; Flewelling 2018), the DES Data Release 2 (DES-DR2; Abbott et al. 2021), the SDSS Data Release 16 (SDSS-DR16; Ahumada et al. 2020), the NOAO Source Catalog Data Release 2 (NSC-DR2; Nidever et al. 2021), the _Hubble_ Source Catalog version 3 (HSCv3; Whitmore et al. 2016), the UKIRT InfraRed Deep Sky Survey Data Release 11+(UKIDSS-DR11+; Warren et al. 2007), the UKIRT Hemisphere Survey Data Release 1 (UHS-DR1; Dye et al. 2018), the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006), the VHS band-merged multi-waveband catalogs Data Release 5 (DR5; McMahon et al. 2013), the Spitzer Enhanced Imaging Products Source List (Teplitz et al. 2010), and the unWISE (unWISE; Schlafly et al. 2019) and CatWISE (McMahon et al. 2013; Dye et al. 2018; Marocco et al. 2021) Figure 6: Archival optical, near-infrared, mid-infrared and X-ray images of extragalactic FXT candidates; the telescope/instrument +filter and FXT ID name are shown in the upper-left corner. Each cutouts is centered on the X-ray position, while red circles denote 2\(\sigma\)_Chandra_ errors in the source localisation. _(1st, 2nd, 3rd and 4th columns)_ optical band (DECam, Pan-STARRS and HST) images. _(5th and 6th columns)_ near-infrared \(J\) or \(H\) and \(K\) (2MASS, UKIRT or VISTA) images; _(7th column)_ 3.4\(\mu\)m (unWISE) images; and _(8th column)_ X-ray _Chandra_ (ACIS) 0.5–7 keV images. The _coloured arrows and circles_ show the localization of the possible host/counterparts of the FXT candidates. HST images were aligned using the astrometry of Gaia. catalogs, as well as the ESO Catalogue Facility and the NED (Helou et al., 1991), SIMBAD (Wenger et al., 2000), and VizieR (Ochsenbein et al., 2000) databases. We supplement this by including any extended sources found during our archival image analysis in Sect. 2.7.4. We assume that uncertainties in the UV through MIR centroid positions contribute negligibly to the overall error budget. Figure 6 shows image cutouts of the localization region of the FXTs (one per row), typically from Pan-STARRS, DECam, or _HST_ in the optical (_1st-4th columns_, using \(g\), \(r\), \(i\) and \(z\) or the corresponding _HST_ filters), VISTA, UKIRT, 2MASS or HST in the NIR (_5th and 6th columns_, using \(J\), \(H\), \(K\) or the corresponding _HST_ filters), unWISE in the MIR (_7th column_, in the 3.6\(\mu\)m filter) band, and the _Chandra_-ACIS image (_8th column_, in the 0.5-7.0 keV band). FXT 15 has no optical and NIR sources detected within the 3\(\sigma\) X-ray positional uncertainty of this source in the _HST_, DECam, 2MASS, or unWISE images (see Fig. 6). Upper limits are given in Table 4. FXT 16/CDF-S XT2 (identified previously by Zheng et al. 2017 and analyzed in detail by Xue et al. 
2019) was detected by _Chandra_ with a 2\(\sigma\) positional uncertainty of 0\(\farcs\)32 (see Table 3). This accurate _Chandra_ X-ray position allows us to identify the host galaxy, which lies at an offset of \(\approx\) 0\(\farcs\)44 (i.e., a projected distance of \(\approx\)3.3 kpc) using _HST_ images (see Fig. 6). The galaxy has a spectroscopic redshift of \(z_{\rm spec}\)=0.738. The probability of a random match between FXT 16 and a galaxy as bright as or brighter than \(m_{\rm F1600}\)\(\approx\)24 AB mag within 0\(\farcs\)44 is \(\approx\)0.01 (Xue et al., 2019). FXT 17 does not have optical and NIR sources detected within the 3\(\sigma\) X-ray error region of this source in the Pan-STARRS, 2MASS, or unWISE images (see Fig. 6). Upper limits are given in Table 4. For FXT 18, one faint source with \(m_{r}\)\(\approx\)24.2 AB mag (see Fig. 6, _source #1_) appears inside the large localization region (\(r\)\(\approx\)7\(\farcs\)5 at 3\(\sigma\)) in the DECam \(g\), \(r\), \(i\) and \(z\)-band images with an off-set angle of \(\approx\)2\(\farcs\)6 from the X-ray center position; it has a chance association probability of \(<\)0.095. Two other sources lie slightly outside the X-ray uncertainty region, _sources #2_ and #3, with chance probabilities of 0.582 and 0.363, respectively (see Fig. 6); such high probabilities suggest an association with either one of them is unlikely. FXT 19 (reported previously by Lin et al. 2019 and analyzed in detail by Lin et al. 2022) lies close to a faint (\(m_{\rm F1600}\)\(\approx\)24.8, \(m_{\rm F1840}\)\(\approx\)24.9, \(m_{\rm F1100}\)\(\approx\)24.7 and \(m_{\rm F1600}\)\(\approx\)24.3 AB mag, using aperture photometry) and extended optical and NIR source in _HST_ imaging (see Fig. 6, _source #1_) with an angular offset \(\approx\)0\(\farcs\)45. The chance probability for FXT 19 and _source #1_ to be randomly aligned in F160W is very low, only 0.005 (Lin et al., 2022). FXT 20 was detected 6\(\fs\)812 (or \(\approx\)500 kpc in projection) from the center of the galaxy cluster Abell 1795 (located at \(\approx\)285.7 Mpc) during a _Chandra_ calibration observation (ObsId 21831). FXT 20 lies close to a faint source \(m_{r}\)\(\approx\)23.5 AB mag (see Fig. 6, _source #1_) identified in DECam \(g\), \(r\), and \(z\)-bands at an offset angle of \(\approx\)0\(\farcs\)6. The probability of a false match is \(P\)\(<\)0.005 (adopting the formalism developed by Bloom et al. 2002) for such offsets from similar or brighter objects. FXT 21 has a faint optical source (\(m_{r}\)\(\approx\)25.1 AB mag) inside the 3\(\sigma\) X-ray error position in Pan-STARRS images (see Fig. 6, _source #1_), but no source is detected in 2MASS NIR or unWISE MIR images. The offset between the FXT and the optical source position is \(\approx\)0\(\farcs\)5, with a false match probability of \(P\)\(<\)0.0085 (adopting the formalism developed by Bloom et al. 2002) for such offsets from similar or brighter objects. Finally, FXT 22 (identified previously by Lin et al. 2021) was detected 4\(\farcm\)079 (or \(\approx\)300 kpc in projection) from the center of the galaxy cluster Abell 1795 (located at \(\approx\)285.7 Mpc) during a _Chandra_ calibration observation (ObsId 24604). No sources are detected within the 3\(\sigma\) X-ray error region of this source in the DECam optical, VISTA NIR, or unWISE MIR images (see Fig. 6). 
However, this source falls close to an extended object, SDSS J134856.75+263946.7, with \(m_{r}\)\(\approx\)21.4 AB mag that lies at a distance of \(\approx\)4\(\farcs\)5 from the position of the FXT (\(\approx\)40 kpc in projection) with a spectroscopic redshift of \(z_{\rm spec}\)=1.5105 (Andreoni et al., 2021; Jonker et al., 2021; Eappachen et al., 2023). The probability of a false match is \(P\)\(<\)0.041 (adopting the formalism developed by Bloom et al. 2002) for such offsets from similar or brighter objects. To summarize, we conclude that four (FXTs 16, 19, 20 and 21) of the eight FXT candidates have high probabilities of being associated with faint (FXT 20) or moderately bright (FXTs 16, 19, and 21) extended sources within the 3\(\sigma\) positional error circle. In the case of FXT 22, it may be associated with the extended source SDSS J134856.75+263946.7 (\(z_{\rm spec}\)=1.5105); nevertheless, a relation with a faint background source cannot be excluded (a faint extended source is in the X-ray uncertainty region; Eappachen et al. 2023). In the case of FXT 18, its large positional uncertainty does not allow us to determine robustly the counterpart optical or NIR source. Finally, two FXT candidates (FXTs 15 and 17) have no associated optical or NIR sources in the available moderate-depth archival imaging, and remain likely extragalactic FXTs. None of the FXT candidates analyzed in this work appear to be associated with a nearby galaxy (\(\lesssim\)100 Mpc). In Sect. 3.4, we explore a scenario where these sources are related to Galactic stellar flares from faint stars. #### 2.8.4 Higher energy counterparts To explore if hard X-ray and \(\gamma\)-ray observations covered the sky locations of the FXTs, we developed a cone search in the Nuclear Spectroscopic Telescopes Array (_NuStar_; Harrison et al. 2013), _Swift_-Burst Alert Telescope (_Swift_-BAT; Sakamoto et al. 2008), INTErnational Gamma-Ray Astrophysics Laboratory (_INTEGRAL_; Rau et al. 2005), High Energy Transient Explorer 2 (_HETE_-2; Hurley et al. 2011), _InterPlanetary Network_(Ajello et al., 2019), and _Fermi_(von Kienlin et al., 2014; Narayaa Bhat et al., 2016) archives. We adopt a 10\(\farcm\) search radius for the _INTEGRAL_, _Swift_-BAT, _HETE_-2 and _Interplanetary Network_ Gamma-Ray Bursts catalogs, while for the Gamma-ray Burst Monitor (_GBM_) and the Large Area Telescope (LAT) _Fermi_ Burst catalogs we consider a cone search radius of 4 deg (which is roughly the typical positional error at \(\sim\)17\(\farcm\) confidence level for those detectors; Connaughton et al. 2015). Additionally, we implement a time constraint criterion of \(\pm\)15 days in our search between Gamma-ray and FXT triggers. To further probe whether there may be weak \(\gamma\)-ray emission below the trigger criteria of _Fermi_-GBM at the location of the FXTs, we investigated the _Fermi_-GBM daily data, the _Fermi_ position history files21, and the GBM Data Tools (Goldstein et al., 2022)2. We confirmed that FXTs 15, 16, 17, 20, and 21 were in the FoV of _Fermi_-GBM instruments during the X-ray trigger time\(\pm\)50 s, while FXTs 18, 19, and 22 were behind the Earth around the X-ray burst trigger time; thus, their fields were not visible. Table 1 summarizes the visibility of the sources and the _Fermi_-GBM instruments covering the fields around the X-ray trigger time (at a distance of \(\lesssim\)60 degrees). 
In summary, we find no hard X-ray or \(\gamma\)-ray counterparts associated with _NuSTAR_, _INTEGRAL_, _Swift_-BAT, _HETE-2_, _Interplanetary Network_ and the GBM and LAT _Fermi_ Burst catalogs, but cannot rule out weak \(\gamma\)-ray emission for FXTs 18, 19, and 22. ## 3 Spatial, Temporal and X-ray Spectral properties We analyze the spatial distribution of our final sample of FXT candidates in Sect. 3.1. Furthermore, the time evolution and spectral properties could give important information about the physical processes behind the FXT candidates, and thus we explore and describe these in Sects. 3.2 and 3.3, respectively. Finally, we explore a Galactic stellar flare origin of this sample in Sect. 3.4. ### Spatial properties If the FXT candidates are extragalactic and arise from large distances, then given the isotropy of the universe on large scales, we expect them to be randomly distributed on the sky. Figure 7 shows the locations, in Galactic coordinates, of the final FXT candidates of Paper I and this work, the initial FXT candidates of this work, and the _Chandra_ observations analyzed in Paper I and this work. We investigate the randomness of the FXT candidate distribution on the sky compared to all _Chandra_ observations considered in this work, using the non-parametric Kolmogorov-Smirnov (K-S) test (Kolmogorov 1933; Massey Jr 1951; Ishak 2017). To explore the randomness of the spatial distribution of our final sample of eight FXTs, we simulate 10,000 samples of 40,000 random sources distributed over the sky, taking as a prior distribution the _Chandra_ sky positions used in this work (which are functions of the pointings and exposures). Out of these fake sources, we randomly select eight sources, which we compare to the spatial distribution of the eight real FXT candidates using a 2D K-S test (following the methods developed by Peacock 1983 and Fasano & Franceschini 1987). We can reject the null hypothesis that these sources are drawn from the same (random) distribution only in \(\approx\)0.2% of the draws. Therefore, the positions of the eight FXT candidates are consistent with being randomly distributed over the total _Chandra_ observations on the sky. Intriguingly, FXTs 14 and 16 lie in the same field of view (i.e., in the _Chandra_ Deep Field South), as do FXTs 20 and 22 (i.e., in the direction of the galaxy cluster Abell 1795). Thus, we explore the probability that two FXTs occur in the same field, which is given by the Poisson statistic [i.e., \(P(k,\lambda)\), using Eq. 1], where \(k\)=2 and \(\lambda\) is the ratio between the total _Chandra_ exposure time in a particular field (6.8 and 3.1 Ms for the _Chandra_ Deep Field South and the cluster Abell 1795, respectively23) and the total _Chandra_ exposure time analyzed in Paper I and this work (\(\approx\)169.6 and 88.8 Ms, respectively), multiplied by the total number of FXTs identified (i.e., 22 FXTs). The chance probabilities for FXTs 14, 16 and 20, 22 are 0.115 and 0.029, respectively. We can conclude that the occurrence of two FXTs being found in these particular fields is unusual, but not ruled out at high significance.
Footnote 23: Both values taken from [https://cda.harvard.edu/chaser](https://cda.harvard.edu/chaser) ### Temporal properties To characterize and measure the break times and light-curve slopes in the X-ray light curves of the candidate FXTs, we consider a single power-law (PL) model with index \(\tau_{1}\), or a broken power-law (BPL) model with indices \(\tau_{1}\), \(\tau_{2}\) and break time \(T_{\rm break}\) (for more detail, see Paper I, sect. 3.2). Both models describe well the majority of the FXT X-ray light curves in this work. To fit the data, we use the least-square method implemented by the lmfit Python package.24 The best-fit model parameters and statistics are given in Table 5, while the light curves (in flux units; light curves have five counts per bin) and best-fit models are shown in Fig. 8. We define the light-curve zeropoint (\(T_{0}\)=0 sec) as the time when the count rate is 3\(\sigma\) higher than the Poisson background level (see Table 5). To confirm the zeropoint, we divide the light curves into bins of \(\Delta\)=100 and 10 seconds, and compute the chance probability that the photons per bin come from the background (\(P_{\rm bkg}\))25. Table 4: Optical, near-infrared, and mid-infrared AB magnitudes and 3\(\sigma\) upper limits of the counterpart/host-galaxy candidates of the FXT sample (see Sect. 2.8.3).
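The chance probability \(P_{\rm bkg}\) is simply the Poisson probability mass function evaluated for the counts in a bin given the expected background; a minimal sketch follows, with variable names and example values that are illustrative only.

```python
# Illustrative evaluation of P_bkg for a single light-curve bin.
from scipy.stats import poisson


def p_bkg(counts_in_bin, bkg_rate, bin_width):
    """Probability of obtaining `counts_in_bin` photons from background alone."""
    return poisson.pmf(counts_in_bin, mu=bkg_rate * bin_width)


p_bkg(7, 0.002, 100.0)   # e.g. 7 counts in a 100-s bin with a 0.002 cts/s background
```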
We found that the bins after \(T_{0}\) have \(P_{\rm bkg}\)\(\lesssim\)0.01, while \(P_{\rm bkg}\) immediately before \(T_{0}\) is higher, \(P_{\rm bkg}\)\(\gtrsim\)0.1-0.2. We use the _Bayesian Information Criterion_ (BIC)26 to understand which of the two models describes the data better. We consider the threshold criterion of \(\Delta\)BIC=BIC\({}_{h}\)-BIC\({}_{l}\)\(>\)2 to discriminate when comparing two different models, where BIC\({}_{h}\) is the higher model BIC, and BIC\({}_{l}\) is the lower model BIC. The larger \(\Delta\)BIC is, the stronger the evidence against the model with the higher BIC (Liddle 2007). Footnote 25: We use the Poisson probability mass function, \(P_{\rm bkg}=\exp{(-\mu)}\,\frac{\mu^{k}}{k!}\), where \(k\) and \(\mu\) are the number of photons per bin and the background rate multiplied by the bin time, respectively. The parameters of the best-fitting models of the light curves are listed in Table 5, while Figure 8 shows the best-fit broken power-law or simple power-law models. Five sources (FXTs 15, 16, 19, 20, and 22) require a break time (based on the BIC criterion), while three do not (FXTs 17, 18, and 21). In two of the former (FXTs 15 and 20), \(\tau_{1}\) is negative, indicating a discernible rise phase; the other three (FXTs 16, 19, and 22) are consistent with an early plateau phase. ### Spectral properties Using X-ray spectra and response matrices generated following standard procedures for point-like sources using ciao with the specextract script, we analyze the spectral parameters of the FXT candidates. The source and background regions are the same as those previously generated for the light curves (see Sect. 2.4). To find the best-fit model, because of the low number of counts, we consider the maximum likelihood statistic for a Poisson distribution called the Cash statistic (C-stat; Cash 1979).26 Because of the Poisson nature of the X-ray spectral data, the C-stat is not distributed like \(\chi^{2}\) and the standard goodness-of-fit is inapplicable (Buchner et al. 2014; Kaastra 2017). Thus, similarly to Paper I, we use _the Bayesian X-ray Astronomy package_ (BXA; Buchner et al. 2014), which joins the Monte Carlo nested sampling algorithm MultiNest (Feroz et al. 2009) with the fitting environment of XSPEC (Arnaud 1996). BXA computes the integrals over the parameter space, called the evidence (\(\mathcal{Z}\)), which is maximized for the best-fit model, assuming uniform model priors. Footnote 26: BIC= \(-2\ln{\mathcal{L}}+k\ln{N}\), where \(\mathcal{L}\) is the maximum value of the data likelihood, \(k\) is the number of model parameters, and \(N\) is the number of data points (Ivezić et al. 2014). We consider one simple continuum model: an absorbed power-law model (phabs*zphabs*po, hereafter the PO model), which is typically thought to be produced by a non-thermal electron distribution. We choose this simple model because we do not know the origin and the processes behind the emission of FXTs. Furthermore, the low number of counts does not warrant more complex models. The spectral absorption components phabs and zphabs represent the Galactic and intrinsic contributions to the total absorption, respectively.
Figure 7: Sky positions, in Galactic coordinate projection, of FXT candidates: the initial 151 FXT candidates of this work are represented by _blue triangles_ (see Sect. 2.6; some symbols overlap on the sky); the _final sample_ of eight extragalactic FXT candidates from this work are denoted by _large red stars_ (FXTs 20 and 22 overlap on this scale); and the final sample of 14 extragalactic FXTs analyzed in Paper I are shown as _orange circles_ (FXTs 14 and 16 overlap on this scale). The background grey scale encodes the location and number of distinct co- or closely-located observations among the combined 3899 and 5303 _Chandra_ observations used in this work and Paper I, respectively.
Figure 8: _Top panels:_ the observed 0.5–7.0 keV X-ray light curves in cgs units (_blue points_), starting at \(T\)=20 seconds. We also plot the best-fit broken power-law or simple power-law model (_red solid lines_). The light curves contain 5 counts per bin. _Bottom panels:_ the hardness ratio evolution (the soft and hard energy bands are 0.5–2.0 keV and 2.0–7.0 keV, respectively), following the Bayesian method of Park et al. (2006). The _red dashed line_ denotes a hardness ratio equal to zero. \(T_{0}=0\) s is defined here as the time when the count rate is 3\(\sigma\) higher than the Poisson background level.
Figure 9: _Top panels:_ X-ray spectra (_black crosses_), in units of counts s\({}^{-1}\) keV\({}^{-1}\). We also plot the best-fit absorbed power-law (_blue lines_) spectral model; see Table 6 for the corresponding best-fitting parameters. _Bottom panels:_ residuals (defined as data-model normalized by the uncertainty) of each spectral model.
During the fitting process, the Galactic absorption (\(N_{\rm H,Gal}\)) was fixed according to the values of Kalberla et al. (2005) and Kalberla & Haud (2015), while for the intrinsic neutral hydrogen column density (\(N_{\rm H}\)) we carried out fits both for \(z\)=0 (which provides a lower bound on \(N_{\rm H}\) since firm redshifts are generally not known, and is useful for comparison with host-less FXTs) and for the redshift values from Table 8 or fiducial values of \(z\)=1 for host-less sources. The best-fitting absorbed power-law models (and their residuals) and their parameters are provided in Fig. 9 and Table 6, respectively; additionally, Fig. 10 shows the histograms of the best-fit intrinsic neutral hydrogen column densities (\(N_{\rm H}\); _top panel_) and photon indices (\(\Gamma\); _bottom panel_) for the extragalactic FXT candidates of this manuscript (_orange histograms_) and from Paper I (_blue histograms_). The candidates show a range of \(N_{\rm H}\)\(\approx\)(1.1-18.1)\(\times\)10\({}^{21}\) cm\({}^{-2}\) (assuming \(z\)=0), and a mean value of \(\overline{N}_{\rm H}\)\(\approx\)5.0\(\times\)10\({}^{21}\) cm\({}^{-2}\), consistent with the range for sources reported by Paper I (see Fig. 10, _top panel_). We note that in all cases here, the best-fit \(N_{\rm H}\) is higher than the \(N_{\rm H,Gal}\) estimates from Kalberla et al. (2005) and Kalberla & Haud (2015) by a factor of \(\approx\)4-90. In every case, both intrinsic absorption and the Galactic component are needed, at \(\approx\)95% confidence at least, and in some cases even at the \(\approx\)99% confidence level; therefore, two absorption components are generally needed in the fitting process. Furthermore, excluding the soft candidate FXT 18, the best-fit power-law photon index ranges between \(\Gamma\)\(\approx\)2.1-3.4 for the candidate FXTs, with a mean value of \(\overline{\Gamma}\)=2.6.
FXT 18 is an exceptionally soft source (\(\Gamma\)\(\gtrsim\)6.5) compared to both this sample and the FXT candidates presented in Paper I (see Fig. 10, _bottom panel_). Finally, FXTs 17, 18, and 21, whose light curves are best fitted by a PL model, have some of the softest photon indices (\(\Gamma\)\(\gtrsim\)3).
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline FXT & \(z\) & \(N_{\rm H,Gal}\) & \(N_{\rm H}\) & \(\Gamma\) & log Norm & Flux & C-stat(dof) & ln \(\mathcal{Z}\) \\ \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline 15 & 0.0 & 0.5 & \(3.8^{+12.5}_{-3.5}\) & \(2.1^{+2.0}_{-1.2}\) & \(-5.9^{+0.9}_{-0.8}\) & 0.3\(\pm\)0.1 & 14.3(20) & \(-15.4\pm\)0.01 \\ & 1.0 & 0.5 & \(18.7^{+78.9}_{-18.5}\) & \(2.1^{+1.8}_{-1.2}\) & \(-5.9^{+0.8}_{-0.8}\) & 0.3\(\pm\)0.7 & 14.3(20) & \(-13.7\pm\)0.01 \\ 16 & 0.0 & 0.2 & \(2.4^{+7.3}_{-2.3}\) & \(2.1^{+1.7}_{-0.6}\) & \(-4.9^{+0.3}_{-0.2}\) & 2.9\(\pm\)0.3 & 72.8(88) & \(-46.8\pm\)0.02 \\ & 0.738 & 0.2 & \(8.0^{+7.4}_{-7.4}\) & \(2.1^{+0.7}_{-0.5}\) & \(-0.0^{+2.2}_{-0.3}\) & 2.9\(\pm\)0.3 & 73.1(88) & \(-45.8\pm\)0.02 \\ 17 & 0.0 & 0.2 & \(18.1^{+16.6}_{-14.0}\) & \(3.4^{+1.3}_{-1.8}\) & \(-4.2\pm\)0.9 & 2.3\({}^{+0.5}_{-0.3}\) & 24.9(34) & \(-19.4\pm\)0.02 \\ & 1.0 & 0.2 & \(66.8^{+31.8}_{-55.4}\) & \(2.6\pm\)1.4 & \(-4.6^{+0.6}_{-0.2}\) & \(2.5^{+0.3}_{-0.3}\) & 25.0(34) & \(-18.4\pm\)0.02 \\ 18 & 0.0 & 0.2 & \(1.1^{+3.1}_{-1.9}\) & \(>\)6.5 & \(-4.1^{+0.5}_{-0.3}\) & \(88.9^{+17.9}_{-17.9}\) & 10.9(11) & \(-15.4\pm\)0.02 \\ & 0.35 & 0.2 & \(1.3^{+3.1}_{-3.5}\) & \(>\)6.5 & \(-4.1^{+0.3}_{-0.3}\) & \(94.9^{+23.0}_{-20.8}\) & 11.1(11) & \(-15.2\pm\)0.02 \\ 19 & 0.0 & 0.5 & \(5.3^{+3.0}_{-4.3}\) & \(2.2\pm\)0.6 & \(-4.8^{+0.4}_{-0.3}\) & 3.1\(\pm\)0.3 & 99.5(126) & \(-59.9\pm\)0.02 \\ & 1.44 & 0.5 & \(47.1^{+42.6}_{-1.6}\) & \(2.2\pm\)0.6 & \(-4.9\pm\)0.3 & 3.1\(\pm\)0.2 & 99.9(126) & \(-57.9\pm\)0.02 \\ 20 & 0.0 & 0.3 & \(4.7^{+11.9}_{-4.4}\) & \(3.0^{+1.8}_{-1.3}\) & \(-4.7^{+0.8}_{-0.5}\) & \(2.2\pm\)0.5 & 30.6(36) & \(-23.4\pm\)0.02 \\ & 1.0 & 0.3 & \(25.3^{+3.7}_{-3.6}\) & \(3.0^{+1.8}_{-1.3}\) & \(-4.7^{+0.8}_{-0.5}\) & \(2.2^{+0.5}_{-0.3}\) & 30.6(36) & \(-21.6\pm\)0.01 \\ 21 & 0.0 & 0.7 & \(2.5^{+8.3}_{-4.0}\) & \(3.1^{+1.4}_{-1.3}\) & \(-4.8^{+0.6}_{-0.6}\) & 1.9\(\pm\)0.4 & 16.6(60) & \(-17.0\pm\)0.01 \\ & 0.85 & 0.7 & \(11.6^{+31.4}_{-4.0}\) & \(3.1^{+1.3}_{-1.3}\) & \(-4.8^{+0.6}_{-0.4}\) & 1.9\(\pm\)0.5 & 16.8(60) & \(-15.6\pm\)0.01 \\ 22 & 0.0 & 0.3 & \(1.9^{+5.3}_{-1.9}\) & \(2.3^{+0.3}_{-0.6}\) & \(-4.5^{+0.2}_{-0.2}\) & \(8.7^{+0.9}_{-0.7}\) & 78.1(62) & \(-49.4\pm\)0.02 \\ & 1.5105 & 0.3 & \(17.0^{+50.1}_{-15.4}\) & \(2.2^{+0.9}_{-0.5}\) & \(-4.5^{+0.3}_{-0.2}\) & \(8.6^{+0.9}_{-0.7}\) & 78.1(62) & \(-47.3\pm\)0.02 \\ \hline \end{tabular} \end{table} Table 6: Results of the 0.5–7 keV X-ray spectral fits for the final sample of FXT candidates. _Column 2:_ Redshift assumed (\(z\)=0, 1, or from Table 8). _Columns 3 and 4:_ Galactic and intrinsic column density absorption (\(\times\)10\({}^{21}\)) in units of cm\({}^{-2}\), respectively. The former is kept fixed during the fit. _Column 5:_ Photon index from the power-law model. _Column 6:_ Normalization parameter (in units of photons keV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\)). _Column 7:_ Absorbed fluxes (\(\times\)10\({}^{-14}\)) in units of erg cm\({}^{-2}\) s\({}^{-1}\) (0.5–7.0 keV). _Column 8:_ C-stat value and the number of degrees of freedom. _Column 9:_ the log-evidence (ln \(\mathcal{Z}\)) values for each model. The errors are quoted at the 3\(\sigma\) confidence level from the posterior distributions obtained by BXA except for the flux (which is quoted at 1\(\sigma\)).
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline FXT & \(T_{0}\)(UTC) & Model & T\({}_{\rm break}\)(ks) & \(\tau_{1}\) & \(\tau_{2}\) & \(F_{0}\) (erg cm\({}^{-2}\) s\({}^{-1}\)) & ln \(\mathcal{L}\)(dof) & BIC \\ \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline 15 & 2014-05-07 04:26:03.74 & BPL & 3.7\(\pm\)0.7 & -0.4\(\pm\)0.1 & 2.9\(\pm\)1.2 & (6.8\(\pm\)1.9)\(\times\)10\({}^{\dots}\) & \(\dots\) & \(\dots\) \\ \hline \end{tabular} \end{table} Table 5: Best-fit temporal model parameters and statistics for the light curves of the final FXT sample: zeropoint \(T_{0}\) (UTC), adopted model (PL or BPL), break time \(T_{\rm break}\), slopes \(\tau_{1}\) and \(\tau_{2}\), flux normalization \(F_{0}\), \(\ln\mathcal{L}\)(dof), and BIC.
#### 3.3.1 Hardness ratio and photon index evolution The hardness ratio (HR) can be used to distinguish between X-ray sources and to explore their spectral evolution, especially in cases with low-count statistics (e.g., Lin et al. 2012; Peretz & Behar 2018). In this work, the HR is defined as: \[HR=\frac{H-S}{H+S}, \tag{2}\] where \(H\) and \(S\) are the number of X-ray photons in the 2.0-7.0 keV hard and 0.5-2.0 keV soft energy bands, respectively. For each source candidate, we calculate the HR using the Bayesian code BEHR (Park et al. 2006); the values are listed in Table 3, _column 11_, and plotted in Fig. 11 (_top panel_). We compare the HR of the 90 objects identified as "stars" by _Criterion 2_ (see Fig. 11, _bottom panel_, _cyan histogram_) in Sect. 2.7.2 with the final sample of FXTs in this work (_orange histogram_) and the sample of FXTs reported by Paper I (_blue histogram_). Stars typically have very soft X-ray spectra (Güdel & Nazé 2009), confirmed by the fact that \(\approx\)90% of the star candidates strongly skew toward soft HRs (\(\lesssim\)0.0). Clearly, Fig. 11 shows that FXTs do not stand out in the HR plane; thus, HR is not a useful discriminator on its own between stellar contamination and extragalactic FXTs. We also analyze how the HR and power-law index of the X-ray spectrum change with time. To this end, we compute the time-dependent HR, with the requirement of 10 counts per time bin from the source region (to improve the statistics), which we show in the _lower panels_ of Fig. 8. For sources that are well fit by a BPL model, we also split the event files at \(T_{\rm break}\) and extract and fit the spectra to compute spectral slopes "before" and "after" \(T_{\rm break}\) (\(\Gamma_{\rm before}\) and \(\Gamma_{\rm after}\), respectively; see Table 7), using the absorption values derived from the fit to the full spectrum (see Table 6). We fit both spectral intervals together assuming fixed, constant \(N_{\rm H,Gal}\) and \(N_{\rm H}\) (taken from Table 6). The spectrum of FXT 16 clearly softens after the plateau phase (Fig. 8 and Table 7) at \(>\)90% confidence. Similar spectral evolution was also seen in the previous FXT candidates XRT 030511 and XRT 110919 (Paper I). FXTs 15 and 20 exhibit similar spectral softening trends, with \(T_{\rm break}\) as a pivot time, although with only marginal significance, while the rest show no obvious evidence of such trends (Fig. 8 and Table 7). Finally, it is important to mention that the FXTs whose light curves follow a PL model (FXTs 17, 18, and 21) show hardening trends in their HR evolution (see Fig. 8). Figure 11: _Top panel_: Hardness ratio of each FXT candidate (using the Bayesian BEHR code; Park et al. 2006) at the 1\(\sigma\) confidence level.
_Bottom panel_: Hardness-ratio distributions of our final sample of FXTs (_orange histogram_), compared to the X-ray transients classified as "stars" by _Criterion 2_ using _Gaia_ (filled _cyan_ histogram) and the sources identified previously as _distant_ FXTs (_blue histogram_) by Paper I. Figure 10: _Top panel:_ Intrinsic neutral hydrogen column density distribution, evaluated at \(z\)=0 and in units of cm\({}^{-2}\), obtained using the _power-law_ model for extragalactic FXT candidates from this work (_orange histogram_) and Paper I (_blue histogram_). The _arrows_ indicate that the \(z\)=0 intrinsic hydrogen column densities are lower bounds. _Bottom panel_: Photon index distribution, obtained using a _power-law_ model, for FXT candidates from this work (_orange histogram_) and Paper I (_blue histogram_). Note that the uncertainties on these parameter values for individual sources can be considerable (see Table 6). ### Galactic origin From our sample, FXTs 16 and 19 are clearly aligned with extended objects, proving an extragalactic origin. FXTs 18, 20, 21, and 22, based on their low random match probabilities, could be associated with potential hosts, supporting an extragalactic association (see Sect. 2.8 for more details). In the next paragraphs, similar to Paper I (see its Sect. 3.4 for more details), we explore whether any of the FXT candidates could still be associated with magnetically active M- or brown-dwarf flares, which are known to produce X-ray flares on timescales of minutes to hours, with flux enhancements up to two orders of magnitude (not only at X-ray wavelengths; Schmitt & Liefke 2004; Mitra-Kraev et al. 2005; Berger 2006; Welsh et al. 2007). Stellar flares typically show soft thermal spectra with temperatures of the order of \(kT\)\(\sim\)0.5-1 keV. M-dwarf stars (brown dwarfs) have optical and NIR absolute magnitudes in the range of \(\sim\)8-13 AB mag (\(\sim\)13-18 AB mag) and \(\sim\)3-10 AB mag (\(\sim\)15-25 AB mag), respectively (Hawley et al. 2002; Avenhaus et al. 2012; Tinney et al. 2014). The enhanced X-ray emission of M dwarfs shows flares on the order of \(L_{\rm X}^{\rm M-dwarf}\)\(\sim\)10\({}^{28}\)-10\({}^{32}\) erg s\({}^{-1}\) (Pallavicini et al. 1990; Pandey & Singh 2008; Pye et al. 2015), while brown dwarf flares cover a luminosity range of \(L_{\rm X}^{\rm B-dwarf}\)\(\sim\)10\({}^{27}\)-10\({}^{30}\) erg s\({}^{-1}\) (Berger 2006; Robrade et al. 2010). Empirically, the ratio between the X-ray luminosity (\(L_{X}\)) and the bolometric luminosity (\(L_{\rm bol}\)) of cool M dwarfs and L dwarfs typically exhibits values no larger than log(\(L_{X}\)/\(L_{\rm bol}\))\(\lesssim\)0.0 and \(\lesssim\) -3.0, respectively (e.g., Garcia-Alvarez et al. 2008; De Luca et al. 2020). Adopting this limiting ratio, we can assess whether a stellar flare scenario is viable for the FXT candidates. As in Paper I, we compute the ratio log(\(L_{X}\)/\(L_{\rm bol}\)) considering stellar synthetic models of dwarf stars (taken from Phillips et al. 2020, 1000\(\lesssim\)\(T_{\rm eff}\)\(\lesssim\)3000 K and 2.5\(\lesssim\) log \(g\)\(\lesssim\)5.5), normalised to the deepest photometric upper limits and/or detections (as listed in Table 4), and compute bolometric fluxes by integrating the normalized models.
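A schematic version of this log(\(F_{X}\)/\(F_{\rm bol}\)) test is sketched below; the toy SED, band, magnitude limit, and X-ray flux are placeholders standing in for the Phillips et al. (2020) model grids and the measured values, and the single-band normalization is a simplification of the actual procedure.

```python
import numpy as np

def ab_to_fnu(mag_ab):
    """AB magnitude -> flux density F_nu in erg s^-1 cm^-2 Hz^-1."""
    return 10.0 ** (-0.4 * (mag_ab + 48.6))

def log_fx_over_fbol(fx, wave_um, model_flam, band_um, mag_limit):
    """Scale a synthetic dwarf SED (F_lambda vs wavelength in micron) to an
    AB magnitude limit in one band, integrate it to F_bol, and return
    log10(F_X / F_bol); fx is the X-ray flux in erg s^-1 cm^-2."""
    c_um = 2.998e14                                   # c in micron s^-1
    flam_limit = ab_to_fnu(mag_limit) * c_um / band_um ** 2
    pivot = np.argmin(np.abs(wave_um - band_um))      # nearest model point
    scaled = model_flam * flam_limit / model_flam[pivot]
    fbol = np.trapz(scaled, wave_um)                  # erg s^-1 cm^-2
    return np.log10(fx / fbol)

# illustrative call: a 0.9 micron band limit of 23 AB mag, F_X = 3e-13 cgs,
# and a toy Gaussian SED shape in place of a tabulated dwarf-star model
wave_um = np.linspace(0.4, 5.0, 500)
model_flam = np.exp(-0.5 * ((wave_um - 1.2) / 0.6) ** 2)
print(log_fx_over_fbol(3e-13, wave_um, model_flam, 0.9, 23.0))
```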
We describe the constraints for each FXT below: For FXT 15, the \(m_{\rm s}\)\(>\)24.0 and \(m_{\rm\star}\)\(>\)23.0 AB mag limits imply distances to any putative M- and brown dwarfs responsible for the X-ray flares of \(\gtrsim\)0.2-1.7 kpc and \(\gtrsim\)0.02-0.2 kpc, respectively. The corresponding X-ray flare luminosities would be \(L_{\rm X}^{\rm M-dwarf}\)\(\gtrsim\)(7.0-690)\(\times\)10\({}^{\rm 29}\) and \(L_{\rm X}^{\rm B-dwarf}\)\(\lesssim\)(7.0-700)\(\times\)10\({}^{\rm 27}\) erg s\({}^{-1}\), respectively. These are not enough to discard a Galactic stellar flare nature. Furthermore, the ratio log(\(F_{X}\)/\(F_{\rm bol}\))\(\gtrsim\) 0.9 to -1.4 remains consistent with the extreme spectral type L1 stars (e.g., J0331-27 with log(\(F_{X}\)/\(F_{\rm bol}\))\(\sim\)0.0; De Luca et al. 2020). Thus, we cannot completely rule out an extreme stellar flare origin for FXT 15. In the case of FXT 17, the \(m_{\rm s}\)\(>\)24.1 and \(m_{\rm\star}\)\(>\)22.4 AB mag limits imply distances of \(>\)0.2-1.8 kpc and \(>\)0.02-0.2 kpc for M- and brown-dwarfs, respectively, and corresponding X-ray flare luminosities are \(L_{\rm X}^{\rm M-dwarf}\)\(\gtrsim\)(1.5-149)\(\times\)10\({}^{\rm 31}\) and \(L_{\rm X}^{\rm B-dwarf}\)\(\gtrsim\)(1.5-150)\(\times\)10\({}^{\rm 29}\) erg s\({}^{-1}\), respectively. The X-ray-to-total flux ratio is log(\(F_{X}\)/\(F_{\rm bol}\))\(\gtrsim\) - 0.1 to +0.4. Based on these, we cannot discard a Galactic stellar flare association for FXT 17. For FXT 18, due to the large X-ray positional uncertainty, there are several possible optical counterparts. We consider only _source #1_ here, as it is closest, lying inside the 2\(\sigma\) X-ray uncertainty position (see Fig. 6). The \(m_{\rm s}\)=24.6 and \(m_{\rm\star}\)=24.9 AB mag DECam detections (see Table 4) implies distances of \(\approx\)0.3-3.1 kpc and \(\approx\)0.03-0.3 kpc for M- and brown-dwarfs, respectively, and corresponding X-ray flare luminosities of \(L_{\rm X}^{\rm M-dwarf}\)\(\approx\)(5.9-600)\(\times\)10\({}^{\rm 33}\) and \(L_{\rm X}^{\rm B-dwarf}\)\(\approx\)(5.9-590)\(\times\)10\({}^{\rm 31}\) erg s\({}^{-1}\), respectively. The ratio log(\(F_{X}\)/\(F_{\rm bol}\)) is \(\approx\)2.5 to 3.0. These allow us to rule out robustly any Galactic stellar flare origin. In the case of FXT 20, we consider _source #1_ for the stellar flaring analysis (see Fig. 6). The detections \(m_{\rm s}\)=24.3 and \(m_{\rm\star}\)=23.5 AB mag equate to distance ranges of \(\approx\)0.2-2.5 kpc and \(\approx\)0.02-0.3 kpc for M- and brown-dwarfs, respectively, and corresponding X-ray flare luminosities of \(L_{\rm X}^{\rm M-dwarf}\)\(\approx\)(1.5-147)\(\times\)10\({}^{\rm 32}\) and \(L_{\rm X}^{\rm B-dwarf}\)\(\approx\)(1.5-150)\(\times\)10\({}^{\rm 30}\) erg s\({}^{-1}\), respectively. The ratio log(\(F_{X}\)/\(F_{\rm bol}\)) is \(\approx\)0.9 to 1.4. These allow us to discard a Galactic stellar flare origin for FXT 20. Finally, for FXT 21, the \(m_{\rm\star}\)=22.7 AB mag PanSTARRS detection yields distance ranges of \(\approx\)0.4-4.1 kpc and \(\approx\)0.04-0.4 kpc for M- and brown-dwarfs, respectively, and corresponding X-ray flare luminosities of \(L_{\rm X}^{\rm M-dwarf}\)\(\approx\)(3.0-300)\(\times\)10\({}^{\rm 31}\) and \(L_{\rm X}^{\rm B-dwarf}\)\(\approx\)(3.1-300)\(\times\)10\({}^{\rm 29}\) erg s\({}^{-1}\), respectively. The ratio log(\(F_{X}\)/\(F_{\rm bol}\)) is \(\approx\)0.2 to 0.7. Thus, we can rule out FXT 21 as a stellar flare. 
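The distance and luminosity arithmetic applied to each FXT above can be condensed into two short helpers; the magnitude, absolute-magnitude range, and flux in the example call are illustrative placeholders, not the values adopted for any particular FXT.

```python
import numpy as np

def distance_range_kpc(m_app, m_abs):
    """Distances (kpc) implied by apparent magnitude m_app for a range of
    absolute magnitudes m_abs, via m - M = 5 log10(d / 10 pc)."""
    m_abs = np.asarray(m_abs, dtype=float)
    return 10.0 ** ((m_app - m_abs + 5.0) / 5.0) / 1e3

def lx_erg_s(fx_cgs, d_kpc):
    """Isotropic X-ray luminosity (erg s^-1) for a flux in erg cm^-2 s^-1."""
    d_cm = np.asarray(d_kpc) * 3.086e21
    return 4.0 * np.pi * d_cm ** 2 * fx_cgs

# illustrative: a 24 AB mag limit and an absolute-magnitude range of 13-18
d = distance_range_kpc(24.0, [18.0, 13.0])   # ~0.16-1.6 kpc
print(d, lx_erg_s(3.0e-13, d))               # the flux here is a placeholder
```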
In summary, the multi-wavelength photometry indicates that three FXTs (FXTs 18, 20, and 21) appear inconsistent with stellar flaring episodes from Galactic M dwarfs and brown dwarfs, while deeper observations are required to completely rule out this option for FXTs 15 and 17. ### One or two populations of FXTs? In Paper I, we found that the FXT candidates could be robustly classified into "_local_" and "_distant_" populations (see their Sect. 3.5), based on the proximity of some sources to local galaxies (distances\(\lesssim\)100 Mpc). In contrast, we find no plausible and robust association between our final sample and local galaxies in the current work. Two FXT candidates, XRT 19127 and XRT 210423, are detected at projected distances of \(\approx\)500 and 300 kpc, respectively, from the center of the galaxy cluster Abell 1795 (\(\approx\)285.7 Mpc). However, neither is obviously associated with cluster members, and physical offsets in this range are not easily explained by any of the possible physical scenarios for FXTs (see Sect. 6). The lack of local FXTs could be explained by the limited _Chandra_ exposure time spent observing local galaxies in recent years. Around 26.5% of the _Chandra_ ObsIDs (amounting to \(\approx\)0.8 years of exposure and a sky area of \(\approx\)66.7 deg\({}^{2}\)) analyzed in this work cover local galaxies (based on a match with the GLADE catalog; Dalya et al. 2018). Adopting the _local_ FXT rate from Paper I, \(\mathcal{R}_{\rm Local}\)=53.7\({}^{+22.6}_{-15.1}\) deg\({}^{-2}\) yr\({}^{-1}\), we thus expect \(\approx\)2.8\({}^{+5.6}_{-2.7}\) _local_ FXTs in this work, which remains consistent with our non-detection of _local_ FXTs at 3\(\sigma\) confidence for Poisson statistics. On the other hand, the _distant_ FXT rate from Paper I is \(\mathcal{R}_{\rm Distant}\)=28.2\({}^{+9.8}_{-6.9}\) deg\({}^{-2}\) yr\({}^{-1}\), implying \(\approx\)4.5\({}^{+7.3}_{-3.7}\) sources, which is consistent with our new eight FXT candidates.
\begin{table} \begin{tabular}{l l l} \hline \hline FXT & \(\Gamma_{\rm before}\)(\(T\)\(<\)\(T_{\rm break}\)) & \(\Gamma_{\rm after}\)(\(T\)\(\geq\)\(T_{\rm break}\)) \\ \hline (1) & (2) & (3) \\ \hline 15 & \(1.2^{+1.0}_{-1.0}\) & \(2.8^{+1.2}_{-0.1}\) \\ 16 & \(1.6^{+0.4}_{-0.4}\) & \(2.9^{+0.5}_{-0.5}\) \\ 19 & \(\dots\) & \(\dots\) \\ \hline \end{tabular} \end{table} Table 7: Best-fit power-law photon indices before (\(\Gamma_{\rm before}\)) and after (\(\Gamma_{\rm after}\)) the break time \(T_{\rm break}\) for the FXTs whose light curves require a BPL model.
## 4 Host-galaxy features The host galaxy or host environment can provide additional information about the nature and origin of this FXT sample. Five FXTs (16, 19, 20, 21, and 22) lie close to extended optical/NIR sources, which are plausible host galaxies (see Fig. 6). The host galaxies of FXTs 19 and 22 were previously identified, but their properties have not been reported so far. For FXT 18, just one faint source (_source #1_) falls inside the X-ray uncertainty position, but it is not clear whether it is extended. As a first step, in Fig. 12, we explore the nature of the hosts using \(i-K_{s}\) vs. \(g-i\) (_top panel_) and \(i-W1\) vs. \(g-i\) (_bottom panel_) colours, compared to the colours of X-ray sources previously classified as stars both in Paper I (_gray points_) and this work (_black points_; see Sect. 2.7.2), and the expected parameter space for stars (_cyan regions_) with different ages [log(Age/yr)=7.0-10.3], metallicities (from [Fe/H]\(=-\) 3.0-0.5), and attenuations (\(A_{V}\)=0.0-5.0 mag) from theoretical stellar isochrones (MIST; Dotter 2016; Choi et al. 2016). The vast majority of the X-ray flares with stellar counterparts form a tight sequence (see Fig.
12, _cyan region_), with the outliers identified as PNe, YSOs, eruptive variable stars, T Tauri stars, or emission-line stars. Overall, the potential hosts of the FXT candidates appear to reside outside or at the edge (e.g., FXT 16, 18, 19, 21 and 22) of the stellar region, although the limits or large uncertainties (e.g., FXT 20) indicate that the current colour estimates are not the best discriminators by themselves. Thus the spatially resolved nature of the counterparts remains vital to their confirmation as a candidate host galaxy. We further constrain the host properties through spectral energy distribution (SED) model fitting of their existing photometry using BAGPIPES (Bayesian Analysis of Galaxies for Physical Inference and Parameter EStimation; Carnall et al. 2018), which fits broadband photometry and spectra with stellar-population models taking star-formation history and the transmission function of neutral/ionized ISM into account via a MultiISet sampling algorithm (Feroz & Hobson 2008; Feroz et al. 2009). In Appendix C, we list the different conditions considered for the SED fitting. Table 8 provides the best-fit parameters obtained with BAGPIPES for the hosts of FXTs 16, 18, 19, 21 and 2228, while Fig. 13 shows the 16th to 84th percentile range for the posterior spectrum, photometry, and the posterior distributions for five fitted host-galaxy parameters. Also, Figs. 13 and 14 compare the SFR and stellar masses of the FXT hosts to those of several well-known transient classes (such as CC- and Ia SNe, LGRBs, SGRBs and FRBs) and cumulative functions, respectively. Footnote 8: FXT 20 only has faint \(g\), \(r\) and \(z\)-band DECam detections, which are too few and too loosely constrained to compute a SED photometric redshift. Overall, in terms of stellar mass and star-formation rate, FXTs 16, 18, 19, and 22 are located above the galaxy main sequence, while FXT 21 lies slightly below it (see Fig. 13, _solid cyan line_). The hosts of FXTs 16, 19, and 22 have SFRs, stellar masses, and young stellar populations indicative of star-forming galaxies (Moustakas et al. 2013). In terms of SFRs, the FXT hosts broadly lie in the same region populated by SNe type Ib, Ic, II, SL-SNe, and GRBs. The SFR of the host of FXT 22 compares more favorably with LGRB (30%) and SGRB (10%) hosts over SNe (\(\sim\)0%); meanwhile, its large host stellar mass shares little overlap with the LGRB/SGRB (\(\approx\)10/15%) and partially with type-Ia/CC-SN (\(\approx\)40/30%) host populations. For FXTs 16 and 19, the overlapping fractions of LGRB, SGRB, and SLSNe host galaxies with galaxy stellar mass \(\lesssim\)10\({}^{9}\)\(M_{\odot}\) are \(\approx\)20, 15, and 80%, respectively. In the particular case of FXT 18, it has a moderate SFR and low stellar mass. This low stellar mass matches with a very small fraction of LGRB (\(\lesssim\)5%) and SGRB (\(\lesssim\)2%) hosts and some SL-SNe (\(\approx\)30%) hosts. The host of FXT 21 has a moderate SFR and high stellar mass, implying a classification as a quiescent galaxy (Moustakas et al. 2013). Its SFR falls in a region populated by \(\approx\)70 and 50% of LGRBs and SGRBs, respectively, and \(\approx\)40 and 25% of CC-SNe and Ia-SNe hosts, respectively, with SFR\(\gtrsim\)2.0 \(M_{\odot}\) yr\({}^{-1}\). Meanwhile, only \(\lesssim\)10% of SNe and GRBs have similar host galaxy stellar masses \(\gtrsim\)10\({}^{11}\)\(M_{\odot}\). Moreover, these sources fall in the same parameter space occupied by the distant FXTs reported in Paper I (see Fig. 13, _lower panel_). 
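For reference, a schematic BAGPIPES setup in the spirit of its documented interface is shown below; the object ID, photometry loader, filter list, and prior ranges are placeholders and do not reproduce the exact configuration described in Appendix C.

```python
import numpy as np
import bagpipes as pipes

# filter transmission-curve files, one per photometric band (placeholders)
filt_list = ["filters/decam_g.dat", "filters/decam_r.dat", "filters/decam_z.dat"]

def load_phot(ID):
    """Return an Nx2 array of fluxes and 1-sigma errors in microjanskys,
    in the same order as filt_list (dummy values here)."""
    return np.array([[0.5, 0.1], [0.8, 0.1], [1.2, 0.2]])

galaxy = pipes.galaxy("FXT_host", load_phot, spectrum_exists=False,
                      filt_list=filt_list)

# exponentially declining star-formation history, Calzetti dust, free redshift
exp = {"age": (0.1, 13.0), "tau": (0.1, 10.0),
       "massformed": (6.0, 12.0), "metallicity": (0.0, 2.5)}
dust = {"type": "Calzetti", "Av": (0.0, 3.0)}
fit_instructions = {"exponential": exp, "dust": dust, "redshift": (0.0, 2.0)}

fit = pipes.fit(galaxy, fit_instructions, run="fxt_hosts")
fit.fit(verbose=False)   # nested sampling, as in BAGPIPES' standard usage
```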
Thus, we conclude that a majority of distant FXTs appear to be associated with actively star-forming galaxies (\(\gtrsim\)10\({}^{8}\) M\({}_{\odot}\) and \(\gtrsim\)0.5 M\({}_{\odot}\) yr\({}^{-1}\)), while a subset is associated with post-starburst ("green valley") galaxies. Another crucial parameter is the projected physical offset (\(\delta\)R) between the transient's position and the host galaxy center. Figure 14, _right panel_, compares the projected physical offset distribution of FXTs 16, 18, 19, 21 and 22 to several transient classes such as CC-SNe (_cyan_), Ia-SNe (_orange_), SL-SNe (_magenta_), FRBs (_black_), LGRBs (_blue_) and SGRBs (_red_). SGRBs have a physical offset, which is about \(\sim\)4-5 times greater than \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline FXT & RA (deg) & DEC (deg) & Offset & \(z\) & Log(M\({}_{*}\)/\(M_{\odot}\)) & SFR(/\(M_{\odot}\)/yr) & \(A_{V}\) (mag) & Ref. \\ \hline (1) & (2) & (3) & (5) & (6) & (7) & (8) & (9) \\ \hline \multicolumn{8}{c}{Parameters obtained from the literature} \\ \hline 16 & 53.07658 & -27.87332 & 0\(\aas@@fstack{\prime}\)44 & 0.738 & 9.07 & 0.81 & 0.02 & 1,2 \\ \hline \multicolumn{8}{c}{Parameters derived from photometric data using BAGPIPES (Carnall et al. 2018)} \\ \hline 16 & 53.07658 & -27.87332 & 0\(\aas@@fstack{\prime}\)44 & 0.738 & 8.91\(\pm\)0.04 & 2.98\(\aas@@fstack{\prime}\)0.78 & 1.12\(\pm\)0.12 & – \\ 18(S1) & 36.7144 & -1.0826 & 2\(\aas@@fstack{\prime}\)63 & 0.35\(\aas@@fstack{\prime}\)0.15 & 7.87\(\aas@@fstack{\prime}\)0.27 & 0.17\(\aas@@fstack{\prime}\)0.38 & 0.36\(\aas@@fstack{\prime}\)0.76 & – \\ 19(S1) & 356.26812 & -42.64566 & 0\(\aas@@fstack{\prime}\)45 & 1.44\(\pm\)0.08 & 8.67\(\pm\)0.11 & 2.59\(\aas@@fstack{\prime}\)0.68 & 0.16\(\pm\)0.10 & – \\ 21(S1) & 50.47531 & 41.24695 & 0\(\aas@@fstack{\prime}\)46 & 0.85\(\pm\)0.14 & 11.20\(\aas@@fstack{\prime}\)0.24 & 1.66\(\aas@@fstack{\prime}\)0.58 & 1.48\(\aas@@fstack{\prime}\)0.37 & – \\ 22(S1)\({}^{a}\) & 207.23646 & 26.66300 & 4\(\aas@@fstack{\prime}\)6 & 1.5105 & 10.73\(\pm\)0.62 & 35.23\(\aas@@fstack{\prime}\)0.65 & 0.63\(\pm\)0.45 & 3,4 \\ \hline \end{tabular} \end{table} Table 8: Parameters obtained from the literature and by our SED fitting to archival photometric data using the BAGPIPES package (Carnall et al. 2018). _Column 2 and 3:_ Right ascension and declination of the host galaxies. _Column 5:_ Angular offset between the transient and the host galaxy. _Column 5:_ Host galaxy redshift or distance. _Columns 6 and 7:_ Logarithmic values of the stellar mass, and the SFR from the host galaxies. _Column 8:_ Dust attenuation. _Column 9:_ Literature references. the median offset for LGRBs (Bloom et al. 2002) and SL-SNe (Schulze et al. 2021), and about \(\sim\)1.5 times larger than the median offsets of CC- and Type Ia SNe (Prieto et al. 2008) and FRBs (Heintz et al. 2020). In addition, practically no LGRBs and SL-SNe, and only \(\approx\)10% of CC- and type Ia SNe have offsets \(\gtrsim\)10 kpc, while \(\approx\)40% of SGRBs have such offsets. Moreover, \(\approx\)15% of SGRBs have offsets \(\gtrsim\)20 kpc, while essentially no SL-SNe, CC- and type Ia SNe, or LGRBs exhibit such large offsets. The physical offsets of FXTs 16 (\(\approx\)3.3 kpc), 19 (\(\approx\)3.9 kpc), and 21 (\(\approx\)3.6 kpc) overlap with the cumulative distributions of CC- and type Ia SNe, and SGRBs at 1\(\sigma\) confidence level, although only \(\approx\)10-15% of SL-SNe and LGRBs have equal or higher offset values. 
In the case of FXT 18, its offset (\(\approx\)13.2 kpc) resides well inside the offset distribution of SGRBs (\(\approx\)70% with \(\delta\)\(R\)\(<\)13 kpc), while just \(\lesssim\)5% of LGRBs, and CC- and Ia-SNe have equal or higher offset values. Nevertheless, it has a large X-ray positional uncertainty. In contrast, FXT 22 has a physical offset of \(\approx\)40 kpc, which is just compatible with \(\approx\)10% of SGRB hosts with equal or higher offsets. ## 5 Rates We update the FXT event rates determined in Paper I and revisit comparisons with other transients to explore possible interpretations. Specifically, we derive the observed event rates (deg\({}^{-2}\) yr\({}^{-1}\); Sect. 5.1), FXT X-ray luminosity function (Sect. 5.2), and volumetric rates (yr\({}^{-1}\) Gpc\({}^{-3}\); Sect. 5.3). ### Event-rate estimation We compute FXT event rates following the procedure and assumptions outlined in Paper I (their sect 6.1 and eqs. 5-7). We first estimate the rate independently of Paper I to confirm consistency. We found eight FXTs inside \(\approx\)89 Ms of _Chandra_ data from 2014 to 2022, yielding \(\mathcal{R}_{\rm This~{}work}\)=45.6\({}^{+18.2}_{-14.3}\) deg\({}^{-2}\) yr\({}^{-1}\) (for sources with \(F_{\rm X,peak}\)\(\gtrsim\)1\(\times\)10\({}^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\)). This rate is consistent with the rates derived by Yang et al. (2019), \(\mathcal{R}_{\rm Yang+19}\)\(\approx\)59\({}^{+79}_{-38}\) Figure 12: Colour-colour diagrams of the potential host galaxies associated with FXTs 16, 18, 19, 20, 21 and 22 (_coloured stars_), “distant” FXTs from Paper I (_magenta hexagons_), and X-ray sources classified as stars in Paper I (_gray filled circles_) and here according to Criterion 2 in Sect. 2.7.2 (_black filled circles_). The expected parameter space of stars with different ages [log(Age)=7.0-10.3], metallicities ([Fe\({}^{-}\)H]=\(-\)3.0–0.5), and attenuations (\(A_{V}\)=0.0–5.0) taken from the MIST package (Dotter 2016; Choi et al. 2016) are overplotted as _cyan_ regions. \begin{table} \begin{tabular}{l l l l l} \hline \hline FXT & \(z\) & \(L_{\rm X,peak}\) & \(z_{\rm max}\) & \(V_{\rm max}\) \\ & & (erg s\({}^{-1}\)) & & (Gpc\({}^{3}\)) \\ \hline (1) & (2) & (3) & (4) & (5) \\ \hline \multicolumn{5}{c}{Paper I (Quirola-Vásquez et al. 
2022)} \\ \hline 1 & 0.1866\({}^{1,2}\) & 1.9\(\times\)10\({}^{46}\) & 4.30 & 1673 \\ 7 & 1.0\({}^{\dagger}\) & 1.3\(\times\)10\({}^{46}\) & 3.59 & 1354 \\ 8 & 0.61 & 1.1\(\times\)10\({}^{44}\) & 0.52 & 32 \\ 9 & 0.7 & 1.5\(\times\)10\({}^{45}\) & 1.49 & 343 \\ 10 & 1.0\({}^{\dagger}\) & 4.8\(\times\)10\({}^{45}\) & 2.42 & 789 \\ 11 & 0.0216\({}^{1,2}\) & 2.4\(\times\)10\({}^{44}\) & 0.72 & 70 \\ 12 & 1.0\({}^{\dagger}\) & 3.0\(\times\)10\({}^{45}\) & 2.00 & 584 \\ 13 & 1.0\({}^{\dagger}\) & 6.5\(\times\)10\({}^{44}\) & 1.08 & 177 \\ 14 & 2.23\({}^{c}\) & 1.7\(\times\)10\({}^{47}\) & 11.01 & 3751 \\ \hline \multicolumn{5}{c}{This work} \\ \hline 15 & 1.0\({}^{\dagger}\) & 1.0\(\times\)10\({}^{45}\) & 1.30 & 261 \\ 16 & 0.738 & 2.8\(\times\)10\({}^{45}\) & 1.94 & 554 \\ 17 & 1.0\({}^{\dagger}\) & 6.3\(\times\)10\({}^{45}\) & 2.73 & 946 \\ 18 & 0.35 & 1.9\(\times\)10\({}^{47}\) & 11.55 & 3867 \\ 19 & 1.44 & 3.7\(\times\)10\({}^{46}\) & 5.72 & 2245 \\ 20 & 1.0\({}^{\dagger}\) & 8.1\(\times\)10\({}^{46}\) & 7.98 & 2987 \\ 21 & 0.85 & 6.9\(\times\)10\({}^{45}\) & 2.80 & 978 \\ 22 & 1.51 & 1.3\(\times\)10\({}^{46}\) & 3.61 & 1367 \\ \hline \end{tabular} \end{table} Table 9: FXT properties from Paper I and this work used to compute the volumetric density rates (Sect. 5.3) and X-ray luminosity functions (Sect. 5.2). _Column 2:_ adopted redshift for each FXT. _Column 3:_ Peak isotropic X-ray luminosity in cgs units (corrected for Galactic and intrinsic absorption). _Columns 4:_ maximum observable redshift. _Columns 5:_ maximum comoving volume in Gpc\({}^{3}\) units. \({}^{\dagger}\) fiducial redshifts. \({}^{a}\) redshift taken from Eappachen et al. (2022, \(z\)=0.1866). \({}^{b}\) redshift taken from Glennie et al. (2015, 94.9 Mpc). \({}^{c}\) redshift taken from Bauer et al. (2017, \(z\)=2.23). and Paper I (\(\mathcal{R}_{\rm Paper\,\,L_{\rm distant}}\)=28.2\({}^{+9.8}_{-6.9}\) deg\({}^{-2}\) yr\({}^{-1}\)) at the Poisson 1\(\sigma\) confidence level and higher than the rate derived by Glennie et al. (2015, \(\approx\)3.4 deg\({}^{-2}\) yr\({}^{-1}\)). As already mentioned in Paper I, this is not surprising since Glennie et al. (2015) computed the rate for a higher peak flux of \(F_{\rm X,peak}\)\(\gtrsim\)1\(\times\)10\({}^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\). Then, considering all 17 distant FXTs (i.e., the nine distant FXTs from Paper I, also including the ambiguous FXTs 1 and 11 which might be extragalactic sources according to Eappachen et al. 2022, and the eight FXTs from this work) detected by _Chandra_ (ACIS-I/S) instruments between 2000 and 2022, we estimate a total distant FXT event-rate of \(\mathcal{R}_{\rm total,\rm distant}\)=36.9\({}^{+9.7}_{-3.8}\) deg\({}^{-2}\) yr\({}^{-1}\). Since the number of FXTs removed erroneously by our selection criteria is \(\ll\)1 (see Sect. 2.7.6), the estimated event rates are robust results for FXT candidates brighter than \(F_{X}\)\(\gtrsim\)1\(\times\)10\({}^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\) (from the search algorithm developed in Paper I). Finally, we found no new local FXTs among the _Chandra_ observations of nearby galaxies included in this work (i.e., 26.5% of the ObsIDs), allowing a revised estimate (considering ObsIDs from Paper I and this work) of the nearby FXT event rate of \(\mathcal{R}\)=34.3\({}^{+13.7}_{-10.8}\) deg\({}^{-2}\) yr\({}^{-1}\), consistent with that derived in Paper I. Figure 13: Star-forming galaxy main sequence diagram, stellar mass vs. 
SFR, comparing hosts of FXTs and various other transient classes (one per panel) such as SNe type Ia, Ib, Ic, II (Tsvetkov & Bartunov 1993; Galbany et al. 2014; Schulze et al. 2021), super-luminous SNe (SL-SNe; Schulze et al. 2021), LGRBs (including SN 2020bvc; Chang et al. 2015; Li et al. 2016; Izzo et al. 2020; Ho et al. 2020), low-luminosity LGRBs (LL-LGRBs; GRB 980425, GRB 020903, GRB 030329, GRB 050826, GRB 060218, and GRB 171205A; Christensen et al. 2008; Michalowski et al. 2014; Levesque 2014; Kruhler et al. 2017; Wiersema et al. 2007; Wang et al. 2018; Arabsalmani et al. 2019), SGBs (including GW 170817/GRB 170817A; Li et al. 2016; Im et al. 2017; Nugent et al. 2022), TDEs (French et al. 2020), and Paper I FXT candidates (nearby and distant FXTs). _Grayscale contours_ denote the SDSS galaxy distribution from Brinchmann et al. (2004). The _solid cyan lines_ show the best-fit local galaxy main sequence relation from Peng et al. (2010), while the _dashed coloured lines_ denote the upward evolution of the boundary separating star-forming and quiescent galaxies as a function of redshift (at \(z\)=0.0, 0.1, 0.2, 0.3, and 0.4; Moustakas et al. 2013). The event rate as a function of fluence (or peak flux29) behaves as a power-law function as \({\cal R}_{\rm{\infty}}F_{\rm{peak}}^{-\gamma}\), where \(\gamma\) is a positive value. Footnote 29: Similar to Paper I, due to the lack of a standardized method to estimate the \(F_{X,\rm{peak}}\), first few find the shortest time interval during which 25% of the counts are detected, and we compute the count rate during this shortest interval. Next, to convert the peak-count rates to fluxes, we multiply the flux from the time-averaged spectral fits by the ratio between the peak and the time-averaged count rates (i.e., we assume no spectral evolution). Figure 15, _left panel_, shows the cumulative log/\(\uplambda\)-log\(S\) distribution of the entire sample analyzed in this work (i.e., 8 FXTs; _orange line_), FXTs identified in Paper I (i.e., 14 FXTs; _cyan line_), and finally combining both samples (i.e., 22 FXTs; _black line_) which appears to follow \(\gamma\)\(\approx\)0.5 (_red regionline_). We also plot the extrapolation of the best-fit slope, \(\gamma\)=1.0, based on the estimates of FXTs at bright fluxes (\(\gtrsim\)10\({}^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\)) from Arefiev et al. (2003).30 The brightest sources in the total sample of 22 FXTs appear to be consistent with the Arefiev et al. (2003) Figure 14: Comparison of the host-galaxy properties of FXTs 16, 19, 21 and 22 (_colour vertical dashed lines_) from Table 8 with the cumulative distributions of galaxy stellar mass (_left panel_), star-formation rate (_center panel_), and projected physical offset (_right panel_) for LGRBs (_blue lines_; Li et al. 2016; Blanchard et al. 2016), SGRBs (_red lines_; Fong et al. 2010, 2012, 2014; Margutti et al. 2012; Sakamoto et al. 2013; Berger et al. 2013a; Fong et al. 2022; Nugent et al. 2022), FRBs (_black lines_; Heintz et al. 2020), CC-SNe and Ia-SNe (_cyan and orange lines_; Tsvetkov & Bartunov 1993; Prieto et al. 2008; Galbany et al. 2014), and SLSNe (_magenta lines_; Schulze et al. 2021). Figure 15: Observed cumulative log/\(\uplambda\)-log\(S\) and log/\(\uplambda\)-log\(F_{\rm{peak}}\) distributions. _Left panel_: log/\(\uplambda\)-log\(S\) distribution of the sample of extragalactic FXTs analyzed in this work (_orange line_), from Paper I (_cyan line_), and combined (_black line_), as a function of fluence (in cgs units). 
Also shown are two PL models, \(N\)(\(>\)_Fluence_)\(\propto\)\(S^{-\gamma}\), with slopes \(\gamma\)=0.52 (_red line_) and 1.0 (_blue dashed line_). The \(\gamma\)=1 line represents the best fit and 1\(\sigma\) error of Arefiev et al. (2003) based on bright FXTs (including Galactic flares). The brightest sources in our sample appear to be consistent with this bright-end extrapolation, although our fainter sources fall up to \(\sim\)1 dex below, implying a break. For comparison with Arefiev et al. (2003), we convert all FXT fluences to the 2–10 keV band from their best fits. _Right panel_: log/\(\uplambda\)-log\(F_{\rm{peak}}\) distribution of the total sample of extragalactic FXTs analyzed in Paper I and this work (_black line_) and only distant FXTs (_orange line_). The _magenta dashed line_ shows objects uniformly distributed in Euclidean space (\(\propto\)\(F_{\rm{peak}}^{-3/2}\)) for comparison. The _color regions_ represent the 1-\(\sigma\) confidence interval. extrapolation at 1-\(\sigma\) confidence. In contrast, the fainter sources fall well below it by \(\sim\)1 dex (following a power-law with an exponential index of \(\gamma\)=0.52), indicating a break around a fluence of \(\sim\)\(10^{-8}\) erg cm\({}^{-2}\) to our best-fit slope. A similar result was found in Paper I. Furthermore, we define a BPL model, which fits the results obtained (see Fig. 15), for the cumulative log\(\mathcal{N}\)-log\(\mathcal{S}\) distribution with a break fluence around \(\sim\)\(1\times\)\(10^{-8}\) erg cm\({}^{-2}\) and two power-law indexes of \(\gamma_{1}\)= \(-\) 0.52 and \(\gamma_{2}\)= \(-\) 1.0 for the faint and bright ends, respectively. Footnote 1: The \(\gamma\)-log\(\mathcal{S}\) distribution is also shown in Fig. 16. Finally, Fig. 15, _right panel_, represents the cumulative log\(\mathcal{N}\)-log\(F_{\rm peak}\) curves considering the whole sample of 22 FXTs (_black line_) and just the 17 distant FXTs for Paper I and this work (_orange line_). The local FXTs identified in Paper I contribute mostly to low fluxes (compare the _orange_ and _black_ lines). The log\(\mathcal{N}\)-log\(F_{\rm peak}\) slope appears to be significantly shallower at low \(F_{\rm peak}\) than the Euclidean prediction (i.e., \(\propto\)\(S^{-3/2}\), which is expected for astrophysical objects uniformly distributed in a Euclidean space; _dashed magenta line_). A combination of four effects could explain this deviation: \(i\)) near the sensitivity threshold of the detector, the number of FXTs depends on the detection efficiency, which affects the log\(\mathcal{N}\)-log\(F_{\rm peak}\) plot; \(ii\)) due to the flux being inversely proportional to the square of the luminosity distance, which will differ from the Euclidean distance as \(z\) approaches unity (this implies that the FXTs should be cosmological); \(iii\)) the sample of FXTs likely has a mix of origins, such that the cosmic event rate density is not constant with redshift; and \(iv\)) the sample is dominated by low-number statistical fluctuations, particularly at the bright end, due to the _grasp_ (area \(\times\) sensitivity) of _Chandra_. New X-ray missions which are focusing on scanning the sky, such as _eROSITA_ and _Einstein Probe_, will increase the number of FXTs and improve our statistics. ### Luminosity function Past works have constructed X-ray luminosity functions (XLFs) for GRBs (Sun et al. 2015), SBOs (Sun et al. 2022), and TDEs (Sazonov et al. 2021). 
We construct here the XLF of FXTs considering distant sources from Paper I and this work, using the classical \(V_{\rm max}\) method from Schmidt (1968). We adopt the redshifts and peak X-ray luminosities shown in Table 9 (_columns 2 & 3_, respectively) and Fig. 16 (_left panel_), which also plots the limiting luminosity corresponding to a _Chandra_ detection threshold of \(F_{\rm X-peak}^{\rm lim}\)\(\sim\)\(1\times\)\(10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\) (_green solid line_; representing the uncertainty detection limit of our search algorithm) and 1\(\times\)\(10^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) (_green dashed line_; representing an approximate instrument threshold). All the FXTs except FXT 8 lie above \(F_{\rm X-peak}^{\rm lim}\)\(\approx\)\(1\times\)\(10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\). The XLF, \(\Phi(L)\), is the sum of the individual contributions by each source (\(j\)) in the luminosity range from log(\(L\)) to log(\(L\)) + \(d\) log(\(L\)) (with total \(\Delta N_{L}\) sources), i.e., \[\Phi(L)\;d\log(L)=\sum_{j=1}^{\Delta N_{L}}\frac{4\pi\;dN(L)_{j}}{\{\sum_{i= 1}^{N_{obs}}T_{i}\Omega_{i}V_{\rm max,i}\}_{j}}, \tag{3}\] where \(i\) runs over the \(i\)th observation, \(N_{obs}\) is the total number of observations, \(T_{i}\) and \(\Omega_{i}\) are the exposure time and FoV per observation, respectively, \(V_{\rm max,i}\) is the maximum observable volume, and \(\Delta N_{L}\) is the total detectable number of sources. \(V_{\rm max}\) for a given FXT in the sample depends on its intrinsic peak luminosity in the 0.3-10.0 keV energy band, which are both given in Table 9, assuming a flux limit of \(F_{\rm X,peak}^{\rm lim}\)\(\approx\)\(10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\) from our detection algorithm. To calculate the FXT XLF, we sum the derived \(V_{\rm max}^{-1}\) values of individual FXTs in five equal 0.5 dex interval luminosity bins from log(\(L_{\rm X,peak}\)) between 44.0 and 47.5. The uncertainty within a given bin depends on the Poisson uncertainty of the number of FXTs per bin and \(V_{\rm max}\) (computed as \(\sqrt{\sum(V_{\rm max})^{-2}}\), where the summation is done over the objects within that bin). We estimate three different cases: \(I\)) considering just eight FXTs with _known_ redshifts, \(II\)) considering 17 distant FXTs with _known_ and _fiducial_ (\(z\)=1.0) redshifts, and \(III\)) considering 17 distant FXTs with _known_ and _fiducial_ (\(z\)=0.5) redshifts. The computed FXT XLFs are shown in Fig. 16 (_right panel_). In _Case I (red squares_), the largest uncertainties are associated with the lowest luminosity bins where there is just one FXT per luminosity bin. For _Cases II_ and _III_ (_blue_ and _green squares_, respectively), the uncertainties are somewhat smaller because some luminosity bins have more than one source. The FXT XLFs all appear to decline with increasing X-ray luminosity and are fit with power-law models as: \[\frac{dN}{d\log(L_{\rm X,peak})dVdt}=\beta\times\left(\frac{L_{\rm X,peak}}{10^ {44}\;{\rm erg\;s^{-1}}}\right)^{\alpha}. \tag{4}\] The best-fit models are plotted in Fig. 16 as _red, blue, and green lines_ for _Cases I_, _II_ and _III_, respectively, with the results summarized in Table 10. Assuming fiducial redshifts of \(z\)=1.0 (\(z\)=0.5) naturally leads to shallower (steeper) XLF slopes. 
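A bare-bones version of this \(1/V_{\rm max}\) construction is sketched below; it uses only the eight FXTs of this work (Table 9), a Planck 2018 cosmology, and omits the per-observation \(T_{i}\Omega_{i}\) weighting of Eq. 3, so the resulting values are schematic rather than those plotted in Fig. 16.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18 as cosmo

def vmax_gpc3(z_max):
    """Comoving volume (Gpc^3) out to the maximum detectable redshift."""
    return cosmo.comoving_volume(z_max).to(u.Gpc ** 3).value

# peak luminosities (erg s^-1) and z_max values as listed in Table 9 (this work)
log_lx = np.log10([1.0e45, 2.8e45, 6.3e45, 1.9e47, 3.7e46, 8.1e46, 6.9e45, 1.3e46])
z_max = np.array([1.30, 1.94, 2.73, 11.55, 5.72, 7.98, 2.80, 3.61])
vmax = np.array([vmax_gpc3(z) for z in z_max])

# 1/Vmax sums in 0.5 dex bins of log L_X,peak; the exposure/FoV weighting
# (T_i * Omega_i) of Eq. 3 is left out, so phi is only indicative
edges = np.arange(44.0, 48.0, 0.5)
ibin = np.digitize(log_lx, edges) - 1
phi = np.array([np.sum(1.0 / vmax[ibin == i]) for i in range(len(edges) - 1)])
sigma = np.array([np.sqrt(np.sum(1.0 / vmax[ibin == i] ** 2))
                  for i in range(len(edges) - 1)])
```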
Under the implicit assumption of no evolution, we can estimate the average FXT volumetric rate in the \(z\)\(\approx\)0-2.2 (see Table 9) Universe by integrating the XLF over the entire luminosity range used here (\(L_{\rm X,peak}\)\(>\)\(10^{44}\) erg s\({}^{-1}\)). The results are tabulated in Table 10. For _Case I_, the average event rate density is \(\approx\)\(1.2\times\)\(10^{3}\) Gpc\({}^{-3}\) yr\({}^{-1}\). Due to the total galaxy volume density of \(\sim\)\(2\times\)\(10^{7}\) Gpc\({}^{-3}\) (Bell et al. 2003), this equates to a rate of \(R\)\(\sim\)\(6\times\)\(10^{-5}\) FXTs yr\({}^{-1}\) per galaxy. On the other hand, considering the FXTs with known+fiducial redshifts of \(z\)=1.0 (_Case II_) and \(z\)=0.5 (_Case III_), the average FXT volumetric rates would be \(\approx\)\(1.3\times\)\(10^{4}\) and \(\approx\)\(4.4\times\)\(10^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\), respectively, implying rates of \(R\)\(\sim\)\(6.5\times\)\(10^{-4}\) and \(2.2\times\)\(10^{-3}\) FXTs yr\({}^{-1}\) per galaxy, respectively. The three derived FXT volumetric rates (which could be interpreted as a mean value for _Cases I_ or _II/III_, respectively) remain consistent with previous results computed in Paper I (\(\approx\)\(5\times\)\(10^{5}\) Gpc\({}^{-3}\) yr\({}^{-1}\)), Xue et al. (2019 (\(\approx\)\(1.3\times\)\(10^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\)), and Bauer et al. 2017 (\(\gtrsim\)\(10^{2}\) Gpc\({}^{-3}\) yr\({}^{-1}\) at \(z\)\(\lesssim\)1). Notably, these values differ by orders of magnitude from the rates computed previously for SMBH TDEs (\(\approx\)\(210\) Gpc\({}^{-3}\) yr\({}^{-1}\); Donley et al. 2002; Sazonov et al. 2021) and marginally with SBOs (\(\approx\)\(4.6\times\)\(10^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\); Sun et al. 2022) identified in eROSITA and _XMM-Newton_ archival data, respectively. This further helps to exclude the SBOs (considering _Case I_) and SMBH TDE scenarios, although other transients may remain viable due to the large beaming correction uncertainties. ### Volumetric Density rate estimation In Section 5.2, we estimated the average FXT volumetric rate in the redshift range \(z\)\(\approx\)0-2.2 from the XLF (see Table 10), assuming zero evolution. Now, we compare the volumetric density rate, in units of yr\({}^{-1}\) Gpc\({}^{-3}\), with other known transient classes such as GRBs, SNe/SBOs, or TDEs. First, because only eight of 17 distant FXTs have redshift estimates, we correct the FXT volumetric rate of _Case I_ by the inverse of this fraction (i.e., multiply by 17/8) to account for the fact that we do not include all the sources. We implicitly assume here that the underlying redshift distribution of the sources without redshifts is the same as those with. Without this correction, the luminosity functions are lower limits rather than best estimates. Meanwhile, for _Cases II and III_, a correction factor is not necessary because both cases adopted fiducial redshifts (\(z_{\rm fiducial}\)=1 and 0.5, respectively) for all FXTs that lacked estimates. Considering this correction, the volumetric density rate for _Cases I, II and III_ ranges between \(\sim\)1.9\(\times\)10\({}^{3}\)\(-\)4.6\(\times\)10\({}^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\) at 1\(\sigma\) confidence. The derived density rate as a function of redshift is shown in Fig. 17 (_gray filled region_). 
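As a cross-check of the quoted averages, the integral of Eq. 4 over \(\log L_{\rm X,peak}\)=44-47.5 can be evaluated directly; with the Case I parameters of Table 10 it roughly reproduces the average rate quoted above.

```python
from scipy.integrate import quad

def volumetric_rate(beta, alpha, logl_min=44.0, logl_max=47.5):
    """Integrate Eq. 4, dN/(dlogL dV dt) = beta * (L/1e44)^alpha, over log L."""
    integrand = lambda logl: beta * 10.0 ** (alpha * (logl - 44.0))
    rho, _ = quad(integrand, logl_min, logl_max)
    return rho                                   # Gpc^-3 yr^-1

# Case I best-fit values from Table 10: alpha = -0.26, beta = 7.9e2
print(volumetric_rate(7.9e2, -0.26))             # ~1.2e3 Gpc^-3 yr^-1
```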
Our result is consistent with the rates estimated in Paper I (\(\approx\)4.8\(\times\)10\({}^{3}\) Gpc\({}^{-3}\) yr\({}^{-1}\) at \(z_{\rm max}\)=2.1; _cyan circle_) or CDF-S-XT2-like sources (_purple square_; Xue et al. 2019). Each panel in Fig. 17 represents a comparison with transients related to massive stars (LGRBs, LL-LGRBs, and CC-SNe; _left panel_), compact binary mergers (SGRBs; _middle panel_), and tidal disruption events (SMBH and IMBH TDEs; _right panel_). As CC-SNe progenitors are massive, short-lived stars, the event rates should reflect ongoing star formation at different cosmological epochs (Madau & Dickinson 2014, and references therein). Thus to build the cosmic density rate shown in Fig. 17 (_left panel_), we use the star-formation history derived by Madau & Dickinson (2014), weighted by the number of stars that explode as SNe per unit mass \(k_{\rm CC-SNe}\)=0.0068 \(M_{\odot}^{-1}\) (Madau & Dickinson 2014), adopting a Salpeter initial mass function (_orange-dashed lines_). One caveat for CC-SNe, however, is that we do not expect strong X-ray emission from all types of SBO CC-SNe. Thus, we analyze the expected rates for different sub-samples of CC-SNe. The local event rate density of all CC-SNe types is \(\sim\)10\({}^{5}\) Gpc\({}^{-3}\) yr\({}^{-1}\) (see Fig. 17, _left panel_; Smartt et al. 2009; Li et al. 2011; Madau & Dickinson 2014). Around \(\sim\)59% of CC-SNe are Type II-P SNe from red supergiant star (RSG) progenitors. This means that the local rate of SBO from RSGs (which peak in the UV at a few eV; Alp & Larsson 2020; Sun et al. 2022) should be \(\sim\)6\(\times\)10\({}^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\), which is slightly higher than our result for FXTs (\(\sim\)2\(\times\)10\({}^{3}\)\(-\)4.5\(\times\)10\({}^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\)). Meanwhile, around \(\sim\)1-3% of CC-SNe are Type II SNe from blue supergiant star progenitors (BSGs; Arnett et al. 1989; Pastorello et al. 2005), and \(\sim\)30% are Type Ib/c SNe from Wolf-Rayet star (WR) progenitors. SBOs from BSGs and WRs are expected to peak in the soft X-rays, 0.3 and 3 keV, respectively (Matzner & McKee 1999; Nakar & Sari 2010; Sapir et al. 2013). Thus, the local rates of SBOs related to BSGs and WRs are \(\sim\)2\(\times\)10\({}^{3}\) and \(\sim\)6\(\times\)10\({}^{2}\) Gpc\({}^{-3}\) yr\({}^{-1}\), respectively. The derived event rate density of FXTs falls especially close to the expected rate of BSGs. For LGRBs, we adopt a cosmic evolution rate following Sun et al. (2015), which we normalize to the local universe value to characterize the cosmic density rate, as shown in Fig. 17 (_left panel_). 
LGRBs have an isotropic luminosity of \(\sim\)10\({}^{49}\)\(-\)10\({}^{54}\) erg s\({}^{-1}\), and an observed local density rate above \begin{table} \begin{tabular}{l l l l} \hline \hline Case \# & \(\alpha\) & \(\beta\) & \(\rho_{0}\)(\(L_{\rm X,peak}\)\(>\)10\({}^{44}\)erg s\({}^{-1}\)) \\ & & (Gpc\({}^{-3}\) yr\({}^{-1}\) dex\({}^{-1}\)) & (Gpc\({}^{-3}\) yr\({}^{-1}\)) \\ \hline (1) & (2) & (3) & (4) \\ \hline _Case I_ (known redshifts) & \(-\)0.26\(\pm\)0.13 & (7.90\(\pm\)1.49)\(\times\)10\({}^{2}\) & (1.24\(\pm\)0.35)\(\times\)10\({}^{3}\) \\ _Case II_ (\(z_{\rm fiducial}\)=1) & \(-\)0.57\(\pm\)0.11 & (1.74\(\pm\)0.29)\(\times\)10\({}^{4}\) & (1.32\(\pm\)0.11)\(\times\)10\({}^{4}\) \\ _Case III_ (\(z_{\rm fiducial}\)=0.5) & \(-\)1.13\(\pm\)0.27 & (1.14\(\pm\)0.45)\(\times\)10\({}^{5}\) & (4.38\(\pm\)0.21)\(\times\)10\({}^{4}\) \\ \hline \end{tabular} \end{table} Table 10: Results of XLF fitting. _Column 1: Case#_ considered. _Columns 2 and 3:_ best-fitting parameters of the XLF from Eq. 4. _Column 4:_ FXT volumetric rate from integrating the XLF. Figure 16: _Left panel:_ The peak X-ray luminosity of FXT candidates from Paper I and this work (with known and fiducial redshifts; see Table 9). The green _solid and dashed_ lines indicate peak flux limits of 10\({}^{-13}\) (set by our algorithm) and 10\({}^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) (approximate _Chandra_ on-axis detector limit), respectively. _Right panel:_ FXT X-ray (0.3–10 keV) luminosity function (XLF) of the total sample from Paper I and this work. The _red squares_ represent the XLF using the seven FXTs with known (photometric or spectroscopic) redshifts, while the _blue_ and _green squares_ show XLFs using all 17 FXTs with known+fiducial redshifts, adopting \(z=1.0\) and 0.5 for unknown objects, respectively. The _solid lines_ show best-fit power-law models (see Table 10 for values). \(10^{30}\) erg s\({}^{-1}\) of \(\rho_{\rm{0,LGRBs}}\)\(\sim\)0.5 - 1.0 Gpc\({}^{-3}\) yr\({}^{-1}\) (Zhang 2018). We additionally consider a jet beaming correction factor of \(f_{b}^{-1}\)\(\sim\)500 (_blue-solid line_), which corresponds to a mean jet opening angle \(\theta_{\rm{LGRBs}}^{l}\)\(\sim\)3.6\({}^{\circ}\) (Frail et al. 2001). However, the beaming factor for LGRBs carries some uncertainties and various authors claim lower correction factors of \(\approx\)50-100 (_blue-dotted and dashed lines_, respectively; Piran 2004; Guetta et al. 2005). At \(z\)\(\leq\)0.6, the FXT volumetric rate exceeds the nominal LGRB rate by up to a factor of \(\sim\)7 (for the most favorable \(f_{b}^{-1}\)\(\sim\)500), while they appear consistent beyond \(z\)\(\geq\)0.6. The FXT rate does not appear consistent with LGRB rates that adopt lower jet beaming correction factors (e.g., \(f_{b}^{-1}\)\(\sim\)50 or 100). LL-LGRBs have relatively low isotropic luminosities of \(\sim\)5\(\times\)10\({}^{46}\)-10\({}^{49}\) erg s\({}^{-1}\), limiting our ability to see them out to large distances, and hence comprise only a small fraction of observed LGRBs. As such, they have a much higher local density rate of \(\rho_{\rm{0,LL-LGRBs}}\)\(\sim\)100-200 Gpc\({}^{-3}\) yr\({}^{-1}\) (Zhang 2018), and generally do not show strong evidence of collimation, implying a much wider jet opening angle, or even that the emission is essentially isotropic, i.e., \(f_{b}^{-1}\)\(\sim\)1 (Virgili et al. 2009; Pescalli et al. 2015). Normalizing the adopted cosmic evolution rate from Sun et al. 
(2015) to this value, we show the cosmic LL-GRB density rate as the _green lines_ in Fig. 17 (_left panel_). Following Liang et al. (2007a), we also consider a LL-LGRB jet beaming correction factor of \(f_{b}^{-1}\)\(\sim\)14 denoted by the _green-solid line_ and the isotropic case (i.e., \(f_{b}^{-1}\)\(\sim\)1) denoted by the _green-dashed line_. The derived FXT volumetric rate is consistent with the more strongly beamed LL-LGRB rate, while it is slightly higher than lower beamed LL-LGRBs, especially at \(z\)\(\gtrsim\)1. For SGRBs, the cosmic density rate is then shown in Fig. 17 (_middle panel_), considering Gaussian (_red lines_), power-law (_gray lines_) and log-normal (_purple lines_) merger delay evolution models.31, adopting an observed local density rate above 10\({}^{50}\) erg s\({}^{-1}\) of \(\rho_{\rm{0,SGRBs}}\)\(\sim\)0.5-3.0 Gpc\({}^{-3}\) yr\({}^{-1}\) (Wanderman & Piran 2015; Sun et al. 2015). Additionally, it is known that at least some short GRBs are collimated (Burrows et al. 2006; Soderberg et al. 2006; De Pasquale et al. 2010), with a mean jet opening angle \(\theta_{\rm{SGRBs}}^{l}\)\(\gtrsim\)10\({}^{\circ}\) (e.g., Berger 2014; Fong et al. 2022; Rouco Escorial et al. 2022), translating to a mean value of \(f_{b}^{-1}\)\(\sim\)25 (_dotted lines_; Fong et al. 2015). Nevertheless, the opening angle is not well-constrained, and other authors have suggested a wider range of \(f_{b}^{-1}\)\(\approx\)110-30 (_solid and dashed lines_, respectively; Berger 2014; Fong et al. 2022; Rouco Escorial et al. 2022). From different delay models, we have distinct outcomes. For instance, from the delay merger Gaussian and log-normal models, the FXT volumetric rates between cosmic epochs \(z\)\(\approx\)0.8-2.0 appear slightly higher than the most extreme beaming correction case (\(f_{b}^{-1}\)\(\sim\)110). In contrast, under the power-law model, the FXT volumetric rates are higher than even the most extreme beaming correction case. Footnote 31: The merger delay is defined as the time elapsed between the formation of the binary star system and the merger, which is dominated by the timescale for gravitational wave losses during the compact binary phase (Anand et al. 2018). Figure 17: Volumetric density rate as a function of redshift comparing FXTs (_gray filled region_, assuming no evolution with redshift and 1-\(\sigma\) confidence; see text for details) and other transients. _Left panel:_ comparison to massive-star related sources such as CCSNe (_dashed orange line_; \(k_{\rm{CC-SNe}}\)=0.0068 \(M_{0}^{-1}\) times the cosmic SFR density from Madau & Dickinson 2014), LGRBs (_blue-solid, dashed and dotted lines_ show evolution normalized at \(z\)=0 to \(\rho_{\rm{0,LGRBs}}\)=0.75 yr\({}^{-1}\) Gpc\({}^{-3}\) for \(f_{b}^{-1}\)=500, 100, and 50, respectively; Sun et al. 2015; Wanderman & Piran 2010), LL-LGRBs (_green-solid and dashed lines_ denote evolution normalized at \(z\)=0 to \(\rho_{\rm{0,LL-LGRBs}}\)=150 yr\({}^{-1}\) Gpc\({}^{-3}\) for \(f_{b}^{-1}\)=14 and 1, respectively, where \(f_{b}^{-1}\) is the jet beaming correction factor; Liang et al. 2007a; Zhang 2018). _Middle panel:_ comparison to compact-object binary systems such as SGRBs considering Gaussian (Gau.; _red lines_), Power-law (PL; _gray lines_), and Log-Normal (LN; _purple lines_) merger delay models (_solid, dashed and dotted lines_ denote evolution normalized at \(z\)=0 to \(\rho_{\rm{0,SGRBs}}\)=1.75 yr\({}^{-1}\) Gpc\({}^{-3}\) for \(f_{b}^{-1}\)=110, 30 and 25, respectively; Sun et al. 
2015; Wanderman & Piran 2015). _Right panel:_ comparison to SMBH-TDEs (_magenta line_ indicates evolution normalized at \(z\)=0 to \(\rho_{\rm{0,SMBH-TDE}}\)=1.1\(\times\)10\({}^{5}\) yr\({}^{-1}\) Gpc\({}^{-3}\) for luminosities \(\gtrsim\)10\({}^{44}\) erg s\({}^{-1}\), assumed to be emitted isotropically; Sun et al. 2015) and IMBH-TDEs (_cyanline_ shows evolution normalized at \(z\)=0 to \(\rho_{\rm{0,IMBH}}\)=-290 yr\({}^{-1}\) Gpc\({}^{-3}\) emitted isotropically; Bloom et al. 2011; Lu & Kumar 2018; Malysali et al. 2019; Tanikawa et al. 2022). We also show the estimated rates from Paper I (_cyan circle_) and Xue et al. (2019) for CDF-XT2-like objects (_purple square_). Finally, in Fig. 17, (_right panel_), we consider both SMBH and IMBH TDEs, adopting the analytical cosmic density rate evolution of Sun et al. (2015). For SMBH TDEs, the model is normalized to the local value of \(\rho_{\rm 0,SMBH-TDE}\)\(\sim\)(0.7-1.4)\(\times\)10\({}^{5}\) Gpc\({}^{-3}\) yr\({}^{-1}\)(_magenta line_; Sun et al. 2015). Moreover, we assume that IMBHs have grown in a similar way to SMBHs, and adopt same cosmic evolution, with a local density normalization of \(\rho_{\rm 0,IMBH-TDE}\)\(\sim\)75-500 Gpc\({}^{-3}\) yr\({}^{-1}\)(_cyan-dotted line_; Malyali et al. 2019; Tanikawa et al. 2022). Like SMBH TDEs (e.g., _Swift_ 1644+57; Bloom et al. 2011; Levan et al. 2011) and IMBH-WD TDEs could be capable of launching luminous jets which can be detected by current satellites, until reaching X-ray luminosities as large as \(\sim\)10\({}^{48}\) erg s\({}^{-1}\)(MacLeod et al. 2014; MacLeod et al. 2016). The FXT volumetric rate is generally lower than the rate of SMBH TDEs at \(z\)\(\lesssim\)0.8, while it matches with them at \(z\)\(\gtrsim\)0.8. On the other hand, the FXT rate is much higher than our estimate of IMBHs, albeit with many untested assumptions (Malyali et al. 2019; Tanikawa et al. 2022). Of course, inconsistencies in several other parameters rule out an SMBH-TDE channel for several FXTs. In Sect. 6, we use the volumetric rate of FXTs to understand the most likely progenitors of the different sources. ## 6 Possible interpretations To assess the nature of our final sample of FXTs, we compare them with other well-known transients. For FXTs 16, 18, 19, 21, and 22, we adopt their best-fit photometric or spectroscopic redshifts in Table 8. For FXTs 15, and 17, which lack clear host associations in optical and NIR images, and 20, which only has three detections in DECam images and poor photometric redshift constraints, we assume a fiducial distance of \(z\)=1, consistent with the average known redshift distribution. From the best-fit PL spectral model (see Table 6), we compute the peak X-ray flux (corrected for Galactic and intrinsic absorption; \(F_{\rm X,peak}\)), the associated intrinsic peak X-ray luminosity (\(L_{\rm X,peak}\)), and the Eddington mass (defined as \(M_{\rm Edd}\)=7.7\(\times\)10\({}^{-39}\)\(L_{\rm X,peak}\) in solar mass units); values are reported in Table 5. In the 0.3-10.0 keV band. FXTs 16, 18, 19, 21 and 22 mean peak X-ray luminosities of \(L_{\rm X,peak}\)\(\approx\)2.8\(\times\)10\({}^{45}\), 1.9\(\times\)10\({}^{47}\), 3.7\(\times\)10\({}^{46}\), 6.9\(\times\)10\({}^{45}\) and 1.3\(\times\)10\({}^{46}\) erg s\({}^{-1}\), and have isotropic energies of \(E_{\rm X}^{\rm iso}\)\(\approx\)3.6\(\times\)10\({}^{48}\), 1.7\(\times\)10\({}^{50}\), 1.3\(\times\)10\({}^{49}\), 1.8\(\times\)10\({}^{48}\), and 1.0\(\times\)10\({}^{49}\) erg, respectively (see Table 5.1). 
Likewise, adopting \(z\)=1, FXTs 15, 17, and 20 would have peak X-ray luminosities of \(L_{\rm X,peak}\)\(\approx\)1.0\(\times\)10\({}^{45}\), 6.3\(\times\)10\({}^{45}\) and 8.1\(\times\)10\({}^{46}\) erg s\({}^{-1}\) and isotropic energies of \(E_{\rm X}^{\rm iso}\)\(\approx\)7.5\(\times\)10\({}^{47}\), 3.4\(\times\)10\({}^{48}\), and 3.2\(\times\)10\({}^{48}\), respectively (see Table 5.1). This luminosity range automatically excludes lower luminosity (\(L_{\rm X,peak}\)\(\lesssim\)10\({}^{42}\) erg s\({}^{-1}\)) X-ray flaring transients such as X-ray binaries (including ultra-luminous X-ray sources), soft gamma repeaters, quasi-periodic eruptions, and anomalous X-ray pulsars (e.g., Colbert & Mushotzky 1999; Kaaret et al. 2006; Woods & Thompson 2006; Miniutti et al. 2019). Below, in Sects. 6.1-6.3 we investigate the SBO, GRB, and TDE scenarios as origins of this FXT sample. In Sect. 6.4 we compare these FXTs with those identified in Paper I. ### Supernova Shock Breakouts (SBOs) One intriguing explanation for FXTs is related to the SBO from a CC-SNe. An initial flash of thermal UV or soft X-ray radiation is expected when a CC-SNe shock wave emerges from the stellar surface of the progenitor (Falk & Arnett 1977; Klein & Chevalier 1978; Matzner & McKee 1999; Schawinski et al. 2008; Ganot et al. 2016; Waxman & Katz 2017). The physical features of an SBO depend mainly on the density structure of the progenitor star and the explosion energy driving the shock wave (Chevalier & Irwin 2011; Gezari et al. 2015), which means that the temperature and duration of SBOs might cover a range of \(\sim\)10\({}^{5}\)-5\(\times\)10\({}^{6}\) K and \(\approx\)100-5,000 seconds, respectively (Ensman & Burrows 1992; Tominaga et al. 2011), leading to a bolometric peak luminosity of order \(\sim\)10\({}^{44}\)-10\({}^{45}\) erg s\({}^{-1}\). In the 0.5-7 keV band, we would expect to observe a soft thermal spectrum, potential spectral softening with time, and peak luminosities at least 1 dex lower than the bolometric values. Until now, just one SBO has been detected conclusively in multi-wavelength observations, XRT 080109/SN 2008D, serendipitously discovered during _Swift_-XRT observations of SN 2007uy in NGC 2770 (Soderberg et al. 2008; Modjaz et al. 2009; Waxman & Katz 2017). Recently, a dozen further SBO candidates were reported among _XMM-Newton_ archival data by Alp & Larsson (2020) and Novara et al. (2020). This subsample of FXTs has luminosities that fall within the ranges predicted by models and observations of SBOs (\(L_{\rm X,peak}^{\rm SBOs}\)\(\sim\)\(\times\)10\({}^{42}\)-10\({}^{44}\) erg s\({}^{-1}\); Soderberg et al. 2008; Modjaz et al. 2009; Waxman & Katz 2017; Alp & Larsson 2020; Novara et al. 2020); however, at least four of these FXTs are associated with energy releases that are two orders of magnitude higher than SBO model predictions (e.g., Waxman & Katz 2017) or observations (e.g., XRT 080109/SN 2008D had \(E_{\rm X}\)\(\sim\)2\(\times\)10\({}^{46}\) erg; Soderberg et al. 2008). Based on the energetics (the luminosity peaks are higher than those expected for SBOs, \(L_{\rm X,peak}^{\rm SBOs}\)\(\sim\)10\({}^{42}\) - 10\({}^{44}\) erg s\({}^{-1}\)) and light curves (which are much brighter than the SBO XRT 080109/SN 2008D, see Fig 18), we rule out an SBO origin for FXTs 16, 18, 19, 21, and 22. Due to the natural relation between SBOs and both CC-SNe and super-luminous SNe (SL-SNe), we expect them to share similar host-galaxy properties. 
The host properties of FXTs 16, 18, and 19 fall in regions populated by SNe type II and SL-SNe hosts, but lie at the edges of SNe type Ib and Ic host distributions (see Fig. 13). FXTs 21 and 22 reside at the edges of the SNe type Ib, Ic, and II, and outside the SL-SNe host distributions (see Fig. 13). Thus, the FXT hosts do not show a robust link with SNe host galaxies, reinforcing the previous results from the energetics. Similarly, for the FXTs which lack hosts (15,17) or redshift (20), the SBO scenario is ruled out at the fiducial \(z\)=1 values. The sources would need to lie at redshifts of \(z\)\(\lesssim\)0.37, 0.10, and 0.05, respectively, to comply with the expected energetic limits. At these redshifts, the apparent \(r\)-band magnitudes or limits would imply hosts with \(M_{r}\)\(\gtrsim\) - 18.5, -15.9, and -13.0, respectively; only the host of FXT 15 lies at the faint end of regular galaxies, while the rest fall in the broad range of dwarf galaxies. FXTs 15 and 20 have BPL light curves with break times at \(\approx\)3.7 and 0.1 ks, respectively, followed by PL decays, \(F_{\rm X}\)\(\propto\)\(r^{-2.7}\), that are accompanied by possible softening (see Table 7) and photon indices (\(\Gamma\)=2.1 and 3.0, respectively; see Table 6), similar to the SBO XRF 080109/SN 2008D (\(\Gamma\)\(\approx\)2.3; Soderberg et al. 2008). Finally, we note that contemporary optical time domain surveys would have detected an observable SNe associated with FXTs 17 and 20, if they were at \(z\)\(\lesssim\)0.1. In summary, we do not find evidence in support of an SBO origin for FXTs 17 or 20, but cannot discard it completely for FXT 15. Comparing the FXT rate with the much larger total CC-SNe rate, it is clear that only a small fraction of SBOs can lead to FXTs (see Fig. 17, _left panel_). After analyzing the volumetric rate of different massive star progenitors, we conclude that just some stars are more consistent with FXTs (see Sect. 5.3). Although the derived event rate density of FXTs falls especially close to the expected rate of BSGs (\(\sim\)2\(\times\)10\({}^{3}\) Gpc\({}^{-3}\) yr\({}^{-1}\)), such an association is largely ruled out by other characteristics such as energetics and host-galaxy properties. Thus, we conclude that this sample of FXTs is unlikely to be associated with SBOs from normal CC-SNe. ### Gamma-ray bursts (GRBs) GRBs are characterized by an average emission timescale of \(\approx\)20 seconds for LGRBs and \(\approx\)0.2 seconds for SGRBs (Meegan et al. 1996; Meszaros 2006). Currently, the accepted model of GRBs consists of a relativistically expanding fireball with associated internal and external shocks (Meszaros & Rees 1997). Once the gamma-ray emission is generated, the expanding jet-cited fireball interacts with and shocks the surrounding interstellar medium, producing a broadband X-ray-to-radio afterglow. When the Doppler boosting angle of the decelerating fireball exceeds the jet aperture angle, it produces a steepening in the light curve known as the "jet break" (Sari 1999; Rhoads 1999; Zhang & Meszaros 2004). The majority of LGRBs arise from the core-collapse of massive stars associated with hydrogen-poor, high-velocity type Ic supernovae (Hjorth et al. 2003; Cano 2013; Levan et al. 2016). On the other hand, the current model of SGRBs is linked to the merger of a compact NS-NS or NS-BH Figure 18: Light curves of the eight FXTs in 0.3–10 keV luminosity units (converted from 0.5–7 keV light curves assuming best-fit spectral models in Sect. 3.3). 
Several individual transients are overplotted: XRF 080109/SN 2008D (the low-luminosity supernova SBO; _solid cyan lines_, 27 Mpc;); XRF 060218/SN 2006aj (_solid blue lines_, 145 Mpc;); XRF 100316D/SN 2010bh (_solid orange lines_, 263 Mpc; Berniol Duran et al. 2015; Starling et al. 2011; Modjaz et al. 2009; Evans et al. 2009; Soderberg et al. 2008; Evans et al. 2007; Campana et al. 2006); GRB 110328A/Swift J1644+57 (relativistically beamed TDE; _solid black lines_, \(z\)=0.3543; Bloom et al. 2011; Levan et al. 2011); J2150\(-\)0551 (unbeamed TDE; _solid pink line_, \(z\)=0.055; Lin et al. 2018). For FXTs 15, 17, 18 and 20 (_open symbols_), we assume \(z\)=1.0, and for FXTs 16, 19, 21, and 22 we take the redshift values from Table 8. binary (e.g., Eichler et al. 1989; Narayan et al. 1992), induced by angular momentum and energy losses due to gravitational wave (GW) emission and leading to a GW burst (Abbott et al. 2016). The NS-NS channel could produce as a remnant either a millisecond magnetar (e.g., Zhang 2013; Sun et al. 2017) or a BH surrounded by a hyper-accreting debris disk. The NS-BH channel may also generate a debris disk, if the NS is disrupted outside the BH event horizon by tidal forces (Rosswog 2007; Metzger 2019). Once it happens, both the high accretion rate and rapid rotation yield energy extraction, thus allowing the launching of a relativistic jet, via either neutrino-antineutrino annihilation or magneto-hydrodynamic processes (e.g., Blandford & Znajek 1977; Rosswog & Ramirez-Ruiz 2002; Lee & Ramirez-Ruiz 2007). The accretion event could produce an isotropic thermal supernova-like emission on timescales of \(\approx\)10\({}^{4}\)-10\({}^{6}\) seconds called "kilonova" (e.g., Berger et al. 2013b; Tanvir et al. 2013; Gao et al. 2015; Sun et al. 2017; Pian et al. 2017; Arcavi et al. 2017; Metzger 2019). No contemporaneous gamma-ray counterparts are detected near the X-ray trigger times for any FXTs in our sample, ruling out an on-axis GRB scenario. The intrinsic light curves of all FXTs, except 18, are flatter and fainter than the vast majority of on-axis X-ray afterglows over the same timescales (2D shaded histogram in Fig. 19), with initial luminosities \(\approx\)1-2 dex Figure 19: Similar to Fig. 18. The X-ray afterglow light curves of 64 LGRBs plus 32 SGRBs (taken from Bernardini et al. 2012; Lü et al. 2015) are shown as a 2D histogram, while several individual transients are overplotted: GRB 111209A (ultra-long duration LGRB; _solid magenta line_, \(z\)=0.677; Levan et al. 2014); GRB 170817A (off-axis SGRB, multiplied by \(\times\)1000; _solid dark green line_; Nynka et al. 2018; D’Avanzo et al. 2018; Troja et al. 2020, 2022); SN 2020bvc (the first off-axis LGRB candidate; _solid light green line_; Izzo et al. 2020), and theoretical off-axis GRB afterglows at different viewing angles \(\theta_{\rm obs}\) (_solid and dashed colour lines_ represents afterglows with isotropic-equivalent energy and circumbumbest density of 10\({}^{53}\) erg, 1 cm\({}^{-3}\) and 10\({}^{51}\) erg, 0.15 cm\({}^{-3}\), respectively; Berger 2014; Ryan et al. 2019; Chrimes et al. 2022). For FXTs 15, 17, 18, and 20 (_open symbols_), we assume \(z\)=1.0, and for FXTs 16, 19, 21, and 22, we take the redshift values from Table 8. below the luminosity range \(L_{\rm X,peak}^{\rm GRBs}\)\(\gtrsim\)10\({}^{47}\) erg s\({}^{-1}\) observed for GRBs. Beyond \(\sim\)10\({}^{2}\)-10\({}^{3}\) seconds, however, most FXTs do begin to overlap energetically with the low-luminosity on-axis X-ray GRB afterglows. 
Overall, GRBs have _canonical_ light curves which can be split into up to five different components (Zhang et al., 2006), although not all X-ray afterglows necessarily exhibit all of them (e.g., Nousek et al., 2006; Willingale et al., 2007; Zhang et al., 2007; Evans et al., 2007; Liang et al., 2007b, 2009; Evans et al., 2009). The light-curve components are (from the earliest to the latest): _i_) steep decay phase (the tail of the prompt emission, from \(F_{X}\)\(\propto\)\(t^{-3}\) to \(\propto\)\(t^{-10}\)); _ii_) shallow decay or plateau phase (which can be interpreted as continuous energy injection by a central engine, from \(F_{X}\)\(\propto\)\(t^{-0.7}\) to \(\propto\)\(t^{-0.9}\)); _iii_) normal decay phase (the typical value predicted in the standard external forward shock model, \(F_{X}\)\(\propto\)\(t^{-1}\)); _iv_) jet break phase (a geometrical effect, \(F_{X}\)\(\propto\)\(t^{-2}\)); and _v_) X-ray flares (powered directly by the GRB central engine). We note that FXTs 17, 18, and 21 exhibit PL decay light curves, similar to the normal decay phase of GRBs, while the other FXTs follow BPLs. However, FXTs 17, 18, and 21 decline as \(F_{X}\)\(\propto\)\(t^{-0.3}\) and \(\propto\)\(t^{-0.5}\), which is much shallower than the characteristic normal and jet-break phases (Evans et al., 2009, 2007; Racusin et al., 2009), but could be consistent with a shallow decay or plateau phase (Troja et al., 2007; Rowlinson et al., 2010). Notably, FXT 21 exhibits temporal flaring behavior, which is potentially comparable to the strong X-ray flaring episodes seen in the tails of the X-ray afterglow in some GRBs (Barthelmy et al., 2005; Campana et al., 2006; Chincarini et al., 2010; Margutti et al., 2011), while its best-fit X-ray spectral slope of \(\Gamma_{\rm FXT~{}21}\)=3.1\(\pm\)0.6 is consistent with that of the standard afterglow distribution (\(\Gamma_{\rm GRBs}\)=1.5-3.0; Berger, 2014; Wang et al., 2015; Bauer et al., 2017) at the 1\(\sigma\) confidence level. On the other hand, FXTs 16, 19, and 22 show light curves consistent with a \(\approx\)2.1-3.7 ks plateau phase followed by a power-law decay (\(F_{X}\)\(\propto\)\(t^{-1.9/-3.0}\); see Fig 8 and Table 5) accompanied by likely spectral softening, especially for FXT 16 (see Fig 8 and Table 11). Spectral softening has been seen previously in some GRB afterglows (e.g., GRB 130925A; Zhao & Shao, 2014). FXTs 16, 19 and 22 have photon indices (\(\Gamma\)\(\approx\)2.1-2.3; see Table 6) similar to GRB afterglows (\(\Gamma_{\rm GRBs}\)\(\approx\)1.5-3.0; Berger, 2014; Wang et al., 2015). Notably, some subsets of LGRBs (e.g., Lyons et al., 2010) and SGRBs (e.g., Rowlinson et al., 2010, 2013; Gompertz et al., 2014) exhibit plateau phases, although only \(<\)10% have plateau luminosities \(\lesssim\)10\({}^{47}\) erg s\({}^{-1}\) which would be consistent with FXTs 16, 19 and 22 at their redshifts. Finally, FXTs 15 and 20 have BPL light curves with break times at \(\approx\)3.7 and 0.1 ks, respectively, followed by PL decays, \(F_{X}\)\(\propto\)\(t^{-2.7}\), accompanied by possible softening (see Table 7) and photon indices (\(\Gamma\)=2.1 and 3.0, respectively; see Table 6) similar to GRB afterglows (\(\Gamma\)\(\simeq\)1.5-3.0; Berger, 2014; Wang et al., 2015) at a 1\(\sigma\) confidence level.
However, their early rise phases (\(F_{X}\)\(\propto\)\(t^{-0.4}\) and \(\propto\)\(t^{1.0}\) for FXTs 15 and 20, respectively, see Table 5) are incongruent with the typical decays of on-axis GRBs X-ray afterglows (from \(F_{X}^{\rm GRBs}\)\(\propto\)\(t^{-1.5}\) to \(\propto\)\(t^{-2.0}\)). FXT 20's light curve shows many similarities to FXT 14/CDF-XT1 (see Fig. 10), where its X-ray luminosity reaches a value of \(L_{\rm X,peak}^{\rm XT1}\)\(\approx\)10\({}^{47}\) erg s\({}^{-1}\) without a clear softening in the spectra. The nature of FXT 14/CDF-XT1 is still unknown, although several scenarios have been proposed recently by Sun et al. (2019), Peng et al. (2019) and Sarin et al. (2021). We introduce the option of FXTs being associated with X-ray flashes (XRF), which may be related to shock breakout from choked GRB jets (see Fig. 18; Campana et al., 2006; Bromberg et al., 2012; Nakar & Sari, 2012). We compare the light curves of the XRF 060218/SN 2006aj (Pian et al., 2006; Campana et al., 2006) and XRF 100316D/SN 2010bh (Starling et al., 2011) related with LL-LGRBs. We note that the plateau phases of FXTs 16, 19, and 22 have similar luminosities to those of XRF 060218 and XRF 100316D (\(L_{\rm X,peak}\)\(\sim\)10\({}^{45}\)-10\({}^{46}\) erg s\({}^{-1}\)), and the break and late-time light curves also appear to match reasonably well. FXT 18 has higher luminosities (\(\approx\)10\({}^{47}\) erg s\({}^{-1}\)) at early times than XRF 060218 and XRF 100316D, but it matches with them at later times. On the other hand, FXT 21, with its known redshift, looks inconsistent by a factor of \(\geq\)5 compared to the XRFs known. Finally, the light curves of FXTs 15, 17, and 20 could be consistent with those of XRFs, but the lack of constrained redshifts does not permit a proper intrinsic comparison of energetics. Importantly, XRF 060218 and XRF 100316D show significant soft thermal components (\(kT\)\(\sim\)0.1-0.2 keV) which become dominant beyond \(\sim\)1000 s (Campana et al., 2006; Starling et al., 2011; Barniol Durvan et al., 2015). We find that only FXT 16 shows comparable spectral behavior; FXTs 15, 19, 20, and 22 do not exhibit any similar robust trend while FXTs 17, 18, and 21 actually appear to marginally harden at late times. We also consider the option of FXTs being off-axis components of GRB afterglows (see Fig. 19). To explore this scenario, we use a numerical model, called afterglow (developed by Ryan et al., 2019), to calculate synthetic light curves in X-rays. We generated synthetic X-ray afterglow for a range of viewing angles (from 0 to 8.6 deg) and assuming an isotropic-equivalent energy of 10\({}^{53}\) (10\({}^{51}\)) erg and a circumburst density of 1.0 (0.15) cm\({}^{-3}\), respectively (Fig. 19, _solid_ [_dashed_] lines), which represent the typical parameters for LGRBs (SGRBs) (e.g., Berger, 2014; Chrimes et al., 2022, and references therein. For instance, LGRBs and SGRBs occur in high- and low-density environments, respectively, while SGRBs are less energetic than LGRBs.32 The light curves have a rise (from \(\sim\)1 to 10\({}^{4}\) s, see Fig. 19) before reaching a peak luminosity, followed by an afterglow consistent with the on-axis GRB trend. Figure 19 shows that an off-axis afterglow under small viewing angles could match the light curves of FXTs 15 and 20. In contrast, sources such as FXTs 16, 19, and 22, which have a plateau phase, cannot match the expected fast rise and curvature at early times of the slightly off-axis GRB cases. 
On the other hand, the light curves of some FXTs do appear to crudely match certain off-axis angle cases of SGRBs (_dashed lines_), because of their lower luminosity. Finally, we compare FXTs with the potential high-inclination off-axis LGRB SN 2020bvc (with viewing angle \(\theta_{\rm obs}\)\(\approx\)23 deg; Izzo et al., 2020), and SGRB GRB 170817A (with viewing angle \(\theta_{\rm obs}\)\(\approx\)23 deg; Nynka et al., 2018; D'Avanzo et al., 2018; Troja et al., 2020, 2022) in Fig. 19. Notably, SN 2020bvc and GRB 170817A are much less luminous (\(L_{\rm X,peak}\)\(\lesssim\)3\(\times\)10\({}^{41}\) erg s\({}^{-1}\)) than the sample of FXTs, by at least \(\sim\)5 orders of magnitude. In general, at high off-axis angles, we can expect later onsets, fainter light curves, lack of decay phases at early times, and peak luminosities at later times (e.g., Granot et al., 2002; Ryan et al., 2019; Oganesyan et al., 2020; Ascenzi et al., 2020). Overall, this comparison suggests that an association of FXTs with high-inclination-angle afterglows of GRBs is unlikely, although a mildly off-axis SGRB scenario remains plausible. In terms of host galaxies (see Sect. 4 for more details), based on the host stellar mass and SFR of FXTs 16, 18, 19, and 22, the galaxies lie above the galaxy main sequence, in a parameter-space region populated mainly by GRBs (see Fig. 13). Nevertheless, it remains difficult to disentangle an association with LGRBs or SGRBs from the current data. In contrast, FXT 21's host is below the galaxy main sequence and shares properties more similar to SGRB hosts (especially the stellar mass). Because the physical offsets of FXTs 16, 19, and 21 (Fig. 14, _right panel_) overlap with the cumulative distributions of CC- and type Ia SNe, and SGRBs at the 1\(\sigma\) confidence level (see Fig. 14), the projected physical offsets are not enough to confirm or rule out the different scenarios. Although the offset distance of FXT 18 suggests a unique and apparent association with SGRBs, the considerable associated X-ray positional uncertainty does not permit us to consider its offset as a robust discriminator. Finally, FXT 22 has a sizeable physical offset which strongly disfavors a robust association with LGRBs, CC- and type Ia SNe, leaving only an SGRB association as a possible scenario. For instance, the dynamical evolution of the BNS due to a kick velocity (the formation of each compact object is associated with one supernova explosion; Fong and Berger 2013; Berger 2014, and references therein) could explain the significant offset of FXT 22 (\(\approx\)40 kpc). In the case of FXT 16, its light curve (the plateau and power-law decay as \(F_{X}\)\(\propto\)\(t^{-2}\)), spectral softening trend, host-galaxy offset distance, and host-galaxy properties are consistent with a compact star merger origin, following Xue et al. (2019). Sun et al. (2019) explain the X-ray emission assuming a magnetar remnant after a BNS merger observed at a slightly off-axis viewing angle. Although FXTs 19 and 22 do not follow the same spectral trend, they share similar timing properties (a plateau phase in their light curves) and belong to star-forming host galaxies, as does FXT 16. However, FXT 22's host is one of the most massive galaxies of the sample. The volumetric rates reinforce some previous conclusions from the timing, spectra, and host properties (for more details, see Sect. 5.3). In the case of LGRBs (see Fig.
17, _left panel_), the FXT volumetric rate is higher than the LGRB rate by up to a factor of \(\sim\)7 at \(z\)\(\leq\)0.6 (even for \(f_{b}^{-1}\)\(\sim\)500), but appears consistent beyond \(z\)\(\geq\)0.6 only for the case \(f_{b}^{-1}\)\(\sim\)500. In this sense, LGRBs with higher beaming corrections remain a potential progenitor for FXTs, while an association with LGRBs with lower jet beaming factors (e.g., \(f_{b}^{-1}\)\(\lesssim\)200) seems unlikely. However, the lower luminosity of FXTs remains challenging to explain in this context. Moreover, we identified that the FXT volumetric rate is well-matched to the LL-LGRB rate considering a moderate beaming correction (\(f_{b}^{-1}\)\(\sim\)14), while it is slightly higher than lower beamed LL-LGRBs (\(f_{b}^{-1}\)\(\sim\)1) beyond \(z\)\(\gtrsim\)1 (see Fig. 17, _left panel_). Thus, based on volumetric rates and luminosities, we conclude that LL-LGRBs remain a viable channel to explain FXTs. However, host properties do not align completely with this statement. Finally, in the case of SGRBs, the volumetric rates give possible clues about an association between FXTs and SGRBs. From the delayed merger Gaussian and log-normal models, the FXT volumetric rates at \(z\)\(\gtrsim\)0.8 and \(z\)\(\lesssim\)2 appear slightly higher than even the case of \(f_{b}^{-1}\)\(\sim\)110 (see Fig. 17, _middle panel_). Meanwhile, the FXT rate remains a factor of \(\sim\)4 higher than SGRB rates assuming lower beaming correction values (i.e., \(f_{b}^{-1}\)\(\sim\)30-25). Thus, a link with SGRBs remains plausible, although it requires relatively strong beaming corrections, which unfortunately remain poorly constrained. This result agrees with the low luminosity of FXTs and the host galaxy properties. ### Tidal disruption events Another potential FXT progenitor scenario is related to TDEs (Rees 1988; Phinney 1989; Burrows et al. 2011; Saxton et al. 2021). TDEs occur when a star, such as a red giant (RG), main-sequence (MS) star, or white dwarf (WD), passes so close to a massive black hole that it undergoes tidal forces which exceed its self-gravity, causing it to be disrupted. A substantial fraction of the tidal debris will fall back onto the BH, leading to luminous thermal emission at soft X-ray through optical wavelengths, either by the accretion of this gas onto the BH and/or the initial shocks due to colliding stellar debris streams (Guillochon and Ramirez-Ruiz 2015). A delay between the disruption and the accretion of gas onto the black hole may cause a delay between the optical and X-ray emission (e.g., Hayasaki and Jonker 2021). The debris fallback rates can range from strongly (\(\sim\)10\({}^{4}\)) super-Eddington to strongly (\(\sim\)10\({}^{-3}\)) sub-Eddington, with respective peak timescales from \(<\)1 day to more than 100 years (e.g., Law-Smith et al. 2017). This is confirmed by an observed empirical correlation between the peak light curve emission time and IMBH/SMBH mass (van Velzen et al. 2020), such that IMBH-WD TDEs are expected to rise to peak within minutes/hours, while SMBH/IMBH-MS TDEs (depending on the mass and spin) take roughly months to years (Krolik and Piran 2011; Haas et al. 2012; Kawana et al. 2018). For MS stars disrupted by an SMBH (10\({}^{6}\)-10\({}^{8}\)\(M_{\odot}\)) or IMBH (10\({}^{3}\)-10\({}^{5}\)\(M_{\odot}\)), the radiation should peak around \(T_{\rm eff}\)\(\sim\)10\({}^{4}\)-10\({}^{6}\) K, i.e., at UV to soft X-ray wavelengths. Mild cooling is predicted, although substantial variations are seen empirically (Gezari 2021).
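For reference, the Eddington bookkeeping used repeatedly in this subsection can be sketched as follows; this is our own illustration, and the black hole masses and luminosity in the example are placeholders. The coefficient \(1.3\times10^{38}\) erg s\(^{-1}\,M_{\odot}^{-1}\) is simply the inverse of the \(M_{\rm Edd}=7.7\times10^{-39}L_{\rm X,peak}\) relation defined in Sect. 6.

```python
def eddington_luminosity(m_bh_msun):
    """Standard Eddington luminosity for ionized hydrogen, in erg/s
    (the inverse of the M_Edd = 7.7e-39 * L_X,peak relation used in Sect. 6)."""
    return 1.3e38 * m_bh_msun

def eddington_ratio(l_x_peak, m_bh_msun):
    """Ratio of an observed peak luminosity to the Eddington luminosity."""
    return l_x_peak / eddington_luminosity(m_bh_msun)

if __name__ == "__main__":
    # Placeholder example: an FXT-like peak luminosity of 1e46 erg/s compared
    # with IMBHs of 1e4 and 1e5 Msun; the large ratios illustrate why
    # super-Eddington accretion or beaming is invoked in the text.
    for m in (1e4, 1e5):
        print(f"M_BH = {m:.0e} Msun -> Eddington ratio ~ {eddington_ratio(1e46, m):.1e}")
```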
For BHs exceeding \(\sim\)10\({}^{5}\)\(M_{\odot}\), a WD would be swallowed whole, leaving no expected emission signature (Clausen et al. 2012; Kawana et al. 2018). However, a high spin rate that increases the Hills mass for BHs with masses \(\lesssim\)10\({}^{6}\)\(M_{\odot}\) could enable IMBHs to potentially disrupt more white dwarfs, but, overall, it is difficult to explain all the X-ray flares as solely due to WD TDEs (Maguire et al. 2020). In addition to the possibility of exceeding the Eddington rate by large factors, emission from relativistic jets is also possible, particularly if the disruption involves a strongly magnetic WD (e.g., Cenko et al. 2012; Brown et al. 2015; Sadowski et al. 2016). Relativistic beaming from jetted TDEs such as _Swift_ J1644+57 (_black solid lines_ in Fig. 18; Bloom et al. 2011; Levan et al. 2011; Saxton et al. 2021) can generate much higher luminosities (\(L_{\rm X,peak}\)\(\sim\)10\({}^{48}\) erg s\({}^{-1}\)), rapid and strong variability, and harder X-ray spectra (\(\Gamma\)=1.6-1.8; Levan et al. 2011), although the photon index softens with decreasing flux (Bloom et al. 2011). However, some sources, such as the TDE AT2021ehb, show a hardening spectral trend with time, which is interpreted as the gradual formation of a magnetically dominated corona (Yao et al. 2022). TDEs involving SMBHs should occur in the centers of more massive galaxies. Thus, we can automatically discard an association of FXTs 16, 19, 20, and 22 with SMBH TDEs because the sources are offset from the centers of their host galaxies. In the case of FXT 18, although it is offset from its host galaxy candidate, the significant X-ray positional uncertainty does not allow us to rule out the association with SMBH TDEs. We arrive at a similar conclusion for FXT 21 and the hostless events FXTs 15 and 17. In contrast, TDEs involving IMBHs may occupy a larger range of possibilities, e.g., occurring near the centers of dwarf galaxies or in crowded stellar systems such as globular clusters (e.g., Jonker et al. 2012; Reines et al. 2013). Thus, the offset of FXTs 16, 19, 20, and 22 remain consistent with a possible IMBH-WD TDE association. Moreover, given the short durations of the FXTs and exclusive detection in the X-ray band to date, the IMBH-WD TDE scenario seems to be most applicable. Below, we will explore the association with IMBH TDEs. The BPL light curves of FXTs 15 and 20 most closely follow the expected light curve shape for IMBH-WD TDE candidates (e.g., MacLeod et al. 2014; Malyali et al. 2019; Peng et al. 2019), with a fast rise and exponential decline (see Fig 8 and Table 5). Although their late-time spectral slopes are relatively soft, their initial slopes are much harder than expected for TDEs (\(T_{\rm bbody}\)\(\approx\)0.02-0.13 keV; Gezari 2021). The nominal peak luminosities of \(L_{\rm X,peak}\)\(\sim\)10\({}^{45}\)-10\({}^{47}\) erg s\({}^{-1}\), respectively, at fiducial redshifts of \(z=1\), are several orders of magnitude larger than the expected Eddington limits for IMBH-WD TDEs or what is observed from local candidates (e.g., IMBH TDE candidate TDE J2150-05 has a \(L_{\rm X,peak}\)\(\sim\)10\({}^{43}\) erg s\({}^{-1}\); Lin et al. 2018), requiring invocation of extreme super-Eddington accretion or relativistic beaming to explain them under a TDE scenario. The peak luminosities are more in line with beamed TDEs _Swift_ J1644+57 (\(L_{\rm X,peak}\)\(\sim\)10\({}^{46}\)-10\({}^{48}\) erg s\({}^{-1}\); see Fig. 18; Bloom et al. 2011; Levan et al. 
2011), but that event is thought to be associated with SMBH TDE emission (although some authors argue for an association with IMBH TDEs, e.g., Krolik & Piran 2011). We cannot exclude an IMBH TDE explanation for FXTs 15 and 20, although they would clearly require special conditions. FXTs 17, 18, and 21 show PL declines from the very start with relatively soft spectral slopes. The lack of any detectable rise appears inconsistent with expected TDE light-curve shapes. However, the soft X-ray spectral shapes, particularly in the case of FXT 18, are potentially consistent with the properties of some IMBH TDEs (e.g., MacLeod et al. 2014; Malyali et al. 2019). Moreover, we do not have sufficient counts to resolve the fast rise times that may be expected for some IMBH-WD TDEs (e.g., MacLeod et al. 2016). Again, the peak luminosities (adopting a fiducial redshift of \(z\)=1 and photometric redshift of 0.35 for FXTs 17 and 18, respectively) are a few orders of magnitude larger than the expected Eddington limits for IMBH-WD TDEs or what is observed from local candidates, requiring super-Eddington accretion or relativistic beaming to explain them under an IMBH TDE scenario. Subsequent observations of FXT 17 rule out any extended bright, long-term variability; however, this is not the case for FXT 18, which has not been revisited by X-ray observatories. For these reasons, we disfavor a TDE explanation for FXTs 17 and 21. The light curves of FXTs 16, 19, and 22 show \(\approx\)2.1-3.7 ks plateaus with subsequent power-law decay (from \(F_{\rm X}\)\(\propto\)\(t^{-1.9}\) to \(\propto\)\(t^{-3.0}\)), accompanied by robust spectral softening in the case of FXT 16 (Fig. 8, and Tables 5 and 11). Although not commonly observed in X-ray emission from TDEs as yet (Gezari 2021), some eccentric fallback or reprocessing scenarios could potentially explain this behavior. On the other hand, the overall spectra of FXTs 16, 19 and 22 are best-fit with photon indices of \(\Gamma\)\(\approx\)2.1-2.3 (see Table 6), which are much harder than expected for TDEs, while their peak luminosities (\(L_{\rm X,peak}\)\(\approx\)3\(\times\)10\({}^{45}\)-7\(\times\)10\({}^{46}\) erg s\({}^{-1}\)) are also generally much higher than candidate IMBH-TDEs identified to date. A relativistically beamed IMBH-TDE scenario could better explain some of the X-ray properties (luminosities, spectral slopes) of FXTs 16, 19, and 22 (e.g., Peng et al. 2019 argue for an IMBH-WD TDE scenario for FXT 16), although subsequent observations indicate that none shows extended durations or variability evolution such as seen in _Swift_ J1644+57. Finally, FXTs 16, 19, and 22 are all significantly offset from the nuclei of their associated hosts by \(\approx\)0.4\({}^{\prime\prime}\)-4.6\({}^{\prime\prime}\) (or physical distances of \(\approx\)3.3-40 kpc; see Fig. 6), requiring an ejected IMBH scenario, or an IMBH in the stripped nucleus of an infalling galaxy (such as TDE J2150-05; Lin et al. 2018), in order for the TDE scenario to remain viable. For these reasons, we disfavor a TDE explanation for FXTs 16, 19, and 22, although relativistically beamed emission from an IMBH-WD TDE scenario cannot be ruled out. Regarding the host galaxy properties, we might expect to find IMBHs near the centers of dwarf galaxies, in globular clusters at large offsets in massive galaxies, or ejected via 3-body interactions (Komossa & Merritt 2008; Jonker et al. 2012; Reines et al. 2013).
Thus, we might naively expect to identify FXTs associated with IMBH-WD TDEs in any type of host galaxy, and with a wide range of projected offsets. FXTs 16, 18 and 19 have hosts with \(M_{*}\)\(\lesssim\)10\({}^{9}\) M\({}_{\odot}\), while the hosts of FXTs 21 and 22 have larger stellar masses (\(M_{*}\)\(\sim\)10\({}^{11}\) M\({}_{\odot}\)). Thus, we cannot discard an IMBH-WD TDE scenario for any of these events. The FXT rate is lower than the rate of SMBH TDEs only at \(z\)\(\lesssim\)0.8 (see Fig. 17, _right panel_). In contrast, in the case of IMBH TDEs, the FXT rate is much higher across cosmic time, although it could still be consistent if only a fraction of FXTs share this origin (since the sample likely contains a mix of progenitors). Another difficulty is the difference in energetics between FXTs and IMBH TDEs (setting aside the beamed case, which should occur in only a small fraction of events). Moreover, based on inconsistencies in several other parameters (such as the offsets between the transient X-ray positions and their host galaxies) we can rule out an SMBH-TDE channel for several FXTs. Finally, the partial consistency between the volumetric rates of FXTs and different transient classes at different redshifts (see Sect. 5 for more details), the timing and spectral parameters (see Sect. 3), and the host properties (see Sect. 4) may suggest that the overall sample of FXTs arises from a heterogeneous set of progenitors. Detection of contemporaneous EM counterparts from future FXTs remains crucial to disentangle these multiple formation channels. Nonetheless, we strongly caution the reader not to overinterpret the consistency or lack thereof between FXTs and many of the transient classes, as we have implicitly assumed no density evolution in our calculations (which there easily could be) and the density evolution assumed for several of the other transient classes is not well-constrained. Thus, some of the previously mentioned discrepancies at low or high redshift could be no more than artifacts of these assumptions. ### FXTs discovered in Paper I The FXTs discovered here share many similarities with the previous distant FXTs identified in Paper I, in terms of their timing (Fig. D.1), spectral (Figs. 10 and 11), and host-galaxy properties (Fig. D.2). Unfortunately, the lack of host-galaxy detections for many FXTs identified here and in Paper I does not permit more detailed comparisons of energetics between the two samples. It is clear that, according to the properties of the hosts we do detect, there is no single unifying class of galaxies (in terms of SFR and stellar mass) that could harbor a unique kind of transient. We conclude that the FXTs reported here likely have \(z\)\(\gtrsim\)0.2, i.e., they are not related to local galaxies (see Fig. D.2, _bottom panel_), and presumably span a wide distance range. ## 7 Expected sources in current and future missions Based on the event rate computed in Sect. 5.1, we explore the expected number of FXTs that should be detectable in other ongoing and future X-ray missions. The expected event rate of another (_New_) mission (called \(\mathcal{R}_{\rm New}\)) given our results (\(\mathcal{R}_{\rm Total}\)) is \[\mathcal{R}_{\rm New}=\left[\frac{\mathcal{N}(>\!S_{\rm New,lim})}{\mathcal{N}(>\!S_{\rm CXO,lim})}\right]\mathcal{R}_{\rm Total}, \tag{5}\] where \(\mathcal{R}_{\rm New}\) and \(\mathcal{N}(>\!S_{\rm New,lim})\) are the event rate of the new mission and the cumulative fluence distribution evaluated at the new mission's fluence limit (taken from Sect.
5.1), respectively, and \(\mathcal{N}(>\!S_{\rm CX0,lim})\) represents the fluence limit of _Chandra_ (taken from Sect. 5.1). As we explain in Sect. 5.1, the event rate behaves as a BPL function. Then, the expected total number of sources must be \[\mathcal{N}_{\rm New} = \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! ported 12, 136 and \(\approx\)2500 candidates to date, respectively; the large numbers from the latter two are strongly dominated by Galactic stellar flares, cataclysmic variables, type I X-ray bursts, supergiant flares, as well as extragalactic AGN and SBOs. In the case of future missions, the Advanced Telescope for High ENergy Astrophysics (_Athena_) will characterize the hot and energetic universe from the mid-2030s. It will cover the 0.2-12 keV band with a 1.4 m\({}^{2}\) effective area at 1 keV, and have a nominal lifetime of five years (although it could be extended for 10 years depending on consumables; Nandra et al. 2013; Barret et al. 2013, 2023). The Wide Field Imager (WFI) will have a spectral resolution of \(\Delta E\)\(<\)170 eV at 7 keV, a spatial resolution of \(\lesssim\)10 arcsec PSF on-axis, and FoV of 0.44 deg\({}^{2}\)(Rau et al. 2016). To estimate the number of FXTs, we assume a flux limit \(\times\)10 higher than the nominal 10 ks limit of \(F_{\rm WFI,lim}\)=5\(\times\)10\({}^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) for the WFI deep fields. Thus, the expected number of FXTs detected by _Athena_ will \(\approx\)460 [357-581] FXTs yr\({}^{-1}\). Assuming that the WFI observations will be spread evenly during the mission and that those observations will be performed during the _Athena_ ground contact, approximately one-sixth of the sources (\(\approx\)60-97 FXTs yr\({}^{-1}\)) could have _Athena_ alerts with latencies \(<\)4 hours. This could permit the investigation of the multi-wavelength properties of FXTs via coordinated campaigns with ground and space telescopes in other energy ranges. The _Einstein Probe_ (EP) mission will explore high-energy transient and variable phenomena in the 0.5-4.0 keV band (Yuan et al. 2015; Yuan et al. 2017, 2018, 2022), with a scheduled launch by the end of 2023 and a 3-year operational lifetime (and 5-year goal; Yuan et al. 2017). EP will harbor two scientific instruments, the Wide-field X-ray Telescope (WXT) with a large instantaneous FoV of 3600 deg\({}^{2}\) and a narrow-field Follow-up X-ray Telescope, and a fast alert downlink system (Yuan et al. 2015; Yuan et al. 2018). 
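The rescaling in Eqs. (5)-(6) can be made concrete with a short sketch. This is our own illustration: the single power law below stands in for the broken power-law cumulative fluence distribution of Sect. 5.1, and its slope, the fluence limits, the event rate, and the field-of-view value are placeholders rather than the fitted or instrument values adopted in the text.

```python
def cumulative_counts(fluence, norm=1.0, slope=-0.5):
    """Single power-law stand-in for the cumulative fluence distribution N(>S).

    The paper's distribution is a broken power law (Sect. 5.1); the slope here
    is a placeholder, not the fitted value.
    """
    return norm * fluence**slope

def rescaled_rate(rate_total, s_lim_new, s_lim_cxo, **counts_kwargs):
    """Eq. (5): scale the Chandra-anchored rate by N(>S_New,lim) / N(>S_CXO,lim)."""
    return (rate_total * cumulative_counts(s_lim_new, **counts_kwargs)
            / cumulative_counts(s_lim_cxo, **counts_kwargs))

def expected_number(rate_new, fov_deg2, t_yr):
    """Eq. (6): rate (deg^-2 yr^-1) times field of view (deg^2) times observing time (yr)."""
    return rate_new * fov_deg2 * t_yr

if __name__ == "__main__":
    # Placeholder inputs: a Chandra-anchored rate, a notional mission with a
    # 10x shallower fluence limit, and a 0.44 deg^2 field of view over one year.
    r_new = rescaled_rate(36.9, s_lim_new=1e-9, s_lim_cxo=1e-10)
    print(f"R_new ~ {r_new:.1f} deg^-2 yr^-1 -> "
          f"N per year ~ {expected_number(r_new, fov_deg2=0.44, t_yr=1.0):.1f}")
```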
To compute the expected number of FXTs, we consider only the WXT instrument with a threshold sensitivity of \(F_{\rm WXT,lim}\)\(\approx\)5\(\times\)10\({}^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\) at 1 ks, yielding \(\approx\)13 [10-16] FXTs yr\({}^{-1}\). _STAR-X_ is a proposed equatorial low-Earth orbit NASA mission comprised of an X-ray telescope (XRT) and a UV telescope (UVT; Saha et al. 2017; Saha & Zhang 2022). It aims to conduct a time-domain survey and respond rapidly to transient sources discovered by other observatories such as LIGO, Rubin LSST, Roman, and SKA. XRT will have a \(\approx\)2.5\({}^{\prime\prime}\) half-power diameter PSF, an on-axis effective area of \(\lesssim\)1,800 cm\({}^{2}\) at 1 keV, a 1 deg\({}^{2}\) FoV, low particle background, and an on-board transient event alert capability of \(\sim\)5 min (Saha et al. 2017; Saha & Zhang 2022). Thus _STAR-X_ will be at least 1 dex more capable and more sensitive than _Chandra_ and _Swift_-XRT to find and study transient sources in the 0.2-6 keV band. To compute the potential expected number of FXTs, we again consider a 10 ks threshold sensitivity of \(F_{\rm STAR-X,lim}\)\(\approx\)1\(\times\)10\({}^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) (at 0.5-2 keV), which we multiply by 10 to avoid Poisson fluctuations, yielding \(\approx\)180 [140-235] FXTs yr\({}^{-1}\). However, during its nominal 2-yr mission, _STAR-X_ will observe the extragalactic sky primarily through two time-domain surveys, called _Deep_ and _Medium_ modes, which invoke different observing strategies. The _Deep_ (_Medium_) mode will have a daily (weekly) cadence, individual exposures of 1.5 (0.5) ks, a total exposure time of \(\sim\)13.1 (15.6) Ms over 12 (300) deg\({}^{2}\), and a single-epoch flux limit of \(F_{\rm STAR-X,lim}\)\(\approx\)1\(\times\)10\({}^{-14}\) (3\(\times\)10\({}^{-14}\)) erg cm\({}^{-2}\) s\({}^{-1}\), which we again multiply by 10 to avoid Poisson fluctuations, yielding expected FXT numbers of \(\approx\)18-30 (12-20). As with eROSITA, we should consider these as upper limits, since the relatively short visits will hinder identifying shorter FXTs and lead to poor characterizations of FXT X-ray properties, especially for the _Medium_ survey. On the other hand, the simultaneous UVT observations should strongly constrain possible simultaneous or delayed lower-wavelength emission. Finally, the _Advanced X-ray Imaging Satellite_ (AXIS) is a NASA Probe Mission Concept designed to be the premier high angular resolution X-ray mission of the 2020s (\(\sim\)1.0\({}^{\prime\prime}\) on-axis and \(\sim\)2.0\({}^{\prime\prime}\) at 15\({}^{\prime}\) off-axis). AXIS will cover an energy range of 0.2-10 keV, and have an effective area of 5600 cm\({}^{2}\) at 1 keV, an energy resolution of \(\sim\)150 eV at 6 keV, a FoV diameter of 24\({}^{\prime}\), and a detector background 4-5 times lower than _Chandra_. To estimate the expected number of FXTs, we consider an FXT threshold sensitivity of \(F_{\rm AXIS,lim}\)\(\approx\)3\(\times\)10\({}^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) (at 1 ks), producing \(\approx\)50 [39-63] FXTs yr\({}^{-1}\). ## 8 Conclusions In this work we searched for extragalactic FXTs present in _Chandra_ data from 2014 to 2022. We applied an algorithm developed by Yang et al. (2019) and Quirola-Vasquez et al. (2022, hereafter Paper I) to X-ray sources with \(|b|\)\(>\)10 deg (i.e., 3899 _Chandra_ observations, totaling \(\approx\)88.8 Ms and 264.4 deg\({}^{2}\)).
Considering additional criteria (analyzing further X-ray observations taken by _Chandra_, _XMM-Newton_, _Swift_-XRT, _Einstein_, and _ROSAT_) and other astronomical catalogs (e.g., _Gaia_, NED, SIMBAD, VHS, DES, Pan-STARRS), we identify eight FXTs consistent with an extragalactic origin. We rediscover all (three) previously reported _Chandra_ sources: XRT 150322 (previously identified by Xue et al. 2019), XRT 170901 (previously identified by Lin et al. 2019, 2022), and XRT 210423 (previously identified by Lin et al. 2021). We analyzed the timing and spectral properties of this new sample of FXTs. Overall, the X-ray spectra are well-fitted by power-law models with a median slope of \(\Gamma\)=2.6 and an overall range \(\Gamma\)\(\approx\)2.1-3.4 (excluding the very soft \(\Gamma\)\(\gtrsim\)6.5 outlier XRT 161125). We observe significant spectral softening with time for FXT 16/CDF-XT2, similar to other sources such as FXT 7/XRT 030511 and FXT 12/XRT 110919 (Paper I), while FXTs 15 and 20 show similar albeit marginal spectral softening trends. Regarding the X-ray timing properties, the light curves of five FXTs (15, 16, 19, 20, and 22) show broken power-law behavior, of which three FXTs (16, 19, and 22) exhibit plateaus with durations of \(\sim\)3-5 ks, followed by PL decays with slopes ranging from \(\sim\)2.0 to 3.8. Only in the case of FXT 16/CDF-XT2 do we simultaneously see spectral softening coincident with the plateau and decay phase (at 90% confidence), reinforcing the results obtained by Xue et al. (2019). Finally, three FXTs (FXTs 17, 18, and 21) show simple power-law decays in their light curves. We compute an event rate for the eight FXTs analyzed in this work of \(\mathcal{R}_{\rm This~{}work}\)=45.6\({}^{+18.2}_{-14.3}\) deg\({}^{-2}\) yr\({}^{-1}\). If we also consider the nine FXTs classified as "distant" (i.e., \(\gtrsim\)100 Mpc) from Paper I, the combined event rate is \(\mathcal{R}_{\rm Total}\)=36.9\({}^{+9.7}_{-8.3}\) deg\({}^{-2}\) yr\({}^{-1}\). Additionally, we constructed the X-ray luminosity function (XLF) in the range from 10\({}^{44}\) to 10\({}^{47.5}\) erg s\({}^{-1}\), the first of its kind. The XLF clearly shows that the FXT volumetric rate decreases with increasing X-ray luminosity. A power-law model describes this trend with best-fit slopes of \(-\)0.26\(\pm\)0.13, considering just eight FXTs with known redshift, or \(-\)0.57\(\pm\)0.11 (\(-\)1.13\(\pm\)0.27) considering 17 FXTs with known + fiducial redshifts of \(z\)=1.0 (0.5). Finally, we derive the volumetric rate based on the XLF (sources from Paper I and this work), which covers a range of \(\sim\)1.9\(\times\)10\({}^{3}-4.6\times\)10\({}^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\) in the redshift range of \(z\)\(\approx\)0\(-\)2.2. These values are in good agreement with the values derived by Paper I and Xue et al. (2019) at similar redshifts (\(z_{\rm max}\)\(\approx\)2.1 and 1.9, respectively), and appear broadly consistent with several other transient classes (LL-LGRBs, LGRBs, SGRBs, and IMBH TDEs) across a broad redshift range. Six FXTs are associated with optical and NIR extended sources; however, only five (FXTs 16, 18, 19, 21, and 22) are sufficiently bright to derive galaxy properties using photometric archival data (at least four photometric points). For FXT 20, its potential host galaxy is detected only weakly in just two photometric bands, which does not allow us to derive host properties.
The host galaxies appear to cover a wide range in redshift (\(z_{\rm phot/spec}\)\(\approx\)0.3-1.5), stellar mass (\(M_{*}\)\(\approx\)10\({}^{7.9}\)-10\({}^{11}\)\(M_{\odot}\)), and SFR (\(\approx\)0.2-35 \(M_{\odot}\) yr\({}^{-1}\)). At the assumed distances, the peak luminosities, energetics, and spectro-temporal properties for all five sources robustly rule out an SBO origin, but potentially remain consistent with origins as on-axis LL-LGRBs, off-axis GRBs, or IMBH-WD TDEs. For the three FXTs (FXTs 15, 17, and 20) without optical and NIR host detections, interpretations are broader and less clear. An SBO scenario remains possible at low redshifts, \(z\)\(\lesssim\)0.4, as long as potential hosts are extremely low-mass, low-SFR dwarf galaxies. Nevertheless, at fiducial redshifts of \(\approx\)1.0, an SBO association is ruled out due to their high estimated X-ray luminosities (\(L_{\rm X,peak}\)\(\gtrsim\)10\({}^{44}\) erg s\({}^{-1}\)). A highly off-axis GRB scenario, similar to SN 2020bvc (\(L_{\rm X,peak}\)\(\sim\)10\({}^{41}\) erg s\({}^{-1}\)) or GRB 170817A (\(L_{\rm X,peak}\)\(\sim\)10\({}^{39}\) erg s\({}^{-1}\)), does not appear viable due to the relatively low expected redshifts \(z\)\(\lesssim\)0.02. However, off-axis GRBs afterglow (showing a rise of 1-10\({}^{4}\) s, before reaching a peak luminosity, followed by an afterglow consistent with the on-axis GRB trend) under a small range of viewing angles (from 0 to 8.6 deg) could match the light curves of some FXTs. An on-axis GRB scenario is possible at high redshifts (\(z\)\(\gtrsim\)1.0) and naturally explains the non-detection of faint host galaxies by existing optical and NIR facilities. However, their light curves at early times look inconsistent with on-axis X-ray afterglows, and the lack of gamma-ray detection is a weakness in this interpretation. Just the LL-LGRB scenario at moderate-high redshift could explain the non-detection of faint hosts and the lack of gamma-ray counterparts. Finally, an unbeamed IMBH-WD TDE scenario is possible only up to a redshift of \(z\)\(\approx\)0.14 (assuming a luminosity of \(L_{\rm X,peak}\)\(\sim\)10\({}^{43}\) erg s\({}^{-1}\) such as TDE J2150-0551). To reach higher luminosities beyond a fiducial redshift of \(z\)\(\approx\)1.0 (\(L_{\rm X,peak}\)\(\gtrsim\)10\({}^{45}\) erg s\({}^{-1}\)) requires a strongly beamed TDE scenario. Unfortunately, the few counts and the lack of host and EM counterparts do not permit us to analyze this scenario in detail. All the above, together with the broad range of properties, suggests that this novel population of FXTs has a mix of origins. The eight FXT candidates discovered or re-discovered in this work and the previous 14 sources from Paper I establish a novel sample of sources that opens a new window into the poorly explored world of X-ray transients. Unfortunately, the lack of well-determined distances and host properties leaves many questions about their nature unanswered. Given that so few FXTs have firm host detections and distances, concerted resources are needed to identify and follow up their associated host galaxies through photometric and spectroscopic techniques, in order to place extragalactic FXTs in an appropriate cosmic context compared to previous well-studied transients. Moreover, the lack of simultaneous detections across the electromagnetic spectrum has thus far severely limited our understanding of their emission process and progenitor channels. 
It is not only important to increase the number of detected FXTs, but also to develop efficient strategies for (onboard) detection and alert generation to trigger follow-up campaigns while the FXTs are still active in X-rays and likely other wavelengths. Future narrow- and wide-field missions such as _Athena_, _STAR-X_, and _EP_ will enhance our detection capabilities and the potential for alerts enabling follow-up in other energy bands, while missions such as _AXIS_ will allow us to accurately localize transients in order to identify host galaxies and offset distances. We leave as future work (Quirola-Vasquez et al. in prep.) an account of the ongoing efforts to acquire and analyze imaging and spectroscopy at optical and NIR wavelengths to identify the host galaxies of FXTs and thereby constrain their energetics and host properties. ###### Acknowledgements. We acknowledge support from: ANID grants Programa de Capital Humano Avanzado folio 21188086 (J.-O.), CATA-Basal AFB-170002 (J.-V. F.E.B.), FONDECYT Regular 119081 (F.E.B.), 1200495 (F.E.B.) and Millennium Science Initiative ICN12_009 (J.-V. F.E.B.); this project was (partially) funded by NWO under grant number 184.304.020 (P.G.J.); NSF grant AST-2106990 and _Chandra_ X-ray Center grant GO02-10080X (W.N.B.); the National Natural Science Foundation of China grant 11991053 (B.L.); support from NSFC grants 12025035 and 1189093 (Y.O.Y.); support from the George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, from the National Science Foundation through grants AST-116468 and AST-2009424, and from the NASA/ESA/CSA James Webb Space Telescope through the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-03127 (G.Y.). The scientific results reported in this article are based on observations made by the _Chandra_ X-ray Observatory. This research has made use of software provided by the _Chandra_ X-ray Center (CXC). This research uses services or data provided by the Astro Data Lab at NSF's National Optical-Infrared Astronomy Research Laboratory. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation.
2301.00889
An empirical process framework for covariate balance in causal inference
We propose a new perspective for the evaluation of matching procedures by considering the complexity of the function class they belong to. Under this perspective we provide theoretical guarantees on post-matching covariate balance through a finite sample concentration inequality. We apply this framework to coarsened exact matching as well as matching using the propensity score and suggest how to apply it to other algorithms. Simulation studies are used to evaluate the procedures.
Efrén Cruz Cortés, Kevin Josey, Fan Yang, Debashis Ghosh
2023-01-02T21:56:12Z
http://arxiv.org/abs/2301.00889v1
# An empirical process framework for covariate balance in causal inference ###### Abstract We propose a new perspective for the evaluation of matching procedures by considering the complexity of the function class they belong to. Under this perspective we provide theoretical guarantees on post-matching covariate balance through a finite sample concentration inequality. We apply this framework to coarsened exact matching as well as matching using the propensity score and suggest how to apply it to other algorithms. Simulation studies are used to evaluate the procedures. **keywords:** Causal effects, empirical distribution function, entropy metric, superpopulation, tail inequality, Vapnik-Chervonenkis dimension. ## 1 Introduction Causal inference is a central goal for outcomes and policy research, particularly in the medical field. Among the many topics in this broad field of study are methods for evaluating treatment effects with non-randomized data. There is an abundance of observational data in nearly every discipline of science. However, bias induced by confounding is inherent in observational studies. In this context, the researcher must account for every potential confounder in some way before they can establish causality. While randomization remains the gold-standard for inference, as there is no confounding by definition, randomizing individuals into treatment groups is often cost prohibitive and sometimes unethical for certain study designs. Under the potential outcomes framework (Neyman, 1923; Rubin, 1974), Rosenbaum and Rubin (1983) were able to describe how the propensity score plays a key role in causal effect estimation and inference with observational data. The propensity score is defined as the probability of receiving a treatment given a set of measured covariates. Under strong ignorabiligy assumption, the propensity score removes bias attributable to confounding due to its property as a balancing score (Rosenbaum and Rubin, 1983). With this result in mind, numerous methods for causal effect estimation were subsequently developed around the propensity score, with covariate balance serving as the primary objective (e.g., Imai and Ratkovic (2014); Zubizarreta (2015); Chan et al. (2016)). However, the results presented by Rosenbaum and Rubin (1983) about the propensity score are derived in an asymptotic setting. This means that estimates of the propensity score may not adequately balance the covariate distribution in finite settings. Therefore, many methods are resolved by iterating between fitting a model for the propensity score and evaluating balance diagnostics on the propensity score adjusted covariates before estimating the treatment effect of interest. Some methods for evaluating balance diagnostics have been proposed by Ho et al. (2007) and Sekhon (2008). The propensity score literature has mostly diverged into two overlapping yet distinct domains - one that uses the propensity score to derive balancing weights (Hainmueller, 2012; Imai and Ratkovic, 2014; Chan et al., 2016) and the other that uses a balancing score, such as the propensity score, to construct a matched cohort. Recently, a multivariate matching approach using coarsened values of the observed covariates was developed by Iacus et al. (2011). They refer to their algorithm as coarsened exact matching. One of the primary aims of their method was to eliminate the iterative step of re-matching participants until an acceptable amount of balance is achieved. 
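Earlier in this introduction, the standard workflow was described as iterating between fitting a propensity score model and checking balance diagnostics on the adjusted covariates. The following minimal sketch (our own naming and synthetic data; it is not code from any of the cited works) illustrates one pass of that loop: it fits a logistic-regression propensity score, forms inverse-probability weights, and reports standardized mean differences before and after weighting. In practice one would revise the propensity model until such diagnostics are acceptable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def standardized_mean_differences(Z, T, weights=None):
    """Standardized mean difference of each column of Z between T=1 and T=0.
    Group means are (optionally) weighted; the pooled SD is left unweighted for simplicity."""
    w = np.ones(len(T)) if weights is None else weights
    smd = []
    for j in range(Z.shape[1]):
        z = Z[:, j]
        m1 = np.average(z[T == 1], weights=w[T == 1])
        m0 = np.average(z[T == 0], weights=w[T == 0])
        pooled_sd = np.sqrt(0.5 * (z[T == 1].var() + z[T == 0].var()))
        smd.append((m1 - m0) / pooled_sd)
    return np.array(smd)

def inverse_probability_weights(Z, T):
    """Fit a logistic propensity score model e(Z) and return inverse-probability weights."""
    e = LogisticRegression(max_iter=1000).fit(Z, T).predict_proba(Z)[:, 1]
    return np.where(T == 1, 1.0 / e, 1.0 / (1.0 - e))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(500, 3))                    # synthetic confounders
    T = rng.binomial(1, 1 / (1 + np.exp(-Z[:, 0])))  # treatment depends on the first confounder
    w = inverse_probability_weights(Z, T)
    print("SMD before weighting:", np.round(standardized_mean_differences(Z, T), 3))
    print("SMD after weighting: ", np.round(standardized_mean_differences(Z, T, w), 3))
```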
Coarsened exact matching is quite simple in nature and proceeds using the following high-level heuristic: 1. For each confounding variable, coarsen it into a certain number of categories; 2. Create strata based on the possible combinations of the coarsened values; 3. Compute a causal effect by comparing the outcomes of the treatment groups within the strata and adjusting for the stratum effect appropriately. The theoretical justification provided by Iacus et al. (2011) for coarsened exact matching is a concept they term monotonic imbalance. They show that bounding the distance between confounders to be small leads to matching procedures that are more flexible than procedures based on the equal percent bias reduction theory developed by Rubin and collaborators (Rubin, 1976; Rubin and Thomas, 1992; Rubin et al., 2006). One of the main advantages of coarsened exact matching is that it becomes amenable to large-scale database querying approaches to peforming causal inference: see Salimi and Suciu (2016) as well as Wang et al. (2017). However, fewer technical results exist for matching estimators than for other approaches, such as inverse probability weighting estimators. Abadie and Imbens (2006) have studied the large-sample asymptotics of matching estimators and found that in general, matching-based estimators of average causal effect did not have the usual \(n^{1/2}\) convergence. The intuition is that the matching algorithm introduces a bias into causal effect estimation that did not vanish asymptotically. This bias term also increased with the number of confounders. Bias-corrected estimators have been proposed by Abadie and Imbens (2011). Abadie and Imbens (2016) performed a theoretical study of the asymptotic behavior of average causal effect estimators that match using the estimated propensity score. Conceptually, achieving covariate balance is a multivariate concept. If we let \(\mathcal{L}(Z\mid T=0)\) and \(\mathcal{L}(Z\mid T=1)\) denote the probability laws for the confounders conditional on treatment status then, ideally, as in the case of perfect randomization, these distributions are equal in some sense. We refer to this sense of equality as covariate balance. Most covariate balance methods do not take the joint distribution of confounders into account but rather seek to match moments of the marginal distributions for the confounders. For example, Imai and Ratkovic (2014) proposed matching the first and second moments of covariates in their algorithm. Practically, one-dimensional diagnostics such as mean comparisons of confounders between treatment groups or Kolmogorov-Smirnov statistics are used to evaluate balance. Wang and Zubizarreta (2019) have argued that due to the inherent complexity in attempting to achieve multivariate balance, one should instead strive to achieve approximate balance between confounders. In this paper, we propose a new theoretical approach to evaluating and understanding covariate balance. We introduce a distance metric to assess how close two multivariate distributions are from each other and define covariate balance as having zero distance. This metric is defined in terms of the function family the matching procedure belongs to. Subsequent assessment of balance relies on understanding the behavior of the function classes in question. We demonstrate the following in the current paper: 1. 
The use of function classes fits naturally with the use of probability metrics (Zolotarev, 1984) for comparing probability laws and in this instance, multivariate distributions for confounders conditional on treatment. 2. Results from empirical process theory (Van Der Vaart and Wellner, 1996; Kosorok, 2007) can subsequently be used to study the behavior of function classes and to make probabilistic statements on the rates of convergence of matching procedures under ideal balance. 3. Ideal balance provides a new theoretical out-of-sample justification for the methodology of Iacus et al. (2011) and can be used for the evaluation of other algorithmic strategies. Based on the framework, one can view the techniques in this paper as being akin to developing a scalable strategy for achieving covariate balance that has relatively low complexity from the viewpoint described in Section 3. ## 2 Background and Preliminaries ### Data Structures and Causal Estimands Let the data be represented as \((Y_{i},T_{i},Z_{i})\), \(i=1,\ldots,n\), a random sample from the triple \((Y,T,Z)\), where \(Y\) denotes the response of interest, \(T\) denotes the treatment group, and \(Z\) is a \(p\)-dimensional vector of covariates. We assume that \(T\) takes values in \(\{0,1\}\). We now briefly review the potential outcomes framework (Rubin, 1974; Holland, 1986). Let \(\{Y(0),Y(1)\}\) denote the potential outcomes for all \(n\) subjects, and the observed response be related to the potential outcomes by \[Y=(1-T)Y(0)+TY(1).\] In the potential outcomes framework, causal effects are defined as within-individual contrasts based on the potential outcomes. One popularly used estimand is the average causal effect, defined as \[\text{ACE}=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}(1)-Y_{i}(0)\right).\] Many assumptions are needed for performing valid causal inference. These include the consistency assumption, the treatment positivity assumption, and the strongly ignorable treatment assumption (Rosenbaum and Rubin, 1983), defined as \[T\perp\{Y(0),Y(1)\}\mid Z. \tag{2.1}\] Assumption (2.1) means that treatment assignment is conditionally independent of the set of potential outcomes given the covariates. Treatment positivity refers to \(1>P(T=1\mid Z)>0\) for all values of \(Z\). Thus, the intuition is that any individual can potentially receive either treatment. Finally, the consistency assumption ensures that the observed outcome and the potential outcome under the observed treatment coincide. As described recently by Imbens and Rubin (2015), causal inference proceeds by modelling the assignment mechanism using observed covariates. A quantity that naturally arises from this modelling is the propensity score (Rosenbaum and Rubin, 1983), the probability of receiving treatment given confounders. The propensity score is defined as \[e(Z)=P(T=1\mid Z).\] Given the treatment ignorability assumption in (2.1), it also follows by Theorem 3 of Rosenbaum and Rubin (1983) that treatment is strongly ignorable given the propensity score, i.e. \[T\perp\{Y(0),Y(1)\}\mid e(Z).\] Based on these assumptions and definitions, we can formulate causal inference using the following approach: (a) define an appropriate causal estimand; (b) formulate a propensity score model; (c) check for covariate balance; (d) if (c) holds, estimate the causal estimand by conditioning on the propensity scores. We note that steps (b) and (c) tend to be iterative in practice. 
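To fix ideas, the sketch below walks through steps (a)-(d) on simulated data: it fits a logistic regression for the propensity score, forms weights that re-weight controls toward the treated population (the same form as the inverse probability of treatment weights given later in Section 5), checks simple one-dimensional balance diagnostics, and only then estimates the effect among the treated. The toy data-generating process, variable names, and the use of scikit-learn are our own illustrative choices, not part of the framework developed below.

```python
# Minimal sketch of the workflow (a)-(d): simulate data, fit a propensity
# model, check covariate balance, then estimate the effect among the treated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 2000, 3
Z = rng.normal(size=(n, p))                        # confounders
T = rng.binomial(1, 1 / (1 + np.exp(-Z[:, 0])))    # treatment depends on Z1
Y = 2.0 * T + Z @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

# (b) propensity score model
e_hat = LogisticRegression().fit(Z, T).predict_proba(Z)[:, 1]

# weights that re-weight controls toward the treated population (ATT weights)
w = np.where(T == 1, 1.0, e_hat / (1 - e_hat))

def weighted_smd(z, t, w):
    """Weighted standardized mean difference for one covariate."""
    m1 = np.average(z[t == 1], weights=w[t == 1])
    m0 = np.average(z[t == 0], weights=w[t == 0])
    s = np.sqrt((z[t == 1].var() + z[t == 0].var()) / 2)
    return (m1 - m0) / s

# (c) balance diagnostics: one-dimensional checks, one per confounder
smd = np.array([weighted_smd(Z[:, j], T, w) for j in range(p)])
print("weighted SMDs:", np.round(smd, 3))

# (d) if balance is acceptable, estimate the effect among the treated
att = Y[T == 1].mean() - np.average(Y[T == 0], weights=w[T == 0])
print("ATT estimate:", round(att, 3))
```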
While the results in this paper pertain to propensity-matched analyses, they apply to more general matching strategies as well. ### Previous results on covariate balance In terms of covariate balance, a major class of theoretical results come from work on _equal percent bias reduction_ procedures (Rubin and Thomas, 1992, 1996). Equal percent bias reduction means that a certain type of covariate matching will reduce bias in all dimensions of \(Z\) by the same amount. Define a matching method to be affinely invariant if the matching procedure is invariant to affine transformations of the covariates. If \(Z\) given \(T\) is assumed to have a so-called elliptically symmetric distribution, then Theorem 3.1. and Corollaries 3.1. and 3.2 of Rubin and Thomas (1992) apply so that any affinely invariant matching method will be equal percent bias reducing. Examples of elliptically symmetric distributions include the multivariate normal and t distributions. While elliptical symmetry of the confounders given treatment group is a restrictive assumption, this was relaxed in more recent work by Rubin et al. (2006). There, they assumed that the conditional distribution of \(Z\) given \(T\) is a discriminant mixture of elliptically symmetric distributions. Rubin et al. (2006) prove that a generalization of equal percent bias reducing holds for this setup as well. Thus, for equal percent bias reducing methods, we have a guarantee that attempting to increase balance in one variable will not lead to distortions in balance for other variables. However, the assumptions needed for equal percent bias reducing to hold seem restrictive in practice. Iacus et al. (2011) took another approach by focusing on in-sample covariate discrepancies and requiring that the maximum discrepancy in sample means between treated and control subjects be bounded above by a constant. They generalize this to arbitrary functions of the data, which they term imbalance bounding and define monotonic imbalance bounding matching methods to be those in which the discrepancies between a monotonic function applied to a variable is bounded above by a confounder-specific term. Thus, one can be more stringent in the balance in variable without impacting the maximal imbalance across all confounders. There are many important implications of requiring the monotonic imbalance bounding property. First, many methods of confounder adjustment, such as nearest-neighbor or caliper matching as defined in Cochran and Rubin (1973), are not monotonic imbalance bounding because they fix the number of treated and control observations within strata, while monotonic imbalance bounding methods imply variable numbers of observations. By contrast, if the caliper matching procedure were to allow for different calipers for each confounder, then this would be monotonic imbalance bounding. Iacus et al. (2011) also show that a key goal in causal effect estimation is to reduce model dependence (Ho et al., 2007), meaning that there should not be extrapolation of potential outcomes to regions in the covariate space where there are no observations. Under some assumptions on the model for potential outcomes, they show that for monotonic imbalance bounding methods, the model dependence is upper bounded by terms involving an imbalance parameter. In addition, the estimation error for average causal effects using monotonic imbalance bounding matching methods can also be upper bounded by terms involving this parameter. 
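The defining feature of a monotonic imbalance bounding procedure is that the post-matching discrepancy for each covariate is controlled by its own tolerance, so tightening the tolerance for one variable does not disturb the guarantees for the others. The short sketch below is not part of any of the cited algorithms; it is only an illustrative check of such per-covariate bounds on two hypothetical matched groups.

```python
# Schematic check of per-covariate imbalance bounds: each covariate j has its
# own tolerance pi_j, and only the mean discrepancy for that covariate is
# compared against it.  Matched samples and tolerances here are made up.
import numpy as np

def max_discrepancies(Z_treated, Z_control):
    """Absolute differences in covariate means between matched groups."""
    return np.abs(Z_treated.mean(axis=0) - Z_control.mean(axis=0))

def satisfies_bounds(Z_treated, Z_control, pi):
    """Check the per-covariate bounds |mean diff_j| <= pi_j."""
    return np.all(max_discrepancies(Z_treated, Z_control) <= pi)

rng = np.random.default_rng(1)
Z_t = rng.normal(loc=0.0, size=(50, 4))
Z_c = rng.normal(loc=0.1, size=(80, 4))
pi = np.array([0.2, 0.2, 0.05, 0.2])   # stricter tolerance on covariate 3 only
print(max_discrepancies(Z_t, Z_c), satisfies_bounds(Z_t, Z_c, pi))
```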
As a concrete example of a new monotonic imbalance bounding method, Iacus et al. (2011) propose a coarsened exact matching algorithm for creating strata. It proceeds as follows: 1. For each variable \(Z_{j}\) (\(j=1,\ldots,p\)), coarsen it into a function \(C_{j}(Z_{j})\) which takes on fewer values than the unique values of \(Z_{j}\); 2. Perform exact matching between treated and control observations using the vector \[\left(C_{1}(Z_{1}),C_{2}(Z_{2}),\ldots,C_{p}(Z_{p})\right).\] This effectively creates strata \(\mathcal{S}_{1},\ldots,\mathcal{S}_{J}\) based on the unique combinations of \[\left(C_{1}(Z_{1}),C_{2}(Z_{2}),\ldots,C_{p}(Z_{p})\right).\] 3. Discard strata in which there are only observations with \(T=0\). For strata with only observations from the \(T=1\) population, extrapolate the potential outcome \(Y(0)\) using the available controls or discard by restricting the causal effect of interest on the treated units for which causal effect can be identified without further modelling based assumptions. For strata with both treated and control observations, compare the outcome between the two populations. Iacus et al. (2011) have developed very easy-to-use software packages for implementing coarsened exact matching in R and Stata. They show that the coarsened exact matching approach satisfies the monotonic imbalance bounding property with respect to a variety of functionals of interest. In addition, they provide a very intuitive explanation for what coarsened exact matching attempts to mimic. While classical propensity score approaches attempt to mimic a randomized study, analyses using coarsened exact matching will mimic randomized block designs, where the blocks are by definition predictive of the potential outcomes. It is well-known that in this situation, randomized block designs will yield more efficient estimators (e.g., Box, Hunter and Hunter, 1978). The other approach that has become of recent interest has been to incorporate covariate balance as part of the causal effect estimation process. For example, Imai and Ratkovic (2014) propose using generalized methods of moments for causal effect estimation in which covariate balance is treated as a constraint in the procedure. Chan et al. (2016) propose the use of calibration estimators for causal effect estimation in which covariate balance constraints lead to a constrained Lagrangian dual optimization problem. For these approaches, the authors are able to develop consistency and asymptotic normality results for the causal effect estimators. As described in more detail in Section 3.1, we will be using an integral probability metric to assess covariate balance among the two populations. In Kallus (2020) a similar metric is used. They define such a metric as the target error to be minimized for obtaining optimal weighting coefficients when estimating the _sample average treatment effect on the treated_. While our approaches are complementary, there are several notable differences. First, in Kallus (2020), they use their metric to find weights that correspond to known matching methods. The functions involved in their metric represent the expected relationship between potential outcomes and covariates. In our case, we take any matching procedure and given the measure of match, bound it by the probability metric involving functions representing the matching procedure itself, and provide probability bounds to how good the matching is. 
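Before continuing the comparison with Kallus (2020), the following minimal sketch makes the three-step coarsened exact matching algorithm above concrete. The quantile-based coarsening, bin counts, and variable names are our own illustrative choices (the cem package referenced below implements the full procedure); the stratum weights are the standard CEM weights that reappear in the supplementary material.

```python
# Minimal coarsened exact matching sketch: coarsen each covariate into
# quantile bins, form strata from the coarsened vectors, drop strata without
# both groups, and compare outcomes with the usual CEM weights.
import numpy as np

def cem_att(Z, T, Y, n_bins=4):
    # 1. coarsen each covariate C_j(Z_j) using quantile cut points
    cuts = [np.quantile(Z[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
            for j in range(Z.shape[1])]
    coarsened = np.column_stack([np.digitize(Z[:, j], cuts[j])
                                 for j in range(Z.shape[1])])
    # 2. strata = unique combinations of the coarsened values
    strata = [tuple(row) for row in coarsened]
    counts = {}
    for s, t in zip(strata, T):
        c = counts.setdefault(s, [0, 0])
        c[t] += 1                              # counts[s] = [controls, treated]
    # 3. keep only strata containing both treated and control units
    keep = np.array([counts[s][0] > 0 and counts[s][1] > 0 for s in strata])
    m1 = int((keep & (T == 1)).sum())
    m0 = int((keep & (T == 0)).sum())
    w = np.zeros(len(T))
    w[keep & (T == 1)] = 1.0
    for i in np.where(keep & (T == 0))[0]:
        m0s, m1s = counts[strata[i]]
        w[i] = (m1s / m1) / (m0s / m0)         # standard CEM control weight
    att = (np.average(Y[keep & (T == 1)], weights=w[keep & (T == 1)])
           - np.average(Y[keep & (T == 0)], weights=w[keep & (T == 0)]))
    return w, att

rng = np.random.default_rng(2)
Z = rng.normal(size=(600, 3))
T = rng.binomial(1, 1 / (1 + np.exp(-Z[:, 0])))
Y = 1.0 * T + Z.sum(axis=1) + rng.normal(size=600)
w, att = cem_att(Z, T, Y)
print("matched units:", int((w > 0).sum()), " ATT estimate:", round(att, 2))
```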
In addition, in Kallus (2020), they assume a fixed population and therefore no randomness in covariate values, while our concern indeed focuses on the sample distribution of these covariates. The difference between these two approaches is further explained in Section 2.3. ### Modes of inference and covariate balance In looking at the various proposals for accommodating covariate balance, it is useful to reconsider the ways in which one can perform causal inference. Imbens and Rubin (2015) have a nice overview on the distinction between finite-population and superpopulation modes for causal inference. The finite-population mode of causal inference treats the sampled units as the population of interest. The stochastic nature of the experiment is due solely to the treatment mechanism so that randomness occurs only with respect to the treatment assignments. If one adopts the finite-sample point of view for causal inference, then one can use a randomization-based approach to performing inference for causal effects. By contrast, the superpopulation mode of inference considers two sources of variability. The first is due to the randomness in the treatment assignments, and the second is due to the fact that the sampling units are a random sample from a superpopulation. Thus, this approach posits a superpopulation from which the sampling units come from. Revisiting the previous work from 2.2, the equal percent bias reduction theory and the work of Iacus et al. (2011) posit results about covariate balance assuming a finite-population mode for causal inference. Thus, covariate balance results of these methods will involve subsampling and matching from the sampling units, and the balance occurs with respect to the matched sample. The concept of balance we introduce in the next section can accommodate both modes of inference. ## 3 Main Results ### Ideal Balance In this section, we wish to study covariate balance from the viewpoint of comparing the distributions \(\mathcal{L}(Z\mid T=0)\) and \(\mathcal{L}(Z\mid T=1)\). To do so, we must determine how this comparison is done. We do this by first defining probability pseudometrics. _Definition 3.1_ (Pseudometric).: Let \(\mathcal{A}\) be the set of probability measures defined on a shared measurable space. A function \(m:\mathcal{A}\times\mathcal{A}\rightarrow[0,\infty)\) is a **pseudometric** on \(\mathcal{A}\) if, for all \(\mu\), \(\nu\), \(\lambda\in\mathcal{A}\), the following conditions are satisfied: 1. \(m(\mu,\mu)=0\). 2. \(m(\mu,\nu)=m(\nu,\mu)\). 3. \(m(\mu,\nu)\leq m(\mu,\lambda)+m(\lambda,\nu)\). Note these properties almost make \(m\) a metric on \(\mathcal{A}\), but notably we do not assume that if the distance between two elements is zero, then the two elements are the same. For the purpose of this paper, we will abuse terminology and refer to pseudometrics as metrics. The class of metrics we will work with in this article is given by \[\gamma_{\mathcal{F}}(\mu,\nu)=\sup_{f\in\mathcal{F}}\left|\int fd\mu-\int fd \nu\right|, \tag{3.1}\] where \(\mathcal{F}\) is a class of functions. In (3.1), \(\gamma_{\mathcal{F}}(\mu,\nu)\) is referred to by Zolotarev (1984) as an example of a probability metric. In our notation, we drop the dependency of \(\gamma_{\mathcal{F}}\) on \(\mathcal{F}\) and write it as \(\gamma\). We now define ideal balance as being based on (3.1). 
_Definition 3.2_ (Ideal Balance).: Let \(\mu\) and \(\nu\) be distributions on the same probability space and \(m\) a pseudometric, then we say \(\mu\) and \(\nu\) satisfy **Ideal Balance** with respect to \(m\) if \(m(\mu,\nu)=0\). When \(\mu\) and \(\nu\) are the conditional distributions of the covariates given the treatment group, as in Section 2, ideal balance is a restriction on the population. If these are instead the empirical distributions of the data, ideal balance is a sample restriction. Matching methods, in a sense, intend to achieve ideal balance on the matched data for some \(m\). Note that at this stage, we have only dealt with population distributional laws and have not described how to estimate or compute these quantities with real data. In practice, we would not expect ideal balance to hold in observational studies. However, it does serve as a useful benchmark through which we can study the behavior of various functional constraints. Here, the function spaces \(\mathcal{F}\) in (3.1) play the role of the constraints; more complex function spaces correspond to more constraints on the joint distributions of \(Z|T=1\) and \(Z|T=0\). ### A Concentration Inequality Result Let \(\mathcal{F}\) be a function space and \(\|\cdot\|\) a norm. The covering number \(N(\epsilon,\mathcal{F},\|\cdot\|)\) is the minimum number of \(\|\cdot\|\)-balls of radius \(\epsilon\) needed to cover \(\mathcal{F}\), where a ball centered around \(f\in\mathcal{F}\) is the set \(\{g\mid\|f-g\|\leq\epsilon\}\). Intuitively, one can think of the covering number as a measure of the complexity of the function class \(\mathcal{F}\). For a measure \(\mu\) the norm \(L_{r}(\mu)\)-norm, for \(r\geq 1\), is defined as \(\|f\|_{L_{r}(\mu)}^{r}=\int|f|^{r}d\mu\). Throughout the paper, we will assume \(\mathcal{F}\) is uniformly bounded. Note that if \(\mu\) is any probability measure, and under uniform boundedness, we can endow \(\mathcal{F}\) with the norm \(L_{r}(\mu)\) without dropping any of its elements. Unless otherwise specified, we assume the range of the functions in \(\mathcal{F}\) is \([0,1]\). Finally, for a function class \(\mathcal{F}\), an envelope function of \(\mathcal{F}\) is defined as any function \(h\) such that for all \(f\) in \(\mathcal{F}\), the inequality \[|f(x)|\leq|h(x)|\] is satisfied for any \(x\). Let \(\{Z_{i}\}_{i=1}^{n}\) be a sample where each \(Z_{i}\) has distribution \(Q\). We denote the empirical distribution by \(\mathbb{Q}_{n}\). The \(\mathcal{F}\)-indexed empirical process \(\mathbb{G}_{n}^{Q}\) is defined as the map taking any \(f\in\mathcal{F}\) to \[\mathbb{G}_{n}^{Q}(f)=\sqrt{n}\left(\int fd\mathbb{Q}_{n}-\int fdQ\right)= \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(f(Z_{i})-\int fdQ\right).\] **Theorem 3.3**.: _Let \(\mathbb{Q}_{n_{0}}^{0}\) and \(\mathbb{Q}_{n_{1}}^{1}\) be two empirical distributions of observations sampled from \(Q^{0}\) and \(Q^{1}\), respectively, and assume ideal balance holds for \(Q^{0}\) and \(Q^{1}\) with respect to \(\gamma\). Let \(M\) be the collection of probability measures. 
If there exists constants \(C\) and \(K\) such that \(\mathcal{F}\) satisfies_ \[\sup_{\mu\in M}N(\epsilon,\mathcal{F},\|\cdot\|_{L_{r}(\mu)})\leq\left(\frac{ K}{\epsilon}\right)^{C},\] _for every \(0<\epsilon<C\), then_ \[Pr\{\gamma(\mathbb{Q}_{n_{0}}^{0},\mathbb{Q}_{n_{1}}^{1})>\delta\}\leq\left( \frac{D\delta}{2\sqrt{C}}\right)^{C}\left(n_{0}^{C/2}\exp(-n_{0}\delta^{2}/2)+ n_{1}^{C/2}\exp(-n_{1}\delta^{2}/2)\right), \tag{3.2}\] _where \(D\) is a constant depending on \(K\) only._ The proofs of Theorem 3.3 and subsequent results are found in the supplementary material. Throughout the paper, we will use \(B_{n}(\delta,D,C)\) for the bound in Theorem 3.3, where the subscript \(n\) reminds us of the dependence on the sample size. _Remark 3.4_.: We note that the bound in (3.2) is nonasymptotic and will hold for any sample size. _Remark 3.5_.: In this framework, the function classes play an important role. Theorem 3.3 gives a bound in terms of the entropy number of the function class in question. In particular, low-complexity functions are favored using this approach. A key technical point is ensuring that the covering number condition in the theorem is satisfied. To do so, we will primarily use results from Vapnik-Chervonenkis theory (Chervonenkis and Vapnik, 1971) to determine appropriate covering numbers. In most cases the function classes of interest are not real-valued but vector-valued. The following straightforward results can be used to deal with these cases. **Lemma 3.6**.: _Let \(\{\mathcal{F}_{i}\}_{i=1}^{d}\) be a collection of real-valued function spaces and \((P^{i},Q^{i})\) satisfy ideal balance under \(\gamma_{\mathcal{F}_{i}}\) for each \(1\leq i\leq d\). Let \((\mathbb{P}^{i},\mathbb{Q}^{i})\) denote their respective empirical distributions with implicit sample size dependence. Then_ \[Pr\left(\sum_{i=1}^{d}\gamma_{\mathcal{F}_{i}}(\mathbb{P}^{i},\mathbb{Q}^{i}) >\delta\right)\leq\sum_{i=1}^{d}B(\delta/d,D_{i},C_{i}).\] Now, consider the collection \(\{\mathcal{F}_{i}\}_{i=1}^{d}\), where each \(\mathcal{F}_{i}\) is a real-valued function space. Define \(\mathcal{F}=\{f=(f_{1},\ldots,f_{d})^{T}\mid f_{i}\in\mathcal{F}_{i}\,\text{ for all }\,i\}\). Let \(\pi_{\ell}\) be the \(\ell^{th}\) coordinate projection, that is, for a finite dimensional vector \(x=(x_{1},\ldots,x_{d})\), \(\pi_{\ell}(x)=x_{\ell}\). Finally, define \(\mathcal{F}_{\pi}=\{\pi_{\ell}\circ f\mid f\in\mathcal{F},1\leq\ell\leq d\}\). Note the elements of \(\mathcal{F}_{\pi}\) are real-valued. The following lemma tells us we can either assume \(\mu\) and \(\nu\) satisfy ideal balance with respect to each of \(\gamma_{\mathcal{F}_{i}}\), or that they satisfy ideal balance with respect to \(\gamma_{\mathcal{F}_{\pi}}\). **Lemma 3.7**.: _Let \(\mathcal{F}\), \(\{\mathcal{F}_{i}\}_{i=1}^{d}\), and \(\mathcal{F}_{\pi}\) be as above, and let \(\mu\) and \(\nu\) denote two probability measures. Then the following are equivalent:_ 1. \(\mu\) _and_ \(\nu\) _satisfy ideal balance with respect to_ \(\gamma_{\mathcal{F}_{\pi}}\)_;_ 2. \(\mu\) _and_ \(\nu\) _satisfy ideal balance with respect to each_ \(\gamma_{\mathcal{F}_{i}}\)_,_ \(1\leq i\leq d\)_._ 3. \(\max_{i}\gamma_{\mathcal{F}_{i}}(\nu,\mu)=0\)_._ The following corollary will be very useful: **Corollary 3.8**.: _Let \(\mathcal{F}\) and \(\mathcal{F}_{\pi}\) be as above, and \(\mathcal{F}_{i}=\mathcal{F}^{*}\) for all \(i\). Assume \(\mathcal{F}^{*}\) has polynomial covering number. 
Let \(\{X_{j}^{0}\}_{j=1}^{n_{0}}\sim Q^{0}\) and \(\{X_{j}^{1}\}_{j=1}^{n_{1}}\sim Q^{1}\), where \(Q^{0}\) and \(Q^{1}\) satisfy ideal balance with respect to \(\gamma_{\mathcal{F}_{\pi}}\). Fix \(f^{*}\in\mathcal{F}\), then_ \[Pr\left(\left\|\frac{1}{n_{0}}\sum_{j=1}^{n_{0}}f^{*}(X_{j}^{0})-\frac{1}{n_{ 1}}\sum_{j=1}^{n_{1}}f^{*}(X_{j}^{1})\right\|_{\ell_{p}}>\delta\right)\leq dB( \delta/d^{1/p},D^{*},C^{*}),\] _for finite \(p\geq 1\), and_ \[Pr\left(\left\|\frac{1}{n_{0}}\sum_{j=1}^{n_{0}}f^{*}(X_{j}^{0})-\frac{1}{n_{ 1}}\sum_{j=1}^{n_{1}}f^{*}(X_{j}^{1})\right\|_{\ell_{\infty}}>\delta\right) \leq dB(\delta,D^{*},C^{*}),\] _where \(D^{*},C^{*}\) depend only on \(\mathcal{F}^{*}\)._ _Definition 3.9_ (Vapnik-Chervonenkis Dimension).: The **Vapnik-Chervonenkis** dimension of a function class \(\mathcal{F}\) on an ambient set \(\mathcal{X}\) is the cardinality of the largest subset shattered by \(\mathcal{F}\). A function class \(\mathcal{F}\) shatters a set \(S\in\mathcal{X}\) if for each possible \(0-1\) labeling of the elements of \(S\) there is at least one function \(f\in\mathcal{F}\) that realizes such labeling. A key result we will use is an application of Theorem 2.6.7 of Van Der Vaart and Wellner (1996), which implies that if a function class \(\mathcal{G}\) has finite Vapnik-Chervonenkis dimension \(v\), then \[\sup_{\mu}N(\epsilon,\mathcal{G},L_{2}(\mu))\leq\left(\frac{K}{\epsilon} \right)^{C^{*}},\] where \(C^{*}=2v-2\). ## 4 Examples ### Balance on coarsened function classes Consider coarsened exact matching as described in Iacus et al. (2011). Let \(\mathcal{Z}_{0}=\{Z_{i}^{0}\}_{i=1}^{n_{0}}\) and \(\mathcal{Z}_{1}=\{Z_{j}^{1}\}_{j=1}^{n_{1}}\) be the control and treatment samples, respectively. In coarsened exact matching we create a partition of the sample space and match samples which are found in the same element of the partition, and discard samples in subsets without samples from the opposite group. We are interested in the quantity \[\Delta=\frac{1}{m_{0}}\sum_{i\in M_{0}}w_{i}^{0}Z_{i}^{0}-\frac{1}{m_{1}}\sum_{j \in M_{1}}w_{j}^{1}Z_{j}^{1},\] where \(m_{\ell}\) is the number of matched samples for the \(\ell^{th}\) group, \(M_{\ell}\) is its index set, and \(\{w_{i}^{0},w_{j}^{1}\}_{i\in M_{0},j\in M_{1}}\) are weights. In the supplementary material we describe how to express this matching procedure as a function \(f\) on the variables \(Z_{i}^{0}\) and \(Z_{j}^{1}\). This allows us to express \(\Delta\) in terms of \(f\). We further specify the function space \(\mathcal{F}\) for which \[\|\Delta\|\leq\gamma_{\mathcal{F}}(\mathbb{Q}_{n_{0}}^{0},\mathbb{Q}_{n_{1}}^{ 1})\] holds for an appropriate norm. Using the properties of \(\mathcal{F}\) and provided the bound above, we can derive our results of interest: \[Pr(|\Delta_{k}|\geq\delta)\leq B(\delta,D,C^{*}),\] for a constant \(C^{*}\) and where \(\Delta_{k}\) is the \(k^{th}\) component of \(\Delta\). Similarly, \[Pr(\|\Delta\|_{\ell_{p}}\geq\delta)\leq dB(\delta/d^{1/p},D,C^{*})\] and \[Pr(\|\Delta\|_{\ell_{\infty}}\geq\delta)\leq dB(\delta,D,C^{*}).\] ### Covariate balance on the linear propensity score As discussed in Section 3, there has been a lot of work on developing matching results based on linear discriminant analysis. That is, we assume that \(P(Z\mid T=\ell)\) follows \(N(\mu_{\ell},\Sigma)\). Under this model, the metric for consideration is the \(logit\) of the propensity score (see Stuart (2010)). 
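Anticipating the derivation given in the supplementary material, under this Gaussian model the logit of the propensity score is linear in \(Z\) with direction \(\Sigma^{-1}(\mu_{1}-\mu_{0})\), so the balance statistic reduces to a difference of projected means over the matched groups. The sketch below illustrates this with plug-in estimates; the toy data, the pretend matched set, and the use of the overall sample covariance (rather than a pooled within-group estimate) are our own simplifications.

```python
# Sketch of the linear propensity score balance statistic of Section 4.2:
# under Z | T = l ~ N(mu_l, Sigma), logit e(Z) is linear in Z with direction
# Sigma^{-1}(mu_1 - mu_0), so Delta is a difference of projected means.
import numpy as np

def lda_direction(Z, T):
    mu1, mu0 = Z[T == 1].mean(axis=0), Z[T == 0].mean(axis=0)
    Sigma = np.cov(Z.T)          # overall covariance as a simple plug-in
    return np.linalg.solve(Sigma, mu1 - mu0)

def delta_logit(Z, T, matched):
    """|mean logit difference| over the matched treated and control units."""
    w = lda_direction(Z, T)
    score = Z @ w                # logit e(Z) up to an additive constant,
                                 # which cancels in the mean difference
    return abs(score[matched & (T == 1)].mean()
               - score[matched & (T == 0)].mean())

rng = np.random.default_rng(3)
Z = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 300),
               rng.multivariate_normal([0.5, 0.5], np.eye(2), 300)])
T = np.repeat([0, 1], 300)
matched = np.ones(600, dtype=bool)   # pretend every unit was matched
print("Delta on the matched samples:", round(delta_logit(Z, T, matched), 3))
```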
In the supplementary material we show that the distance \(|logit(e(Z))-logit(e(Z^{\prime}))|\) can be expressed in terms of the linear discriminant analysis hyperplane vector. Indeed, if \(p\) is the dimension of the covariates, we can create a function space \(\mathcal{F}\) derived from hyperplanes and with Vapnik-Chervonenkis dimension \(p+1\) such that \[\Delta=\left|\frac{1}{m_{0}}\sum_{i\in M_{0}}logit(e(Z_{i}))-\frac{1}{m_{1}}\sum_{j\in M_{1}}logit(e(Z_{j}))\right|\leq\gamma_{\mathcal{F}}(\mathbb{Q}_{n_{0}}^{0},\mathbb{Q}_{n_{1}}^{1}),\] allowing us, using Theorem 3.3, to determine the bound of interest: \[Pr\{\Delta>\delta\}\leq B(\delta,D,2p).\]

### Covariate balance using kernels

Many authors (Hazlett, 2016; Wong and Chan, 2018; Zhu et al., 2018) have advocated for the use of kernel methods for matching and evaluating covariate balance. This corresponds to assuming that \(\mathcal{F}\) in (3.1) represents a reproducing kernel Hilbert space. Further details about these function spaces can be found in the supplementary material. To apply Theorem 3.3 to the kernel setting, we note that the linear discriminant analysis construction of Section 4.2 can be extended to the reproducing kernel Hilbert space setting (Baudat and Anouar, 2000). Let \(\mathcal{H}\) be a reproducing kernel Hilbert space and \(\|\cdot\|_{\mathcal{H}}\) the norm associated with it; then a natural metric to consider for a kernelized matching procedure would be \[\Delta_{\mathcal{H}}=\left\|\frac{1}{m_{0}}\sum_{i\in M_{0}}f(Z_{i})-\frac{1}{m_{1}}\sum_{j\in M_{1}}f(Z_{j})\right\|_{\mathcal{H}},\] which represents a functional generalization of \(\Delta\) from Section 4.2, and where \(f\in\mathcal{H}\) is an appropriate function chosen by the user. Then \(\Delta_{\mathcal{H}}\leq\gamma_{\mathcal{F}}(\mathbb{Q}^{0}_{n_{0}},\mathbb{Q}^{1}_{n_{1}})\), and we can use the previous results with a few adjustments. We show in the supplementary material that \[P(\Delta_{\mathcal{H}}>\delta)\leq B(\delta,D,C^{*}),\] where \(C^{*}\) depends on the smoothness properties of \(\mathcal{H}\).

## 5 Practical implementation

So far, we have given theoretical results that describe how algorithms based on various function classes behave under the ideal balance assumption. As noted earlier, the ideal balance definition is strict but permits theoretical characterization of various algorithms. The question then naturally arises as to how to use the theoretical results from the previous sections in practice. Note that one can view the metric in equation (3.1) as a multivariate balance metric, which differentiates it from many other balance metrics in the literature. Zhu et al. (2018) used (3.1), where \(\mathcal{F}\) is a reproducing kernel Hilbert space, as a covariate balance diagnostic. There, they found that in certain situations the diagnostic was more sensitive in detecting covariate imbalances than univariate diagnostics or diagnostics based on the prognostic score (Hansen, 2008). Consider the problem of estimating the average causal effect among the treated. In practice, it is unlikely that ideal balance will hold for the treatment and control populations. That is to say, \(\gamma_{\mathcal{F}}\left(Q^{0},Q^{1}\right)\neq 0\) unless treatment is randomized. Therefore, we would not be able to use Theorem 3.3 directly in an observational study. However, a slight modification can be made for which the analysis remains largely the same.
Let \(w\in\mathcal{W}\subset\mathbb{R}^{n_{0}}\) be a weight vector and define \[\mathbb{Q}^{0}_{w}=\frac{1}{\sum_{i:T_{i}=0}w_{i}}\sum_{i:T_{i}=0}w_{i}\delta_ {X_{i}}.\] The majority of methods in causal inference have as a goal to find appropriate weights \(w\) for which \(\mathbb{Q}^{0}_{w}\) converges to \(Q^{*}\) for some distribution \(Q^{*}\) that indeed satisfies ideal balance with \(Q^{1}\). That is, for which \(\gamma_{\mathcal{F}}\left(Q^{*},Q^{1}\right)=0\). In order for this modification to be feasible, we just need to modify our proof of Theorem 3.3 and include the convergence rates of \(\mathbb{Q}^{0}_{w}\) to \(Q^{*}\), which may change depending on the problem. Having done so, we continue in a parallel manner. Let \(f^{*}\in\mathcal{F}\) represent a matching procedure with balance diagnostic \[\Delta=\left|\int fd\mathbb{Q}^{0}_{w}-\int fd\mathbb{Q}^{1}_{n_{1}}\right|,\] then, by the definition of \(\gamma_{\mathcal{F}}\), \[\Delta\leq\gamma_{\mathcal{F}}\left(\mathbb{Q}^{0}_{w},\mathbb{Q}^{1}_{n_{1} }\right).\] Therefore, if we can find weights for which \(\mathbb{Q}^{0}_{w}\) converges to \(Q^{*}\) and \(\gamma_{\mathcal{F}}(Q^{*},Q^{1})=0\), then we can bound the probability that \(\Delta\) exceeds some threshold \(\delta\). There are many methods for finding \(w\in\mathcal{W}\), the most straightforward being the inverse probability of treatment weights, \[w_{i}=T_{i}+\frac{e(Z_{i})(1-T_{i})}{1-e(Z_{i})}.\] Even heavily prescribed matching algorithms that are found throughout the causal inference literature find some weights \(w\in\mathcal{W}\) as described by Abadie and Imbens (2006). In one-to-one matching with replacement, let \(\mathcal{J}(i)=\{j_{1}(i),j_{2}(i),\ldots\}\) be the set of indices of units that are matched with the unit \(i=1,2,\ldots,n\). If there are no ties, then \(\mathcal{J}(i)=j(i)\). With ties present, which occur frequently especially with exact matching (see coarsened exact matching), \(\mathcal{J}(i)\) might contain multiple matched indices. The matching process will allow us to produce weights for every unit by solving \[w_{i}=\sum_{\{l:T_{l}=1\}}\frac{I[i\in\mathcal{J}(l)]}{\#\mathcal{J}(l)}\text{ for all }i\in\{i:T_{i}=0\}\] where \(\#\mathcal{J}(i)\) denotes the cardinality of \(\mathcal{J}(i)\). ## 6 Simulation Studies We perform a simulation study to evaluate the distribution of the distances reported in Section 4. We also examine their downstream consequences for estimating average treatment effects on the treated. There are two data generating mechanisms that we consider. In addition, we vary the sample size and the variance of the responses for a total of eight scenarios. We replicate each of these scenarios, described below, over 1000 iterations. We report the mean and Monte Carlo standard errors of the three distances (\(\Delta\)) examined in Section 4 (Table 1) along with the kernel density estimates for one representative scenario (Figure 1). We also evaluate the downstream effects of these \(\Delta\) statistics on the average treatment effect using one-to-one matching methods described by Abadie and Imbens (2006) implemented in the Matching package (Sekhon, 2008) (Tables 2 and 6). For \(i=1,2,\ldots,n\), let \(Z_{i1}\sim\mathcal{N}(1,4)\), \(Z_{i2}\sim\text{Bin}(1,0.3)\), \(Z_{i3}\sim\mathcal{N}(0,1)\), and \(Z_{i4}\sim\text{Bin}(1,0.5)\) where \(T_{i}\) denotes the binary treatment assignment. 
The conditional means of the outcomes for the treated, \(\mu_{1}(Z_{i})\), and the controls, \(\mu_{0}(Z_{i})\), are constructed as \[\begin{split}\mu_{0}(Z_{i})&=10-3Z_{i1}-Z_{i2}+Z_{i3}+3Z_{i4}\,\text{ and }\\ \mu_{1}(Z_{i})&=\mu_{0}(Z_{i})+5+3Z_{i1}-Z_{i2}+Z_{i3}-3Z_{i4}.\end{split} \tag{6.1}\] We sample \(T_{i}\) from a \(\text{Bin}(1,0.5)\) distribution. For \(i=1,2,\ldots,n\), we sample the counterfactual responses \(Y_{i}(1)\sim\mathcal{N}[\mu_{1}(Z_{i}),\sigma^{2}]\) and \(Y_{i}(0)\sim\mathcal{N}[\mu_{0}(Z_{i}),\sigma^{2}]\). The observed outcome is \(Y_{i}=T_{i}Y_{i}(1)+(1-T_{i})Y_{i}(0)\). We refer to these conditions with the label "baseline". For the error variance, we set \(\sigma^{2}\in\{5,10\}\). For the scenario labeled "sparse", we include an additional set of covariates that ultimately do not affect the outcome. The outcomes are determined by the potential outcome models in (6.1), yet the methods we consider also account for the noise covariates \(Z_{i5}\sim\mathcal{N}(-1,4)\), \(Z_{i6}\sim\text{Bin}(1,0.7)\), \(Z_{i7}\sim\mathcal{N}(0,1)\), and \(Z_{i8}\sim\text{Bin}(1,0.5)\). As mentioned before, we test the three examples described in Section 4 in their ability to produce efficient, unbiased estimates of the average treatment effect on the treated. Linear discriminant analysis sets \(f\) to be the logit transformation of the fitted posterior probability that each unit receives treatment. The support vector machine examples use the distance of each point from the resulting separating hyperplane, assuming a linear kernel. Coarsened exact matching is performed similarly to what is described in Iacus et al. (2011) and is implemented with the cem R package. Table 1 shows the results of our simulation experiment. Since balance is already achieved through randomization in this simulation, we also report the unmatched, crude estimate of the average causal effect for reference. Here the value \(\Delta\) is the maximum absolute sample mean difference for the unweighted covariates. The values of \(\Delta\) are not necessarily directly comparable in this example. They do, however, represent the distributions whose tail probabilities we bound in Theorem 3.3. The simulation serves to characterize some of the densities of these statistics so that we might better understand which values of \(\delta\) are acceptable for the different balance methods in Section 4. We see that the values of \(\Delta\) after coarsened exact matching were the most heavily concentrated, followed closely by the values generated by linear discriminant analysis. The balance diagnostics from a support vector machine and from an unweighted comparison yielded considerably more dispersed values. One point of direct comparison between the different \(\Delta\) estimates is the downstream effect of the various balancing methods on estimation of the average treatment effect. The purpose of this portion of the simulation study is to show that the concentration of the distribution of \(\Delta\) may have little to do with the actual quality of the average treatment effect estimates, the ultimate result for causal inference. Although the distribution of \(\Delta\) under coarsened exact matching was more concentrated than the densities of \(\Delta\) under linear discriminant analysis and support vector machines, the corresponding estimate of the average treatment effect is also the most biased. Its Monte Carlo standard errors also appear to be greater than those of the other two balance methods.
Linear discriminant analysis also conferred a narrow concentration of \(\Delta\) statistics yet produced the most efficient estimates of the average treatment effect, other than from the unweighted estimate which had the smallest Monte Carlo standard errors. This result is interesting because the unweighted diagnostics had the most dispersed values for \(\Delta\). This leads us to believe that the scale of the \(\Delta\) statistics must be carefully considered while evaluating balance to make some determination on which method is most suitable for evaluating treatment effects. \begin{table} \begin{tabular}{c c c c c c c c} \hline \(n\) & \(\sigma^{2}\) & Scenario & \(\theta\) & A & B & C & D \\ \hline 1000 & 5 & baseline & 6.2 & 0.11 (0.07) & 0.03 (0.02) & 0.02 (0.01) & 0.09 (0.04) \\ 1000 & 5 & sparse & 6.2 & 0.15 (0.07) & 0.01 (0.01) & 0.03 (0.02) & 0.13 (0.05) \\ 1000 & 10 & baseline & 6.2 & 0.12 (0.07) & 0.03 (0.02) & 0.02 (0.01) & 0.09 (0.05) \\ 1000 & 10 & sparse & 6.2 & 0.15 (0.07) & 0.01 (0.01) & 0.03 (0.02) & 0.13 (0.05) \\ 2000 & 5 & baseline & 6.2 & 0.08 (0.05) & 0.02 (0.01) & 0.01 (0.01) & 0.06 (0.03) \\ 2000 & 5 & sparse & 6.2 & 0.11 (0.05) & 0.01 (0.01) & 0.02 (0.01) & 0.09 (0.04) \\ 2000 & 10 & baseline & 6.2 & 0.08 (0.05) & 0.02 (0.01) & 0.01 (0.01) & 0.06 (0.03) \\ 2000 & 10 & sparse & 6.2 & 0.11 (0.05) & 0.01 (0.01) & 0.02 (0.01) & 0.09 (0.04) \\ \hline \end{tabular} \end{table} Table 1: Average and Monte Carlo standard error of \(\Delta\) found in the experiment. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines. Since both A and B create a vector valued \(\Delta\) we report the maximum. \begin{table} \begin{tabular}{c c c c c c c c} \hline \(n\) & \(\sigma^{2}\) & Scenario & \(\theta\) & A & B & C & D \\ \hline 1000 & 5 & baseline & 6.2 & 6.20 (0.33) & 6.24 (0.33) & 6.20 (0.42) & 6.20 (0.36) \\ 1000 & 5 & sparse & 6.2 & 6.20 (0.34) & 6.29 (1.24) & 6.21 (0.45) & 6.20 (0.39) \\ 1000 & 10 & baseline & 6.2 & 6.20 (0.37) & 6.22 (0.40) & 6.20 (0.47) & 6.20 (0.42) \\ 1000 & 10 & sparse & 6.2 & 6.19 (0.35) & 6.31 (1.46) & 6.20 (0.46) & 6.22 (0.42) \\ 2000 & 5 & baseline & 6.2 & 6.19 (0.24) & 6.21 (0.24) & 6.20 (0.29) & 6.20 (0.25) \\ 2000 & 5 & sparse & 6.2 & 6.20 (0.23) & 6.34 (0.71) & 6.21 (0.29) & 6.21 (0.26) \\ 2000 & 10 & baseline & 6.2 & 6.21 (0.25) & 6.21 (0.26) & 6.19 (0.32) & 6.21 (0.28) \\ 2000 & 10 & sparse & 6.2 & 6.21 (0.25) & 6.38 (0.79) & 6.21 (0.31) & 6.21 (0.27) \\ \hline \end{tabular} \end{table} Table 2: Summary of simulation estimates and Monte Carlo standard errors. The simulation scenarios corresponding to ”baseline” and ”sparse” are described in further detail in Section 6. Here, \(\theta\) refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C is linear discriminant analysis, and Method D is support vector machines. ## Acknowledgments The authors would like to acknowledge funding support from the following sources: the National Institutes of Health, the National Science Foundation, the Veterans Administration and the Grohne Figure 1: Kernel Densities of the \(\Delta\) balancing statistics for the baseline scenario with \(n=1000\) and \(\sigma^{2}=10\). 
The solid line is the distribution from the unweighted estimates, the dashed line is the distribution for coarsened exact matching, the dotted line is the distribution for the linear propensity score, and the dotted-dashed line for the support vector machine examples. \begin{table} \begin{tabular}{c c c c c c c c} \hline \(n\) & \(\sigma^{2}\) & Scenario & \(\theta\) & A & B & C & D \\ \hline 1000 & 5 & baseline & 6.2 & 0.952 & 0.937 & 0.941 & 0.929 \\ 1000 & 5 & sparse & 6.2 & 0.944 & 0.955 & 0.934 & 0.917 \\ 1000 & 10 & baseline & 6.2 & 0.941 & 0.918 & 0.935 & 0.912 \\ 1000 & 10 & sparse & 6.2 & 0.955 & 0.950 & 0.951 & 0.931 \\ 2000 & 5 & baseline & 6.2 & 0.931 & 0.945 & 0.937 & 0.923 \\ 2000 & 5 & sparse & 6.2 & 0.956 & 0.945 & 0.939 & 0.918 \\ 2000 & 10 & baseline & 6.2 & 0.959 & 0.936 & 0.926 & 0.928 \\ 2000 & 10 & sparse & 6.2 & 0.953 & 0.946 & 0.948 & 0.935 \\ \hline \end{tabular} \end{table} Table 3: Summary of coverage probabilities from the simulation experiment. The simulation scenarios corresponding to ”baseline”, ”interaction”, ”positivity”, and ”sparse” are described in further detail in Section 6. Here, \(\theta\) refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines. ## Appendix ### Proof of theorem 3.3 We will use \(P\) and \(Q\) instead of \(Q^{0}\) and \(Q^{1}\) to ease symbolic burden on the reader. Proof.: By definition of \(\gamma\): \[\gamma(\mathbb{P}_{n_{0}},\mathbb{Q}_{n_{1}}) = \sup_{f\in\mathcal{F}}\left|\int fd\mathbb{P}_{n_{0}}-\int fd \mathbb{Q}_{n_{1}}\right|\] \[= \sup_{f\in\mathcal{F}}\left|\int fd\mathbb{P}_{n_{0}}\pm\int fdP \pm\int fdQ-\int fd\mathbb{Q}_{n_{1}}\right|\] \[\leq \sup_{f\in\mathcal{F}}\left|\int fd\mathbb{P}_{n_{0}}-\int fdP- \int fd\mathbb{Q}_{n_{1}}+\int fdQ\right|+\sup_{f\in\mathcal{F}}\left|\int fdP- \int fdQ\right|\] \[= \sup_{f\in\mathcal{F}}\left|\int fd\mathbb{P}_{n_{0}}-\int fdP- \int fd\mathbb{Q}_{n_{1}}+\int fdQ\right|,\] since \(\gamma(P,Q)=0\). Using elementary probability arguments, we have \[Pr\{\gamma(\mathbb{P}_{n_{0}},\mathbb{Q}_{n_{1}})>\delta\} = Pr\left(\sup_{f\in\mathcal{F}}\left|\int fd\mathbb{P}_{n_{0}}- \int fdP-\int fd\mathbb{Q}_{n_{1}}+\int fdQ\right|>\delta\right)\] \[= Pr\left(\sup_{f\in\mathcal{F}}\left|\frac{1}{\sqrt{n_{0}}} \mathbb{G}_{n_{0}}^{P}(f)-\frac{1}{\sqrt{n_{1}}}\mathbb{G}_{n_{1}}^{Q}(f) \right|>\delta\right)\] \[\leq Pr\left(\sup_{f\in\mathcal{F}}|\mathbb{G}_{n_{0}}^{P}(f)|>\sqrt{ n_{0}}\delta/2\right)+Pr\left(\sup_{f\in\mathcal{F}}|\mathbb{G}_{n_{1}}^{Q}(f)|> \sqrt{n_{1}}\delta/2\right),\] where \(\mathbb{G}_{n_{0}}^{P}(f)\) and \(\mathbb{G}_{n_{1}}^{Q}(f)\) represent the \(\mathcal{F}\)-indexed empirical processes of \(P\) and \(Q\), respectively. Applying Theorem 2.14.9 in Van Der Vaart and Wellner (1996), we can bound each of the terms as follows: \[Pr\left(\sup_{f\in\mathcal{F}}|\mathbb{G}_{n_{0}}^{P}(f)|>\sqrt{n_{0}}\delta/ 2\right)<\left(\frac{D\sqrt{n_{0}}\delta}{2\sqrt{C}}\right)^{C}\exp(-n_{0} \delta^{2}/2)\] \[Pr\left(\sup_{f\in\mathcal{F}}|\mathbb{G}_{n_{1}}^{Q}(f)|>\sqrt{n_{1}}\delta/ 2\right)<\left(\frac{D\sqrt{n_{1}}\delta}{2\sqrt{C}}\right)^{C}\exp(-n_{1} \delta^{2}/2),\] where \(D\) is a constant depending only on \(K\). Plugging these two bounds into (6.2) concludes the proof. ### Proof of Lemma 3.6 Proof.: Define \(\gamma_{i}=\gamma_{\mathcal{F}_{i}}(\mathbb{P}^{i},\mathbb{Q}^{i})\). 
Then: \[Pr\left(\sum_{i}\gamma_{i}>\delta\right) =1-Pr\left(\sum_{i}\gamma_{i}<\delta\right)\] \[\leq 1-Pr(\gamma_{i}<\delta/d\;\;\forall i)\] \[=Pr(\exists\;\;i\ni\gamma_{i}>\delta/d)\] \[\leq\sum_{i}Pr(\gamma_{i}>\delta/d)\] \[\leq\sum_{i}B(\delta/d,D_{i},C_{i}),\] where we have used the union bound in the second inequality. ### Proof of Lemma 3.7 Proof.: Assume \(\gamma_{\mathcal{F}_{i}}(\mu,\nu)=0\) for all \(i\). Then \[\gamma_{\mathcal{F}_{\pi}}(\mu,\nu) =\sup_{f^{\pi}\in\mathcal{F}_{\pi}}\left|\int f^{\pi}d\mu-\int f^{ \pi}d\nu\right|\] \[=\max_{\ell}\sup_{f\in\mathcal{F}}\left|\int\pi_{\ell}\circ fd\mu- \int\pi_{\ell}\circ fd\nu\right|\] \[=\max_{\ell}\sup_{f\in\mathcal{F}}\left|\int f_{\ell}d\mu-\int f _{\ell}d\nu\right|\] \[=\max_{\ell}\sup_{f_{\ell}\in\mathcal{F}_{\ell}}\left|\int f_{ \ell}d\mu-\int f_{\ell}d\nu\right|\] \[=\max_{\ell}\gamma_{\mathcal{F}_{\ell}}(\mu,\nu)=0.\] Conversely, assuming \(\gamma_{\mathcal{F}_{\pi}}(\mu,\nu)=0\) yields \[\gamma_{\mathcal{F}_{i}}(\mu,\nu) =\sup_{f_{\ell}\in\mathcal{F}_{\ell}}\left|\int f_{\ell}d\mu-\int f _{\ell}d\nu\right|\] \[=\sup_{f\in\mathcal{F}}\left|\int\pi_{\ell}\circ fd\mu-\int\pi_{ \ell}\circ fd\nu\right|\] \[\leq\max_{\ell}\sup_{f\in\mathcal{F}}\left|\int\pi_{\ell}\circ fd \mu-\int\pi_{\ell}\circ fd\nu\right|\] \[=\gamma_{\mathcal{F}_{\pi}}(\mu,\nu)=0.\] This proves the first two equivalences. The third one is a byproduct of the proof. ### Proof of Corollary 3.8 Proof.: To avoid cumbersome notation, let \(v=\frac{1}{n_{0}}\sum_{j=1}^{n_{0}}f^{*}(X_{j}^{0})-\frac{1}{n_{1}}\sum_{j=1}^{n_{1 }}f^{*}(X_{j}^{1})\) and note \(v_{\ell}=\frac{1}{n_{0}}\sum_{j=1}^{n_{0}}f^{*}_{\ell}(X_{j}^{0})-\frac{1}{n_{1 }}\sum_{j=1}^{n_{1}}f^{*}_{\ell}(X_{j}^{1})\), then: \[Pr\left(\left\|v\right\|_{\ell_{p}}>\delta\right) =Pr\left(\left\|v\right\|_{\ell_{p}}^{p}>\delta^{p}\right)\] \[=Pr\left(\sum_{\ell}\left|v_{\ell}\right|^{p}>\delta^{p}\right)\] \[\leq Pr\left(\sum_{\ell}\gamma_{\mathcal{F}_{\ell}}(\mathbb{Q}_{n _{0}}^{0},\mathbb{Q}_{n_{1}}^{1})^{p}>\delta^{p}\right)\] \[\leq\sum_{\ell}Pr\left(\gamma_{\mathcal{F}_{\ell}}(\mathbb{Q}_{n _{0}}^{0},\mathbb{Q}_{n_{1}}^{1})^{p}>\delta^{p}/d\right)\] \[=\sum_{\ell}Pr\left(\gamma_{\mathcal{F}_{\ell}}(\mathbb{Q}_{n_{0} }^{0},\mathbb{Q}_{n_{1}}^{1})>\delta/d^{1/p}\right)\] \[\leq\sum_{\ell}B(\delta/d^{1/p},D^{*},C^{*})=dB(\delta/d^{1/p},D^ {*},C^{*}),\] where the second and third inequalities follow from a slight variation of Lemma 3.6 and application of Lemma 3.7. For the \(\ell_{\infty}\) case we have: \[Pr\left(\left\|v\right\|_{\ell_{\infty}}>\delta\right) \leq Pr\left(\max_{\ell}\left|\gamma_{\ell}\right|>\delta\right)\] \[\leq\sum_{\ell}B(\delta,D^{*},C^{*}),\] concluding the proof. ### Balance for coarsening functions We will show the coarsened exact matching procedure belongs to a class of functions with tractable Vapnik-Chervonenkis dimension. Consider the set \(\mathcal{S}\) of partitions with a fixed number of elements \(R\). For a given partition \(S\in\mathcal{S}\), such that \(S=\{s_{1},\ldots,s_{R}\}\) define \(f^{k\alpha}_{S}\) to be: \[f^{k\alpha}_{S}(x)=\sum_{i=1}^{R}k_{i}\alpha_{i}\chi_{s_{i}}(x),\] where \(k_{i}\leq k\) for \(k\) a constant, \(\chi_{s_{i}}\) is the indicator function of \(s_{i}\), and \(\alpha:=(\alpha_{1},\ldots,\alpha_{R})\) is a binary vector, this is, \(\alpha_{i}\in\{0,1\}\) for each \(i\). In words, if \(x\) is found in \(s_{i}\), \(f\) will return a scaled version of \(x\) if \(\alpha_{i}\) is \(1\) and zero otherwise. 
Now let \(\mathcal{F}:=\{f^{k\alpha}_{S}\}_{S\in\mathcal{S},\alpha\in A,k\leq\kappa}\), where \(A\) is the set of all binary vectors of size \(R\) and \(\kappa\in\mathbb{R}\). Hence, the coarsened exact matching procedure belongs to this class of functions, since in that case \(\alpha_{i}\) indicates whether stratum \(s_{i}\) contains units from both groups. For any sample point \(x\), the weights are usually chosen in the following manner: if \(x\) is a treated unit, \(w^{1}_{i}=1\); otherwise, \(w^{0}_{i}=(m^{s}_{1}/m_{1})/(m^{s}_{0}/m_{0})\), where \(s\) is the stratum \(x\) belongs to. Letting \(k_{i}=w_{i}^{\ell}n_{\ell}/m_{\ell}\) weighs the matched samples appropriately. We just need to add the mild assumption that the ratio of sample size to matched size per stratum \(s\) does not grow faster than \(\sqrt{\kappa}\), that is, \(n_{\ell}/m^{s}_{\ell}\leq\sqrt{\kappa}\) for all \(s\in S\), because in that case \(w^{0}_{i}\leq m_{0}/m^{s}_{0}\leq n_{0}/m^{s}_{0}\leq\sqrt{\kappa}\) and \(n_{\ell}/m_{\ell}\leq\sqrt{\kappa}m^{s}_{\ell}/m_{\ell}\leq\sqrt{\kappa}\), so \(k_{i}\leq\kappa\). Finally, notice that any similar function with a smaller partition size can be expressed by a function in \(\mathcal{F}\), so we can consider a variable partition size as long as it does not exceed a reasonable bound \(R\). For any set of points of size \(R\) there is a partition \(S\) that places each point in a different element, and therefore an \(\alpha\) that can assign each point arbitrarily to either \(0\) or \(1\). So \(\mathcal{F}\) shatters such a set. However, if we add an extra point, then, since the partition size is constrained, it would have to share a partition element with a previous point, and hence share that point's assignment under any \(f_{S}^{k\alpha}\). So the Vapnik-Chervonenkis dimension of \(\mathcal{F}\) is \(R\). Finally, let \(g(\mathcal{Z}_{\ell})=\mathbb{Q}_{n_{\ell}}^{\ell}\), where \(\mathbb{Q}_{n_{\ell}}^{\ell}\) is the empirical distribution of the sample \(\mathcal{Z}_{\ell}\) for group \(\ell\). Let \(k^{*}\) be chosen as above and let \((S^{*},\alpha^{*})\) be the particular partition and binary vector used for coarsened exact matching. Then, for the \(\ell^{th}\) component we get: \[\begin{split}\left|\frac{1}{m_{0}}\sum_{i\in M_{0}}w_{i}^{0}Z_{i,\ell}^{0}-\frac{1}{m_{1}}\sum_{j\in M_{1}}w_{j}^{1}Z_{j,\ell}^{1}\right|&=\left|\frac{1}{n_{0}}\sum_{i=1}^{n_{0}}f_{S^{*},\ell}^{k^{*}\alpha^{*}}(Z_{i}^{0})-\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}f_{S^{*},\ell}^{k^{*}\alpha^{*}}(Z_{j}^{1})\right|\\ &\leq\sup_{f_{\ell}\in\mathcal{F}^{*}}\left|\frac{1}{n_{0}}\sum_{i=1}^{n_{0}}f_{\ell}(Z_{i}^{0})-\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}f_{\ell}(Z_{j}^{1})\right|\\ &=\gamma_{\mathcal{F}^{*}}(\mathbb{Q}_{n_{0}}^{0},\mathbb{Q}_{n_{1}}^{1})=\gamma_{\mathcal{F}^{*}}(g(\mathcal{Z}_{0}),g(\mathcal{Z}_{1})).\end{split}\] Thus, the per-dimension discrepancy among the matched samples is bounded by the \(\gamma_{\mathcal{F}^{*}}\) distance of the unmatched samples. Finally, the function \(h(x):=\kappa x\) is an envelope function of \(\mathcal{F}\) and has norm \(\|h\|_{L_{2}(\mu)}<\infty\) as long as we assume a compact domain, which is reasonable in most coarsened exact matching applications. Then, by Theorem 2.6.7 of Van Der Vaart and Wellner (1996): \[\sup_{\mu}N(\epsilon,\mathcal{F},L_{2}(\mu))\leq\left(\frac{K}{\epsilon}\right)^{C^{*}},\] for some constant \(K\) and where \(C^{*}=2(R-1)\).
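As a sanity check on the displayed identity, the sketch below builds toy strata and the CEM weights defined above and verifies numerically that the weighted mean difference over the matched samples coincides with the plain difference of empirical means of the coarsening functions, interpreting \(f_{S}^{k\alpha}(x)\) as returning the scaled value \(k_{i}x\) on an active stratum (per the verbal description) and using the scaling \(k_{i}=w_{i}^{\ell}n_{\ell}/m_{\ell}\). The data and stratum labels are arbitrary.

```python
# Numeric check of the identity above: the weighted mean difference over the
# matched CEM samples equals a plain difference of empirical means of the
# coarsening functions applied to *all* samples.  Toy data and strata only.
import numpy as np

rng = np.random.default_rng(4)
n0, n1, R = 120, 80, 5
Z0, Z1 = rng.normal(size=n0), rng.normal(size=n1)
s0, s1 = rng.integers(R, size=n0), rng.integers(R, size=n1)  # stratum labels

# alpha_r = 1 if stratum r contains units from both groups
alpha = np.array([np.any(s0 == r) and np.any(s1 == r) for r in range(R)])
m0, m1 = alpha[s0].sum(), alpha[s1].sum()       # matched sample sizes

# CEM weights: w^1 = 1 for matched treated, w^0 = (m1_s/m1)/(m0_s/m0)
m0_s = np.array([(s0 == r).sum() for r in range(R)])
m1_s = np.array([(s1 == r).sum() for r in range(R)])
w0 = np.where(alpha[s0], (m1_s[s0] / m1) / (m0_s[s0] / m0), 0.0)
w1 = np.where(alpha[s1], 1.0, 0.0)

# left-hand side: weighted difference over the matched samples only
lhs = (w0 * Z0).sum() / m0 - (w1 * Z1).sum() / m1

# right-hand side: plain empirical means of f(z) = k * alpha * z per group,
# with the scaling k = w * n_l / m_l from the text
f0 = np.where(alpha[s0], w0 * n0 / m0, 0.0) * Z0
f1 = np.where(alpha[s1], w1 * n1 / m1, 0.0) * Z1
rhs = f0.sum() / n0 - f1.sum() / n1

print(np.isclose(lhs, rhs))   # True: the two expressions coincide
```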
This leads us to our final result: Assume ideal balance on the population probabilities holds for \(\gamma_{\mathcal{F}_{\pi}}\), then, for the \(\ell^{th}\) component we have: \[Pr\left(\left|\frac{1}{m_{0}}\sum_{i\in M_{0}}w_{i}^{0}Z_{i,\ell}^{0}-\frac{1 }{m_{1}}\sum_{j\in M_{1}}w_{j}^{1}Z_{j,\ell}^{1}\right|>\delta\right)\leq B( \delta,D,C^{*}).\] If we are interested in the \(\ell_{p}\) norm of the full vector instead, then, by Corollary 3.8: \[Pr\left\{\left\|\frac{1}{m_{0}}\sum_{i\in M_{0}}w_{i}^{0}Z_{i}^{0}-\frac{1}{m _{1}}\sum_{j\in M_{1}}w_{j}^{1}Z_{j}^{1}\right\|_{\ell_{p}}>\delta\right\} \leq dB(\delta/d^{1/p},D,C^{*}),\] for finite \(p\geq 1\). While \[Pr\left\{\left\|\frac{1}{m_{0}}\sum_{i\in M_{0}}w_{i}^{0}Z_{i}^{0}-\frac{1}{m _{1}}\sum_{j\in M_{1}}w_{j}^{1}Z_{j}^{1}\right\|_{\ell_{\infty}}>\delta\right\} \leq dB(\delta,D,C^{*}).\] ### Balance using propensity scores Recall \(e(Z)=P(T=1\mid Z)\), and that we are assuming \(Z\mid T=\ell\sim N(\mu_{\ell},\Sigma)\). Let \(p_{\ell}\) be the probability density function of \(N(\mu_{\ell},\Sigma)\), that is, the gaussian density, then by the density version of Bayes' Theorem we have \[p(T=1\mid Z=z)=\frac{p_{1}P(T=1)}{p_{1}P(T=1)+p_{0}P(T=0)}.\] Therefore, we can express the _logit_ of \(e(Z)\) as \[logit(e(Z))=\log\left(\frac{e(Z)}{1-e(Z)}\right)=\log\left(\frac{p_{1}P(T=1)}{p_{ 0}P(T=0)}\right).\] Now define \(L_{k}:=logit(e(Z_{k}))\), then the matching procedure is based on the difference \(|L_{i}-L_{j}|\). Given the above computation and after a few straightforward steps we get \[|L_{i}-L_{j}| =\left|(\mu_{1}-\mu_{0})^{T}\Sigma^{-1}(Z_{i}-Z_{j})\right|\] \[=\left|f^{*}(Z_{i})-f^{*}(Z_{j})\right|,\] where \(f^{*}(x)=w^{T}x\) for \(w\in\mathbb{R}^{p}\). Notice the vector \(w\) is the same as the one used for linear discriminant analysis so, adding an offset parameter, it will be useful to think of \(f^{*}\) as a hyperplane. Let \(M_{0}^{j}\) be the control units assigned to treatment unit \(j\). We make the assumption that there is a fixed number of assigned controls to each treatment, and so \(m_{0}=|M_{0}^{j}|m_{1}\). Then \[\Delta :=\left|\frac{1}{m_{1}}\sum_{j\in M_{1}}logit(e_{j})-\frac{1}{m_{ 0}}\sum_{i\in M_{0}}logit(e_{i})\right|\] \[=\left|\frac{1}{m_{1}}\sum_{j\in M_{1}}L_{j}-\sum_{j\in M_{1}} \frac{1}{m_{0}}\sum_{i\in M_{0}^{j}}L_{i}\right|\] \[=\left|\sum_{j\in M_{1}}\left(\frac{1}{m_{1}}L_{j}-\frac{1}{m_{ 0}}\sum_{i\in M_{0}^{j}}L_{i}\right)\right|\] \[=\left|\sum_{j\in M_{1}}\left(\frac{1}{m_{1}}\sum_{i\in M_{0}^{j }}\frac{L_{j}}{|M_{0}^{j}|}-\frac{1}{m_{0}}\sum_{i\in M_{0}^{j}}L_{i}\right)\right|\] \[=\left|\sum_{j\in M_{1}}\sum_{i\in M_{0}^{j}}\left(\frac{L_{j}}{m _{1}|M_{0}^{j}|}-\frac{L_{i}}{m_{0}}\right)\right|\] \[=\left|\sum_{j\in M_{1}}\sum_{i\in M_{0}^{j}}\frac{1}{m_{0}} \left(L_{j}-L_{i}\right)\right|\] \[=\left|\sum_{j\in M_{1}}\sum_{i\in M_{0}^{j}}\frac{1}{m_{0}} \left(f^{*}(Z_{j})-f^{*}(Z_{i})\right)\right|\] \[=\left|\frac{1}{m_{1}}\sum_{j\in M_{1}}f^{*}(Z_{j})-\frac{1}{m_{ 0}}\sum_{i\in M_{0}}f^{*}(Z_{i})\right|.\] That is, we can express the difference of means of _logit_s in terms of the difference of means of the discriminant functions. Let \(p\) be the dimension of the covariates, and let \(\mathcal{F}\) be the collection of \(p\)-dimensional hyperplanes, notice \(f^{*}\in\mathcal{F}\). The Vapnik-Chervonenkis dimension of \(\mathcal{F}\) is known to be \(p+1\)(Mohri et al., 2018). We would like to bound \(\Delta\) in terms of \(\gamma\) but we first need some adjustments to \(f^{*}\). 
The matching procedure determines a set \(\mathcal{Z}_{M}=\{Z_{k}\mid k\in M\}\) of matched samples, where \(M=M_{0}\cup M_{1}\). By the Gaussian assumption the \(Z\)s are sampled from a Gaussian mixture so the probability of two sample points being the same is zero. Hence there is an \(\epsilon>0\) such that for all \(k\in M\), \(\mathcal{Z}\cap B_{\epsilon}(Z_{k})=\{Z_{k}\}\), that is, each \(\epsilon\) ball centered around a matched sample does not contain any other sample point (here \(\mathcal{Z}\) is the sample set). Let \(S_{\epsilon}=\cup_{k}B_{\epsilon}(Z_{k})\). Note \(S_{\epsilon}\) is a measurable set. Let \(\beta_{S_{\epsilon}}(x):=x\chi^{S_{\epsilon}}(x)\), this function maps points to zero if unmatched and to themselves if matched. Furthermore, let \(\beta_{\ell}(x):=\frac{m_{\ell}}{n_{\ell}}\chi^{M_{\ell}}(x)+\chi^{M^{C}_{ \ell}}(\text{x})\), for \(\ell\in\{0,1\}\). Each \(\beta_{\ell}\) scales elements in \(M_{\ell}\) by the factor \(\frac{m_{\ell}}{n_{\ell}}\) and leaves the rest untouched. Notice \(f^{*}_{M}:=f^{*}\circ\beta_{1}\circ\beta_{0}\circ\beta^{S_{\epsilon}}\) sends \(Z_{k}\) to \(\frac{m_{\ell}}{n_{\ell}}w^{T}Z_{k}\) if \(k\in M_{k}\) and to \(0\) otherwise. Then we can express \(\Delta\) as \[\Delta =\left|\frac{1}{m_{1}}\sum_{j\in M_{1}}f^{*}(Z_{j})-\frac{1}{m_{0 }}\sum_{i\in M_{0}}f^{*}(Z_{i})\right|\] \[=\left|\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}f^{*}_{M}(Z_{j})-\frac{1} {n_{0}}\sum_{i=1}^{n_{0}}f^{*}_{M}(Z_{i})\right|.\] Now, consider the set \(\mathcal{F}_{M}:=\{f\circ\beta_{1}\circ\beta_{0}\circ\beta_{S}|f\in\mathcal{F},S\in\Sigma\}\), where \(\Sigma\) is the set of measurable sets according to the distribution of the \(Z\)s. The Vapnik-Chervonenkis dimension for \(\mathcal{F}_{M}\) is the same as that of \(\mathcal{F}\), that is, \(p+1\). To see this we notice that the standard derivation for the hyperplane case involves shattering the standard basis \(\mathcal{B}\) in \(\mathbb{R}^{p}\). With probability one, no sample point will equal a standard basis vector, so there is an \(\epsilon^{\prime}>0\) for which we can create a set \(s=\cup_{x\in\mathcal{B}}B_{\epsilon^{\prime}}(x)\) such that \(s\in\Sigma\) and no sample point is in \(s\). Considering the functions \(\{f_{\nu}\}\) in \(\mathcal{F}\) used to shatter \(\mathcal{B}\) and using \(s\), we can use the functions \(\{f_{\nu}\circ\beta_{1}\circ\beta_{0}\circ\beta_{S}\}\) in \(\mathcal{F}_{M}\) to also shatter \(\mathcal{B}\). So the Vapnik-Chervonenkis dimension is at least \(p+1\). Since the functions \(\beta_{1}\), \(\beta_{0}\), and \(\beta^{S}\) are either zero or a scaled identity, we don't get any complexity and the dimension is no larger than \(p+1\), so it is indeed \(p+1\). For the envelope function, we can choose \(h(x)=<w_{e},x>\). The norm of \(w_{e}\) must be large enough to keep a \(p+1\) Vapnik-Chervonenkis dimension. Since the vectors used to ensure such a dimension have norm \(p+1\), the norm of \(w_{e}\) must be at least \(p+1\). So we can choose any large constant \(C>p+1\). Since we are interested in vectors of the form \(w=\Sigma^{-1}\Delta\mu\), we have \(\|w\|\leq\|S^{-1}\|_{F}\|\Delta\mu\|_{2}\), so the user has to choose constants that bound each of these norms. Also, we must assume the covariates themselves are bounded, this ensures a finite norm for \(h\). 
Finally, we have \[\Delta =\left|\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}f^{*}_{M}(Z_{j})-\frac{1 }{n_{0}}\sum_{i=1}^{n_{0}}f^{*}_{M}(Z_{i})\right|\] \[\leq\sup_{f\in\mathcal{F}_{M}}\left|\frac{1}{n_{1}}\sum_{j=1}^{n_ {1}}f(Z_{j})-\frac{1}{n_{0}}\sum_{i=0}^{n_{0}}f(Z_{i})\right|\] \[=\gamma_{\mathcal{F}_{M}}(\mathbb{Q}^{0}_{n_{0}},\mathbb{Q}^{1}_{ n_{1}}).\] Assuming Ideal Balance on the population probabilities, and applying Theorem 2.6.7 of Van Der Vaart and Wellner (1996) in conjunction with Theorem 3.3, yields \[Pr\{\Delta>\delta\}\leq B(\delta,D,2p).\] ### Covering number bound for Reproducing Kernel Hilbert Spaces We refer the reader to Wahba (1990); Berlinet and Thomas-Agnan (2011); Steinwart and Christmann (2008) for nice overviews on reproducing kernel Hilbert spaces. Roughly speaking, a mapping \(k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) is said to be the reproducing kernel associated to the reproducing kernel Hilbert space \(\mathcal{H}\) if it satisfies the following properties: (a) \(k(\cdot,x)\in\mathcal{H}\) for any \(x\in\mathcal{X}\); (b) \(f(x)=\langle f,k(\cdot,x)\rangle_{\mathcal{H}}\) for all \(f\in\mathcal{H}\) and \(x\in\mathcal{X}\). Property (b) is commonly referred to as the reproducing property. To apply Theorem 3.3 to the reproducing kernel case, we will need to directly bound the covering number based on arguments different from Vapnik-Chervonenkis theory. Define the space \[\mathcal{H}_{q}^{m}(\mathbb{R}^{p})=\{f\in L_{q}(\mathbb{R}^{p})\ |\ D^{j}f\in L_{q}( \mathbb{R}^{p})\ \forall j\in\{1,\ldots,m\};\ \ \|f\|_{q}<\infty\},\] where \[\|f\|_{q}=\sum_{0\leq|\alpha|\leq s}\|D^{\alpha}f\|_{L_{q}}\] and \(D^{\alpha}\) denotes partial derivatives in the sense of distributions. Then as a consequence of Theorem 1 of Nickl and Potscher (2007), if \(m-q/p>0\), then \[N(\epsilon,\mathcal{H},\|\cdot\|_{q})\leq b_{1}\epsilon^{-q},\] while if \(m-q/p<0\), \[N(\epsilon,\mathcal{H},\|\cdot\|_{q})\leq b_{2}\epsilon^{-p/m},\] Based on this result, Theorem 3.3 can then be applied to prove a convergence rate under ideal balance. Note that this does not cover the Gaussian kernel case, because the Gaussian kernel is infinitely differentiable, so the space \(\mathcal{H}_{q}^{m}(\mathbb{R}^{p})\) does not apply. For the reader interested in the Gaussian case, we refer them to the recent paper by Steinwart and Fischer (2020).
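Although the covering-number bound above does not cover the Gaussian kernel, the statistic \(\Delta_{\mathcal{H}}\) of Section 4.3 is itself easy to compute once a kernel is chosen. If, purely for illustration, we take \(f\) to be the canonical feature map \(f(z)=k(\cdot,z)\), then \(\Delta_{\mathcal{H}}\) is the kernel maximum mean discrepancy between the matched groups and can be evaluated with the kernel trick, as in the sketch below; the Gaussian kernel, bandwidth, and toy data are our own choices.

```python
# Illustration of the kernel balance statistic Delta_H when f is the canonical
# feature map f(z) = k(., z): Delta_H is then the RKHS norm of the difference
# of mean embeddings (the kernel maximum mean discrepancy).
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def delta_H(Z_treated, Z_control, bandwidth=1.0):
    """MMD between matched groups, computed via the kernel trick."""
    Ktt = gaussian_kernel(Z_treated, Z_treated, bandwidth)
    Kcc = gaussian_kernel(Z_control, Z_control, bandwidth)
    Ktc = gaussian_kernel(Z_treated, Z_control, bandwidth)
    mmd2 = Ktt.mean() + Kcc.mean() - 2 * Ktc.mean()
    return np.sqrt(max(mmd2, 0.0))

rng = np.random.default_rng(5)
Z_t = rng.normal(0.0, 1.0, size=(100, 2))
Z_c_bad = rng.normal(0.4, 1.0, size=(100, 2))    # imbalanced controls
Z_c_good = rng.normal(0.0, 1.0, size=(100, 2))   # well matched controls
print(round(delta_H(Z_t, Z_c_bad), 3), round(delta_H(Z_t, Z_c_good), 3))
```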
2303.16892
Multi-scale Hierarchical Vision Transformer with Cascaded Attention Decoding for Medical Image Segmentation
Transformers have shown great success in medical image segmentation. However, transformers may exhibit a limited generalization ability due to the underlying single-scale self-attention (SA) mechanism. In this paper, we address this issue by introducing a Multi-scale hiERarchical vIsion Transformer (MERIT) backbone network, which improves the generalizability of the model by computing SA at multiple scales. We also incorporate an attention-based decoder, namely Cascaded Attention Decoding (CASCADE), for further refinement of multi-stage features generated by MERIT. Finally, we introduce an effective multi-stage feature mixing loss aggregation (MUTATION) method for better model training via implicit ensembling. Our experiments on two widely used medical image segmentation benchmarks (i.e., Synapse Multi-organ, ACDC) demonstrate the superior performance of MERIT over state-of-the-art methods. Our MERIT architecture and MUTATION loss aggregation can be used with downstream medical image and semantic segmentation tasks.
Md Mostafijur Rahman, Radu Marculescu
2023-03-29T17:58:40Z
http://arxiv.org/abs/2303.16892v1
Multi-scale Hierarchical Vision Transformer with Cascaded Attention Decoding for Medical Image Segmentation ###### Abstract Transformers have shown great success in medical image segmentation. However, transformers may exhibit a limited generalization ability due to the underlying single-scale self-attention (SA) mechanism. In this paper, we address this issue by introducing a Multi-scale hiERarchical vIsion Transformer (MERIT) backbone network, which improves the generalizability of the model by computing SA at multiple scales. We also incorporate an attention-based decoder, namely Cascaded Attention Decoding (CASCADE), for further refinement of multi-stage features generated by MERIT. Finally, we introduce an effective multi-stage feature mixing loss aggregation (MUTATION) method for better model training via implicit ensembling. Our experiments on two widely used medical image segmentation benchmarks (i.e., Synapse Multi-organ, ACDC) demonstrate the superior performance of MERIT over state-of-the-art methods. Our MERIT architecture and MUTATION loss aggregation can be used with downstream medical image and semantic segmentation tasks. Medical image segmentation, Vision transformer, Multi-scale transformer, Feature-mixing augmentation, Self-attention. ## 1 Introduction Automatic medical image segmentation has become an important step in disease diagnosis nowadays. Since the emergence of UNet (Ronneberger et al., 2015), U-shaped convolutional neural networks (CNNs) (Oktay et al., 2018; Huang et al., 2020; Zhou et al., 2018; Fan et al., 2020) have become de facto methods for medical image segmentation. By producing high-resolution segmentation maps through aggregating multi-stage features via skip connections, UNet variants, such as UNet++ (Zhou et al., 2018) and UNet3Plus (Huang et al., 2020), have shown good performance in medical image segmentation. However, the spatial context of the convolution operation limits the CNN-based methods ability to learn the long-range relations among pixels (Cao et al., 2021). Some works (Chen et al., 2018; Oktay et al., 2018; Fan et al., 2020) try to address this issue by embedding attention mechanisms in the encoder or decoder. Despite the significant efforts made in this direction, the CNN-based methods still have insufficient ability to capture long-range dependencies. With the emergence of Vision transformers (Dosovitskiy et al., 2020), many works (Cao et al., 2021; Chen et al., 2021; Dong et al., 2021; Wang et al., 2022) try to address the above problem using a transformer encoder, specifically for medical image segmentation. Transformers capture long-range dependencies by learning correlations among all the input patches using self-attention (SA). Recently, hierarchical vision transformers, such as pyramid vision transformer (PVT) (Wang et al., 2021) with spatial reduction attention, Swin transformer (Liu et al., 2021) with window-based attention, and MaxViT (Tu et al., 2022) with multi-axis attention have been introduced to improve performance. Indeed, these hierarchical vision transformers are very effective for medical image segmentation tasks (Cao et al., 2021; Dong et al., 2021; Wang et al., 2022b). However, these transformer-based architectures have two limitations: 1) self-attention is performed with a single attention window (scale) which has limited feature processing ability, and 2) the self-attention modules used in transformers have limited ability to learn spatial relations among pixels (Chu et al., 2021). 
More recently, PVTv2 (Wang et al., 2022c) embeds convolution layers in transformer encoders, while CASCADE (Rahman et al., 2023) introduces an attention-based decoder to address the limitation of learning spatial relations among pixels. Although these methods enable learning, the local (spatial) relations among pixels, they still have limited ability to capture features of multi-scale (e.g., small, large) organs/lesions/objects due to computing self-attention in a single-scale attention window. To address this limitation, we introduce a novel _multi-scale hierarchical_ vision transformer (MERIT) backbone which computes self-attention across _multiple attention windows_ to improve the generalizability of the model. We also incorporate multiple CASCADE decoders to produce better high-resolution segmentation maps by effectively aggregating and enhancing multi-scale hierarchical features. Finally, we introduce a novel effective multi-stage (hierarchical) feature-mixing loss aggregation (MUTATION) strategy for implicit ensembling/augmentation which produces new synthetic predictions by mixing hierarchical prediction maps from the decoder. The aggregated loss from these synthetic predictions improves the performance of medical image segmentation. Our contributions are as follows: * **Novel Network Architecture:** We propose a novel multi-scale hierarchical vision transformer (MERIT) for 2D medical image segmentation which captures both multi-scale and multi-resolution features. Besides, we incorporate a cascaded attention-based decoder for better hierarchical multi-scale feature aggregation and refinement. * **Multi-stage Feature-mixing Loss Aggregation:** We propose a new simple, yet effective way, namely MUTATION, to create synthetic predictions by mixing features during loss calculation; this improves the medical image segmentation performance. * **New State-of-the-art Results:** We perform rigorous experiments and ablation studies on two medical image segmentation benchmarks, namely Synapse multi-organ and ACDC cardiac diagnosis. Our implementation of MERIT using two instances (with different windows for SA) of MaxViT (Tu et al., 2022) backbone with CASCADE decoder and MUTATION loss aggregation strategy produces new state-of-the-art (SOTA) results on Synapse multi-organ and ACDC segmentation benchmarks. ## 2 Related Work ### Vision transformers Dosovitskiy et al. (Dosovitskiy et al., 2020) build the first vision transformer (ViT), which can learn long-range (global) relations among the pixels through SA. Recent works focus on improving ViT in different ways, such as designing new SA blocks (Liu et al., 2021; Tu et al., 2022), incorporating CNNs (Wang et al., 2022c; Tu et al., 2022), or introducing new architectural designs (Wang et al., 2021; Xie et al., 2021). Liu et al. (Liu et al., 2021) introduce a sliding window attention mechanism in the hierarchical Swin transformer. In DeiT (Touvron et al., 2021), authors explore data-efficient training strategies to minimize the computational cost for ViT. SegFormer (Xie et al., 2021) proposes a positional-encoding-free hierarchical transformer using Mix-FFN blocks. In PVT, authors (Wang et al., 2021) develop a pyramid vision transformer using a spatial reduction attention mechanism. The authors extend the PVT to PVTv2 (Wang et al., 2022) by embedding an overlapping patch embedding, a linear complexity attention layer, and a convolutional feed-forward network. 
Recently, in MaxViT (Tu et al., 2022), authors propose a multi-axis self-attention mechanism to build a hierarchical hybrid CNN transformer. Although vision transformers have shown excellent promise, they have limited spatial information processing ability; also, there is little effort in designing multi-scale transformer backbones (Lin et al., 2022). In this paper, we address these very limitations by introducing a multi-scale hierarchical vision transformer with attention-based decoding. ### Medical image segmentation Medical image segmentation can be formulated as a dense prediction task of classifying the pixels of lesions or organs in endoscopy, CT, MRI, etc. (Dong et al., 2021; Chen et al., 2021). U-shaped architectures (Ronneberger et al., 2015; Oktay et al., 2018; Zhou et al., 2018; Huang et al., 2020; Lou et al., 2021) are commonly used in medical image segmentation because of their sophisticated encoder-decoder architecture. Ronneberger et al. (Ronneberger et al., 2015) introduce UNet, an encoder-decoder architecture that aggregates features from multiple stages through skip connections. In UNet++ (Zhou et al., 2018), authors use nested encoder-decoder sub-networks that are linked using dense skip connections. Besides, UNet3Plus (Huang et al., 2020) explores the full-scale skip connections having intra-connections among the decoder blocks. Transformers are nowadays widely used in medical image segmentation (Cao et al., 2021; Chen et al., 2021; Dong et al., 2021). In TransUNet (Chen et al., 2021), authors propose a hybrid CNN transformer architecture to learn both local and global relations among pixels. Swin-Unet (Cao et al., 2021) introduces a pure U-shaped transformer using Swin transformer (Liu et al., 2021) blocks. Recently, in CASTFormer (You et al., 2022), authors introduce a class-aware transformer with adversarial training. Some studies explore attention mechanisms with CNN (Oktay et al., 2018; Fan et al., 2020) and transformer-based architectures (Dong et al., 2021) for medical image segmentation. In PraNet (Fan et al., 2020), authors utilize the reverse attention (Chen et al., 2018). PolypPVT (Dong et al., 2021) uses PVTv2 (Wang et al., 2022) as the encoder and adopts a CBAM (Woo et al., 2018) attention block in the decoder with other modules. In CASCADE (Rahman et al., 2023), authors propose a cascaded decoder using attention modules for feature refinement. Due to its remarkable performance in medical image segmentation, we incorporate the CASCADE decoder with our architecture. ## 3 Method In this section, we first introduce our proposed multi-scale hierarchical vision transformer (MERIT) backbone and decoder. We then describe an overall architecture combining our MERIT (i.e., MaxViT (Tu et al., 2022)) with the decoder (i.e., CASCADE (Rahman et al., 2023)). Finally, we introduce a new hierarchical feature-mixing loss aggregation method. ### Multi-scale hierarchical vision transformer (MERIT) To improve the generalizability of the model across small and large objects in an image, we propose two designs based on the MERIT backbone network, i.e., Cascaded and Parallel. #### 3.1.1 Cascaded MERIT In the cascaded design of our MERIT, we add (i.e., cascade) feedback from a backbone to the next backbone. We extract the hierarchical features from four different stages of the backbone network. Then, we cascade these features with the features from the previous backbone and pass them to the skip connections and bottleneck modules of respective decoders, except the first decoder. 
We also pass feedback from the decoder of one backbone (except the last) to the next backbone. This design captures the multi-scale, as well as multi-resolution features due to using multiple attention windows and hierarchical features. It also refines the features well due to adding feedback from the decoder of a backbone to the next backbone and using cascaded skip connections. Fig. 1(a) presents the Cascaded MERIT architecture with two backbone networks. For each backbone network, the images with resolution (H, W) are first put into a Stem layer (TB1 Stem, TB2 Stem in Fig. 1(a)) which reduces the feature resolution to (H/4, W/4). Afterward, these features are passed through four stages of transformer backbones (this reduces feature resolution by 2 times at each stage except the fourth). The features from the last stage of the first decoder are combined with the input image to cascade it (feature) with the second backbone in Fig. 1(a). To do this, we reduce the number of channels to one and produce logits by applying a 1x1 convolution followed by Sigmoid activation. We also resize the feature map to the input resolution (i.e., \(224\times 224\) in our implementation) of Backbone 2. Figure 1: Cascaded MERIT architecture. (a) cascaded MERIT backbone, (b) decoders with cascaded skip connections from the decoder 1, (c) prediction maps aggregation of two decoders. p1, p2, p3, and p4 are the aggregated multi-stage prediction maps. #### 3.1.2 Parallel MERIT Unlike Cascaded MERIT, in the parallel design of our MERIT backbone, we pass input images of multiple resolutions/scales in parallel into separate hierarchical transformer backbone encoders with different attention windows. Similar to Cascaded MERIT, we extract the hierarchical features from four different stages of the backbone networks and pass those features to the respective parallel decoders. This design also captures multi-scale features due to using hierarchical backbones with multiple attention windows. Fig. 2(a) in Appendix A presents a design for the Parallel MERIT with two backbone networks. The input images are passed through similar steps in the backbone networks as in Cascaded MERIT. However, the Parallel MERIT shares information among the backbone networks only at the very end during the feature aggregation step (Fig. 2(c) in Appendix A). ### Decoder We propose using a separate decoder for each transformer backbone. As shown in Fig. 1(b), we use cascaded skip connections in the decoder of our cascaded MERIT architecture. Here, we add the skip connections from the first backbone to the skip connections of the second backbone network. In this case, we share information across backbones in three phases, i.e., during backbone cascading, skip connections cascading, and aggregating prediction maps. This sharing of information helps to capture richer information than the single-resolution backbone, as well as our Parallel MERIT. Unlike Fig. 1(b), in Fig. 2(b) in Appendix A, we have two parallel decoders for our parallel backbones. Each decoder has four stages that correspond to four stages of the transformer backbone. We only aggregate the multi-stage prediction maps produced by the decoders in Fig. 2(b) at the aggregation step shown in Fig. 2(c). ### Overall Architecture In our experiments, we use one of the most recent SOTA transformers, MaxViT (Tu et al., 2022). We use two instances of MaxViT-S (standard) backbone with \(8\times 8\) and \(7\times 7\) attention windows to create our MERIT backbone. 
Each MaxViT backbone has two Stem blocks followed by four stages that consist of multiple (i.e., 2, 2, 5, 2) MaxViT blocks. Each MaxViT block is built with a Mobile Convolution Block (MBConv), a Block Attention having Block Self-Attention (SA) followed by a Feed Forward Network (FFN), a Grid Attention having a Grid SA followed by an FFN. We note that although we use the MaxViT backbone in our experiments, other transformer backbones can easily be used with our MERIT. Pure transformers have limited (spatial) contextual information processing ability among pixels. As a result, the transformer-based models face difficulties in locating discriminative local features. To address this issue, we adopt a recent attention-based cascaded decoder, CASCADE (Rahman et al., 2023), for multi-stage feature refinement and aggregation. CASCADE decoder uses the attention gate (AG) (Oktay et al., 2018) for cascaded feature aggregation and the convolutional attention module (CAM) for robust feature map enhancement. CASCADE decoder has four CAM blocks for the four stages of hierarchical features from the transformer backbone and three AGs for three skip connections. CASCADE decoder aggregates the multi-resolution features by combining the upsampled features from the previous stage of the decoder with the features from the skip connections using AG. Then, CASCADE decoder processes the aggregated features using the CAM module (consists of channel attention (Hu et al., 2018) followed by spatial attention (Chen et al., 2017)) which groups pixels together and suppresses background information. Lastly, CASCADE decoder sends the output from the CAM block of each stage to a prediction head to produce prediction maps. We produce four prediction maps from the four stages of the CASCADE decoder. As shown in Fig. 1(c) and Fig. 2(c) in Appendix A, we aggregate (additive) the prediction maps for each stage of our two decoders. We generate the final prediction map, \(\hat{y}\), using Equation 1: \[\hat{y}=\alpha\times p1+\beta\times p2+\gamma\times p3+\psi\times p4 \tag{1}\] where \(p1\), \(p2\), \(p3\), and \(p4\) represent the prediction maps, and \(\alpha\), \(\beta\), \(\gamma\), and \(\psi\) are the weights of each prediction heads. We use the value of 1.0 for \(\alpha\), \(\beta\), \(\gamma\), and \(\psi\). Finally, we apply Softmax activation on \(\hat{y}\) to get the multi-class segmentation output. ### Multi-stage feature-mixing loss aggregation (MUTATION) We now introduce a simple, yet effective multi-stage feature mixing loss aggregation strategy for image segmentation, which enables better model training. Our intention is to create new prediction maps combining the available prediction maps. So, we take all the prediction maps from different stages of a network as input and aggregate the losses of prediction maps generated using \(2^{n}-1\) non-empty subsets of \(n\) prediction maps. For example, if a network produces 4 prediction maps, our multi-stage feature-mixing loss aggregation produces a total of \(2^{4}-1=15\) prediction maps including 4 original maps. This mixing strategy is simple, it does not require additional parameters to calculate, and it does not introduce inference overheads. Due to its potential benefits, this strategy can be used with _any_ multi-stage image segmentation or dense prediction networks. Algorithm 1 presents the steps to produce new prediction maps and loss aggregation. 
```
Input: \(y\); the ground truth mask
       A list \([P_{i}]\), \(i=0,1,\cdots,n-1\), where each element is a prediction map
Output: \(loss\); the aggregated loss

\(loss\gets 0.0\);
\(\mathcal{S}\leftarrow\) all non-empty subsets of the prediction map indices \(\{0,\ldots,n-1\}\);
foreach \(s\in\mathcal{S}\) do
    \(\hat{y}\gets 0.0\);                              // \(\hat{y}\) is a new (mixed) prediction map
    foreach \(i\in s\) do
        \(\hat{y}\leftarrow\hat{y}+P_{i}\);
    end
    \(loss\gets loss+loss\_function(y,\hat{y})\);      // \(loss\_function(\cdot)\) is any loss function (e.g., CrossEntropy, DICE)
end
return \(loss\);
```
## 4 Experiments In this section, we demonstrate the superiority of our proposed MERIT architectures by comparing the results with SOTA methods. We introduce datasets, evaluation metrics, and implementation details in **Appendix B**. More experiments and ablation studies to answer questions related to our architectures are given in **Appendix C.1-C.7**. ### Results on Synapse multi-organ segmentation Table 1 presents the results of Synapse multi-organ segmentation; it can be seen that both variants of our MERIT significantly outperform all the SOTA CNN- and transformer-based 2D medical image segmentation methods. Among all the methods, our Cascaded MERIT achieves the best average DICE score (84.90%). Cascaded MERIT outperforms two popular methods on this dataset, TransUNet and SwinUNet, by 7.42% and 5.57%, respectively, when compared to their original reported DICE scores. 
Cascaded MERIT achieves 2.22% better DICE than the existing best method, TransCASCADE (82.68% DICE), on this dataset. When we compare the HD95 distance of all the methods, we find that both variants of our MERIT achieve a lower HD95 distance. Cascaded MERIT has the lowest HD95 distance (13.22) which is 18.47 lower than TransUNet (HD95 of 31.69) and 4.12 lower than the best SOTA method, TransCASCADE (HD95 of 17.34). If we look into the DICE score of individual organs, we observe that proposed MERIT variants significantly outperform SOTA methods on six out of eight organs. We also can conclude that Cascaded MERIT performs better both in large and small organs, though it exhibits greater improvement for small organs. We believe that both MERIT variants \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Architectures} & \multicolumn{3}{c}{Average} & \multirow{2}{*}{Aorta} & \multirow{2}{*}{GB\({}^{b}\)} & \multirow{2}{*}{KL\({}^{b}\)} & \multirow{2}{*}{KR\({}^{b}\)} & \multirow{2}{*}{Liver} & \multirow{2}{*}{PC\({}^{b}\)} & \multirow{2}{*}{SP\({}^{b}\)} & \multirow{2}{*}{SM\({}^{b}\)} \\ & DICE\(\uparrow\) & HD95\({}^{a}\)\(\downarrow\) & & & & & & & & \\ \hline UNet (Ronneberger et al., 2015) & 70.11 & 44.69 & 84.00 & 56.70 & 72.41 & 62.64 & 86.98 & 48.73 & 81.48 & 67.96 \\ AttnUNet (Okray et al., 2018) & 71.70 & 34.47 & 82.61 & 61.94 & 76.07 & 70.42 & 87.54 & 46.70 & 80.67 & 67.66 \\ R50+UNet (Chen et al., 2021) & 74.68 & 36.87 & 84.18 & 62.84 & 79.19 & 71.29 & 93.35 & 48.23 & 84.41 & 73.92 \\ R50+AttnUNet (Chen et al., 2021) & 75.57 & 36.97 & 55.92 & 63.91 & 79.20 & 72.71 & 93.56 & 49.37 & 87.19 & 74.95 \\ SSFormerPVT (Wang et al., 2022b) & 78.01 & 25.72 & 82.78 & 63.74 & 80.72 & 78.11 & 93.53 & 61.53 & 87.07 & 76.61 \\ PolypPVT (Dong et al., 2021) & 78.08 & 25.61 & 82.34 & 66.14 & 81.21 & 73.78 & 94.37 & 59.34 & 88.05 & 79.4 \\ TransUNet (Chen et al., 2021) & 77.48 & 31.69 & 87.23 & 63.13 & 81.87 & 77.02 & 94.08 & 55.86 & 85.08 & 75.62 \\ SwinUNet (Cao et al., 2021) & 79.13 & 21.55 & 85.47 & 66.53 & 83.28 & 79.61 & 94.29 & 56.58 & 90.66 & 76.60 \\ MT-UNet (Wang et al., 2022a) & 78.59 & 26.59 & 87.92 & 64.99 & 81.47 & 77.29 & 93.06 & 59.46 & 87.75 & 76.81 \\ MISFormer (Huang et al., 2021) & 81.96 & 18.20 & 86.99 & 68.65 & 85.21 & 82.00 & 94.41 & 65.67 & 91.92 & 80.81 \\ CASTformer (You et al., 2022) & 82.55 & 22.73 & **89.05** & 67.48 & 86.05 & 82.17 & **95.61** & 67.49 & 91.00 & 81.55 \\ PVT-CASACDE (Rahman et al., 2023) & 81.06 & 20.23 & 83.01 & 70.59 & 82.23 & 80.37 & 94.08 & 64.43 & 90.1 & 83.69 \\ TransCASCADE (Rahman et al., 2023) & 82.68 & 17.34 & 86.63 & 68.48 & 87.66 & 84.56 & 94.43 & 65.33 & 90.79 & 83.52 \\ \hline Parallel MERIT (Ours) & 84.22 & 16.51 & 88.38 & 73.48 & 87.21 & 84.31 & 95.06 & 69.97 & 91.21 & 84.15 \\ Cascaded MERIT (Ours) & **84.90** & **13.22** & 87.71 & **74.40** & **87.79** & **84.85** & 95.26 & **71.81** & **92.01** & **85.38** \\ \hline \hline \end{tabular} \({}^{a}\) More details in Appendix B.2, \({}^{b}\) more details in Appendix B.1 \end{table} Table 1: Results on Synapse multi-organ dataset. DICE scores (%) are reported for individual organs. The results of UNet, AttnUNet, PolypPVT, and SSFormerPVT are taken from CASCADE (Rahman et al., 2023). MERIT results are averaged over five runs for MERIT+CASCADE decoder(Additive)+MUTATION. \(\uparrow\) denotes higher the better, \(\downarrow\) denotes lower the better. The best results are in bold. 
demonstrate better performance due to using the multi-scale hierarchical transformer encoder with cascaded attention-based decoding and MUTATION loss aggregation. ### Results on ACDC cardiac organ segmentation Table 2 reports three cardiac organ segmentation results of different methods on the ACDC dataset for MRI data modality. Both our Parallel and Cascaded MERIT have better DICE scores than all other SOTA methods. Our Parallel MERIT achieves the best average DICE score (92.32%) which outperforms TransUNet and SwinUNet by 2.61% and 2.32%, respectively. Parallel MERIT also shows the best DICE scores in RV\({}^{B.1}\) (90.87%) and LV\({}^{B.1}\) (96.08%) segmentation. We can conclude from these results that our method performs the best across different medical imaging data modalities. ## 5 Conclusion In this paper, we have introduced a novel multi-scale hierarchical transformer architecture (MERIT) that can capture both the multi-scale and multi-resolution features necessary for medical image segmentation. We have also incorporated an attention-based cascaded decoder to further refine features. Moreover, we have proposed a novel multi-stage feature mixing loss aggregation (MUTATION) strategy for implicit ensembling/augmentation which ensures better model training and boosts the performance without introducing additional hyper-parameters and inference overhead. Our experimental results on two well-known multi-class medical image segmentation benchmarks demonstrate the superiority of our proposed method over all SOTA approaches. Finally, we believe that our proposed MERIT architectures and MUTATION loss aggregation strategy will improve other downstream medical image segmentation and semantic segmentation tasks. \begin{table} \begin{tabular}{l r r r r} \hline \hline Architectures & Avg DICE & RV\({}^{a}\) & Myo\({}^{a}\) & LV\({}^{a}\) \\ \hline R50+UNet (Chen et al., 2021) & 87.55 & 87.10 & 80.63 & 94.92 \\ R50+AttnUNet (Chen et al., 2021) & 86.75 & 87.58 & 79.20 & 93.47 \\ ViT+CUP (Chen et al., 2021) & 81.45 & 81.46 & 70.71 & 92.18 \\ R50+ViT+CUP (Chen et al., 2021) & 87.57 & 86.07 & 81.88 & 94.75 \\ TransUNet (Chen et al., 2021) & 89.71 & 88.86 & 84.53 & 95.73 \\ SwinUNet (Cao et al., 2021) & 90.00 & 88.55 & 85.62 & 95.83 \\ MT-UNet (Wang et al., 2022a) & 90.43 & 86.64 & 89.04 & 95.62 \\ MISSFormer (Huang et al., 2021) & 90.86 & 89.55 & 88.04 & 94.99 \\ PVT-CASCADE (Rahman et al., 2023) & 91.46 & 88.9 & 89.97 & 95.50 \\ TransCASCADE (Rahman et al., 2023) & 91.63 & 89.14 & **90.25** & 95.50 \\ \hline Parallel MERIT (Ours) & **92.32** & **90.87** & 90.00 & **96.08** \\ Cascaded MERIT (Ours) & 91.85 & 90.23 & 89.53 & 95.80 \\ \hline \hline \end{tabular} \({}^{a}\) More details in Appendix B.1 \end{table} Table 2: Results on the ACDC dataset. DICE scores (%) are reported for individual organs. We present the results of MERIT averaging over five runs with the setting MERIT+CASCADE decoder(Additive)+MUTATION. The best results are in bold.
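For reference, the two ingredients described in Section 3 can be summarized in a short PyTorch-style sketch: the additive aggregation of the four prediction maps in Eq. (1) (with all weights equal to 1.0) and the MUTATION loss aggregation of Algorithm 1. The tensor shapes, the number of classes, and the use of cross-entropy as the per-map loss are placeholder assumptions, not the exact training configuration.

```python
from itertools import combinations
import torch
import torch.nn.functional as F

def aggregate_predictions(p1, p2, p3, p4, weights=(1.0, 1.0, 1.0, 1.0)):
    # Eq. (1): weighted additive aggregation of the stage-wise prediction maps,
    # followed by a softmax over the class dimension.
    y_hat = sum(w * p for w, p in zip(weights, (p1, p2, p3, p4)))
    return torch.softmax(y_hat, dim=1)

def mutation_loss(pred_maps, target):
    # Algorithm 1: accumulate the loss over every non-empty subset of the n
    # prediction maps, i.e. 2^n - 1 synthetic predictions for n maps.
    n = len(pred_maps)
    loss = 0.0
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            y_hat = sum(pred_maps[i] for i in subset)       # mixed prediction map
            loss = loss + F.cross_entropy(y_hat, target)    # any segmentation loss works here
    return loss

# Toy usage with random logits: batch of 2, 9 classes, 224x224 maps (placeholders).
preds = [torch.randn(2, 9, 224, 224) for _ in range(4)]
target = torch.randint(0, 9, (2, 224, 224))
print(mutation_loss(preds, target).item(), aggregate_predictions(*preds).shape)
```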
2302.10112
Constraining the EdGB theory with higher harmonics and merger-ringdown contribution using GWTC-3
In this paper, we revisit the problem of using gravitational wave data to test the Einstein-dilaton-Gauss-Bonnet theory, based on nine selected gravitational wave events from GWTC-3. Compared with existing work, we take the higher harmonics into account more properly, and we also study the contribution of the merger-ringdown data. Using the inspiral data alone, we find that the best result comes from GW200115, giving $\sqrt{|\alpha|} < 1.1$ km, which is about 17\% tighter than the previous best result. We also notice the possible existence of a simple, unexpected relation among the constraints from different events. Several combinations of the selected events give $\sqrt{|\alpha|} \leq 1.0$ km. The result is further improved when the merger-ringdown data is also included, using two phenomenological schemes, giving $\sqrt{|\alpha|} < 0.87$ km for GW200115 in the best-case scenario.
Baoxiang Wang, Changfu Shi, Jian-dong Zhang, Yi-Ming Hu, Jianwei Mei
2023-02-20T17:15:27Z
http://arxiv.org/abs/2302.10112v1
# Constraining the EdGB theory with higher harmonics and merger-ringdown contribution using GWTC-3 ###### Abstract In this paper, we revisit the problem of using gravitational wave data to test the Einstein-dilation-Gauss-Bonnet theory, by using nine selected gravitational wave events from GWTC-3. Compared with existing work, we are taking into account the higher harmonics more properly and we also study the contribution of the merger-ringdown data. Using the inspiral data alone, we find that the best result is from GW200115, giving \(\sqrt{|\alpha|}<1.1\) km, which is about 17% tighter than the previous best result. We also notice the possible existence of a simple unexpected relation among the constraints from different events. Several combinations of the selected events give \(\sqrt{|\alpha|}\leq 1.0\) km. The result is further improved when the merger-ringdown data is also included, using two phenomenological schemes, giving \(\sqrt{|\alpha|}<0.87\) km for GW200115 in the best case scenario. ## I Introduction The detection of gravitational waves (GWs) from compact binaries has opened the door to study the nature of gravity and dark compact objects in the genuinely strong field and dynamical regime. By using the currently available GW data [1; 2; 3; 4; 5], a variety of tests have been performed [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. For example, there have been theory agnostic tests, such as residual test [6], inspiral-merger-ringdown consistency test [17], and searches for possible deviations from the post-Newtonian (PN) waveform [18], there have been tests on specific topics, such as no-hair theorem [19], GW polarization [21], graviton mass [6], the spin-induced quadrupole moment [22], the GW dispersion relation [23], extra dimension [16], time-varying gravitational constant G(t) [24] and the gravitational Lorentz invariance [25], and there have also been tests targeting different modified gravity theory (MG) [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. No evidence against general relativity (GR) has been found so far. For the existing tests, the full inspiral-merger-ringdown data has been used in the residual test, the inspiral-merger-ringdown consistency test and the polarization test, while the test on no-hair theorem only uses the ringdown data, and tests on the PN waveform and the spin-induced quadrupole moment and almost all theory/topic specific tests only use the inspiral data. What's more, previous works using the inspiral data have only considered corrections to the dominant 22 mode or have assumed a universal correction for all harmonics. For some tests, especially the theory-specific tests, it is natural to expect that both the higher harmonics and the merger-ringdown data can make a difference. But taking these factors into consideration has to come supported with adequate waveform modeling in the theories being tested. A good example here is the test of the Einstein-dilaton Gauss-Bonnet (EdGB) theory [36; 37; 38; 39]. EdGB is a quantum gravity-inspired modified gravity theory featuring a dilation coupled to the Gauss-Bonnet invariant, through a dimensionful coupling constant \(\alpha\). Since the coupling is dimensionful, one generally expects the constraint to be of the order \(\sqrt{|\alpha|}<\mathcal{O}(L)\,,\) where \(L\) is the typical curvature radius of the system under consideration [40]. 
Indeed, electromagnetic observation of the orbital decay of black hole low-mass x-ray binary A0620-00 has produced a constraint, \(\sqrt{|\alpha|}<1.9\) km [39], which is of the expected order. Black hole binaries in EdGB generically have dipolar GW emission and so the theory is expected to get strong constraints with GWs [41]. The first estimation of the constraints on EdGB with real GW source information has been done using the Fisher Information Matrix (FIM) method [16]. The amplitude correction to the waveform was later included but the calculation was still FIM based [42]. The full Bayesian analysis of EdGB was performed firstly with GWTC-1 data in [43] and then with GWTC-2 data in [11]. Both works have selected events with small mass ratios (defined to be the major mass over the minor mass). However, systems with larger mass ratios tend to place stronger constraints on \(\alpha\)[44; 41; 45]. For such systems, significant contributions from higher harmonics are expected. The contribution of higher harmonics has been considered in [12; 46] with large mass ratio events such as GW190814, but a universal EdGB correction to all harmonics has been assumed. Recently, a parameterized post-Einsteinian (ppE) waveform model has been constructed in which the relations among the corrections to different harmonics have been carefully calculated [47]. Including the merger-ringdown data in theory-specific tests is challenging due to the difficulties with numerical relativity simulation [48; 49; 50] and quasi-nomral mode (QNM) [51; 52; 53; 54; 55; 56] calculation in the corresponding MGs. For EdGB, a test with inspiral and ringdown data has been carried out based on QNMs found in a spherically symmetric background [57; 58]. For more general situations, several schemes have recently been proposed [59] to model phase corrections beyond the inspiral stage, based on the structure of IMRPhenomD waveforms [60; 61]. In this paper, we revisit the problem of testing EdGB with real GW data. We refine existing work in the following ways: * We apply the work of [47] to EdGB and obtain a waveform model containing the correct relation among the corrections to different harmonics. To apply the result of [47], we need to use waveform models that assume nonprecessing binaries, for this we will use the IMRPhenomXHM waveform [62]. * We apply the method of [59] to the IMRPhenomXHM waveform so that the merger-ringdown data can also be used. * We have included a couple of new events from GWTC-3 that has never been used to test EdGB before. We find that: (i) the appropriate inclusion of higher harmonic modes can make appreciable difference in the result. For example, there can be about 50% improvements on the results from GW190707, GW190720 and GW190728 over previous ones that do not include higher harmonics [11], and there can be nearly 30% improvement from GW190421 compared to the previous result that assumes a universal correction for all higher harmonics [12]; (ii) All constraints from the selected GW events seem to be closely distributed by the line described by (13) below; (iii) the effect of merge-ringdown data is relatively less significant, but can still give about 20% improvement for GW200115, compared to that without using the merger-ringdown data; (iv) Our best constraint comes from GW200115, which with higher harmonic modes alone gives \(\sqrt{|\alpha|}<1.1\) km, and if the merge-ringdown data is also taken into account, it gives \(\sqrt{|\alpha|}<0.87\) km. 
As a comparison, the current best constraint on EdGB comes from [46], which also uses GW200115, giving \(\sqrt{|\alpha|}<1.33\) km. This paper is organized as follows. We introduce how the higher harmonics and the merger-ringdown contribution are included into the waveform model in Section II, select the GW events to be used and make necessary explanations of the statistical method in Section III, present the main results in Section IV, and then make a short concluding remark in Section V. We use the convention G=c=1 throughout the paper. ## II Waveform Models In this section, we explain the construction of the waveform models used in this paper. ### The ppE waveform model with higher harmonic contributions The evolution of compact binaries is divided into three stages: inspiral, merger and ringdown. During the inspiral stage, the binary components are widely separated, their velocities are relatively small, and the PN approximation [63] can be used to obtain the waveforms for low mass ratio systems. The ppE waveform model has been proposed to capture the common features of how the PN waveforms in many MGs deviate from those in GR [64]. Keeping only the leading order correction, the inspiral waveform for an MG can be written as: \[\tilde{h}_{\rm ppE}(f)=\tilde{h}_{\rm GR}(f)(1+\alpha u^{a})e^{i\beta u^{b}}\,, \tag{1}\] where \(\alpha\) and \(\beta\) are the ppE parameters, \(b=2\) PN \(-5\) and \(a=b+5\) are the ppE order parameters, and the PN in this equation symbolizes the PN order, \(u=(\pi\mathcal{M}f)^{1/3}\) is a characteristic velocity, \(\mathcal{M}=\eta^{3/5}M\) is the chirp mass, \(M=m_{1}+m_{2}\) is the total mass, \(\eta=m_{1}m_{2}/(m_{1}+m_{2})^{2}\) is the symmetrical mass ratio, \(m_{1}\) and \(m_{2}\) are the major and minor masses, respectively, and \(h_{\rm GR}\) is the corresponding GR waveform, which will be produced using IMRPhenomXHM in this paper. To take into account the contribution of higher harmonics, the construction of [47] neglects the contribution of the amplitude correction, leading to \[\tilde{h}_{\rm ppE}(f)=\tilde{h}_{\rm GR}(f)e^{i\beta u^{b}}\,. \tag{2}\] The error introduced in this process is expected to be less than a few percent [42]. Early works [11; 43] of using GWTC-1 and GWTC-2 data to test EdGB have only considered the 22 mode of \(\tilde{h}_{\rm GR}\) in (2), while the authors of [12; 46] have considered the contribution of higher harmonics but have assumed a universal EdGB correction to all harmonics, i.e., \[\tilde{h}_{\rm ppE}(f)=\Big{[}\sum_{\ell m}\tilde{h}_{\ell m}^{\rm GR}(f) \Big{]}e^{i\beta u^{b}}\,. \tag{3}\] However, there is no reason to believe that the ppE corrections to all the harmonics are the same, and one would naturally expect \[\tilde{h}_{\rm ppE}(f)=\sum_{\ell m}\tilde{h}_{\ell m}^{\rm ppE }(f)\,,\] \[\tilde{h}_{\ell m}^{\rm ppE}(f)=\tilde{h}_{\ell m}^{\rm GR}\left( f\right)e^{i\beta_{\ell m}u^{b_{\ell m}}}\,, \tag{4}\] where both \(\beta_{\ell m}\) and \(b_{\ell m}\) can be different for different values of \(\ell\) and \(m\). Indeed, it has been found that [47], \[\beta_{\ell m}=\Big{(}\frac{2}{m}\Big{)}^{b/3-1}\beta_{22}\,,\quad b_{\ell m} =b_{22}\,. \tag{5}\] Given an MG, the relation between the ppE parameters and the theory parameters can be established by calculating corrections to binary orbits [45]. 
For EdGB, the leading order modification occurs at the \(-1\)PN order, corresponding to \(b_{22}=-7\), and it has been found that [41]: \[\beta_{22}=-\frac{5\zeta}{7168}\frac{(m_{1}^{2}\tilde{s}_{2}-m_{2}^{2}\tilde{s}_{ 1})^{2}}{M^{4}\eta^{18/5}}\,, \tag{6}\] where \(\zeta\equiv 16\pi\alpha^{2}/M^{4}\)[36], and \(\tilde{s}_{n}\), \(n=1,2\), is the scalar charge for the \(n\)th component. If the component is a black hole, we have \(\tilde{s}_{n}\equiv 2(\sqrt{1-\chi_{n}^{2}}-1+\chi_{n}^{2})/\chi_{n}^{2}\). If it's a neutron star, the corresponding scalar charge is zero. ### Waveform model for the merger-ringdown stage Although it is tempting to use all the available GW data to test a given MG, such as EdGB, there still lacks a good waveform model that properly takes into account the corresponding MG correction at the merger and ringdown stages, due to the difficulty with numerical relativity simulation [48; 49; 50] and QNM calculations [51; 52; 53; 54; 55; 56]. In this paper, we will follow [59] and use the following phenomenologically motivated waveforms for the merger and ringdown stages: * Zero-correction. This is the simplest case when no contribution from the merger-ringdown stage is invoked, \[\tilde{h}_{\ell m}^{\rm zero}(f)=\left\{\begin{array}{ll}\tilde{h}_{\ell m} ^{\rm GR}(f)e^{i\beta_{\ell m}u^{k_{\ell m}}},&f<f_{\ell m}^{\rm IM}\\ 0,&f\geq f_{\ell m}^{\rm IM}\end{array}\right.\,,\] (7) where \(f_{\ell m}^{\rm IM}\) is the GW frequency when the binary system reaches its minimal energy circular orbit (MECO) as defined in [65], i.e. \(Mf_{\rm IM}^{22}=0.014\) and \(f_{\ell m}^{\rm IM}=\frac{m}{2}f_{\rm IM}^{22}\) for IMRPhenomXHM [62]. * \(C^{0}\)-correction. In this case, the correction in the merger-ringdown stage is modeled with a fixed phase, \[\tilde{h}_{\ell m}^{\rm C^{0}}(f)=\left\{\begin{array}{ll}\tilde{h}_{\ell m }^{\rm GR}e^{i\beta_{\ell m}u^{k_{\ell m}}},&f<f_{\ell m}^{\rm IM}\\ \tilde{h}_{\ell m}^{\rm GR}e^{i\beta_{\ell m}u^{k_{\ell m}}_{\rm IM}},&f\geq f _{\ell m}^{\rm IM}\end{array}\right.\,,\] (8) where \(u_{\rm IM}=(\pi\mathcal{M}f_{\ell m}^{\rm IM})^{1/3}\). * \(C^{\infty}\)-correction: In this case, the form of the correction in the inspiral stage is carried all the way through the merger-ringdown stages, \[\tilde{h}_{\ell m}^{\rm C^{\infty}}(f)=\tilde{h}_{\ell m}^{\rm GR}e^{i\beta_ {\ell m}u^{k_{\ell m}}}\,.\] (9) ## III Data source and data analysis methods In this section, we explain how the GW events are selected and clarify the specifics of the statistical method used in this paper. ### Selection of GW events There are 90 GW events in the current GWTC-3 catalog [5], and using all of them to test EdGB would be too expensive for computation, so it is preferable to choose a limited number of events that can put EdGB under the most stringent test. As guidance for selecting the events, we would like to have reliable and strong events that have low false alarm rate (FAR) (i.e., FAR \(<10^{-3}\) Year\({}^{-1}\) ) and high signal-to-noise ratio (SNR) (i.e., SNR \(>10\) ), and we also want events that have low total mass but large mass ratio. We are thus led to the set of GW events listed in TABLE 1, guided mainly by the requirement on SNR, total mass and mass ratio. For later use, we also list the mass, denoted as \(M_{*}\), of the smallest black hole involved in each GW event.1 It turns out that all these events have FAR \(<10^{-5}\) Year\({}^{-1}\). 
We will consider two possibilities for GW190814: one as a NSBH, denoted as GW190814\({}^{a}\), and one as a BHB, denoted as GW190814\({}^{b}\). The selected events can be roughly divided into two groups: Footnote 1: If the system is a black hole binary (BHB), then \(M_{*}\) is the mass of the minor component, but if the system is a neutron star-black hole binary (NSBH), then \(M_{*}\) is the mass of the black hole. * High mass ratio events with \(q\geq 3\) ; * Low mass ratio events with \(3>q\geq 1\). One can see that only three events belong to the high mass ratio group. For events in the high mass ratio group, constraints on EdGB with higher harmonics contribution has been studied before [46; 12], while for events in the low mass ratio group, no higher harmonic contribution has been considered [11]. ### Bayesian inference The statistics are done with Bayesian inference. According to the Bayes' theorem, given the data \(d\) and \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Designation & \(M(\) M\({}_{\odot})\) & \(M_{*}(\) M\({}_{\odot})\) & \(q\) & SNR \\ \hline GW190412 & 36.8 & 9.0 & 3.08 & 19.8 \\ \hline GW190814\({}^{a}\) & 25.9 & 23.3 & 8.96 & 25.3 \\ \hline GW190814\({}^{b}\) & 25.9 & 2.6 & 8.96 & 25.3 \\ \hline GW200115 & 7.4 & 5.9 & 4.1 & 11.3 \\ \hline GW190707 & 20.1 & 7.9 & 1.53 & 13.1 \\ \hline GW190720 & 21.8 & 7.5 & 1.89 & 10.9 \\ \hline GW190728 & 20.7 & 8.0 & 1.56 & 13.1 \\ \hline GW190924 & 13.9 & 5.1 & 1.73 & 12.0 \\ \hline GW191129 & 17.5 & 6.7 & 1.6 & 13.1 \\ \hline GW200202 & 17.58 & 7.3 & 1.38 & 10.8 \\ \hline \end{tabular} \end{table} Table 1: The list of selected GW events. \(M_{*}\) is the mass of the smallest black hole involved in each event. hypothesis \(H\), the posterior probability distribution of a set of parameters \(\theta\) is given by \[p(\theta|d,H)=\frac{p(d|\theta,H)\pi(\theta|H)}{\mathcal{Z}(d|H)}\,, \tag{10}\] where \(p(d|\theta,H)\) and \(\pi(\theta|H)\) are the likelihood and prior, respectively. The evidence \(\mathcal{Z}(d|H)\) is an overall normalization and will not be used in our calculation. The GW and theory parameters used in this work are listed in TABLE 2. Note \(\chi_{1}\) and \(\chi_{2}\) are dimensionless and the spins are assumed to be aligned with the orbital angular momentum. Assuming stationary and Gaussian noise, the likelihood can be written as \[p(d|\theta,H)\propto\exp\Bigg{[}-\frac{1}{2}\sum_{j=1}^{N}(d_{j}-h_{j}|d_{j}-h _{j})\Bigg{]}\,, \tag{11}\] where inner product is defined as [66] \[(x|y)=2\int_{f_{\rm low}}^{f_{\rm high}}\frac{\tilde{x}^{*}(f)\tilde{y}(f)+ \tilde{x}(f)\tilde{y}^{*}(f)}{S_{n}(f)}df\,. \tag{12}\] Here \(\tilde{x}^{*}\) means the complex conjugate of the Fourier component of \(x\), \(S_{n}(f)\) is the power spectral density of the detector, \(f_{\rm low}\) and \(f_{\rm high}\) are the frequency bounds. We assume the prior to be uniform for \(m_{1}\), \(m_{2}\), \(\chi_{1}\), \(\chi_{2}\), \(\phi_{\rm ref}\) and \(t_{c}\), the events are spatially uniformly distributed, and \(\sqrt{|\alpha|}\) is uniform in \([0,15]\) km. We have used the PyCBC package [67] to do the Bayesian inference, and the Markov Chain Monte Carlo (MCMC) sampling is done with the emcee_pt sampler [68]. We use 32 s of data for all the events selected and set \(f_{\rm low}=20\) Hz. ## IV Results In this section, we present the main findings of this work. ### Constraints from the inspiral stage Existing GW constraints on EdGB have been obtained only using the inspiral data. 
The contribution of higher harmonics has been considered in [12; 46], but a universal EdGB correction to all harmonics has been assumed. Here we revisit the problem by using the corrected waveform model (4), which has been obtained in [47] recently. An independent Bayesian inference has been carried out for each GW source listed in TABLE 1, except that GW190814 has been used twice, firstly as a NSBH and then as a BHB. The posterior distribution of \(\sqrt{|\alpha|}\) is obtained by marginalizing over all other parameters and the results are plotted in FIG.1. The corresponding 90% constraints are listed in TABLE 3. One can see that all the selected GW events can constrain \(\sqrt{|\alpha|}\) to better than about 4 km. The strongest constraint comes from GW190814\({}^{b}\), giving \(\sqrt{|\alpha|}<0.27\) km, which is about 25% improvement over the previous results that have taken higher harmonics into consideration [12; 46]. If GW190814 is not a BHB, then the strongest constraint comes from GW200115, giving \(\sqrt{|\alpha|}<1.1\) km, which is about 17% improvement over the existing result [46]. Except for GW190814\({}^{b}\), the strongest constraint obtained from a BHB event comes from GW190924, which gives \(\sqrt{|\alpha|}<2.26\) km. An inspection of TABLE 1 and TABLE 3 suggests that there might be some simple approximate relation among the constraints and the parameters of different events, we are thus led to produce the plot given in FIG. 2. One can see that all the constraints found in TABLE 3 are distributed not far away from the line, \[\frac{M_{\rm s}}{\sqrt{|\alpha|}}\approx 0.37+1.1q\,. \tag{13}\] We do not know a reason that could lead to such a simple relation and we think more data (especially \begin{table} \begin{tabular}{|c|c|} \hline Symbol & Physical meaning \\ \hline \(m_{1}\) & Mass of the major component \\ \hline \(m_{2}\) & Mass of the minor component \\ \hline \(\chi_{1}\) & Spin of the major component \\ \hline \(\chi_{2}\) & Spin of the minor component \\ \hline \(\alpha_{S}\) & Right ascension of source location \\ \hline \(\delta\) & Declination of the source location \\ \hline \(\psi\) & Polarization angle \\ \hline \(\iota\) & Inclination angle \\ \hline \(\phi_{\rm ref}\) & Phase at the reference frequency \\ \hline \(t_{c}\) & Coalescence time \\ \hline \(D_{L}\) & Luminosity distance \\ \hline \(\sqrt{|\alpha|}\) & Parameter from the EdGB coupling \\ \hline \end{tabular} \end{table} Table 2: GW and theory parameters involved in this study. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & 90\% & Existing & Improv- \\ GW event & Constr. & result & cement \\ \hline GW190412 & 3.18 & 4.46 [12] & 29\% \\ \hline GW190814\({}^{a}\) & 2.18 & 2.72 [46] & 20\% \\ \hline GW190814\({}^{b}\) & 0.27 & 0.4 [12] & 32\% \\ \hline GW200115 & 1.1 & 1.33 [46] & 17\% \\ \hline \hline GW190707 & 3.03 & 6.59 [11] & 54\% \\ \hline GW190720 & 3.74 & 6.90 [11] & 46\% \\ \hline GW190728 & 3.47 & 6.87 [11] & 50\% \\ \hline GW190924 & 2.26 & 2.98 [11] & 24\% \\ \hline GW191129 & 3.27 & – & – \\ \hline GW200202 & 3.79 & – & – \\ \hline \hline Comb.1 & 1.00 & – & – \\ \hline Comb.2 & 0.25 & – & – \\ \hline Comb.3 & 0.98 & – & – \\ \hline \end{tabular} \end{table} Table 3: 90% constraints on \(\sqrt{|\alpha|}\) using the inspiral data from selected GW events. Figure 1: The cumulative posterior distributions for \(\sqrt{|\alpha|}\) obtained with the inspiral data of selected GW events. 
In each panel, the solid vertical line stands for the constraint on \(\sqrt{|\alpha|}\) with 90% probability; the dashed vertical line stands for existing results from the literature, the superscripts 1, 2, and 3 correspond to references [11], [12] and [46], respectively, and the corresponding values are also listed in TABLE 3; the dotted vertical line stands for the location where the weak coupling limit, characterized by \(\alpha^{2}<\frac{m_{1}^{2}}{32\pi}\)[11; 46], is saturated, and above which the results are no longer reliable. those with large mass ratios) will help clarify if there is indeed such a trend. We also obtain combined constraints on \(\sqrt{|\alpha|}\) by superimposing the posteriors of individual events [6]. Three combinations have been considered: * Comb.1: Including all single events in TABLE III but not GW190814\({}^{a}\) or GW190814\({}^{b}\); * Comb.2: Including all single events in TABLE III but not GW190814\({}^{a}\); * Comb.3: Including all single events in TABLE III but not GW190814\({}^{b}\); One can see that the constraints can reach 1 km or better for all three combinations. ### The effect of the merger-ringdown data Here we use the best events from the last subsection, GW190814\({}^{b}\), GW190924 and GW200115, all of which are characterized by containing a very low mass black hole, to study the contribution of merger-ringdown data. We also present the results on GW190814\({}^{a}\) just to make the study of this event more complete. The results are plotted in FIG.3 and the corresponding 90% constraints are listed in TABLE IV. For the four cases studied, we see that the posterior distributions of \(C^{0}\)- and \(C^{\infty}\)-corrections can be appreciably different from that of Zero-correction, while the difference between \(C^{0}\)- and \(C^{\infty}\)-corrections is relatively less significant. One can also see that the merger-ringdown part can make an appreciable correction to the constraints, reaching as large as 20% for the cases studied in this paper. ## V Conclusion In this paper, we have revisited the problem of constraining EdGB with real GW data. We have selected 9 events from GWTC-3 [69] and have paid particular attention to properly include the higher harmonics in the waveform model for the inspiral stage and have considered two schemes to include the contribution of the merger-ringdown data. We find that different ways to include the higher harmonics can make a significant difference in the resulted constraints. For example, for some of the low mass ratio sources, such as GW190707 and GW190728, for which the higher harmonics have never been used to constrain EdGB in previous studies, we find that the improvement can be as large as 50%. For the high mass ratio events, for which the higher harmonics have already been considered in previous studies, but by using the correct relation among the corrections to different harmonics [47], one can still get improved results. In the case of GW190412, for example, the improvement can be as large as 29%. We also find that the most stringent constraint comes from the events that contain the least massive black holes. In particular, all constraints from the selected GW events seem to be closely distributed by the line (13). This is consistent with the expectation that the constraint on \(\sqrt{|\alpha|}\) is at the order of the typical curvature radius of the system [40]. We hope more large mass ratio events could help clarify if there is indeed such a trend. 
We also considered two schemes to include the merger-ringdown data in the test. By using the GW events that pose to give the most stringent constraint on EdGB, we find that the contribution from the merger-ringdown stage can improve the constraint by as large as about 20%. ###### Acknowledgements. The authors thank Yi-Fan Wang for useful discussions and Alexander Harvey Nitz for the helpful communication on the use of PyCBC. This work has been supported by the Guangdong Basic and Applied Basic Research Foundation(Grant No.2021A1515010319), the Guangdong Major Project of Basic and Applied Basic Research (Grant No.2019B030302001), the Natural Science Foundation of China (Grants No.12173104). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{GW events} & \multicolumn{2}{c|}{GW190814} & \multirow{2}{*}{GW190924} & \multirow{2}{*}{GW200115} \\ \cline{2-2} \cline{4-5} & BHB & & & \\ \hline Zero- & & & & \\ Correction & 0.27 & 2.18 & 2.26 & 1.10 \\ \hline \multirow{2}{*}{\(C^{0}\)} & 0.268 & 2.07 & 1.98 & 0.95 \\ & 0.7\% & 5\% & 12.4\% & 13.6\% \\ \hline \multirow{2}{*}{\(C^{\infty}\)} & 0.254 & 1.93 & 1.86 & 0.87 \\ & 5.9\% & 11.5\% & 17.7\% & 20.9\% \\ \hline \end{tabular} \end{table} Table 4: Constraints on \(\sqrt{|\alpha|}\) with merge-ringdown contributions. The percentage values in the lines of \(C^{0}\) and \(C^{\infty}\) stand for the improvement made compared to the case of Zero-correction. Figure 2: The distribution of constraints from different events.
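As a compact, schematic reference for the waveform model of Section II, the sketch below evaluates the EdGB ppE coefficient of Eq. (6), propagates it to higher harmonics via Eq. (5), and applies the Zero-, \(C^{0}\)-, and \(C^{\infty}\)-schemes of Eqs. (7)-(9) to a GR mode. The masses, spins, unit conversions, and example values are placeholders (geometric units \(G=c=1\) as in the text); this is not the analysis pipeline used for the results above.

```python
import numpy as np

MSUN_S = 4.925491e-6          # G*Msun/c^3 in seconds (G = c = 1 units)
C_SI = 299792458.0            # speed of light in m/s, used only to convert km to seconds

def scalar_charge(chi):
    # BH scalar charge s = 2(sqrt(1-chi^2) - 1 + chi^2)/chi^2 (tends to 1 as chi -> 0; zero for a NS)
    return 2.0 * (np.sqrt(1.0 - chi**2) - 1.0 + chi**2) / chi**2

def beta22_edgb(m1, m2, chi1, chi2, sqrt_alpha_km):
    # Leading (-1PN, b = -7) EdGB ppE coefficient, Eq. (6); m1, m2 in solar masses.
    M_s = (m1 + m2) * MSUN_S
    eta = m1 * m2 / (m1 + m2)**2
    alpha = (sqrt_alpha_km * 1.0e3 / C_SI)**2            # alpha has dimension length^2 -> s^2
    zeta = 16.0 * np.pi * alpha**2 / M_s**4
    s1, s2 = scalar_charge(chi1), scalar_charge(chi2)
    return -5.0 * zeta / 7168.0 * (m1**2 * s2 - m2**2 * s1)**2 / ((m1 + m2)**4 * eta**(18.0 / 5.0))

def ppe_mode(h_gr, f, mchirp_s, m, beta22, b=-7, f_im22=None, scheme="Cinf"):
    # Eqs. (4)-(5): phase correction of the (l, m) mode; Eqs. (7)-(9): merger-ringdown schemes.
    beta_lm = (2.0 / m)**(b / 3.0 - 1.0) * beta22
    u = (np.pi * mchirp_s * f)**(1.0 / 3.0)
    phase = beta_lm * u**b
    if f_im22 is not None:
        f_im = (m / 2.0) * f_im22
        if scheme == "zero":
            return np.where(f < f_im, h_gr * np.exp(1j * phase), 0.0)
        if scheme == "C0":
            u_im = (np.pi * mchirp_s * f_im)**(1.0 / 3.0)
            phase = np.where(f < f_im, phase, beta_lm * u_im**b)
    return h_gr * np.exp(1j * phase)      # "Cinf": inspiral form carried to all frequencies

# Placeholder example loosely based on the masses in TABLE 1 (spins and alpha are made up).
m1, m2 = 8.8, 5.1
mchirp_s = (m1 * m2)**0.6 / (m1 + m2)**0.2 * MSUN_S
b22 = beta22_edgb(m1, m2, chi1=0.2, chi2=0.1, sqrt_alpha_km=1.0)
f = np.linspace(20.0, 1024.0, 4)
h22 = np.ones_like(f, dtype=complex)      # stand-in for the GR 22 mode
print(ppe_mode(h22, f, mchirp_s, m=2, beta22=b22, f_im22=0.014 / ((m1 + m2) * MSUN_S), scheme="C0"))
```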
2310.10465
Coherence-enhanced thermodynamic performance in a periodically-driven inelastic heat engine
Quantum thermodynamics with microscopic inelastic scattering processes has been intensively investigated in recent years. Here, we apply a quantum master equation combined with the full counting statistics approach to investigate the role of quantum coherence in the periodically-driven inelastic heat engine. We demonstrate that the inelastic quantum heat engine exhibits a dramatic advantage in thermodynamic performance compared to its elastic counterpart. Moreover, it is found that inelastic currents, output work, and efficiency can be enhanced by quantum coherence. In particular, the geometric effect proves crucial in achieving maximal values of generated output work and energy conversion efficiency. The Berry curvature boosted by quantum coherence unveils the underlying mechanism of the periodically-driven inelastic heat engine. Our findings may provide insights for further understanding and optimizing periodically-driven heat engines via the quantum coherence resource and inelastic scattering processes.
Jincheng Lu, Zi Wang, Jie Ren, Chen Wang, Jian-Hua Jiang
2023-10-16T14:43:54Z
http://arxiv.org/abs/2310.10465v2
# Coherence-enhanced thermodynamic performance in periodically driven thermoelectric heat engines ###### Abstract Three-terminal phonon-thermoelectric devices have the potential to operate as heat engines, converting absorbed heat from one terminal into useful output work. This study explores one periodic driven quantum-dot thermoelectric heat engine within the framework of quantum master equations with full counting statistics, and discusses the impact of quantum coherence on the thermodynamic performance of heat engines. By applying external modulations, we not only observe the dynamic current induced by thermodynamic bias but also uncover a Berry-phase-like current with adiabatic geometric pump. It is found that heat absorption and inelastic thermoelectric currents can be enhanced by quantum coherence effect. In particular, this geometric effect proves crucial in achieving maximal values of generated output work and energy conversion efficiency, especially in moderate coupling regimes. The Berry curvature boosted by quantum coherence unveils the underlying mechanism of quantum heat engines. Our findings demonstrate that heat engines including quantum coherence outperform their incoherent counterparts by enhancing both generated average output work and efficiency. ## I Introduction Thermoelectric phenomena in mesoscopic systems have attracted significant attention due to their fundamental physics relevance and potential applications in new energy resources [1, 2]. Research in thermoelectric transport and heat-to-work conversion has concurrently driven robust developments across various fields, encompassing quantum measurement [3, 4, 5], quantum information processing [6, 7, 8], and quantum thermal machines [9, 10]. Quantum heat engines, a focal point in quantum transport, have been extensively explored both theoretically [11, 12, 13] and experimentally [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Likewise, quantum coherence is a fundamental concept in quantum heat engines that distinguishes them from classical engines [25, 26, 27, 28, 29, 30, 31, 32, 33]. Quantum coherence, with its unique properties, finds applications across a wide spectrum of disciplines. For instance, quantum coherence can enhance the stability of quantum heat engines [34, 35, 36], improve the efficiency of feedback-driven quantum engines [37], and refine the performance of hybrid multitask autonomous thermal machines [38]. Despite extensive research on quantum interference in quantum transport, an open question remains regarding how to establish a direct connection between quantum coherence and thermoelectric heat engines. In recent years, there has been a growing recognition of the significance of inelastic thermoelectricity in non-equilibrium thermoelectric transport for quantum thermodynamics [39, 40, 41, 42]. Inelastic transport processes are generally implemented in three-terminal setups. These transport processes involving the collaborative interaction of these three heat terminals, enable one to investigate nonlinearly coupled electronic and bosonic currents, that can lead to unconventional thermoelectric phenomena, such as cooling by heating effects [43], the separation of charge and heat currents [44, 45], and linear transistor effects [46, 47]. While previous works have mainly focused on steady-state thermoelectric energy conversion [2, 48], there has been limited exploration of coherence effect in periodically driven quantum heat engines. 
From a quantum regulation perspective, the study of time-dependent thermoelectric transport has garnered significant interest as a crucial component of quantum transport research [49, 50, 51, 52, 53, 54, 55, 56, 57, 58]. Within the framework of the quantum master equation, the realization of adiabatic thermoelectric heat engines relies on the slow modulation of device parameters. This modulation process occurs on a timescale longer than the relaxation time of the system [59; 60; 61]. Adiabatic driving can overcome thermoelectric biases and transfer heat from the cold (low voltage) reservoir to the hot (high voltage) one, thereby enabling the operation of a heat engine or refrigerator [62]. In our work, we have conducted a comprehensive study to investigate whether quantum coherence has a positive impact on the performance of periodically driven thermoelectric heat engines operating in adiabatic regimes. Our work encompasses a unified description of nonlinear electronic and phononic transport in a three-terminal setup, with a particular focus on the significant role of inelastic electron-phonon scatterings on nonequilibrium currents. The geometric and dynamic current components with quantum coherence are analyzed in detail to contribute to the thermodynamic performance of three-terminal heat engine. Our theoretical and numerical calculations demonstrate that quantum coherence not only enhances the average output work of a quantum heat engine but also boosts the energy conversion efficiency of heat engines. Our work is structured as follows: In Section II, we describe the setup of the thermoelectric heat engines. The dynamic and geometric currents are derived in Section III using the quantum master equation with a full counting statistics approach. Section IV is dedicated to the examination of the energy efficiency and output work of the thermoelectric heat engines, where we also compare numerical results with the incoherent case. We summarize our findings in Section V. For simplicity, we set \(\hbar=k_{B}=e\equiv 1\) throughout this paper. ## II Three-terminal double quantum dot system We will now apply these considerations to a quantum heat engine consisting of three terminals. In our setup (see Fig. 1), a double quantum dot (QD) system is coupled to a phononic heat bath. Additionally, the double quantum-dot system exchanges energy with an electronic circuit with two electronic reservoirs (metal leads), denoted as \(L\) and \(R\). These reservoirs can be taken out of equilibrium by applying a finite voltage bias \(\Delta\mu=\mu_{L}-\mu_{R}\) and a temperature difference. 
Our model is described by the Hamiltonian [40] \[\hat{H}=\hat{H}_{\text{DQD}}+\hat{H}_{\text{e-ph}}+\hat{H}_{\text{lead}}+\hat{H}_{\text{tun}}+\hat{H}_{\text{ph}}, \tag{1}\] with individual components as follows: \[\hat{H}_{\text{DQD}}=\sum_{i=l,r}E_{i}\hat{d}_{i}^{\dagger}\hat{d}_{i}+\Delta(\hat{d}_{l}^{\dagger}\hat{d}_{r}+\text{H.c.}), \tag{2}\] \[\hat{H}_{\text{e-ph}}=\sum_{q}\lambda_{q}\hat{d}_{l}^{\dagger}\hat{d}_{r}(\hat{a}_{q}+\hat{a}_{q}^{\dagger})+\text{H.c.}, \tag{3}\] \[\hat{H}_{\text{ph}}=\sum_{q}\omega_{q}\hat{a}_{q}^{\dagger}\hat{a}_{q}, \tag{4}\] \[\hat{H}_{\text{lead}}=\sum_{j=L,R}\sum_{k}\varepsilon_{jk}N_{jk}, \tag{5}\] with \(N_{jk}=\hat{d}_{jk}^{\dagger}\hat{d}_{jk}\) and \[\hat{H}_{\text{tun}}=\sum_{k}\gamma_{Lk}\hat{d}_{l}^{\dagger}\hat{d}_{Lk}+\sum_{k}\gamma_{Rk}\hat{d}_{r}^{\dagger}\hat{d}_{Rk}+\text{H.c.}, \tag{6}\] where \(\hat{d}_{i}^{\dagger}\) (\(\hat{d}_{i}\)) is the creation (annihilation) operator of an electron in the \(i\)-th QD, \(E_{i}\) represents the QD energy, \(\Delta\) stands for tunneling between the two QDs, \(\gamma_{L}\) (\(\gamma_{R}\)) is the coupling between the dots and the left (right) reservoir, \(\lambda_{q}\) is the strength of the electron-phonon interaction, and \(\hat{a}_{q}^{\dagger}\) (\(\hat{a}_{q}\)) creates (annihilates) one phonon with frequency \(\omega_{q}\). Figure 1: Illustration of the three-terminal thermoelectric transport: An electron initially departs from the left reservoir and enters the left QD, characterized by an energy level \(E_{l}\). Subsequently, the electron undergoes a transition to the right QD, which possesses a different energy level denoted as \(E_{r}\). This transition is facilitated by the interaction with a phonon from the phonon bath, maintained at a temperature of \(T_{\text{ph}}\). Within this setup, two electric reservoirs are involved, each characterized by distinct temperatures and chemical potentials. The left (\(L\)) and right (\(R\)) electric reservoirs have temperatures denoted as \(T_{L(R)}\), while their respective chemical potentials are represented as \(\mu_{L(R)}\). \(\Delta\) represents the tunneling strength between the double QDs, \(\gamma_{i}\) characterizes the coupling between the electronic reservoirs and the respective QD, and \(\lambda_{q}\) represents the strength of the electron-phonon interaction. We begin by diagonalizing the Hamiltonian \(\hat{H}_{\text{DQD}}\) and expressing it in terms of a new set of electronic operators \(\hat{D}=\sin\theta\hat{d}_{l}+\cos\theta\hat{d}_{r}\) and \(\hat{d}=\cos\theta\hat{d}_{l}-\sin\theta\hat{d}_{r}\) [63], with \(\theta\equiv\frac{1}{2}\arctan\left(\frac{2\Delta}{E_{r}-E_{l}}\right)\). The corresponding energy levels are given by \(E_{D}=\frac{E_{r}+E_{l}}{2}+\sqrt{\frac{(E_{r}-E_{l})^{2}}{4}+\Delta^{2}}\) and \(E_{d}=\frac{E_{r}+E_{l}}{2}-\sqrt{\frac{(E_{r}-E_{l})^{2}}{4}+\Delta^{2}}\). 
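Before the Hamiltonian is rewritten in the rotated basis below, the diagonalization can be checked numerically. The following minimal Python sketch (illustrative parameter values only, not taken from the paper) computes the mixing angle \(\theta\) and the hybridized levels \(E_{D}\), \(E_{d}\), using `arctan2` as a numerically safer variant of the arctangent in the definition of \(\theta\), and compares them against a direct matrix diagonalization.

```python
import numpy as np

def diagonalize_dqd(E_l, E_r, Delta):
    """Mixing angle and hybridized levels of H_DQD in the single-electron sector."""
    theta = 0.5 * np.arctan2(2.0 * Delta, E_r - E_l)    # tan(2*theta) = 2*Delta/(E_r - E_l)
    gap = np.sqrt(0.25 * (E_r - E_l) ** 2 + Delta ** 2)
    E_D = 0.5 * (E_r + E_l) + gap
    E_d = 0.5 * (E_r + E_l) - gap
    return theta, E_D, E_d

# Illustrative values (meV); compare with a direct diagonalization in the {l, r} basis.
E_l, E_r, Delta = 10.0, 14.0, 4.0
theta, E_D, E_d = diagonalize_dqd(E_l, E_r, Delta)
H_dqd = np.array([[E_l, Delta], [Delta, E_r]])
print(np.sort(np.linalg.eigvalsh(H_dqd)), (E_d, E_D))   # the two pairs should agree
print("phonon mode bridging the levels: omega_0 = E_D - E_d =", E_D - E_d)
```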
With these operators, the Hamiltonian can be expressed as follows: \[\hat{H}_{\text{DQD}}=E_{D}\hat{D}^{\dagger}\hat{D}+E_{d}\hat{d}^{\dagger}\hat{d}, \tag{7}\] \[\begin{split}\hat{H}_{\text{e-ph}}=&\sum_{q}\lambda_{q}[\sin(2\theta)(\hat{D}^{\dagger}\hat{D}-\hat{d}^{\dagger}\hat{d})\\ &+\cos(2\theta)(\hat{d}^{\dagger}\hat{D}+\hat{D}^{\dagger}\hat{d})](\hat{a}_{q}^{\dagger}+\hat{a}_{q}),\end{split} \tag{8}\] \[\begin{split}\hat{H}_{\text{tun}}=&\sum_{k}\gamma_{Lk}(\sin\theta\hat{D}^{\dagger}+\cos\theta\hat{d}^{\dagger})\hat{d}_{Lk}\\ &+\gamma_{Rk}(\cos\theta\hat{D}^{\dagger}-\sin\theta\hat{d}^{\dagger})\hat{d}_{Rk}+\text{H.c.}\end{split} \tag{9}\] This model has been previously explored by the authors in various studies. In Refs. [46; 47], the authors investigated the realization of thermoelectric diodes and transistors by leveraging phonon-assisted hopping transport processes within three-terminal double quantum dot systems. Remarkably, these studies demonstrated the attainment of the thermal amplification effect within the linear-response regime, without relying on negative differential thermal conductivity stemming from inelastic transport processes. Additionally, in Ref. [64], the authors delved into the statistical properties of charge and heat transport processes, along with an examination of thermoelectric efficiency and its associated fluctuations in vibrationally assisted electronic conducting junctions featuring electron-phonon interactions. While these aforementioned works primarily focused on issues related to steady-state thermoelectric energy conversion, coherence effects in periodically driven quantum heat engines have remained relatively unexplored. ## III Full counting statistics for particle and energy currents ### Evolution Matrix with Off-Diagonal Coherence Elements In this work, we describe the three-terminal system using the two-time measurement protocol [65; 66]. We define the characteristic function as [65; 67]: \[\begin{split}&\mathcal{Z}(\lambda_{p},\lambda_{E},\lambda_{\text{ph}})\\ &=\langle e^{i\lambda_{p}\hat{A}_{p}+i\lambda_{E}\hat{A}_{E}+i\lambda_{\text{ph}}\hat{A}_{\text{ph}}}e^{-i\lambda_{p}\hat{A}_{p}(t)-i\lambda_{E}\hat{A}_{E}(t)-i\lambda_{\text{ph}}\hat{A}_{\text{ph}}(t)}\rangle.\end{split} \tag{10}\] Here, \(\lambda_{p,E,\text{ph}}\) are counting parameters for particles, energy, and phonons, respectively. \(\hat{A}_{p}\), \(\hat{A}_{E}\), and \(\hat{A}_{\text{ph}}\) are the corresponding operators. Specifically, \(\hat{A}_{p}\) represents the number operator for the total particles in the \(R\) reservoir, \(\hat{A}_{E}\) is the Hamiltonian operator for the \(R\) reservoir, and \(\hat{A}_{\text{ph}}\) is the Hamiltonian operator for the phonon bath. Time evolution follows the Heisenberg representation, and \(\langle\cdot\rangle\) denotes an average with respect to the total initial density matrix. This density matrix has a factorized form with respect to the system (\(S\)) and the (\(L\), \(R\), and ph) baths, given by \(\rho_{T}(0)=\rho_{S}(0)\otimes\rho_{L}(0)\otimes\rho_{R}(0)\otimes\rho_{\text{ph}}(0)\). The state of the metal leads is described by a grand canonical distribution, \(\rho_{\alpha}=\exp\left[-\beta_{\text{el}}(\hat{H}_{\alpha}-\mu_{\alpha}\hat{N}_{\alpha})\right]/Z_{\alpha}\) for \(\alpha=L,R\), with \(Z_{\alpha}\) representing the partition function. 
Equation (10) can be reorganized as \(\mathcal{Z}(\lambda_{p},\lambda_{E},\lambda_{\text{ph}})=\text{Tr}[\rho_{\lambda_{p},\lambda_{E},\lambda_{\text{ph}}}^{T}(t)]\), where the modified density operator is specified as \(\rho_{\lambda_{p},\lambda_{E},\lambda_{\text{ph}}}^{T}(t)=U_{-\lambda_{p}/2,-\lambda_{E}/2,-\lambda_{\text{ph}}/2}(t)\rho_{T}(0)U_{\lambda_{p}/2,\lambda_{E}/2,\lambda_{\text{ph}}/2}^{\dagger}(t)\). We trace out the heat reservoirs' degrees of freedom (\(\text{Tr}_{B}\)) and express the characteristic function in terms of the reduced density matrix \(\rho_{S}(\lambda,t)\) for the central double-QD system: \(\rho_{S}(\lambda,t)=\text{Tr}_{B}[\rho_{\lambda_{p},\lambda_{E},\lambda_{\text{ph}}}^{T}(t)]\), where the forward propagator is \[\begin{split}&U_{-\lambda_{p}/2,-\lambda_{E}/2,-\lambda_{\text{ph}}/2}(t)\\ =&e^{-i\frac{\lambda_{\text{ph}}}{2}H_{\text{ph}}-i\frac{\lambda_{E}}{2}H_{R}-i\frac{\lambda_{p}}{2}N_{R}}U(t)e^{i\frac{\lambda_{\text{ph}}}{2}H_{\text{ph}}+i\frac{\lambda_{E}}{2}H_{R}+i\frac{\lambda_{p}}{2}N_{R}}\\ =&e^{-iH_{-\lambda_{p}/2,-\lambda_{E}/2,-\lambda_{\text{ph}}/2}(t)},\end{split} \tag{11}\] with the counting-field-dependent total Hamiltonian: \[\begin{split}&H_{-\lambda_{p}/2,-\lambda_{E}/2,-\lambda_{\text{ph}}/2}\\ =&e^{-i\frac{\lambda_{\text{ph}}}{2}H_{\text{ph}}-i\frac{\lambda_{E}}{2}H_{R}-i\frac{\lambda_{p}}{2}N_{R}}H_{T}e^{i\frac{\lambda_{\text{ph}}}{2}H_{\text{ph}}+i\frac{\lambda_{E}}{2}H_{R}+i\frac{\lambda_{p}}{2}N_{R}}\\ =&H_{\text{DQD}}+H_{\text{ph}}+H_{\text{lead}}+V_{-\lambda/2},\end{split} \tag{12}\] with \(V_{-\lambda/2}\) defined as: \[\begin{split}V_{-\lambda/2}=&\sum_{q}\lambda_{q}[\sin(2\theta)(\hat{D}^{\dagger}\hat{D}-\hat{d}^{\dagger}\hat{d})\\ &+\cos(2\theta)(\hat{d}^{\dagger}\hat{D}+\hat{D}^{\dagger}\hat{d})](e^{i\frac{\lambda_{\text{ph}}}{2}\omega_{q}}\hat{a}_{q}+\text{H.c.})\\ &+\sum_{k}\gamma_{Lk}(\sin\theta\hat{D}^{\dagger}+\cos\theta\hat{d}^{\dagger})\hat{d}_{Lk}\\ &+\gamma_{Rk}e^{-i\frac{\lambda_{p}}{2}-i\frac{\lambda_{E}}{2}\varepsilon_{Rk}}(\cos\theta\hat{D}^{\dagger}-\sin\theta\hat{d}^{\dagger})\hat{d}_{Rk}+\text{H.c.}\end{split} \tag{13}\] In standard quantum master equation approaches for quantum dot systems [68], where the dot-reservoir coupling is considered weak, taking the time derivative of \(\rho_{S}(\lambda,t)\) and integrating the equation of motion, we obtain \[\begin{split}&\frac{\partial}{\partial t}\rho_{S}(\lambda,t)\\ =&-i[H_{\rm DQD},\rho_{S}]+\int_{0}^{t}dt^{\prime}\text{Tr}_{B}[V_{-\lambda/2}(t)V_{-\lambda/2}(t^{\prime})\rho_{\lambda}^{T}(t^{\prime})\\ &-V_{-\lambda/2}(t)\rho_{\lambda}^{T}(t^{\prime})V_{\lambda/2}(t^{\prime})-V_{-\lambda/2}(t^{\prime})\rho_{\lambda}^{T}(t^{\prime})V_{\lambda/2}(t)\\ &+\rho_{\lambda}^{T}(t^{\prime})V_{\lambda/2}(t^{\prime})V_{\lambda/2}(t)].\end{split} \tag{14}\] The operators are expressed in the interaction representation. We follow standard steps, i.e., the Born-Markovian approximation, as in the derivation of the weak-coupling Redfield equation [65; 69]. 
Here we include the off-diagonal coherence elements of the reduced density matrix. Denoting the electronic eigenstates by \(|\chi\rangle\) with \(\chi=0,D,d\), we collect the populations and coherences of \(\rho_{S}(\lambda,t)\) into the vector \(|\mathbf{P}\rangle=(\langle 0|\rho_{S}(\lambda,t)|0\rangle,\ \langle D|\rho_{S}(\lambda,t)|D\rangle,\ \langle d|\rho_{S}(\lambda,t)|d\rangle,\ \langle D|\rho_{S}(\lambda,t)|d\rangle,\ \langle d|\rho_{S}(\lambda,t)|D\rangle)\) [70], which evolves according to: \[\frac{d|\mathbf{P}^{\lambda}(t)\rangle}{dt}=\mathbf{H}(\lambda)|\mathbf{P}^{\lambda}(t)\rangle, \tag{15}\] where \(\mathbf{H}(\lambda)\) is a \(5\times 5\) matrix whose elements are given in App. A. ### Geometric Berry-Phase-Induced Particle, Heat, and Phononic Currents For thermoelectric heat engines, modulation can be imposed on parameters such as \(\Gamma_{i}(t)\), \(\mu_{i}(t)\), \(T_{i}(t)\), and \(E_{i}(t)\) (where \(i=l,r\)). According to the large deviation principle and adiabatic perturbation theory, the cumulant generating function for the double QD system, denoted as \(\mathcal{G}_{\rm tot}\), can be separated into two parts in the long-time limit, with \(\mathcal{T}\) the driving period [71; 72; 73], \[\mathcal{Z}\approx e^{\mathcal{G}_{\rm tot}}=e^{\mathcal{G}_{\rm dyn}+\mathcal{G}_{\rm geo}}, \tag{16}\] \[\mathcal{G}_{\rm dyn}=\int_{0}^{\mathcal{T}}dt\,G(\lambda,t),\] \[\mathcal{G}_{\rm geo}=-\int_{0}^{\mathcal{T}}dt\,\langle\varphi(\lambda,t)|\partial_{t}|\psi(\lambda,t)\rangle.\] The symbol \(G\) denotes the eigenvalue of the evolution matrix \(\mathbf{H}\) with the largest real part [74]. The vectors \(|\psi(\lambda,t)\rangle\) and \(\langle\varphi(\lambda,t)|\) correspond to the normalized right and left eigenvectors, respectively [75]. The cumulant generating function encompasses both dynamic and geometric components. The dynamical component, \(\mathcal{G}_{\rm dyn}\), characterizes the temporal average and delineates the dynamic aspects of particle and heat transfer. On the other hand, the geometric contribution, \(\mathcal{G}_{\rm geo}\), arises from adiabatic cyclic evolution and requires at least two modulated parameters to manifest its effects [71; 72; 73; 76]. The particle current flowing from the right reservoir into the system is given by [64]: \[\langle N_{R}\rangle=\frac{\partial\mathcal{G}_{\rm tot}}{\partial(i\lambda_{p})}\Big|_{\lambda=0}, \tag{17}\] and the energy current is expressed as \[\langle E_{R}\rangle=\frac{\partial\mathcal{G}_{\rm tot}}{\partial(i\lambda_{E})}\Big|_{\lambda=0}. \tag{18}\] The electronic heat current extracted from the right reservoir is defined as \[\langle Q_{R}\rangle=\langle E_{R}\rangle-\mu_{R}\langle N_{R}\rangle. \tag{19}\] The phononic heat current is given by: \[\langle Q_{\rm ph}\rangle=\frac{\partial\mathcal{G}_{\rm tot}}{\partial(i\lambda_{\rm ph})}\Big|_{\lambda=0}. \tag{20}\] Similarly, the particle current \(\langle N_{L}\rangle\) and energy current \(\langle E_{L}\rangle\) flowing from the left (\(L\)) reservoir into the central system can be obtained by introducing the corresponding counting fields, as detailed in App. B. Moreover, particle and energy conservation imply that [77; 78], \[\langle N_{L}\rangle+\langle N_{R}\rangle=0, \tag{21}\] \[\langle W_{I}\rangle=-(\langle E_{L}\rangle+\langle E_{R}\rangle+\langle Q_{\rm ph}\rangle).\] Here, \(\langle W_{I}\rangle\) represents the input work done by the external driving, which vanishes when the driving is removed. 
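As a concrete illustration of Eqs. (16)-(21), the sketch below evaluates the dynamic and geometric parts of the cumulant generating function for a toy two-state counting-field generator. The generator, its modulated rates, and all numbers are assumptions chosen only to produce a nonzero pump; it merely stands in for the \(5\times 5\) matrix \(\mathbf{H}(\lambda)\) of App. A. The gauge is fixed by normalizing the right eigenvector to unit component sum and imposing \(\langle\varphi|\psi\rangle=1\); the pumped particle number then follows from a finite-difference derivative with respect to the counting field.

```python
import numpy as np

def generator(lam, t, Omega):
    """Toy 2-state counting-field generator (stand-in for the 5x5 H(lambda) of App. A):
    a single level is filled at rate a(t) and emptied into the counted lead at rate b(t)."""
    a = 1.0 + 0.8 * np.cos(Omega * t)
    b = 1.5 + 0.8 * np.sin(Omega * t)
    return np.array([[-a, b * np.exp(1j * lam)],
                     [ a, -b                  ]], dtype=complex)

def dominant_pair(H):
    """Eigenvalue of largest real part with gauge-fixed right/left eigenvectors."""
    w, vr = np.linalg.eig(H)
    wl, vl = np.linalg.eig(H.conj().T)          # eigenvectors of H^dagger give left eigenvectors
    k = int(np.argmax(w.real))
    kl = int(np.argmin(np.abs(wl.conj() - w[k])))
    psi = vr[:, k] / vr[:, k].sum()             # gauge: components of |psi> sum to one
    phi = np.conj(vl[:, kl])
    phi = phi / (phi @ psi)                     # biorthonormality <phi|psi> = 1
    return w[k], psi, phi

def cgf_parts(lam, nt=2000, T=1.0):
    """Dynamic and geometric parts of the CGF over one driving period, Eq. (16)."""
    ts = np.linspace(0.0, T, nt, endpoint=False)
    dt = T / nt
    Gs, psis, phis = [], [], []
    for t in ts:
        g, psi, phi = dominant_pair(generator(lam, t, 2 * np.pi / T))
        Gs.append(g); psis.append(psi); phis.append(phi)
    psis, phis = np.array(psis), np.array(phis)
    G_dyn = np.sum(Gs) * dt
    dpsi_dt = np.gradient(psis, dt, axis=0)     # d|psi>/dt (one-sided at the period edges)
    G_geo = -np.einsum('ti,ti->', phis, dpsi_dt) * dt
    return G_dyn, G_geo

# Pumped particle number per period, Eq. (17): derivative of G_tot with respect to
# (i*lambda) at lambda = 0, evaluated here by a central finite difference.
d = 1e-4
G_plus, G_minus = sum(cgf_parts(+d)), sum(cgf_parts(-d))
print("particles counted per driving period:", ((G_plus - G_minus) / (2j * d)).real)
```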
In our work, we constrain all parameters of the driving protocol to the adiabatic driving regime: the driving period is chosen as \(\mathcal{T}=10^{-12}\) s, corresponding to \(\hbar\Omega\approx 4\times 10^{-2}\) meV. The system-reservoir coupling rate \(\Gamma_{i}=4\) meV (\(i=l,r,\text{ph}\)) is much greater than \(\hbar\Omega\), so the adiabatic approximation remains valid [78]. ## IV Thermoelectric efficiency and output work of thermoelectric heat engines In this section, we operate the three-terminal setup as a thermoelectric heat engine, harvesting heat from the hot phonon bath and converting it into useful output work. The electrochemical potential difference is defined as \(\Delta\mu=\mu_{L}-\mu_{R}\), with the equilibrium chemical potential \(\mu\equiv(\mu_{L}+\mu_{R})/2\). The average output work of the heat engine is \[\left\langle W_{\text{out}}\right\rangle=\left(\mu_{L}-\mu_{R}\right)\left\langle N_{L}\right\rangle. \tag{22}\] Here \(W_{\text{out}}\) is the useful output work. The entropy production of the whole system is \(\left\langle S\right\rangle=-\sum_{v=L,R,\text{ph}}\langle Q_{v}\rangle/T_{v}\) [79]. Using energy and particle conservation, the entropy production takes on a specific form [79] \[T_{L}\langle S\rangle=\left(1-T_{L}/T_{\text{ph}}\right)\left\langle Q_{\text{ph}}\right\rangle-\left\langle W_{\text{out}}\right\rangle+\left\langle W_{I}\right\rangle. \tag{23}\] When the electric power \(\left\langle W_{\text{out}}\right\rangle>0\), the thermal machine operates as a heat engine. (i) If the input energy is negative, i.e., \(\left\langle W_{I}\right\rangle<0\), the efficiency of the heat engine is given by \[\left\langle\eta\right\rangle=\frac{\left\langle W_{\text{out}}\right\rangle}{(1-T_{L}/T_{\text{ph}})\langle Q_{\text{ph}}\rangle}. \tag{24}\] This definition of the efficiency is consistent with the energy efficiency of steady-state thermoelectric transport; e.g., at the Carnot limit \(\left\langle W_{\text{out}}\right\rangle/\langle Q_{\text{ph}}\rangle=1-T_{L}/T_{\text{ph}}\), the efficiency in Eq. (24) becomes unity. (ii) When the input energy is nonnegative, i.e., \(\left\langle W_{I}\right\rangle\geq 0\), the efficiency of the heat engine becomes \[\left\langle\eta\right\rangle=\frac{\left\langle W_{\text{out}}\right\rangle}{(1-T_{L}/T_{\text{ph}})\langle Q_{\text{ph}}\rangle+\left\langle W_{I}\right\rangle}. \tag{25}\] According to the second law of thermodynamics, the thermoelectric engine efficiency is bounded from above by \(\left\langle\eta\right\rangle\leq 1\) [77, 80]. In Fig. 2, we demonstrate the realization of the thermoelectric engine in the three-terminal double quantum dot system by choosing the left and right quantum dot energies, i.e., \(E_{l}\) and \(E_{r}\), as the modulating parameters [62, 81]. First, we study the output work and efficiency of the three-terminal inelastic thermoelectric heat engine. As shown in Figs. 2(a) and 2(b), with fixed temperatures of \(k_{B}T_{L}=k_{B}T_{R}=10\) meV and \(k_{B}T_{\text{ph}}=12\) meV, we observe that the quantum coherence effect (nonzero off-diagonal elements of the system density matrix) yields a significant improvement in the optimal work and efficiency, compared with the counterparts in the absence of quantum coherence. More specifically, the maximum output work increases from 0.05 meV to 0.3 meV as the coherence effect is taken into account. 
Analogously, coherent transport leads to a maximum efficiency of 30%, whereas it reaches only 2% for the incoherent transport processes. It is worth noting that the temperatures considered in this analysis are consistent with experimental conditions, and such a tiny temperature gradient usually requires a small cutoff voltage [82]. To comprehend the physical origin of the enhanced heat-engine performance, let us first delve into how coherent transport impacts the particle and phononic currents. It is evident that coherent transport leads to a significant improvement in the particle current. Here, our analysis is carried out around the open-circuit voltage \(\Delta\mu_{\text{oc}}\) (the chemical potential difference at which the particle current vanishes, i.e., \(\left\langle N_{R}\right\rangle=0\)) and the short-circuit current \(\left\langle N_{R}\right\rangle_{\text{sc}}\) (the particle current at \(\Delta\mu=0\)). The product of these two quantities, denoted as \(\mathcal{P}=-\left\langle N_{R}\right\rangle_{\text{sc}}\Delta\mu_{\text{oc}}\), characterizes the maximum work achievable. Notably, coherent transport yields a \(\mathcal{P}\) more than ten times larger than that obtained from incoherent transport, aligning with the observed improvement in the maximum work. Moreover, we scrutinize the impact of the coherence effect on the phononic current \(Q_{\text{ph}}\) and the driving energy \(W_{I}\). In Fig. 2(c), the input heat current \(\left\langle Q_{\text{ph}}\right\rangle\) at \(\Delta\mu=0\) is reduced to approximately twice the value obtained in the absence of quantum coherence. Additionally, Fig. 2(d) shows a decrease in the input work \(W_{I}\) due to the quantum coherence effect. These cooperative factors contribute to the overall reduction in the input energy \((1-T_{L}/T_{\text{ph}})\langle Q_{\text{ph}}\rangle+\left\langle W_{I}\right\rangle\), shedding light on the reasons behind the enhanced efficiency. We utilize a coherence measure (i.e., \(\left|\rho_{Dd}\right|\)) to quantitatively estimate the effect of quantum coherence on the adiabatic transport [83]. In Fig. 3(a), we plot the density matrix element \(\rho_{dd}\) as a function of the driving time, both with and without coherence. It is found that the presence of \(\left|\rho_{Dd}\right|\) suppresses the increase of the density matrix element \(\rho_{dd}\). Moreover, \(\left|\rho_{Dd}\right|\) itself, shown in Fig. 3(b), takes a finite value and oscillates periodically within one driving period, and it is enhanced by the inter-dot tunneling. The contribution from \(\left|\rho_{Dd}\right|\) to the currents can be analytically expressed as \(\left\langle N_{R}\right\rangle=\int_{0}^{\mathcal{T}}dt\sum_{i=D,d}[-\gamma_{r,i0}\rho_{ii}(t)+\gamma_{r,0i}\rho_{00}(t)]+\frac{1}{4}\sin 2\theta[\tilde{\gamma}_{r,i0}(\rho_{Dd}+\rho_{dD})]\), with \(\gamma_{r,i0}=\Gamma_{r}\lambda_{0i}[1-f_{r}(E_{i})]\), \(\gamma_{r,0i}=\Gamma_{r}\lambda_{0i}f_{r}(E_{i})\), \(\tilde{\gamma}_{r,i0}=\Gamma_{r}[1-f_{r}(E_{i})]\) and \(\tilde{\gamma}_{r,0i}=\Gamma_{r}f_{r}(E_{i})\) (\(\lambda_{0D}=\sin^{2}\theta\), \(\lambda_{0d}=\cos^{2}\theta\)) [63; 84], which is depicted in Figs. 4(a) and 4(b). Therefore, we conclude that quantum coherence indeed plays a pivotal role in improving the performance of quantum heat engines. 
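The engine bookkeeping used throughout this section, Eqs. (22)-(25), can be summarized in a short helper; the numbers in the usage line are illustrative placeholders rather than values taken from the figures.

```python
def engine_performance(N_L, Q_ph, W_I, mu_L, mu_R, T_L, T_ph):
    """Average output work and efficiency following Eqs. (22), (24) and (25)."""
    W_out = (mu_L - mu_R) * N_L                       # Eq. (22)
    carnot_input = (1.0 - T_L / T_ph) * Q_ph          # absorbed phonon heat weighted by the Carnot factor
    if W_out <= 0.0:
        return W_out, None                            # not operating as a heat engine
    if W_I < 0.0:
        return W_out, W_out / carnot_input            # Eq. (24)
    return W_out, W_out / (carnot_input + W_I)        # Eq. (25)

# Placeholder numbers per driving period (meV), for illustration only.
print(engine_performance(N_L=0.05, Q_ph=5.0, W_I=0.2,
                         mu_L=1.0, mu_R=0.0, T_L=10.0, T_ph=12.0))
```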
To offer a deeper understanding of how coherent transport impacts the particle and phononic currents, we divide these currents into two distinct components, the geometric and dynamic components, as illustrated in Figs. 4(c) and 4(d). The former component arises as a consequence of the external driving, notably resulting in the input work \(W_{I}\). In contrast, the latter is attributed to thermodynamic biases, such as differences in chemical potentials and temperature gradients. It is evident that geometric currents can counteract the direction of the thermodynamic biases, allowing for the realization of a geometric thermoelectric pumping effect in the three-terminal double-QD system. It is interesting to find that both dynamic and geometric currents experience improvements due to quantum coherence effects. Figure 4: The average particle current \(\left\langle N_{R}\right\rangle\) as a function of \(\Delta\mu\) with (a) coherence and (b) incoherence. (c) The geometric part \(\left\langle N_{R}\right\rangle|_{\text{geo}}\) and (d) the dynamic part \(\left\langle N_{R}\right\rangle|_{\text{dyn}}\) of the particle current as a function of \(\Delta\mu\) for both the coherent and incoherent cases. The parameters are the same as in Fig. 2. Next, we focus on the geometric component of the currents. In scenarios where the pair of parameters \(E_{l}(t)\) and \(E_{r}(t)\) is subjected to periodic driving [85], the gauge-invariant Berry curvature is harnessed to describe the system's geometric behavior. This Berry curvature is expressed as [86, 71, 87]: \[\mathcal{F}_{E_{l}E_{r}}=\langle\partial_{E_{l}}\varphi|\partial_{E_{r}}\psi\rangle-\langle\partial_{E_{r}}\varphi|\partial_{E_{l}}\psi\rangle, \tag{26}\] and the geometric contribution to the cumulant generating function is given by \(\mathcal{G}_{\mathrm{geo}}=-\iint dE_{l}\,dE_{r}\,\mathcal{F}_{E_{l}E_{r}}\) [88]. As shown in Fig. 5, the Berry-phase effect introduces geometric particle and heat currents against the thermodynamic biases. Specifically, the Berry curvatures for the particle current, considering both incoherence [Fig. 5(a)] and coherence effects [Fig. 5(b)], guarantee the existence of geometric currents. However, quantum coherence further enhances the Berry curvature within the driving zone (enclosed by black circles), which significantly strengthens the geometric currents. In the previous section, we elucidated the fundamental distinctions between coherent and incoherent thermoelectric heat engines in nonlinear transport regimes. In this section, we aim to investigate the impact of various parameters, such as the tunneling strength between the double quantum dots and the geometric phase, on the average thermoelectric efficiency and output work, without restricting ourselves to a specific parameter choice. To begin, we explore how coherent transport behaves with respect to these parameters. In Fig. 6, we present the average output work and efficiency of the thermoelectric heat engines as functions of the tunneling strength (\(\Delta\)) and the phase (\(\phi\)). For each configuration, we optimize the performance over the chemical potential difference (i.e., we report values at the \(\Delta\mu\) that maximizes the efficiency and work). Fig. 6(a) reveals that coherent transport has a pronounced impact on the maximum output work. 
For instance, with \(\phi=\pi/2\) and \(\Delta=8\,\mathrm{meV}\), the maximum work increases from \(0.1\,\mathrm{meV}\) to \(0.4\,\mathrm{meV}\) due to the coherent effect. However, it is noteworthy that this enhancement in work comes at the cost of a reduction in efficiency, which drops from \(60\%\) to \(10\%\). This trade-off between work and efficiency aligns with previous findings in the field of thermal machines [80, 89, 9]. We also calculate the enhancement factor \(\langle W^{\mathrm{max}}\rangle\left|{}_{\mathrm{co}}/\langle W^{\mathrm{max}} \rangle\right|_{\mathrm{inco}}\) for different tunneling strengths and geometric phases, as shown in Fig. 6(c). This enhancement factor can be as high as 10, indicating a substantial improvement in work with increasing tunneling strength and decreasing phase. Moreover, as shown in Figs. 6(a)-6(d), although the geometric phase can increase the magnitude of the geometric current, it doesn't consistently improve the overall performance of heat engines. This discrepancy can be attributed to the potential inconsistency between the direction of geometric current and dynamics, which can lead to a reduction in the total current magnitude in the system. Consequently, the average output work may also decrease accordingly, see Figs. 6(a) and 6(c). Meanwhile, this effect also doesn't hold for the thermodynamic efficiency. Since the input energy, i.e., phononic heat current, increases with an increase in the geometric phase, efficiency is optimized when the phase is either small or large, as illustrated in Figs. 6(b) and 6(d). Additionally, the tunneling strength between the quantum dots plays an important role in enhancing quantum coherence. As shown in Figs. 6(a) and 6(b), though tunneling strength can improve the efficiency of quantum engines, it leads to reduced average output work in the regime where coherence enhances efficiency. In other words, quantum coherence yields power losses. This finding aligns with the conclusions drawn by Brandner et al. [37; 90]. These results emphasize the crucial role of coherence effects in quantum heat engines, as they not only transform energy but also alter the state of quantum systems. ## V Conclusion In summary, this work mainly explores the impact of quantum coherence on the thermodynamic performance in periodically driven thermoelectric heat engines. We focus on the three-terminal setup involving electronic and phononic heat reservoirs, aiming to improve the performance of thermoelectric heat engines. Through external modulations and the consideration of geometric and dynamic currents, the study unveils the pivotal role of quantum coherence in achieving maximum output work and energy conversion efficiency. Notably, it demonstrates that thermoelectric heat engines that embrace quantum coherence outperform their incoherent counterparts by enhancing average work generation and efficiency. And the quantum coherence is characterized via the reduced system density matrix elements. By analyzing the Berry curvature we further reveal the mechanism of quantum coherence on the nonequilibrium currents in quantum heat engines. Beyond shedding light on energy conversion and efficiency improvements, it opens up new possibilities for controlling quantum systems with additional levels of precision and performance enhancement. ## VI Acknowledgement This work was supported by the funding for the National Natural Science Foundation of China under Grants No. 12125504, No. 12074281, No. 11935010 and No. 
12305050, Jiangsu Key Disciplines of the Fourteenth Five-Year Plan (Grant No. 2021135), the Natural Science Foundation of Jiangsu Higher Education Institutions of China (Grant No. 23KJB140017), and the Opening Project of Shanghai Key Laboratory of Special Artificial Microstructure Materials and Technology. Appendix A The detailed expression of the evolution \(\mathbf{H}(\lambda)\) for counting the right reservoir \[H_{11} =-\Gamma_{l}\cos^{2}\theta f_{l}(E_{D})-\Gamma_{l}\sin^{2}\theta f_{ l}(E_{d})-\Gamma_{r}\sin^{2}\theta f_{r}(E_{D})-\Gamma_{r}\cos^{2}\theta f_{r}(E_{d}),\] \[H_{12} =\Gamma_{l}\cos^{2}\theta[1-f_{l}(E_{D})]+\Gamma_{r}\sin^{2}\theta [1-f_{r}(E_{D})]e^{-i(\lambda_{p}+\lambda_{E}E_{D})},\] \[H_{13} =\Gamma_{l}\sin^{2}\theta[1-f_{l}(E_{d})]+\Gamma_{r}\cos^{2}\theta [1-f_{r}(E_{d})]e^{-i(\lambda_{p}+\lambda_{E}E_{d})},\] \[H_{14} =H_{15}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[[1-f_{l}(E_{D}) ]+[1-f_{l}(E_{d})]]\] \[\qquad\qquad-\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[[1-f_{r}(E _{D})]e^{-i(\lambda_{p}+\lambda_{E}E_{D})}+[1-f_{r}(E_{d})]e^{-i(\lambda_{p}+ \lambda_{E}E_{d})}],\] \[H_{21} =\Gamma_{l}\cos^{2}\theta f_{l}(E_{D})+\Gamma_{r}\sin^{2}\theta f _{r}(E_{D})e^{i(\lambda_{p}+\lambda_{E}E_{D})},\] \[H_{22} =-\Gamma_{l}\cos^{2}\theta[1-f_{l}(E_{D})]-\Gamma_{r}\sin^{2} \theta[1-f_{r}(E_{D})]-\Gamma_{\rm ph}\cos^{2}2\theta\left[1+n(\omega_{0}) \right],\] \[H_{23} =\Gamma_{\rm ph}\cos^{2}2\theta e^{i\lambda_{\rm ph}\omega_{0}}\, n(\omega_{0}),\] \[H_{24} =H_{25}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{d}) ]-\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{d})]+\frac{1}{2}\Gamma_ {\rm ph}\sin 2\theta\cos 2\theta(e^{i\lambda_{\rm ph}\omega_{0}}-1)\left[1+n( \omega_{0})\right],\] \[H_{31} =\Gamma_{l}\sin^{2}\theta f_{l}(E_{d})+\Gamma_{r}\cos^{2}\theta f _{r}(E_{d})e^{i(\lambda_{p}+\lambda_{E}E_{d})},\] \[H_{32} =\Gamma_{\rm ph}\cos^{2}2\theta e^{-i\lambda_{\rm ph}\omega_{0}} \left[1+n(\omega_{0})\right],\] \[H_{33} =-\Gamma_{l}\sin^{2}\theta[1-f_{l}(E_{d})]-\Gamma_{r}\cos^{2} \theta[1-f_{r}(E_{d})]-\Gamma_{\rm ph}\cos^{2}2\theta n(\omega_{0}),\] \[H_{34} =H_{35}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{D}) ]-\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{D})]-\frac{1}{2}\Gamma _{\rm ph}\sin 2\theta\cos 2\theta(e^{-i\lambda_{\rm ph}\omega_{0}}-1)\left[1+n( \omega_{0})\right],\] \[H_{41} =H_{51}=-\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[f_{l}(E_{D})+f _{l}(E_{d})]+\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[f_{r}(E_{D})e^{i( \lambda_{p}+\lambda_{E}E_{D})}+f_{r}(E_{d})e^{i(\lambda_{p}+\lambda_{E}E_{d}) }],\] \[H_{42} =H_{52}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{D}) ]-\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{D})]+\frac{1}{2}\Gamma _{\rm ph}\sin 2\theta\cos 2\theta(1+e^{-i\lambda_{\rm ph}\omega_{0}})\left[1+n( \omega_{0})\right],\] \[H_{43} =H_{53}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{d}) ]-\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{d})]-\frac{1}{2}\Gamma _{\rm ph}\sin 2\theta\cos 2\theta(e^{i\lambda_{\rm ph}\omega_{0}}+1)\,n(\omega_{0}),\] \[H_{44} =H_{55}=-\frac{1}{2}\Gamma_{l}\sin^{2}\theta[1-f_{l}(E_{D})]- \frac{1}{2}\Gamma_{l}\cos^{2}\theta[1-f_{l}(E_{d})]-\frac{1}{2}\Gamma_{r}\cos^ {2}\theta[1-f_{r}(E_{D})]-\frac{1}{2}\Gamma_{r}\sin^{2}\theta[1-f_{r}(E_{d})]\] \[\qquad\qquad-\frac{1}{2}\Gamma_{\rm ph}\cos^{2}2\theta[1+2n(\omega_ {0})],\] \[H_{45} =H_{54}=\frac{1}{2}\Gamma_{\rm ph}\cos^{2}2\theta e^{-i\lambda_{ \rm ph}\omega_{0}}\left[1+n(\omega_{0})\right]+\frac{1}{2}\Gamma_{\rm ph} 
\cos^{2}2\theta e^{i\lambda_{\rm ph}\omega_{0}}\,n(\omega_{0}). \tag{11}\] Here, \(\Gamma_{i}=2\pi\sum_{k}|\gamma_{i,k}|^{2}\delta(E-E_{i,k})\) is the dot-electronic reservoir hybridization energy, and \(\Gamma_{\rm ph}=2\pi\sum_{q}\lambda_{q}^{2}\delta(\omega-\omega_{q})\) is the coupling energy of the particular mode \(\omega_{0}\) to the phonon bath. \(f_{i}(E_{i})=\{\exp[(E_{i}-\mu_{i})/k_{B}T_{i}]+1\}^{-1}\) is the Fermi-Dirac distribution for the electronic reservoir with chemical potential \(\mu_{i}\) and the temperature \(k_{B}T_{i}\), and \(n(\omega_{0})=[\exp(\omega_{0}/k_{B}T_{\rm ph})-1]^{-1}\) is the Bose-Einstein distribution function in the phononic reservoir with \(\omega_{0}=E_{D}-E_{d}\). ## Appendix B The detailed expression of the evolution for counting the left reservoir Similarly, the particle current \(N_{L}\) and energy current \(E_{L}\) flowing from the left (L) reservoir into the central system can also be obtained by introducing the excitation and relaxation rates with the counting fields as \[H_{11} =-\Gamma_{l}\cos^{2}\theta f_{l}(E_{D})-\Gamma_{l}\sin^{2}\theta f_{ l}(E_{d})-\Gamma_{r}\sin^{2}\theta f_{r}(E_{D})-\Gamma_{r}\cos^{2}\theta f_{r}(E_{d}),\] \[H_{12} =\Gamma_{l}\cos^{2}\theta[1-f_{l}(E_{D})]e^{-i(\lambda_{p}+\lambda _{E}E_{D})}+\Gamma_{r}\sin^{2}\theta[1-f_{r}(E_{D})],\] \[H_{13} =\Gamma_{l}\sin^{2}\theta[1-f_{l}(E_{d})]e^{-i(\lambda_{p}+\lambda _{E}E_{D})}+\Gamma_{r}\cos^{2}\theta[1-f_{r}(E_{d})],\] \[H_{14} =H_{15}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[e^{-i(\lambda_{p }+\lambda_{E}E_{D})}[1-f_{l}(E_{D})]+e^{-i(\lambda_{p}+\lambda_{E}E_{d})}[1-f_ {l}(E_{d})]]\] \[\qquad\quad-\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[[1-f_{r}(E _{D})]+[1-f_{r}(E_{d})]],\] \[H_{21} =\Gamma_{l}\cos^{2}\theta e^{i(\lambda_{p}+\lambda_{E}E_{D})}f_{ l}(E_{D})+\Gamma_{r}\sin^{2}\theta f_{r}(E_{D}),\] \[H_{22} =-\Gamma_{l}\cos^{2}\theta[1-f_{l}(E_{D})]-\Gamma_{r}\sin^{2} \theta[1-f_{r}(E_{D})]-\Gamma_{\rm ph}\cos^{2}2\theta\left[1+n(\omega_{0}) \right],\] \[H_{23} =\Gamma_{\rm ph}\cos^{2}2\theta e^{i\lambda_{ph}\omega_{0}}\,n( \omega_{0}),\] \[H_{24} =H_{25}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{d})] -\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{d})]+\frac{1}{2}\Gamma_ {\rm ph}\sin 2\theta\cos 2\theta(e^{i\lambda_{ph}\omega_{0}}-1)\left[1+n( \omega_{0})\right],\] \[H_{31} =\Gamma_{l}\sin^{2}\theta e^{i(\lambda_{p}+\lambda_{E}E_{d})}f_{ l}(E_{d})+\Gamma_{r}\cos^{2}\theta f_{r}(E_{d}),\] \[H_{32} =\Gamma_{\rm ph}\cos^{2}2\theta e^{-i\lambda_{ph}\omega_{0}}\left[ 1+n(\omega_{0})\right],\] \[H_{33} =-\Gamma_{l}\sin^{2}\theta[1-f_{l}(E_{d})]-\Gamma_{r}\cos^{2} \theta[1-f_{r}(E_{d})]-\Gamma_{\rm ph}\cos^{2}2\theta n(\omega_{0}),\] \[H_{34} =H_{35}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{D})] -\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{D})]-\frac{1}{2}\Gamma_ {\rm ph}\sin 2\theta\cos 2\theta(e^{-i\lambda_{ph}\omega_{0}}-1)\left[1+n( \omega_{0})\right],\] \[H_{41} =H_{51}=-\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[e^{i(\lambda_{p }+\lambda_{E}E_{D})}f_{l}(E_{D})+e^{i(\lambda_{p}+\lambda_{E}E_{d})}f_{l}(E_{d })]]+\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[f_{r}(E_{D})+f_{r}(E_{d})],\] \[H_{42} =H_{52}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{D})] -\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{D})]+\frac{1}{2}\Gamma_ {\rm ph}\sin 2\theta\cos 2\theta(1+e^{-i\lambda_{ph}\omega_{0}})\left[1+n(\omega_{0}) \right],\] \[H_{43} =H_{53}=\frac{1}{2}\Gamma_{l}\sin\theta\cos\theta[1-f_{l}(E_{d})] 
-\frac{1}{2}\Gamma_{r}\sin\theta\cos\theta[1-f_{r}(E_{d})]-\frac{1}{2}\Gamma_ {\rm ph}\sin 2\theta\cos 2\theta(e^{i\lambda_{ph}\omega_{0}}+1)\,n(\omega_{0}),\] \[H_{44} =H_{55}=-\frac{1}{2}\Gamma_{l}\sin^{2}\theta[1-f_{l}(E_{D})]- \frac{1}{2}\Gamma_{l}\cos^{2}\theta[1-f_{l}(E_{d})]+\frac{1}{2}\Gamma_{r}\sin^ {2}\theta[1-f_{r}(E_{D})]\] \[\qquad\quad-\frac{1}{2}\Gamma_{r}\cos^{2}\theta[1-f_{r}(E_{d})] -\frac{1}{2}\Gamma_{\rm ph}\cos^{2}2\theta[1+2n(\omega_{0})],\] \[H_{45} =H_{54}=\frac{1}{2}\Gamma_{\rm ph}\cos^{2}2\theta e^{-i\lambda_{ ph}\omega_{0}}\left[1+n(\omega_{0})\right]+\frac{1}{2}\Gamma_{\rm ph}\cos^{2}2 \theta e^{i\lambda_{ph}\omega_{0}}\,n(\omega_{0}). \tag{47}\]
2308.12038
Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages
Recently there has been a significant surge in multimodal learning in terms of both image-to-text and text-to-image generation. However, the success is typically limited to English, leaving other languages largely behind. Building a competitive counterpart in other languages is highly challenging due to the low-resource nature of non-English multimodal data (i.e., lack of large-scale, high-quality image-text data). In this work, we propose MPM, an effective training paradigm for training large multimodal models in non-English languages. MPM demonstrates that Multilingual language models can Pivot zero-shot Multimodal learning across languages. Specifically, based on a strong multilingual large language model, multimodal models pretrained on English-only image-text data can well generalize to other languages in a (quasi)-zero-shot manner, even surpassing models trained on image-text data in native languages. Taking Chinese as a practice of MPM, we build large multimodal models VisCPM in image-to-text and text-to-image generation, which achieve state-of-the-art (open-source) performance in Chinese. To facilitate future research, we open-source codes and model weights at https://github.com/OpenBMB/VisCPM.git.
Jinyi Hu, Yuan Yao, Chongyi Wang, Shan Wang, Yinxu Pan, Qianyu Chen, Tianyu Yu, Hanghao Wu, Yue Zhao, Haoye Zhang, Xu Han, Yankai Lin, Jiao Xue, Dahai Li, Zhiyuan Liu, Maosong Sun
2023-08-23T09:55:41Z
http://arxiv.org/abs/2308.12038v3
# Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages ###### Abstract Recently there has been a significant surge in multimodal learning in terms of both image-to-text and text-to-image generation. However, the success is typically limited to English, leaving other languages largely behind. Building a competitive counterpart in other languages is highly challenging due to the low-resource nature of non-English multimodal data (i.e., lack of large-scale, high-quality image-text data). In this work, we propose MpM, an effective training paradigm for training large multimodal models in low-resource languages. MpM demonstrates that **M**ultilingual language models can **P**ivot zero-shot **M**ultimodal learning across languages. Specifically, based on a strong multilingual large language model, multimodal models pretrained on English-only image-text data can generalize well to other languages in a zero-shot manner for both image-to-text and text-to-image generation, even surpassing models trained on image-text data in native languages. Taking Chinese as a practice of MpM, we build large multimodal models VisCPM in image-to-text and text-to-image generation, which achieve state-of-the-art (open-source) performance in Chinese. To facilitate future research, we open-source the code and model weights at [https://github.com/OpenBMB/VisCPM](https://github.com/OpenBMB/VisCPM). ## 1 Introduction With the rapid advancement of powerful models such as GPT-4 [38] and Stable Diffusion [43] in their multimodal capabilities, large multimodal models have emerged as the latest frontier in the pursuit of Artificial General Intelligence (AGI). Generally, the multimodal generative capabilities across images and text can be divided into two categories: (i) In the field of image-to-text generation, prominent multimodal large language models like GPT-4 [38], LLaVA [32] and InstructBLIP [13] exhibit remarkable multimodal conversational and reasoning abilities based on images; (ii) In the field of text-to-image generation, models such as Imagen [45] and Stable Diffusion [43] excel in generating highly realistic and relevant images based on text prompts. These models possess exceptional capabilities in processing images and text, profoundly reshaping the landscape of multimodal AI in both academia and industry. However, the success of large multimodal models has mainly been achieved within the English community, while the multimodal capabilities in non-English languages significantly trail behind. Bridging this gap is challenging due to the extensive image-text pair data requirements for training multimodal models. For instance, the pretraining of BLIP-2 [28] involves more than 100M high-quality image-text pairs, while Stable Diffusion [43] utilizes more than 2B pairs. As a result of the paucity of such multimodal data resources in non-English languages, the progress of multimodal research in these languages remains hindered. To address this challenge, we propose MpM, an effective training paradigm for large multimodal models in non-English languages. MpM utilizes the **M**ultilingual language model to **P**ivot **M**ultimodal learning across languages and considers English, which contains substantial multimodal data resources, as a pivot between the visual signals and non-English languages, which commonly lack multimodal data. MpM draws inspiration from the _Bilingual Dual-coding Theory_ [12; 39], which argues that _visual semantics are largely language-agnostic_. 
Intuitively, as portrayed in Fig. 1(a), multilingual learners can effectively align the visual semantics with a newly acquired language based on established multimodal and multilingual alignment. Figure 1: Overview of the motivation and architecture of MpM and VisCPM. Simulating the human learning process, MpM also divides the non-English multimodal learning into two consecutive stages: multilingual alignment and multimodal alignment. The former focuses on building a multilingual model, while the latter culminates in a multimodal model spanning multiple languages. Specifically, for multilingual alignment, MpM harnesses a pretrained multilingual large language model (LLM) as the backbone language model, which can provide aligned representations for different languages. Next, for the multimodal alignment, MpM trains the visual modules based on the multilingual model exclusively on English image-text pairs to align English and visual semantics. Similar to how humans learn, using the multilingual model as a pivot point, the resultant multimodal model naturally acquires zero-shot multimodal capability in other non-English languages. Taking Chinese as a practical instance for MpM, we develop Chinese large multimodal models named VisCPM based on the English-Chinese bilingual large language model CPM-Bee [66]. Notably, although it is pretrained exclusively on English image-text pairs, the zero-shot performance of VisCPM in Chinese still surpasses that of existing Chinese multimodal models trained on image-text pairs in native Chinese. The promising performance of MpM in Chinese sheds light on its potential application to a broader range of languages. Following the same training process, we further extend MpM to develop a multilingual multimodal chat model supporting six languages based on LLaMA [52]. The six languages include _English, German, French, Spanish, Italian_, and _Portuguese_. In summary, the contributions of this paper are as follows: (i) We propose MpM, an effective training paradigm specifically designed for low-resource languages lacking multimodal resources. Researchers worldwide can utilize MpM to rapidly adapt advanced multimodal learning methods from English to their respective languages. (ii) We develop a series of Chinese large multimodal models VisCPM as a practical application of MpM, which achieves state-of-the-art performance among open-source Chinese multimodal models. (iii) We open-source the model weights of VisCPM and provide experimental details, serving as a valuable reference for fellow researchers. (iv) We validate the generalization capability of VisCPM in diverse languages and develop the first multilingual multimodal conversational model across six languages. ## 2 Related Work **Image-to-text Models.** Traditional image-to-text generation models mainly focus on the tasks of image captioning and visual question answering [55; 60; 63]. Recently, the mainstream of image-to-text has turned to multimodal LLMs, focusing on rendering LLMs capable of multimodal interaction with users. These models connect the visual module and LLM with perceivers, such as BLIP-2 [28], InstructBLIP [13], and X-LLM [8], or linear projectors such as LLaVA [32] and PandaGPT [51]. VPGTrans [64] explores the transferability of visual modules across LLMs. To enhance the multimodal instruction following capabilities, many efforts have been dedicated to building multimodal instruction following datasets. 
LLaVA [32] and MiniGPT-4 [68] build visual content-related dialog by transferring image captions into conversation data using GPT-4. InstructBLIP [13] and M\({}^{3}\)IT [30] incorporate downstream vision-language datasets to construct instruction data. **Text-to-image Models.** In the early stages of text-to-image model development, generative adversarial networks [27; 69] and auto-regressive generation [17] are popularly chosen architectures for text-to-image synthesis models [27]. More recently, large-scale diffusion-based text-to-image models such as DALLE-2 [41], Imagen [45], and Stable Diffusion [43] have taken center stage, demonstrating exceptional generative capabilities. **Multilingual Multimodal Models.** The extension of multimodal models to include multilingual capabilities has become a key research focus over the past few years. Researchers have made efforts to extend the powerful image-text model CLIP [40] to handle more languages using techniques such as knowledge distillation [5] or contrastive learning [4; 10; 25]. Other studies have aimed to create a universal framework for multilingual vision-language pretraining. For instance, M3P [36] presents a unified framework that fuses multilingual and multimodal pretraining through multimodal code-switched training. MLA [65] optimized a lightweight language acquisition encoder for cross-lingual image-text retrieval. UC2 [67] expands English multimodal data using machine translation and introduces specific pretraining objectives to enhance fine-grained alignment between image regions and multilingual tokens. In the era of LLMs, PaLI [9] develops a 17B multilingual language-image model based on 10B image-text pairs spanning 100 languages. Differing from these studies, which try to simultaneously achieve multilingual and multimodel alignment, we focus on effectively leveraging pretrained multilingual LLMs in multimodal learning across various languages. Some concurrent work also discovers some phenomena of cross-lingual transfer from multilingual LLM in multimodal settings. Specifically, Ying-VLM [30] shows that instruction tuning in English can generalize to other languages. MultiFusion [3] discover that the multilingual language model can help cross-lingual transfer in text-to-image generation. Differently, our proposed MpM provides a more systematical formulation for the training of multilingual multimodal models and demonstrates that the zero-shot transfer performance of these models can surpass that of models trained on native-language multimodal data. ## 3 MpM Training Paradigm In this section, we first present the formulation of multilingual multimodal learning and provide an overview of the training procedure of MpM. Following this, we detail the specific training procedures of MpM for both image-to-text and text-to-image generation. ### Problem Formulation and Overview Multimodal learning can be formulated as modeling the relationship between images, denoted as \(x\), and text, denoted as \(y\), in a target language \(l_{t}\). In this context, the image-to-text generation, which can be roughly summarized as generating description for input images, is to learn the conditional distribution \(p_{\theta}(y^{l_{t}}|x)\) parameterized by \(\theta\); the text-to-image generation, which is to synthesize relevant images given input text prompts, is to learn \(p_{\phi}(x|y^{l_{t}})\) parameterized by \(\phi\). 
In the vanilla setting, these conditional distributions are typically trained using image-text pairs \(\mathcal{D}_{t}=\{(x_{i},y_{i}^{l_{t}})\}_{i=1}^{N}\) in the target language \(l_{t}\)[40; 58]. However, high-quality image-text pairs are extremely scarce in most languages. To mitigate the dependency on native image-text pairs, we introduce the pivot language \(l_{p}\), which contains abundant multimodal pairs \(\mathcal{D}_{p}=\{(x_{i},y_{i}^{l_{p}})\}_{i=1}^{M}\), where \(M\gg N\). Imitating the human learning mechanism that can naturally align visual concepts with various learned languages, MpM aims to transfer visual concepts learned in the pivot language to the target language. MpM divides the multimodal learning process in target language \(l_{t}\) into two consecutive stages: **multilingual alignment** and **multimodal alignment**. For the multilingual alignment, MpM aims to establish the cross-lingual alignment for \(l_{t}\) and \(l_{p}\). This is achieved by directly leveraging a pretrained multilingual LLM, denoted as \(f_{\sigma}\), which can provide close hidden representations for text pair \(y^{l_{t}}\) and \(y^{l_{p}}\) with similar semantics, i.e., \(f_{\sigma}(y^{l_{t}})\approx f_{\sigma}(y^{l_{p}})\). For the multimodal alignment, MpM utilize the sufficient multimodal resource \(\mathcal{D}_{p}\) in the pivot language and optimize the image-to-text objective \(p_{\theta}(y^{l_{p}}|x)\) and text-to-image objective \(p_{\phi}(x|y^{l_{p}})\). In the following sections, we introduce the training process of **multimodal alignment** stage. It's worth noting that MpM is agnostic to the specific model architecture and training method, which enables us to flexibly utilize existing highly effective model architectures and training techniques in each task. ### Image-to-text Generation In image-to-text generation, we incorporate an image encoder module \(h_{\xi}\) parameterized by \(\xi\) to provide visual feature \(\mathbf{z}=h_{\xi}(x)\). These visual features \(\mathbf{z}\) are then concatenated with the text embedding as input into the multilingual LLM. Following recent work to train multimodal conversation models [32; 68], MpM's training process for image-to-text generation consists of two sub-stages: Multimodal Pretraining and Instruction Tuning. **Multimodal Pretraining.** In this sub-stage, we pretrain the visual module to align it with LLM on a large scale of image-text pairs using the language modeling objective: \[\mathcal{L}_{1}(p_{\theta},\mathcal{D}_{p})=-\sum_{i=1}^{M}\log p_{\theta}(y^{ l_{p}}_{i}|h_{\xi}(x_{i})). \tag{1}\] Here, we fix the parameters of LLM (\(\theta=\{\xi\}\)) to prevent the powerful capabilities of LLM from being influenced by short texts in the image-text pairs. **Instruction Tuning.** To enhance models' capabilities in following human instructions, we conduct instruction tuning on elaborately curated multimodal instruction tuning datasets built by blending the existing multimodal instruction tuning datasets in the pivot language and their translated version in the target language. We denote this multilingual instruction tuning datasets as \(\mathcal{D}_{i}=\{x_{k},y^{l}_{q,k},y^{l}_{a,k}\}_{k=1}^{S}\), where \(y^{l}_{q}\) is the instructions and \(y^{l}_{a}\) is the response in certain language \(l\). 
Both the visual module and multilingual LLM are fine-tuned, i.e., \(\theta=\{\xi,\sigma\}\), by maximizing the probability of the response: \[\mathcal{L}_{2}(p_{\theta},\mathcal{D}_{i})=-\sum_{k=1}^{S}\log p_{\theta}(y^ {l}_{a,k}|h_{\xi}(x_{k}),f_{\sigma}(y^{l}_{q,k})). \tag{2}\] Interestingly, we find a _quasi-zero-shot_ transfer capability of multilingual multimodal models in this scenario. If excluding the translated variant in the target language and solely performing instruction tuning using the pivot language, when given an image \(x\) and a question or an instruction \(y^{l_{t}}_{q}\) in the target language, the resultant model responds accurately though mostly in the pivot language. This can be attributed to the close resemblance between the hidden representation of instructions in two languages provided by the multilingual LLM, i.e., \(f_{\sigma}(y^{l_{p}}_{q})\approx f_{\sigma}(y^{l_{t}}_{q})\). Consequently, we have \(p_{\theta}(y^{l_{p}}_{a}|h_{\xi}(x),f_{\sigma}(y^{l_{p}}_{q}))\approx p_{ \theta}(y^{l_{p}}_{a}|h_{\xi}(x),f_{\sigma}(y^{l_{t}}_{q}))\). Since both the pretraining and instruction tuning stages employ text components solely in the pivot language, the LLM can understand the question in the target language but cannot calibrate the response in the same language. To stimulate the model to respond in the target language, MpM incorporates a small number of translated pairs in the target language during instruction tuning. In this way, MpM simultaneously improves the model's instruction-following capability and calibrates the response language, ultimately realizing a multimodal chatbot in the target language. ### Text-to-image Generation In the text-to-image generation, we adopt a similar architecture with Stable Diffusion [43]. It incorporates a denoising network \(g_{\delta}\) with a UNet architecture [44] parameterized by \(\delta\) as an image decoder to generate images given the input prompt. The LLM \(f_{\sigma}\) and image decoder \(g_{\delta}\) are interconnected with cross-attention mechanism [53]. Diffusion models [22; 50] involve learning an iterative process of denoising a Gaussian noise into data distribution. The denoise network is optimized to remove the noise of noised image \(x_{\tau}\), conditioned on the hidden states of text input provided by the LLM. The training objective is defined as follows: \[\mathcal{L}(p_{\phi},\mathcal{D}_{p})=\mathbb{E}_{x,y^{l_{p}},\varepsilon,\tau}[ ||g_{\delta}(x_{t},f_{\sigma}(y^{l_{p}}),\tau)-\varepsilon||^{2}_{2}], \tag{3}\] In this stage, \(\phi=\{\delta\}\), i.e., the image decoder is trained to align with frozen LLM. In this way, when input with the unseen prompt in the target language \(y^{l_{t}}\), the multilingual LLM \(f_{\sigma}\) can inherently provide a representation \(f_{\sigma}(y^{l_{t}})\) close to the seen representation \(f_{\sigma}(y^{l_{p}})\) of the pivot language prompt with similar semantics. Therefore, the capability of text-to-image in the target language can be seamlessly transferred from the pivot language in a zero-shot fashion, illustrated as follows: \[g_{\delta}(x_{\tau},f_{\sigma}(y^{l_{t}}),\tau)\approx g_{\delta}(x_{\tau},f_{ \sigma}(y^{l_{p}}),\tau). \tag{4}\] ## 4 VisCPM As a practice of MpM, we develop a series of large-scale Chinese multimodal models called VisCPM. We use Chinese as the target language and English as the pivot language. The Chinese-English bilingual language model CPM-Bee [66] serves as the backbone multilingual LLM. 
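Before turning to the concrete VisCPM models, the training objective of Eq. (3) can be summarized in a schematic PyTorch sketch. Everything below is a toy stand-in under assumed sizes: a random embedding plays the frozen multilingual LLM \(f_{\sigma}\), a tiny module plays the image decoder \(g_{\delta}\), and the noise schedule is arbitrary. Only \(g_{\delta}\) is optimized while the text representation stays frozen; at inference, a target-language prompt is simply fed through the same frozen \(f_{\sigma}\), as expressed in Eq. (4).

```python
import torch
import torch.nn as nn

# Toy sizes; the real image decoder is a Stable-Diffusion UNet conditioned on CPM-Bee hidden states.
D_TXT, D_IMG, VOCAB, STEPS = 512, 64, 32000, 1000

text_encoder = nn.Embedding(VOCAB, D_TXT)          # stand-in for the frozen multilingual LLM f_sigma
for p in text_encoder.parameters():
    p.requires_grad_(False)

class ToyDenoiser(nn.Module):
    """Stand-in for the image decoder g_delta with a text-to-image cross-attention path."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(D_IMG + 1, D_IMG)
        self.cross_attn = nn.MultiheadAttention(D_IMG, num_heads=4,
                                                kdim=D_TXT, vdim=D_TXT, batch_first=True)
        self.out = nn.Linear(D_IMG, D_IMG)

    def forward(self, x_t, t, text_states):
        h = self.backbone(torch.cat([x_t, t[:, None].float()], dim=-1))
        h, _ = self.cross_attn(h[:, None, :], text_states, text_states)
        return self.out(h[:, 0, :])                 # predicted noise

denoiser = ToyDenoiser()                            # phi = {delta}: only the decoder is optimized
opt = torch.optim.AdamW(denoiser.parameters(), lr=1e-4)
alphas_bar = torch.linspace(0.999, 0.01, STEPS)     # toy noise schedule

def train_step(x0, prompt_ids):
    """One step of Eq. (3): predict the injected noise given the frozen text representation."""
    t = torch.randint(0, STEPS, (x0.size(0),))
    eps = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(-1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps    # forward-diffused toy "image" latent
    text_states = text_encoder(prompt_ids)          # f_sigma(y^{l_p}); English prompts at training time
    loss = ((denoiser(x_t, t, text_states) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(train_step(torch.randn(4, D_IMG), torch.randint(0, VOCAB, (4, 16))))
```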
We have two variations of the model: VisCPM-Chat for image-to-text multimodal conversation and VisCPM-Paint for text-to-image synthesis. In the following sections, we begin by providing an overview of existing multimodal datasets in Chinese and then introduce the training procedure of VisCPM-Chat and VisCPM-Paint. ### Are Chinese Multimodal Datasets Enough To Train A Multimodal Model? The largest publicly available multimodal dataset of native Chinese is Wukong [20], consisting of 100M image-text pairs. However, by visualizing the CLIP score computed with Chinese-CLIP [58], as shown in Fig. 2, and by manual inspection, we discover that only a minor fraction of image-text pairs in Wukong possess semantically matched content. Figure 2: The histogram of CLIP scores of 100M Chinese image-text pairs from the Wukong dataset. We set 0.18 as the filter threshold. The poor quality of the dataset exacerbates the existing shortage of data resources. A straightforward method to enhance the count of Chinese image-text pairs is to translate the English image-text pairs into Chinese, which has been utilized in Ziya-Visual [54]. However, translation requires an external machine translation model, and translating a large-scale dataset used in pretraining consumes substantial computing resources. Also, we practically demonstrate that incorporating translated image-text pairs yields only marginal improvement in performance, as discussed in Sec. 5.3. Based on this analysis, we argue that effectively utilizing the English data to achieve knowledge transfer in multimodal alignment is the key to developing a powerful Chinese large multimodal model. ### VisCPM-Chat VisCPM-Chat is a Chinese-English bilingual multimodal chatbot capable of responding to users' instructions based on the input image. VisCPM-Chat utilizes the Muffin architecture [62] as the image encoder. Specifically, Muffin directly leverages a pretrained vision-language model BEiT-3 [56] as an inherent bridge module between vision and language. During the multimodal pretraining stage, the visual module is trained on 100M image-text pairs, collected from publicly available datasets including CC3M [49], CC12M [7], COCO [31], Visual Genome [26], and Laion-COCO [11], to align with the frozen LLM. We train VisCPM-Chat for 180K steps with a batch size of 768 and a learning rate of 1e-5 using the Adam optimizer [23]. In the instruction tuning sub-stage, we utilize bilingual versions of LLaVA 150K [32] and UniMM-Chat [62], and the English part of M\({}^{3}\)IT [30]. The details are presented in Appendix A.2. Due to the _quasi-zero-shot_ phenomenon in Chinese introduced in Sec. 3.2, we incorporate certain Chinese data by translating LLaVA 150K and UniMM-Chat into Chinese using machine translation.2 We fine-tune the image encoder and LLM for 80k steps with a batch size of 64. The learning rate and optimizer configurations remain the same as in the previous sub-stage. Footnote 2: We employ CPM-Bee for machine translation. Except for special instructions, all translation processes in this work are carried out using the CPM-Bee model. To demonstrate the effect of Chinese image-text pairs, we also train an additional version of VisCPM-Chat, which adds additional Chinese image-text pairs during pretraining, including 20M native Chinese image-text pairs filtered from the 100M Wukong [20] and 20M Zero-Chinese [57] datasets using a CLIP score threshold of 0.18, and 136M image-text pairs translated from the Laion-COCO dataset. We name this model VisCPM-Chat+. 
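A minimal sketch of the CLIP-score filtering described above (threshold 0.18) is shown below. The two encoders are random placeholders standing in for the Chinese-CLIP image and text towers, so the printed scores are meaningless until the real towers are swapped in; the snippet only illustrates the thresholding logic.

```python
import torch
import torch.nn.functional as F

THRESHOLD = 0.18   # pairs scoring below this are dropped

def encode_image(images):     # placeholder for the Chinese-CLIP vision tower
    return F.normalize(torch.randn(images.size(0), 512), dim=-1)

def encode_text(token_ids):   # placeholder for the Chinese-CLIP text tower
    return F.normalize(torch.randn(token_ids.size(0), 512), dim=-1)

def filter_pairs(images, token_ids):
    """Keep only image-text pairs whose cosine similarity (CLIP score) exceeds the threshold."""
    scores = (encode_image(images) * encode_text(token_ids)).sum(dim=-1)
    return scores > THRESHOLD, scores

images = torch.randn(8, 3, 224, 224)
token_ids = torch.randint(0, 30000, (8, 32))
keep, scores = filter_pairs(images, token_ids)
print(f"kept {int(keep.sum())}/8 pairs; example scores: {[round(float(s), 3) for s in scores[:3]]}")
```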
### VisCPM-Paint VisCPM-Paint is a text-to-image synthesis model that can accept prompts in both Chinese and English. VisCPM-Paint employs the UNet in Stable Diffusion [43] as the image decoder. To maintain the generative capability of the UNet, the training process only involves the cross-attention layers of the UNet and the linear transformation block between the multilingual LLM and the UNet. We optimize these parameters using the extensive English image-text dataset Laion-2B [46]. We first train for 200K steps at a resolution of 384\(\times\)384 with a batch size of 4096, and then for 100K steps at a resolution of 512\(\times\)512 with the same batch size. We use the AdamW optimizer [34] with a learning rate of 1e-4. Thanks to the representation power of the multilingual LLM, even though the model did not receive any exposure to Chinese image-text pairs during training, the resultant VisCPM-Paint model can generate images with high fidelity and relevance when provided with either English or Chinese input. Similar to VisCPM-Chat+, we train an additional version of VisCPM-Paint, which is fine-tuned on Chinese image-text pairs. These pairs are identical to those used by VisCPM-Chat+. We name this model VisCPM-Paint+. ## 5 Experiments We perform comprehensive experiments to evaluate the effectiveness of MpM and the resulting VisCPM models. This section is organized as follows: Sec. 5.1 and Sec. 5.2 present the evaluation results of VisCPM-Chat and VisCPM-Paint, respectively. Sec. 5.3 presents an ablation study on the effect of Chinese image-text data on model performance. Sec. 5.4 shows the performance of MpM when extended to more languages. ### Evaluation of VisCPM-Chat #### 5.1.1 Baselines We compare VisCPM-Chat with existing multimodal conversation models, including the English-only models MiniGPT-4 [68], InstructBLIP [13], and LLaVA [32], as well as the Chinese-English bilingual models mPLUG-Owl [59], VisualGLM\({}^{3}\) [16], and Ziya-Visual\({}^{4}\) [54]. Specifically, we mainly compare VisCPM-Chat with the three bilingual models: Footnote 3: [https://github.com/THUDM/VisualGLM-6B](https://github.com/THUDM/VisualGLM-6B) Footnote 4: [https://huggingface.co/IDEA-CCNL/Ziya-BLIP-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP-14B-Visual-v1) * mPLUG-Owl: mPLUG-Owl consists of a vision encoder, a vision abstractor module, and LLaMA-7B as the language model backbone. It is trained on LAION-400M, COYO, CC3M, and MSCOCO. * VisualGLM: VisualGLM employs a Q-Former as the image encoder and ChatGLM-6B [16] as the language model backbone. VisualGLM-6B's pretraining incorporates 30M high-quality Chinese image-text pairs and 300M filtered English image-text pairs. In the fine-tuning phase, VisualGLM is trained on long-form VQA data. * Ziya-Visual: Ziya-Visual leverages a Q-Former [28] as the image encoder and Ziya-LLaMA-13B-v1 as the language model backbone. It utilizes 20M Chinese image-text pairs, built by cleaning high-quality data from open-source datasets, translating English datasets, and extracting coarse-grained information from captions using BLIP [29] and Grounded SAM [24, 33]. #### 5.1.2 Evaluation Benchmark We evaluate the multimodal conversational capabilities of VisCPM-Chat in English and Chinese on the LLaVA benchmark [32] and its translated Chinese version, respectively. The LLaVA benchmark consists of 90 instances, each containing an image, a question, and an answer. We refer readers to the original paper for more details of the data generation process.
The LLaVA benchmark comprehensively evaluates the model's performance in conversation, detailed description, and complex reasoning. Following LLaVA [32], we use GPT-4\({}^{5}\) to rate the model-generated answers and the reference answers (provided in the LLaVA benchmark) on a scale of 1-10. Footnote 5: We use the GPT-4 0314 version to evaluate the answers. #### 5.1.3 Experimental Results **Quantitative Results.** The relative scores evaluated by GPT-4 are presented in Table 1. In **Chinese**, VisCPM-Chat achieves remarkable results and significantly outperforms all baseline models. Specifically, VisCPM-Chat maintains an impressive average score of 90.9, while VisCPM-Chat+ achieves an even higher average score of 92.5. It is worth noting that, unlike VisualGLM and Ziya-Visual, which leverage a substantial number of Chinese image-text pairs during pretraining, VisCPM-Chat does not incorporate any Chinese multimodal data in its pretraining process. Despite this, VisCPM-Chat still outperforms the second-ranked model, VisualGLM, by 8 points and is superior to the three baselines in all three aspects. Such results strongly demonstrate the effectiveness of MpM in transferring visual knowledge from English to Chinese. In **English**, VisCPM-Chat attains an average score of 81.4. This performance exceeds that of mPLUG-Owl, VisualGLM, InstructBLIP, and MiniGPT-4, is roughly on par with Ziya-Visual, and remains comparable to the strong baseline LLaVA. **Case Study.** Fig. 3 and Fig. 4 vividly illustrate the model's capabilities. Based on these cases, we find that VisCPM-Chat demonstrates exceptional capabilities in the following aspects: * VisCPM-Chat has assimilated a wide range of world knowledge. As shown in the first and second cases in Fig. 3, the model can recognize the stained map of New York City and explain its realistic meaning; it can also recognize the Mona Lisa painting adapted in a surreal style. * VisCPM-Chat has a solid grasp of Chinese cultural knowledge and Chinese-specific content. As shown in the third case in Fig. 3, VisCPM-Chat associates the full moon with lines from Su Shi's poem "Water Melody". * VisCPM-Chat has balanced Chinese-English multimodal conversation capabilities and strong text recognition ability. As shown in Fig. 4, VisCPM-Chat can conduct fluent multimodal conversations on different topics in English and recognize the words "Starbucks", "Avengers: Endgame" and its release date "April 26th" in the image.
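As a small arithmetic illustration of how the relative scores in Table 1 can be aggregated, the sketch below assumes the standard LLaVA-style protocol: GPT-4 assigns a 1-10 rating to both the candidate answer and the reference answer for each instance, and the reported number is the candidate-to-reference ratio in percent, averaged per category. The exact rating prompt is not reproduced here, and this aggregation convention is an assumption of the sketch.

```python
from collections import defaultdict
from statistics import mean

def relative_scores(records):
    """records: iterable of (category, model_rating, reference_rating), ratings in 1-10.

    Returns per-category relative scores (percent) plus their average, following
    the ratio-of-ratings aggregation assumed above.
    """
    per_cat = defaultdict(list)
    for category, model_rating, ref_rating in records:
        per_cat[category].append(100.0 * model_rating / ref_rating)
    scores = {cat: mean(vals) for cat, vals in per_cat.items()}
    scores["AVG"] = mean(scores.values())
    return scores

# e.g. relative_scores([("Conversation", 8, 9), ("Detailed Description", 7, 8),
#                       ("Complex Reasoning", 9, 9)])
```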
\begin{table}
\begin{tabular}{c|c|c|c c c c|c c c c}
\hline \hline
 & \multirow{2}{*}{Model} & LLM & \multicolumn{4}{c|}{English} & \multicolumn{4}{c}{Chinese} \\
\cline{3-11} & & Backbone & Con & DD & CR & AVG & Con & DD & CR & AVG \\
\hline
\multirow{3}{*}{\begin{tabular}{c} English \\ Model \\ \end{tabular}} & MiniGPT-4 [68] & Vicuna-13B & 65.0 & 67.3 & 76.6 & 69.7 & - & - & - & - \\
 & InstructBLIP [13] & Vicuna-13B & 81.9 & 68.0 & 91.2 & 80.5 & - & - & - & - \\
 & LLaVA [32] & Vicuna-13B & **89.5** & **70.4** & 96.2 & **85.6** & - & - & - & - \\
\hline
\multirow{5}{*}{\begin{tabular}{c} En-Zh \\ Bilingual \\ \end{tabular}} & mPLUG-Owl [59] & LLaMA-7B & 64.6 & 47.7 & 80.1 & 64.2 & 76.3 & 61.2 & 77.8 & 72.0 \\
 & VisualGLM [16] & ChatGLM-6B & 62.4 & 63.0 & 80.6 & 68.7 & 76.6 & 87.8 & 83.6 & 82.7 \\
 & Ziya-Visual [54] & Ziya-LLaMA-13B & 82.7 & 69.9 & 92.1 & 81.7 & 85.0 & 74.7 & 82.4 & 80.8 \\
\cline{2-11} & VisCPM-Chat & CPM-Bee-10B & 81.4 & 69.2 & 93.1 & 81.4 & 90.0 & 87.4 & 95.0 & 90.9 \\
 & VisCPM-Chat+ & CPM-Bee-10B & 80.1 & 67.1 & **97.1** & 81.5 & **91.3** & **90.7** & **95.4** & **92.5** \\
\hline \hline
\end{tabular}
\end{table} Table 1: Experimental results on the LLaVA benchmark assessed by GPT-4. Con: Conversation, DD: Detailed Description, CR: Complex Reasoning, AVG: the average score of the three tasks. The best/second best results are marked in **bold** and underlined, respectively. Figure 3: Multimodal conversation cases of VisCPM-Chat in Chinese. Figure 4: Multimodal conversation cases of VisCPM-Chat in English. Figure 5: Generated images of VisCPM-Paint. ### Evaluation of VisCPM-Paint #### 5.2.1 Baselines We compare VisCPM-Paint with several strong text-to-image models, including the English-only models GLIDE [37], Make-A-Scene [19], DALL-E-2 [41], and UniDiffuser [2], and the Chinese or Chinese-English bilingual text-to-image models CogView2 [15], AltDiffusion [10], and TaiyiDiffusion [54]. We mainly compare VisCPM-Paint with AltDiffusion [10] and TaiyiDiffusion [54]: * AltDiffusion is a Chinese-English bilingual text-to-image model based on Stable Diffusion and the bilingual vision-language encoder AltCLIP [10]. The training data are collected from Laion [47]. * TaiyiDiffusion is a Chinese text-to-image model that adapts a Chinese text encoder into Stable Diffusion. The visual part is frozen during training. The training datasets include 20M filtered Chinese image-text pairs. #### 5.2.2 Automatic Evaluation For the text-to-image tasks, we assess the zero-shot Fréchet Inception Distance (FID) [21] and the CLIP score [42] using the MSCOCO validation set [31]. We sample 30K prompts from MSCOCO, and for the Chinese evaluation, the text prompts are translated from the original English captions. We maintain the same sampling configuration for VisCPM-Paint, AltDiffusion, and TaiyiDiffusion and grid-search for the optimal FID across eight separate classifier guidance scales. We present the zero-shot FID on the MSCOCO validation set in Table 2. In **Chinese**, VisCPM-Paint achieves the best FID performance. Despite being trained solely on English image-text pairs, VisCPM-Paint displays a significant advantage over AltDiffusion [10] and TaiyiDiffusion [54]. To visualize the trade-off between fidelity and alignment, Fig. 6 plots the curves of FID against CLIP score under different classifier guidance scales. The results indicate that VisCPM-Paint and VisCPM-Paint+ deliver a good overall balance between image quality and semantic alignment.
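To make the automatic evaluation protocol above concrete, the following sketch grid-searches over eight guidance scales, generating images for the 30K sampled MSCOCO prompts and keeping the scale with the lowest FID. `generate_images`, `compute_fid`, and `compute_clip_score` are hypothetical helpers standing in for the actual sampler and metric implementations, and the scale values are illustrative.

```python
def grid_search_guidance(model, prompts, reference_images,
                         generate_images, compute_fid, compute_clip_score,
                         scales=(1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0)):
    """Return (best_scale, best_fid, curve) for a text-to-image model.

    All callables are placeholders for a diffusion sampler and standard FID /
    CLIP-score metrics; `curve` holds (scale, fid, clip) triples, which is the
    data behind an FID-vs-CLIP trade-off plot like Fig. 6.
    """
    curve = []
    for scale in scales:
        images = generate_images(model, prompts, guidance_scale=scale)
        fid = compute_fid(images, reference_images)
        clip = compute_clip_score(images, prompts)
        curve.append((scale, fid, clip))
    best_scale, best_fid, _ = min(curve, key=lambda row: row[1])
    return best_scale, best_fid, curve
```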
While the CLIP score of AltDiffusion slightly outperforms that of VisCPM-Paint and VisCPM-Paint+, AltDiffusion's FID falls significantly short. The sub-optimal quality of its generated images affects practical use, as shown in the human evaluation below. In **English**, the performance of VisCPM-Paint is comparable to existing powerful text-to-image models, such as Stable Diffusion [43] and UniDiffuser [2]. #### 5.2.3 Human Evaluation Following previous work [6; 61], we perform a human evaluation to obtain a more comprehensive understanding of model performance. Due to the lack of human evaluation benchmarks in Chinese, we create a Chinese text-to-image human evaluation benchmark, named _Chinese Drawbench_. _Chinese Drawbench_ consists of 174 prompts and evaluates text-to-image models' ability to accurately fulfill the requirements of a prompt across different aspects, including relation, composition, attribute, counterintuitive content, rare words, counting, long input, and Chinese culture. We conduct a human evaluation involving VisCPM-Paint, AltDiffusion [10], and TaiyiDiffusion [54] on _Chinese Drawbench_. We ask five independent annotators to judge which of the three models generates the best image for each prompt. Their evaluation criteria include Overall, Alignment, and Fidelity. Alignment reflects the consistency between the generated image content and the input prompt. Fidelity evaluates the quality of generated images, considering factors such as image clarity, the correspondence of object structures to reality, and aesthetic appeal. The Overall score is a comprehensive indicator that considers both of the previous factors. More details of the human evaluation and _Chinese Drawbench_ are outlined in Appendix C. Figure 7 presents the human evaluation results, while Figure 8 provides detailed results for each category of _Chinese Drawbench_. The figure shows, for each of the three aspects, the share of generated images preferred by the five evaluators, broken down by the degree of consensus. Notably, VisCPM-Paint receives the strongest preference across all three evaluation aspects. In each aspect, more than 40 percent of the images generated by VisCPM-Paint are the favored choice. Impressively, for the more objective criteria of Overall and Alignment, VisCPM-Paint earns a unanimous 5/5 preference on more than 20 percent of the prompts. These results strongly demonstrate the superior quality of VisCPM-Paint's generated images compared to the two baseline models. Note that although AltDiffusion obtains a higher CLIP score in the automatic evaluation, its lower image quality results in weaker relative performance in the human evaluation. The human assessment, consistent with the automatic evaluation results, shows VisCPM-Paint's supremacy in producing high-quality images from Chinese text. ### Ablation Study We conduct an ablation study over dataset languages to investigate the impact of different dataset language combinations on multimodal models' performance in image-to-text and text-to-image tasks. For better efficiency, we only use LLaVA 150K for instruction tuning in the image-to-text task. The detailed configurations of each experiment are reported in Appendix D. Based on the results presented in Table 3(a) and Table 3(b), we have the following observations: Table 3: Performance in Chinese with different combinations of dataset languages in each training stage. SFT means instruction tuning. The sizes of the Native Chinese and Translated Chinese datasets are introduced in Sec. 4.1.
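The preference shares reported in Fig. 7 can be tallied with simple counting: for each prompt and each aspect, record which model each of the five annotators preferred, then report, per model, the fraction of prompts it wins at each level of consensus (3/5, 4/5, 5/5). The sketch below is a minimal version of that bookkeeping, not the actual evaluation script; ties are resolved arbitrarily for illustration.

```python
from collections import Counter, defaultdict

def preference_shares(votes):
    """votes: dict mapping prompt_id -> list of model names chosen by the annotators
    for one aspect (e.g. 'Overall').

    Returns, for each model, the fraction of prompts it wins at each consensus
    level (number of annotators agreeing on it).
    """
    shares = defaultdict(Counter)
    n_prompts = len(votes)
    for prompt_id, choices in votes.items():
        winner, count = Counter(choices).most_common(1)[0]
        shares[winner][count] += 1
    return {model: {level: wins / n_prompts for level, wins in levels.items()}
            for model, levels in shares.items()}

# e.g. preference_shares({"p1": ["VisCPM-Paint"] * 5,
#                         "p2": ["VisCPM-Paint"] * 3 + ["AltDiffusion"] * 2})
```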
Figure 7: Results of human evaluation for text-to-image generation on Chinese Drawbench. * Relying solely on a native Chinese dataset is insufficient for achieving good performance in both image-to-text and text-to-image tasks. Models trained exclusively on native Chinese datasets yield worse scores, with the image-to-text model obtaining an average score of 85.5 and the text-to-image model attaining an FID of 15.1. This result emphasizes the necessity of multilingual knowledge transfer. * English data plays a crucial role in improving the chat capability of the model in Chinese during the instruction tuning stage. When the image-to-text model is pretrained on a large dataset of English image-text pairs but then fine-tuned using the monolingual translated Chinese instruction tuning dataset, its average performance declines from 88.0 to 86.3 compared to the model utilizing the bilingual instruction tuning dataset. * The filtering process applied to the native dataset is essential for Chinese performance. In the text-to-image task, after pretraining on English data and then fine-tuning with unfiltered native Chinese multimodal pairs, the FID worsens from 10.9 in the zero-shot scenario to 12.8. This suggests that low-quality image-text pairs in the unfiltered dataset can negatively impact the model's ability to generate images learned from English. * Incorporating the native Chinese dataset and the translated Chinese dataset yields a marginal improvement in model performance. Specifically, adding native Chinese data to the pretraining stage of the image-to-text model or the fine-tuning stage of the text-to-image model only results in a 0.2 change in the metrics, while further mixing in translated datasets improves the text-to-image model's FID from 10.7 to 9.6, indicating a positive impact. The improvement from VisCPM-Chat to VisCPM-Chat+, shown in Table 1, confirms the same trend. Collectively, these insights reaffirm the effectiveness and necessity of MpM for training large multimodal models in low-resource settings. ### Generalization to More Languages The remarkable results of MpM on Chinese suggest its potential for broader application to a more diverse set of languages. Specifically, we leverage the multilingual LLM LLaMA [52] as the LLM backbone and consider German, French, Spanish, Portuguese, and Italian as target languages. Following the same training procedure as in VisCPM-Chat, we develop a multilingual multimodal chatbot proficiently supporting six languages. We begin by pretraining the visual encoder with LLaMA to achieve visual feature alignment on English image-text pairs. Next, we employ M2M-100 [18] to translate the instruction training set of LLaVA into the five target languages. The original English instruction training set and the five translated sets are merged and shuffled, then used for fine-tuning the visual module and the LLM. Table 4 presents the evaluation results for the five additional languages. Notably, for the relatively popular languages, such as German, French, and Spanish, the average scores exceed 88. Additionally, for Italian and Portuguese, the average scores are above 85. These results are highly encouraging as they demonstrate the chatbot's coherent and accurate responses to visual-related questions in all six languages, even though these languages are simply blended during instruction tuning. The results in these languages validate the generalization and robustness of MpM in building powerful multimodal models in diverse linguistic contexts.
\begin{table}
\begin{tabular}{l|c c c|c}
\hline \hline
Languages & Conversation & \begin{tabular}{c} Detailed \\ Description \\ \end{tabular} & \begin{tabular}{c} Complex \\ Reasoning \\ \end{tabular} & Average \\
\hline
English & 87.6 & 76.8 & 101.0 & 88.6 \\
German & 90.8 & 80.8 & 93.6 & 88.7 \\
French & 87.6 & 81.9 & 94.6 & 88.2 \\
Spanish & 87.1 & 79.6 & 102.3 & 89.8 \\
Italian & 84.5 & 79.5 & 93.3 & 85.9 \\
Portuguese & 81.3 & 81.6 & 92.4 & 85.1 \\
\hline \hline
\end{tabular}
\end{table} Table 4: Evaluation scores given by GPT-4 for different languages. ## 6 Conclusion We introduce MpM, an innovative training paradigm designed for effectively training large multimodal models in low-resource languages. By utilizing a multilingual LLM as a pivotal intermediary between vision signals and target languages, MpM facilitates the efficient transfer of multimodal alignment knowledge across different languages. Based on MpM, we develop VisCPM, a series of open-source Chinese large multimodal models, which show remarkable capability in Chinese image-to-text and text-to-image tasks. Our experimental results demonstrate that, by relying solely on English multimodal data, VisCPM can achieve state-of-the-art performance among open-source Chinese multimodal models. We further broaden the language scope by constructing a versatile multimodal chatbot that supports six distinct languages. We believe that the effectiveness of MpM will contribute to the development of large multimodal models worldwide, fostering sophisticated multimodal models across a variety of languages and cultures. ## Contributions The authors' contributions can be outlined as follows: In the preparation of the project, Jinyi Hu and Yuan Yao design the model architecture. Xu Han, Yankai Lin, Jiao Xue, Dahai Li, Zhiyuan Liu, and Maosong Sun offer invaluable guidance in refining the model's architecture. Shan Wang, Chongyi Wang, and Jinyi Hu take charge of collecting and processing the extensive multimodal dataset required for pretraining. Additionally, Hanghao Wu, Yue Zhao, Haoye Zhang, and Yuan Yao collaborate on constructing the instruction tuning data. Jinyi Hu, Chongyi Wang, Tianyu Yu, Qianyu Chen, and Shan Wang jointly implement the training codebase. In model training, Chongyi Wang, Tianyu Yu, and Yinxu Pan babysit the VisCPM-Chat training; Jinyi Hu, Shan Wang, and Qianyu Chen take care of the VisCPM-Paint training. In model evaluation, Jinyi Hu and Yuan Yao design the evaluation framework. Chongyi Wang and Tianyu Yu evaluate VisCPM-Chat; Jinyi Hu and Shan Wang execute the automatic evaluation of VisCPM-Paint and organize the human evaluation of VisCPM-Paint. In paper writing, Jinyi Hu and Yuan Yao write the main paper; Yankai Lin, Zhiyuan Liu, and Maosong Sun provide suggestions to polish the writing. For public usability, Jinyi Hu, Yinxu Pan, Chongyi Wang, Shan Wang, and Yuan Yao promote the open-sourcing of VisCPM; Yinxu Pan develops the online demo and API of VisCPM; Chongyi Wang and Yinxu Pan implement the low-resource inference of VisCPM. Throughout the project, Xu Han, Yankai Lin, Jiao Xue, Dahai Li, Zhiyuan Liu, and Maosong Sun provide invaluable technical guidance and advice.
2310.07119
The classical field approximation of ultra light dark matter: quantum breaktimes, corrections, and decoherence
The classical field approximation is widely used to better understand the predictions of ultra-light dark matter. Here, we use the truncated Wigner approximation method to test the classical field approximation of ultra-light dark matter. This method approximates a quantum state as an ensemble of independently evolving realizations drawn from its Wigner function. The method is highly parallelizable and allows the direct simulation of quantum corrections and decoherence times in systems many times larger than have been previously studied in reference to ultra-light dark matter. Our study involves simulation of systems in 1, 2, and 3 spatial dimensions. We simulate three systems, the condensation of a Gaussian random field in three spatial dimensions, a stable collapsed object in three spatial dimensions, and the merging of two stable objects in two spatial dimensions. We study the quantum corrections to the classical field theory in each case. We find that quantum corrections grow exponentially during nonlinear growth with the timescale being approximately equal to the system dynamical time. In stable systems the corrections grow quadratically. We also find that the primary effect of quantum corrections is to reduce the amplitude of fluctuations on the deBroglie scale in the spatial density. Finally, we find that the timescale associated with decoherence due to gravitational coupling to Baryonic matter is at least as fast as the quantum corrections due to gravitational interactions. These results strongly imply that quantum corrections do not impact the predictions of the classical field theory.
Andrew Eberhardt, Alvaro Zamora, Michael Kopp, Tom Abel
2023-10-11T01:38:31Z
http://arxiv.org/abs/2310.07119v1
The classical field approximation of ultra light dark matter: quantum breaktimes, corrections, and decoherence ###### Abstract The classical field approximation is widely used to better understand the predictions of ultra-light dark matter. Here, we use the truncated Wigner approximation method to test the classical field approximation of ultra-light dark matter. This method approximates a quantum state as an ensemble of independently evolving realizations drawn from its Wigner function. The method is highly parallelizable and allows the direct simulation of quantum corrections and decoherence times in systems many times larger than have been previously studied in reference to ultra-light dark matter. Our study involves simulation of systems in 1, 2, and 3 spatial dimensions. We simulate three systems, the condensation of a Gaussian random field in three spatial dimensions, a stable collapsed object in three spatial dimensions, and the merging of two stable objects in two spatial dimensions. We study the quantum corrections to the classical field theory in each case. We find that quantum corrections grow exponentially during nonlinear growth with the timescale being approximately equal to the system dynamical time. In stable systems the corrections grow quadratically. We also find that the primary effect of quantum corrections is to reduce the amplitude of fluctuations on the deBroglie scale in the spatial density. Finally, we find that the timescale associated with decoherence due to gravitational coupling to Baryonic matter is at least as fast as the quantum corrections due to gravitational interactions. These results strongly imply that quantum corrections do not impact the predictions of the classical field theory. ## I Introduction The standard model of cosmology, \(\Lambda\)CDM, is known to successfully describe much of the observed structure growth in the universe [1]. This model includes a dark matter component comprising approximately 26% of the universe's total energy budget. The observational evidence for dark matter is extensive: the distribution of mass in the bullet cluster [2; 3], the stellar rotation curves of galaxies [4; 5], and the anisotropies in the cosmic microwave background [1; 6] are some of the most prominent examples. While the density, self-interaction, and temperature of the cold dark matter constituent of this model are constrained by observation, its specific particle nature remains unknown [7]. This has motivated a large number of models spanning \(\sim 100\) orders of magnitude of mass parameter space. At the lowest mass end, around \(\lesssim 10^{-19}\,\mathrm{eV}\), we have "ultra-light" dark matter (ULDM) models, see reviews [8; 9; 10]. Such ultra-light fields arise naturally in many string theory models [11]. Importantly, the low mass in this model means that the particles must be Bosonic [12] and have a non-thermal production mechanism [13]. Here the mass of the particles is so low that their deBroglie wavelength is astrophysical in size [14]. The deBroglie wavelength is given by \[\lambda=0.48\,\mathrm{kpc}\left(\frac{10^{-22}\,\mathrm{eV}}{m}\right)\left(\frac{250\,\mathrm{km/s}}{v}\right)\,, \tag{1}\] in terms of the mass, \(m\), and velocity, \(v\), of the dark matter particle. Structures below the deBroglie scale are washed out while larger scale structures are left unchanged.
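As a quick numerical check of Eq. (1), the following sketch evaluates the deBroglie wavelength for a few particle masses at a fixed velocity; the prefactor of 0.48 kpc is taken directly from the scaling relation above.

```python
def debroglie_wavelength_kpc(m_ev: float, v_kms: float) -> float:
    """lambda = 0.48 kpc * (1e-22 eV / m) * (250 km/s / v), as in Eq. (1)."""
    return 0.48 * (1e-22 / m_ev) * (250.0 / v_kms)

if __name__ == "__main__":
    for m in (1e-22, 1e-21, 1e-20):
        lam = debroglie_wavelength_kpc(m, 250.0)
        print(f"m = {m:.0e} eV, v = 250 km/s -> lambda = {lam:.3f} kpc")
    # Heavier particles or faster halos shrink the wavelength proportionally.
```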
It was originally hoped that a particle with mass \(m\sim 10^{-22}\,\mathrm{eV}\) could alleviate small-scale structure problems without invoking Baryonic physics [14]. These problems are usually summarized as the missing satellites [15; 16], core-cusp [17; 18; 19], and too-big-to-fail [20] problems, see [21; 22] for review. Though the original \(m\sim 10^{-22}\,\mathrm{eV}\) mass particle has been excluded by a large body of work, ultra-light dark matter remains an interesting model, and work on this model helps establish a lower bound on the dark matter mass. Ultra-light dark matter is associated with a rich phenomenology. Current constraints on ultra-light dark matter include the Lyman-\(\alpha\) forest [23, 24, 25, 26], the galactic subhalo mass function [27, 28], the stellar dispersion of ultra-faint dwarfs [29, 30], galactic density profiles [31, 32, 33], Milky Way satellites [34], gravitational lensing [35], and superradiance [36, 37]; see reviews [8, 9, 10] for specific details on constraint curves. Typically, these constraints are derived by comparing the predictions of theory and simulations with observations. Crucially, much of the analytic and numerical work relies on the classical field approximation, or mean field theory (MFT). In the full quantum field theory, quantum field operators representing the dark matter field act on a distribution of field values. In the classical field approximation, this distribution is replaced by the mean field value. This approximation is known to be accurate when the underlying quantum distribution is tightly peaked around the mean field value and when occupation numbers are large enough that the fractional variation in the field values due to "quantum fluctuations", representing the non-zero width of the underlying distribution, is small. We generally say that the fractional deviation due to fluctuations goes as [38] \[\frac{\delta\hat{\psi}}{|\psi|}\sim\frac{1}{\sqrt{n_{tot}}}\,, \tag{2}\] where \(\delta\hat{\psi}\) is a quantum field operator measuring the deviation from the mean field value, \(\psi\), and \(n_{tot}\) is the total number of particles in the system. The masses considered are well below the thermal dark matter limit, and so we need an alternative production mechanism for this model, for example the misalignment mechanism [39, 40]. The misalignment mechanism also has the advantage of placing the dark matter initially in a coherent state, making it amenable to classical approximation at early times. The light mass also means the occupation numbers are very large; a typical halo may have \(n_{tot}\sim 10^{100}\) particles, putting us well within the limit where quantum fluctuations are vanishingly small when the system is tightly distributed around the mean field value. This combination of 1) the initial coherent state description, and 2) the large occupation numbers of the system is typically used to justify the classical field description [38, 41]. In the absence of nonlinear interactions and when the initial conditions are well described classically, the accuracy of the mean field equations is known to persist, with canonical examples being Bose-Einstein condensates [42] and freely propagating photons [43]. However, it is known that nonlinearities, like those due to gravitational interactions, can introduce quantum corrections on some timescale, known as the "quantum breaktime", even in highly occupied systems initially well described by classical theory [44, 45, 46, 47, 48, 49, 50, 51].
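To see how small the fractional fluctuation of Eq. (2) is for a realistic halo, the sketch below estimates \(n_{tot}=M_{halo}/m\) and \(1/\sqrt{n_{tot}}\) for a \(10^{10}\,M_{\odot}\) halo of \(10^{-22}\,\mathrm{eV}\) particles. The unit conversions are standard values, and the result is consistent with the \(n_{tot}\sim 10^{100}\) and \(\delta\psi\sim 10^{-50}\) figures quoted later in the text.

```python
import math

M_SUN_KG = 1.989e30          # solar mass in kg
EV_PER_C2_KG = 1.783e-36     # 1 eV/c^2 in kg

def occupation_number(m_halo_msun: float, m_particle_ev: float) -> float:
    """n_tot = M_halo / m for an ultra-light boson of mass m_particle_ev."""
    halo_mass_ev = m_halo_msun * M_SUN_KG / EV_PER_C2_KG
    return halo_mass_ev / m_particle_ev

if __name__ == "__main__":
    n_tot = occupation_number(1e10, 1e-22)
    print(f"n_tot ~ 10^{math.log10(n_tot):.0f}")                     # roughly 10^98
    print(f"delta_psi / |psi| ~ 10^{-0.5 * math.log10(n_tot):.0f}")  # roughly 10^-49
```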
It has been shown that this breakdown of the classical description is due to the underlying quantum distribution evolving away from one tightly peaked around the mean field value; see for example [41, 44, 45, 46, 47, 48, 49, 50, 51]. This deviation can be quantified by, or described as, a number of effects including the chaotic exploration of phase space [49], phase diffusion [44], fragmentation [51], etc. Importantly, it is not sufficient to simply compare the quantum breaktime to other timescales in the system. To fully understand the impact of quantum corrections it is also necessary to know the decoherence time and the pointer states. The decoherence time is the timescale on which interactions with the environment entangle the quantum state with its basis of pointer states. Analytic estimates of the decoherence time indicate that it is short compared to the system dynamical times [57, 58, 59]. There has been a great deal of work examining the quantum nature of ultra-light dark matter, as well as debate over whether the classical field theory is sufficient to describe ultra-light dark matter on the scales relevant to constraints [38, 41, 45, 46, 47, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]; a more detailed review of these works can be found in [66]. Some work has found that quantum corrections grow on the order of the dynamical time even in the highly occupied regime [46, 54, 55, 56]. Others have argued that they remain small [38, 41, 57, 58, 59, 61, 62]. However, much of this work relies largely on analytic estimates, which may not be reliable in the nonlinear regime, or on simulations of small toy systems, which may not be indicative of the behavior of more realistic ones. In previous works, we studied quantum corrections using full quantum simulations in small toy systems [45], and using the Field Moment Expansion method [67] to study the gravitational collapse of an initial over-density in a single spatial dimension [46]. In both cases, we found that the gravitational interaction caused quantum corrections to grow exponentially. The latter case showed specifically that this growth occurred during the nonlinear growth of the over-density. Here we will use the truncated Wigner approximation [68, 69, 70, 71, 72]. This method works by sampling the quantum Wigner distribution with many classical fields, i.e., an ensemble of solutions for different realizations of the scalar field. A quantum state can be represented by its Wigner distribution [73, 74, 75]. This can then be used to calculate the expectation value of operators corresponding to observables such as the field amplitude and spatial density. Its evolution can be approximated by an ensemble of classical fields all evolving according to MFT [41, 68, 75]. By constructing this ensemble and simulating each constituent realization (an individual, independently evolving classical representative of the ensemble) in parallel, we can accurately and quickly simulate the evolution of observables. This method has already been successfully used to simulate Bose-Einstein condensates in a trap [68, 70, 71, 72], quantum number eigenstates with a four-point interaction [41], the gravitational collapse of an initial overdensity in a single spatial dimension for a coherent state [47], and quantum field theory calculations [69], among others. We study the "quantum breaktime" of ultra-light dark matter using the method presented in [46], i.e., measuring the growth rate of the \(Q\) parameter, a proxy for the spread of the wavefunction around its mean field value.
We find that our three dimensional results corroborate the one dimensional results in [46]. The main result is that quantum corrections grow exponentially during nonlinear collapse and halo merging, but only quadratically after merging. We also study the impact of large quantum corrections on the evolution of the density. We find that large quantum corrections tend to remove the \(\sim\mathcal{O}(1)\) density fluctuations at the deBroglie scale. Finally, we study decoherence using a test particle intended to represent Baryonic matter, which we know takes well defined trajectories through phase space. We find that these test particles quickly enter into macroscopic superpositions, spreading around their mean value at the same rate as the dark matter wavefunction. This strongly implies that the macroscopic superpositions needed to impact the predictions of the classical field theory do not occur in realistic systems. The results of this paper support the conclusion that the classical field theory produces accurate predictions for scalar field dark matter. However, a more complete answer would require the identification of pointer states, which is beyond the scope of this paper. This work is organized in the following way: in Section II we discuss the necessary quantum mechanical background. In Section III we discuss the truncated Wigner approximation. Section IV describes the test problems we simulate. We summarize results in Section V and discuss their implications in Section VI. Finally, we conclude in Section VII. ## II Background In this section we introduce the quantum field and quantum phase space formalisms. We then show how the classical field theory is derived in the limit of large occupation number and when the quantum distribution is tightly peaked around the mean field value. Following this, we describe how quantum corrections enter the system when these assumptions are relaxed. Finally, we introduce the decoherence formalism. ### Quantum description In the non-relativistic limit, the Hamiltonian of a self-gravitating quantum scalar field takes the following form \[\hat{H}/\tilde{\hbar}=\sum_{j}\omega_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j}+\sum_{ijkl}\frac{\Lambda_{kl}^{ij}}{2}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{\dagger}\hat{a}_{i}\hat{a}_{j} \tag{3}\] \[=\iint dxdy\,\hat{\psi}^{\dagger}(x)\frac{-\tilde{\hbar}\nabla^{2}}{2}\hat{\psi}(y)+\hat{\psi}^{\dagger}(x)\,\frac{\hat{V}(x)}{\tilde{\hbar}}\,\hat{\psi}(x)\,,\] where \(\tilde{\hbar}=\hbar/m\). In our case the potential will be the solution to Poisson's equation, \(\nabla^{2}\hat{V}(x)=Cm\,\hat{\psi}^{\dagger}(x)\hat{\psi}(x)\). The position and momentum space field operators, \(\hat{\psi}(x)\) and \(\hat{a}_{i}\) respectively, are related by Fourier transform \[\hat{\psi}(x)=\sum_{i}\hat{a}_{i}u_{i}^{\dagger}(x)\,. \tag{4}\] \(\hat{a}\) and \(\hat{a}^{\dagger}\) are the annihilation and creation operators respectively. \(u_{i}^{\dagger}(x)\) is the momentum eigenstate with momentum \(\hbar k_{i}\). The field operators act on a quantum state, \(\ket{\psi}\). We will be concerned with the time evolution of \(\ket{\psi}\). It is often convenient to write this state in the basis of number eigenstates \(\{\ket{n_{i}}\}\), which satisfy \(\hat{a}^{\dagger}\hat{a}\ket{n}=n\ket{n}\), where \(\ket{n_{i}}\) is the number eigenstate with \(n\) particles in the \(i\)th momentum mode.
We can analyze this system by looking at the evolution of the quantum state, given by Schrodinger's equation \[i\hbar\,\partial_{t}\ket{\psi}=\hat{H}\ket{\psi}\,, \tag{5}\] or by looking at the evolution of the quantum field operators, given by Heisenberg's equation \[i\hbar\,\partial_{t}\hat{\psi}=[\hat{\psi},\,\hat{H}]\,. \tag{6}\] In this work we focus mainly on the simulation of coherent states, which we can write as follows: \[\left|\vec{z}\right\rangle_{C}=\bigotimes_{i=1}^{M}\exp\left[-\frac{|z_{i}|^{2}}{2}\right]\sum_{n_{i}=0}^{\infty}\frac{z_{i}^{n_{i}}}{\sqrt{n_{i}!}}\left|n_{i}\right\rangle\,, \tag{7}\] where \(\vec{z}\) is the vector of Fourier components of the classical field, i.e. \(z(x)=\sum_{i}z_{i}u_{i}^{\dagger}(x)\). This type of state is thought to describe the initial conditions for ultra-light dark matter produced via the misalignment mechanism [39; 40]. ### Phase space representation and pseudo probability distributions In much of this work, it will be more convenient to work in phase space. The representation of operators and states in phase space is described by their Weyl symbols. For an arbitrary operator, \(\hat{\Omega}(\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\})\), which is a function of our set of field operators \(\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\}\), the Weyl symbol is given by \[\Omega_{W}[\psi,\psi^{*}]\equiv\frac{1}{\text{Norm}}\int_{\mathbb{C}^{\mathbb{R}^{3}}}\int_{\mathbb{C}^{\mathbb{R}^{3}}}\mathcal{D}\eta\,\mathcal{D}\eta^{*}\left\langle\psi-\frac{\eta}{2}\,|\,\hat{\Omega}(\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\})\,|\,\psi+\frac{\eta}{2}\right\rangle_{C}e^{-|\psi|^{2}-\frac{1}{2}|\eta|^{2}}e^{\frac{1}{2}(\eta^{*}\psi-\eta\psi^{*})}\,, \tag{8}\] where \(\mathcal{D}\eta=\Pi_{x}d\eta(x)\) denotes a functional integral measure over all complex field configurations, see [76] for a rigorous treatment. For operators which are symmetrically ordered functions of \(\hat{\psi},\hat{\psi}^{\dagger}\) the Weyl symbol can be found by making the substitution \(\hat{\psi},\hat{\psi}^{\dagger}\rightarrow\psi,\psi^{*}\) in \(\hat{\Omega}(\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\})\). The Weyl symbol of this operator is a real valued functional of the field configuration \(\psi(x)\), \(\psi^{*}(x)\). We will make use of the Wigner function, \(f_{W}\), which is the Weyl symbol of the density matrix, \(\hat{\rho}\). The Wigner function defines a pseudo probability distribution on this phase space. It is not a true probability distribution because it takes negative values for most states. The Weyl symbol of the commutator, \([\,\dots\,,\,\dots\,]\), is the Moyal bracket, which acts as \[\left\{\{\,f,g\,\}\right\}_{M}\equiv 2f(\psi,\psi^{*})\sinh\left(\frac{1}{2}(\overleftarrow{\partial}_{\psi}\overrightarrow{\partial}_{\psi^{*}}-\overleftarrow{\partial}_{\psi^{*}}\overrightarrow{\partial}_{\psi})\right)g(\psi,\psi^{*})\,. \tag{9}\] Note that when the amplitude of \(\psi\) is large compared to the higher order derivatives of the Wigner function, i.e. \(|\partial_{\psi}^{3}f|/|\partial_{\psi}f|\ll n_{tot}\), the Moyal bracket can be approximated as a Poisson bracket, i.e. \[\left\{\{\,f,g\,\}\right\}_{M}=\left\{\,f,g\,\right\}_{c}+\mathcal{O}(1/n_{tot})\,.
\tag{10}\] We can then write the Von Neumann equations of motion as \[\partial_{t}\hat{\rho}=-\frac{i}{\hbar}\left[\hat{H},\hat{\rho}\right]\rightarrow \tag{11}\] \[\partial_{t}f_{W}[\psi,\psi^{*};t]=-\frac{i}{\hbar}\left\{\left\{H_{W}[\psi,\psi^{*}]\,,\,f_{W}[\psi,\psi^{*};t]\right\}\right\}_{M} \tag{12}\] \[\approx-\frac{i}{\hbar}\left\{H_{W}[\psi,\psi^{*}]\,,\,f_{W}[\psi,\psi^{*};t]\right\}_{c}\,. \tag{13}\] Expectation values are calculated as \[\langle\hat{\Omega}(\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\})\rangle=\int_{\mathbb{C}^{\mathbb{R}^{3}}}\int_{\mathbb{C}^{\mathbb{R}^{3}}}\mathcal{D}\psi\,\mathcal{D}\psi^{*}\,f_{W}[\psi,\psi^{*}]\,\Omega_{W}[\psi,\psi^{*}]\,. \tag{14}\] For our purposes one particularly important Wigner function is that of a coherent state, see equation (7). This distribution is simply a Gaussian centered on the mean field value [76], i.e. for a coherent state with classical field \(z(x)\) \[f_{W}[\psi,\psi^{*}]=\frac{1}{\pi}e^{-\int dx|\psi(x)-z(x)|^{2}}\,. \tag{15}\] ### Classical field approximation Using the phase space and pseudo probability distribution formalism, it is straightforward to understand the assumptions necessary to justify the classical field approximation. We first identify the classical field as the mean field value, \(\psi^{cl}(x)=\langle\hat{\psi}(x)\rangle\); with an initial state given by equation (15) we have \(\psi^{cl}(x,t=0)=z(x)\). Then we solve Heisenberg's equation in phase space for \(\partial_{t}\left\langle\hat{\psi}(x,t)\right\rangle\): \[\partial_{t}\left\langle\hat{\psi}(x,t)\right\rangle=\partial_{t}\psi^{cl}(x)=-\frac{i}{\hbar}\int_{\mathbb{C}^{\mathbb{R}^{3}}}D\psi\,\left\{\left\{\,H_{W}[\psi,\psi^{*}]\,,\,\psi(x)\,\right\}\right\}_{M}\,f_{W}[\psi,\psi^{*}]\,. \tag{16}\] Next we take the large occupation limit \(n_{tot}\gg 1\), which allows us to approximate the Moyal bracket as a Poisson bracket, \[\partial_{t}\psi^{cl}(x)\approx-\frac{i}{\hbar}\int_{\mathbb{C}^{\mathbb{R}^{3}}}D\psi\,\left\{H_{W}[\psi,\psi^{*}]\,,\,\psi(x)\right\}_{c}\,f_{W}[\psi,\psi^{*}]\,. \tag{17}\] Next we assert that the distribution is tightly peaked around the mean field value, i.e. \(|\left\langle\hat{\psi}\right\rangle|^{2}\gg\langle\delta\hat{\psi}^{\dagger}\,\delta\hat{\psi}\rangle\); the distribution can therefore be approximated as a delta function at the classical field value, \(f_{W}=\delta[\psi(x)-\psi^{cl}(x)]\), \[\partial_{t}\psi^{cl}(x)\approx-\frac{i}{\hbar}\int_{\mathbb{C}^{\mathbb{R}^{3}}}D\psi\,\left\{H_{W}[\psi,\psi^{*}]\,,\,\psi(x)\right\}_{c}\,\delta[\psi-\psi^{cl}] \tag{18}\] \[=-\frac{i}{\hbar}\left\{H_{W}[\psi^{cl},\psi^{cl*}]\,,\,\psi^{cl}(x)\right\}_{c}\,, \tag{19}\] \[=-\frac{i}{\hbar}\left(-\frac{\hbar^{2}\nabla^{2}}{2m}+m\,V\right)\psi^{cl}(x)\,. \tag{20}\] The last line gives the familiar Schrodinger-Poisson equations when \(\nabla^{2}V(x)=Cm|\psi^{cl}(x)|^{2}\). We see that the derivation of the classical field equations of motion relies on two assumptions. The first is that the Moyal bracket is well approximated by a Poisson bracket, which is true in the large occupation limit, i.e. \(n_{tot}\gg 1\). The second is that the quantum distribution is tightly localized around the classical field value, i.e. \(|\left\langle\hat{\psi}\right\rangle|^{2}\gg\langle\delta\hat{\psi}^{\dagger}\,\delta\hat{\psi}\rangle\), see discussions in [45; 54; 67].
Because these assumptions are necessary for the classical field equations but will be relaxed in the following sections, it will be useful later to parameterize the size of quantum corrections due to the spreading of the wavefunction using \(Q(t)\), see [67]: \[Q(t)=\frac{1}{n_{tot}}\int dx\,\left\langle\delta\hat{\psi}^{\dagger}(x)\delta\hat{\psi}(x)\right\rangle\,. \tag{21}\] Note that a coherent state with large classical field amplitude, see equation (15), satisfies both of the necessary assumptions for description by a classical field and has \(Q=0\). If the state remains well approximated by a coherent state with large occupations then the classical field theory will remain accurate [45; 46; 67]. ### Quantum corrections Quantum corrections begin to enter the system when the assumptions used to derive the mean field theory break down. As discussed in the previous section, the classical field theory is achieved in the limit where the Moyal bracket can be replaced by the Poisson bracket and the quantum distribution can be approximated by a delta function centered at the mean field value. The first approximation relies on how the mean field values compare with higher order derivatives of the Wigner function. Corrections to the Poisson bracket approximation are of \(\sim\mathcal{O}(1/n_{tot})\) [75] and are not discussed in this work. The second approximation ignores the finite width of the underlying quantum distribution. This width is given by the commutation relation between the field operators \(\hat{\psi}^{\dagger}\), \(\hat{\psi}\), can be related to the uncertainty principle, and is therefore a correction \(\sim\mathcal{O}(1/\sqrt{n_{tot}})\). This is the main correction considered in this work. Coherent states with large occupation numbers are thought to describe ultra light dark matter produced by the misalignment mechanism at early times [39, 40]. In the top row of Figure 1, we plot the underlying quantum distribution as compared with the classical field value for the gravitational collapse of an initial over-density, for two quantum simulations of coherent states at different occupation number, \(n_{tot}\), but with the same mean field evolution. Figure 1: Gravitational collapse of a spatial over-density in a single spatial dimension. We plot the results for the classical field theory in red, and two quantum simulations of coherent states with \(n_{tot}\approx 6\times 10^{4}\) and \(n_{tot}\approx 1\times 10^{6}\) in black and cyan, respectively. Each column represents a different time, \(t\). The top row shows the value of each stream in the ensemble at \(x=0\) and the bottom row shows the spatial density plotted such that each field has the same norm. Shell crossing occurs at \(t=1\). During the collapse phase the field undergoes phase diffusion but the density remains well approximated by the MFT until after the collapse. Following the collapse, the density is smoothed out in proportion to the amount of phase diffusion achieved prior to the collapse, with high particle number simulations exhibiting larger fluctuations in the final number density. In these simulations \(\tilde{\hbar}=2.5\times 10^{-4}\) and \(N_{s}=1024\), \(M=256\), \(M_{tot}=L=1\). Plot taken from [47]. We see in the top left panel of Figure 1 that at early times the distribution is tightly peaked around the mean field value. Over time, if the Hamiltonian has nonlinearities, like gravity, the underlying quantum distribution will spread. This is caused by the finite width
of the distribution, which creates correction terms to the equations of motion proportional to the variance of the field operators, see [67, 46]. In the top middle panel of Figure 1, the distribution is beginning to experience phase diffusion [44]. This means that the phase of the wavefunction is accumulating uncertainty and is becoming less well defined. At shell crossing, this phase diffusion becomes amplitude uncertainty, that is, the distribution spreads around the ring in the complex plane of fixed amplitude corresponding to \(A(x)=\sqrt{n_{tot}(x)}\), see the top right panel of Figure 1. The positions of the \(\sim\mathcal{O}(1)\) density fluctuations require well defined phase gradients. The result is that the density admits quantum corrections at late times, although it is well described by mean field theory at early times. We parameterize these deviations by \(Q(t)\), defined in equation (21), which gives an approximate description of the leading order quantum corrections to the mean field equation. We will then define a "quantum breaktime" as \(Q(t_{br})\sim 1\). This is the time when quantum corrections are large, and the system can no longer be well described by the classical field theory alone. It has been demonstrated that quantum corrections grow exponentially for chaotic systems, and, as a result, breaktimes scale logarithmically with occupation number [41, 49, 50, 77, 78, 79, 45, 46]. The phase space description of quantum mechanics offers an intuitive explanation for this phenomenon. The distribution over quantum phase space can be thought of, to first order, as a classical ensemble of fields with slight perturbations in their initial conditions. The chaotic quality of the system then causes these perturbations to grow exponentially apart in phase space, causing the distribution to spread away from its mean field value. The relationship between chaos and quantum phase space is explored in detail in [49]. ### Decoherence Let us consider the state, \(\ket{\psi(t)}\), of a system we are interested in, e.g. the dark matter halo of a galaxy. We couple this state to an environment, \(\ket{\mathcal{E}(t)}\), e.g. the state describing the phase space of the stars in the galaxy. We will assume at our initial time, \(t=0\), that the state describing both system and environment can be written as a product \[\ket{A(0)}=\ket{\psi(0)}\ket{\mathcal{E}(0)}\,, \tag{22}\] \[=\sum_{i}c_{i}(t=0)\ket{\phi}_{i}\otimes\sum_{j}b_{j}(t=0)\ket{\epsilon}_{j}\,. \tag{23}\] The combined state evolves via the Hamiltonian \[\hat{H}_{A}=\hat{H}_{\psi}+\hat{H}_{\mathcal{E}}+\hat{H}_{\text{int}}\,, \tag{24}\] where \(\hat{H}_{\text{int}}\) describes the interaction between the state and the environment. Assuming that the Hamiltonian is time independent, the evolution of the state to an arbitrary time \(t=T\) is given by \[\ket{A(T)}=e^{-i\,\hat{H}_{A}\,T}\ket{A(0)}\,. \tag{25}\] At this time, because of the influence of the interaction term in our Hamiltonian, there is no guarantee that the state can be written as a simple tensor product as in equation (22). In general, the system will be entangled with the environment. We now must write our state more generally as \[\ket{A}=\sum_{ij}c_{ij}\ket{\phi}_{i}\ket{\epsilon}_{j}\,, \tag{26}\] which can describe the entanglement between the two sets of basis states. It will often be convenient beyond this point to write the state's density matrix \[\hat{\rho}_{A}=\sum_{ijkl}c_{ij}c_{kl}^{*}\ket{\phi}_{i}\ket{\epsilon}_{j}\bra{\phi}_{k}\bra{\epsilon}_{l}\,.
\tag{27}\] Now, suppose that an observer measures the environment to be in the eigenbasis \(\ket{\tilde{\epsilon}}\). We then have a reduced density matrix obtained by tracing over the environment eigenbasis, \[\hat{\rho}_{A}^{R}=\text{Tr}_{\epsilon}[\rho_{A}]=\sum_{i}\bra{\tilde{\epsilon}}_{i}\hat{\rho}_{A}\ket{\tilde{\epsilon}}_{i}\,. \tag{28}\] When \(\bra{\tilde{\epsilon}}_{i}\ket{\tilde{\epsilon}}_{j}=\delta_{ij}\), the reduced density matrix is now a classical ensemble of pointer states of the system. Pointer states are the states which develop the least entanglement over time with the preferred environmental basis states. This process of environmental interaction projecting the state into the pointer state basis is called "decoherence". It is necessary to calculate the timescale associated with this process in order to fully understand the impact of quantum corrections. ## III Truncated Wigner approximation In this section we introduce the truncated Wigner approximation (TWA). We explain how we implement this scheme and how it can be used to model the quantum breaktime and decoherence. ### Approximation scheme The truncated Wigner approximation scheme is a method for approximating the evolution of the Wigner function, equation (11), in a way that relaxes the assumption that the quantum distribution is tightly distributed around the classical field value. We can use this to estimate the leading order quantum corrections to the classical field theory. The TWA method relies on two sets of approximations. First, we approximate the time evolution in phase space, dropping all terms of order \(\mathcal{O}(1/n_{tot})\) and higher, i.e. \[\left\{\left\{\,f,g\,\right\}\right\}_{M}\approx\left\{\,f,g\,\right\}_{c}\,. \tag{29}\] Note that this is the same first assumption required in the derivation of the classical field theory. The next approximation is of the Wigner function itself. We will represent the Wigner function with a set of classical fields and weights \(\left\{\,c_{i},\psi_{i}(x)\,\right\}_{W}\) as \[f_{W}[\psi,\psi^{*};t]\approx f_{S}[\psi,\psi^{*};t] \tag{30}\] \[f_{S}[\psi,\psi^{*};t]=\frac{1}{N_{s}}\sum_{i}c_{i}\,\delta[\psi-\psi_{i}(x,t)]\,\delta[\psi^{*}-\psi_{i}^{*}(x,t)]\,, \tag{31}\] where \(\psi_{i}(x,t)\) represents the \(i\)th field configuration in the set with weight \(c_{i}\) and \(N_{s}\) is the total number of streams in the set. We choose the field instances at \(t=0\) such that \[\int_{C\subset\mathbb{C}^{\mathbb{R}^{3}}}D\psi\,f_{W}[\psi,\psi^{*};t=0]=\lim_{N_{s}\to\infty}\int_{C\subset\mathbb{C}^{\mathbb{R}^{3}}}D\psi\,f_{S}[\psi,\psi^{*},t=0]\] \[=\lim_{N_{s}\to\infty}\frac{1}{N_{s}}\sum_{i}\begin{cases}c_{i}&\psi_{i}(x,t=0)\in C,\\ 0&\operatorname{else},\end{cases} \tag{32}\] for all regions \(C\subset\mathbb{C}^{\mathbb{R}^{3}}\). Note that the above scheme is sufficiently general to include Wigner functions which are not everywhere positive. If, however, the Wigner function being approximated is everywhere positive, it is sufficient to treat it as a probability distribution functional for the fields, i.e. \(\psi_{i}\sim f_{W}(\psi,\psi^{*})\) with \(c_{i}=1\) for all \(i\). The expectation value of a symmetrically ordered operator at time \(t\), \(\langle\hat{\Omega}[\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\}]\rangle\), is then given by \[\langle\hat{\Omega}(\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\})\rangle=\int_{C\subset\mathbb{C}^{\mathbb{R}^{3}}}D\psi\,f_{W}[\psi,\psi^{*},t]\,\Omega_{W}[\psi,\psi^{*}] \tag{33}\] \[=\lim_{N_{s}\to\infty}\frac{1}{N_{s}}\sum_{i}c_{i}\,\Omega_{W}[\psi=\psi_{i}(x);t]\,. \tag{34}\]
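As a concrete instance of Eq. (34), the sketch below estimates the expectation value of the (symmetrically ordered) density operator from a stream ensemble by simple averaging. Equal weights \(c_{i}=1\) are assumed, as is appropriate for a coherent-state Wigner function, and the small constant Weyl-ordering offset is neglected in this sketch.

```python
import numpy as np

def ensemble_density(streams: np.ndarray) -> np.ndarray:
    """Estimate <psi^dagger psi>(x) from TWA streams of shape (N_s, M).

    This is Eq. (34) applied to the density operator with c_i = 1; the constant
    symmetric-ordering offset is ignored here.
    """
    return np.mean(np.abs(streams) ** 2, axis=0)

def ensemble_mean_field(streams: np.ndarray) -> np.ndarray:
    """Estimate the mean field <psi>(x) as the stream average."""
    return streams.mean(axis=0)
```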
The accuracy of the truncation of the Moyal bracket as a Poisson bracket relies on large occupation numbers, \(n_{tot}\gg 1\), like the classical field approximation. However, unlike the classical field approximation, we relax the assumption that the underlying quantum distribution is well approximated by a delta function at the mean field value. The TWA instead requires that we have enough field instances sampled to resolve the distribution. The equation of motion for the set is given as \[\partial_{t}f_{S}[\psi,\psi^{*};t]=-\frac{i}{\hbar N_{s}}\sum_{i}c_{i}\left\{\,H_{W}[\psi_{i}(x,t),\psi^{*}_{i}(x,t)]\,,\,\psi_{i}(x,t)\,\right\}_{c}\,, \tag{35}\] \[=-\frac{i}{\hbar N_{s}}\sum_{i}c_{i}\left(-\frac{\hbar^{2}\nabla^{2}}{2m}+m\,V_{i}(x,t)\right)\psi_{i}(x,t)\,, \tag{36}\] achieved by plugging equation (31) into equation (13). It is important to note the somewhat unintuitive result that, at this approximation order, the potential, \(V_{i}\), is a functional only of \(\psi_{i}\), not of the ensemble of fields, i.e. \(\nabla^{2}V_{i}(x)=Cm|\psi_{i}(x)|^{2}\). The equation of motion for each individual stream is given by \[\partial_{t}\psi_{i}(x,t)=-\frac{i}{\hbar}\left(-\frac{\hbar^{2}\nabla^{2}}{2m}+m\,V_{i}(x,t)\right)\psi_{i}(x,t)\,. \tag{37}\] This result is derived in more detail in appendix A. Notice also that each stream evolves independently of any other, meaning that this method is highly parallelizable. ### Numerical implementation Numerically, we solve this system by integrating the mean field evolution of an ensemble of classical field instances. This means that any solver which solves the Schrodinger-Poisson equations can be used. Because each of the streams evolves independently of the others, we can solve each in parallel, allowing for a large number of streams to be simulated efficiently. We use the pseudo spectral integrator, as well as the timestep and resolution checks described in [80], with an updated kinetic aliasing check [81]. A discussion of the language used for the implementation as well as the code repository link are in appendix B. #### iii.2.1 Initial conditions generation We will simulate the evolution of coherent states, see equation (7). The Wigner function for a coherent state is a Gaussian centered on the classical field value, \(\psi^{cl}(x)\), see equation (15), i.e. the exact Wigner function at the initial conditions for a coherent state is \[f_{W}[\psi,\psi^{*}]=\frac{1}{\pi}e^{-\int dx|\psi(x)-\psi^{cl}(x)|^{2}}\,. \tag{38}\] We will approximate this with a stream ensemble as \[f_{S}[\psi,\psi^{*}]=\frac{1}{N_{s}}\sum_{i}c_{i}\,\delta[\psi-\psi_{i}(x)]\,\delta[\psi^{*}-\psi^{*}_{i}(x)]\,, \tag{39}\] where each field instance is drawn from the true Wigner function, \(\psi_{i}(x)\sim f_{W}[\psi,\psi^{*}]\). Note that for a coherent state, the Wigner function has an untroubled interpretation as a probability distribution. Here it is also necessary to introduce our spatial grid, which in three spatial dimensions is written \(x_{ijk}=(i\,dx,\,j\,dx,\,k\,dx)\), where \(dx=L/M\) is the spatial resolution given by the box size, \(L\), over the number of spatial modes in a single dimension, \(M\). For notational convenience, we write the grid indices in a way independent of the number of spatial dimensions, e.g. let \(ijk=g\) where now \(g\in\left\{\,0,1,\ldots,M-1\,\right\}^{D}\), where \(D\) is the number of spatial dimensions. Then we choose our fields as \[\psi_{i}(x_{g})=\psi^{cl}(x_{g})+\delta^{R}_{i}(x_{g})+i\,\delta^{I}_{i}(x_{g})\,. \tag{40}\]
where at every point, \(g\), in our grid we choose two random numbers drawn from a Gaussian distribution with variance \(1/2\), i.e. \(\delta^{R}_{i}(x_{g}),\,\delta^{I}_{i}(x_{g})\sim\mathcal{N}(0,\sqrt{1/2})\). Note that this is only the case if the classical field is normalized to be the number density, i.e. \(\sum_{g}|\psi(x_{g})|^{2}\,dx=n_{tot}\). Let us define a normalized \(\psi^{\prime}(x)\equiv\psi(x)/\sqrt{n_{tot}}\) such that \(\sum_{g}|\psi^{\prime}(x_{g})|^{2}\,dx=1\), as is often more convenient; then \[\psi^{\prime}_{i}(x_{g})=\psi^{\prime cl}(x_{g})+\delta^{\prime R}_{i}(x_{g})+\delta^{\prime I}_{i}(x_{g})i\,, \tag{41}\] \[\delta^{\prime R}_{i}(x_{g}),\,\delta^{\prime I}_{i}(x_{g})\sim\mathcal{N}(0,\sqrt{1/2})/\sqrt{n_{tot}}. \tag{42}\] See [68] for a more detailed discussion of this sampling scheme. #### iii.2.2 Integrating the equations of motion The fields are integrated using the standard symplectic pseudo spectral leapfrog integrator, following the temporal and spectral aliasing resolution constraints discussed in [80, 82]; however, we update the kinetic temporal resolution check. Let \(\psi_{t}^{i}\equiv\psi_{i}(x,t)\), \(V_{t}\equiv V(x,t)\), and \(\tilde{\psi}\equiv\mathcal{F}[\psi]\), i.e. the Fourier transform of the field. The update \(\psi_{t}\rightarrow\psi_{t+\Delta t}\) is given in the non-expanding case by \[\tilde{\psi}_{t+\Delta t}=U_{t}^{T}(\Delta t/2)\tilde{\psi}_{t}\text{ (position update half step)}\] \[\text{(calculate }V_{t})\] \[\psi_{t+\Delta t}=U_{t}^{V}(\Delta t)\psi_{t}\text{ (momentum update full step)}\] \[\tilde{\psi}_{t+\Delta t}=U_{t}^{T}(\Delta t/2)\tilde{\psi}_{t}\text{ (position update half step)}\,.\] \(U^{T}\) and \(U^{V}\) are the unitary operators associated with the kinetic and potential energies respectively, i.e. \[U_{t}^{T}(\Delta t)\equiv e^{i\,\Delta t\hbar\,k^{2}/(2m)}\,, \tag{43}\] \[U_{t}^{V}(\Delta t)\equiv e^{-i\,\Delta t\,m\,V(x,t)/\hbar}\,. \tag{44}\] \(\Delta t\) is dynamically chosen to avoid temporal aliasing in the kinetic or potential updates. This means that at each time step \[\Delta t=2\pi c_{f}\min\left[\hbar/mV\,,\,mL/(M\pi\hbar)\right]\,, \tag{45}\] where the first argument of the minimum function is the restriction on the time step set by the potential energy and the second is the restriction set by the gradient of the kinetic energy. Notice that this differs from the restrictions in [80]; here we only ensure that the gradient of the kinetic energy does not alias. #### iii.2.3 Evaluating operators The expectation value of symmetrically ordered operators, i.e. equation (33), can be approximated using our ensemble of fields as \[\langle\hat{\Omega}[\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\}]\rangle\approx\frac{1}{N_{s}}\sum_{i}\,\Omega_{W}[\psi_{i},\psi_{i}^{*};t]\,, \tag{46}\] where now the elements of our set of operators are the field operators defined at the grid points, i.e. \(\hat{\psi}(x_{g})\in\{\,\hat{\psi},\hat{\psi}^{\dagger}\,\}\) for all \(x_{g}\).
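To tie the pieces of this subsection together, the following minimal sketch samples a small one-dimensional stream ensemble as in Eqs. (40)-(42) and advances one stream by a single half-drift / full-kick / half-drift update following Eqs. (43)-(45). It is a toy illustration with periodic boundaries, an illustrative Poisson constant `C`, and a user-chosen safety factor `c_f`; it is not the solver of [80] used in this work.

```python
import numpy as np

def sample_streams(psi_cl, n_tot, n_streams, rng):
    """Eqs. (40)-(42): psi_i = psi_cl + (delta_R + i delta_I)/sqrt(n_tot) for a
    unit-normalized classical field, with delta_R, delta_I ~ N(0, sqrt(1/2))."""
    shape = (n_streams,) + psi_cl.shape
    noise = rng.normal(0.0, np.sqrt(0.5), shape) + 1j * rng.normal(0.0, np.sqrt(0.5), shape)
    return psi_cl[np.newaxis, ...] + noise / np.sqrt(n_tot)

def split_step(psi, dx, hbar_tilde, C, c_f):
    """One adaptive drift-kick-drift update of a single stream (1D, periodic grid).

    hbar_tilde = hbar/m, C sets the Poisson source strength, and c_f is the
    aliasing safety factor of Eq. (45)."""
    M = psi.size
    L = M * dx
    k = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)

    def potential(field):
        rho_k = np.fft.fft(np.abs(field) ** 2 - np.mean(np.abs(field) ** 2))
        V_k = np.zeros_like(rho_k)
        V_k[1:] = -C * rho_k[1:] / k[1:] ** 2        # -k^2 V_k = C rho_k
        return np.real(np.fft.ifft(V_k))

    V = potential(psi)
    dt = 2.0 * np.pi * c_f * min(hbar_tilde / max(np.max(np.abs(V)), 1e-12),
                                 L / (M * np.pi * hbar_tilde))     # Eq. (45) with hbar -> hbar/m
    drift = np.exp(1j * (dt / 2) * hbar_tilde * k ** 2 / 2)        # U^T(dt/2), Eq. (43)
    psi = np.fft.ifft(drift * np.fft.fft(psi))
    psi = np.exp(-1j * dt * potential(psi) / hbar_tilde) * psi     # U^V(dt), Eq. (44)
    psi = np.fft.ifft(drift * np.fft.fft(psi))
    return psi, dt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, L = 128, 1.0
    dx = L / M
    x = np.arange(M) * dx
    psi_cl = np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2).astype(complex)   # toy over-density
    psi_cl /= np.sqrt(np.sum(np.abs(psi_cl) ** 2) * dx)              # unit norm, as in Eq. (41)
    streams = sample_streams(psi_cl, n_tot=1e6, n_streams=64, rng=rng)
    streams[0], dt = split_step(streams[0], dx, hbar_tilde=2.5e-4, C=4.0 * np.pi, c_f=0.1)
    print(streams.shape, dt)
```

Because each stream evolves independently, the loop over streams can be distributed trivially across processes or GPUs, which is what makes the method highly parallelizable.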
For the cosmological systems we are interested in, these parameters have large values. A typical halo may have \(M_{tot}\sim 10^{10}\,M_{\odot}\), \(m\sim 10^{-22}\,\mathrm{eV/c^{2}}\), \(n_{tot}\sim 10^{100}\). If we use these values, the sampling scheme described in this section will fail, as we do not have the numerical precision to model a \(\delta\psi\sim 1/\sqrt{n_{tot}}\sim 10^{-50}\). We therefore simulate systems with non-physical values of these parameters and extrapolate them to physical values. We will describe this procedure in this section. Note that this is a similar procedure to that of previous work [46].

Figure 2: Here we show how the approximation scheme works diagrammatically for a single mode with a quartic nonlinearity. The left two plots show the Husimi distribution for the quantum distribution at the initial and final times. The red dots show the value of the many random streams sampling the Wigner function of the quantum distribution. We can see that the evolution of the ensemble of points approximates the evolution of the underlying quantum phase space. The rightmost plot shows the true value and ensemble approximation of \(\text{Var}[\hat{q}]\). We can see that the evolution is well approximated by the ensemble. Here we use \(N_{s}=1024\).

We instead define the simulation parameters using their relation to the classical field evolution and the sampling scheme. If we normalize the fields \(\psi\) such that \(\sum_{g}|\psi^{\prime}(x_{g})|^{2}\,dx=1\) then we can write the classical Schrödinger-Poisson field equations as \[\partial_{t}\psi^{\prime}(x) =-i\left(-\frac{\tilde{h}\nabla^{2}}{2}+\frac{V(x)}{\tilde{h}}\right)\psi^{\prime}(x)\,, \tag{47}\] \[\nabla^{2}V(x) =4\pi GM_{tot}|\psi^{\prime}(x)|^{2}\,. \tag{48}\] The classical field equations then depend only on our choice of \(M_{tot}\) and \(\tilde{h}\), which give the total mass in the simulation and the mass of the field, respectively. Notice that \(n_{tot}\) does not enter the classical field equations, as expected since the classical equations are in the \(n_{tot}\to\infty\) limit. Now \(n_{tot}\) only enters as a sampling parameter in equation (41) and is not defined by \(M_{tot}/m\). From here, we distinguish between the physical value of \(n_{tot}^{p}\equiv M_{tot}/m\), and the simulated value of \(n_{tot}^{s}\), which enters only in the field sampling \(\delta_{i}^{\prime R}(x_{g})\), \(\delta_{i}^{\prime I}(x_{g})\sim\mathcal{N}(0,\sqrt{1/2})/\sqrt{n_{tot}^{s}}\). It is interesting to note that a classical field simulation has \(n_{tot}^{s}\neq n_{tot}^{p}\) and instead sets \(n_{tot}^{s}=\infty\). The parameter \(Q(t)\), defined in (21), is a measure of quantumness [45, 46, 47, 67]. When \(Q(t)\ll 1\) the system is well described by the classical field theory, and when \(Q(t)\sim 1\) quantum corrections will begin to cause deviations from the classical theory; see for example Figures 1 and 3, which demonstrate the relationship between \(Q\) and the density predicted by the quantum and classical evolutions. In this paper we will define the quantum breaktime, \(t_{br}\), to be the time at which \(Q(t_{br})\sim 1\). However, as we just discussed, we do not have access to the true value of \(Q_{p}(t)\) for a given set of physical parameters \(M_{tot}\), \(\hbar\), \(m\), \(n_{tot}\) that respect the relationship \(n_{tot}=M_{tot}/m\). Instead, we have a simulated value of \(Q_{s}(t;M_{tot},n_{tot},\tilde{h})\), where \(M_{tot}\) and \(\tilde{h}\) specify the classical field evolution and \(n_{tot}\) is only a sampling parameter.
For fixed \(M_{tot}\) and \(\tilde{h}\), values of \(Q\) computed with different \(n_{tot}\) are related by the ratio of their respective \(n_{tot}\) when \(Q(t)\ll 1\). We can describe this relation as \[Q_{1}(t;M_{tot},\tilde{\hbar},n_{tot}=n_{1})=\frac{n_{2}}{n_{1}}Q_{2}(t;M_{tot},\tilde{\hbar},n_{2})\,. \tag{49}\]

Figure 3: Here we plot the condensed object resulting from the collapse of a momentum space Gaussian density in two spatial dimensions, initial conditions described in Section IV.2. The density shown here is the result of \(1\,\mathrm{Gyr}\) of evolution. Each plot also shows the simulated value of the \(Q\) parameter at this time. The left panel shows the classical field evolution, the condensed object in this case exhibiting the expected granular structure. The right panel shows the evolution for a simulation using the truncated Wigner approximation with \(n_{tot}=10^{6}\) particles. Here, as in the one dimensional case, we see that the quantum corrections have removed most of the granular structure. In the middle panel we show the same simulation with \(n_{tot}=10^{9}\) particles. The quantum corrections in this case are much smaller, and the resulting density is almost identical to the classical case. Here we set \(M_{tot}=6\times 10^{9}\,M_{\odot}\), \(\hbar/m=0.02\,\mathrm{kpc^{2}/Myr}\), \(L=60\,\mathrm{kpc}\), \(M=512^{2}\), \(2k_{d}^{2}=0.05\,\mathrm{kpc^{-2}}\), \(T=1\,\mathrm{Gyr}\).

An example is instructive. Let us say that we want to model the evolution of the physical system with \(M_{tot}=10^{10}\,M_{\odot}\), \(\tilde{\hbar}=0.02\,\mathrm{kpc}^{2}/\mathrm{Myr}=\hbar/(10^{-22}\,\mathrm{eV}/\mathrm{c}^{2})\), \(n_{tot}^{p}\sim 10^{98}\). We simulate the evolution of a system with \(M_{tot}=10^{10}\,M_{\odot}\), \(\tilde{\hbar}=0.02\,\mathrm{kpc}^{2}/\mathrm{Myr}\), \(n_{tot}^{s}\sim 10^{8}\) and measure the resulting simulated \(Q_{s}(t)\sim 10^{-8}\,t^{2}\). Using the above relation, this corresponds to a physical \(Q_{p}(t)\sim 10^{-98}t^{2}\), with a corresponding physical breaktime of \(t_{br}^{p}\sim 10^{49}\,\mathrm{Myr}\). Therefore, to relate the simulated \(Q_{s}\) to the physical \(Q_{p}\) we can write \[Q_{p}(t;M_{tot},\tilde{\hbar},n_{tot}=n_{tot}^{p})=\frac{n_{tot}^{s}}{n_{tot}^{p}}Q_{s}(t;M_{tot},\tilde{\hbar},n_{tot}^{s})\,. \tag{50}\] It is important to note that meaningful predictions of the physical breaktime can only be made using \(Q_{p}\). In Figure 4, we show the physical \(Q_{p}\) predicted by three different simulation sampling parameters \(n_{tot}^{s}\). All three simulations give approximately the same prediction for the growth of \(Q_{p}(t)\). We therefore do not expect the prediction of the breaktime to be sensitive to the choice of \(n_{tot}^{s}\) so long as \(Q_{s}(t)\ll 1\). Finally, the initial simulated value of \(Q_{s}(t=0)\) will not be exactly \(0\) because the sampling scheme provides only an estimate of the underlying distribution. \(Q_{s}(t=0)\) depends on the number of streams \(N_{s}\). We therefore plot the simulated \(Q\) with this initial value subtracted. The behavior and growth of \(Q\) are not sensitive to the choice of \(N_{s}\) so long as \(N_{s}\gg 1\). In Figure 5 we plot the simulated value of \(Q\) for simulations with three different resolutions. All three agree, demonstrating again that \(Q\) is independent of the resolution parameters \(M,\,N_{s},\) and \(n_{tot}^{s}\) so long as the resolution is adequate.
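As a small illustration of this rescaling, the following Python snippet reproduces the worked example above: an early-time quadratic fit to a simulated \(Q_{s}(t)\) is rescaled to the physical occupation via equation (50) and extrapolated to \(Q_{p}\sim 1\). The numbers are the illustrative ones quoted in the text, not new results.

```python
import numpy as np

# Rescale a measured Q_s(t) to the physical occupation (eq. (50)) and extrapolate the
# early-time quadratic fit Q_p ~ A t^2 to Q_p ~ 1 to estimate the breaktime.
n_tot_s, n_tot_p = 1e8, 1e98          # sampling vs. physical occupation (from the example above)
t = np.linspace(0.0, 100.0, 50)       # Myr
Q_s = 1e-8 * t**2                     # stand-in for the measured simulated Q_s(t)

Q_p = (n_tot_s / n_tot_p) * Q_s       # eq. (50)
A = np.polyfit(t**2, Q_p, 1)[0]       # Q_p ~ A t^2 at early times
t_br_p = np.sqrt(1.0 / A)
print(f"physical breaktime ~ {t_br_p:.1e} Myr")   # ~1e49 Myr, matching the worked example
```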
### Modeling decoherence

Let us start by writing our system as \[\left|A\right\rangle=\left|\mathrm{DM}\right\rangle\left|\mathcal{E}\right\rangle\,. \tag{51}\] \(\left|\mathrm{DM}\right\rangle\) is the initial quantum state of the dark matter, which we will take to be a coherent state. \(\left|\mathcal{E}\right\rangle\) is the initial state of the environment, which we will take to be test particles with some well defined positions, \(r_{i}\), and momenta, \(p_{i}\) (though we will later consider that this phase space position can only be known to some resolution).

Figure 4: Here we plot the prediction for the physical \(Q_{p}\) for three different simulations, all with different sampling parameters \(n_{tot}^{s}\). The simulated \(Q_{s}(t)\) are then normalized according to equation (50). We can see that each simulation makes the same prediction for the growth of \(Q_{p}\). The simulations are of the collapse of a momentum space Gaussian, described in Section IV.2, in two spatial dimensions, with \(M_{tot}=6\times 10^{9}\,M_{\odot}\), \(\hbar/m=0.02\,\mathrm{kpc}^{2}/\mathrm{Myr}\), \(L=60\,\mathrm{kpc}\), \(M=512^{2}\), \(2k_{d}^{2}=0.05\,\mathrm{kpc}^{-2}\).

Figure 5: Here we plot the simulated value of \(Q\) for the gravitational collapse of an initial overdensity in a single spatial dimension at three different simulation resolutions. \(M\) represents the number of grid cells, and \(N_{s}\) the number of sampling streams. The evolution of \(Q\) is the same in all three cases. This demonstrates that the evolution is independent of the specific simulation parameters. For these simulations \(M_{tot}=10^{8}\,M_{\odot}\), \(L=60\,\mathrm{kpc}\), and \(\hbar/m=0.02\,\mathrm{kpc^{2}/Myr}\).

We will assume that the gravitational potential is dominated by the dark matter. Using these assumptions we can write \[\hat{H}_{A}=\hat{H}_{\rm DM}+\hat{H}_{\cal E}+\hat{H}_{\rm int}\,, \tag{52}\] \[\hat{H}_{\rm DM}=\int\int dxdy\,\hat{\psi}^{\dagger}(x)\frac{-\tilde{h}^{2}\,\nabla^{2}}{2}\hat{\psi}(y)+\hat{\psi}^{\dagger}(x)\,m\,\hat{V}(x)\,\hat{\psi}(x)\,, \tag{53}\] \[\hat{H}_{\cal E}=\sum_{i}\frac{\hat{p}_{i}^{2}}{2\,m_{p}}\,, \tag{54}\] \[\hat{H}_{\rm int}=\sum_{i}m_{p}\hat{V}(\hat{r}_{i})\,, \tag{55}\] \[\nabla^{2}\hat{V}(x)=Cm|\hat{\psi}(x)|^{2}\,, \tag{56}\] where \(m\) is the mass of the dark matter field, and \(m_{p}\) the mass of the test particles (though this is not actually dynamically relevant, as the particle is only coupled through gravity). We will write the Wigner function of the dark matter according to the previous sections as an ensemble of streams, and the Wigner function of our test particle as delta functions in momentum-position phase space, i.e. \[f_{DM}^{j}[\psi,\psi^{*}] =\sum_{s}c_{s}\,\delta[\psi-\psi_{s}(x)]\,\delta[\psi^{*}-\psi_{s}^{*}(x)],\ \ {\rm if}\ \ p^{s},r^{s}\in C^{j}\,, \tag{64}\] \[f_{DM}^{R}[\psi,\psi^{*}] =\sum_{j}f_{DM}^{j}[\psi,\psi^{*}]\,, \tag{65}\] where \(C^{j}\subset\mathbb{R}^{2n_{p}}\) is a region in the \(n_{p}\) test particle phase space configuration space. \(f_{DM}^{R}[\psi,\psi^{*}]\) is a classical ensemble of approximately pure state Wigner functions \(f_{DM}^{j}[\psi,\psi^{*}]\).
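A small sketch of how the binning into the regions \(C^{j}\) can be organized in practice is given below. Here each stream is assumed to carry a test-particle phase-space point \((r^{s},p^{s})\), obtained by integrating the particle in that stream's potential, and the cell widths are arbitrary illustrative choices.

```python
import numpy as np

# Group the N_s streams by the phase-space cell C^j occupied by their test particle,
# as in eqs. (64)-(65): the streams sharing a cell form one f_DM^j, and the collection
# of cells forms the classical ensemble f_DM^R.
N_s = 512
rng = np.random.default_rng(1)
r_s = rng.normal(0.0, 0.3, N_s)        # stand-in particle positions  [kpc]
p_s = rng.normal(0.0, 0.05, N_s)       # stand-in particle velocities [kpc/Myr]

dr, dp = 0.1, 0.02                     # assumed widths of the bins C^j
cells = {}
for i in range(N_s):
    key = (int(np.floor(r_s[i] / dr)), int(np.floor(p_s[i] / dp)))
    cells.setdefault(key, []).append(i)        # stream indices contributing to f_DM^j

weights = {key: len(idx) / N_s for key, idx in cells.items()}   # classical mixture weights
print(f"{len(cells)} occupied cells; largest weight = {max(weights.values()):.3f}")
```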
Note that we have greatly simplified the overlap between environment configuration states by binning the possible environment phase space configurations and assuming that the overlap is large (\(\sim 1\)) if two configurations fall in the same bin, \(C^{j}\), and \(0\) for configurations in different bins (in general this is only true for well separated bins with appropriate width).

## IV Test problems

In this section we introduce the test problems studied in this paper. The first one is a simple spatial overdensity. The second one we study is the collapse of a random field. Importantly, we study the second test problem in three phases: 1) the initial collapse of the random field into a single virialized object, 2) the stable evolution of this collapsed object, and 3) the merger of two stable collapsed objects.

### Sinusoidal overdensity

We consider the gravitational collapse of an initial overdensity, see Figure 1. In this test problem, an initial perturbation grows exponentially, resulting in density shell crossing and a characteristic spiral structure in classical phase space. In corpuscular cold dark matter, this system continues to make smaller scale structures in phase space indefinitely. In the classical field case, the momentum-position uncertainty relation defines a minimum scale under which phase space structure cannot form, see [80] for a more detailed discussion, resulting in the characteristic "quantum" pressure associated with this model. We note here that the "quantum" pressure exists in the purely classical field formalism. The initial mean field is given by \[\psi^{cl}(x_{g})=\sqrt{1+\delta\,\cos{(2\pi x_{g}/L)}}/{\rm Norm}\,, \tag{66}\] where the norm is chosen such that \(\sum_{g}|\psi(x_{g})|^{2}\,dx=1\). Recall that when simulating a quantum coherent state, the field instances are chosen normally distributed around the mean field value, parameterized by the total number of particles, as \[\psi_{i}(x_{g})=\psi^{cl}(x_{g})+\delta_{i}^{R}(x_{g})+\delta_{i}^{I}(x_{g})i\,, \tag{67}\] \[\delta_{i}^{R}(x_{g}),\,\delta_{i}^{I}(x_{g})\sim\mathcal{N}(0,\sqrt{1/2})/\sqrt{n_{tot}}\,. \tag{68}\]

### Momentum space Gaussian

The growth and evolution of solitons is one of the most studied systems in ultra light dark matter [83, 84, 85, 86, 87]. It is therefore important to understand how quantum corrections grow in these scenarios. We simulate initial conditions following the example of [83], where the initial mean field is chosen in momentum space as \[\tilde{\psi}^{cl}(k_{i})=e^{-k_{i}^{2}/(2\,k_{d}^{2})+i\tilde{\phi}_{i}}\,, \tag{69}\] where \(k_{i}\) is the wavenumber associated with the \(i\)th momentum mode. \(\tilde{\phi}_{i}\) is chosen randomly at each point in momentum space from a uniform distribution, i.e. \(\tilde{\phi}_{i}\sim U[0,1]\). This describes a Gaussian distributed momentum space with temperature \(k_{d}\). The initial density has granular over-densities given by the interference of momentum modes, see Figure 6. Over time this system will collapse into a condensed object, see Figure 7, which will then remain a stable "Bose-star", see Figure 8. These objects are typically characterized by their granular interference patterns. We will be interested in the behavior of quantum corrections during both the collapse phase and the stable object phase. Our sampling scheme assumes that the quantum state is well described by a coherent state at the initial conditions.
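For concreteness, a short Python sketch of how such an initial field can be generated on a two-dimensional grid follows. The grid values are illustrative, and we assume the random phase is meant to cover a full \(2\pi\) period (reading the \(U[0,1]\) draw in units of \(2\pi\)).

```python
import numpy as np

# Momentum-space Gaussian initial conditions of eq. (69) on a 2D grid: a Gaussian
# envelope in k with an independent random phase per mode, transformed to real space.
M, L = 512, 60.0                         # modes per dimension, box size [kpc]
two_kd2 = 0.05                           # 2*k_d^2  [kpc^-2]
k1d = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)
KX, KY = np.meshgrid(k1d, k1d, indexing="ij")
k2 = KX**2 + KY**2

rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2.0 * np.pi, (M, M))      # assumed phase range [0, 2*pi)
psi_k = np.exp(-k2 / two_kd2 + 1j * phases)         # eq. (69)

psi_cl = np.fft.ifft2(psi_k)                        # classical mean field in real space
psi_cl /= np.sqrt(np.sum(np.abs(psi_cl)**2) * (L / M)**2)   # normalize sum |psi'|^2 dx = 1
```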
We will have three classes of simulations. The first will start from the momentum space Gaussian described in equation (69), i.e. we sample around the classical field shown in Figure 6. These simulations will represent the **collapsing phase** of the evolution and are intended to demonstrate how quantum corrections grow during gravitational collapse. The second class of simulations will start from the collapsed object formed in the classical evolution of the initial conditions described by equation (69), i.e. we sample around the classical field shown in the left panel of Figure 8, which is the same as the right panel of Figure 7. These simulations will represent the **post-collapse phase** and are intended to demonstrate how quantum corrections grow in a virialized halo. Finally, we will perform simulations taking multiple copies of the collapsed object and allowing them to merge, i.e. we sample around the classical field shown in the left panel of Figure 9, which is four copies of a condensed object given slight perturbations in initial position. These simulations will represent the **merging of collapsed objects** and are intended to demonstrate how quantum corrections grow during halo mergers.

Figure 6: The density of the initial conditions used to simulate collapsing objects in two spatial dimensions. The momentum density is Gaussian centered on \(k=0\). The granular structure seen in the density is a result of the interference between different momentum modes. Here \(M_{tot}=6\times 10^{9}\,M_{\odot}\), \(L=60\,\)kpc, and \(\hbar/m=0.02\,\)kpc\({}^{2}\)/Myr, \(t_{c}\sim 2\,\)Gyr, \(M=512^{2}\).

Figure 7: Here we plot the projected density evolution of the momentum space Gaussian initial conditions described in IV.2 in three spatial dimensions. Each column represents a different time, with the initial conditions shown on the left plot and the collapsed object on the right plot. We see the initially randomly distributed granules collapse into an object. Here we set \(M_{tot}=1\times 10^{10}\,M_{\odot}\), \(\hbar/m=0.02\,\mathrm{kpc}^{2}/\mathrm{Myr}\), \(L=60\,\mathrm{kpc}\), \(N=512^{2}\), \(2k_{d}^{2}=0.05\,\mathrm{kpc}^{-2}\), \(t_{c}\sim 2\,\mathrm{Gyr}\).

Figure 8: Here we plot the classical field evolution of the collapsed object resulting from the evolution of the Gaussian momentum distribution described in IV.2 in three spatial dimensions. Each column represents a different time, with the initial conditions shown on the left plot and the collapsed object on the right plot. We see the object is supported against further collapse with a continually evolving granular envelope. Here we set \(M_{tot}=10^{10}\,M_{\odot}\), \(\hbar/m=0.02\,\mathrm{kpc}^{2}/\mathrm{Myr}\), \(L=60\,\mathrm{kpc}\), \(N=512^{3}\).

## V Results

In this section, we describe the results of our simulations. We focus this discussion on three main points. First, we estimate the quantum breaktime, i.e. the timescale on which quantum corrections grow large. Second, we estimate the effect of quantum corrections on the dark matter density. Third, we discuss our analysis of the decoherence time for these systems.

### Breaktimes

The breaktime, \(t_{br}\), is calculated using the \(Q\) parameter, defined in equation (21). Section III.3 explains the relationship between this parameter and the breaktime in detail. When \(Q\sim 1\) the system tends to differ from the predictions of the classical field theory [45, 67].
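To make this procedure concrete, the following is a minimal Python sketch, with illustrative numbers, of how a breaktime estimate can be extracted from a measured \(Q_{s}(t)\) series, assuming the exponential growth regime found below for collapsing systems and the rescaling of equation (50).

```python
import numpy as np

# Fit the exponential regime ln Q_s ~ gamma*(t/t_c) + c of a measured Q_s(t) and
# extrapolate to Q_p ~ 1 at the physical occupation (cf. eqs. (50), (70) and (71)).
t_c = 2000.0                                   # approximate collapse time [Myr]
t = np.linspace(0.5, 2.0, 26) * t_c            # measurement times
n_tot_s, n_tot_p = 1e8, 1e100                  # sampling vs. physical occupation
Q_s = np.exp(7.0 * t / t_c) / n_tot_s          # stand-in for the measured Q_s(t)

gamma, c = np.polyfit(t / t_c, np.log(Q_s), 1)          # ln Q_s ~ gamma*(t/t_c) + c
t_br = t_c * (np.log(n_tot_p / n_tot_s) - c) / gamma    # time at which Q_p reaches 1
print(f"gamma ~ {gamma:.1f}, nonlinear breaktime ~ {t_br / 1e3:.0f} Gyr")   # ~66 Gyr
```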
The breaktime is estimated by studying how \(Q\) grows over time and estimating when \(Q(t_{br})\sim 1\). We first note that the results of simulations in higher dimensions generally corroborate the 1D results from [46]. For the collapse of the Gaussian momentum space density we see an initial quadratic growth of \(Q\) followed by an exponential growth during collapse. For already collapsed systems no longer experiencing nonlinear growth we see only a power law growth of \(Q\). The initial collapse of the k-space Gaussian in three spatial dimensions is shown in Figure 7. The collapse time for this system is approximately \(t_{c}\sim 1/\sqrt{\rho G}=1/\sqrt{G\,10^{10}M_{\odot}/60^{3}kpc^{3}}\sim 2\,\mathrm{Gyr}\). We plot the \(Q\) parameter for this evolution in the left panel of Figure 10. Like in the 1D results shown in [46], the \(Q\) parameter initially grows quadratically before growing exponentially. When normalized by the collapse time, we see that the exponential growth is comparable to the 1D results [46]. During collapse \[Q(t)\sim\frac{1}{n_{tot}}e^{7t/t_{c}}\,. \tag{70}\] Therefore the quantum breaktime associated with the nonlinear growth is \[t_{br}^{NL}\sim\frac{\ln(n_{tot})}{7}\,t_{c}\,. \tag{71}\] Note that during the collapse the specific mass of the particle is not relevant, as the evolution of the largest scale modes in ULDM is the same as in CDM [88, 89, 14, 90]. Figure 9 shows the evolution of a system in which two collapsed objects are allowed to merge in two spatial dimensions. In this case the quantum corrections grow similarly to the collapsing case, see the right panel of Figure 10, i.e. exponentially throughout the merger.

Figure 9: Here we plot the classical field evolution of the merger of two collapsed objects resulting from the evolution of the Gaussian momentum distribution described in IV.2 in two spatial dimensions. Each column represents a different time, with the initial conditions shown on the left plot and the collapsed object on the right plot. The collapse time is \(t_{c}\sim 600\,\mathrm{Myr}\). Here we set \(M_{tot}=1.2\times 10^{10}\,M_{\odot}\), \(\hbar/m=0.02\,\mathrm{kpc}^{2}/\mathrm{Myr}\), \(L=120\,\mathrm{kpc}\), \(N=1024^{2}\).

Figure 8 shows the evolution of a system which starts from a collapsed object. This object changes little over the course of the simulation, with the main evolution being random changes in the granular structure of the envelope surrounding the solitonic core, behavior typical of collapsed objects in ultra light dark matter. The middle panel of Figure 10 plots the growth of the \(Q\) parameter for this system. Unlike the collapsing and merging cases, the growth is well fit by a quadratic for the entire simulation duration. This is explained by the growth of the lowest order terms in the moment expansion around the mean field value, see [46]. \(Q\) is well approximated by \[Q(t)=\text{Tr}[\kappa_{ij}]\,t^{2}/(2n_{tot})\,, \tag{72}\] where \(\kappa_{ij}\) is given from the momentum space mean field values \[\partial_{tt}\left\langle\delta\hat{a}_{i}^{\dagger}\delta\hat{a}_{j}\right\rangle \sim 2\mathbb{R}\left[\sum_{kplbc}\Lambda_{pl}^{ij}\Lambda_{bc}^{kj}\left\langle\hat{a}_{b}\right\rangle\left\langle\hat{a}_{c}\right\rangle\left\langle\hat{a}_{p}^{\dagger}\right\rangle\left\langle\hat{a}_{l}^{\dagger}\right\rangle\right] \tag{73}\] \[\equiv \kappa_{ij}\,.\] The corresponding breaktime for the stable system is \[t_{br}^{S}\sim\sqrt{n_{tot}/\text{Tr}[\kappa_{ij}]}\,.
\tag{74}\] The difference in the evolution of \(Q\) in each of these systems clearly demonstrates that the exponential growth of quantum corrections occurs during gravitational collapse and not in already virialized objects. For astrophysical systems \(t_{br}^{NL}\ll t_{br}^{S}\).

### Large quantum corrections

It is instructive to look at the effect of large quantum corrections. This allows us to understand what kinds of predictions we expect to most differ between mean field and quantum systems. In order to demonstrate how these corrections affect the spatial density we have simulated the collapse of a sinusoidal overdensity in one spatial dimension. The results are plotted in Figure 1. Phase diffusion is the leading order effect, i.e. the phase of the wavefunction at a given position becomes less well defined during the collapse. The distribution of occupation numbers and complex angles at \(x=0\) is given for the ensemble of streams in Figure 11. In the case of one spatial dimension, the corrections grow most quickly during the collapse as opposed to the post collapse virialized stage of the evolution. However, even though the phase is increasingly poorly defined during the collapse, the amplitude of the field is still close to the classical value until shell crossing. If we look separately at the mass weighted fractional amplitude and phase variance, we see that the phase variance grows during the collapse, while the amplitude variance grows very quickly at shell crossing but slowly before and after; see Figure 12, in which we plot the density weighted amplitude and phase variances, which are given, respectively, as \[\text{Var}(\tilde{A}) =\int dV\ \langle\hat{\psi}^{\dagger}(x)\hat{\psi}(x)\rangle\,\langle\delta\hat{A}^{2}(x)\rangle\,/\,\langle\hat{A}(x)\rangle\, \tag{75}\] \[\text{Var}(\tilde{\phi}) =\int dV\ \langle\hat{\psi}^{\dagger}(x)\hat{\psi}(x)\rangle\,\langle\delta\hat{\phi}^{2}(x)\rangle\,/(\pi/\sqrt{3})\,. \tag{76}\]

Figure 10: Here we plot the evolution of the physical \(Q\) parameter for the three test problems described in Section IV.2. From left to right the plots show \(Q(t)\) for the collapse of a momentum space Gaussian in three spatial dimensions (left), a collapsed object in three spatial dimensions (middle), and the merger of two collapsed objects in two spatial dimensions (right). In both the collapsing and merging case the parameter grows exponentially. In the stable case, and in the nonlinear cases at early times, the parameter grows quadratically. The growth of \(Q\) for these systems corroborates the 1D results found in [46, 47], i.e. that quantum corrections grow exponentially during nonlinear growth and by a power law for virialized systems and at very early times. For the collapsing and stable simulations \(M_{tot}=10^{10}\,M_{\odot}\), \(L=60\,\text{kpc}\), \(\hbar/m=0.02\,\text{kpc}^{2}/\text{Myr}\), \(t_{c}\sim 2\,\text{Gyr}\) (approximate collapse time), \(M=512^{3}\). For the merging simulation \(M_{tot}=1.2\times 10^{10}\,M_{\odot}\), \(L=120\,\text{kpc}\), \(\hbar/m=0.02\,\text{kpc}^{2}/\text{Myr}\), \(t_{c}\sim 0.6\,\text{Gyr}\) (approximate merger time), \(M=1024^{2}\).

After shell crossing, we can see that the primary effect of the quantum corrections is to lessen the degree to which the density has the interference patterns characteristic of scalar field dark matter. This makes sense given that the interference patterns result from differences between well defined spatial phase gradients, which become less well defined in the quantum case.
We expect then that large quantum corrections affect the spatial density by removing the \(\sim 1\) fluctuations that come from the interference of phase space streams. This results in a reduction of the granular structure typical of collapsed objects in ULDM. The result of large quantum corrections on the density can be seen for the collapse of a gravitational over-density in a single spatial dimension in Figure 1, and for the gravitational collapse of an object in two spatial dimensions in Figure 3. Each shows a reduction in the amplitude of the interference pattern structure of the density. Large quantum corrections therefore affect the density in a way similar to multi-field [91] or vector-field [92] ultra light dark matter.

Figure 11: Here we plot the distribution of complex angles and occupations of the ensemble of fields for a spatial overdensity in one spatial dimension. Two quantum simulations with \(n_{tot}\approx 6\times 10^{4}\) and \(n_{tot}\approx 1\times 10^{6}\) are plotted in black and cyan, respectively. Each column represents a different time, \(t\). The top row shows a histogram of the stream ensemble occupation numbers at \(x=0\) and the bottom row a histogram of the stream ensemble complex angles. Shell crossing occurs at \(t=1\). In these simulations \(\tilde{h}=2.5\times 10^{-4}\) and \(N_{s}=1024\).

### Decoherence

Decoherence is tested by coupling a test particle with well defined phase space position to the dark matter Wigner function, as described in Section III.4. Baryonic particles take well defined trajectories through phase space, so any quantum effect that quickly puts test particles into macroscopic superpositions in phase space is unlikely to be stable to decoherence. We simulate the one dimensional collapse of an initial overdensity, place a single test particle at initial position \(-L/4\) with initial velocity \(0\), and couple it to the dark matter state. We plot the evolution of the system in Figure 13. In the top row we can see that, over time, the uncertainty in the phase space position of the particle grows as the system collapses. We then measure the position-momentum uncertainty of the test particle throughout the evolution, i.e. \[\Delta x\,\Delta p=\sqrt{\mathrm{Var}(r^{s})\,\mathrm{Var}(p^{s})}\,. \tag{77}\] The growth of this uncertainty compared to the growth of quantum corrections is plotted in Figure 14. Their growth is very similar. Fundamentally, this is due to the fact that the same dynamics causing the wavefunction to spread around its mean also cause the test particle to spread around its classical phase space value. In order to have large quantum corrections of the kind found here, baryonic particles would have to evolve into macroscopic superpositions at the same rate that the quantum corrections develop. Because we observe baryonic particles taking well defined trajectories, this implies that decoherence rapidly collapses macroscopic dark matter superpositions.

## VI Discussion

The simulation results allow us to understand a number of quantum effects. We have investigated the timescales on which quantum corrections grow and their effect on observables once they become large. Likewise, we have been able to provide simulations of the decoherence of a test particle. Both of these effects are necessary for understanding the behavior of quantum effects in ultra-light dark matter. With respect to the quantum breaktime, the simulations in higher dimensions largely corroborate the 1D results presented in [46; 47].
Quantum corrections grow exponentially during nonlinear growth, such as the collapse and merger shown, respectively, in the left and right panels of Figure 10. This is typical of quantum systems which exhibit classical chaos [49], making this scaling unsurprising for nonlinear gravitational systems. For virialized systems stable against further nonlinear growth, we observe that corrections grow quadratically, see the middle panel of Figure 10. The quadratic growth is seeded from the initial conditions of \(Q=0\) by a term proportional to the commutator of the field operators in the field moment expansion of the equations of motion, see [46; 80]. The breaktime for stable systems, \(t_{br}^{S}\sim\sqrt{n_{tot}/\mathrm{Tr}[\kappa_{ij}]}\), is far too long to introduce quantum corrections within the age of the universe for systems with occupations \(n_{tot}\sim 10^{100}\). However, the breaktime for nonlinear systems, \[t_{br}^{NL}\sim\frac{\ln(n_{tot})}{7}\,t_{c}\,, \tag{78}\] is plausibly smaller than the age of the universe for some systems with the shortest dynamical times. This makes an investigation of the effect of quantum corrections and the decoherence timescale useful. The predominant effect of quantum corrections is to remove the \(\sim\mathcal{O}(1)\) density fluctuations resulting from the coherent interference of streams in phase space, see Figures 1 and 4. These interference patterns rely on a well defined phase gradient, but the nonlinearity in the Hamiltonian has the effect of causing phase diffusion in the quantum state. When shell crossing occurs and the streams cross each other, the phase is no longer well defined and the amplitude interference pattern is lessened proportionally, see Figure 12. Quantum correction effects on the density fluctuations are similar to multi-field [91] or vector-field [92] ultra light dark matter models. Constraints which depend strongly on these fluctuations, such as the heating of ultra-faint dwarf stellar dispersions [29; 30] and strong gravitational lensing constraints [35], would be most impacted when quantum effects are large. Likewise, the effect of large quantum corrections on haloscope experiments was considered in [65]. It should be pointed out that ultra-faint dwarf galaxies have large dynamical times and therefore would have slower growing quantum corrections than large galactic systems. An important mitigating factor complicating this story is decoherence. As nonlinearities in the quantum system drive the growth of quantum corrections, interactions with the environment project the system into its basis of pointer states. Crucially, the same gravitational interaction which drives the growth of corrections also provides a coupling to the baryonic tracer particle environment. And while it is unlikely that this is the fastest decoherence channel, see for instance [57; 58; 59], any corrections that would force baryonic tracer particles into macroscopic superpositions in phase space cannot be realistic, as we can observe that baryonic tracers take well defined paths through phase space.

Figure 12: We plot a normalized mass weighted amplitude variance (see equation (75)) in blue and a normalized mass weighted phase variance (see equation (76)) in orange for the gravitational collapse of an overdensity in a single spatial dimension. We set \(N=512\), \(M_{tot}=10^{8}\,\mathrm{M}_{\odot}\), \(L=60\,\mathrm{kpc}\), \(\hbar/m=0.01\), \(n_{tot}=10^{6}\).
We find that quantum corrections and the uncertainty in the tracer's phase space trajectory both grow exponentially, see Figure 14. We can say approximately then that the decoherence timescale is \[\tau_{d}\lesssim t_{br}\,. \tag{79}\] This makes sense as both effects are driven by gravitational nonlinearity. Therefore, it is likely that systems which exhibit large quantum corrections also put baryons into macroscopic superpositions, which are not observed. This implies that large quantum corrections like the ones we simulate here are unlikely, a result which strongly supports the accuracy of the classical field approximation. Note that this decoherence time does not depend explicitly on the mass, unlike ones previously found [57, 58, 59]. Instead it is related to the dynamical timescale, which for this system also describes the timescale on which small perturbations in the initial conditions grow apart in phase space. We qualify this support with the following potential caveats. Because the decoherence timescale is fast, the pointer states of the system are important for understanding its behavior. If we assume that the pointer states are coherent states, then the classical approximation is likely accurate on scales above the scale of quantum fluctuations, i.e. \(\sim\mathcal{O}(1/\sqrt{n_{tot}})\). We can say that the pointer states must allow baryonic tracer particles to take well defined trajectories. Naively, this means that we would like the state to be an approximate eigenstate of the density operator. That would be true for coherent states, squeezed states, or field number eigenstates, all described by a single classical field but with different quantum properties, or conceivably for quantum states that are not described by a single classical field, such as fragmented states (appearing in the context of BECs), where multiple incoherent fields are needed to encapsulate the quantum state. These states have been studied previously in a similar context in, for example, [64, 51]. Importantly, a coherent state is only an exact eigenstate of the linear field operator, though the fractional variance of the density operator for a coherent state is on the scale of quantum fluctuations and thus small. It is plausible that other states may be the pointer states. For example, field number states (written in terms of the number eigenstate basis as [93]) are eigenstates of the density operator and have been shown in previous work to spread more slowly due to gravitational nonlinearity [45].

Figure 13: The evolution of the gravitational collapse of an initial overdensity for a quantum coherent state coupled to a tracer particle. The top row shows the phase space of the classical field evolution. Overlaid on top of this is the distribution representing the quantum state of the tracer particle (in black) and the particle's classical phase space value (in red). Shell crossing occurs at \(t=1\). In the bottom row, we plot the values at \(x=0\) of the sample streams used to approximate the Wigner distribution (in black), together with the classical field value (in red). We can see that as the quantum state spreads around the classical value of the field, it also spreads around the classical phase space position of the particle. In these simulations \(\tilde{h}=2.5\times 10^{-4}\), \(n_{tot}\approx 6\times 10^{4}\), \(M_{tot}=L=1\), and \(N_{s}=512\).
It is possible that there exist pointer states which satisfy the conditions we describe here but still admit corrections to the classical equations of motion or have interesting quantum properties. This work does not contain an analysis of field number states because the Wigner function of the field number state is more difficult to approximate using the truncated Wigner approximation. Likewise, we did not provide any estimation of the pointer states in this work. While Section III.4 contains a description of how this method can be used to obtain a reduced density matrix, it is unclear how to directly identify the pointer states without simply guessing and checking. Finally, we point out that the quartic self-interaction term, which is not considered here, is also unlikely to cause large quantum corrections. Previous work has shown that corrections due to this term grow according to a power law [45], similar to the stable systems investigated here. And while this nonlinearity does not drive baryonic tracers into macroscopic superpositions, it is likely too slow to grow quantum corrections within the lifetime of the universe for systems at high occupation. Investigations of field number states, pointer states, and the ultra light dark matter self-interactions remain interesting potential future work.

Figure 14: Here we show the results of using a test particle coupled to the collapse of an overdensity in a single spatial dimension to estimate decoherence rates. The left plot shows the uncertainty in the particle's phase space position over time, and the right plot shows the \(Q\) parameter, measuring the size of quantum corrections in the system. We see that the two grow similarly. Shell crossing occurs at \(t=1\). In these simulations \(\tilde{\hbar}=2.5\times 10^{-4}\) and \(N_{s}=512\), \(n_{tot}^{s}=6.7\times 10^{7}\).

## VII Conclusions

In this paper we use the truncated Wigner approximation to study quantum corrections to the classical field theory of ultra-light dark matter. We have provided some of the largest and most realistic simulations used to study quantum effects in ultra-light dark matter to date, involving hundreds of modes in 1, 2, and 3 spatial dimensions. Likewise, we have provided the first direct simulations studying quantum decoherence for ultra-light dark matter. Using this approximation we estimate the quantum breaktime for ultra-light dark matter, provide an estimation of the effect of quantum corrections on the density, and investigate the decoherence time due to gravitational coupling to a baryonic tracer particle. Our study of the breaktime corroborates the 1D results in [46; 47]. Quantum corrections grow exponentially in systems which grow nonlinearly, and quadratically in stable virialized systems and at very early times. We have now observed these scalings in systems over a wide range of scales, initial conditions, and spatial dimensions, see for example Figure 10, Appendix C, and the systems studied in previous work [45; 46; 47]. We find that in collapsing systems the breaktime is approximately \(t_{br}\sim\frac{\ln(n_{tot})}{7}\,t_{c}\), where \(t_{c}\sim\sqrt{L^{D}/G\,M}\) is the dynamical time. The systems we have studied in this paper are intended to represent the growth of small scale structure such as dwarf galaxies of approximate mass \(M_{tot}\sim 10^{10}M_{\odot}\) with occupations around \(n_{tot}\sim 10^{100}\). Most constraints relating to the impact of ultra light dark matter on structure growth use structure on this scale or smaller.
For such a system, the exponential growth of quantum corrections we simulated would predict a quantum breaktime of \(\sim 65\,\mathrm{Gyr}\). The quadratic growth of quantum corrections results in a much longer breaktime of \(\sim 10^{45}\,\mathrm{Gyr}\). We note that both of these breaktimes are well beyond the age of the universe. We have found that when quantum corrections are large, the leading order effect is to remove the granular structure associated with the \(\sim\mathcal{O}(1)\) density fluctuations resulting from interfering streams. This effect can be seen in Figures 1 and 3. This is similar to the effect of adding additional light fields [91] or using high spin fields [92]. Large quantum corrections are therefore most important for studies sensitive to this interference structure, such as the heating of dwarf galaxy stellar dispersions [29; 30], strong gravitational lensing constraints [35], and haloscopes sensitive to the time variation of the field amplitude [65]. Our simulation of decoherence indicated that the same perturbations that lead to the spreading of the dark matter wavefunction would also result in macroscopic phase space superpositions of baryonic test particles. Because the same physics governs both processes, this happens at approximately the same rate that quantum corrections grow, see Figure 14. As we do not observe baryonic particles in macroscopic superpositions, it is unlikely that a macroscopic superposition of dark matter is stable to decoherence. These results use direct nonlinear simulations of quantum corrections to provide some of the strongest evidence to date that the classical field approximation used in ULDM simulations is accurate. However, in this work we did not identify the pointer states of the system, study alternative initial quantum states (such as field number states), or take into account ULDM self-interaction. These remain interesting potential future work.

###### Acknowledgements.

Some of the computing for this project was performed on the Sherlock cluster. This work was supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515.

## Appendix A Appendices

### Proof of truncated Wigner method

In this appendix we provide a proof that a classical ensemble of fields, \(f_{S}\), \[f_{S}[\psi,\psi^{*};t]=\frac{1}{N_{s}}\sum_{i}^{N_{s}}c_{i}\,\delta[\psi-\psi_{i}(x,t)]\,\delta[\psi^{*}-\psi_{i}^{*}(x,t)]\,,\] (A.1) where \[\partial_{t}\psi_{i}(x,t) =-\frac{i}{\hbar}\left\{\,H_{W}[\psi_{i}(x),\psi_{i}^{*}(x)]\,,\,\psi_{i}(x,t)\,\right\}_{c}\] (A.2) \[=-\frac{i}{\hbar}\frac{\partial H_{W}[\psi_{i}(x),\psi_{i}^{*}(x)]}{\partial\psi_{i}^{*}(x)}\] (A.3) and \(H_{W}\) is the Weyl symbol of the Hamiltonian, solves the equation of motion for this Wigner function, i.e. \[\partial_{t}f_{S}[\psi,\psi^{*};t]\approx-\frac{i}{\hbar}\,\left\{\,H_{W}[\psi,\psi^{*}]\,,\,f_{S}[\psi,\psi^{*};t]\,\right\}_{c}.\] (A.4) We start by taking a time derivative of equation (A.1) and then making substitutions using equation (A.3), and \(a\,\delta(a-b)=b\,\delta(b-a)\) where necessary. We will also use the notational shorthand \(\delta[\Psi_{i}]\equiv\delta[\psi(x)-\psi_{i}(x,t)]\).
\[\partial_{t}f_{S}[\psi,\psi^{*}] =\frac{1}{N_{s}}\partial_{t}\sum_{i}c_{i}\,\delta[\Psi_{i}]\,\delta[\Psi_{i}^{*}]\] \[=\frac{1}{N_{s}}\sum_{i}c_{i}\left(\partial_{t}\delta[\Psi_{i}]\right)\,\delta[\Psi_{i}^{*}]+\delta[\Psi_{i}]\,\left(\partial_{t}\delta[\Psi_{i}^{*}]\right)\] \[=\frac{1}{N_{s}}\sum_{i}c_{i}\left(\frac{\partial\delta[\Psi_{i}]}{\partial\psi}\frac{\partial\psi_{i}(x,t)}{\partial t}\right)\,\delta[\Psi_{i}^{*}]+c.c.\] \[=-\frac{i}{\hbar}\frac{1}{N_{s}}\sum_{i}c_{i}\left(\frac{\partial\delta[\Psi_{i}]}{\partial\psi}\frac{\partial H_{W}[\psi(x),\psi^{*}(x)]}{\partial\psi^{*}}\right)\,\delta[\Psi_{i}^{*}]-c.c.\] \[=-\frac{i}{\hbar}\frac{\partial H_{W}[\psi(x),\psi^{*}(x)]}{\partial\psi^{*}}\frac{\partial}{\partial\psi}\frac{1}{N_{s}}\sum_{i}c_{i}\,\delta[\Psi_{i}]\,\delta[\Psi_{i}^{*}]-c.c.\] \[=-\frac{i}{\hbar}\,\frac{\partial H_{W}[\psi(x),\psi^{*}(x)]}{\partial\psi^{*}}\frac{\partial f_{S}[\psi(x),\psi^{*}(x)]}{\partial\psi}-c.c.\] \[=-\frac{i}{\hbar}\,\left\{\,H_{W}[\psi(x),\psi^{*}(x)]\,,\,f_{S}[\psi(x),\psi^{*}(x)]\,\right\}_{c}.\] (A.5) And we see that equation (A.5) is the same as equation (A.4), completing the proof.

### MSM: A Rust/C++ Implementation

We use Rust bindings for the C++ library Arrayfire to create a fast single-GPU implementation of the ensemble method presented in this paper. We call this implementation MSM: MultiStream Method. The implementation can be found at [https://github.com/andillio/MSM](https://github.com/andillio/MSM). The implementation includes several test problems such as the spherical tophat, coherent and incoherent Gaussians, and supports user-specified initial conditions. Since Python remains a popular language, the code outputs snapshots in Numpy's npy format for ease of use, and allows such files to be read in as initial conditions. The implementation supports one, two, and three spatial dimensions. The repository also includes a synthesizer tool, which synthesizes the streams output by the simulator. It executes and averages arbitrary functions \(\mathbb{C}^{N^{3}}\rightarrow\mathbb{C}^{N}\) and \(\mathbb{C}^{N^{3}}\rightarrow\mathbb{C}\) across the streams, where \(N\) is the number of spatial cells in the individual streams. These functions can be applied either on the individual streams before being averaged or on the averaged wavefunction. Averaging the stream wavefunctions, their Fourier transforms, and their respective squares, along with calculating the \(Q\) parameter used in this paper, are several examples of this use.

### Comparing spatial dimensions

We include a brief study of systems with different numbers of spatial dimensions to demonstrate that the behavior we observe here is not specific to any particular set of dimensions. Simulations of the same test problem in higher dimensions produce similar results. For example, in Figure 15, we show the evolution of the \(Q(t)\) parameter for the collapse of a sinewave overdensity in a single spatial dimension compared with two spatial dimensions, with the same total occupation. We can see that the evolution is quite similar qualitatively, the only difference being a factor of about \(2.5\) in the value of \(Q\), a factor which has a vanishingly small effect on the order of the quantum breaktime. This corroborates the results of this work, in which the results found in [46] in a single spatial dimension are largely applicable in higher dimensions and with a different numerical method.
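As a toy Python illustration of the synthesizer's two averaging modes described above (the observable and array shapes here are assumptions, not the tool's actual interface):

```python
import numpy as np

# Toy illustration of the synthesizer's two averaging modes for an observable F[psi]:
# (a) evaluate F on every stream and average the results (the TWA expectation, cf. eq. (46)),
# (b) average the streams into a single mean field first and evaluate F on that.
rng = np.random.default_rng(0)
streams = rng.normal(size=(64, 512)) + 1j * rng.normal(size=(64, 512))   # stand-in for npy snapshots

def F(psi):                                    # example observable: power spectrum |psi_k|^2
    return np.abs(np.fft.fft(psi)) ** 2

per_stream = np.mean([F(p) for p in streams], axis=0)    # mode (a): average of the function
on_mean = F(np.mean(streams, axis=0))                    # mode (b): function of the average
```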
2301.09675
Improved Rate of First Order Algorithms for Entropic Optimal Transport
This paper improves the state-of-the-art rate of a first-order algorithm for solving entropy regularized optimal transport. The resulting rate for approximating the optimal transport (OT) has been improved from $\widetilde{{O}}({n^{2.5}}/{\epsilon})$ to $\widetilde{{O}}({n^2}/{\epsilon})$, where $n$ is the problem size and $\epsilon$ is the accuracy level. In particular, we propose an accelerated primal-dual stochastic mirror descent algorithm with variance reduction. Such special design helps us improve the rate compared to other accelerated primal-dual algorithms. We further propose a batch version of our stochastic algorithm, which improves the computational performance through parallel computing. To compare, we prove that the computational complexity of the Stochastic Sinkhorn algorithm is $\widetilde{{O}}({n^2}/{\epsilon^2})$, which is slower than our accelerated primal-dual stochastic mirror algorithm. Experiments are done using synthetic and real data, and the results match our theoretical rates. Our algorithm may inspire more research to develop accelerated primal-dual algorithms that have rate $\widetilde{{O}}({n^2}/{\epsilon})$ for solving OT.
Yiling Luo, Yiling Xie, Xiaoming Huo
2023-01-23T19:13:25Z
http://arxiv.org/abs/2301.09675v1
# Improved Rate of First Order Algorithms for Entropic Optimal Transport ###### Abstract This paper improves the state-of-the-art rate of a first-order algorithm for solving entropy regularized optimal transport. The resulting rate for approximating the optimal transport (OT) has been improved from \(\widetilde{\mathcal{O}}(n^{2.5}/\epsilon)\) to \(\widetilde{\mathcal{O}}(n^{2}/\epsilon)\), where \(n\) is the problem size and \(\epsilon\) is the accuracy level. In particular, we propose an accelerated primal-dual stochastic mirror descent algorithm with variance reduction. Such special design helps us improve the rate compared to other accelerated primal-dual algorithms. We further propose a batch version of our stochastic algorithm, which improves the computational performance through parallel computing. To compare, we prove that the computational complexity of the Stochastic Sinkhorn algorithm is \(\widetilde{\mathcal{O}}(n^{2}/\epsilon^{2})\), which is slower than our accelerated primal-dual stochastic mirror algorithm. Experiments are done using synthetic and real data, and the results match our theoretical rates. Our algorithm may inspire more research to develop accelerated primal-dual algorithms that have rate \(\widetilde{\mathcal{O}}(n^{2}/\epsilon)\) for solving OT. ## 1 Introduction The _Optimal Transport_ (OT) [22, 16, 29] is an optimization problem that has been actively studied. In this section, we review the OT problem. In Section 1.1, we review the OT formulation and its related concepts. In Section 1.2, we survey the existing algorithms for solving OT and summarize our contribution given the literature background. ### Optimal Transport We review the definition of OT. Given a cost matrix \(C\in\mathbb{R}_{+}^{n\times n}\) and two vectors \(\mathbf{p},\mathbf{q}\in\Delta_{n}\), where \(\Delta_{n}:=\{\mathbf{a}\in\mathbb{R}_{+}^{n}:\mathbf{a}^{T}\mathbf{1}=1\}\) is the standard simplex, OT is defined as follows: \[\min_{X\in\mathcal{U}(\mathbf{p},\mathbf{q})}\langle C,X\rangle, \tag{1}\] where \(\mathcal{U}(\mathbf{p},\mathbf{q}):=\left\{X\in\mathbb{R}_{+}^{n\times n}\left|X\mathbf{1 }=\mathbf{p},X^{T}\mathbf{1}=\mathbf{q}\right.\right\}\), and \(\langle C,X\rangle:=\sum_{i,j=1}^{n}C_{i,j}X_{i,j}\). The _\(\epsilon\)-solution_ is always used when evaluating algorithm efficiency for solving OT, so we review its definition as follows. Denote the optimal solution of problem (1) as \(X^{*}\), an \(\epsilon-\)solution \(\widehat{X}\) is such that: \[\widehat{X}\in\mathcal{U}(\mathbf{p},\mathbf{q});\] \[\langle C,\widehat{X}\rangle\leq\langle C,X^{*}\rangle+\epsilon.\] Note that for a stochastic algorithm, the second condition is replaced by \(\operatorname{\mathbb{E}}\langle C,\widehat{X}\rangle\leq\langle C,X^{*} \rangle+\epsilon\). Our paper adopts a two-step approach [4] for finding an \(\epsilon\)-solution to problem (1). In the first step, one finds an approximate solution \(\widetilde{X}\) to the _entropic OT_ problem (2). \[\min_{X\in\mathcal{U}(\mathbf{p}^{\prime},\mathbf{q}^{\prime})}\langle C,X\rangle- \eta H(X), \tag{2}\] where \(H(X)=-\sum_{i,j}X_{i,j}\log(X_{i,j})\) is the entropy. In the second step, one rounds \(\widetilde{X}\) to the original feasible region \(\mathcal{U}(\mathbf{p},\mathbf{q})\). By taking proper parameters \(\eta,\mathbf{p}^{\prime},\mathbf{q}^{\prime}\) and requiring a suitable accuracy level when approximating problem (2), the work [4] guarantees the final solution to be an \(\epsilon\)-solution to problem (1). 
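For reference, a minimal Python/NumPy sketch of the first step of this approach, approximately solving the entropic problem (2) with Sinkhorn iterations, is shown below. The regularization level and iteration count are illustrative rather than the theoretically prescribed choices, and the rounding step of [4] is omitted.

```python
import numpy as np

# Minimal Sinkhorn iteration for the entropy-regularized problem (2); the rounding
# step that restores exact feasibility for problem (1) is not shown.
def sinkhorn(C, p, q, eta=0.05, iters=2000):
    K = np.exp(-C / eta)                     # Gibbs kernel
    u, v = np.ones_like(p), np.ones_like(q)
    for _ in range(iters):
        u = p / (K @ v)                      # match row marginals
        v = q / (K.T @ u)                    # match column marginals
    return u[:, None] * K * v[None, :]       # transport plan X = diag(u) K diag(v)

n = 50
rng = np.random.default_rng(0)
C = rng.random((n, n))
p = np.full(n, 1.0 / n)
q = np.full(n, 1.0 / n)
X = sinkhorn(C, p, q)
print(np.abs(X.sum(axis=1) - p).sum(), (C * X).sum())   # row-marginal error, transport cost
```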
### Literature Review

We review the state-of-the-art algorithms that solve OT by the two-step approach and summarize their computational complexity (measured by the number of numerical operations) for giving an \(\epsilon\)-solution to OT in Table 1. The computational complexities in Table 1 are shown in their order of \(n\) and \(\epsilon\), where the \(\log(n)\) term is omitted.

\begin{table} \begin{tabular}{c c c c} \hline \hline Year & Algorithm & Order of Complexity & Solves Entropic OT \\ \hline 2013 & Sinkhorn [8] & \(n^{2}/\epsilon^{2}\)[10] & \(\surd\) \\ 2017 & Greenkhorn [4] & \(n^{2}/\epsilon^{3}\)[4]; \(n^{2}/\epsilon^{2}\)[19] & \(\surd\) \\ 2018 & Stochastic Sinkhorn [1] & \(n^{2}/\epsilon^{3}\); \(n^{2}/\epsilon^{2}\) (This paper) & \(\surd\) \\ 2018 & APDAGD [10] & \(n^{2.5}/\epsilon\) & \(\surd\) \\ 2018 & Packing LP [5, 27] & \(n^{2}/\epsilon\) & \(\times\) \\ 2018 & Box Constrained Newton [5] & \(n^{2}/\epsilon\) & \(\surd\) \\ 2019 & APDAMD [19] & \(n^{2.5}/\epsilon\) & \(\surd\) \\ 2019 & Dual Extrapolation [15] & \(n^{2}/\epsilon\) & \(\times\) \\ 2019 & Accelerated Sinkhorn [20] & \(n^{7/3}/\epsilon^{4/3}\) & \(\surd\) \\ 2019 & Dijkstra’s search + DFS [17] & \(n^{2}/\epsilon+n/\epsilon^{2}\) & \(\times\) \\ 2020 & APDRCD [14] & \(n^{2.5}/\epsilon\) & \(\surd\) \\ 2021 & AAM [13] & \(n^{2.5}/\epsilon\) & \(\surd\) \\ 2022 & Hybrid Primal-Dual [6] & \(n^{2.5}/\epsilon\) & \(\surd\) \\ 2022 & PDASGD [30] & \(n^{2.5}/\epsilon\) & \(\surd\) \\ 2022 & PDASMD & \(\mathbf{n^{2}/\epsilon}\) (This paper) & \(\surd\) \\ \hline \hline \end{tabular} \end{table} Table 1: In this table, we list the year of the relevant publication, the names of the methods, the simplified version of its computational complexity, and whether (a \(\surd\) sign) or not (an \(\times\) sign) the method solves entropic OT as an intermediate step for approximating OT in columns. The mark of “(This Paper)” indicates a rate derived in this paper. It is clear that our method achieves the lowest rate among the methods that solve entropic OT.

There are four main techniques to solve problem (2) in current literature:

* The first technique solves the dual problem of problem (2) by the Bregman projection technique. Specifically, this technique partitions the dual variables into blocks and iteratively updates each block. Algorithms that use this technique include the Sinkhorn algorithm [8], the Greenkhorn algorithm [4], and the Stochastic Sinkhorn algorithm [1].
* The second technique also solves the dual problem of problem (2) but uses accelerated first-order methods. Algorithms that use this technique include accelerated gradient descent (APDAGD) [10], accelerated mirror descent (APDAMD) [19], accelerated alternating minimization (AAM) [13], accelerated randomized coordinate descent (APDRCD) [14] and accelerated stochastic gradient descent (PDASGD) [30]. This technique can also be combined with the first technique. See, for example, the accelerated Sinkhorn algorithm in reference [20].
* The third technique solves the dual problem of problem (2) by second-order algorithms. An instance that uses this technique is the box-constrained Newton algorithm [5].
* The fourth technique minimizes the primal-dual gap of problem (2). An instance that uses this technique is the hybrid primal-dual algorithm [6].
Besides works that use the two-step approach to solve the entropic OT first, some works directly solve the unpenalized OT problem (1) by linear programming [5, 27], dual-extrapolation [15], or graph-based search algorithm [17]. We compare the computational complexity in Table 1 of our algorithm with other state-of-the-art algorithms as follows. First, our PDASMD algorithm belongs to the second class of algorithms to solve the entropic OT problem (2). All other algorithms in this class reported a rate of \(\widetilde{\mathcal{O}}(n^{2.5}/\epsilon)\) for approximating OT, while our algorithm has a better rate of \(\widetilde{\mathcal{O}}(n^{2}/\epsilon)\). Thus our algorithm improves the rate for this class. The advantage of our algorithm mainly comes from the special technique that we use: though all the algorithms in this class use the acceleration technique, no accelerated variance reduction version of stochastic mirror descent has been tried in the previous algorithms. We apply those techniques to entropic OT and find that they lead to a better theoretical rate. Second, our PDASMD algorithm still reports the best rate among all algorithms for solving entropic OT. There is only one algorithm on entropic OT that achieved the same rate: the box-constrained Newton algorithm. However, we note that the Newton algorithm is a second-order algorithm, which requires computing the Hessian of the objective function. By its second-order nature, each step of the Newton algorithm will be expensive in terms of computation and memory. On the other hand, our PDASMD algorithm is based on mirror descent, which is a first-order algorithm. Our PDASMD algorithm is thus easier to implement. Finally, the algorithms that directly solve the original OT problem also report the same optimal rate as our PDASMD algorithm, including the packing LP algorithm, the dual extrapolation algorithm, and the graph-based Dijkstra DFS algorithm (when \(\epsilon\gtrsim 1/n\)). Compared with those algorithms, we have the extra advantage that our algorithm can not only approximate the OT problem but also solve the entropic OT. Thus, when one wants to solve the entropic OT, our algorithm is still preferred.

**Our Contribution.** We summarize two main contributions in this work as follows.

* We propose an accelerated primal-dual stochastic algorithm that has computational complexity \(\widetilde{\mathcal{O}}(n^{2}/\epsilon)\) for solving OT. Every step of our algorithm is defined by simple arithmetic operations and is counted in the complexity calculation. Thus our algorithm is practical. Moreover, compared with other algorithms that achieve the same rate for solving OT: our algorithm has the extra advantage that it can also be applied to entropic OT; it is a first-order algorithm, so it can be easily implemented without computing the Hessian. We also propose a batch version of our algorithm to increase the computational power.
* We prove that the computational complexity of the Stochastic Sinkhorn algorithm is \(\widetilde{\mathcal{O}}(n^{2}/\epsilon^{2})\), instead of the \(\widetilde{\mathcal{O}}(n^{2}/\epsilon^{3})\) rate in the literature. Our proved rate for Stochastic Sinkhorn matches the state-of-the-art rate of Sinkhorn and Greenkhorn. Moreover, the provable rate by our accelerated primal-dual stochastic algorithm is better than that of the Stochastic Sinkhorn, which again illustrates the advantage of our algorithm.

**Paper Organization.** The rest of the paper is organized as follows.
In Section 2, we present our main algorithm of Primal-Dual Accelerated Stochastic Proximal Mirror Descent (PDASMD), show its convergence, and analyze its complexity for solving OT; as a comparison, we also prove the rate of Stochastic Sinkhorn, which is improved over the existing result. In Section 3, we develop a batch version of PDASMD and show its convergence and computational complexity. We run numerical examples in Section 4 to support our theorems. In Section 5, we discuss the findings of this work and some future research. ## 2 Primal-Dual Accelerated Stochastic Proximal Mirror Descent (PDASMD) In this section, we present our PDASMD algorithm for solving a linear constrained convex problem, which includes the entropic OT as a special case. We analyze the convergence rate of the PDASMD algorithm, then apply it to OT and derive the computational complexity. As a comparison, we also analyze the computational complexity of the Stochastic Sinkhorn. Since our algorithm uses the Proximal Mirror Descent technique, we review the background of such a technique in Appendix A and briefly explain why it is suitable for entropic OT. ### Definition and Notation We first introduce some notations that we will use throughout the rest of this paper. **Notations**: For a vector \(\mathbf{a}\): let \(sign(\mathbf{a})\) be such that \((sign(\mathbf{a}))_{i}=1\) if \(a_{i}>0\) and \(-1\) otherwise. Let \(\mathbf{1}_{n}\) be the \(n\)-dimensional vector where each element is \(1\). For matrices \(X\in\mathbb{R}^{n\times o},Y\in\mathbb{R}^{p\times q}\): let \(X\otimes Y\) denote the standard Kronecker product; let \(\exp(X)\) and \(\log(X)\) be the element-wise exponential and logarithm of \(X\); let \(\|X\|_{2}\) be the operator norm of \(X\) and \(\|X\|_{\infty}\) be \(\max_{i,j}|X_{i,j}|\); denote the matrix norm induced by two arbitrary vector norms \(\|\cdot\|_{H}\) and \(\|\cdot\|_{E}\) as \(\|X\|_{E\to H}:=\max_{\mathbf{a}:\|\mathbf{a}\|_{E}\leq 1}\|X\mathbf{a}\|_{H}\); denote the vectorization of \(X\) as \(\text{Vec}(X)=(X_{11},...,X_{n1},X_{12},...,X_{n2},...,X_{1o},...,X_{no})^{T}\). For two non-negative real values \(s(\kappa)\) and \(t(\kappa)\), denote \(s(\kappa)=\Theta(t(\kappa))\) if \(\exists k>0\) and \(K>0\) such that \(kt(\kappa)\leq s(\kappa)\leq Kt(\kappa)\); denote \(s(\kappa)=\mathcal{O}(t(\kappa))\) if \(\exists K>0\) such that \(s(\kappa)\leq Kt(\kappa)\); denote \(s(\kappa)=\widetilde{\mathcal{O}}(t(\kappa))\) to indicate the previous inequality where \(K\) depends on some logarithmic function of \(\kappa\). Next, we review some key definitions that will be useful. 1 Footnote 1: Our definitions follow those in [2]. **Definition 1** (Strong convexity).: \(f:\mathcal{Q}\to\mathbb{R}\) _is \(\alpha\)-strongly convex w.r.t. \(\|\cdot\|_{H}\) if \(\forall\mathbf{x},\mathbf{y}\in\mathcal{Q}\):_ \[f(\mathbf{y})\geq f(\mathbf{x})+\langle\nabla f(\mathbf{x}),\mathbf{y}-\mathbf{x} \rangle+\frac{\alpha}{2}\|\mathbf{x}-\mathbf{y}\|_{H}^{2}.\] **Definition 2** (Smoothness).: _A convex function \(f:\mathcal{Q}\to\mathbb{R}\) is \(\beta\)-smooth w.r.t. \(\|\cdot\|_{H}\) if \(\forall\mathbf{x},\mathbf{y}\in\mathcal{Q}\):_ \[\|\nabla f(\mathbf{x})-\nabla f(\mathbf{y})\|_{H,*}\leq\beta\|\mathbf{x}- \mathbf{y}\|_{H},\] _where \(\|\mathbf{u}\|_{H,*}:=\max_{\mathbf{v}}\{\langle\mathbf{u},\mathbf{v}\rangle:\|\mathbf{v}\|_{H} \leq 1\}\) is the dual norm of \(\|\cdot\|_{H}\). 
Or equivalently,_ \[f(\mathbf{y})\leq f(\mathbf{x})+\langle\nabla f(\mathbf{x}),\mathbf{y}-\mathbf{x} \rangle+\frac{\beta}{2}\|\mathbf{x}-\mathbf{y}\|_{H}^{2}.\] **Definition 3** (Bregman divergence).: _For a mirror function \(w(\cdot)\) that is \(1\)-strongly convex w.r.t. \(\|\cdot\|_{H}\), we denote by \(V_{\mathbf{x}}(\mathbf{y})\) the Bregman divergence w.r.t. \(\|\cdot\|_{H}\) generated by \(w(\cdot)\), where_ \[V_{\mathbf{x}}(\mathbf{y}):=w(\mathbf{y})-w(\mathbf{x})-\langle\nabla w(\mathbf{x}), \mathbf{y}-\mathbf{x}\rangle.\] _One can conclude from the definition that_ \[V_{\mathbf{x}}(\mathbf{y})\geq\frac{1}{2}\|\mathbf{x}-\mathbf{y}\|_{H}^{2}.\] _If we further assume that the mirror function \(w(\cdot)\) is \(\gamma\)-smooth w.r.t. \(\|\cdot\|_{H}\), we then have_ \[V_{\mathbf{x}}(\mathbf{y})\leq\frac{\gamma}{2}\|\mathbf{x}-\mathbf{y}\|_{H}^{2}.\] ### General Formulation and PDASMD Algorithm In this section, we first state a general linear constrained problem and explain how it includes entropic OT as a special case. We then propose our algorithm to solve this general problem. Finally, we show the convergence rate of our algorithm. We consider a linear constrained problem as follows: \[\min_{\mathbf{x}\in\mathbb{R}^{m}}f(\mathbf{x})\qquad s.t.\quad A\mathbf{x}= \mathbf{b}\in\mathbb{R}^{l}, \tag{3}\] where \(f\) is strongly convex. One observes that the entropic OT (2) is a special case of problem (3) with \(\mathbf{x}=\text{Vec}(X)\), \(f(\mathbf{x})=\langle\text{Vec}(C),\mathbf{x}\rangle+\eta\sum_{i,j=1}^{n}x_{in+j}\log( x_{in+j})\), \(\mathbf{b}=(\mathbf{p}^{T},\mathbf{q}^{T})^{T}\), \(A=\left[\begin{array}{c}\mathbf{1}^{T}\otimes I_{n}\\ I_{n}\otimes\mathbf{1}^{T}\end{array}\right]\). A standard approach for solving the constrained problem (3) is to optimize its Lagrange dual problem (4): \[\min_{\mathbf{\lambda}}\{\phi(\mathbf{\lambda}):= \langle\mathbf{\lambda},\mathbf{b}\rangle+\max_{\mathbf{x}}(-f(\mathbf{x})- \langle A^{T}\mathbf{\lambda},\mathbf{x}\rangle)\] \[= \langle\mathbf{\lambda},\mathbf{b}\rangle-f(\mathbf{x}(\mathbf{\lambda}))-\langle A ^{T}\mathbf{\lambda},\mathbf{x}(\mathbf{\lambda})\rangle\}, \tag{4}\] where by F.O.C. \(\mathbf{x}(\mathbf{\lambda})\) is such that \[\nabla_{\mathbf{x}}f(\mathbf{x}(\mathbf{\lambda}))=-A^{T}\mathbf{\lambda}. \tag{5}\] Since problem (3) is a linear constrained convex problem, the strong duality holds. Thus solving problem (3) is equivalent to solving its dual problem (4). In particular, we develop a stochastic algorithm for the case that the dual is of finite sum form. We further assume that all terms in the finite sum are smooth for convergence analysis. The conditions on the dual are formalized as follows: **Assumption 1** (Finite-sum dual).: _Assume that the dual can be written as \(\phi(\mathbf{\lambda})=\frac{1}{m}\sum_{i=1}^{m}\phi_{i}(\mathbf{\lambda})\), where \(\phi_{i}\) is convex and \(L_{i}-\)Lipchitz smooth w.r.t. an arbitrary \(\|\cdot\|_{H}\) norm._ Note that the assumption on the dual is reasonable and can be satisfied by some problems, including entropic OT. We now give a concrete example that the assumption holds. Consider a primal objective \(f(\mathbf{x})=\sum_{i=1}^{m}f_{i}(x_{i})\) where each \(f_{i}\) is \(\nu-\)strongly convex w.r.t. another arbitrary norm \(\|\cdot\|_{E}\) (note that it can be different from the \(\|\cdot\|_{H}\) norm). 
In this case, we can solve the primal-dual relationship in equation (5) to get: \[x_{i}(\mathbf{\lambda})=(\nabla f_{i})^{-1}(-\mathbf{a}_{i}^{T}\mathbf{\lambda}),\quad i=1,\ldots,m,\] where \(\mathbf{a}_{i}\) is the \(i\)th column of \(A\). As a consequence, the dual problem (4) can be written as a finite sum: \[\phi(\mathbf{\lambda})=\frac{1}{m}\sum_{i=1}^{m}(\langle\mathbf{\lambda},\mathbf{b}_{i}\rangle-mf_{i}(x_{i}(\mathbf{\lambda}))-m\mathbf{a}_{i}^{T}\mathbf{\lambda}x_{i}(\mathbf{\lambda})):=\frac{1}{m}\sum_{i=1}^{m}\phi_{i}(\mathbf{\lambda}),\] where the \(\mathbf{b}_{i}\)'s are arbitrarily chosen vectors satisfying the constraint \(\sum_{i=1}^{m}\mathbf{b}_{i}=m\mathbf{b}\). One can check that \(\nabla\phi_{i}(\mathbf{\lambda})=\mathbf{b}_{i}-mx_{i}(\mathbf{\lambda})\mathbf{a}_{i}\). By [24], \(\phi_{i}\) is convex and \(L_{i}\)-Lipschitz smooth w.r.t. the \(\|\cdot\|_{H}\) norm, where \(L_{i}\leq\frac{m}{\nu}\|\mathbf{a}_{i}\|_{E\to H,*}\). With the finite-sum representation of \(\phi\), we propose the PDASMD algorithm (Algorithm 1) to solve problem (3). We add a few remarks to explain the algorithm. **Remark 1**.: _To run the algorithm, one should choose a specific \(\|\cdot\|_{H}\) norm and a mirror function \(w(\cdot)\). Those choices have a direct impact on the mirror descent step 10 and the proximal gradient descent step 11: if we let \(\|\cdot\|_{H}=\|\cdot\|_{2}\) and \(w(\cdot)=\frac{1}{2}\|\cdot\|_{2}^{2}\), both steps reduce to stochastic gradient descent steps, and the algorithm essentially reduces to the PDASGD algorithm in [30]._ **Remark 2**.: _The primal variables \(\mathbf{x}\)'s in Algorithm 1 are updated by Steps 14 through 16, which we explain as follows: the iterates in Steps 14 through 16 essentially lead to \(\mathbf{x}^{S-1}=\left(\sum\limits_{s=0}^{S-1}\mathbf{x}(\widetilde{\mathbf{y}}_{s})/\tau_{1,s}\right)\left/\left(\sum\limits_{s=0}^{S-1}(1/\tau_{1,s})\right)\right..\) We express these updates of \(\mathbf{x}^{s}\) in an iterative way to avoid storing all updates of the \(\widetilde{\mathbf{y}}_{s}\)'s. In this way, our algorithm is memory efficient._ **Remark 3**.: _The dual variables \(\mathbf{v},\mathbf{z},\mathbf{y}\)'s are updated by Steps 2 through 13. The update consists of outer loops indexed by \(s\) and inner loops indexed by \(j\), which uses the variance reduction and acceleration technique in [2] (Algorithm 5 in that paper). We now summarize the variance reduction and acceleration technique for a better understanding of our algorithm. The **variance reduction** in Algorithm 1 is step 9, which works as follows: for the finite-sum dual \(\phi(v)=\frac{1}{m}\sum_{i=1}^{m}\phi_{i}(v)\), a stochastic algorithm without variance reduction updates the parameter estimate using \(\nabla\phi_{i}(v)\), which in general has \(Var[\nabla\phi_{i}(v)]\neq 0\) for all \(v\) and thus needs the step size to go to \(0\) for convergence. A variance-reduced algorithm replaces \(\nabla\phi_{i}(v)\) by \(A_{k}=\nabla\phi_{i}(v)-B_{k}+\mathbb{E}[B_{k}]\). When \(B_{k}\) and \(\nabla\phi_{i}(v)\) have correlation \(r>0.5\) and \(Var[B_{k}]\approx Var[\nabla\phi_{i}(v)]\), one can check that \(Var[A_{k}]=Var[\nabla\phi_{i}(v)-B_{k}]=Var[\nabla\phi_{i}(v)]-2r\sqrt{Var[\nabla\phi_{i}(v)]Var[B_{k}]}+Var[B_{k}]<Var[\nabla\phi_{i}(v)]\), so the variance is reduced. Step 9 in Algorithm 1 uses this variance reduction technique by taking \(B_{k}=\nabla\phi_{i}(\widetilde{v}^{s})\)._
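To make the variance-reduction step concrete, the following minimal Python sketch (our illustration, not code from the paper) implements an estimator of the form \(A_{k}=\nabla\phi_{i}(v)-\nabla\phi_{i}(\widetilde{v}^{s})+\nabla\phi(\widetilde{v}^{s})\), i.e. with \(B_{k}=\nabla\phi_{i}(\widetilde{v}^{s})\) and its expectation given by the full gradient at the snapshot. The uniform sampling of the component index and the toy quadratic objective are illustrative assumptions.

```python
import numpy as np

def variance_reduced_grad(grad_i, m, v, v_snap, full_grad_snap, rng):
    """SVRG-style estimate: grad phi_i(v) - grad phi_i(v_snap) + grad phi(v_snap).

    grad_i(i, x) returns the gradient of the i-th dual component phi_i at x;
    full_grad_snap is the average of grad_i(i, v_snap) over i, computed once
    per snapshot.  The estimate is unbiased for grad phi(v), and its variance
    vanishes as both v and v_snap approach the optimum.
    """
    i = rng.integers(m)  # uniformly sampled component index
    return grad_i(i, v) - grad_i(i, v_snap) + full_grad_snap

# Toy usage: phi(v) = (1/m) * sum_i 0.5 * (v - c_i)^2, so grad phi_i(v) = v - c_i.
rng = np.random.default_rng(0)
c = rng.normal(size=10)
grad_i = lambda i, x: x - c[i]
v_snap = 0.0
full_grad_snap = np.mean([grad_i(i, v_snap) for i in range(c.size)])
estimate = variance_reduced_grad(grad_i, c.size, 0.3, v_snap, full_grad_snap, rng)
```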
_The **acceleration** in Algorithm 1 consists of steps 7, 10, and 11, namely the Katyusha acceleration in [2]. We summarize this technique and compare it with a classical method in [3] that uses Nesterov's momentum. To simplify the explanation, consider the special case \(\|\cdot\|_{H}=\|\cdot\|_{2}\), \(w(\cdot)=\frac{1}{2}\|\cdot\|_{2}^{2}\); steps 7, 10, 11 of Algorithm 1 are:_ \[v_{k+1}=\tau_{1}z_{k}+\tau_{2}\tilde{v}+(1-\tau_{1}-\tau_{2})y_{k};\quad y_{k+1}=v_{k+1}-\frac{1}{3L}\tilde{\nabla}_{k+1};\quad z_{k+1}=z_{k}-\alpha\tilde{\nabla}_{k+1},\] _where \(\mathbb{E}\tilde{\nabla}_{k+1}=\nabla\phi(v_{k+1})\). On the other hand, the method in [3] updates as_ \[v_{k+1}=\tau_{1}z_{k}+(1-\tau_{1})y_{k};\quad y_{k+1}=v_{k+1}-\frac{1}{L}\nabla\phi(v_{k+1});\quad z_{k+1}=z_{k}-\alpha\nabla\phi(v_{k+1}).\] _The two updating schemes both have a "gradient descent" step in \(y_{k+1}\) and a "momentum" term \(z_{k+1}\) that accumulates the gradient history; the difference is in \(v_{k+1}\): the classical method takes a weighted average of \(z_{k}\) and \(y_{k}\) (that is, Nesterov's momentum), while Katyusha acceleration has one more term \(\tilde{v}\) (which is called Katyusha momentum [2]). The Katyusha momentum serves as a "magnet" that retracts the estimate towards \(\tilde{v}\), the average of the past \(l\) estimates. Since our algorithm is a stochastic algorithm, such a "magnet" helps the algorithm to stabilize. This is why the Katyusha acceleration works well._ We prove the convergence rate of the PDASMD algorithm as follows: **Theorem 1**.: _Under Assumption 1, we apply Algorithm 1 to solve problem (3). Choose a mirror function \(w(\cdot)\) that is \(1\)-strongly convex and \(\gamma\)-smooth w.r.t. the \(\|\cdot\|_{H}\) norm. Denote the primal and dual optimal solutions as \(\mathbf{x}^{*}\) and \(\mathbf{\lambda}^{*}\), respectively. Assume that \(\|\mathbf{\lambda}^{*}\|_{H}\leq R\). We have the following convergence guarantees:_ \[\|\mathbb{E}\left[\mathbf{b}-A\mathbf{x}^{S-1}\right]\|_{H,*}\leq\frac{2}{S^{2}l}\left[l\bar{L}R+18\bar{L}R\gamma\right], \tag{6}\] \[f(\mathbb{E}\left(\mathbf{x}^{S-1}\right))-f(\mathbf{x}^{*})\leq\frac{2}{S^{2}l}\left[l\bar{L}R^{2}+18\bar{L}R^{2}\gamma\right]. \tag{7}\] The proof of the theorem is deferred to Appendix B. ### Applying to Optimal Transport This section gives the detailed procedure of applying PDASMD to obtain an approximate solution to the OT problem. In particular, we consider two cases: in the first case, we use \(\|\cdot\|_{H}=\|\cdot\|_{2}\) and PDASMD reduces to PDASGD; in the second case, we use \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\) and prove an improved computational complexity over the first case. In the latter case, our algorithm achieves the best known rate in the current literature and thus improves the rate of first-order algorithms for solving entropic OT. We apply the PDASMD algorithm to solve the entropic OT (2) as follows. Since problem (2) is a special case of problem (3), we plug \(A,\mathbf{b},f(\cdot)\) into the general dual formula (4) to get the dual problem of problem (2). With a little abuse of notation, we split the dual variables as \((\mathbf{\tau}^{T},\mathbf{\lambda}^{T})^{T}\) for \(\mathbf{\tau},\mathbf{\lambda}\in\mathbb{R}^{n}\).
The dual problem of problem (2) is: \[\phi(\mathbf{\tau},\mathbf{\lambda})=\eta\langle\mathbf{1}_{n^{2}},\mathbf{x}(\mathbf{\tau},\mathbf{ \lambda})\rangle-\langle\mathbf{p}^{\prime},\mathbf{\tau}\rangle-\langle\mathbf{q}^{\prime },\mathbf{\lambda}\rangle, \tag{8}\] where the relationship between primal-dual variables is \[\mathbf{x}(\mathbf{\tau},\mathbf{\lambda})=\exp\left(\frac{A^{T}(\mathbf{\tau}^{T},\mathbf{ \lambda}^{T})^{T}-\text{Vec}(C)-\eta\mathbf{1}_{n^{2}}}{\eta}\right). \tag{9}\] Moreover, to get a dual with the finite-sum structure, we follow [12] to transfer the dual objective to semi-dual by fixing \(\mathbf{\lambda}\) and solving the first order condition w.r.t. \(\mathbf{\tau}\) in objective (8). This gives us the relationship between the dual variables: \[\tau_{i}(\mathbf{\lambda})=\eta\log p_{i}^{\prime}-\eta\log\left(\sum_{j=1}^{n} \exp((\lambda_{j}-C_{i,j}-\eta)/\eta)\right).\] Plugging the relationship above into the dual objective (8) gives us the semi-dual objective. With a little abuse of notation, we denote the semi-dual objective function as \(\phi(\mathbf{\lambda})\), which is: \[\phi(\mathbf{\lambda}) =-\langle\mathbf{q}^{\prime},\mathbf{\lambda}\rangle-\eta\sum_{i=1}^{n}p_{ i}^{\prime}\log p_{i}^{\prime}\] \[\quad+\eta\sum_{i=1}^{n}\log\left(\sum_{j=1}^{n}\exp((\lambda_{j} -C_{i,j}-\eta)/\eta)\right)+\eta\] \[=\frac{1}{n}\sum_{i=1}^{n}np_{i}^{\prime}\Bigg{[}-\langle\mathbf{q}^{ \prime},\mathbf{\lambda}\rangle-\eta\log p_{i}^{\prime}\] \[\quad+\eta\log\left(\sum_{j=1}^{n}\exp((\lambda_{j}-C_{i,j}-\eta) /\eta)\right)+\eta\Bigg{]}\] \[:=\frac{1}{n}\sum_{i=1}^{n}\phi_{i}(\mathbf{\lambda}). \tag{10}\] It is easy to check that each \(\phi_{i}(\mathbf{\lambda})\) is convex. To apply our algorithm, we further check the smoothness of \(\phi_{i}(\mathbf{\lambda})\) in the following lemma: **Lemma 1**.: \(\phi_{i}(\cdot)\) _in the semi-dual objective (10) is \(\frac{np_{i}^{\prime}}{\eta}\) smooth w.r.t. \(\|\cdot\|_{2}\) norm, and is \(\frac{5np_{i}^{\prime}}{\eta}\) smooth w.r.t. \(\|\cdot\|_{\infty}\) norm._ Lemma 1 is proved in Appendix C. By Lemma 1, we can calculate the parameter in PDASMD Algorithm 1 as \(\bar{L}=1/\eta\) for \(\|\cdot\|_{H}=\|\cdot\|_{2}\), and \(\bar{L}=5/\eta\) for \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\). For these two cases, we can apply Algorithm 1 to approximate problem (2). We further round the approximating solution of problem (2) to the feasible region of problem (1). This way, we get an \(\epsilon-\)solution to problem (1). The full procedure is deferred to Appendix D due to the page limit. We state the computational complexity of the full procedure in the following theorem: **Theorem 2**.: _Set \(l=\Theta(n)\) in the PDASMD algorithm, the overall number of arithmetic operations for finding a solution \(\widehat{X}\) such that \(\mathbb{E}\langle C,\widehat{X}\rangle\leq\langle C,X^{*}\rangle+\epsilon\) is_ * \(\widetilde{\mathcal{O}}\left(\frac{n^{2.5}\|C\|_{\infty}(1+\sqrt{\gamma/n})}{ \epsilon}\right)\) _for_ \(\|\cdot\|_{H}=\|\cdot\|_{2}\)_;_ * \(\widetilde{\mathcal{O}}\left(\frac{n^{2}\|C\|_{\infty}(1+\sqrt{\gamma/n})}{ \epsilon}\right)\) _for_ \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\)_._ The proof of Theorem 2 is in Appendix D. **Remark 4**.: _The complexities still depend on \(\gamma\), the smoothness of \(w(\cdot)\) w.r.t. \(\|\cdot\|_{H}\). For example, when taking \(w(\cdot)=\frac{1}{2}\|\cdot\|_{2}^{2}\), we have \(\gamma=1\) for \(\|\cdot\|_{H}=\|\cdot\|_{2}\), and \(\gamma=n\) for \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\). 
The corresponding computational complexity is then \(\widetilde{\mathcal{O}}\left(\frac{n^{2.5}\|C\|_{\infty}}{\epsilon}\right)\) and \(\widetilde{\mathcal{O}}\left(\frac{n^{2}\|C\|_{\infty}}{\epsilon}\right)\). Now for \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\), as long as we choose a proper \(w(\cdot)\) such that \(\gamma=\mathcal{O}(n)\), the rate \(\widetilde{\mathcal{O}}\left(\frac{n^{2}\|C\|_{\infty}}{\epsilon}\right)\) is achieved. One may further improve the rate by a constant by improving the dependency of \(\gamma\) on \(n\). Such improvement is an open question in optimization; though we make no effort to do it in this paper, we still note this opportunity._ **Remark 5**.: _If we choose \(w(\cdot)=\frac{1}{2}\|\cdot\|_{2}^{2}\), we have closed-form solutions for each step of PDASMD._ * _For both settings, step 10 of PDASMD algorithm becomes_ \(\mathbf{z}_{k+1}=\mathbf{z}_{k}-\alpha_{s}\widetilde{\mathbf{\nabla}}_{k+1}\)_;_ * _For_ \(\|\cdot\|_{H}=\|\cdot\|_{2}\)_, step 11 of PDASMD is_ \(\mathbf{y}_{k+1}=\mathbf{v}_{k+1}-\frac{1}{9L}\widetilde{\mathbf{\nabla}}_{k+1}\)_;_ * _For_ \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\)_, step 11 of PDASMD becomes_ \(\mathbf{y}_{k+1}=\mathbf{v}_{k+1}-\frac{\|\widetilde{\mathbf{\nabla}}_{k+1}\|_{1}}{9L} sign(\widetilde{\mathbf{\nabla}}_{k+1})\)_._ _It is clear that in both settings, each step of PDASMD is defined by simple arithmetic operations and thus is easy to implement. There is no gap between our theory and practice._ ### Computational Complexity of the Stochastic Sinkhorn In this section, we prove that the computational complexity of the Stochastic Sinkhorn for finding an \(\epsilon\)-solution to OT is \(\widetilde{\mathcal{O}}(\frac{n^{2}}{\epsilon^{2}})\), which is improved over the known rate of \(\widetilde{\mathcal{O}}(\frac{n^{2}}{\epsilon^{3}})\)[1] and matches the state-of-the-art rate of Sinkhorn and Greenkhorn [10, 19]. Moreover, our PDASMD algorithm beats the provable rate of Stochastic Sinkhorn. This illustrates the advantage of our PDASMD algorithm. The Stochastic Sinkhorn algorithm is proposed by [1]. One can check Appendix E for a full algorithm description. We show the computational complexity of the Stochastic Sinkhorn as follows: **Theorem 3**.: _Stochastic Sinkhorn finds a solution \(\widehat{X}\) such that \(\mathbb{E}\left\langle C,\widehat{X}\right\rangle\leq\left\langle C,X^{*} \right\rangle+\epsilon\) in_ \[\mathcal{O}\left(\frac{n^{2}\|C\|_{\infty}^{2}\log n}{\epsilon^{2}}\right)\] _arithmetic operations._ The proof of Theorem 3 is in Appendix E. ## 3 PDASMD with Batch Implementation (PDASMD-B) In this section, we propose a batch version of PDASMD, namely the PDASMD-B algorithm. The batch implementation of the stochastic step in PDASMD-B allows parallel computing. This further improves the computational power of our algorithm. We give PDASMD-B in Algorithm 2 and briefly explain it. As compared to the non-batch version PDASMD in Algorithm 1, Step 8 of PDASMD-B now samples a small batch of samples and calculates \(\widetilde{\mathbf{\nabla}}_{k+1}\) based on the gradient of this small batch. Other hyper-parameters in the algorithm are changed accordingly to ensure convergence. We apply PDASMD-B to solve OT. The main steps are the same as those in Subsection 2.3; thus, we omit the details. To compute the computational complexity for giving an \(\epsilon\)-solution to OT, one needs the convergence result of PDASMD-B, which we include in Appendix F. 
And the computational complexity for solving OT is stated in the following corollary. **Corollary 1**.: _Run PDASMD-B with batch size \(B\), \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\) and inner loop size \(l=n/B\) (assume w.l.o.g. that \(l\) is an integer), the overall number of arithmetic operations to find a solution \(\widehat{X}\) such that \(\mathbb{E}\langle C,\widehat{X}\rangle\leq\langle C,X^{*}\rangle+\epsilon\) is_ \[\widetilde{\mathcal{O}}\left(\frac{n^{2}\|C\|_{\infty}\sqrt{1/B+B\gamma/n}}{ \epsilon}\right).\] Figure 1: Computational complexity comparison of different algorithms for finding an \(\epsilon\)-solution of OT. The logarithmic of the total number of numerical operations to achieve a given \(\epsilon\) approximation error is plotted against either the logarithmic transform of the sample size \(n\) in the PDASMD algorithm (rows 1 and 3) or the batch size in the PDASMD-B algorithm (rows 2 and 4). The first two rows use synthetic data, and the last two are for the MNIST data. The relevant discussion can be seen in Section 4 Numerical Studies. The error bars in all the plots come from repeating the experiment using 5 pairs of randomly generated/chosen marginals. **Remark 6**.: _Corollary 1 shows the speed-up of PDASMD-B from parallel computing. We analyzed the speed-up for two cases of \(\gamma\) as follows. The first case is similar to the one in Remark 4: taking \(w(\cdot)=\frac{1}{2}\|\cdot\|_{2}^{2}\), then we have \(\gamma=n\). This gives us the total computation of \(\widetilde{\mathcal{O}}\left(\frac{n^{2}\|C\|_{\infty}\sqrt{B}}{\epsilon}\right)\), which is \(\sqrt{B}\) times that of non-batch version. There are \(B\) batches of parallel computation, so if we ignore the communication time, our batch algorithm enjoys a sublinear speed-up of \(\mathcal{O}(\sqrt{B})\). The second case assumes one can further improve the rate \(\gamma\sim\mathcal{O}(n)\) to \(\gamma\sim\mathcal{O}(\sqrt{n})\). Then for \(B\leq\sqrt{n}\), the number of total computations does not increase with \(B\), which indicates a linear speed-up of \(\mathcal{O}(B)\) using parallel computing. Though such an improvement in \(\gamma\) is still an open question in optimization, this implies a potentially huge advantage of the batch algorithm._ ## 4 Numerical Studies In this section, we discuss the result of our numerical studies. The goals of our experiment are to check our theoretical computational complexity of the PDASMD algorithm w.r.t. the marginal size \(n\) in Theorem 2, and to check the theoretical computational complexity of the PDASMD-B algorithm w.r.t. the batch size \(B\) in Corollary 1. We use both synthetic and real grey-scale images 2 as the marginal distribution for our experiment. Due to the page limit, our data description and algorithm implementation are deferred to Appendix G. We have more applications of our algorithm, including domain adaptation and color transfer, in Appendix H. Footnote 2: The MNIST dataset [18]. Our experiment results are given in Figure 1. We now explain the plots and summarize the results from the plots as follows. Figures 1(a), 1(b), 1(e) and 1(f) check the computational complexity of PDASMD on the marginal size \(n\). In our experiment, we run PDASMD with \(w(\cdot)=\frac{1}{2}\|\cdot\|_{2}^{2}\) and \(\|\cdot\|_{H}=\|\cdot\|_{\infty}\). By Theorem 2, for this case, when fixing the accuracy level \(\epsilon\), we should have the computational complexity \(\sim\mathcal{O}(n^{2})\). 
That is, fixing \(\epsilon\) and plotting the logarithm of the computation count versus the logarithm of \(n\), we expect to see a line with slope 2. In Figures 1(a) and 1(b) (using synthetic data as marginals) and Figures 1(e) and 1(f) (using real data as marginals), the lines corresponding to the PDASMD algorithm have slopes close to 2, which supports our theoretical rate. In Figures 1(a), 1(b), 1(e) and 1(f) we also include lines that correspond to other state-of-the-art algorithms. The goal is to compare the practical performance of the PDASMD algorithm with deterministic algorithms (Figures 1(a) and 1(e)) and other stochastic algorithms (Figures 1(b) and 1(f)). We conclude from the plots that the total computation counts of AAM, Sinkhorn, and Stochastic Sinkhorn are smaller than that of PDASMD, which illustrates the practical advantage of those algorithms. However, such an observation does not disqualify our PDASMD algorithm, since we still have a provable complexity that is better than theirs. Inspired by this observation, one may further improve PDASMD in practice. One possible way is to combine the PDASMD algorithm with the Sinkhorn to take advantage of the better theoretical rate of PDASMD and the good empirical performance of the Sinkhorn. Figures 1(c) and 1(g) check the computational complexity of PDASMD-B on the batch size \(B\). We fix the accuracy level \(\epsilon\) and run PDASMD-B with \(w(\cdot)=\frac{1}{2}\|\cdot\|_{2}^{2}\). By Corollary 1, for a given marginal size \(n\), the total number of computations scales as \(\mathcal{O}(\sqrt{B})\). Thus, when plotting the logarithm of the computation count versus the logarithm of \(B\), we should get a line with slope 0.5. In Figures 1(c) (using synthetic data as marginals) and 1(g) (using real data as marginals), we see that for different marginal sizes \(n\), the slopes are all close to 0.5. This observation matches our theory. Given this dependence on the batch size \(B\), if we can fully parallelize, the running time of PDASMD-B should scale as \(\mathcal{O}(B^{-0.5})\). To check this, we plot the logarithm of the running time versus the logarithm of \(B\) in Figures 1(d) and 1(h). The lines do not have slope \(-0.5\). This is not surprising in practice because of communication time and limited computational resources. Still, the plots show that we can benefit from the batch algorithm: when the batch size is not too large (at most \(\exp(2.5)\)), the running time decreases as the batch size increases. This illustrates the usefulness of the batch version of the algorithm in practice. To summarize, the computational complexities of PDASMD on \(n\) and of PDASMD-B on \(B\) are supported by our numerical studies.
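For concreteness, the following NumPy sketch (our own illustration, not the code used to produce Figure 1) shows the quantities that any implementation of Algorithm 1 for OT has to evaluate in its stochastic steps: the semi-dual components \(\phi_{i}(\mathbf{\lambda})\) of (10) and their gradients, which, by differentiating (10), take the form \(np_{i}^{\prime}\,(\sigma_{i}(\mathbf{\lambda})-\mathbf{q}^{\prime})\) with \(\sigma_{i}\) the softmax of \((\mathbf{\lambda}-C_{i,:})/\eta\). A log-sum-exp is used for numerical stability, and strictly positive marginals are assumed.

```python
import numpy as np

def logsumexp(a):
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def phi_i(i, lam, C, p, q, eta):
    """i-th semi-dual component of (10)."""
    n = C.shape[0]
    lse = logsumexp((lam - C[i] - eta) / eta)
    return n * p[i] * (-(q @ lam) - eta * np.log(p[i]) + eta * lse + eta)

def grad_phi_i(i, lam, C, p, q, eta):
    """Gradient of phi_i: n * p_i * (softmax((lam - C_i)/eta) - q)."""
    z = (lam - C[i]) / eta
    z = z - z.max()
    soft = np.exp(z) / np.exp(z).sum()
    return C.shape[0] * p[i] * (soft - q)

# Averaging the components recovers the full semi-dual objective phi(lam).
rng = np.random.default_rng(0)
n, eta = 5, 0.1
C, lam = rng.random((n, n)), rng.normal(size=n)
p = np.full(n, 1.0 / n)
q = np.full(n, 1.0 / n)
phi = np.mean([phi_i(i, lam, C, p, q, eta) for i in range(n)])
```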
## 5 Discussion and Future Study

This paper proposes a new first-order algorithm for solving entropic OT, which we call the PDASMD algorithm. We prove that our algorithm finds an \(\epsilon\)-solution to OT using \(\widetilde{\mathcal{O}}(n^{2}/\epsilon)\) arithmetic operations. Such a rate improves the previously state-of-the-art rate of \(\widetilde{\mathcal{O}}(n^{2.5}/\epsilon)\) among the first-order algorithms applied to entropic OT. We perform numerical studies, and the results match our theory. We now discuss some future directions for improving the computational efficiency of OT.

One direction is to revisit other first-order algorithms with a proven computational complexity of \(\widetilde{\mathcal{O}}(n^{2.5}/\epsilon)\) and see whether they can be improved to \(\widetilde{\mathcal{O}}(n^{2}/\epsilon)\). Some algorithms show the \(\widetilde{\mathcal{O}}(n^{2}/\epsilon)\) rate in practice, but there is no proof of such a rate. The techniques in our paper may inspire suitable modifications to those algorithms to obtain a better provable rate. In this way, one may further prove a computational complexity better than that of the PDASMD algorithm by a constant factor.

Another direction is to combine our algorithm with iterative projection-based algorithms such as the Sinkhorn. This direction is motivated by the Accelerated Sinkhorn algorithm in [20], which updates the dual variables of entropic OT by Nesterov's estimate sequence (for acceleration) and two Sinkhorn steps. Since our PDASMD algorithm also uses an acceleration technique (Katyusha momentum), it would be interesting to analyze a stochastic Accelerated Sinkhorn obtained by replacing Nesterov's estimate sequence with the Katyusha momentum.

The third direction is to improve the batch version of our PDASMD algorithm. Our batch-version algorithm has a sub-linear speed-up when fully parallelized and when communication time is ignored. In such a setting, one may expect an optimally designed batch algorithm to speed up linearly; that is, the total number of computations does not scale up with the batch size, and the computing time is \(1/B\) that of the non-batch version when the batch size is \(B\). If one can improve our batch-version algorithm to achieve a linear speed-up, the computational advantage will be substantial.

Beyond computing OT, the broader applications of our PDASMD algorithm are also interesting. Our PDASMD algorithm can be applied to any linearly constrained strongly convex problem whose dual has a finite-sum form. This motivates applying our algorithm to other problems, such as the unbalanced OT [26] and the Wasserstein barycenter [9], for better computational complexity.
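As a reference point for the second direction above, the classical Sinkhorn iteration alternates two matrix-scaling steps on \(K=\exp(-C/\eta)\); the following minimal sketch (our illustration; dense and not log-stabilized, so very small \(\eta\) may underflow) shows the basic update that an accelerated or stochastic variant would build on.

```python
import numpy as np

def sinkhorn(C, p, q, eta, n_iter=500):
    """Plain Sinkhorn scaling for entropic OT; returns the transport plan."""
    K = np.exp(-C / eta)
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        u = p / (K @ v)      # rescale rows to match the marginal p
        v = q / (K.T @ u)    # rescale columns to match the marginal q
    return u[:, None] * K * v[None, :]
```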
2302.10674
Declarative Probabilistic Logic Programming in Discrete-Continuous Domains
Over the past three decades, the logic programming paradigm has been successfully expanded to support probabilistic modeling, inference and learning. The resulting paradigm of probabilistic logic programming (PLP) and its programming languages owes much of its success to a declarative semantics, the so-called distribution semantics. However, the distribution semantics is limited to discrete random variables only. While PLP has been extended in various ways for supporting hybrid, that is, mixed discrete and continuous random variables, we are still lacking a declarative semantics for hybrid PLP that not only generalizes the distribution semantics and the modeling language but also the standard inference algorithm that is based on knowledge compilation. We contribute the measure semantics together with the hybrid PLP language DC-ProbLog (where DC stands for distributional clauses) and its inference engine infinitesimal algebraic likelihood weighting (IALW). These have the original distribution semantics, standard PLP languages such as ProbLog, and standard inference engines for PLP based on knowledge compilation as special cases. Thus, we generalize the state of the art of PLP towards hybrid PLP in three different aspects: semantics, language and inference. Furthermore, IALW is the first inference algorithm for hybrid probabilistic programming based on knowledge compilation
Pedro Zuidberg Dos Martires, Luc De Raedt, Angelika Kimmig
2023-02-21T13:50:38Z
http://arxiv.org/abs/2302.10674v2
# Declarative Probabilistic Logic Programming ###### Abstract Over the past three decades, the logic programming paradigm has been successfully expanded to support probabilistic modeling, inference and learning. The resulting paradigm of probabilistic logic programming (PLP) and its programming languages owes much of its success to a declarative semantics, the so-called distribution semantics. However, the distribution semantics is limited to discrete random variables only. While PLP has been extended in various ways for supporting hybrid, that is, mixed discrete and continuous random variables, we are still lacking a declarative semantics for hybrid PLP that not only generalizes the distribution semantics and the modeling language but also the standard inference algorithm that is based on knowledge compilation. We contribute the hybrid distribution semantics together with the hybrid PLP language DC-ProbLog and its inference engine _infinitesimal algebraic likelihood weighting_ (IALW). These have the original distribution semantics, standard PLP languages such as ProbLog, and standard inference engines for PLP based on knowledge compilation as special cases. Thus, we generalize the state-of-the-art of PLP towards hybrid PLP in three different aspects: semantics, language and inference. Furthermore, IALW is the first inference algorithm for hybrid probabilistic programming based on knowledge compilation. keywords: Probabilistic Programming, Declarative Semantics, Discrete-Continuous Distributions, Likelihood Weighting, Logic Programming, Knowledge Compilation, Algebraic Model Counting ## 1 Introduction Probabilistic logic programming (PLP) is at the crossroads of two parallel developments in artificial intelligence and machine learning. On the one hand, there are the probabilistic programming languages with built-in support for machine learning. These languages can be used represent very expressive - Turing equivalent - probabilistic models, and they provide primitives for inference and learning. On the other hand, there is the longstanding open question for integrating the two main frameworks for reasoning, that is logic and probability, within a common framework (Russell, 2015; De Raedt et al., 2016). Probabilistic logic programming (De Raedt and Kimmig, 2015; Riguzzi, 2018) fits both paradigms and goes back to at least the early 90s with seminal works by Sato (1995) and Poole (1993). Poole introduced ICL, the Independent Choice Logic, an elegant extension of the Prolog programming language, and Sato introduced the _distribution semantics_ for probabilistic logic programs in conjunction with a learning algorithm based on expectation maximization (EM). The PRISM language (Sato, 1995), which utilizes the distributions semantics and the EM learning algorithm constitutes, to the best of the authors' knowledge, the very first probabilistic programming language with support for machine learning. Today, there is a plethora of probabilistic logic programming languages, most of which are based on extensions of the ideas by Sato and Poole (Sato and Kameya, 1997; Kersting and De Raedt, 2000; Vennekens et al., 2004; De Raedt et al., 2007). However, the vast majority of them is restricted to discrete, and more precisely finite categorical, random variables. When merging logic with probability, the restriction to discrete random variables is natural and allowed Sato to elegantly extend the logic program semantics into the celebrated distribution semantics. 
However, it is also an important restriction, which raises the question of how to extend the semantics towards hybrid, i.e. discrete-continuous, random variables. Defining the semantics of probabilistic programming language with support for random variables with infinite and possibly uncountable sample spaces is a much harder task. This can be observed when looking at the development of important imperative and functional probabilistic programming languages (Goodman et al., 2008; Mansinghka et al., 2014) that support continuous random variables. These works initially focused on inference, typically using a particular Monte Carlo approach, yielding an operational or procedural semantics. It is only follow-up work that started to address a declarative semantics for such hybrid probabilistic programming languages. (Staton et al., 2016; Wu et al., 2018). The PLP landscape has experienced similar struggles. First approaches for hybrid PLP languages were achieved by restricting the language (Gutmann et al., 2010, 2011; Islam et al., 2012) or via recourse to procedural semantics (Nitti et al., 2016). The key contributions of this paper are: * We introduce the _hybrid_ distribution semantics for mixed discrete-continuous probabilistic logic programming. The _hybrid_ distribution semantics extends Sato's distribution semantics and supports: * a countably infinite number of random variables, * a uniform treatment of discrete and continuous random variables, * a clear separation between probabilistic dependencies and logical dependencies by extending the ideas of Poole (2010) to the hybrid domain. * We introduce DC-ProbLog, an expressive PLP language in the discrete-continuous domain, which incorporates the _hybrid_ distribution semantics. DC-ProbLog has standard discrete PLP, e.g. ProbLog (Fierens et al., 2015), as a special case (unlike other hybrid PLP languages (Gutmann et al., 2011; Nitti et al., 2016)). * We introduce a novel inference algorithm, _infinitesimal algebraic likelihood weighting_ (IALW), for hybrid PLPs, which extends the standard knowledge compilation approach used in PLP towards mixed discrete continuous distributions, and which provides an operational semantics for hybrid PLP. In essence, our contributions **C1** and **C2** generalize both Sato's distribution semantics and discrete PLP such that in the absence of random variables with infinite sample spaces we recover the ProbLog language and declarative semantics. It is noteworthy that our approach of disentangling probabilistic dependencies and logical ones, allows us to express more general distributions than state-of-the-art approaches such e.g. (Gutmann et al., 2011; Nitti et al., 2016; Azzolini et al., 2021). Contribution **C3** takes this generalization to the inference level: in the exclusive presence of finite random variables our IALW algorithm reduces to ProbLog's current inference algorithm (Fierens et al., 2015). ## 2 A Panoramic Overview Before diving into the technical details of the paper we first give a high-level overview of the DC-ProbLog language. This will also serve us as roadmap to the remainder of the paper. We will first introduce, by example, the DC-ProbLog language (Section 2.1). The formal syntax and semantics of which are discussed in Section 3 and Section 4. In Section 2.2 we demonstrate how to perform probabilistic inference in DC-ProbLog by translating a queried DC-ProbLog program to an algebraic circuit (Zuidberg Dos Martires et al., 2019). 
Before giving the details of this transformation in Section 6 and Section 7, we define conditional probability queries on DC-ProbLog programs (Section 5). The paper ends with a discussion of related work (Section 8) and concluding remarks in Section 9. Throughout the paper, we assume that the reader is familiar with basic concepts from logic programming and probability theory. We provide, however, a brief refresher of basic logic programming concepts in Appendix A. In Appendix B we give a tabular overview of the notation used, and in the remaining sections of the appendix we give proofs of propositions and theorems or discuss some of the more subtle technical details in more depth.

### Panorama of the Syntax and Semantics

**Example 2.1**.: _A shop owner creates random bags of sweets with two independent random binary properties (large and balanced). He first picks the number of red sweets from a Poisson distribution whose parameter is 20 if the bag is large and 10 otherwise, and then the number of yellow sweets from a Poisson whose parameter is the number of red sweets if the bag is balanced and twice that number otherwise. His favorite type of bag contains more than 15 red sweets and no less than 5 yellow ones. We model this in DC-ProbLog as follows:_

```
1  0.5::large.
2  0.5::balanced.
3
4  red ~ poisson(20) :- large.
5  red ~ poisson(10) :- not large.
6
7  yellow ~ poisson(red) :- balanced.
8  yellow ~ poisson(2*red) :- not balanced.
9
10 favorite :- red > 15, not yellow < 5.
```

_In the first two lines we encounter probabilistic facts, a well-known modelling construct in discrete PLP languages (e.g. [4]). Probabilistic facts, written as logical facts labeled with a probability, express Boolean random variables that are true with the probability specified by the label. For instance, 0.5::large expresses that large is true with probability 0.5 and false with probability 1-0.5._

_In Lines 4 to 8, we use distributional clauses (DCs), introduced by Gutmann et al. [2011] into the PLP literature. DCs are of the syntactical form v ~ d :- b and define random variables v that are distributed according to the distribution d, given that b is true. For example, Line 4 specifies that when large is true, red is distributed according to a Poisson distribution. We call the left-hand argument of a ~/2 predicate in infix notation a random term. The random terms in the program above are red and yellow._

_Note how random terms reappear in three distinct places in the DC-ProbLog program. First, we can use them as parameters to other distributions, e.g. yellow ~ poisson(red). Second, we can use them within arithmetic expressions, such as 2*red in Line 8. Third, we can use them in comparison atoms (red>15), as in Line 10. The comparison atoms appear in the bodies of logical rules that express logical consequences of probabilistic events, for example having more than 15 red sweets and less than 5 yellow ones._

Probabilistic facts and distributional clauses are the main modelling constructs used to define random variables in probabilistic logic programs. As they are considered to be fundamental building blocks of a PLP language, the semantics of a language are defined in terms of these syntactical constructs (cf. [12, 13]). We now make an important observation: probabilistic facts and distributional clauses can be deconstructed into a much more fundamental concept, which we call the _distributional fact_. Syntactically, a distributional fact is of the form v ~ d.
That is, a distributional fact is a distributional clause with an empty body. As a consequence, probabilistic facts and distributional clauses do not constitute fundamental concepts in PLP but are merely special cases, i.e. while helpful for writing concise programs, they are only of secondary importance when it comes to semantics.

**Example 2.2**.: _We now rewrite the program in Example 2.1 using distributional facts only. Note how probabilistic facts are actually distributional facts in disguise. The random variable is now distributed according to a Bernoulli distribution (flip) and the atom of the probabilistic fact is the head of a rule with a probabilistic comparison in its body (e.g. Lines 1 and 2 in the program below). Rewriting distributional clauses is more involved. The main idea is to introduce a distinct random term for each distributional clause. Take, for example, the random term red in Example 2.1. This random term encodes, in fact, two distinct random variables, which we denote in the program below by red_large and red_small. We now have to propagate this rewrite through the program and replace every occurrence of red with red_large and red_small. This is why we obtain four distributional facts for yellow instead of the two distributional clauses, and likewise four rules for favorite instead of the single rule in Example 2.1._

```
1  rv_large ~ flip(0.5).
2  large :- rv_large=:=1.
3  rv_balanced ~ flip(0.5).
4  balanced :- rv_balanced=:=1.
5
6  red_large ~ poisson(20).
7  red_small ~ poisson(10).
8
9  yellow_large_balanced ~ poisson(red_large).
10 yellow_large_unbalanced ~ poisson(2*red_large).
11 yellow_small_balanced ~ poisson(red_small).
12 yellow_small_unbalanced ~ poisson(2*red_small).
13
14 favorite :- large, red_large > 15,
15             balanced, not yellow_large_balanced < 5.
16 favorite :- large, red_large > 15,
17             not balanced, not yellow_large_unbalanced < 5.
18 favorite :- not large, red_small > 15,
19             balanced, not yellow_small_balanced < 5.
20 favorite :- not large, red_small > 15,
21             not balanced, not yellow_small_unbalanced < 5.
```
### Panorama of the Inference

Probabilistic inference in DC-ProbLog proceeds in a number of steps. First, we pose a query to the program; for the running example, consider the query for the probability of the conjunction of favorite and the negation of large, in other words the joint probability of favorite being true and large being false. While the example query above is simply a joint probability, we generalize this in Section 5 to conditional probabilities (possibly with zero-probability events in the conditioning set).

Second, we map the queried ground program to a labeled Boolean formula. Contrary to the approach taken by Fierens et al. (2015), the labels are not probabilities (as usual in PLP) but indicator functions. This mapping to a labeled Boolean formula happens, again, in a series of program transformations, which we describe in Section 6. One of these steps is obtaining the relevant ground program for a query. For example, for the query above only the last two rules for favorite matter.

favorite :- not large, rs > 15, balanced, not ysb < 5.
favorite :- not large, rs > 15, not balanced, not ysu < 5.

Here, we abbreviated red_small as rs, and yellow_small_balanced and yellow_small_unbalanced as ysb and ysu, respectively. We can further rewrite these rules by replacing large and balanced with equivalent comparison atoms and pushing the negation into the comparisons:

favorite :- rvl=:=0, rs > 15, rvb=:=1, ysb >= 5.
favorite :- rvl=:=0, rs > 15, rvb=:=0, ysu >= 5.

Again, we use abbreviations: rvl for rv_large and rvb for rv_balanced. In Section 7 we then show how to compute the expected value of the labeled propositional Boolean formula corresponding to these rules by compiling it into an algebraic circuit, which is graphically depicted in Figure 2.1.
In order to evaluate this circuit and obtain the queried probability (expected value), we introduce the IALW algorithm. The idea of IALW is the following: sample the random variables dangling at the bottom of the circuit by sampling parents before children. For instance, we first sample from poisson(10) (at the very bottom) before sampling from poisson(\(rs\)), using the sampled value of the parent as the parameter of the child. Once we reach the comparison atoms (e.g. \(ysb\geq 5\)), we plug in the sampled values for the mentioned random variables. This evaluates the comparisons to either 1 or 0, after which we perform the sums and products as prescribed by the circuit. We obtain a Monte Carlo estimate of the queried probability by averaging over multiple such evaluations of the circuit. The method, as sketched here, is in essence the probabilistic inference algorithm Sampo presented in (Zuidberg Dos Martires et al., 2019). The key contribution of IALW, which we discuss in Section 7, is to extend Sampo such that conditional inference with zero-probability events is performed correctly.

Figure 2.1: Graphical representation of the computation graph (i.e. algebraic circuit) used to compute the probability (\(\mathtt{favorite}=\top,\mathtt{large}=\bot\)) using the IALW algorithm introduced in Section 7.

## 3 DF-PLP

Sato's distribution semantics (Sato, 1995) starts from a probability measure over a countable set of facts \(\mathcal{F}\), the so-called _basic distribution_, and extends this to a probability measure over the Herbrand interpretations of the full program. It is worth noting that the basic distribution is defined independently of the logical rules and that the random variables are mutually marginally independent. In our case, the set \(\mathcal{F}\) consists of ground Boolean comparison atoms over the random variables, for which we drop the mutual marginal independence assumption. These comparison atoms form an interface between the random variables (represented as terms) and the logical layer (clauses) that reasons about truth values of atoms. While Gutmann et al. (2011) used the same principle to define the distribution semantics for Distributional Clauses, they did not support negation. Nitti et al. (2016) extended the fixed point semantics for hybrid probabilistic logic programs (also introduced by Gutmann et al. (2011)) to stratified programs with negation. However, by doing so Nitti et al. (2016) introduced a procedural element to the semantics.

In this section we introduce the syntax and declarative semantics of DF-PLP, a probabilistic programming language with a minimal set of built-in predicates and functors. We do this in three steps. Firstly, we discuss distributional facts and the probability measure over random variables they define (Section 3.1). Secondly, we introduce the Boolean comparison atoms that form the interface layer between random variables and a logic program (Section 3.2). Thirdly, we add the logic program itself (Section 3.3). An overview table of the notation related to semantics is provided in Appendix B.

**Definition 3.1** (Reserved Vocabulary).: _We use the following reserved vocabulary (built-ins), whose intended meaning is fixed across programs:_

* _a finite set_ \(\Delta\) _of_ distribution functors_,_
* _a finite set_ \(\Phi\) _of_ arithmetic functors_,_
* _a finite set_ \(\Pi\) _of binary_ comparison predicates_, and_
* _the binary predicate_ \(\sim\)_/2 (in infix notation)._

Distribution functors that we have already seen in Section 2 are poisson/1 and flip/1; \(\Delta\) may also include further functors such as normal/2 to denote normal distributions. Possible arithmetic functors are */2 (cf. Example 2.1), but also max/2, +/2, abs/1, etc. Binary comparison predicates (in Prolog syntax and infix notation) are </2, >/2, =</2, >=/2, =:=/2, =\(\setminus\)=/2. The precise definitions of \(\Delta\), \(\Phi\) and \(\Pi\) are left to system designers implementing the language.
**Definition 3.1** (Reserved Vocabulary).: _We use the following reserved vocabulary (buit-ins), whose intended meaning is fixed across programs:_ * _a finite set_ \(\Delta\) _of_ distribution functors_._ * _a finite set_ \(\Phi\) _of_ arithmetic functors_._ * _A finite set_ \(\Pi\) _of binary_ comparison predicates_,_ * _the binary predicate_ \(\sim\)_/2 (in infix notation)._ Examples of distribution functors that we have already seen in Section 2 are poisson/1 and flip/1 but may also include further functors such as normal/2 to denote normal distributions. Possible arithmetic functors are */2 (cf. Example 2.1) but also max/2, +/2, abs/1, etc.. Binary comparison predicates (in Prolog syntax and infix notation) are </2, >/2, =</2, >=/2, =:=/2, =\(\setminus\)=/2. The precise definitions of \(\Delta\), \(\Phi\) and \(\Pi\) are left to system designers implementing the language. **Definition 3.2** (Regular Vocabulary).: _We call an atom \(\mu(\rho_{1},\ldots,\rho_{k})\) whose predicate \(\mu/k\) is not part of the reserved vocabulary a regular atom. The set of all regular atoms constitutes the regular vocabulary._ Note that the arguments of a predicate \(\mu/k\) can contain element of \(\mathcal{D}\) and \(\mathcal{F}\). In this case they will have a purely logical meaning. We discuss this in more detail in Definition 4.19. As a brief comment on notation: in the remainder of the paper we will usually denote logic program expressions in teletype font (e.g. \(4\mathbf{>x}\)) when giving examples. When defining new concepts or stating theorems and propositions, we will use the Greek alphabet. ### Distributional Facts and Random Variables **Definition 3.3** (Distributional Fact).: _A distributional fact is of the form \(\nu\sim\delta\), with \(\nu\) a regular ground term, and \(\delta\) a ground term whose functor is in \(\Delta\). The distributional fact states that the ground term \(\nu\) is interpreted as a random variable distributed according to \(\delta\)._ **Definition 3.4** (Sample Space).: _Let \(\nu\) be be a random variable distributed according to \(\delta\). The set of possible samples (or values) for \(\nu\) is the sample space denoted by \(\Omega_{\nu}\) and which is determined by \(\delta\). We denote a sample from \(\Omega_{\nu}\) by \(\omega(\nu)\), where \(\omega\) is the sampling or value function._ **Definition 3.5** (Distributional Database).: _A distributional database is a countable set \(\mathcal{D}=\{\nu_{1}\sim\delta_{1},\nu_{2}\sim\delta_{2},\ldots\}\) of distributional facts, with distinct \(\nu_{i}\). We let \(\mathcal{V}=\{\nu_{1},\nu_{2},\ldots\}\) denote the set of random variables._ **Example 3.6**.: _The following distributional database encodes a Bayesian network with normally distributed random variables, two of which serve as parameters in the distribution of another one. We thus have \(\mathcal{V}=\{\mathbf{x},\mathbf{y},\mathbf{z}\}\)._ ``` 1%distributionfacts\(\mathcal{D}\) 2x\(\sim\)normal(5,2). 3y\(\sim\)normal(x,7). 4z\(\sim\)normal(y,1). ``` In order for a distributional database \(\mathcal{D}\) to be meaningful, it has to encode a unique joint distribution over the variables \(\mathcal{V}\). The key idea is to view the set of random variables as the nodes of a Bayesian network, where each node's distribution is parameterized by the node's parents. **Definition 3.7** (Parent, Ancestor).: _Let \(\mathcal{D}\) be a distributional database. For facts \(\nu_{p}\sim\delta_{p}\) and \(\nu_{c}\sim\delta_{c}\) in \(\mathcal{D}\). 
The random variable \(\nu_{p}\) is a parent of the child random variable \(\nu_{c}\) if and only if \(\nu_{p}\) appears in \(\delta_{c}\). We define ancestor to be the transitive closure of parent. A node's ancestor set is the set of all its ancestors._ **Example 3.8** (Ancestor Set).: _We graphically depict the ancestor set of the distributional database in Example 3.6 in Figure 3.1._ Figure 3.1: Directed acyclic graph representing the ancestor relationship between the random variables in Example 3.6. The ancestor set of \(\mathbf{x}\) is the empty set, the one of \(\mathbf{y}\) is \([\mathbf{x}]\) and the one of \(\mathbf{z}\) is \([\mathbf{x},\mathbf{y}]\). **Definition 3.10** (Well-Defined Distributional Database).: _A distributional database \(\mathcal{D}\) is called well-defined if and only if it satisfies the following criteria:_ * _Each_ \(\nu\in\mathcal{V}\) _has a finite set of ancestors._ * _The ancestor relation on the variables_ \(\mathcal{V}\) _is acyclic._ * _If_ \(\nu\sim\delta\in\mathcal{D}\) _and the parents of_ \(\nu\) _are_ \(\{\nu_{1},\ldots,\nu_{m}\}\)_, then replacing each occurrence of_ \(\nu_{i}\) _in_ \(\delta\) _by a sample_ \(\omega(\nu_{i})\) _always results in a well-defined distribution for_ \(\nu\)_._ The distributional database in Example 3.6 is well-defined: the ancestor relation is acyclic and finite, and as normally distributed random variables are real-valued, using such a variable as the mean of another normal distribution is always well-defined. The database would no longer be well-defined after adding w \(\sim\) poisson(x), as not all real numbers can be used as a parameter of a Poisson distribution. **Definition 3.11**.: _A value assignment \(\omega(\mathcal{V})\) is a combined value assignment to all random variables \(\mathcal{V}=\{\nu_{1},\nu_{2},...\}\), i.e., \(\omega(\mathcal{V})=(\omega(\nu_{1}),\omega(\nu_{2}),\ldots)\)._ **Proposition 3.12**.: _A well-defined distributional database \(\mathcal{D}\) defines a unique probability measure \(P_{\mathcal{V}}\) on value assignments \(\omega(\mathcal{V})\)._ Proof.: See C.1. ### Boolean Comparison Atoms over Random Variables Starting from the distribution over random variables defined by a well-defined distributional database, we now introduce the corresponding distribution over Boolean comparison atoms, which corresponds to the basic (discrete) distribution in Sato's distribution semantics. **Definition 3.13** (Boolean Comparison Atoms).: _Let \(\mathcal{D}\) be a well-defined distributional database. A binary comparison atom \(\gamma_{1}\bowtie\gamma_{2}\) over \(\mathcal{D}\) is a ground atom with predicate \(\bowtie\in\Pi\). The ground terms \(\gamma_{1}\) and \(\gamma_{2}\) are either random variables in \(\mathcal{V}\) or terms whose functor is in \(\Phi\). We denote by \(\mathcal{F}\) the set of all Lebesgue-measurable Boolean comparison atoms over \(\mathcal{D}\)._ **Example 3.14**.: _Examples of Boolean comparison atoms over the distributional database of Example 3.6 include z\(>\)10, x\(<\)y, abs(x-y)=\(<\)1, and 7*x=:=y+5._ **Proposition 3.15**.: _The probability measure \(P_{\mathcal{V}}\), defined by a well-defined distributional database \(\mathcal{D}\), induces a unique probability measure \(P_{\mathcal{F}}\) over value assignments to the comparison atoms \(\mathcal{F}\)._ Proof.: See C.2. 
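As a small worked instance of Proposition 3.15, consider the comparison atom z>10 from Example 3.14 over the database of Example 3.6, and assume that normal(\(\mu,\sigma\)) is parameterized by mean and standard deviation (this reading is our assumption here; the formal treatment only requires measurability). Marginalizing out x and y, the random variable z is Gaussian with mean \(5\) and variance \(2^{2}+7^{2}+1^{2}=54\), so

\[P_{\mathcal{F}}\big(\mathtt{z>10}=\top\big)\;=\;P_{\mathcal{V}}\big(\{\omega:\omega(\mathtt{z})>10\}\big)\;=\;1-\Phi\!\left(\frac{10-5}{\sqrt{54}}\right)\;\approx\;0.25,\]

where \(\Phi\) denotes the standard normal cumulative distribution function.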
### Logical Consequences of Boolean Comparisons We now define the semantics of a DF-PLP program, i.e., extend the basic distribution \(P_{\mathcal{F}}\) over the comparison atoms of a distributional database to a distribution over the Herbrand interpretations of a logic program using said database. **Definition 3.16** (DF-PLP Program).: _A DF-PLP program \(\mathcal{P}^{DF}=\mathcal{D}\cup\mathcal{R}\) consists of a well-defined distributional database \(\mathcal{D}\) (Definition 3.10), comparison atoms \(\mathcal{F}\) (Definition 3.13), and a normal logic program \(\mathcal{R}\) where clause heads belong to the regular vocabulary (cf. Definition 3.2), and which can use comparison atoms from \(\mathcal{F}\) in their bodies._ **Example 3.17**.: _We further extend the running example._ ``` 1%distributionfacts\(\mathcal{D}\) 2x~normal(5,2). 3y~normal(x,7). 4z~normal(y,1). 5%logicprogram\(\mathcal{R}\) 6a:-abs(x-y)=<1. 7b:-nota,z>10. ``` _The logic program defines two logical consequences of Boolean comparisons over the Bayesian network, where \(\mathsf{a}\) is true if the absolute difference between random variables \(\mathbf{x}\) and \(\mathbf{y}\) is at most one, and \(\mathsf{b}\) is true if \(\mathsf{a}\) is false, and the random variable \(\mathbf{z}\) is greater than \(10\)._ In order to extend the basic distribution to logical consequences, i.e. logical rules, we require the notion of a _consistent comparisons database_ (CCD). The key idea is that samples of the random variables in \(\mathcal{D}\) jointly determine a truth value assignment to the comparison atoms in \(\mathcal{F}\). **Definition 3.18** (Consistent Comparisons Database).: _Let \(\mathcal{D}\) be a well-defined distributional database, \(\mathcal{F}=\{\kappa_{1},\kappa_{2},\ldots\}\) the corresponding set of measurable Boolean comparison atoms, and \(\omega(\mathcal{V})\) a value assignment to all random variables \(\mathcal{V}=\{\nu_{1},\nu_{2},...\}\). We define \(I_{\omega(\mathcal{V})}(\kappa_{i})=\top\) if \(\kappa_{i}\) is true after setting all random variables to their values under \(\omega(\mathcal{V})\), and \(I_{\omega(\mathcal{V})}(\kappa_{i})=\bot\) otherwise. \(I_{\omega(\mathcal{V})}\) induces the consistent comparisons database \(\mathcal{F}_{\omega(\mathcal{V})}=\{\kappa_{i}\mid I_{\omega(\mathcal{V})}( \kappa_{i})=\top\}\)._ To define the semantics of a DF-PLP program \(\mathcal{P}^{DF}\), we now require that, given a CCD \(\mathcal{F}_{\omega(\mathcal{V})}\), the logical consequences in \(\mathcal{P}^{DF}\) are uniquely defined. As common in the PLP literature, we achieve this by requiring the program to have a two-valued well-founded model (Van Gelder et al., 1991) for each possible value assignment \(\omega(\mathcal{V})\). **Definition 3.19** (Valid DF-PLP Program).: _A DF-PLP program \(\mathcal{P}^{DF}=\mathcal{D}\cup\mathcal{R}\) is called valid if and only if for each CCD \(\mathcal{F}_{\omega(\mathcal{V})}\), the logic program \(\mathcal{F}_{\omega(\mathcal{V})}\cup\mathcal{R}\) has a two-valued well-founded model._ We follow the common practice of defining the semantics with respect to ground programs; the semantics of a program with non-ground \(\mathcal{R}\) is defined as the semantics of its grounding with respect to the Herbrand universe. **Proposition 3.20**.: _A valid DF-PLP program \(\mathcal{P}^{DF}\) induces a unique probability measure \(P_{\mathcal{P}^{DF}}\) over Herbrand interpretations._ Proof.: See Appendix C.3. 
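To illustrate Proposition 3.20 on the program of Example 3.17 (under the same mean/standard-deviation reading of normal/2 as above), the induced measure assigns to the atom a the probability of the comparison event in its rule body: since \(\mathtt{x}-\mathtt{y}\) is a centered Gaussian with standard deviation \(7\),

\[P_{\mathcal{P}^{DF}}(\mathtt{a}=\top)\;=\;P_{\mathcal{V}}\big(|\omega(\mathtt{x})-\omega(\mathtt{y})|\leq 1\big)\;=\;2\,\Phi\!\left(\tfrac{1}{7}\right)-1\;\approx\;0.11.\]

The probability of b, by contrast, involves the joint law of \(\mathtt{x}-\mathtt{y}\) and \(\mathtt{z}\), since the two events in its rule body are not independent.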
**Definition 3.21**.: _We define the declarative semantics of a DF-PLP program \(\mathcal{P}^{DF}\) to be the probability measure \(P_{\mathcal{P}^{DF}}\)._

In contrast to the imperative semantics of Nitti et al. (2016), in DF-PLP the connection between comparison atoms and the logic program is purely declarative. That is, logic program negation on comparison atoms negates the (interpreted) comparison. For example, if we have a random variable n, then n>=2 is equivalent to not n<2. Such equivalences do not hold in the stratified programs introduced by Nitti et al. (2016). This then allows the programmer to refactor the logic part as one would expect.

## 4 DC-ProbLog

While the previous section has focused on the core elements of the DC-ProbLog language, we now introduce syntactic sugar to ease modelling. We consider three kinds of modelling patterns in DF-PLP, and introduce a more compact notation for each of them. We focus on examples and intuitions first. Subsequently, we formally define the semantics of DC-ProbLog (DF-PLP + syntactic sugar) in Section 4.2.

### Syntactic Sugar: Syntax and Examples

#### 4.1.1 Boolean Random Variables

The first modelling pattern concerns Boolean random variables, which we already encountered in Section 2.1 as probabilistic facts (in DC-ProbLog) or as a combination of a Bernoulli random variable, a comparison atom, and a logic rule (in DF-PLP). Below we give a more concise example.

**Example 4.1**.: _We model, in DF-PLP, an alarm that goes off for different reasons._
```
issue1 ~ flip(0.1).
issue2 ~ flip(0.6).
issue3 ~ flip(0.3).

alarm :- issue1=:=1, not issue2=:=1.
alarm :- issue3=:=1, issue1=:=0.
alarm :- issue2=:=1.
```
To make such programs more readable, we borrow the well-known notion of _probabilistic fact_ from discrete PLP, which directly introduces a logical atom as alias for the comparison of a random variable with the value 1, together with the probability of that value being taken.

**Definition 4.2** (Probabilistic Fact).: _A probabilistic fact is of the form \(p::\mu\), where \(p\) is an arithmetic term that evaluates to a real number in the interval \([0,1]\) and \(\mu\) is a regular ground atom._

**Example 4.3**.: _We use probabilistic facts to rewrite the previous example._
```
0.1::problem1.
0.6::problem2.
0.3::problem3.

alarm :- problem1, not problem2.
alarm :- problem3, not problem1.
alarm :- problem2.
```

#### 4.1.2 Probabilistically Selected Logical Consequences

The second pattern concerns situations where a random variable with a finite domain is used to model a choice between several logical consequences:

**Example 4.4**.: _We use a random variable to model a choice between whom to call upon hearing the alarm._
```
call ~ finite([0.6:1,0.2:2,0.1:3]).
alarm.
call(mary) :- call=:=1, alarm.
call(john) :- call=:=2, alarm.
call(police) :- call=:=3, alarm.
```
To more compactly specify such situations, we borrow the concept of an _annotated disjunction_ from discrete PLP (Vennekens et al., 2004).

**Definition 4.5** (Annotated Disjunction).: _An annotated disjunction (AD) is a rule of the form_
\[p_{1}::\mu_{1};\ldots;p_{n}::\mu_{n}\;\texttt{:-}\;\beta.\]
_where the \(p_{i}\)'s are arithmetic terms each evaluating to a number in \([0,1]\) with a total sum of at most \(1\).
The \(\mu_{i}\)'s are regular ground atoms, and \(\beta\) is a (possibly empty) conjunction of literals._

The informal meaning of such an AD is "if \(\beta\) is true, it probabilistically causes one of the \(\mu_{i}\) (or none of them, if the probabilities sum to less than one) to be true as well".

**Example 4.6**.: _We now use an AD for the previous example._
```
alarm.
0.6::call(mary); 0.2::call(john); 0.1::call(police) :- alarm.
```
It is worth noting that the same head atom may appear in multiple ADs, whose bodies may be non-exclusive, i.e., be true at the same time. That is, while a single AD _can_ be used to model a multi-valued random variable, _not all_ ADs encode such variables.

**Example 4.7**.: _The following program models the effect of two kids throwing stones at a window._
```
0.5::throws(suzy).
throws(billy).

0.8::effect(broken); 0.2::effect(none) :- throws(suzy).
0.6::effect(broken); 0.4::effect(none) :- throws(billy).
```
_Here, we have \(P(\text{effect(broken)})=0.76\) and \(P(\text{effect(none)})=0.46\), as there are worlds where both \(\text{effect(broken)}\) and \(\text{effect(none)}\) hold. The two ADs do hence not encode a categorical distribution. This is explicit in the DF-PLP program, which contains a separate random variable for each of the two ADs (x1 and x2), next to x0 for the probabilistic fact:_
```
x0 ~ flip(0.5).
throws(suzy) :- x0=:=1.
throws(billy).

x1 ~ finite([0.8:1,0.2:2]).
effect(broken) :- x1=:=1, throws(suzy).
effect(none) :- x1=:=2, throws(suzy).

x2 ~ finite([0.6:1,0.4:2]).
effect(broken) :- x2=:=1, throws(billy).
effect(none) :- x2=:=2, throws(billy).
```

#### 4.1.3 Context-Dependent Distributions

The third pattern is concerned with situations where the same conclusion is based on random variables with different distributions depending on specific properties of the situation, as illustrated by the following example.

**Example 4.8**.: _We use two separate random variables to model that whether a machine works depends on the temperature being below or above a threshold. The temperature follows different distributions based on whether it is a hot day or not, but the threshold is independent of the type of day._
```
0.2::hot.

temp_hot ~ normal(27,5).
temp_not_hot ~ normal(20,5).

works :- hot, temp_hot<25.0.
works :- not hot, temp_not_hot<25.0.
```
To more compactly specify such situations, we borrow the syntax of _distributional clauses_ from the DC language (Gutmann et al., 2011), which we already encountered in Section 2.1.

**Definition 4.9** (Distributional Clause).: _A distributional clause (DC) is a rule of the form_
\[\tau\sim\delta\;\texttt{:-}\;\beta.\]
_where \(\tau\) is a regular ground expression, the functor of \(\delta\) is in \(\Delta\), and \(\beta\) is a conjunction of literals._

We call the left-hand side of the \(\sim\)/2 predicate in a distributional clause a _random term_ and the right-hand side a _distribution term_. Note that random terms cannot always be interpreted as random variables, which we discuss now. The informal meaning of a distributional clause is "if \(\beta\) is true, then the random term \(\tau\) refers to a random variable that follows a distribution given by the distribution term \(\delta\)". Here, the distinction between _refers to_ a random variable and _is_ a random variable becomes crucial, as we will often have several distributional clauses for the same random term. This is also the case in the following example.

**Example 4.10**.: _Using distributional clauses, we can rewrite the previous example with a single random term_ temp _as_
```
0.2::hot.

temp ~ normal(27,5) :- hot.
temp ~ normal(20,5) :- not hot.

works :- temp < 25.0.
```
_The idea is that we still have two underlying random variables, one for each distribution, but the logic program uses the same term to refer to both of them depending on the logical context. The actual comparison facts are on the level of these implicit random variables, and_ temp<25.0 _refers to one of them depending on context, just as in the original example._
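To preview how these implicit random variables can be made explicit, the following sketch rewrites Example 4.10 with one fresh random variable per distributional clause and an auxiliary rv/2 atom recording which variable temp refers to in which context; the variable names v1 and v2 are chosen for illustration only, and Section 4.2.2 defines this transformation formally.
```
0.2::hot.

% one fresh random variable per distributional clause
v1 ~ normal(27,5).
rv(temp, v1) :- hot.
v2 ~ normal(20,5).
rv(temp, v2) :- not hot.

% the comparison is made against whichever variable temp
% refers to in the current context
works :- rv(temp, v1), v1 < 25.0.
works :- rv(temp, v2), v2 < 25.0.
```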
### Syntactic Sugar: Semantics

We now formalize the declarative semantics of DC-ProbLog, i.e. DF-PLP extended with probabilistic facts, annotated disjunctions and distributional clauses. The idea is to define program transformations that eliminate these three modelling constructs from a DC-ProbLog program, resulting in a DF-PLP program for which we have defined the semantics in Section 3. Throughout this section, we will treat distributional facts as distributional clauses with empty bodies, and we will only consider ground programs for ease of notation. As usual, a non-ground program is shorthand for its Herbrand grounding.

**Definition 4.11** (Statement).: _A DC-ProbLog statement is either a probabilistic fact, an annotated disjunction, a distributional clause, or a normal clause._

**Definition 4.12** (DC-ProbLog program).: _A DC-ProbLog program \(\mathcal{P}\) is a countable set of ground DC-ProbLog statements._

#### 4.2.1 Eliminating Probabilistic Facts and Annotated Disjunctions

**Example 4.13**.: _We use the following DC-ProbLog program as running example._
```
p ~ beta(1,1).
p::a.
b ~ normal(3,1) :- a.
b ~ normal(10,1) :- not a.
c ~ normal(b,5).
0.2::d; 0.5::e; 0.3::f :- not b<5, b < 10.
g :- a, not f, b+c<15.
```

**Definition 4.14** (Eliminating Probabilistic Facts and ADs).: _Let \(\mathcal{P}\) be a DC-ProbLog program. We define the following transformation rules to eliminate probabilistic facts and annotated disjunctions._

* _Replace each probabilistic fact_ \(p::\mu\) _in_ \(\mathcal{P}\) _by_
\[\nu\sim flip(p).\]
\[\mu\;\texttt{:-}\;\nu=:=1.\]
_with a fresh random variable_ \(\nu\) _for each probabilistic fact._
* _Replace each AD_ \(p_{1}::\mu_{1};\ldots;p_{n}::\mu_{n}\;\texttt{:-}\;\beta\) _in_ \(\mathcal{P}\) _by_
\[\nu\sim finite([p_{1}:1,\ldots,p_{n}:n]).\]
\[\mu_{1}\;\texttt{:-}\;\nu=:=1,\beta.\]
\[\ldots\]
\[\mu_{n}\;\texttt{:-}\;\nu=:=n,\beta.\]
_with a fresh random variable_ \(\nu\) _for each AD._

Note that if the probability label(s) of a fact or AD include random terms, as in the case of p::a in Example 4.13, then these are parents of the newly introduced random variable. However, the new random variable will not be a parent of other random variables, as they are only used locally within the new fragments. They thus introduce neither cycles nor infinite ancestor sets into the program.

**Definition 4.15** (AD-Free Program).: _An AD-free DC-ProbLog program \(\mathcal{P}^{*}\) is a DC-ProbLog program that contains neither probabilistic facts nor annotated disjunctions. We denote by \(\mathcal{H}_{\mathcal{P}^{*}}\) the set of atoms \(\tau\sim\delta\) that appear as head of a distributional clause in \(\mathcal{P}^{*}\), and by \(\mathcal{T}_{\mathcal{P}^{*}}\) the set of random terms in \(\mathcal{H}_{\mathcal{P}^{*}}\)._

**Example 4.16**.: _Applying Definition 4.14 to Example 4.13 results in_
```
p ~ beta(1,1).
x ~ flip(p).
a :- x =:= 1.
b ~ normal(3,1) :- a.
b ~ normal(10,1) :- not a.
c ~ normal(b,5).
y ~ finite([0.2:1,0.5:2,0.3:3]).
d :- y =:= 1, not b<5, b < 10.
e :- y =:= 2, not b<5, b < 10.
f :- y =:= 3, not b<5, b < 10.
g :- a, not f, b+c<15.
```
_We have \(\mathcal{H}_{\mathcal{P}^{*}}=\{\text{p\,$\sim$\,beta(1,1)},\ \text{x\,$\sim$\,flip(p)},\ \text{b\,$\sim$\,normal(3,1)},\ \text{b\,$\sim$\,normal(10,1)},\ \text{c\,$\sim$\,normal(b,5)},\ \text{y\,$\sim$\,finite([0.2:1,0.5:2,0.3:3])}\}\). Furthermore, we also have \(\mathcal{T}_{\mathcal{P}^{*}}=\{\text{p},\text{x},\text{b},\text{c},\text{y}\}\)._

#### 4.2.2 Eliminating Distributional Clauses

While eliminating probabilistic facts and annotated disjunctions is a rather straightforward local transformation, eliminating distributional clauses is more involved. The reason is that a distributional clause has a global effect in the program, as it defines a condition under which a random term has to be _interpreted_ as a specific random variable when mentioned in a distributional clause or comparison atom. Therefore, eliminating a distributional clause involves both introducing the relevant random variable explicitly to the program and pushing the condition from the body of the distributional clause to all the places in the logic program that interpret the original random term.

Before delving into the mapping from an AD-free DC-ProbLog program to a DF-PLP program, we introduce some relevant terminology.

**Definition 4.17** (Parent, Ancestor).: _Given an AD-free program \(\mathcal{P}^{*}\) with \(\tau_{p}\) and \(\tau_{c}\) in \(\mathcal{T}_{\mathcal{P}^{*}}\), we call \(\tau_{p}\) a parent of \(\tau_{c}\) if and only if \(\tau_{p}\) appears in the distribution term \(\delta_{c}\) associated with \(\tau_{c}\) in \(\mathcal{H}_{\mathcal{P}^{*}}\) (\(\tau_{c}\sim\delta_{c}\in\mathcal{H}_{\mathcal{P}^{*}}\)). We define ancestor to be the transitive closure of parent._

Figure 4.1: Directed acyclic graph representing the ancestor relationship between the random terms in Example 4.16. The random terms \(\mathtt{p}\), \(\mathtt{b}\) and \(\mathtt{y}\) have the empty set as their ancestor set. The ancestor set of \(\mathtt{x}\) is \(\{\mathtt{p}\}\) and that of \(\mathtt{c}\) is \(\{\mathtt{b}\}\).

For random terms, we distinguish _interpreted occurrences_ of the term that need to be resolved to the correct random variable from other occurrences where the random term is treated as any other term in a logic program, e.g., as an argument of a logical atom.

**Definition 4.19** (Interpreted Occurrence).: _An interpreted occurrence of a random term \(\tau\) in an AD-free program \(\mathcal{P}^{*}\) is one of the following:_

* _the use of_ \(\tau\) _as parameter of a distribution term in the head of a distributional clause in_ \(\mathcal{P}^{*}\)_;_
* _the use of_ \(\tau\) _in a comparison literal in the body of a (distributional or normal) clause in_ \(\mathcal{P}^{*}\)_._

_We say that a clause interprets \(\tau\) if there is at least one interpreted occurrence of \(\tau\) in the clause._

**Definition 4.20** (Well-Defined AD-free Program).: _Given an AD-free program \(\mathcal{P}^{*}\) with \(\mathcal{C}_{\mathcal{P}^{*}}\) the set of distributional clauses in \(\mathcal{P}^{*}\), we call \(\mathcal{C}_{\mathcal{P}^{*}}\) well-defined if the following conditions hold:_

**DC1**: _For each random term_ \(\tau\in\mathcal{T}_{\mathcal{P}^{*}}\)_, the number of distributional clauses_ \(\tau\sim\delta\;\texttt{:-}\;\beta\) _in_ \(\mathcal{C}_{\mathcal{P}^{*}}\) _is finite._

**DC2**: _The ancestor relation on the random terms_ \(\mathcal{T}_{\mathcal{P}^{*}}\) _is acyclic._
Note that, as for a well-defined \(\mathcal{C}_{\mathcal{P}^{*}}\) the ancestor relationship between random terms constitutes an acyclic directed graph (cf. Figure 4.1), the apparent mutual recursion evaporates. We can now define the distributional facts encoding of the distributional clauses, which will give rise to a DF-PLP program instead of a DC-ProbLog program.

**Definition 4.22** (Distributional Facts Encoding).: _Let \(\mathcal{P}^{*}\) be an AD-free DC-ProbLog program and \(\mathcal{C}_{\mathcal{P}^{*}}\) its set of distributional clauses. We define the distributional facts encoding of \(\mathcal{C}_{\mathcal{P}^{*}}\) as \(\mathcal{C}_{\mathcal{P}^{*}}^{DF}\coloneqq\mathcal{D}\cup\mathcal{R}^{c}\), with_
\[\mathcal{D}=\bigcup_{\tau\in\mathcal{T}_{\mathcal{P}^{*}}}\mathcal{D}(\tau)\qquad\mathcal{R}^{c}=\bigcup_{\tau\in\mathcal{T}_{\mathcal{P}^{*}}}\mathcal{R}^{c}(\tau)\]
_using \(\mathcal{D}(\cdot)\) and \(\mathcal{R}^{c}(\cdot)\) from Definition 4.21._

**Example 4.23** (Eliminating Distributional Clauses).: _We demonstrate the elimination of distributional clauses using the DCs in Example 4.16, i.e._
```
1 p ~ beta(1,1).
2 x ~ flip(p).
3 b ~ normal(3,1) :- a.
4 b ~ normal(10,1) :- not a.
5 c ~ normal(b,5).
6 y ~ finite([0.2:1,0.5:2,0.3:3]).
```
_Here, the distribution terms in Line 2 and Line 5 (flip(p) and normal(b,5)) contain one parent random term each (p and b, respectively), whereas all others have no parents. As b is defined by two clauses, we get fresh random variables for each of them, which in turn introduces different fresh random variables for the child c. This gives us:_
```
v1 ~ beta(1,1).
rv(p,v1).
v2 ~ flip(v1).
rv(x,v2) :- rv(p,v1).
v3 ~ normal(3,1).
rv(b,v3) :- a.
v4 ~ normal(10,1).
rv(b,v4) :- not a.
v5 ~ normal(v3,5).
rv(c,v5) :- rv(b,v3).
v6 ~ normal(v4,5).
rv(c,v6) :- rv(b,v4).
v7 ~ finite([0.2:1,0.5:2,0.3:3]).
rv(y,v7).
```
Eliminating distributional clauses (following Definition 4.21) introduces the distributional facts and context rules necessary to encode the original distributional clauses. To complete the transformation to a DF-PLP program, we further transform the logical rules. Prior to that, however, we need to define the _contextualization function_.

**Definition 4.24** (Contextualization Function).: _Let \(\beta\) be a conjunction of atoms and let its comparison literals interpret the random terms \(\tau_{1},\ldots,\tau_{n}\). Furthermore, let \(\Lambda_{i}\) be a special logical variable associated to a random term \(\tau_{i}\in\mathcal{T}_{\mathcal{P}^{*}}\) for each \(\tau_{i}\).
We define \(K(\beta)\) to be the conjunction of literals obtained by replacing the interpreted occurrences of the \(\tau_{i}\) in \(\beta\) by their corresponding \(\Lambda_{i}\) and conjoining to this modified conjunction \(\textsc{rv}(\tau_{i},\Lambda_{i})\) for each \(\tau_{i}\). We call \(K(\cdot)\) the contextualization function._

**Definition 4.25** (Contextualized Rules).: _Let \(\mathcal{P}^{*}\) be an AD-free program with logical rules \(\mathcal{R}^{\mathcal{P}^{*}}\) and distributional clauses \(\mathcal{C}_{\mathcal{P}^{*}}\), and let \(\mathcal{C}^{DF}_{\mathcal{P}^{*}}=\mathcal{D}\cup\mathcal{R}^{c}\) be the distributional facts encoding of \(\mathcal{C}_{\mathcal{P}^{*}}\). We define the contextualization of the bodies of the rules \(\mathcal{R}^{\mathcal{P}^{*}}\cup\mathcal{R}^{c}\) as a two-step process:_

1. _Apply the contextualization function_ \(K\) _to all bodies in_ \(\mathcal{R}^{\mathcal{P}^{*}}\cup\mathcal{R}^{c}\) _and obtain:_
\[\mathcal{R}^{\Lambda}=\{\eta\;\texttt{:-}\;K(\beta)\mid\eta\;\texttt{:-}\;\beta\in\mathcal{R}^{\mathcal{P}^{*}}\cup\mathcal{R}^{c}\}\]
2. _Obtain the set of ground logical rules_ \(\mathcal{R}\) _by grounding each logical variable_ \(\Lambda_{i}\) _in_ \(\mathcal{R}^{\Lambda}\) _with random variables_ \(\nu_{i}\in\mathcal{V}(\tau_{i})\) _in all possible ways._

_We call \(\mathcal{R}\) the contextualized logic program of \(\mathcal{P}^{*}\)._

The contextualization function \(K(\cdot)\) creates non-ground comparison atoms, e.g. L\(>\)5. Contrary to (ground) random terms, non-ground logical variables in such a comparison atom are not interpreted occurrences (cf. Definition 4.19) and the comparison itself only has a logical meaning. By grounding out the freshly introduced logical variables we obtain a purely logical program where the comparison atoms contain either arithmetic expressions or random variables (instead of random terms).

**Example 4.26** (Contextualizing Random Terms).: _Let us now study the effect of the second transformation step. Consider again the AD-free program in Example 4.16 and the set of rules and distributional clauses obtained in Example 4.23. The contextualization step T2a rewrites the logical rules in the AD-free input program to_
```
a :- rv(x,Lx), Lx =:= 1.
d :- rv(y,Ly), rv(b,Lb), Ly =:= 1, not Lb<5, Lb < 10.
e :- rv(y,Ly), rv(b,Lb), Ly =:= 2, not Lb<5, Lb < 10.
f :- rv(y,Ly), rv(b,Lb), Ly =:= 3, not Lb<5, Lb < 10.
g :- rv(b,Lb), rv(c,Lc), a, not f, Lb+Lc < 15.
```
_These rules then get instantiated (step T2b) to_
```
a :- rv(x,v2), v2 =:= 1.
d :- rv(y,v7), rv(b,v3), v7 =:= 1, not v3<5, v3 < 10.
e :- rv(y,v7), rv(b,v3), v7 =:= 2, not v3<5, v3 < 10.
f :- rv(y,v7), rv(b,v3), v7 =:= 3, not v3<5, v3 < 10.
d :- rv(y,v7), rv(b,v4), v7 =:= 1, not v4<5, v4 < 10.
e :- rv(y,v7), rv(b,v4), v7 =:= 2, not v4<5, v4 < 10.
f :- rv(y,v7), rv(b,v4), v7 =:= 3, not v4<5, v4 < 10.
g :- rv(b,v3), rv(c,v5), a, not f, v3+v5<15.
g :- rv(b,v3), rv(c,v6), a, not f, v3+v6<15.
g :- rv(b,v4), rv(c,v5), a, not f, v4+v5<15.
g :- rv(b,v4), rv(c,v6), a, not f, v4+v6<15.
```
_Together with the distributional facts and rules obtained in Example 4.23, this last block of rules forms the DF-PLP program that specifies the semantics of the AD-free DC-ProbLog program, and thus the semantics of the DC-ProbLog program in Example 4.13._

We note that the mapping from an AD-free program to a set of distributional facts and contextualized rules as defined here is purely syntactical, and written to avoid case distinctions. Therefore, it usually produces overly verbose programs. For instance, for random terms introduced by a distributional fact, the indirection via rv is only needed if there is a parent term in the distribution that has context-specific interpretations.

The grounding step may introduce rule instances whose conjunction of rv-atoms is inconsistent. This is for example the case for the last three rules for g in Example 4.26, which we illustrate in the example below.

**Example 4.27**.: _The following is a (manually) simplified version of the DF-PLP program for the running example, where we propagated definitions of rv-atoms:_
```
v1 ~ beta(1,1).
v2 ~ flip(v1).
v3 ~ normal(3,1).
v4 ~ normal(10,1).
v5 ~ normal(v3,5).
v6 ~ normal(v4,5).
v7 ~ finite([0.2:1,0.5:2,0.3:3]).

a :- v2 =:= 1.
d :- a, v7 =:= 1, not v3<5, v3 < 10.
e :- a, v7 =:= 2, not v3<5, v3 < 10.
f :- a, v7 =:= 3, not v3<5, v3 < 10.
d :- not a, v7 =:= 1, not v4<5, v4 < 10.
e :- not a, v7 =:= 2, not v4<5, v4 < 10.
f :- not a, v7 =:= 3, not v4<5, v4 < 10.
g :- a, a, a, not f, v3+v5<15.
g :- a, not a, a, not f, v3+v6<15.     % inconsistent
g :- not a, a, a, not f, v4+v5<15.     % inconsistent
g :- not a, not a, a, not f, v4+v6<15. % inconsistent
```
_In the bodies of the last three rules we have, inter alia, conjunctions of a and not a. This can never be satisfied and renders the bodies of these rules inconsistent._

**Definition 4.28** (Semantics of AD-free DC-ProbLog Programs).: _The semantics of an AD-free DC-ProbLog program \(\mathcal{P}^{*}\) is the semantics of the DF-PLP program \(\mathcal{P}^{DF,*}=\mathcal{D}\cup\mathcal{R}\). We call \(\mathcal{P}^{*}\) valid if and only if \(\mathcal{P}^{DF,*}\) is valid._

**Definition 4.29** (Semantics of DC-ProbLog Programs).: _The semantics of a DC-ProbLog program \(\mathcal{P}\) is the semantics of the AD-free DC-ProbLog program \(\mathcal{P}^{*}\). We call \(\mathcal{P}\) valid if and only if \(\mathcal{P}^{*}\) is valid._

Programs with distributional clauses can make programs with combinatorial structures more readable by grouping random variables with the same role under the same random term. However, the programmer needs to be aware of the fact that distributional clauses have non-local effects on the program, as they affect the interpretation of their random terms also outside the distributional clause itself. This can be rather subtle, especially if the bodies of the distributional clauses with the same random term are not exhaustive. We discuss this issue in more detail in Appendix D.

### Syntactic Sugar: Validity

As stated above, a DC-ProbLog program \(\mathcal{P}\) is syntactic sugar for an AD-free program \(\mathcal{P}^{*}\) (Definition 4.14), and is valid if \(\mathcal{P}^{DF,*}\) as specified in Definition 4.28 is a valid DF-PLP program, i.e. the distributional database is well-defined, the comparison literals are measurable, and each consistent comparisons database results in a two-valued well-founded model if added to the logic program (Definition 3.19).
For the distributional database to be well-defined (Definition 3.10), it suffices to have \(\mathcal{C}_{\mathcal{P}^{*}}\) well-defined (Definition 4.20), as can be verified by comparing the relevant definitions. Indeed, a well-defined \(\mathcal{C}_{\mathcal{P}^{*}}\) is a precondition for the transformation as stated in the definition. The transformation changes neither distribution terms nor comparison literals, and thus maintains measurability of the latter. As far as the logic program structure is concerned, the transformation to a DF-PLP program adds rules for rv based on the bodies of all distributional clauses, and uses positive rv atoms in the bodies of all clauses that interpret random terms to ensure that all interpretations of random variables are anchored in the appropriate parts of the distributional database. This level of indirection does not affect the logical reasoning for programs that only interpret random terms in appropriate contexts. It is the responsibility of the programmer to ensure that this is the case and indeed results in appropriately defined models.

### Syntactic Sugar: Additional Constructs

#### 4.4.1 User-Defined Sample Spaces

The semantics of DC-ProbLog as presented in the previous sections only allows for random variables with numerical sample spaces, e.g. normal or Poisson distributions. For categorical random variables, however, one might like to give a specific meaning to the elements in the sample space instead of a numerical value.

**Example 4.30**.: _Consider the following program:_
```
color ~ uniform([r,g,b]).
red :- color=:=r.
```
_Here we describe a categorical random variable (uniformly distributed) whose sample space is the set of expressions \(\{\mathtt{r},\mathtt{b},\mathtt{g}\}\). By simply associating a natural number to each element of the sample space we can map the program back to a program whose semantics we already defined:_
```
color ~ uniform([1,2,3]).
r :- color=:=1.
red :- r.
```
Swapping out the sample space of discrete random variables with natural numbers is always possible, as the cardinality of such a sample space is either smaller than (finite categorical) or equal to (countably infinite) the cardinality of the natural numbers.

#### 4.4.2 Multivariate Distributions

Until now we have restricted the syntax and semantics of DC-ProbLog to univariate distributions, e.g. the univariate normal distribution. At first this might seem to severely limit the expressivity of DC-ProbLog, as probabilistic modelling with multivariate random variables is a common task in modern statistics and probabilistic programming. However, this concern is voided by realizing that multivariate random variables can be decomposed into _combinations_ of independent univariate random variables. We will illustrate this on the case of the bivariate normal distribution.
**Example 4.31** (Constructing the Bivariate Normal Distribution).: _Assume we would like to construct a random variable distributed according to a bivariate normal distribution:_ \[\begin{pmatrix}\nu_{1}\\ \nu_{2}\end{pmatrix}\sim\mathcal{N}\left(\begin{pmatrix}\mu_{1}\\ \mu_{2}\end{pmatrix},\begin{pmatrix}\sigma_{11}&\sigma_{12}\\ \sigma_{21}&\sigma_{22}\end{pmatrix}\right)\] _The equation above can be rewritten as:_ \[\begin{pmatrix}\nu_{1}\\ \nu_{2}\end{pmatrix}\sim\begin{pmatrix}\mu_{1}\\ \mu_{2}\end{pmatrix}+\begin{pmatrix}\eta_{11}&\eta_{12}\\ \eta_{21}&\eta_{22}\end{pmatrix}\begin{pmatrix}\mathcal{N}(0,\lambda_{1})\\ \mathcal{N}(0,\lambda_{2})\end{pmatrix}\] _where it holds that_ \[\begin{pmatrix}\sigma_{11}&\sigma_{12}\\ \sigma_{21}&\sigma_{22}\end{pmatrix}=\begin{pmatrix}\eta_{11}&\eta_{12}\\ \eta_{21}&\eta_{22}\end{pmatrix}\begin{pmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{pmatrix}\begin{pmatrix}\eta_{11}&\eta_{21}\\ \eta_{12}&\eta_{22}\end{pmatrix}\] _It can now be shown that the bivariate distributions can be expressed as:_ \[\begin{pmatrix}\nu_{1}\\ \nu_{2}\end{pmatrix}\sim\begin{pmatrix}\mathcal{N}(\mu_{\nu_{1}},\sigma_{\nu_{ 1}})\\ \mathcal{N}(\mu_{\nu_{2}},\sigma_{\nu_{2}})\end{pmatrix}\] _where \(\mu_{\nu_{1}}\), \(\mu_{\nu_{2}}\), \(\sigma_{\nu_{1}}\) and \(\sigma_{\nu_{2}}\) can be expressed as:_ \[\mu_{\nu_{1}}=\mu_{1} \sigma_{\nu_{1}}=\sqrt{\eta_{11}\lambda_{1}^{2}+\eta_{12}\lambda_ {2}^{2}}\] \[\mu_{\nu_{2}}=\mu_{2} \sigma_{\nu_{2}}=\sqrt{\eta_{21}\lambda_{1}^{2}+\eta_{22}\lambda_ {2}^{2}}\] _We conclude from this that a bivariate normal distribution can be modeled using two univariate normal distributions that have a shared set of parameters and is thereby semantically defined in DC-ProbLog._ Expressing multivariate random variables in a user-friendly fashion in a probabilistic programming language is simply a matter of adding syntactic sugar for combinations of univariate random variables once the semantics are defined for the latter. **Example 4.32** (Bivariate Normal Distribution).: _Possible syntactic sugar to declare a bivariate normal distribution in DC-ProbLog, where the mean of the distribution in the two dimensions is \(0.5\) and \(2\), and the covariance matrix is \(\begin{bmatrix}2&0.5\\ 0.5&1\end{bmatrix}\)._ 1. (x1,x2) ~normal2D([0.5,2], [[2, 0.5],[0.5,1]]) 2. q:- x1<0.4, x2>1.9. On the inference side, the special syntax might then additionally be used to deploy dedicated inference algorithms. This is usually done in probabilistic programming languages that cater towards inference with multivariate (and often continuous) random variables (Carpenter et al., 2017; Bingham et al., 2019). Note that probability distributions are usually constructed by applying transformations to a set of independent uniform distribution. From this viewpoint the builtin-in normal/\(2\), denoting the univariate normal distribution, is syntactic sugar for such a transformation as well. ## 5 Probabilistic Inference Tasks In Section 3.3 we defined the probability distribution induced by a DF-PLP program by extending the basic distribution to logical consequences (expressed as logical rules). The joint distribution is then simply the joint distribution over all (ground) logical consequences. We obtain marginal probability distributions by marginalizing out specific logical consequences.1 This means that marginal and joint probabilities of atoms in DF-PLP programs are well-defined. 
Defining the semantics of probabilistic logic programs using an extension of Sato's distribution semantics gives us the semantics of probabilistic queries: the probability of an atom of interest is given by the probability induced by the joint probability of the program and marginalizing out all atoms one is not interested in. Footnote 1: This is possible as the compatibility condition is satisfied by construction in the distribution semantics. See also the proof of Proposition 3.15 in Section C.2. The situation is more involved with regard to conditional probability queries. In contrast to unconditional queries, not all conditional queries are well-defined under the distribution semantics. We will now give the formal definition of the PROB task, which lets us compute the (conditional) marginal probability of probabilistic events and which has so far not yet been defined in the PLP literature for hybrid domains under a declarative semantics (e.g. (Azzolini et al., 2021)). After defining the task of computing conditional marginal probabilities, we will study how to compute these probabilities in the hybrid domain. Before defining the PROB task, we will first need to formally introduce the notion of a conditional probability with respect to a DC-ProbLog program. **Definition 5.1** (Conditional Probability).: _Let \(\mathcal{A}\) be the set of all ground atoms in a given DC-ProbLog program \(\mathcal{P}\). Let \(\mathcal{E}=\{\eta_{1},\ldots,\eta_{n}\}\subset\mathcal{A}\) be a set of observed atoms, and \(e=\langle e_{1},\ldots,e_{n}\rangle\) a vector of corresponding observed truth values, with \(e_{i}\in\{\bot,\top\}\). We refer to \((\eta_{1}=e_{1})\wedge\ldots\wedge(\eta_{n}=e_{n})\) as the evidence and write more compactly \(\mathcal{E}=e\). Let \(\mu\in\mathcal{A}\) be an atom of interest called the query. If the probability of \(\mathcal{E}=e\) is greater than zero, then the conditional probability of \(\mu=\top\) given \(\mathcal{E}=e\) is defined as:_ \[P_{\mathcal{P}}(\mu=\top\mid\mathcal{E}=e)=\frac{P_{\mathcal{P}}(\mu=\top, \mathcal{E}=e)}{P_{\mathcal{P}}(\mathcal{E}=e)} \tag{5.1}\] **Definition 5.2** (PROB Task).: _Let \(\mathcal{A}\) be the set of all ground atoms of a given DC-ProbLog program \(\mathcal{P}\). We are given the (potentially empty) evidence \(\mathcal{E}=e\) (with \(\mathcal{E}\subset\mathcal{A}\)) and a set \(\mathcal{Q}\subset\mathcal{A}\) of atoms of interest, called query atoms. The_ **PROB task** _consists of computing the conditional probability of the truth value of every atom in \(\mathcal{Q}\) given the evidence, i.e. compute the conditional probability \(P_{\mathcal{P}}(\mu\)=\(\top\mid\mathcal{E}\)=\(e)\) for each \(\mu\in\mathcal{Q}\)._ **Example 5.3** (Valid Conditioning Set).: _Assume two random variables \(\nu_{1}\) and \(\nu_{2}\), where \(\nu_{1}\) is distributed according to a normal distribution and \(\nu_{2}\) is distributed according to a Poisson distribution. Furthermore, assume the following conditioning set \(\mathcal{E}=\{\eta_{1}=\top,\eta_{2}=\top\}\), where \(\eta_{1}\leftrightarrow(\nu_{1}>0)\) and \(\eta_{2}\leftrightarrow(\nu_{2}=5)\). This is a valid conditioning set as none of the events has a zero probability of occurring, and we can safely perform the division in Equation 5.1._ ### Conditioning on Zero-Probability Events A prominent class of conditional queries, which are not captured by Definition 5.1, are so-called zero probability conditional queries. 
For such queries the probability of the observed event happening is actually zero but the event is still possible. Using Equation 5.1 does not work anymore as a division by zero would now occur.

**Example 5.4** (Zero-Probability Conditioning Set).: _Assume that we have a random variable \(\nu\) distributed according to a normal distribution and that we have the conditioning set \(\mathcal{E}=\{\eta=\top\}\), with \(\eta\leftrightarrow(\nu=20)\). In other words, we condition the query on the observation that the random variable \(\nu\) takes the value \(20\), for instance in a distance measuring experiment. This is problematic as the probability of any specific value for a random variable with uncountably many outcomes is in fact zero and applying Equation 5.1 leads to a division-by-zero. Consequently, an ill-defined conditional probability arises._

In order to sidestep divisions by zero when conditioning on zero-probability (but possible) events, we modify Definition 5.1. Analogously to Nitti et al. (2016), we follow the approach taken in (Kadane, 2011).

**Definition 5.5** (Conditional Probability with Zero-Probability Events).: _Let \(\nu\) be a continuous random variable in the DC-ProbLog program \(\mathcal{P}\) with ground atoms \(\mathcal{A}\). Furthermore, let us assume that the evidence consists of \(\mathcal{E}=\{\eta_{0}=\top\}\) with \(\eta_{0}\leftrightarrow(\nu=w)\) and \(w\in\mathcal{Q}_{\nu}\). The conditional probability of an atom of interest \(\mu\in\mathcal{A}\) is now defined as:_
\[P_{\mathcal{P}}(\mu=\top\mid\eta_{0}=\top)=\lim_{\Delta w\to 0}\frac{P_{\mathcal{P}}(\mu=\top,\nu\in[w-\nicefrac{\Delta w}{2},w+\nicefrac{\Delta w}{2}])}{P_{\mathcal{P}}(\nu\in[w-\nicefrac{\Delta w}{2},w+\nicefrac{\Delta w}{2}])} \tag{5.2}\]

To write this limit more compactly, we introduce an infinitesimally small constant \(\delta w\) and two new comparison atoms \(\eta_{1}\leftrightarrow(w-\nicefrac{\delta w}{2}\leq\nu)\) and \(\eta_{2}\leftrightarrow(\nu\leq w+\nicefrac{\delta w}{2})\) that together encode the limit interval. Using these, we rewrite Equation 5.2 as
\[P_{\mathcal{P}}(\mu=\top\mid\eta_{0}=\top)=\frac{P_{\mathcal{P}}(\mu=\top,\eta_{1}=\top,\eta_{2}=\top)}{P_{\mathcal{P}}(\eta_{1}=\top,\eta_{2}=\top)} \tag{5.3}\]

Applying the definition recursively allows us to have multiple zero probability conditioning events. More specifically, let us assume an additional continuous random variable \(\nu^{\prime}\) that takes the value \(w^{\prime}\), for which we define \(\eta_{1}^{\prime}\leftrightarrow(w^{\prime}-\nicefrac{\delta w^{\prime}}{2}\leq\nu^{\prime})\) and \(\eta_{2}^{\prime}\leftrightarrow(\nu^{\prime}\leq w^{\prime}+\nicefrac{\delta w^{\prime}}{2})\).
This then leads to the following conditional probability:
\[\begin{split}P_{\mathcal{P}}(\mu=\top\mid\nu=w,\nu^{\prime}=w^{\prime})&=\frac{P_{\mathcal{P}}(\mu=\top,\eta_{1}=\top,\eta_{2}=\top\mid\nu^{\prime}=w^{\prime})}{P_{\mathcal{P}}(\eta_{1}=\top,\eta_{2}=\top\mid\nu^{\prime}=w^{\prime})}\\[4pt] &=\frac{P_{\mathcal{P}}(\mu=\top,\eta_{1}=\top,\eta_{2}=\top,\eta_{1}^{\prime}=\top,\eta_{2}^{\prime}=\top)\,/\,P_{\mathcal{P}}(\eta_{1}^{\prime}=\top,\eta_{2}^{\prime}=\top)}{P_{\mathcal{P}}(\eta_{1}=\top,\eta_{2}=\top,\eta_{1}^{\prime}=\top,\eta_{2}^{\prime}=\top)\,/\,P_{\mathcal{P}}(\eta_{1}^{\prime}=\top,\eta_{2}^{\prime}=\top)}\\[4pt] &=\frac{P_{\mathcal{P}}(\mu=\top,\eta_{1}=\top,\eta_{2}=\top,\eta_{1}^{\prime}=\top,\eta_{2}^{\prime}=\top)}{P_{\mathcal{P}}(\eta_{1}=\top,\eta_{2}=\top,\eta_{1}^{\prime}=\top,\eta_{2}^{\prime}=\top)}\end{split} \tag{5.4}\]
Here we first applied the definition of the conditional probability for the observation of the random variable \(\nu\) and then for the observation of the random variable \(\nu^{\prime}\). Finally, we simplified the expression.

**Proposition 5.6**.: _The conditional probability as defined in Definition 5.5 exists._

Proof.: See (Kadane, 2011, Equation 6).

In order to express zero-probability events in DC-ProbLog we add a new built-in comparison predicate to the finite set of comparison predicates \(\Pi=\{<,>,=<,>=,=:=,=\backslash=\}\) (cf. Definition 3.1).

**Definition 5.7** (Delta Interval Comparison).: _For a random variable \(\nu\) and a rational number \(\mathsf{w}\), we define delta_interval(v,w) (with delta_interval\(/2\in\Pi\)) as follows. If \(\nu\) has a countable sample space, then delta_interval(v,w) is equivalent to v=:=w. Otherwise, delta_interval(v,w) is equivalent to the conjunction of the two comparison atoms \(\mathsf{w}-\delta\mathsf{w}\mathbin{=<}\mathsf{v}\) and \(\mathsf{v}\mathbin{=<}\mathsf{w}+\delta\mathsf{w}\), where \(\delta\mathsf{w}\) is an infinitesimally small number._

The delta interval predicate lets us express conditional probabilities with zero probability conditioning events as defined in Definition 5.5. Zero probability conditioning events are often abbreviated as \(P_{\mathcal{P}}(\mu=\top\mid\nu=w)\). This can be confusing as it does not convey the intent of conditioning on an infinitesimally small interval. To this end, we introduce the symbol '\(\doteq\)' (an equal sign with a dot on top). We use this symbol to explicitly point out an infinitesimally small conditioning set. For instance, we abbreviate the limit
\[\lim_{\Delta w\to 0}\frac{P_{\mathcal{P}}(\mu=\top,\nu\in[w-\nicefrac{\Delta w}{2},w+\nicefrac{\Delta w}{2}])}{P_{\mathcal{P}}(\nu\in[w-\nicefrac{\Delta w}{2},w+\nicefrac{\Delta w}{2}])}\]
in Definition 5.5 as:
\[P_{\mathcal{P}}(\mu=\top\mid\nu\doteq w) \tag{5.5}\]
More concretely, if we measure the height \(h\) of a person to be \(180\,cm\), we denote this by \(h\doteq 180\). This means that we measured the height of the person to be in an infinitesimally small interval around \(180\,cm\). Note that the \(\doteq\) sign has slightly different semantics for random variables with a countable support. For discrete random variables the \(\doteq\) is equivalent to the _equal_ sign.

**Example 5.8**.: _Assume that we have a random variable \(\nu\) distributed according to a normal distribution and that we have the evidence set \(\mathcal{E}=\{\eta=\top\}\), with \(\eta\leftrightarrow(\nu\doteq 20)\).
This is a valid conditional probability defined through Definition 5.5._ **Example 5.9**.: _Assume that we have a random variable \(\nu\) distributed according to a normal distribution and that we have the conditioning set \(\mathcal{E}=\{\eta=\top,\eta^{\prime}=\top\}\), with \(\eta_{1}\leftrightarrow(\nu\doteq 20)\) and \(\eta^{\prime}\leftrightarrow(\nu\doteq 30)\). This does not encode a conditional probability as the conditioning event is not a possible event: one and the same random variable cannot be observed to have two different outcomes._ The notation used to condition on zero probability events (even when using '\(\doteq\)') hides away the limiting process that is used to define the conditional probability. This can lead to situations where seemingly equivalent conditional probabilities have diametrically opposed meanings. **Example 5.10**.: _Let us consider the conditioning set \(\mathcal{E}=\{\eta=\top,\eta^{\prime}=\top\}\), with \(\eta\leftrightarrow(\nu\leq 20)\) and \(\eta^{\prime}\leftrightarrow(20\leq\nu)\), which we use again to condition a continuous random variable \(\nu\). In contrast to Example 5.8, where we directly observed \(\nu\doteq 20\), here, Definition 5.1 applies, which states that the conditional probability is undefined as \(P(\nu\leq 20,20\leq\nu)=0\)._ ### Discussion on the Well-Definedness of a Query The probability of an unconditional query to a valid DC-ProbLog program is always well-defined, as it is simply a marginal of the distribution represented by the program. This stands in stark contrast to conditional probabilities: an obvious issue are divisions by zero occurring when the conditioning event does not belong to the set of possible outcomes of the conditioned random variable. Similarly to Wu et al. (2018) we will assume for the remainder of the paper that conditioning events are always possible events, i.e. events that have a non-zero probability but possibly an infinitesimally small probability of occurring. This allows us to bypass potential issues caused by zero-divisions.2 Footnote 2: In general, deciding whether a conditioning event is possible or not is undecidable. This follows from the undecidability of general logic programs under the well-founded semantics (Cherchago et al., 2007). A similar discussion is also presented in the thesis of Brian Milch (Milch, 2006, Proposition 4.8) for the BLOG language, which also discusses decidable language fragments (Milch, 2006, Section 4.5). Even when discarding impossible conditioning events, conditioning a probabilistic event on a zero probability (but possible) event remains inherently ambiguous (Jaynes, 2003] and might lead to the Borel-Kolmogorov paradox. Problems arise when the limiting process used to define the conditional probability with zero probability events (cf. Definition 5.5) does not produce a unique limit. For instance, a conditional probability \(P(\mu=\top\mid 2\nu\doteq\nu^{\prime})\), where \(\nu\) and \(\nu^{\prime}\) are two random variables, depends on the parametrization used. We refer the reader to [Shan and Ramsey, 2017] and [Jacobs, 2021] for a more detailed discussion on ambiguities arising with zero probability conditioning events in the context of probabilistic programming. We will sidestep such ambiguities completely by limiting observations of zero probability events to direct comparisons between random variables and numbers. 
This also makes sense from an epistemological perspective: we interpret a conditioning event as the outcome of an experiment, which produces a number, for instance the reading of a tape measure.

### Conditional Probabilities by Example

**Example 5.11**.: _The following ProbLog program models the conditions under which machines work. There are two machines (Line 1), and three (binary) random terms, which we interpret as random variables as the bodies of the probabilistic facts are empty. The random variables are: the outside temperature (Line 3) and whether the cooling of each machine works (Lines 4 and 5). Each machine works if its cooling works or if the temperature is low (Lines 7 and 8)._
```
1 machine(1). machine(2).
2
3 0.8::temperature(low).
4 0.99::cooling(1).
5 0.95::cooling(2).
6
7 works(N) :- machine(N), cooling(N).
8 works(N) :- machine(N), temperature(low).
```
_We can query this program for the probability of_ works(1) _given that we have as evidence that_ works(2) _is true:_
\[P(\texttt{works(1)=T}\mid\texttt{works(2)=T})\approx 0.998\]

**Example 5.12**.: _In the previous example there are only Boolean random variables (encoded as probabilistic facts) and the DC-ProbLog program is equivalent to an identical ProbLog program. An advantage of DC-ProbLog is that we can now use an almost identical program to model the temperature as a continuous random variable._
```
1 machine(1). machine(2).
2
3 temperature ~ normal(20,5).
4 0.99::cooling(1).
5 0.95::cooling(2).
6
7 works(N) :- machine(N), cooling(N).
8 works(N) :- machine(N), temperature<25.0.
```
_We can again ask for the probability of_ works(1) _given that we have as evidence that_ works(2) _is true, but now the program also involves a continuous random variable:_
\[P(\texttt{works(1)=T}\mid\texttt{works(2)=T})\approx 0.998\]

In the two previous examples we were interested in a conditional probability where the conditioning event has a non-zero probability of occurring. However, DC-ProbLog programs can also encode conditional probabilities where the conditioning event has a zero probability of happening, while still being possible.

**Example 5.13**.: _We model the size of a ball as a mixture of different beta distributions, depending on whether the ball is made out of wood or metal (Line 1). We would now like to know the probability of the ball being made out of wood given that we have a measurement of the size of the ball._
```
1 3/10::material(wood); 7/10::material(metal).
2
3 size ~ beta(2,3) :- material(metal).
4 size ~ beta(4,2) :- material(wood).
```
_Assume that we measure the size of the ball and we find that it is \(0.4\,cm\), which means that we have a measurement (or observation) infinitesimally close to \(0.4\). Using the '\(\doteq\)' notation, we write this conditional probability as:_
\[P\big(\texttt{material(wood)=T}\mid(\texttt{size}\doteq\texttt{4/10})\texttt{=T}\big) \tag{5.6}\]
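This conditional probability can be worked out in closed form with Bayes' rule on the two beta densities: writing \(f_{a,b}\) for the density of \(\mathtt{beta(a,b)}\), i.e. \(f_{4,2}(x)=20x^{3}(1-x)\) and \(f_{2,3}(x)=12x(1-x)^{2}\), we get
\[P\big(\texttt{material(wood)=T}\mid(\texttt{size}\doteq\texttt{4/10})\texttt{=T}\big)=\frac{\nicefrac{3}{10}\cdot f_{4,2}(0.4)}{\nicefrac{3}{10}\cdot f_{4,2}(0.4)+\nicefrac{7}{10}\cdot f_{2,3}(0.4)}=\frac{0.3\cdot 0.768}{0.3\cdot 0.768+0.7\cdot 1.728}=0.16.\]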
The _Indian GPA problem_ was initially proposed by Stuart Russell as an example problem to showcase the intricacies of mixed random variables. Below we express the Indian GPA problem in DC-ProbLog.

**Example 5.14**.: _The Indian GPA problem models US-American and Indian students and their GPAs. Both receive scores on the continuous domain, namely from 0 to 4 (American) and from 0 to 10 (Indian), cf. Lines 9 and 13. With non-zero probabilities both student groups can also obtain marks at the extremes of the respective scales (Lines 10, 11, 14, 15)._
```
1 1/4::american; 3/4::indian.
2
3 19/20::isdensity(a).
4 99/100::isdensity(i).
5
6 17/20::perfect_gpa(a).
7 1/10::perfect_gpa(i).
8
9 gpa(a) ~ uniform(0,4) :- isdensity(a).
10 gpa(a) ~ delta(4.0) :- not isdensity(a), perfect_gpa(a).
11 gpa(a) ~ delta(0.0) :- not isdensity(a), not perfect_gpa(a).
12
13 gpa(i) ~ uniform(0,10) :- isdensity(i).
14 gpa(i) ~ delta(10.0) :- not isdensity(i), perfect_gpa(i).
15 gpa(i) ~ delta(0.0) :- not isdensity(i), not perfect_gpa(i).
16
17 gpa(student) ~ delta(gpa(a)) :- american.
18 gpa(student) ~ delta(gpa(i)) :- indian.
```
_Note that in order to write the probability distribution of_ gpa(a) _and_ gpa(i) _we used uniform and Dirac delta distributions. This allowed us to distribute the random variables_ gpa(a) _and_ gpa(i) _according to a discrete-continuous mixture distribution. We then observe that a student has a GPA of \(4\) and we would like to know the probability of this student being American or Indian._
\[P\big(\texttt{american=T}\mid(\texttt{gpa(student)}\doteq 4)\texttt{=T}\big)=1\]
\[P\big(\texttt{indian=T}\mid(\texttt{gpa(student)}\doteq 4)\texttt{=T}\big)=0\]

## 6 Inference via Computing Expectations of Labeled Logic Formulas

In the previous sections we have delineated the semantics of DC-ProbLog programs and described the PROB task that defines conditional probability queries on DC-ProbLog programs. The obvious next step is to actually perform the inference. We will follow an approach often found in implementations of PLP languages in the discrete domain: reducing inference in probabilistic programs to performing inference on labeled Boolean formulas that encode relevant parts of the logic program. Contrary to languages in the discrete domain that follow this approach [Fierens et al., 2015; Riguzzi and Swift, 2011], we will face the additional complication of handling random variables with infinite sample spaces. We refer the reader to [Riguzzi, 2018, Section 5] for a broader overview of this approach.

Specifically, we are going to define a reduction from DC-ProbLog inference to the task of computing the expected label of a propositional formula. The formula is a propositional encoding of the relevant part of the logic program (relevant with respect to a query), where atoms become propositional variables, and the labels of the basic facts in the distribution database are derived from the probabilistic part of the program. At a high level, we extend ProbLog's inference algorithm such that Boolean comparison atoms over (potentially correlated) random variables are correctly being kept track of. The major complication, with regard to ProbLog and other systems such as PITA [Riguzzi and Swift, 2011], is the presence of context-dependent random variables, which are denoted by the same ground random term. For instance, the random term size in the program in Example 5.13 denotes two different random variables but is being referred to by one and the same term in the program.

Inference algorithms for PLP languages often consider only a fragment of the language for which the semantics have been defined. A common restriction for inference algorithms is to only consider range-restricted programs.3 Furthermore, we consider, without loss of generality, only AD-free programs, cf. Definition 4.15, as annotated disjunctions or probabilistic facts can be eliminated up front by means of _local_ transformations that solely affect the annotated disjunctions (or probabilistic facts).4

Footnote 4: For non-ground ADs, we adapt Definition 4.14 to include all logical variables as arguments of the new random variable.
As this introduces non-ground distributional facts, which are not range-restricted, we also move the comparison atom to the end of the rule bodies of the AD encoding to ensure those local random variables are ground when reached in backward chaining. The high level steps for converting a DC-ProbLog program to a labeled propositional formula closely follow the corresponding conversion for ProbLog programs provided by Fierens et al. (2015, Section 5), i.e., given a DC-ProbLog program \(\mathcal{P}\), evidence \(\mathcal{E}=e\) and a set of query atoms \(\mathcal{Q}\), the conversion algorithm performs the following steps: 1. Determine the relevant ground program \(\mathcal{P}_{g}\) with respect to the atoms in \(\mathcal{Q}\cup\mathcal{E}\) and obtain the corresponding DF-PLP program. 2. Convert \(\mathcal{P}_{g}\) to an equivalent propositional formula \(\phi_{g}\) and \(\mathcal{E}=e\) to a propositional conjunction \(\phi_{e}\). 3. Define the labeling function for all atoms in \(\phi_{g}\). Step 1 exploits the fact that ground clauses that have no influence on the truth values of query or evidence atoms are irrelevant for inference and can thus be omitted from the ground program. Step 2 performs the conversion from logic program semantics to propositional logic, generating a formula that encodes _all_ models of the relevant ground program as well as a formula that serves to assert the evidence by conjoining both formulas. Step 3 completes the conversion by defining the labeling function. In the following, we discuss the three steps in more detail and prove correctness of our approach (cf. Theorem 6.10). ### The Relevant Ground Program The first step in the conversion of a non-ground DC-ProbLog program to a labeled Boolean formula consists of grounding the program with respect to a query set \(\mathcal{Q}\) and the evidence \(\mathcal{E}=e\). For each ground atom in \(\mathcal{Q}\) and \(\mathcal{E}\) we construct its dependency set. That is, we collect the set of ground atoms and ground rules that occur in any of the proofs of an atom in \(\mathcal{Q}\cup\mathcal{E}\). The union of all dependency sets for all the ground atoms in \(\mathcal{Q}\cup\mathcal{E}\) is the dependency set of the DC-ProbLog with respect to the sets \(\mathcal{Q}\) and \(\mathcal{E}\). This dependency set, consisting of ground rules and ground atoms, is called the relevant ground program (with respect to a set of queries and evidence). **Example 6.1**.: _Consider the non-ground (AD-free) DC-ProbLog program below._ 1. _rv_hot \(\sim\) flip(0.2)._ 2. _hot:- rv_hot=:=1._ rv_cool(1) ~ flip(0.99). 4cool(1):- rv_cool(1)=:=1. 5 6temp(1) ~ normal(27,5):- hot. 7temp(1) ~ normal(20,5):- not hot. 8 9works(N):- cool(N). 10works(N):- temp(N)<25.0. If we ground it with respect to the query works(1) and subsequently apply the rewrite rules from Section 4.2.2 we obtain: 1 rv_hot ~ flip(0.2). 2 hot:- rv_hot=:=1. 3rv_cool(1) ~ flip(0.99). 4cool(1):- rv_cool(1)=:=1. 5 6temp(hot) ~ normal(27,5). 7temp(not_hot) ~ normal(20,5). 8 9works(1):- cool(1). 10works(1):- hot, temp(hot)<25.0, 11works(1):- not hot, temp(not_hot)<25.0. A possible way, as hinted at in Example 6.1 of obtaining a ground DF-PLP program from a non-ground DC-ProbLog program is to first ground out all the logical variables. Subsequently, one can apply Definition 4.14 to eliminate annotated disjunctions and probabilistic facts, Definition 4.14 and Definition 4.25 in order to obtain a DF-PLP program with no distributional clauses. 
A possible drawback of such a two-step approach (grounding logical variables followed by obtaining a DC-ProbLog program) is that it might introduce spurious atoms to the relevant ground program. A more elegant but also more challenging approach is to interleave the grounding of logical variables and distributional clause elimination. We leave this for future research. **Theorem 6.2** (Label Equivalence).: _Let \(\mathcal{P}\) be a DC-ProbLog program and let \(\mathcal{P}_{g}\) be the relevant ground program for \(\mathcal{P}\) with respect to a query \(\mu\) and the evidence \(\mathcal{E}=e\) obtained by first grounding out logical variables and subsequently applying transformation rules from Section 4. The programs \(\mathcal{P}\) and \(\mathcal{P}_{g}\) specify the same probability:_ \[P_{\mathcal{P}}(\mu=\top\mid\mathcal{E}=e)=P_{\mathcal{P}_{g}}(\mu=\top\mid \mathcal{E}=e) \tag{6.1}\] Proof.: See Appendix F.1. ### The Boolean Formula for the Relevant Ground Program Converting a ground logic program, i.e. a set of ground rules, into an equivalent Boolean formula is a purely logical problem and well-studied in the non-probabilistic logic programming literature. We refer the reader to Janhunen (2004) for an account of the transformation to Boolean formula in the non-probabilistic setting and to Mantadelis and Janssens (2010) and Fierens et al. (2015) in the probabilistic setting, including correctness proofs. We will only illustrate the most basic case with an example here. **Example 6.3** (Mapping DC-ProbLog to Boolean Formula).: _Consider the ground program in Example 6.1. To highlight the move from logic programming to propositional logic, we introduce for every atom \(\mathsf{a}\) in the program a corresponding propositional variable \(\phi_{\mathsf{a}}\). As the program does not contain cycles, we can use Clark's completion for the transformation, i.e., a derived atom is true if and only if the disjunction of the bodies of its defining rules is true. The propositional formula \(\phi_{g}\) corresponding to the program is then the conjunction of the following three formulas:_ \[\phi_{\mathsf{works}(1)} \leftrightarrow\left(\phi_{\mathsf{cool}(1)}\vee\phi_{\mathsf{ hot}}\wedge\phi_{\mathsf{temp}(\mathsf{hot})<25.0}\vee\neg\phi_{\mathsf{hot}} \wedge\phi_{\mathsf{temp}(\mathsf{not\_hot})<25.0}\right)\] \[\phi_{\mathsf{cool}(1)} \leftrightarrow\phi_{\mathsf{rv\_cool}(1)=:=1}\] \[\phi_{\mathsf{hot}} \leftrightarrow\phi_{\mathsf{rv\_hot}:=1}\] Note that the formula obtained by converting the relevant ground program still admits _any_ model of that program, including ones that are inconsistent with the evidence. In order to use that formula to compute conditional probabilities, we still need to assert the evidence into the formula by conjoining the corresponding propositional literals. The following theorem then directly applies to our case as well. **Theorem 6.4** (Model Equivalence (Fierens et al., 2015) (Theorem 2, part 1)).: _Let \(\mathcal{P}_{g}\) be the relevant ground program for a DC-ProbLog program \(\mathcal{P}\) with respect to query set \(\mathcal{Q}\) and evidence \(\mathcal{E}=e\). Let \(MOD_{\mathcal{E}=e}(\mathcal{P}_{g})\) be those models in \(MOD(\mathcal{P}_{g})\) that are consistent with the evidence. Let \(\phi_{g}\) denote the propositional formula derived from \(\mathcal{P}_{g}\), and set \(\phi\leftrightarrow\phi_{g}\wedge\phi_{e}\), where \(\phi_{e}\) is the conjunction of literals that corresponds to the observed truth values of the atoms in \(\mathcal{E}\). 
We then have_ **model equivalence**_, i.e.,_ \[MOD_{\mathcal{E}=e}(\mathcal{P}_{g})=ENUM(\phi) \tag{6.2}\] _where \(ENUM(\phi)\) denotes the set of models of \(\phi\)._ ### Obtaining a Labeled Boolean Formula In contrast to a ProbLog program, a DC-ProbLog program does not explicitly provide independent probability labels for the basic facts in the distribution semantics, and we thus need to suitably adapt the last step of the conversion. We will first define the labeling function on propositional atoms and will then show that the probability of the label of a propositional formula is the same as the probability of the relevant ground program under the distribution semantics from Section 3. We call this _label equivalence_ and prove it in Theorem 6.9. **Definition 6.5** (Label of Literal).: _The label \(\alpha(\phi_{\rho})\) of a propositional atom \(\phi_{\rho}\) (or its negation) is given by:_ \[\alpha(\phi_{\rho})=\begin{cases}\llbracket c(vars(\rho))\rrbracket,&\text{if $ \rho$ is a comparison atom}\\ 1,&\text{otherwise}\end{cases} \tag{6.3}\] _and for the negated atom:_ \[\alpha(\neg\phi_{\rho})=\begin{cases}\llbracket\neg c(vars(\rho)) \rrbracket,&\text{if $\rho$ is a comparison atom}\\ 1,&\text{otherwise}\end{cases} \tag{6.4}\] _We use Iverson brackets \(\llbracket\cdot\rrbracket\)[Iverson, 1962] to denote an indicator function. Furthermore, \(vars(\rho)\) denotes the random variables that are present in the arguments of the atom \(\rho\) and \(c(\cdot)\) encodes the constraint given by \(\rho\)._ **Example 6.6** (Labeling function).: _Continuing Example 6.3, we obtain, inter alia, the following labels:_ \[\alpha(\phi_{\text{rv\_hot}:=1})=\llbracket rv\_hot=1\rrbracket\] \[\alpha(\neg\phi_{\text{rv\_hot}:=1})=\llbracket\neg(rv\_hot=1) \rrbracket=\llbracket rv\_hot=0\rrbracket\] \[\alpha(\phi_{\text{hot}})=1\] \[\alpha(\neg\phi_{\text{hot}})=1\] **Definition 6.7** (Label of Boolean Formula).: _Let \(\phi\) be a Boolean formula and \(\alpha(\cdot)\) the labeling function for the variables in \(\phi\) as given by Definition 6.5. We define the label of \(\phi\) as_ \[\alpha(\phi)=\sum_{\varphi\in ENUM(\phi)}\prod_{\ell\in\varphi} \alpha(\ell)\] _i.e. 
as the sum of the labels of all its models, which are in turn defined as the product of the labels of their literals._ **Example 6.8** (Labeled Boolean Formula).: _The label of the conjunction_ \[\neg\phi_{\text{hot}}\wedge\neg\phi_{\text{rv\_hot}:=:1}\wedge \phi_{\text{temp(not\_hot)}<25.0}\wedge\neg\phi_{\text{cool(1)}}\wedge\neg \phi_{\text{rv\_cool(1)}=:=1}\wedge\phi_{\text{works(1)}}\] _which describes one model of the example formula, is computed as follows:_ \[\alpha(\neg\phi_{\text{hot}}\wedge\neg\phi_{\text{rv\_hot}:=:1} \wedge\phi_{\text{temp(not\_hot)}<25.0}\] \[\wedge\neg\phi_{\text{cool(1)}}\wedge\neg\phi_{\text{rv\_cool(1) }=:=1}\wedge\phi_{\text{works(1)}})\] \[=\alpha(\neg\phi_{\text{hot}})\times\alpha(\neg\phi_{\text{rv\_ hot}:=:1})\times\alpha(\phi_{\text{temp(not\_hot)}<25.0})\] \[\times\alpha(\neg\phi_{\text{cool(1)}})\times\alpha(\neg\phi_{ \text{rv\_cool(1)}=:=1}\times\alpha(\phi_{\text{works(1)}}))\] \[=1\times\llbracket rv\_hot=0\rrbracket\times\llbracket temp(not\_hot )<25\rrbracket\times 1\times\llbracket rv\_cool(1)=0\rrbracket\times 1\] \[=\llbracket rv\_hot=0\rrbracket\times\llbracket temp(not\_hot )<25\rrbracket\times\llbracket rv\_cool(1)=0\rrbracket\] **Theorem 6.9** (Label Equivalence).: _Let \(\mathcal{P}_{g}\) be the relevant ground program for a DC-ProbLog program \(\mathcal{P}\) with respect to a query \(\mu\) and the evidence \(\mathcal{E}=e\). Let \(\phi_{g}\) denote the propositional formula derived from \(\mathcal{P}_{g}\) and let \(\alpha\) be the labeling function as defined in Definition 6.5. We then have_ **label equivalence**_, i.e._ \[\forall\varphi\in ENUM(\phi_{g}):\operatorname*{\mathbb{E}}_{ \mathcal{V}\sim\mathcal{P}_{g}}[\alpha(\varphi)]=P_{\mathcal{P}_{g}}(\varphi) \tag{6.5}\] _In other words, for all models \(\varphi\) of \(\phi_{g}\), the expected value (\(\operatorname*{\mathbb{E}}[\cdot]\)) of the label of \(\varphi\) is equal to the probability of \(\varphi\) according to the probability measure of relevant ground program \(\mathcal{P}_{g}\)._ Proof.: See Appendix F.2. Theorem 6.9 states that we can reduce inference in hybrid probabilistic logic programs to computing the expected value of labeled Boolean formulas, as summarized in the following theorem. **Theorem 6.10**.: _Given a DC-ProbLog program \(\mathcal{P}\), a set \(\mathcal{Q}\) of queries, and evidence \(\mathcal{E}=e\), for every \(\mu\in\mathcal{Q}\), we obtain the conditional probability of \(\mu=q\) (\(q\in\{\bot,\top\}\)) given \(\mathcal{E}=e\) as_ \[P(\mu=q\mid\mathcal{E}=e)=\frac{\mathbb{E}_{\mathit{vars}(\phi)\sim\mathcal{P }_{g}}[\alpha(\phi\wedge\phi_{q})]}{\mathbb{E}_{\mathit{vars}(\phi)\sim\mathcal{ P}_{g}}[\alpha(\phi)]}\] _where \(\phi\) is the formula encoding the relevant ground program \(\mathcal{P}_{g}\) with the evidence asserted (cf. Theorem 6.4), and \(\phi_{q}\) the propositional atom for \(\mu\)._ Proof.: This directly follows from model and label equivalence together with the definition of conditional probabilities. We have shown that the probability of a query to a DC-ProbLog program can be expressed as the expected label of a propositional logic formula. ## 7 Computing Expected Labels via Algebraic Model Counting In this section we will adapt the approach taken by Zuidberg Dos Martires et al. (2019), dubbed _Sampo_ to compute the expected value of labeled propositional Boolean formulas. The method approximates intractable integrals that appear when computing expected labels using Monte Carlo estimation. 
The main difference between Sampo and our approach, which we dub _infinitesimal algebraic likelihood weighting_ (IALW), is that IALW can also handle infinitesimally small intervals, which arise when conditioning on zero-probability events.

### Monte Carlo Estimate of Conditional Query

In Definition 5.1 we defined the conditional probability as:

\[P_{\mathcal{P}}(\mu=\top\mid\mathcal{E}=e)=\frac{P_{\mathcal{P}}(\mu=\top, \mathcal{E}=e)}{P_{\mathcal{P}}(\mathcal{E}=e)} \tag{7.1}\]

and we also saw in Definition 5.5 that using infinitesimal intervals allows us to consider zero-probability events as well. Computing the probabilities in the numerator and denominator in the equation above is, in general, computationally hard. We resolve this using a Monte Carlo approximation.

**Proposition 7.1** (Monte Carlo Approximation of a Conditional Query).: _Let the set_

\[\mathcal{S}=\left\{\left(s_{1}^{(1)},\ldots,s_{M}^{(1)}\right),\ldots,\left(s _{1}^{(|\mathcal{S}|)},\ldots,s_{M}^{(|\mathcal{S}|)}\right)\right\} \tag{7.2}\]

_denote \(|\mathcal{S}|\) i.i.d. samples for each random variable in \(\mathcal{P}_{g}\). A conditional probability query to a DC-ProbLog program \(\mathcal{P}\) can be approximated, for finite \(|\mathcal{S}|\), as:_

\[P_{\mathcal{P}}(\mu=q\mid\mathcal{E}=e)\approx\frac{\sum_{i=1}^{|\mathcal{S}|} \sum_{\varphi\in ENUM(\phi\wedge\phi_{q})}\alpha^{(i)}(\varphi)}{\sum_{i=1}^{| \mathcal{S}|}\sum_{\varphi\in ENUM(\phi)}\alpha^{(i)}(\varphi)} \tag{7.3}\]

_The index \((i)\) on \(\alpha^{(i)}(\varphi)\) indicates that the label of \(\varphi\) is evaluated at the \(i\)-th ordered set of samples \(\left(s_{1}^{(i)},\ldots,s_{M}^{(i)}\right)\)._

Proof.: See Appendix F.3.

In the limit \(|\mathcal{S}|\to\infty\) this sampling approximation scheme is perfectly valid. However, in practice, with only limited resources available, such a rejection sampling strategy will perform poorly (in the best case) or even give completely erroneous results. After all, the probability of sampling a value from the prior distribution that falls exactly into an infinitesimally small interval given in the evidence tends to zero. To make the computation of conditional probabilities using Monte Carlo estimates feasible, we introduce _infinitesimal algebraic likelihood weighting_. But first, we need to introduce the concept of infinitesimal numbers.

### Infinitesimal Numbers

Remember that infinitesimal intervals arise when conditioning on zero-probability events and describe an infinitesimally small interval around a specific observed value, e.g. \(\nu\in[w-\nicefrac{{dw}}{{2}},w+\nicefrac{{dw}}{{2}}]\) for a continuous random variable \(\nu\) that was observed to take the value \(w\) (cf. Definition 5.5). We will describe these infinitesimally small intervals using so-called _infinitesimal numbers_, which were first introduced by Nitti et al. (2016) and further formalized by Wu et al. (2018), Zuidberg Dos Martires (2020) and Jacobs (2021); the latter work also coined the term _infinitesimal number_.

**Definition 7.2** (Infinitesimal Numbers).: _An infinitesimal number is a pair \((r,n)\in\mathbb{R}\times\mathbb{Z}\), also written as \(r\epsilon^{n}\), which corresponds to a real number when \(n=0\). We denote the set of all infinitesimal numbers by \(\mathbb{I}\)._

**Definition 7.3** (Operations in \(\mathbb{I}\)).: _Let \((r,n)\) and \((t,m)\) be two numbers in \(\mathbb{I}\).
We define the addition and multiplication as binary operators:_

\[(r,n)\oplus(t,m) \coloneqq\begin{cases}(r+t,n)&\text{if }n=m\\ (r,n)&\text{if }n<m\\ (t,m)&\text{if }n>m\end{cases} \tag{7.4}\]
\[(r,n)\otimes(t,m) \coloneqq(r\times t,n+m) \tag{7.5}\]

_The operations \(+\) and \(\times\) on the right-hand side denote the usual addition and multiplication operations for real and integer numbers._

**Definition 7.4** (Neutral Elements).: _The neutral elements of the addition and multiplication in \(\mathbb{I}\) are, respectively, defined as:_

\[e^{\oplus}\coloneqq(0,0)\qquad\qquad e^{\otimes}\coloneqq(1,0) \tag{7.6}\]

Probabilistic inference and generalizations thereof can often be cast as performing computations using commutative semirings (Kimmig et al., 2017). We will follow a similar strategy.

**Definition 7.5**.: _A_ **commutative semiring** _is an algebraic structure \((\mathcal{A},\oplus,\otimes,e^{\oplus},e^{\otimes})\) equipping a set of elements \(\mathcal{A}\) with addition and multiplication such that_

1. _addition_ \(\oplus\) _and multiplication_ \(\otimes\) _are binary operations_ \(\mathcal{A}\times\mathcal{A}\rightarrow\mathcal{A}\)
2. _addition_ \(\oplus\) _and multiplication_ \(\otimes\) _are associative and commutative binary operations over the set_ \(\mathcal{A}\)
3. \(\otimes\) _distributes over_ \(\oplus\)
4. \(e^{\oplus}\in\mathcal{A}\) _is the neutral element of_ \(\oplus\)
5. \(e^{\otimes}\in\mathcal{A}\) _is the neutral element of_ \(\otimes\)
6. \(e^{\oplus}\in\mathcal{A}\) _is an annihilator for_ \(\otimes\)

**Lemma 7.6**.: _The structure \((\mathbb{I},\oplus,\otimes,e^{\oplus},e^{\otimes})\) is a commutative semiring._

Proof.: This follows trivially from the operations defined in Definition 7.3 and the neutral elements in Definition 7.4.

We will also need to perform subtractions and divisions in \(\mathbb{I}\), for which we first need to define inverse elements.

**Definition 7.7** (Inverse Elements).: _Let \((r,n)\) be a number in \(\mathbb{I}\). We define its inverse with respect to the addition, \(-(r,n)\), also called negation, as:_

\[-(r,n)\coloneqq(-r,n) \tag{7.7}\]

_Moreover, we define its inverse with respect to the multiplication, \((r,n)^{-1}\), also called the reciprocal, as:_

\[(r,n)^{-1}\coloneqq\begin{cases}(r^{-1},-n)&\text{if }r\neq 0\\ \text{undefined}&\text{if }r=0\end{cases} \tag{7.8}\]

**Definition 7.8** (Subtraction and Division in \(\mathbb{I}\)).: _Let \((r,n)\) and \((t,m)\) be two numbers in \(\mathbb{I}\). We define the subtraction and division as:_

\[(r,n)\ominus(t,m) \coloneqq(r,n)\oplus(-(t,m))=(r,n)\oplus(-t,m) \tag{7.9}\]
\[(r,n)\oslash(t,m) \coloneqq\begin{cases}(r,n)\otimes(t,m)^{-1}=(r,n)\otimes(t^{-1},-m)&\text{if }t\neq 0\\ \text{undefined}&\text{if }t=0\end{cases} \tag{7.10}\]

### Infinitesimal Algebraic Likelihood Weighting

The idea behind IALW is that we do not sample random variables that fall within an infinitesimally small interval, encoded as a delta interval (cf. Definition 5.7), but that we force, without sampling, the random variable to lie inside the infinitesimal interval. To this end, assume again that we have \(|\mathcal{S}|\) i.i.d. samples for each random variable. That means that we again have a set of ordered sets of samples:

\[\mathcal{S}=\left\{\left(s_{1}^{(1)},\ldots,s_{M}^{(1)}\right),\ldots,\left(s _{1}^{(|\mathcal{S}|)},\ldots,s_{M}^{(|\mathcal{S}|)}\right)\right\} \tag{7.11}\]

This time the samples are drawn with the infinitesimal delta intervals taken into account.
For example, assume we have a random variable \(\nu_{1}\) distributed according to a normal distribution \(\mathcal{N}(5,2)\) and we have an atom delta_interval(\(\nu_{1}\),4) in the propositional formula \(\phi\). Each sampled value \(s_{1}^{(i)}\) will then equal 4 (for \(1\leq i\leq|\mathcal{S}|\)). Furthermore, when sampling, we sample the parents of a random variable prior to sampling the random variable itself. For instance, take the random variable \(\nu_{2}\sim\mathcal{N}(\nu_{3},2)\), where \(\nu_{3}\) is itself a random variable. We first sample \(\nu_{3}\), and once we have a value for \(\nu_{3}\) we plug it into the distribution for \(\nu_{2}\), which we sample subsequently. In other words, we sample according to the ancestor relationship between the random variables. We call the ordered set of samples \(\mathbf{s}^{(i)}\in\mathcal{S}\) an _ancestral sample_.

**Definition 7.9** (IALW Label).: _Given is an ancestral sample \(\mathbf{s}^{(i)}=(s_{1}^{(i)},\ldots,s_{M}^{(i)})\) for the random variables \(\mathcal{V}=(\nu_{1},\ldots,\nu_{M})\). We furthermore denote the probability distribution of a random variable \(\nu_{k}\) by \(\delta_{k}\), and \(\delta_{k}(\mathbf{s}^{(i)})\) evaluates the distribution for the \(i\)-th ancestral sample. The IALW label of a positive literal \(\ell\) is an infinitesimal number given by:_

\[\alpha_{IALW}^{(i)}(\ell)=\begin{cases}(\delta_{k}(\mathbf{s}^{(i)}),1),&\text{if $\ell$ is a \texttt{delta\_interval} whose first argument}\\ &\text{is a continuous random variable}\\ (\ell(\mathbf{s}^{(i)}),0),&\text{if $\ell$ is any other comparison atom}\\ (1,0),&\text{otherwise}\end{cases}\]

_The expression \(\ell(\mathbf{s}^{(i)})\) denotes the indicator function, which corresponds to the literal \(\ell\), being evaluated using the samples \(\mathbf{s}^{(i)}\), and implies that \(\ell(\mathbf{s}^{(i)})\in\{0,1\}\)._

_For the negated literals we have the following labeling function:_

\[\alpha_{IALW}^{(i)}(\neg\ell)=\begin{cases}(1,0),&\text{if $\ell$ is a \texttt{delta\_interval} whose first argument}\\ &\text{is a continuous random variable}\\ (1-\ell(\mathbf{s}^{(i)}),0),&\text{if $\ell$ is any other comparison atom}\\ (1,0),&\text{otherwise}\end{cases}\]

Intuitively speaking, and in the context of probabilistic inference, the first part of an infinitesimal number accumulates (unnormalized) likelihood weights, while the second part counts the number of times we encounter a delta_interval atom. This counting happens with the \(\oplus\) operation of the infinitesimal numbers (Equation 7.4). The \(\oplus\) operation tells us that for two infinitesimal numbers \((r,n)\) and \((t,m)\) with \(n<m\), the event corresponding to the first of the two infinitesimal numbers is infinitely more probable to happen, and that we drop the likelihood weight of the second infinitesimal number (Equation 7.4). In other words, an event with fewer delta_interval atoms is infinitely more probable than an event with more such intervals.
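The following minimal Python sketch illustrates Definition 7.9 for a single ancestral sample; the tuple-based atom representation, the `ialw_label` function names and the use of `scipy` densities are illustrative assumptions and not part of the actual DC-ProbLog implementation.

```python
from scipy.stats import norm

# An ancestral sample assigning a value to every random variable; the
# delta_interval for nu1 forces its sampled value to the observed 4.
sample = {"nu1": 4.0, "m": 0}

def ialw_label(literal, sample):
    """IALW label of a positive literal as an infinitesimal number (r, n)."""
    kind, *args = literal
    if kind == "delta_interval":          # continuous random variable observed
        var, value, density = args        # density: the pdf of var
        return (density(value), 1)        # picks up one epsilon factor
    if kind == "cmp":                     # any other comparison atom
        test = args[0]
        return (1.0 if test(sample) else 0.0, 0)
    return (1.0, 0)                       # derived (logical) atoms

def ialw_label_neg(literal, sample):
    """IALW label of the negation of a literal."""
    kind, *args = literal
    if kind == "delta_interval":
        return (1.0, 0)                   # complement of a zero-probability event
    if kind == "cmp":
        r, n = ialw_label(literal, sample)
        return (1.0 - r, n)
    return (1.0, 0)

# nu1 ~ normal(5, 2), observed via delta_interval(nu1, 4):
lit = ("delta_interval", "nu1", 4.0, norm(5, 2).pdf)
print(ialw_label(lit, sample))                          # (0.176..., 1)
print(ialw_label_neg(lit, sample))                      # (1.0, 0)
print(ialw_label(("cmp", lambda s: s["m"] == 1), sample))  # (0.0, 0)
```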
**Example 7.10** (IALW Label of delta_interval with Continuous Random Variable).: _Let us consider a random variable \(\mathbf{x}\), which is normally distributed: \(p(\mathbf{x}\mid\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(\mathbf{x}-\mu)^{2}}{2\sigma^{2}}\right)\), where \(\mu\) and \(\sigma>0\) are real-valued parameters that we can choose freely. The atom delta_interval(\(\mathbf{x},3\)) gets the label_

\[\left(\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(3-\mu)^{2}}{2\sigma^{2}} \right),1\right)\]

_The first element of the infinitesimal number is the probability density evaluated at the observation, in this case \(3\). As this is a zero-probability event, the label also picks up a non-zero second element._

_The label of the negated literal \(\neg\)delta_interval(\(\mathbf{x},3\)) is \((1,0)\). The intuition here is that the complement of an event with zero probability of happening will happen with probability \(1\). As the complement event is not a zero-probability event, the second element of the label is \(0\) instead of \(1\)._

**Example 7.11** (IALW Label of delta_interval with Discrete Random Variable).: _Let us consider a discrete random variable \(\mathbf{k}\), which is Poisson distributed:_

\[p(\mathbf{k}\mid\lambda)=\frac{\lambda^{\mathbf{k}}e^{-\lambda}}{\mathbf{k}!}\]

_where \(\lambda>0\) is a real-valued parameter that we can freely choose._

_As a delta_interval with a discrete random variable is equivalent to an \(=:=\) comparison (cf. Definition 5.7), the label of the atom delta_interval(\(\mathbf{k},3\)) is \((\llbracket s_{\mathbf{k}}^{(i)}=3\rrbracket,0)\), where \(s_{\mathbf{k}}^{(i)}\) is the \(i\)-th sample for \(\mathbf{k}\)._

**Definition 7.12** (Infinitesimal Algebraic Likelihood Weighting).: _Let \(\mathcal{S}\) be a set of ancestral samples and let \(DI(\varphi)\) denote the subset of literals in \(\varphi\) that are delta intervals. We then define IALW as expressing the expected value of the label of a propositional formula (given a set of ancestral samples) in terms of a fraction of two infinitesimal numbers:_

\[\left(\mathbb{E}\Bigg[\sum_{\varphi\in ENUM(\phi)}\prod_{\ell\in\varphi}\alpha (\ell)\,\Bigg|\,\mathcal{S}\Bigg],0\right)\approx\frac{\bigoplus_{i=1}^{| \mathcal{S}|}\bigoplus_{\varphi\in ENUM(\phi)}\bigotimes_{\ell\in\varphi} \alpha_{IALW}^{(i)}(\ell)}{\bigoplus_{i=1}^{|\mathcal{S}|}\bigoplus_{\varphi \in ENUM(\phi)}\bigotimes_{\ell\in DI(\varphi)}\alpha_{IALW}^{(i)}(\ell)} \tag{7.12}\]
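The following self-contained Python sketch mirrors the estimator of Definition 7.12; it is not the actual DC-ProbLog implementation. Infinitesimal numbers are represented as (real, exponent) pairs with the operations of Definitions 7.3 and 7.8, and it is assumed that the models of \(\phi\) are given explicitly as lists of already-labeled literals (e.g. labeled with a function like the one sketched above).

```python
from functools import reduce

def i_add(a, b):                       # ⊕ from Definition 7.3
    (r, n), (t, m) = a, b
    if n == m: return (r + t, n)
    return a if n < m else b

def i_mul(a, b):                       # ⊗ from Definition 7.3
    (r, n), (t, m) = a, b
    return (r * t, n + m)

def i_div(a, b):                       # ⊘ from Definition 7.8
    (r, n), (t, m) = a, b
    assert t != 0, "reciprocal undefined for t = 0"
    return (r / t, n - m)

MUL_ID = (1.0, 0)                      # e^⊗ from Definition 7.4

def ialw_estimate(models_per_sample):
    """Estimator of Definition 7.12: models_per_sample[i] lists the models of
    phi under the i-th ancestral sample; each model is a list of
    (label, is_delta_interval) pairs with precomputed IALW labels."""
    num_terms, den_terms = [], []
    for models in models_per_sample:                       # ⊕ over samples
        for model in models:                               # ⊕ over models of phi
            num_terms.append(reduce(i_mul, [l for l, _ in model], MUL_ID))
            den_terms.append(reduce(i_mul, [l for l, di in model if di], MUL_ID))
    return i_div(reduce(i_add, num_terms), reduce(i_add, den_terms))

# One sample with a single model: one delta interval of density 1.728 and one
# satisfied ordinary comparison; the estimate is (1.728/1.728, 1-1) = (1.0, 0).
print(ialw_estimate([[[((1.728, 1), True), ((1.0, 0), False)]]]))
```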
The semiring formulation will allow us to seamlessly combine IALW with knowledge compilation (Darwiche and Marquis, 2002), a technique underlying state-of-the art probabilistic inference algorithms in the discrete setting. We examine this next. Having proven the consistency of IALW, we can now express the probability of a conditional query to a DC-ProbLog program in terms of semiring operations for infinitesimal numbers \(\mathbb{I}\). **Proposition 7.14**.: _A conditional probability query to a DC-ProbLog program \(\mathcal{P}\) can be approximated as:_ \[P_{\mathcal{P}}(\mu=q|\mathcal{E}=e)\approx\frac{\bigoplus_{i=1}^{\left| \mathcal{S}\right|}\bigoplus_{\varphi\in ENUM(\phi)\wedge\mathcal{A}_{\phi} )}\bigotimes_{\ell\in\varphi}\alpha_{IALW}^{(i)}\left(\ell\right)}{\bigoplus_{ i=1}^{\left|\mathcal{S}\right|}\bigoplus_{\varphi\in ENUM(\phi)}\bigotimes_{\ell\in \varphi}\alpha_{IALW}^{(i)}\left(\ell\right)} \tag{7.13}\] Proof.: See Appendix F.5. ### Infinitesimal Algebraic Likelihood Weighting via Knowledge Compilation Inspecting Equation 7.13 we see that we have to evaluate expressions of the following form in order to compute the probability of a conditional query to a DC-ProbLog program. \[\bigoplus_{i=1}^{\left|\mathcal{S}\right|}\underbrace{\bigoplus_{\omega\in ENUM (\varphi)}\bigotimes_{\ell\in\varphi}\alpha_{IALW}^{(i)}\left(\ell\right)}_{ \text{=algebraic model count}} \tag{7.14}\] In other words, we need to compute \(\left|\mathcal{S}\right|\) times a sum over products - each time with a different ancestral sample. Such a sum over products is also called the algebraic model count of a formula \(\phi\)(Kimmig et al., 2017). Subsequently, we then add up the \(\left|\mathcal{S}\right|\) results from the different algebraic model counts giving us the final answer. Unfortunately, computing the algebraic model count is in general a computationally hard problem (Kimmig et al., 2017) - #P-hard to be precise (Valiant, 1979). A popular technique to mitigate this hardness is to use a technique called knowledge compilation (Darwiche and Marquis, 2002), which splits up the computation into a hard step and a subsequent easy step. The idea is to take the propositional Boolean formula underlying an algebraic model counting problem (cf. \(\phi\) in Equation 7.14) and compile it into a logically equivalent formula that allows for the tractable computation of algebraic model counts. The compilation constitutes the computationally hard part (#P-hard). Afterwards, the algebraic model count is performed on the compiled structure, also called _algebraic circuit_(Zuidberg Dos Martires et al., 2019). Intuitively speaking, knowledge compilation takes the sum of products and maps it to recursively nested sums and products. Effectively, finding a dynamic programming scheme (Bellman, 1957) to compute the initial sum of products. Different circuit classes have been identified as valid knowledge compilation targets (Darwiche and Marquis, 2002) - all satisfying different properties. Computing the algebraic model count on an algebraic circuit belonging to a specific target class is only correct if the properties of the circuit class match the properties of the deployed semiring. The following three lemmas will help us determining which class of circuits we need to knowledge-compile our propositional formula \(\phi\) into. **Lemma 7.15**.: _The operator \(\oplus\) (c. Definition 7.3) is not idempotent. 
That is, it does not hold for every \(a\in\mathbb{I}\) that \(a\oplus a=a\)._

**Lemma 7.16**.: _The pair \((\oplus,\alpha_{IALW})\) is not neutral. That is, it does not hold that \(\alpha_{IALW}(\ell)\oplus\alpha_{IALW}(\neg\ell)=e^{\otimes}\) for arbitrary \(\ell\)._

**Lemma 7.17**.: _The pair \((\otimes,\alpha_{IALW})\) is not consistency-preserving. That is, it does not hold that \(\alpha_{IALW}(\ell)\otimes\alpha_{IALW}(\neg\ell)=e^{\oplus}\) for arbitrary \(\ell\)._

From (Kimmig et al., 2017, Theorem 2 and Theorem 7) and the three lemmas above, we can conclude that we need to compile our propositional logic formulas into so-called smooth, deterministic and decomposable negation normal form (sd-DNNF) formulas (Darwiche, 2001).5

Footnote 5: Note that we only require smoothness over derived atoms (the otherwise case in Definition 7.12), as for the other cases the neutral sum property holds. Certain encodings of logic programs eliminate derived atoms. For such encodings the smoothness property can be dropped (Vlasselaer et al., 2014). A more detailed discussion on the smoothness requirement of circuits in a PLP context can be found in (Fierens et al., 2015, Appendix C).

**Proposition 7.18** (IALW on sd-DNNF).: _Given the propositional formulas \(\phi\) and \(\phi_{q}\) and a set \(\mathcal{S}\) of ancestral samples, we can use Algorithm 7.19 to compute the conditional probability \(P_{\mathcal{P}}(\mu=q\mid\mathcal{E}=e)\)._

Proof.: See Appendix F.6.

Algorithm 7.19 takes as input two propositional logic formulas \(\phi\) and \(\phi_{q}\), and a set of ancestral samples. It then knowledge-compiles the formulas \(\phi\wedge\phi_{q}\) and \(\phi\) into circuits \(\Gamma_{q}\) and \(\Gamma\). These circuits are then evaluated using Algorithm 7.20. The variables \(\mathit{ialw}_{q}\) and \(\mathit{ialw}\) hold infinitesimal numbers. The returned result is the ratio of these two numbers, which corresponds to the ratio in Equation 7.13.

Algorithm 7.19
```
function ProbIALW(\(\phi\), \(\phi_{q}\), \(\mathcal{S}\))
    \(\Gamma_{q}\leftarrow\mathsf{KC}(\phi\wedge\phi_{q})\)
    \(\Gamma\leftarrow\mathsf{KC}(\phi)\)
    \(\mathit{ialw}_{q}\leftarrow\mathsf{IALW}(\Gamma_{q},\mathcal{S})\)   // cf. Algorithm 7.20
    \(\mathit{ialw}\leftarrow\mathsf{IALW}(\Gamma,\mathcal{S})\)           // cf. Algorithm 7.20
    return \(\mathit{ialw}_{q}\oslash\mathit{ialw}\)
```

**Algorithm 7.20** Computing the IALW
```
function IALW(\(\Gamma\), \(\mathcal{S}\))
    \(\mathit{ialw}\leftarrow e^{\oplus}\)
    for \(i\leftarrow 1\) to \(|\mathcal{S}|\) do
        \(\mathit{ialw}\leftarrow\mathit{ialw}\oplus\mathsf{Eval}(\Gamma,\mathbf{s}^{(i)})\)   // cf. Algorithm 7.21
    return \(\mathit{ialw}\)
```

Algorithm 7.20 computes the IALW given as input a circuit \(\Gamma\) and a set of ancestral samples. The loop evaluates the circuit (using Algorithm 7.21) for each ancestral sample \(\mathbf{s}^{(i)}\) and accumulates the result, which is then returned once the loop terminates. The accumulation inside the loop corresponds to the \(\bigoplus_{i=1}^{|\mathcal{S}|}\) summation in Equation 7.14. Algorithm 7.21 evaluates a circuit \(\Gamma\) for a single ancestral sample \(\mathbf{s}^{(i)}\) and is a variation of the circuit evaluation algorithm presented by Kimmig et al. (2017).
```
function Eval(\(\Gamma\), \(\mathbf{s}^{(i)}\))
    if \(\Gamma\) is a literal node \(\ell\) then
        return \(\alpha^{(i)}(\ell)\)
    else if \(\Gamma\) is a disjunction \(\bigvee_{j=1}^{m}\Gamma_{j}\) then
        return \(\bigoplus_{j=1}^{m}\mathsf{Eval}(\Gamma_{j},\mathbf{s}^{(i)})\)
    else   // \(\Gamma\) is a conjunction \(\bigwedge_{j=1}^{m}\Gamma_{j}\)
        return \(\bigotimes_{j=1}^{m}\mathsf{Eval}(\Gamma_{j},\mathbf{s}^{(i)})\)
```
**Algorithm 7.21** Evaluating an sd-DNNF circuit \(\Gamma\) for labeling function \(\alpha^{(i)}\)

**Example 7.22** (IALW on Algebraic Circuit).: _Consider a version of the program in Example 5.13 where the annotated disjunction has been eliminated and replaced with a binary random variable m and a flip distribution._

```
m ~ flip(0.3).
size ~ beta(2,3) :- m=:=0.
size ~ beta(4,2) :- m=:=1.
```

_We query the program for the conditional probability \(P((\texttt{m=:=1})=\top\mid\texttt{size}\doteq 4/10)\). Following the program transformations introduced in Section 6 and then compiling the labeled propositional formula, we obtain a circuit representation of the queried program. Evaluating this circuit yields the probability of the query. To be precise, we actually obtain two circuits: one representing the probability of the relevant program with the evidence enforced, and one where additionally the value of the query atom is set. In Figure 7.4 we show the circuit where only the evidence is enforced._

_The probability of the query (given the evidence) can now be obtained by recursively evaluating the internal nodes of the algebraic circuit using Algorithm 7.21. We perform the evaluation of the circuit in Figure 7.4 for a single iteration of the loop in Algorithm 7.20, and we assume that we have sampled the value \(m=0\) from the flip(0.3) distribution._

Figure 7.4: At the bottom of the circuit we see the distributions feeding in. The flip distribution feeds into its two possible (non-zero probability) outcomes. The two beta distributions feed into an observation statement each. We use the ‘\(\doteq\)’ symbol to denote such an observation. Note how we identify each of the two random variables for the size by a unique identifier in their respective subscripts. The circled numbers next to the internal nodes, i.e. the sum and product nodes, allow us to reference the nodes later on and do not form part of the algebraic circuit.
_Under the sampled value \(m=0\), the leaves of the circuit receive the labels \(\alpha_{IALW}(size_{0}(1)\doteq 0.4)=(1.728,1)\), \(\alpha_{IALW}(size_{1}(1)\doteq 0.4)=(0.768,1)\), \(\alpha_{IALW}(\texttt{m=:=0})=(1,0)\) and \(\alpha_{IALW}(\texttt{m=:=1})=(0,0)\). Algorithm 7.21 then propagates these infinitesimal numbers bottom-up through the sum and product nodes of the circuit, combining them with \(\oplus\), \(\otimes\) and \(\ominus\) as defined in Definitions 7.3 and 7.8; for this particular sample the root of the circuit evaluates to \((0,1)\oplus(1.728,1)=(1.728,1)\), the algebraic model count of the evidence circuit._
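The Python sketch below mimics Algorithm 7.21 on a simplified stand-in for the circuit of Figure 7.4: an OR over the two AND branches of the program, using the leaf labels obtained for the sample \(m=0\). The circuit structure and the helper names are illustrative assumptions and do not reproduce the exact sd-DNNF of Figure 7.4; they only demonstrate the bottom-up semiring evaluation.

```python
def i_add(a, b):                      # ⊕ (Definition 7.3)
    (r, n), (t, m) = a, b
    if n == m: return (r + t, n)
    return a if n < m else b

def i_mul(a, b):                      # ⊗ (Definition 7.3)
    (r, n), (t, m) = a, b
    return (r * t, n + m)

def eval_circuit(node, labels):
    """Recursive AND/OR circuit evaluation over infinitesimal labels,
    mirroring Algorithm 7.21. Leaves are literal names, internal nodes
    are ('or', children) or ('and', children) tuples."""
    if isinstance(node, str):                     # literal node
        return labels[node]
    op, children = node
    values = [eval_circuit(c, labels) for c in children]
    combine = i_add if op == "or" else i_mul
    result = values[0]
    for v in values[1:]:
        result = combine(result, v)
    return result

# Leaf labels for the ancestral sample m = 0 (cf. the evaluation above):
labels = {
    "m=:=0": (1.0, 0), "m=:=1": (0.0, 0),
    "size0=0.4": (1.728, 1),    # beta(2,3) density at 0.4
    "size1=0.4": (0.768, 1),    # beta(4,2) density at 0.4
}
evidence_circuit = ("or", [("and", ["m=:=0", "size0=0.4"]),
                           ("and", ["m=:=1", "size1=0.4"])])
print(eval_circuit(evidence_circuit, labels))     # (1.728, 1)
```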
### Symbolic IALW

Purely discrete PLP languages only support binary random variables (and, by extension, discrete random variables with finite support), while DC-ProbLog interleaves discrete and continuous random variables. In a sense, the expectation gets pushed from the root of the algebraic circuit representing a probability to its leaves. This is, however, only possible if the circuit respects specific properties, namely the ones respected by d-DNNF formulas (cf. Section 7.4), which we use as our representation language for the probability.

**Definition 7.24** (Symbolic IALW Label of a Literal).: _Given is an ancestral sample \(\mathbf{s}^{(i)}=(s_{1}^{(i)},\ldots,s_{M}^{(i)})\) for the random variables \(\mathcal{V}=(\nu_{1},\ldots,\nu_{M})\).
The Symbolic IALW (SIALW) label of a positive literal \(\ell\) is an infinitesimal number given by:_

\[\alpha_{SIALW}^{(i)}(\ell)=\begin{cases}(p_{\ell},0),&\text{if $\ell$ encodes a probabilistic fact}\\ \alpha_{IALW}^{(i)}(\ell),&\text{otherwise}\end{cases}\]

_For the negated literals we have the following labeling function:_

\[\alpha_{SIALW}^{(i)}(\neg\ell)=\begin{cases}(1-p_{\ell},0),&\text{if $\ell$ encodes a probabilistic fact}\\ \alpha_{IALW}^{(i)}(\neg\ell),&\text{otherwise}\end{cases}\]

_The number \(p_{\ell}\) is the label of the probabilistic fact in the DC-ProbLog program._

In the definition above we replace the label of a comparison that corresponds to a probabilistic fact with the probability of that fact being satisfied. This has already been shown to be beneficial when performing inference, both in terms of inference time and accuracy of Monte Carlo estimates (Zuidberg Dos Martires et al., 2019). Following the work of Kolb et al. (2019), one could also develop more sophisticated methods to detect which comparisons in the leaves can be replaced with their expectations. We leave this for future work.

**Example 7.25** (Symbolic IALW on Algebraic Circuit).: _Symbolic inference for the random variable m from the circuit in Example 7.22 results in annotating the leaf nodes for the different outcomes of the random variable m with the probabilities of the respective outcomes. This can be seen in the red dashed box in the bottom right of Figure 7.5._

_Evaluating the marginalized circuit now immediately returns the unnormalized algebraic model count for the evidence, without the need to draw samples and consequently without the need to sum over the samples:_

\[\begin{split}\text{Eval}(1)&=e^{\otimes}\ominus\alpha_{SIALW}(size_{0}(1)\doteq 0.4)=(1,0)\ominus(1.728,1)=(1,0)\\ \text{Eval}(2)&=\alpha_{SIALW}(size_{1}(1)\doteq 0.4)\otimes\alpha_{SIALW}(0.3)=(0.768,1)\otimes(0.3,0)=(0.2304,1)\\ \text{Eval}(3)&=\text{Eval}(1)\otimes\text{Eval}(2)=(1,0)\otimes(0.2304,1)=(0.2304,1)\end{split}\]

_The branch for the other outcome of m, which combines the density \((1.728,1)\) with the leaf annotated with probability \(0.7\), evaluates to \((1.728,1)\otimes(0.7,0)=(1.2096,1)\), and the root sum node combines the two branches:_

\[(0.2304,1)\oplus(1.2096,1)=(1.440,1)\]

Figure 7.5: Circuit representation of the SIALW algorithm for the probability \(P(\texttt{size}\doteq 4/10)\).

## 8 DC-ProbLog and the Probabilistic Programming Landscape

In recent years a plethora of different probabilistic programming languages have been developed. We discuss these by pointing out key features present in DC-ProbLog (listed below) that are missing in specific related works. We organize these features along the three key contributions stated in Section 1. Our first key contribution is the introduction of the hybrid distribution semantics with the following features:

1. random variables with (possibly) infinite sample spaces
2. functional dependencies between random variables
3. uniform treatment of discrete and continuous random variables
4. negation

Our second contribution is the introduction of the DC-ProbLog language, which

1. has purely discrete PLPs and their semantics as a special case,
2. supports a rich set of comparison predicates, and
3. is a Turing complete language (DC-PLP)

Our last contributions concern inference, and include

1. a formal definition of the hybrid probabilistic inference task,
2. an inference algorithm called IALW, and
3. the use of standard knowledge compilation in the hybrid domain.
### ProbLog and Distributional Clauses The DC-ProbLog language is a generalization of ProbLog, both in terms of syntax and semantics. A DC-ProbLog program that does not use distributional clauses (or distributional facts) is also a ProbLog program, and both views define the same distribution over the logical vocabulary of the program. DC-ProbLog properly generalizes ProbLog to include random variables with infinite sample spaces (C1.1). On a syntactical level, DC-ProbLog is closely related to the Distributional Clauses (DC) language, with which it shares the \(\sim\)/2 predicate used in infix notation. In Appendix E we discuss in more detail the relationship between DC-ProbLog and the Distributional Clauses language. Concretely, we point out that DC-ProbLog generalizes the original and negation-free version of DC (Gutmann et al., 2011) (C1.4). However, DC-ProbLog differs in its declarative interpretation of negation from the procedural interpretation as introduced to DC by Nitti et al. (2016). As a consequence, the semantics of DC and ProbLog differ in the absence of continuous random variables, while DC-ProbLog is a strict generalization of ProbLog (C2.1). ### Extended PRISM An early attempt of equipping a probabilistic logic programming language with continuous random variables can be found in (Islam et al., 2012), which was dubbed _Extended PRISM_. Similar to DC-ProbLog, Extended PRISM's semantics are based again on Sato's distribution semantics. However, Extended PRISM assumes, just like Distributional Clauses, pairwise mutually exclusive proofs (we refer again to Appendix E for details on this). On the expressivity side, Extended PRISM only supports linear equalities - in contrast to DC-ProbLog, where also inequalities are included in the semantics of the language (C2.2). An advantage of restricting possible constraints to equalities is the possibility of performing exact symbolic inference. In this regard, Extended PRISM, together with its symbolic inference algorithm, can be viewed as a logic programming language that has access to a computer algebra system. Swapping out the approximate Sampo-inspired inference algorithm in DC-ProbLog by an exact inference algorithm using symbolic expression manipulations would result in an inference approach closely related to that of Extended PRISM. One possibility would be to use the Symbo algorithm presented in (Zuidberg Dos Martires et al., 2019), which uses the PSI-language (Gehr et al., 2016) as its (probabilistic) computer algebra system. ### Probabilistic Constraint Logic Programming Impressive work on extending probabilistic logic programs with continuous random variables was presented by Michels et al. (2015) with the introduction of Probabilistic Constraint Logic Programming (PCLP). The semantics of PCLP are again based on Sato's distribution semantics and the authors also presented an approximate inference algorithm for hybrid probabilistic logic programs. Interestingly, the algorithm presented in (Michels et al., 2015) to perform (conditional) probabilistic inference extends weighted model counting to continuous random variables using imprecise probabilities, and more specifically credal sets. A shortcoming of PCLP's semantics is the lack of direct support for generative definitions of random variables, i.e., random variables can only be interpreted within constraints, but not within distributions of other random variables as is possible in DC-ProbLog (C1.2). Azzolini et al. 
(2021) define a non-credal version of this semantics using a product measure over a space that explicitly separates discrete and continuous random variables, assuming that a measure over the latter is given as part of the input without further discussion of how this part of the measure is specified in a program. Furthermore, they do not define any inference tasks (C3.1), e.g. computing conditional probabilities (cf. Section 5), nor do they provide an inference algorithm (C3.2). A later proposal for the syntax of such programs (Azzolini and Riguzzi, 2021) combines two classes of terms (logical and continuous ones) with typed predicates and functors, and defines mixture variables as well as arithmetic expressions over random variables through logical clauses. In other words, user-defined predicates define families of random variables through the use of typed arguments of the predicate identifying a specific random variable, arguments providing parameters for the distribution, and one argument representing the random variable itself. In contrast, the syntax of DC-ProbLog clearly identifies all random variables through explicit terms introduced through distributional facts or distributional clauses, explicitly exposes the probabilistic dependency structure by using random variable terms inside distribution terms, and avoids typing through argument positions. Moreover, DC-ProbLog takes a uniform view on all random variables in terms of semantics, thereby avoiding treating discrete and continuous random variables separately (C1.3). ### Blog Notable in the domain of probabilistic logic programming is also the BLOG language (Milch et al., 2005; Wu et al., 2018). Contrary to the aforementioned probabilistic logic programming languages, BLOG's semantics are not specified using Sato's distribution semantics but via so-called _measure-theoretic Bayesian networks_ (MTBN), which were introduced in (Wu et al., 2018). MTBNs can be regarded as the assembly language for BLOG: every BLOG program is translated or compiled to an MTBN. With DC-ProbLog we follow a similar pattern: every DC-ProbLog program with syntactic sugar (e.g. annotated disjunctions) is transformed into DF-PLP program. The semantics are defined on the bare-bones program. Note that the assembly language for DC-ProbLog (DF-PLP) is Turing complete. This is not the case for MTBNs (C2.3). ### Non-logical Probabilistic Programming As first pointed out by Russell (2015) and later on elaborated upon by Kimmig and De Raedt (2017), probabilistic programs fall either into the _possible worlds semantics_ category or the _probabilistic execution traces semantics_ category. The former is usually found in logic based languages, while the latter is the prevailing view in imperative and functional probabilistic languages. While, the probabilistic programming languages discussed so far follow the possible worlds paradigm, many languages follow the execution traces paradigm, either as a probabilistic functional language (Goodman et al., 2008; Wood et al., 2014) or as a imperative probabilistic language (Gehr et al., 2016; Salvatier et al., 2016; Carpenter et al., 2017; Bingham et al., 2019; Ge et al., 2018). Generally speaking, functional and imperative probabilistic programming languages target first and foremost continuous random variables, and discrete random variables are only added as an afterthought. A notable exception is the functional probabilistic programming language Dice (Holtzen et al., 2020), which targets discrete random variables exclusively. 
Concerning inference in probabilistic programming, we can observe general trends in logical and non-logical probabilistic languages. While the latter are interested in adapting and speeding up approximate inference algorithms, such as Markov chain Monte Carlo sampling schemes or variational inference, the former type of languages is more invested in exploiting independences in the probabilistic programs, mainly by means of knowledge compilation. Clearly, these trends are not strict. For instance, Obermeyer et al. (2019) proposed so-called _funsors_ to express and exploit independences in Pyro (Bingham et al., 2019), an imperative probabilistic programming language, and Gehr et al. (2016) developed a computer algebra system to perform exact symbolic probabilistic inference.

### Representation of Probabilistic Programs at Inference Time

Lastly, we would like to point out a key feature of the IALW inference algorithm that sets it apart from any other inference scheme for probabilistic programming in the hybrid domain. But first, let us briefly talk about computing probabilities in probabilistic programming. Roughly speaking, probabilities are computed by summing and multiplying weights. These can for example be represented as floating point numbers or symbolic expressions. The collection of all operations that were performed to obtain the probability of a query to a program is called the computation graph. Now, the big difference between IALW and other inference algorithms lies in the structure of the computation graph. IALW represents the computation graph as a directed acyclic graph (DAG), while all other languages, except some purely discrete languages (Fierens et al., 2015; Holtzen et al., 2020), use a tree representation. IALW is the first inference algorithm in the discrete-continuous domain that uses DAGs (C3.3)! In cases where the computation graph can be represented as a DAG, the size of the representation might be exponentially smaller compared to tree representations, which leads to faster inference times. Note that Gutmann et al. (2010) and more recently Saad et al. (2021) presented implementations of hybrid languages where the inference algorithm leverages directed acyclic graphs as well. However, the constraints that may be imposed on random variables are limited to univariate equalities and inequalities. In the weighted model integration literature it was shown that such probability computations can be mapped to probability computations over discrete random variables only (Zeng and Van den Broeck, 2020).

## 9 Conclusions

We introduced DC-ProbLog, a hybrid PLP language for the discrete-continuous domain, and its accompanying hybrid distribution semantics. DC-ProbLog strictly extends the discrete ProbLog language (De Raedt et al., 2007; Fierens et al., 2015) and the negation-free Distributional Clauses (Gutmann et al., 2011) language. In designing the language and its semantics we adapted Poole (2010)'s design principle of percolating probabilistic logic programs into two separate layers: the random variables and the logic program. Boolean comparison atoms then form the link between the two layers. It is this clear separation between the random variables and the logic program that has allowed us to use simpler language constructs and to write programs using a more concise and intuitive syntax than alternative hybrid PLP approaches (Gutmann et al., 2010; Nitti et al., 2016; Speichert and Belle, 2019; Azzolini et al., 2021).
Separating random variables from the logic program also allowed us to develop the IALW algorithm to perform inference in the hybrid domain. IALW is the first algorithm based on knowledge compilation and algebraic model counting for hybrid probabilistic programming languages, and as such it generalizes the standard knowledge-compilation-based approach for PLP. It is noteworthy that IALW correctly computes conditional probabilities in the discrete-continuous domain using the newly introduced infinitesimal number semiring. Interesting future research directions include adapting ideas from functional probabilistic programming (the other declarative programming style besides logic programming) in the context of probabilistic logic programming, for instance, extending DC-ProbLog with a type system (Schrijvers et al., 2008) or investigating more recent advances, such as _quasi-Borel spaces_ (Heunen et al., 2017), in the context of the distribution semantics.

## Acknowledgement

This research received funding from the Wallenberg AI, Autonomous Systems and Software Program (WASP) of the Knut and Alice Wallenberg Foundation, the Flemish Government (AI Research Program), the KU Leuven Research Fund, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No [694980] SYNTH: Synthesising Inductive Data Models), and the Research Foundation - Flanders.
2307.03968
Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation
In this paper, we propose a new multi-level power series solution method for solving the H-Matrix arising from large surface and volume electric field integral equations. The proposed solution method converges in a fixed number of iterations and is solved at each level of the H-Matrix computation. The solution method avoids the computation of a full matrix, as it can be solved independently at each level, starting from the leaf level. The solution at each level can be used as the final solution, thus saving the matrix computation time for the full H-Matrix. The paper shows that the leaf-level matrix computation and solution with the power series gives results as accurate as the full H-Matrix iterative solver method. The method results in considerable time and memory savings compared to the H-Matrix iterative solver. Further, the proposed method retains the O(N log N) solution complexity.
Y. K. Negi, N. Balakrishnan, S. M. Rao
2023-07-08T12:54:50Z
http://arxiv.org/abs/2307.03968v1
# Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation

###### Abstract

In this paper, we propose a new multi-level power series solution method for solving the H-Matrix arising from large surface and volume electric field integral equations. The proposed solution method converges in a fixed number of iterations and is solved at each level of the H-Matrix computation. The solution method avoids the computation of a full matrix, as it can be solved independently at each level, starting from the leaf level. The solution at each level can be used as the final solution, thus saving the matrix computation time for the full H-Matrix. The paper shows that the leaf-level matrix computation and solution with the power series gives results as accurate as the full H-Matrix iterative solver method. The method results in considerable time and memory savings compared to the H-Matrix iterative solver. Further, the proposed method retains the \(O(N\log N)\) solution complexity.

Method of Moments (MoM), H-Matrix, surface electric field integral equation, volume electric field integral equation.

## 1 Introduction

With the use of ever-increasing frequencies for various defence and civilian applications, the electrical size of electromagnetic scattering/radiation problems has grown drastically [1, 2]. Solving electrically large problems numerically to obtain fast and accurate results is the biggest challenge in the Computational Electromagnetics (CEM) community. Also, with the increase in computing power and memory, the need for large-scale solution algorithms has grown even more. Out of the various numerical methods in CEM, the most popular methods are: a) the Finite Difference Time Domain (FDTD) [3] method in the time domain and b) the Method of Moments (MoM) [4] and Finite Element Method (FEM) [5] in the frequency domain. Traditionally, the frequency domain methods have been more popular than the time domain methods, as most of the early experimental results were available in the frequency domain and validating the computational results was convenient and easy. Out of the various frequency domain methods, MoM-based methods are highly accurate and flexible for modeling irregular structures. The MoM matrix can be computed with the Surface Electric Field Integral Equation (S-EFIE) for solving Perfect Electrical Conductor (PEC) problems with a surface mesh, and the Volume Electric Field Integral Equation (V-EFIE) [6] for solving inhomogeneous dielectric problems with a volume mesh. Further, MoM leads to a smaller number of unknowns compared to FEM and is free from grid dispersion error. However, the MoM matrix is a full matrix, in contrast to the sparse matrix of the FEM method. Hence, the solution of large-size problems with MoM in electromagnetics requires high matrix memory and computation time due to the dense matrix. Note that the MoM dense matrix computation, matrix-vector product, and storage cost scale as \(O(N^{2})\) for \(N\) unknowns. Solving the dense matrix with an iterative solver leads to \(N_{itr}O(N^{2})\) calculations for \(N_{itr}\) iterations, with \(O(N^{2})\) matrix-vector multiplication cost. With a direct solver, the complexity grows as \(O(N^{3})\). Various fast solver algorithms like the Multi-Level Fast Multipole Algorithm (MLFMA) [6], the Adaptive Integral Method (AIM) [7], FFT [8], IE-QR [9], and the Hierarchical Matrix (H-Matrix) [10, 11, 12] have been proposed to overcome the MoM limitations of high memory and computation cost.
Fast solvers reduce the matrix memory, matrix fill time, and matrix-vector product time to \(O(N\log N)\). The reduced matrix-vector product time improves the solution time to \(N_{itr}O(N\log N)\) for \(N_{itr}\) iterations with various iterative solution methods like the Bi-Conjugate Gradient (BiCG) or Generalized Minimum Residual (GMRES) methods. Fast solvers are built on the compressibility property of the far-field interaction matrices. The compression of the far-field matrices can be done using analytical matrix compression methods like MLFMA or AIM, and also with numerical matrix compression methods like the H-Matrix. Compared to analytical compression methods, numerical compression methods are easy to implement and are kernel independent. All fast solvers depend on the iteration count of the iterative solution methods. The convergence of the iterations depends on the condition number of the computed MoM matrix, and for a large number of unknowns, the convergence iteration count also increases. The high iteration count can be mitigated by using various preconditioners like ILUT, Null-Field, and Schur's complement based preconditioners [13, 14, 15]. The matrix preconditioner improves the condition number of the matrices and reduces the iteration count of the overall matrix solution. Despite the improvement in solution time, the use of preconditioners comes with the overhead of preconditioner computation time and extra preconditioner solution time for each iteration. Also, when solving for a large number of unknowns, the iteration count may still be high. Recently there has been a trend in the CEM community toward the development of iteration-free fast solver methods for solving problems with a large number of unknowns. Various fast direct solvers [16, 17] have been proposed to overcome the iteration dependency of the solution process. These direct solvers are based on LU decomposition and compression methods. The methods are complex to implement and give quadratic scaling for complex real-world problems. In this work, we propose a Multi-Level (ML) fast matrix solution method based on the power series [18, 19]. The proposed method exploits the property of ML matrix compression of the H-Matrix. The matrix is solved for each level using the matrix computation of the leaf level only, and the matrix solution can be terminated at the desired level as per the required accuracy. Our experimental results show that we get good accuracy even for the lowest-level solution. The method relies on matrix-vector multiplication at each level, and using the solution of the lowest level saves matrix computation time and memory requirements for the overall matrix solution. The rest of the paper is organized as follows. Section II gives a summary of MoM computation for the S-EFIE and V-EFIE, and Section III covers the H-Matrix computation for the S-EFIE and V-EFIE. The derivation of the proposed ML power series solver is given in Section IV. The numerical results of the proposed method and the conclusions are discussed in Sections V and VI.

## II Method of Moments

MoM is a popular and efficient integral-equation-based method for solving various electromagnetic radiation/scattering problems. MoM can be computed using the Electric Field Integral Equation (EFIE) for both surface and volume modeling. Surface modeling can be done using the Rao-Wilton-Glisson (RWG) [20] triangle basis function, whereas volume modeling can be done using the Schaubert-Wilton-Glisson (SWG) [21] tetrahedral basis function.
For dielectric modeling, V-EFIE is an integral equation of the second kind and is therefore better conditioned and more stable than S-EFIE. V-EFIE can also model inhomogeneous bodies more efficiently than the surface EFIE. In this work, we use the RWG basis function for PEC surface S-EFIE modeling and the SWG basis function for volume V-EFIE modeling. The governing surface/volume EFIE for a conductor/dielectric scattering body illuminated by an incident plane wave states that the total electric field (\(\mathbf{E}^{total}\)) of a scattering surface/volume is the sum of the incident electric field (\(\mathbf{E}^{inc}\)) and the scattered electric field (\(\mathbf{E}^{scatt}\)): \[\mathbf{E}^{total}=\mathbf{E}^{inc}+\mathbf{E}^{scatt}. \tag{1}\] The scattered electric field is due to the surface current on the PEC surface or the volume polarization current in the dielectric media and is given as: \[\mathbf{E}^{scatt}=-j\omega\mathbf{A}(\mathbf{r})-\nabla\phi(\mathbf{r}). \tag{2}\] In the above equation, \(\mathbf{A}(\mathbf{r})\) is the magnetic vector potential, which describes the radiation of the current, and \(\phi(\mathbf{r})\) is the electric potential, which describes the associated bound charge. Applying the boundary condition for a PEC structure, the S-EFIE can be written as: \[\mathbf{E}^{inc}=j\omega\mathbf{A}(\mathbf{r})+\nabla\phi(\mathbf{r}). \tag{3}\] Similarly, the V-EFIE can be written for a dielectric inhomogeneous body as: \[\mathbf{E}^{inc}=\frac{\mathbf{D}(\mathbf{r})}{\epsilon(\mathbf{r})}+j\omega\mathbf{A}(\mathbf{r})+ \nabla\phi(\mathbf{r}). \tag{4}\] In the above equation, \(\mathbf{D}(\mathbf{r})\) is the electric flux density and \(\epsilon(\mathbf{r})\) is the dielectric constant of the scattering volume medium. The surface current in equation (3) for the PEC structure is expanded with RWG functions, and similarly, in equation (4) for the dielectric volume structure, the polarization current and charge are modeled with SWG basis functions. Performing Galerkin testing on each term and integrating over the surface/volume, the final system of equations reduces to the linear system below: \[[\mathbf{Z}]\mathbf{x}=\mathbf{b}. \tag{5}\] In the above equation, \(\mathbf{Z}\) is the dense MoM matrix, \(\mathbf{b}\) is the known incident plane wave excitation, and \(\mathbf{x}\) is the vector of unknown coefficients to be computed. The dense matrix leads to high matrix computation cost and memory requirement, as well as high solution time complexity. In the next section, we discuss the implementation of the H-Matrix for mitigating the high cost of the conventional MoM matrix. ## 3 H-Matrix The high cost of MoM limits its application to problem sizes of a few \(\lambda\). This limitation of MoM can be overcome by incorporating fast solvers. Most of the fast solvers work on the principle of compressibility of the far-field matrices. For the implementation of a fast solver, the mesh of the geometry is divided into blocks using an oct-tree or binary-tree division process, terminated at the desired level with a limiting edge or face count in each block. The non-far-field interaction blocks at the lowest level are considered near-field blocks and are stored in dense matrix form. The compression of the far-field block matrix at each level can be done analytically or numerically. The system of equations in equation (5) can now be written as the sum of near-field and far-field matrices: \[[\mathbf{Z}_{N}+\mathbf{Z}_{F}]\mathbf{x}=\mathbf{b}. 
\tag{6}\] In the above equation, \(\mathbf{Z}_{N}\) is the near-field block matrix and \(\mathbf{Z}_{F}\) collects the far-field compressed block matrices of the MoM fast solver matrix. Numerical compression of the far-field matrices is easy to implement and is kernel-independent. A few of the popular fast solvers using numerical compression methods are IE-QR and the H-Matrix. In this work, we have implemented the H-Matrix for ML matrix compression. For the ML compression computation, the mesh is divided into ML binary tree division-based subgroups. The H-Matrix works on the computation of a far-field matrix for the interaction blocks satisfying the admissibility condition given in equation (7). The admissibility condition states that, for far-field computation, \(\eta\) times the distance between the observation cluster (\(\Omega_{t}\)) and the source cluster (\(\Omega_{s}\)) should be greater than or equal to the minimum of the diameters of the observation and source clusters, where \(\eta\) is the admissibility control parameter, and its value is taken as 1.0. \[\eta\;dist(\Omega_{t},\Omega_{s})\geq min(diam(\Omega_{t}),diam(\Omega_{s})). \tag{7}\] The far-field matrix block compression is done in such a way that its parent interaction matrix is not computed at the top level. Matrix compression at each level is carried out using the Adaptive Cross Approximation (ACA) method [22, 23]. The method exploits the rank deficiency property of the far-field matrix blocks. The low-rank sub-block of the far field \(\mathbf{Z}_{sub}\) with \(m\) rows and \(n\) columns is decomposed into approximate \(\mathbf{U}_{(m\times k)}\) and \(\mathbf{V}_{(k\times n)}\) matrices, where \(k\) is the numerical rank of the low-rank far-field sub-block such that \(k\ll\min(m,n)\). In this work, for memory savings, we only compute half of the H-Matrix [12] by making the computation process symmetric, and to maintain the accuracy of the H-Matrix, we use re-compressed ACA [24] for far-field block compression. The solution time of the iterative solver depends on the iteration count, and the convergence iteration count in turn depends on the condition number of the matrix. Also, as the number of unknowns increases, the iteration count for convergence increases. In the next section, we discuss our proposed method, whose solution process is independent of the iteration count and of the upper-level far-field blocks. ## 4 Multi-Level Power Series Solution The full H-Matrix is a combination of near-field and far-field block matrices. The far-field compressed block matrices are computed for various levels, and in equation (6), the far-field matrix (\(\mathbf{Z}_{F}\)) can be further decomposed into the different matrix levels as below: \[[\mathbf{Z}_{F}]=[\mathbf{Z}_{F1}]+[\mathbf{Z}_{F2}]+[\mathbf{Z}_{F3}]. \tag{8}\] In the above equation, the far-field matrix \(\mathbf{Z}_{F1}\) is for level 1, \(\mathbf{Z}_{F2}\) is for level 2, and \(\mathbf{Z}_{F3}\) is for level 3. Level 3 forms the leaf level of the binary tree and level 1 the top level of the tree. Fig. 1 shows the H-Matrix layout for a two-dimensional strip. Figure 1: Compressed far-field and dense near-field matrix blocks layout. In Fig. 1, the light gray boxes represent the \(\mathbf{Z}_{F1}\) far-field matrix at level 1, the dark gray boxes the \(\mathbf{Z}_{F2}\) matrix at level 2, and the large white boxes the \(\mathbf{Z}_{F3}\) matrix at level 3; the black boxes are the near-field dense matrices. For illustrative purposes, the near-field matrix has a diagonal block form for a two-dimensional strip. 
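As an illustration of the admissibility test in equation (7), the following is a minimal Python sketch (not the authors' code); the bounding-box estimates of cluster diameter and distance, and all names used here, are assumptions made for the example.

```python
import numpy as np

# A minimal sketch of the admissibility condition in Eq. (7); cluster geometry,
# the bounding-box estimates, and all names are illustrative assumptions,
# not the authors' implementation.
def diameter(points):
    """Diameter of a cluster, estimated from its axis-aligned bounding box."""
    return np.linalg.norm(points.max(axis=0) - points.min(axis=0))

def distance(points_t, points_s):
    """Distance between two clusters, estimated from their bounding boxes."""
    lo = np.maximum(points_t.min(axis=0), points_s.min(axis=0))
    hi = np.minimum(points_t.max(axis=0), points_s.max(axis=0))
    return np.linalg.norm(np.maximum(lo - hi, 0.0))

def is_admissible(points_t, points_s, eta=1.0):
    """Eq. (7): eta * dist(t, s) >= min(diam(t), diam(s)) -> compress as far field."""
    return eta * distance(points_t, points_s) >= min(diameter(points_t),
                                                     diameter(points_s))

# Example: two clusters of mesh centroids on a strip.
rng = np.random.default_rng(1)
cluster_t = rng.uniform([0.0, 0.0], [1.0, 0.1], size=(50, 2))
cluster_s = rng.uniform([3.0, 0.0], [4.0, 0.1], size=(50, 2))
print(is_admissible(cluster_t, cluster_s))   # True: compress the block
print(is_admissible(cluster_t, cluster_t))   # False: keep as dense near field
```

Cluster pairs that pass this test are the ones compressed with ACA into the \(\mathbf{U}_{(m\times k)}\mathbf{V}_{(k\times n)}\) form described above; the remaining leaf-level pairs stay as dense near-field blocks.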
Real-world problems are three-dimensional in structure, giving a non-diagonal block near-field matrix. To implement our ML power series solution method, we must diagonalize the near-field block matrix. The near-field matrix in equation (6) is diagonalized using the diagonal scaling coefficients \([\mathbf{\alpha}]\), computed as in [15], such that the scaled diagonal block near-field matrix can be given as: \[[\tilde{\mathbf{Z}}_{N}]=[\mathbf{\alpha}][\mathbf{Z}_{N}]. \tag{9}\] Expanding equation (8) and scaling it with the scaling coefficients \([\mathbf{\alpha}]\) gives: \[[\mathbf{\alpha}][\mathbf{Z}_{N}+\mathbf{Z}_{F1}+\mathbf{Z}_{F2}+\mathbf{Z}_{F3}]\mathbf{x}=[\mathbf{\alpha}]\mathbf{b}. \tag{10}\] \[[\tilde{\mathbf{Z}}_{N}]\mathbf{x}+[\mathbf{\alpha}][\mathbf{Z}_{F1}]\mathbf{x}+[\mathbf{\alpha}][\mathbf{Z}_{F2}]\mathbf{x}+[\mathbf{\alpha}][\mathbf{Z}_{F3}]\mathbf{x}=\tilde{\mathbf{b}}. \tag{11}\] In the above equation, \(\tilde{\mathbf{b}}\) is the vector \(\mathbf{b}\) scaled by \([\mathbf{\alpha}]\), and the system can be further simplified as: \[\mathbf{x}+[\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F1}]\mathbf{x}+[\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F2}]\mathbf{x}+[\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F3}]\mathbf{x}=[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{12}\] Let \([\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F1}]=[\mathbf{U}_{1}]\), \([\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F2}]=[\mathbf{U}_{2}]\), and \([\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F3}]=[\mathbf{U}_{3}]\); then equation (12) can be further simplified as \[\mathbf{x}+[\mathbf{U}_{1}]\mathbf{x}+[\mathbf{U}_{2}]\mathbf{x}+[\mathbf{U}_{3}]\mathbf{x}=[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{13}\] \[[\mathbf{I}+\mathbf{U}_{1}]\mathbf{x}+[\mathbf{U}_{2}]\mathbf{x}+[\mathbf{U}_{3}]\mathbf{x}=[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{14}\] \[\mathbf{x}+[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\mathbf{U}_{2}]\mathbf{x}+[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\mathbf{U}_{3}]\mathbf{x}=[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{15}\] Let \([\mathbf{I}+\mathbf{U}_{1}]^{-1}[\mathbf{U}_{2}]=[\mathbf{V}_{2}]\) and \([\mathbf{I}+\mathbf{U}_{1}]^{-1}[\mathbf{U}_{3}]=[\mathbf{V}_{3}]\); then equation (15) can be further simplified as \[\mathbf{x}+[\mathbf{V}_{2}]\mathbf{x}+[\mathbf{V}_{3}]\mathbf{x}=[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{16}\] \[\mathbf{x}+[\mathbf{I}+\mathbf{V}_{2}]^{-1}[\mathbf{V}_{3}]\mathbf{x}=[\mathbf{I}+\mathbf{V}_{2}]^{-1}[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{17}\] Let \([\mathbf{I}+\mathbf{V}_{2}]^{-1}[\mathbf{V}_{3}]=[\mathbf{W}_{3}]\); then equation (17) can be written as \[\mathbf{x}+[\mathbf{W}_{3}]\mathbf{x}=[\mathbf{I}+\mathbf{V}_{2}]^{-1}[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{18}\] \[\mathbf{x}=[\mathbf{I}+\mathbf{W}_{3}]^{-1}[\mathbf{I}+\mathbf{V}_{2}]^{-1}[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\tilde{\mathbf{Z}}_{N}]^{-1}\tilde{\mathbf{b}}. \tag{19}\] In the above equation, \([\mathbf{I}+\mathbf{W}_{3}]^{-1}\), \([\mathbf{I}+\mathbf{V}_{2}]^{-1}\), and \([\mathbf{I}+\mathbf{U}_{1}]^{-1}\) can be applied independently at each level using a power series solution method, with the expansions given below: \[[\mathbf{I}+\mathbf{U}_{1}]^{-1}=[\mathbf{I}+[\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F1}]]^{-1}. 
\tag{20}\] \[[\mathbf{I}+\mathbf{V}_{2}]^{-1}=[\mathbf{I}+[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\mathbf{U}_{2}]]^{-1}=[\mathbf{I}+[\mathbf{I}+[\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F1}]]^{-1}[\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F2}]]^{-1}. \tag{21}\] \[[\mathbf{I}+\mathbf{W}_{3}]^{-1}=[\mathbf{I}+[\mathbf{I}+\mathbf{V}_{2}]^{-1}[\mathbf{V}_{3}]]^{-1}=[\mathbf{I}+[\mathbf{I}+\mathbf{V}_{2}]^{-1}[\mathbf{I}+\mathbf{U}_{1}]^{-1}[\tilde{\mathbf{Z}}_{N}]^{-1}[\mathbf{\alpha}][\mathbf{Z}_{F3}]]^{-1}. \tag{22}\] From equations (20), (21), and (22), it can be observed that the solution at each level depends only on that level and on the lower levels of the binary tree block interaction matrices. At each level, the inverse of the matrix system equation can be efficiently computed by using a fast power series solution [18]. The fast power series iterative solution converges in two fixed iterations. The solution process only depends on the matrix-vector product of the H-Matrix, thus retaining the complexity of \(O(N\log N)\) [18]. The ML solution can be computed at the desired level as per the required accuracy. Our results show that the solution at the leaf level gives an accurate result, leading to time and memory savings. ## V Numerical Results In this section, we show the accuracy and efficiency of the proposed method. The simulations are carried out on a system with 128 GB memory and an Intel Xeon E5-2670 processor, using the double-precision data type. The H-Matrix computation is done with an ACA matrix compression error tolerance of 1e-3 [22], and the H-Matrix system is solved with the GMRES iterative solver with a convergence tolerance of 1e-6 [12]. For a compressed or dense matrix \([\mathbf{Z}]\), if we want to expand \([\mathbf{I}+\mathbf{Z}]^{-1}\) in a power series, a sufficient condition for convergence is \(\|\mathbf{Z}\|<1\), and we choose 0.1 for our simulations [25]. The conductor and dielectric geometries with dielectric constant \(\epsilon_{r}\) are meshed with an element size of less than \(\lambda/10\) and \(\lambda/(10\sqrt{\epsilon_{r}})\), respectively. To show the accuracy of the proposed method, the RCS results are compared with the full H-Matrix iterative solver [12]. In the following subsections, we demonstrate the far-field memory and computation time savings, along with the solution time savings, of our proposed ML power series solution using different examples. ### PEC square plate To show the accuracy and efficiency for a PEC object, in this subsection we consider a square plate of size 15.0 \(\lambda\) along the x and y axes, meshed with 67,200 unknown edges. The square plate mesh is divided with binary tree division up to level 6. The PEC S-EFIE H-Matrix is solved with the ML power series solution method and with the H-Matrix iterative solver. The ML power series converges in 2 iterations, whereas the iterative solver converges in 686 iterations. Only the far-field matrix at leaf level 6 is computed for the ML power series solution, ignoring the far-field computation at levels 1 to 5 of the binary tree. Fig. 2 shows the bi-static RCS of the PEC square plate; it can be observed that the ML power series solution matches the H-Matrix iterative solver solution. Table 1 shows the savings in memory, computation, and solution time of the ML power series solution method as compared with the conventional H-Matrix-based iterative solver. ### Dielectric slab To show the accuracy and efficiency for a considerable-size dielectric problem, in this subsection we consider a dielectric slab elongated along the y-axis, with 10.0 \(\lambda\) length, 1.0 \(\lambda\) width, and 0.1 \(\lambda\) thickness, with dielectric constant \(\epsilon_{r}=2.0\), meshed with 120,080 tetrahedral faces. 
The ML power series converges in 2 iterations, and the regular H-Matrix iterative solver converges in 33 iterations. The dielectric slab mesh is divided with binary tree division up to level 10. Only the far-field matrix at leaf level 10 is computed for the ML power series solution. The accuracy of the method for the bi-static RCS is shown in Fig. 3. Table 2 shows the significant matrix memory, matrix fill, and solution time savings of the ML power series solution compared to the conventional H-Matrix-based iterative solver. ## VI Conclusion It can be observed from the illustrative examples in the previous sections that our proposed ML power series solution method gives considerable matrix memory, fill time, and solve time savings for significant-size problems. The solution method is as accurate as the H-Matrix iterative solver. The savings may not be substantial for small-size mesh structures. Still, the method gives significant savings for the large-size problems taken up for illustration, and it will do so for complex and sizeable electrical problems like antenna arrays and complex composite structures. Also, the technique is entirely algebraic in nature and can be applied to fast analytical solver-based methods like AIM and MLFMA. The matrix blocks at each level can be computed independently, and the solution of the method only depends on the matrix-vector product of the system matrix. Hence, the proposed method is amenable to efficient parallelization.
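To make the nested solve of equation (19) and the power series expansions in equations (20)-(22) concrete, the following is a minimal dense-matrix sketch in Python. It illustrates only the Neumann-series idea under the norm condition mentioned in Section V; the matrix sizes, the random test blocks standing in for the compressed H-Matrix levels, and the truncation order are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal dense-matrix sketch of the multi-level power series solve of Eq. (19).
# The random blocks below stand in for the scaled near-field and far-field levels;
# all sizes, scalings, and the truncation order are illustrative assumptions.
rng = np.random.default_rng(0)
N = 200
ZN = np.diag(2.0 + rng.random(N))                            # diagonalized (scaled) near-field part
ZF = [0.02 * rng.standard_normal((N, N)) for _ in range(3)]  # far-field parts, levels 1..3
b = rng.standard_normal(N)

def apply_inv_series(apply_U, v, terms=4):
    """Approximate [I + U]^{-1} v by the truncated Neumann series sum_k (-U)^k v,
    which converges when ||U|| < 1 (the condition enforced by the scaling)."""
    out, term = v.copy(), v.copy()
    for _ in range(1, terms):
        term = -apply_U(term)
        out = out + term
    return out

ZN_inv = np.linalg.inv(ZN)                        # trivial for a (block-)diagonal matrix
U1, U2, U3 = (ZN_inv @ ZFl for ZFl in ZF)         # U_1, U_2, U_3 of Eq. (13)

# x = [I+W3]^{-1} [I+V2]^{-1} [I+U1]^{-1} ZN^{-1} b, every inverse applied via power series.
inv_IU1 = lambda v: apply_inv_series(lambda w: U1 @ w, v)
V2 = lambda v: inv_IU1(U2 @ v)                    # V_2 = [I+U1]^{-1} U_2
V3 = lambda v: inv_IU1(U3 @ v)                    # V_3 = [I+U1]^{-1} U_3
inv_IV2 = lambda v: apply_inv_series(V2, v)
W3 = lambda v: inv_IV2(V3(v))                     # W_3 = [I+V2]^{-1} V_3

x = apply_inv_series(W3, inv_IV2(inv_IU1(ZN_inv @ b)))   # Eq. (19), level by level
residual = np.linalg.norm((ZN + sum(ZF)) @ x - b) / np.linalg.norm(b)
print(f"relative residual of the truncated power series solution: {residual:.2e}")
```

In the paper's setting, the dense products above are replaced by compressed H-Matrix blocks, so each matrix-vector product is an \(O(N\log N)\) operation, which is what preserves the overall solution complexity.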
2305.15973
Integro-differential equations with delays: A perturbation approach
This paper focuses on the study of integro-differential equations with delays, presenting a novel perturbation approach. The primary objective is to introduce the concepts of classical and mild solutions for these equations and establish their existence and uniqueness, under suitable assumptions. Furthermore, we provide a variation of constants formula that characterizes these solutions. To illustrate the applicability of the proposed methodology, we present an example of integro-differential Volterra equations with a nonlocal kernel. In addition to the aforementioned contributions, a secondary goal of this paper is to address an issue concerning the statement and proof of a fundamental theorem presented in a previous work \cite{Zaza}. Specifically, we aim to rectify the statement and provide a corrected proof for Theorem 2.6 in \cite{Zaza}. By doing so, we enhance the accuracy and reliability of the existing literature in this field.
Hamid Bounit, Abderrahim Driouich, Said Hadd
2023-05-25T12:11:54Z
http://arxiv.org/abs/2305.15973v1
# Integro-differential equations with delays: A perturbation approach ###### Abstract. This paper focuses on the study of integro-differential equations with delays, presenting a novel perturbation approach. The primary objective is to introduce the concepts of classical and mild solutions for these equations and establish their existence and uniqueness, under suitable assumptions. Furthermore, we provide a variation of constants formula that characterizes these solutions. To illustrate the applicability of the proposed methodology, we present an example of integro-differential Volterra equations with a nonlocal kernel. In addition to the aforementioned contributions, a secondary goal of this paper is to address an issue concerning the statement and proof of a fundamental theorem presented in a previous work [11]. Specifically, we aim to rectify the statement and provide a corrected proof for Theorem 2.6 in [11]. By doing so, we enhance the accuracy and reliability of the existing literature in this field. Key words and phrases: Integro-differential equations, delay equation, semigroup, perturbation 2020 Mathematics Subject Classification: Primary 45K05, 35R09; Secondary 93C25, 47D06 ## 1. Introduction Integro-differential Volterra equations with delays are a type of functional differential equation that combines integral and differential terms with delay effects. They are named after Vito Volterra [15], an Italian mathematician who made significant contributions to the field of mathematical analysis in the late 19th and early 20th centuries. Volterra equations with delays arise in many areas of science and engineering, including physics, biology, economics, and control theory. This paper focuses on establishing the existence, uniqueness, and variation of constants formula for both mild and classical solutions of the following integro-differential equation with delays \[\begin{split}\dot{u}(t)&=Au(t)+\int_{0}^{t}a(t-s)Cu(s) ds\\ &\qquad\qquad\qquad+\int_{0}^{t}b(t-s)Lu_{s}ds+Ku_{t}+f(t),\quad t \geq 0,\\ u(0)&=x,\quad u(t)=\varphi(t),\quad t\in[-1,0], \end{split} \tag{1}\] Here \(A:D(A)\subset X\to X\) is the generator of a strongly continuous semigroup \(T:=(T(t))_{t\geq 0}\) on a Banach space \(X\), \(C:D(A)\to X\) is a linear (neither closed nor closable) operator on \(X\), the delay operators \(L,K:W^{1,p}([-1,0],X)\to X\) are linear, \(a(\cdot)\) and \(b(\cdot)\) are scalar functions belonging to the Lebesgue space \(L^{p}(\mathbb{R}^{+})\), the nonhomogeneous term satisfies \(f\in L^{p}(\mathbb{R}^{+},X)\) for some \(p\in(1,\infty)\), and the initial conditions are \(x\in X\) and \(\varphi\in L^{p}([-1,0],X)\). Here \(u(\cdot):[-1,\infty)\to X\) is the sought solution to (1). At any time \(t\geq 0\), the history function of \(u(\cdot)\) is the function \(u_{t}:[-1,0]\to X\) with \(u_{t}(\theta)=u(t+\theta)\) for any \(\theta\in[-1,0]\). This paper introduces a perturbation approach to equation (1), presenting a novel method for analyzing and solving these equations. By incorporating perturbation techniques, we aim to derive a new variation of constants formula for the solutions, utilizing Yosida extensions of the operators \(K\), \(L\), and \(C\). The perturbation approach offers several advantages over classical methods for analyzing integro-differential equations with delays. Firstly, by considering perturbations, we gain insights into the effects of small changes or disturbances on the solutions of equation (1). 
This allows us to understand the system's response under perturbations and quantify their impact using the derived variation of constants formula. Secondly, by utilizing Yosida extensions, we introduce a powerful mathematical tool that helps us handle the delay operators \(K,L\) and the kernel operator \(C\) in a unified framework. This enables a more comprehensive analysis and solution of the integro-differential equations with delays. Free-delay integro-differential equations have been extensively studied in the literature due to their wide applicability and significance in various fields. In this paper, we highlight key references that are relevant to the investigation of these equations, namely [4], [6, Chap.VI, Section 7(c)], and [13]. These references provide valuable insights and utilize a semigroup approach to solve and analyze free-delay integro-differential equations. The reference [4] offers significant contributions to the study of free-delay integro-differential equations. It presents in-depth analysis and provides fundamental results related to the existence, uniqueness, and stability of solutions. The chapter [6, Chap.VI, Section 7(c)] offers a comprehensive overview of the theory, highlighting the challenges and intricacies associated with free-delay integro-differential equations in the case of \(C=A\) and \(L=K=0\). The authors provide rigorous mathematical formulations and techniques, enabling a deeper understanding of these equations and their solutions. The book [3] focuses on collocation methods for solving integro-differential equations, including those with delays. It provides a thorough treatment of numerical methods and their convergence analysis, along with practical examples and applications. The reference [10] explores fractional integro-differential equations with delays, which generalize the classical theory of integro-differential equations. It covers existence, uniqueness, stability, and numerical methods for fractional integro-differential equations with delays. Although focused on delay differential equations, the reference [8] also covers some aspects of integro-differential equations with delays. It provides a detailed analysis of oscillatory behavior and stability properties of solutions to delay equations, including integral terms. More recently, in the paper [5], the authors introduced an insightful analytic approach to address the equation (1) specifically for the scenario where \(C=A\) and \(L=0\). They successfully derived a variation of constants formula by employing the concept of resolvent families linked to the free-delay integro-differential equation. Additionally, their work incorporates a comprehensive and detailed spectral theory, along with a significant contribution to control theory. In recent years, significant advancements have been made in the study of integro-differential equations with delays, specifically focusing on Equation (1). One notable contribution is the work by the authors of [5], where they presented an analytical approach to this equation in the case of \(C=A\) and \(L=0\). Their study encompasses a variation of constant formula derived through the utilization of resolvent families associated with the free-delay integro-differential equation. Additionally, they introduce a detailed spectral theory and explore the application of control theory. More recently, a work [11] has emerged, addressing Equation (1) under the conditions of \(K=0\) and \(f=0\). 
This work draws inspiration from previous studies such as [1], [5], and incorporates perturbation techniques using admissible observation operators developed in [7], which generalize the Miyadera-Voigt perturbation theorem. However, it is important to note that the work [11] contains several issues that we will outline in this discussion. Prior to undertaking the present work, we took the initiative to contact the journal in which the work [11] was published. Our objective was to highlight the lack of proper citation of our ideas within [11], as well as to bring attention to several issues present in that study. Unfortunately, the feedback we received from the editorial board of the journal was negative. Consequently, we made the decision to embark on a new research endeavor that aims to rectify the issues of [11] and incorporate recent, intriguing developments in the field. One issue with the work [11] is that the statement of [11, Theorem 2.6] is incomplete and lacks some essential details. Specifically, the condition \(x\in D(A)\) needs to be replaced with \(x\in D(A)\) and \(\varphi\in W^{1,p}([-1,0],X)\) such that \(x=\varphi(0)\). Furthermore, the well-posedness of the integro-differential equation with delay is not clearly defined before this theorem, even though it is the focus of the theorem. Clarifying the definition of classical solutions before stating [11, Theorem 2.6] would improve the clarity of the work [11]. The variation of constants formula stated in this theorem involves the terms \(Cu(\tau)\) and \(Lu_{\tau}\) (the notation in [11] was \(C=F\)). However, it is unclear why these terms are well-defined, given that the operators \(C\) and \(L\) are defined on \(D(A)\) and \(W^{1,p}([-1,0],X)\), respectively. This highlights the importance of first clearly defining the concept of solutions for the delayed integro-differential equation before using it to prove the result in [11, Theorem 2.6]. The proof of [11, Theorem 2.6] presents a significant gap. Specifically, the proof does not provide a complete explanation of why the classical solution \(u(\cdot)\) of (1) exists in the case of \(K=0\) and \(f=0\). The fourth and fifth pages of [11] contain formulas that are presented in a formal manner, without proper justification. While these formulas provide an idea about the free-delay Cauchy problem and its solution, they do not offer a complete proof of the existence and uniqueness of classical solutions. The author mentions that he follows the strategy of the well-known reference [4], but it is important to note that in that reference the authors explicitly state that they use formal equations. Therefore, to improve the rigor of the proof of [11, Theorem 2.6], it is necessary to provide a clear and complete justification of the formulas used, and to establish the existence and uniqueness of classical solutions in a rigorous manner. Once the existence of the classical solution is established, the next step is to prove the variation of constants formula for such a solution. While this may seem like a straightforward task, it is important to ensure that the proof is rigorous and clearly explains the steps involved. By carefully proving the variation of constants formula, we can ensure that it is valid for all classical solutions to the delay integro-differential equation, and use it to derive useful information about the behavior of the solutions. Therefore, it is essential to provide a well-justified and detailed proof of the variation of constants formula for classical solutions. 
In addition to the concerns raised regarding the work [11], it is worth noting that a related study, namely [12], also exhibits similar issues and shortcomings. These works share notable similarities in their methodology, results, and deficiencies. Therefore, it is imperative to address and rectify the concerns associated with both papers in order to ensure the reliability and accuracy of the scientific contributions. The organization of the present work is as follows: In Section 2, we delve into the topic of admissible observation operators and their connection to perturbed Cauchy problems. Specifically, we provide a comprehensive overview of relevant facts in this area. We recall a perturbation theorem, originally presented in [7], which serves as a key foundation for our subsequent analysis. Additionally, we present a significant result that demonstrates the invariance of admissibility for observation operators under a specific class of perturbations. These findings contribute to a deeper understanding of the relationship between observation operators and perturbed systems, shedding light on their interplay and implications in the broader context of integro-differential equations with delays. In Section 3, we provide a comprehensive analysis of the integro-differential equation with delay (1) by introducing and exploring two important solution concepts: classical solutions and mild solutions. Firstly, we define classical solutions to (1) in Definition 3.1 and establish their existence and uniqueness through rigorous proof. This sets the foundation for understanding the behavior and properties of the equation. Next, we introduce the concept of mild solutions in Definition 3.6 and demonstrate that every classical solution also qualifies as a mild solution to (1). This result bridges the gap between the two solution frameworks and highlights their inherent connections. To further advance our understanding, we present a theorem that outlines the conditions under which a unique mild solution exists for (1). This result provides valuable insights into the existence and uniqueness aspects of the equation, facilitating a more comprehensive analysis of its behavior. Lastly, we conclude the section by considering an application of these concepts to an integro-differential equation governed by the Laplacian operator on \(L^{2}(\Omega)\), for an open bounded set \(\Omega\subset\mathbb{R}^{n}\) with smooth boundary \(\partial\Omega\), subject to Neumann boundary conditions and a nonlocal perturbation kernel \(C\). This practical example illustrates the applicability and relevance of our theoretical findings in real-world scenarios, further enhancing the practical significance of our research. The rest of the content of the article will be given here once published in a journal. We are forced not to put all the details of the results at this stage because we have already had bad experiences lately, as already explained... But certainly, in a few weeks, we will put the complete paper on arXiv.
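To make the structure of equation (1) concrete, here is a minimal scalar sketch (not taken from the paper): the operators \(A\) and \(C\) are replaced by scalars, the delay operators \(L\) and \(K\) by point evaluations of the history segment, and the equation is integrated with an explicit Euler scheme and trapezoidal quadrature. All of these concrete choices are illustrative assumptions only.

```python
import numpy as np

# A toy scalar instance of the delayed integro-differential equation (1):
#   u'(t) = A u(t) + int_0^t a(t-s) C u(s) ds + int_0^t b(t-s) L u_s ds + K u_t + f(t),
#   u(0) = x,  u(theta) = phi(theta) for theta in [-1, 0].
# Scalars, point delays, and the numerical scheme are illustrative assumptions only.
A, C = -1.0, 0.5
a = lambda t: np.exp(-t)              # kernel a(.) in L^p(R+)
b = lambda t: np.exp(-2.0 * t)        # kernel b(.) in L^p(R+)
f = lambda t: np.sin(t)               # inhomogeneous term
phi = lambda theta: 1.0 + theta       # initial history on [-1, 0]; x = phi(0) = 1

def trap(y, x):
    """Composite trapezoidal rule; returns 0 when only one quadrature node is given."""
    return 0.0 if len(x) < 2 else float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

dt, T = 1e-2, 5.0
m = int(round(1.0 / dt))              # grid points spanning the delay interval [-1, 0]
n = int(round(T / dt))
t = dt * np.arange(-m, n + 1)
u = np.empty_like(t)
u[: m + 1] = phi(t[: m + 1])          # prescribe the history segment and u(0)

# Point-delay choices for the sketch:  L u_s = u(s - 1)  and  K u_t = 0.3 u(t - 1).
for k in range(m, m + n):             # k indexes the time t[k] >= 0
    s = t[m : k + 1]                                   # quadrature nodes in [0, t[k]]
    conv_C = trap(a(t[k] - s) * C * u[m : k + 1], s)   # int_0^t a(t-s) C u(s) ds
    conv_L = trap(b(t[k] - s) * u[: k - m + 1], s)     # int_0^t b(t-s) u(s-1) ds
    rhs = A * u[k] + conv_C + conv_L + 0.3 * u[k - m] + f(t[k])
    u[k + 1] = u[k] + dt * rhs                         # explicit Euler step

print(f"u({T}) ≈ {u[-1]:.4f}")
```

In the infinite-dimensional setting of the paper, such pointwise evaluations are not directly available, which is where the Yosida extensions of \(K\), \(L\), and \(C\) mentioned above come into play.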
2302.00467
Review on Quantum Computing for Lattice Field Theory
In these proceedings, we review recent advances in applying quantum computing to lattice field theory. Quantum computing offers the prospect to simulate lattice field theories in parameter regimes that are largely inaccessible with the conventional Monte Carlo approach, such as the sign-problem afflicted regimes of finite baryon density, topological terms, and out-of-equilibrium dynamics. First proof-of-concept quantum computations of lattice gauge theories in (1+1) dimensions have been accomplished, and first resource-efficient quantum algorithms for lattice gauge theories in (1+1) and (2+1) dimensions have been developed. The path towards quantum computations of (3+1)-dimensional lattice gauge theories, including Lattice QCD, requires many incremental steps of improving both quantum hardware and quantum algorithms. After reviewing these requirements and recent advances, we discuss the main challenges and future directions.
Lena Funcke, Tobias Hartung, Karl Jansen, Stefan Kühn
2023-02-01T14:28:50Z
http://arxiv.org/abs/2302.00467v2
# Review on Quantum Computing for Lattice Field Theory ###### Abstract: In these proceedings, we review recent advances in applying quantum computing to lattice field theory. Quantum computing offers the prospect to simulate lattice field theories in parameter regimes that are largely inaccessible with the conventional Monte Carlo approach, such as the sign-problem afflicted regimes of finite baryon density, topological terms, and out-of-equilibrium dynamics. First proof-of-concept quantum computations of lattice gauge theories in (1+1) dimensions have been accomplished, and first resource-efficient quantum algorithms for lattice gauge theories in (1+1) and (2+1) dimensions have been developed. The path towards quantum computations of (3+1)-dimensional lattice gauge theories, including Lattice QCD, requires many incremental steps of improving both quantum hardware and quantum algorithms. After reviewing these requirements and recent advances, we discuss the main challenges and future directions. Introduction Gauge theories are at the heart of our understanding of modern high-energy physics, with the Standard Model of particle physics being arguably the most prominent example. Discretizing gauge theories on a spacetime lattice allows for powerful numerical simulations based on Markov chain Monte Carlo (MCMC) methods. However, despite the great success of MCMC methods, they cease to work in certain parameter regimes due to the infamous sign problem, in particular in the presence of \(\theta\)-terms, baryon chemical potentials, or out-of-equilibrium dynamics. Various methods have been proposed to alleviate or circumvent this problem, including complex Langevin (see, e.g., Ref. [1]), contour deformations using Lefschitz thimbles (see, e.g., Ref. [2]), as well as approaches based on machine learning (see, e.g., Ref. [3]), tensor networks (TN) (see, e.g., Refs. [4, 5]), and quantum computing (see, e.g., Refs. [6, 7]). Moreover, MCMC methods suffer from critical slowing down, i.e., rapidly growing autocorrelation times when the lattice spacing is decreased, which could be overcome with Hamiltonian-based approaches such as TN or quantum computing. In these proceedings, we review the current status and future prospects of applying quantum computing to lattice field theory. First quantum computations of gauge theories in (1+1) dimensions have already been performed, which demonstrate some of the characteristic features of the Standard Model (see, e.g., Refs. [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]). Moreover, several resource-efficient formulations of (1+1)- and (2+1)-dimensional gauge theories for quantum computations have already been developed (see, e.g., Refs. [23, 24, 25, 26, 27, 28, 29]). However, the ambitious goal of quantum computing (3+1)-dimensional phenomena within and beyond the Standard Model is still rather far away, given the currently available Noisy Intermediate-Scale Quantum (NISQ) hardware [30]. To achieve this goal, many incremental steps have to be taken, in particular for improving quantum hardware, quantum algorithms, and quantum error correction. Furthermore, validating quantum computations is essential for getting reliable results, especially in parameter regimes that are inaccessible with MCMC methods. This validation can be performed using classical Hamiltonian methods, in particular TN-based approaches [31], for cross-checking the results of quantum computations in the slightly entangled regimes. These proceedings are organized as follows. In Sec. 
2, we review the computational challenges of MCMC algorithms, comment on classical approaches to address these problems, and discuss why classical computing may (not) be enough. In Sec. 3, we provide an example for outperforming specific classical computations with analog quantum simulators. In Sec. 4, we briefly review the basics of digital quantum computing, introducing the concepts of qubits, quantum circuits, quantum errors, as well as quantum error mitigation and correction. In Sec. 5, we review recent advances in developing quantum algorithms for lattice field theory. We discuss hybrid quantum-classical algorithms, quantum circuit design, and methods to implement fermions and gauge fields. We also show a few examples of lattice field theories that have already been simulated on quantum computers. We provide conclusions, discussions, and an outlook in Sec. 6. ## 2 Why quantum computing for lattice field theory? ### Computational challenges of MCMC algorithms Classical MCMC simulations usually rely on a Euclidean space-time formulation of lattice field theories and, thus, cannot access real-time dynamics. Therefore, phenomena like the Schwinger effect leading to electron-positron production in strong electric fields or the out-of-equilibrium dynamics following heavy-ion and proton-collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) cannot be accessed with the method. Moreover, the MCMC approach fails at studying strongly-coupled matter at high density. Thus, this approach cannot provide precise numerical results for the QCD equation of state, which is crucial for understanding future experimental data from gravitational-wave observations coming from neutron star collisions, e.g., as observed at the Laser Interferometer Gravitational-Wave Observatory (LIGO). The origin of these classical computational challenges is the infamous sign problem (see, e.g., Ref. [32]), which is the problem of numerically evaluating the integral of a highly oscillatory function of a large number of variables. Numerical MCMC methods fail here because of the near-cancellation of the positive and negative contributions to the integral, i.e., both have to be integrated to high precision in order for their difference to be obtained with useful accuracy. The number of Monte Carlo sampling points needed to obtain an accurate result rises exponentially with the volume of the system. Furthermore, MCMC methods are inherently afflicted with autocorrelation times. These autocorrelation times grow rapidly when the lattice spacing is decreased, which is a problem called "critical slowing down". In some cases, the autocorrelation times can even grow exponentially, see Ref. [33], which makes it hard to investigate the continuum limit. ### Do we really need _quantum_ computing? Once we encounter exponentially hard problems of classical algorithms, the first question we may ask is whether quantum algorithms could be used to solve these problems efficiently. This is given for the problem of critical slowing down (see, e.g., Ref. [7]) and also for the sign problem of MCMC methods (see, e.g., Ref [6]). However, the second question we should ask is: Do we really need quantum computing to circumvent these problems, or are there also classical approaches? Indeed, there are several classical approaches to reduce or circumvent the obstacles of MCMC methods, in particular the problem of critical slowing down and the sign problem (see, e.g., Refs. 
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]). For example, approaches for overcoming the problem of critical slowing down have been proposed based on machine learning (see, e.g., Refs. [3, 36]). Regarding the sign problem, complex Langevin (see, e.g., Ref. [1]), contour deformations using Lefschitz thimbles (see, e.g., Ref. [2]), methods based on machine learning (see, e.g., Ref. [3]), and further approaches have been shown to alleviate the sign problem. One of the most promising techniques to completely circumvent the sign problem is the Hamiltonian formulation, which can be efficiently addressed with classical approaches based on TN. This numerical method was originally developed in the context of condensed-matter physics, but has found numerous applications in other fields, including lattice field theory [4] and even quantum gravity (see, e.g., Refs. [38, 39, 40]). The TN approach can be used to validate quantum computations in moderately entangled regimes. Therefore, we will review this approach in the following. For details on other methods tackling computational challenges of MCMC simulations, we refer the reader to, e.g., Refs. [1, 2, 3]. #### 2.2.1 Classical methods: example of tensor networks TN states are a specific representation for quantum many-body states, based on the entanglement structure of the state. To illustrate the method, let us consider a generic quantum state \(|\psi\rangle\) for interacting quantum systems. The wave function of the system can be expressed as the sum over all the basis states, \[|\psi\rangle=\sum_{i_{1},\ldots,i_{N}=1}^{d}C_{i_{1}\ldots i_{N}}|i_{1}\rangle \otimes|i_{2}\rangle\otimes\cdots\otimes|i_{N}\rangle, \tag{1}\] where the states \(|i_{k}\rangle\) form a basis for the \(d\)-dimensional Hilbert space of the quantum system at site \(k\). The tensor \(C_{i_{1}\ldots i_{N}}\) that multiplies the many-body basis states has an exponentially large number of complex entries (\(d^{N}\)) and, thus, cannot be stored efficiently on a classical computer. The basic idea of TN is to decompose the \(C_{i_{1}\ldots i_{N}}\) into a network of smaller tensors. A simple example for a family of one-dimensional TN states are Matrix Product States (MPS), which parametrize the coefficients of the wave function as a product of matrices, \[|\psi\rangle=\sum_{i_{1},i_{2},\ldots,i_{N}}^{d}\mathrm{tr}\left(A_{1}^{i_{1}}A _{2}^{i_{2}}\cdots A_{N}^{i_{N}}\right)|i_{1}\rangle\otimes|i_{2}\rangle\otimes \cdots\otimes|i_{N}\rangle. \tag{2}\] In the expression above, \(A_{k}^{ik}\) are complex matrices of size \(D\times D\) and \(\mathrm{tr}\) denotes the trace. The parameter \(D\) is called the bond dimension of the MPS and limits the amount of entanglement that can be present in the ansatz1[41, 42, 31]. In particular, for the example of the MPS ansatz, one has to store \(NdD^{2}\) complex numbers. Thus, provided that \(D\) does not grow exponentially with \(N\), the representation in terms of a TN is efficient, and it allows for overcoming the exponential scaling in the system size. Results from quantum information theory show that for many physically relevant situations, \(D\) does indeed only show a polynomial dependence on \(N\)[43, 44, 45]. The MPS ansatz can be immediately generalized to higher dimensions [46], and there exist more general TN ansatze for one and higher dimensions [47, 48, 49, 50]. 
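As a concrete illustration of the MPS parametrization in Eq. (2), the following minimal NumPy sketch (with small, randomly chosen tensors; the sizes are illustrative assumptions) builds the full coefficient vector of a small chain from the matrices \(A_k^{i_k}\) and compares the number of stored parameters.

```python
import numpy as np

# A minimal sketch of the MPS parametrization in Eq. (2): the coefficient of each
# basis state |i_1 ... i_N> is tr(A_1^{i_1} ... A_N^{i_N}). The sizes (N, d, D)
# are small illustrative choices so the full d^N vector can still be built explicitly.
N, d, D = 6, 2, 4
rng = np.random.default_rng(0)
A = [rng.standard_normal((d, D, D)) for _ in range(N)]   # A[k][i] is the D x D matrix A_k^i

psi = np.zeros(d ** N)
for idx, config in enumerate(np.ndindex(*(d,) * N)):     # loop over all basis states
    M = np.eye(D)
    for k, i in enumerate(config):
        M = M @ A[k][i]
    psi[idx] = np.trace(M)
psi /= np.linalg.norm(psi)                               # normalize the state

# Storage comparison: the dense vector needs d^N numbers, the MPS only N*d*D^2.
print("dense coefficients:", d ** N, " MPS parameters:", N * d * D * D)
```

For these small sizes the full vector can still be stored; the point of the ansatz is that the parameter count on the right grows only linearly in \(N\) for fixed \(D\).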
Footnote 1: Considering the bipartition of the system into two contiguous subsystems, one can show that the von Neumann entropy for the reduced density operator describing a subsystem is upper bounded by \(2\log(D)\)[31]. Besides being a powerful theoretical tool, the TN formalism allows for an efficient computation of ground states and low-lying excitations. Given a (local) Hamiltonian \(H\), one can find a TN approximation for the ground state through variationally minimizing the energy \(E=\langle\psi|H|\psi\rangle\) by iteratively updating the tensors [41, 42, 31]. Subsequently, one can obtain excitations by considering a Hamiltonian projected onto the subspace orthogonal to the ground state. The ground state is an eigenstate with vanishing eigenvalue of the projected Hamiltonian, and the first excited state is an eigenstate with energy \(E_{1}\). Given that \(E_{1}<0\), which one can always achieve by adding an appropriate constant to \(H\), the first excitation then corresponds to the ground state of the projected Hamiltonian, which can be obtained by using the variational algorithm presented above [51]. Moreover, the TN formalism allows one to compute the evolution of a quantum system in time, as long as the entanglement in the state during the evolution stays moderate [41, 42, 31]. In both cases, one can obtain the expectation values of (local) observables \(O\) by directly computing \(\langle\psi|O|\psi\rangle\). Thus, no MCMC sampling is required, and the TN approach is free from the sign problem. This has been successfully demonstrated for various models, as we will discuss in the next subsection. #### 2.2.2 Applying tensor-network methods to lattice field theory Investigations of lattice field theories with TN methods have progressed substantially in recent years. For example, the MPS approach has been used to explore quantum link models, \(\mathbb{Z}_{n}\)-QED models, and non-Abelian gauge models. Moreover, the spectrum of the Schwinger model has been computed using MPS, and the model has been studied at non-zero chemical potential, non-zero \(\theta\)-term, non-zero temperature, and for real-time problems. Going beyond MPS, TN renormalization techniques [52, 53, 54] have also been used to investigate various gauge theories in (1+1) dimensions, as well as the CP(1) model with a \(\theta\)-term. See Refs. [4, 5] for reviews and Refs. [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65] for more recent studies not included in Refs. [4, 5]. In particular, there have recently been first TN studies of (2+1)-dimensional gauge theories [63, 64] and the first TN study of Lattice QED in (3+1) dimensions at finite density [65]. In Fig. 1, we show two examples of applying TN methods to overcoming the sign problem, which arises due to a \(\theta\)-term [Fig. 1(a)] or due to finite density [Fig. 1(b)]. Figure 1(a) shows results obtained in Ref. [59], where the topological vacuum structure of the Schwinger model with a \(\theta\)-term was studied. The Schwinger model shares many similarities with (3+1)-dimensional QCD and thus serves as a benchmark model for testing new numerical techniques aimed at Lattice QCD applications. Using MPS, Ref. [59] simulated parameter regimes of the Schwinger model for which both perturbation theory and MCMC methods break down. In particular, Ref. [59] quantified the lattice distortion of the quantum anomaly equation that maps negative to positive fermion masses, \(m\to-m\), by shifting the \(\theta\)-parameter, \(\theta\to\theta+\pi\), see Fig. 1(a). 
Figure 1(b) shows results obtained in Ref. [65], where (3+1)-dimensional Lattice QED was simulated at finite charge density. Using tree TN, Ref. [65] computed the surface charge density \(\sigma_{\rm I}=\frac{1}{A(I)}\sum_{{\bf x}\in A(I)}\langle\hat{\psi}_{\bf x}^{\dagger}\hat{\psi}_{\bf x}\rangle\), where \({\bf x}\equiv(i,j,k)\) for \(0\leq i,j,k\leq L-1\) labels the sites of the lattice, \(L\) is the lattice size, \(\hat{\psi}_{\bf x}\) is the staggered spinless fermion field, and \(A(I)\) contains only sites sitting at a particular lattice distance \(l\) from the closest boundary. These first (3+1)-dimensional TN calculations are a milestone for classically tackling the sign problem, but many algorithmic advances are still required to apply TN methods to (3+1)-dimensional Lattice QCD. Figure 1: (a) Ground-state energy density of the Schwinger model as a function of the \(\theta\)-parameter, for the bare mass \(m/g=0.07\) (filled markers) and for \(m/g=-0.07\) and \(\theta\to\theta+\pi\) (open markers). The orange triangles (green squares) correspond to finite-lattice data with \(x=80\) (\(x=160\)), where \(x\equiv 1/(ag)^{2}\), \(a\) is the lattice spacing, and \(g\) is the coupling. The red dots represent the results after extrapolating the finite-lattice data to the continuum. Inset: Absolute value of the deviations between the lattice and continuum data, which demonstrates the lattice distortion of the anomaly equation. Figure and caption adapted from Ref. [59]. (b) Surface charge density \(\sigma_{\rm I}\) of (3+1)-dimensional Lattice QED on a cube whose faces are at distance \(l\) from the boundaries of the lattice with linear size \(L=8\), for different values of the fermion mass \(m\). The system is in the global symmetry sector with \(Q=128\) positive charges (finite density \(\rho=1/4\)). Figure reprinted with permission from Ref. [65], [https://doi.org/10.1038/s41467-021-23646-3](https://doi.org/10.1038/s41467-021-23646-3), under the Creative Commons Attribution 4.0 International License. ### Why is(n't) classical computing enough? Although alternative classical methods are able to overcome or alleviate some of the limitations of the conventional MCMC approach, so far it has not been possible to fully access the regimes that cannot be addressed with MCMC simulations. Quantum computers offer the prospect to overcome these restrictions. In the following, we discuss these limitations focusing on the TN approach. As outlined above, the success of the TN approach crucially relies on the fact that most physically relevant states only carry little entanglement and, thus, can be described with moderate tensor size. In particular, this encompasses the ground states of Hamiltonians with local interactions of finite strength and a nonvanishing energy gap in the thermodynamic limit, which are believed to be efficiently described by TN states2. However, there exist certain situations where the amount of entanglement can be prohibitively large, preventing an efficient TN description. A prominent example is the out-of-equilibrium dynamics following a global quench, during which the entanglement in the system can grow linearly in time [66, 67]. This would require the tensor size to grow exponentially to maintain a faithful representation of the wave function of the system. In these situations, TN simulations only allow for accessing short time scales before the exponential growth of the tensor size renders the computation infeasible. 
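To illustrate why highly entangled states escape an efficient MPS description, the following is a small NumPy sketch (system size and the two example states are illustrative choices, not taken from the references): it computes the half-chain Schmidt spectrum and the number of Schmidt values needed to retain most of the norm, for a weakly entangled GHZ state versus a random, volume-law-entangled state.

```python
import numpy as np

# A small illustration of how entanglement inflates the bond dimension an MPS needs:
# compare the half-chain Schmidt spectrum of a GHZ state with that of a random state
# on N qubits. The system size N and the example states are illustrative assumptions.
N = 14
dimA = 2 ** (N // 2)

def half_chain_schmidt(psi):
    """Schmidt values across the cut between the first and second half of the chain."""
    return np.linalg.svd(psi.reshape(dimA, -1), compute_uv=False)

def report(name, psi):
    p = half_chain_schmidt(psi) ** 2                 # Schmidt weights, descending order
    entropy = -np.sum(p[p > 1e-15] * np.log2(p[p > 1e-15]))
    rank_999 = int(np.searchsorted(np.cumsum(p), 0.999) + 1)   # values for 99.9% weight
    print(f"{name}: entanglement entropy = {entropy:.2f}, Schmidt rank for 99.9% = {rank_999}")

ghz = np.zeros(2 ** N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)      # lowly entangled: D = 2 suffices
rng = np.random.default_rng(0)
rand = rng.standard_normal(2 ** N) + 1j * rng.standard_normal(2 ** N)
rand /= np.linalg.norm(rand)                                    # volume law: D ~ 2^(N/2) needed

report("GHZ state   ", ghz)
report("random state", rand)
```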
Footnote 2: In fact, this can be proven for systems in (1+1) dimensions, see Ref. [45]. For higher dimensions, it is conjectured that ground states of local gapped Hamiltonians are efficiently described by TN states. Moreover, even in situations where an efficient TN description is possible, the computational cost of TN algorithms, despite being polynomial in the tensor size, might prevent calculations in practice. For the MPS introduced above, the leading-order computational cost of the variational ground-state search and the simulation of time evolution scales as \(\mathcal{O}(D^{3})\)[41, 42, 31]. For its two-dimensional generalization, the projected entangled pair states (PEPS) [46], the algorithms for variational ground-state optimization can scale up to \(\mathcal{O}(D^{10})\)[68]. While this is still polynomial in the bond dimension, this significantly limits the values of the bond dimensions that can be reached in practical calculations. Although there have been recent developments in designing TN ansatze suited to tackle (2+1)- and (3+1)-dimensional lattice gauge models with first successful computations [63, 64, 65], extending the success of TN computations to higher dimensions is not an immediate task. Quantum computing and quantum simulation might offer an alternative route to overcome these limitations, as we discuss in the following section. ## 3 Outperforming tensor-network methods with analog quantum simulators Already in 2012, Ref. [69] demonstrated that analog quantum simulations can outperform classical simulations for the case of real-time evolution, in particular, for simulating the relaxation towards equilibrium in an isolated strongly correlated (1+1)-dimensional Bose gas: "the controlled [quantum] dynamics runs for longer times than present classical algorithms can keep track of" [69]. Even though this quote essentially translates to the nowadays popular term of "quantum advantage", this term has not been used back in 2012; only ten years later, Ref. [70] argued that a "practical quantum advantage" has already been achieved using analog quantum simulators. The reason for the failure of the classical simulation methods, such as the MPS-based methods used in Ref. [69], is that these classical methods suffer from an extensive increase in entanglement entropy, as outlined in the previous section. This limits the relaxation times accessible in the classical calculations, as shown in Fig. 2. In Ref. [69], the quantum theory is a (1+1)-dimensional chain of lattice sites coupled by a tunnel coupling \(J\) and filled with repulsively interacting bosonic particles. In the tight-binding approximation, the Hamiltonian takes the form of a (1+1)-dimensional Bose-Hubbard model, \[H_{\rm BH_{(1+1)D}} = \sum_{j}\left[-J\left(\hat{a}_{j}^{\dagger}\hat{a}_{j+1}+{\rm h. c.}\right)+\frac{U}{2}\hat{n}_{j}(\hat{n}_{j}-1)+\frac{K}{2}\hat{n}_{j}j^{2} \right], \tag{3}\] where \(\hat{a}_{j}\) annihilates a particle on site \(j\), \(\hat{n}_{j}=\hat{a}_{j}^{\dagger}\hat{a}_{j}\) corresponds to the number of atoms on site \(j\), and \(U\) is the on-site interaction energy. The parameter \(K=m\omega^{2}d^{2}\) (\(m\) is the particle mass, \(d\) the lattice spacing) describes an external harmonic trap with trapping frequency \(\omega\simeq 2\pi\times 61\,\)Hz. Details of the analog quantum simulation of the model in Eq. (3) are shown in Fig. 3. Here, we note that the "practical quantum advantage" has only been demonstrated with analog quantum simulators (see Ref. 
[70] and references therein), not with digital quantum computers. In the following, we would like to briefly comment on the differences between these two types of quantum technology. With an analog quantum simulator, one uses a controllable quantum system to emulate the behavior of another quantum system while exploiting quantum effects. Thus, analog quantum simulators perform continuous time evolution and are usually non-universal. In contrast, digital quantum computers are universal and realize unitary operations via a set of universal logical quantum gates acting on qubits. Thus, the time evolution has to be decomposed into discrete steps. The universality of digital quantum computers implies that they will eventually be more relevant for the ambitious goal of quantum computing higher-dimensional lattice gauge theories, in particular (3+1)-dimensional Lattice QCD. Thus, in the remaining part of these proceedings, we will only focus on the latter type of quantum technology, namely digital quantum computers. Figure 2: Quantum advantage of quantum simulation over classical simulation, illustrated by errors in single-particle correlations arising for quench dynamics of a (1+1)-dimensional Bose-Hubbard model, see Eq. (3), for \(U/J=1\) and 20 sites. The blue line is a single-site time-dependent variational principle time evolution using MPS with bond dimension \(D=64\), as a demonstration of typical truncation errors from these classical methods relative to exact calculations, which are necessary for longer times and system sizes. The red line is an analog simulation with calibration errors of 1%. The yellow line is the digital simulation with a second-order Trotter decomposition with a time step \(J\delta t=\hbar/8\). Figure reprinted from Ref. [70], [https://doi.org/10.1038/s41586-022-04940-6](https://doi.org/10.1038/s41586-022-04940-6), with permission from Springer Nature. ## 4 Basics of digital quantum computing Quantum computers offer the prospect to outperform classical computers in a variety of tasks ranging from cryptography to machine learning to combinatorial optimization problems. In particular, the potential to efficiently simulate quantum many-body systems makes quantum computers a promising tool for solving quantum many-body problems in physics, chemistry, and beyond, such as the sign problem. In the following, we will review the essential features of digital gate-based quantum computing, which are quantum bits, quantum circuits, and quantum errors. Figure 3: Analog quantum simulation of the (1+1)-dimensional Bose-Hubbard model, see Eq. (3). (a) Concept of the experiment: after having prepared the density wave \(|\psi(t=0)\rangle\) (_i_), the lattice depth was rapidly reduced to enable tunneling (_ii_). Finally, the properties of the evolved state were read out after all tunneling was suppressed again (_iii_). (b) Even-odd resolved detection: particles on sites with odd index were brought to a higher Bloch band. A subsequent band-mapping sequence was used to reveal the odd- and even-site populations. (c) Integrated band-mapping profiles versus relaxation time \(t\) for \(h/(4J)\simeq 0.9\,\mathrm{ms}\), \(U/J=5.16(7)\) and \(K/J\simeq 9\times 10^{-3}\). (d) Odd-site density extracted from the raw data shown in c. The shaded area marks the envelope for free bosons (light grey) and including inhomogeneities of the Hubbard parameters in the experimental system (dark grey). Figure reprinted from Ref. 
[69], [https://doi.org/10.1038/s41586-022-04940-6](https://doi.org/10.1038/s41586-022-04940-6), with permission from Springer Nature. ### Quantum bits In classical computing, information is stored and processed as bits which can take definite binary values, 0 or 1. In analogy, quantum bits (qubits) are quantum systems described by a two-dimensional Hilbert space with the basis states \(\{|0\rangle,|1\rangle\}\). The laws of quantum mechanics allow for creating superpositions of the two basis states, and a general single-qubit state \(|\psi\rangle\) can be described by the linear combination \[|\psi\rangle=\alpha|0\rangle+\beta|1\rangle,\quad|0\rangle=\begin{pmatrix}1\\ 0\end{pmatrix},\quad|1\rangle=\begin{pmatrix}0\\ 1\end{pmatrix}, \tag{4}\] where \(\alpha\) and \(\beta\) are complex numbers constrained by \(|\alpha|^{2}+|\beta|^{2}=1\) for the state to be normalized. When applying a projective measurement to the qubit in the computational basis, the probability of outcome \(|0\rangle\) is \(|\alpha|^{2}\) and the probability of outcome \(|1\rangle\) is \(|\beta|^{2}\). In particular, the possibility to create superpositions allows for having \(N\) qubits in a superposition state of all \(2^{N}\) possible basis states and, thus, to encode an exponentially large Hilbert space efficiently. How many qubits do current quantum computers have? We have recently entered the Noisy Intermediate-Scale Quantum (NISQ) [30] era, defined as the era of \(\mathcal{O}(10-100)\) noisy qubits of digital quantum computers. For the example of IBM-Q quantum devices, the number of qubits has roughly doubled every year, from 27 in 2019 to 433 in 2022, and this journey is planned to be continued with 1121 qubits in 2023 and 4158 qubits in 2025 (see, e.g., Ref. [71]). For the example of Google quantum devices, the aim is to build a "useful, error-corrected quantum computer" [72] by the end of the decade, which should contain 1,000 fault-tolerant logical qubits (see Sec. 4.3.2 for details on error correction), corresponding to 1,000,000 physical qubits. ### Quantum computations The prevailing model of quantum computation describes the computation in terms of a quantum circuit, which consist of three stages: (i) Initialization of all \(N\) qubits in the \(|0\rangle\) state (denoted by \(|0\rangle^{\otimes N}\)). (ii) Quantum gates, which represent unitary transformations. (iii) Measurements in the computational basis, on some or all of the qubits. Quantum computations can be conveniently visualized with a quantum circuit diagram, in which time flows from left to right. Each single line represents a quantum wire, and each double-line represents a classical bit. The items on the quantum wires indicate operations performed on the qubits, such as gate operations or measurements. 
For example, let us consider a three-qubit circuit, shown as a circuit diagram in Eq. (5). [Eqs. (5) and (6), the example circuit diagram and the associated expression, are not reproduced here.]
The unitary implemented by the full circuit can be built up from the individual gates because the composition across wires is achieved by a tensor product, and the composition along wires is achieved by a matrix product. The third stage is a projective measurement in the \(\sigma_{z}\)-basis \(\{|0\rangle,|1\rangle\}\) yielding classical bits. All these three stages of the quantum computation are affected by quantum noise, as we discuss in the following section.

### Quantum errors

#### 4.3.1 Which types of errors occur on quantum computers?

Current intermediate-scale quantum devices suffer from a considerable level of noise, including gate errors, measurement errors, and thermal relaxation errors. For example, there can be bit-flip measurement errors in the last stage of the quantum computation (see Sec. 4.2). Due to these bit flips, one erroneously reads out a measurement outcome as 0 given it was actually 1, or vice versa. Another example is thermal relaxation errors affecting the qubits. In general, errors can be represented by quantum channels and turn an originally pure state, described by the density operator \(|\psi\rangle\langle\psi|\), into a genuinely mixed state described by a density operator \(\rho\). The evolution of a quantum state \(\rho\) affected by thermal relaxation errors for some time \(t\) can, e.g., be expressed as \[\rho\to\rho^{\prime}=\sum_{k=1}^{2}E_{k}\rho E_{k}^{\dagger},\quad E_{1}=\begin{bmatrix}1&0\\ 0&\sqrt{1-\lambda}\end{bmatrix},\quad E_{2}=\begin{bmatrix}0&\sqrt{\lambda}\\ 0&0\end{bmatrix}, \tag{7}\] where \(\lambda=1-\exp(-t/T_{1})\). Thus, given a state prepared in \(|1\rangle\), the probability that the state is correctly measured in \(|1\rangle\) and has not decayed into the state \(|0\rangle\) is given by \(p(t)=\exp(-t/T_{1})\), where \(T_{1}\sim 100\mu\)s for current superconducting quantum devices. Although this limits the depth of the circuits that can be executed faithfully, Noisy Intermediate-Scale Quantum (NISQ) devices are already able to exceed the capabilities of classical computers in specific cases. Error mitigation and especially error correction remain one of the major challenges of quantum computing, whose successful implementation is crucial for achieving quantum advantage for large-scale problems relevant in physics and chemistry.

#### 4.3.2 Error correction versus error mitigation

Quantum computers can in principle be made fault-tolerant, according to the threshold theorem for quantum computing (called the "quantum threshold theorem") [73, 74, 75, 76], which is analogous to the threshold theorem for classical computing by von Neumann [77]. Fault-tolerance means that quantum error correction techniques can suppress the logical error rate on quantum computers to _arbitrarily_ small levels, given that the quantum computer has a physical error rate below a certain threshold. There are many error correction schemes on the market, such as the bit-flip code [78], the Shor code [79], the surface code [74], and the GKP code [80]. In general, realizing fault-tolerant quantum computation requires two ingredients: (i) quantum errors below a certain threshold and (ii) additional ("physical") qubits to realize the quantum error correction scheme and successfully encode the information of a fault-tolerant ("logical") qubit. For example, if a quantum computer has a depolarizing error probability below the threshold of 0.1%, the surface code would require more than 1000 physical qubits per logical qubit [81].
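As a concrete check of the thermal-relaxation channel in Eq. (7), the short NumPy sketch below (an added illustration with an arbitrary choice of \(t/T_{1}\)) applies the Kraus operators \(E_{1}\) and \(E_{2}\) to a qubit prepared in \(|1\rangle\) and confirms that the surviving population equals \(p(t)=\exp(-t/T_{1})\).

```python
import numpy as np

t_over_T1 = 0.5                         # illustrative value of t / T1
lam = 1.0 - np.exp(-t_over_T1)          # lambda = 1 - exp(-t/T1), as in Eq. (7)

E1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - lam)]])
E2 = np.array([[0.0, np.sqrt(lam)], [0.0, 0.0]])

rho = np.array([[0.0, 0.0], [0.0, 1.0]])    # density operator of the state |1>

# Channel action: rho -> rho' = E1 rho E1^dagger + E2 rho E2^dagger
rho_prime = E1 @ rho @ E1.conj().T + E2 @ rho @ E2.conj().T

print(rho_prime[1, 1], np.exp(-t_over_T1))  # both equal p(t) = exp(-t/T1)
print(np.trace(rho_prime).real)             # the channel is trace-preserving
```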
Given the current qubit numbers of \(\mathcal{O}(10-100)\) and quantum error rates of up to \(\mathcal{O}(1\%)\) for gate errors and up to \(\mathcal{O}(10\%)\) for measurement errors [82], fault-tolerant quantum computation is still far away. Thus, as a near-term solution for NISQ devices, one needs to employ quantum error mitigation instead of quantum error correction. The general concept behind quantum error mitigation is to use a low-overhead method to alleviate the effects of quantum noise and thereby obtain more reliable estimates, e.g., for expectation values of observables. For instance, this can be done by altering the quantum circuit executed on the quantum device, by post-processing the data collected from the device, by measuring modified operators, or by using combinations thereof. A few examples of error mitigation techniques are zero-noise extrapolation, randomized compiling, quasi-probability decomposition, and operator rescaling methods (see, e.g., Refs. [82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93]). #### 4.3.3 Example of error mitigation: zero-noise extrapolation for lattice theory application Let us briefly discuss one example of error mitigation, i.e., zero-noise extrapolation [85], including its application to quantum computing a lattice field theory. The basic idea behind zero-noise extrapolation is that one cannot easily _reduce_ the noise of a quantum circuit, but one can easily _add_ noise to a quantum circuit. For example, one might add a pair of CNOT gates behind every CNOT gate in a quantum circuit. In the perfect, noise-free case, this should not alter the quantum computation due to (CNOT) \({}^{2}=\mathds{1}\). However, for noisy CNOT gates we have (CNOT) \({}^{2}\neq\mathds{1}\), and adding an increasingly large number of CNOT pairs will result in an increasingly large error of the quantum computation. Thus, we may introduce a noise parameter \(r\), where \(r-1\) is the number of additional CNOT gates inserted at each location of a CNOT gate in the original quantum circuit. Performing the quantum computation at different values of \(r\) and subsequently extrapolating to \(r\to 0\) therefore offers a way to reduce the systematic bias due to CNOT gate noise. Figure 4 shows an example of how this method can be applied to lattice field theory, in particular a hybrid quantum-classical computation of Schwinger model dynamics [16]. The algorithm behind this hybrid quantum-classical computation will be explained in the following, see Sec. 5.1.1. Figure 4: Zero-noise extrapolation for hybrid quantum-classical computation of Schwinger model dynamics. Ground-state energy \(\langle H\rangle\) and chiral condensate \(\langle\bar{\psi}\psi\rangle\) (purple and blue, respectively) as a function of the noise parameter \(r\). The points at \(r=0\) (black) have been quadratically extrapolated. The horizontal dashed lines indicate the exact values. Figure reprinted with permission from Ref. [16], [https://doi.org/10.1103/PhysRevA.98.032331](https://doi.org/10.1103/PhysRevA.98.032331), copyright (2018) by the Americal Physical Society. ## 5 Quantum algorithms for lattice field theory ### Hybrid quantum-classical algorithms #### 5.1.1 Variational quantum eigensolver for computing ground states In the context of quantum many-body systems, a promising approach for exploiting NISQ devices is the use of hybrid quantum-classical algorithms, such as the variational quantum eigensolver (VQE) [94, 95]. 
This algorithm makes use of a feedback loop between a classical computer and a quantum coprocessor, where the latter is used to efficiently evaluate the cost function \[C(\vec{\theta})=\langle\psi(\vec{\theta})|H|\psi(\vec{\theta})\rangle \tag{8}\] for a given set of variational parameters \(\vec{\theta}\). Here, the quantum state \(|\psi(\vec{\theta})\rangle\) is realized by a parametric quantum circuit, such as the exemplary circuit in Eq. (5), where the variational parameters are the rotational angles \(\theta_{1}\) and \(\theta_{2}\). Provided that the ansatz for \(|\psi(\vec{\theta})\rangle\) is expressive enough, the minimum for \(C(\vec{\theta})\) is obtained for the ground state of the problem Hamiltonian \(H\). In the quantum-classical feedback loop, the parameters \(\vec{\theta}\) are optimized on a classical computer based on the measurement outcome obtained from the quantum coprocessor. #### 5.1.2 Variational quantum deflation for computing excited states Using an extension of the VQE algorithm described in Sec. 5.1.1, which is called the variational quantum deflation (VQD) algorithm [96, 97], one can compute the mass gaps of a lattice field theory. The goal is to estimate the energy of the \(k\)-th excited state \(E_{k}\) by penalizing the solutions of the lowest excited states. This can be done through minimizing the cost function \[C(\vec{\theta}_{k})=\langle\psi(\vec{\theta}_{k})|H|\psi(\vec{\theta}_{k}) \rangle+\sum_{i=0}^{k-1}\beta_{i}|\langle\psi(\vec{\theta}_{k})|\psi(\vec{ \theta}_{i}^{*})\rangle|^{2}, \tag{9}\] where \(\vec{\theta}_{i}^{*}\) are the optimal parameters for the \(i\)-th excited state and \(\beta_{i}\) are real-valued coefficients, which must be larger than the mass gaps \(E_{k}-E_{i}\). Thus, one minimizes the variational energy \(E(\vec{\theta}_{k})\) [the first term in Eq. (9)] with the constraint [the second term in Eq. (9)] that the state \(|\psi(\vec{\theta}_{k})\rangle\) must be orthogonal to the previous \(k\) states. In order to compute the mass gap, for example, between the ground state \(E_{0}\) and first excited state \(E_{1}\), one follows three steps: * Perform the VQE and obtain optimal parameters and an approximate ground state \(|\psi(\theta_{0}^{*})\rangle\), * For the energy \(E_{1}\) of the first excited state, define the Hamiltonian: \(H_{1}=H+\beta_{1}|\psi(\theta_{0}^{*})\rangle\langle\psi(\theta_{0}^{*})|\). * Perform the VQE with the Hamiltonian \(H_{1}\) to find an approximate first excited state \(|\psi(\theta_{1}^{*})\rangle\). It has been experimentally demonstrated (see, e.g., Refs. [6, 98, 99] for reviews) that VQE and VQD allow for finding both the ground state and low-lying excitations of low-dimensional benchmark models relevant for particle physics, condensed matter physics, as well as quantum chemistry. To reach applicability in higher dimensions and for larger system sizes, the development of resource-efficient quantum algorithms is crucial. This research area, including the implementation of (gauge) symmetries, has overlap with classical algorithm development, e.g., in TN and machine learning. In particular, the VQE and VQD algorithms are similar to variational TN algorithms (see Sec. 2.2.1), now realizing the quantum state \(|\psi(\vec{\theta})\rangle\) by a quantum circuit [see Eq. (5)] instead of a tensor network [see Eq. (2)], thus replacing the tensor parameters with gate parameters. ### How to design optimal quantum circuits for hybrid quantum-classical algorithms? Parametric quantum circuits (see Fig. 
6 for an example) are at the heart of variational quantum algorithms (see Sec. 5.1.1). For efficient quantum computation, it is essential to design optimal quantum circuits for obtaining the low-lying energy spectrum of a given problem Hamiltonian. On the one hand, the quantum circuit should contain sufficiently many parametric quantum gates to express the desired solution. On the other hand, the number of quantum gates should be minimal, in order to reduce the noise (see Sec. 4.3) in the quantum computation. There have been many attempts to efficiently design quantum circuits, in particular for lattice field theory applications with physical symmetries (see, e.g., Refs. [13, 14, 15, 16, 17, 19, 21, 100, 101, 102, 103]). One example of a generic method to design minimal and maximally expressive quantum circuits was provided in Refs. [102, 103], which was called a dimensional expressivity analysis. This analysis offers a practical, systematic way to optimize a given quantum circuit by removing redundant parameters, incorporating physical symmetries, removing unwanted symmetries, and testing whether the quantum circuit is sufficiently expressive. In the following, we briefly describe the key concept of the dimensional expressivity analysis. A quantum circuit \(C(\vec{\theta})\) contains unitary gates that depend on parameters \(\theta\) and, thus, can be understood as a map from a parameter space into the quantum device state space. Thus, one can define two manifolds: (i) a manifold \(M\) of states \(|C(\vec{\theta})\rangle\) that the quantum circuit can reach and (ii) a manifold \(S\) of all physical states of the quantum device. In order to obtain a minimal and maximally expressive circuit, the circuit has to only generate physical states (\(M\subseteq S\)), the number of parameters has to be equal to \(\dim(S)\), and the co-dimension of \(M\), which is defined as \(\operatorname{codim}(M)=\dim(S)-\dim(M)\), has to vanish. Using a hybrid quantum-classical algorithm (see Ref. [102] for details), the co-dimension can be computed by determining the tangent vectors \(|\partial_{j}C(\vec{\theta})\rangle\) for a given parameter \(\theta_{j}\) and by testing their linear independence (see Fig. 5 for an illustration). Using an iterative procedure, one can identify the redundant parameters \(\theta_{k}\), for which \(|\partial_{k}C(\vec{\theta})\rangle\) is a linear combination of \(|\partial_{j}C(\vec{\theta})\rangle\) with \(j\neq k\), and remove them by setting them to a constant value. This procedure results in a minimal quantum circuit, which is maximally expressive if the number of remaining parameters is equal to \(\dim(S)\).

Figure 5: Illustration of the tangent space (illustrated in yellow) of the manifold \(M\) (illustrated in grey) of states \(|C(\vec{\theta})\rangle\) that a given parametric quantum circuit can reach, in the point \(\vec{\theta}\) (red dot), spanned by the tangent vector \(|\partial_{j}C(\vec{\theta})\rangle\) (blue arrow). Figure and caption adapted from Ref. [105].

As an example, Ref. [102] applied the dimensional expressivity analysis to the commonly used EfficientSU2 2-local quantum circuit of Qiskit [104], shown exemplarily for 3 qubits in Fig. 6.

Figure 6: QISKIT's EfficientSU2 2-local circuit with 2 layers. The coloured gates are the gates that can be removed after applying the dimensional expressivity analysis. Figure and caption adapted from Ref. [102].
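As a small illustration of both the EfficientSU2 ansatz of Fig. 6 and the parameter counting that the dimensional expressivity analysis starts from, the sketch below (assuming a reasonably recent Qiskit version; the observable and parameter values are arbitrary illustrative choices) builds the 3-qubit, 2-layer circuit, reports its number of variational parameters, and evaluates a VQE-type cost function of the form of Eq. (8) classically.

```python
import numpy as np
from qiskit.circuit.library import EfficientSU2
from qiskit.quantum_info import Statevector, SparsePauliOp

# 3-qubit EfficientSU2 2-local circuit with 2 layers, as in Fig. 6
ansatz = EfficientSU2(num_qubits=3, reps=2)
print("number of variational parameters:", ansatz.num_parameters)

# Toy observable standing in for the problem Hamiltonian H of Eq. (8)
hamiltonian = SparsePauliOp.from_list([("ZZI", 1.0), ("IXX", 0.5)])

# Evaluate C(theta) = <psi(theta)| H |psi(theta)> for one random parameter set
rng = np.random.default_rng(seed=1)
theta = rng.uniform(0.0, 2.0 * np.pi, ansatz.num_parameters)
psi = Statevector.from_instruction(ansatz.assign_parameters(theta))
print("C(theta) =", psi.expectation_value(hamiltonian).real)
```

In an actual VQE run, a classical optimizer would iterate over \(\vec{\theta}\), and the expectation value would be estimated from repeated measurements on the quantum device rather than from the exact statevector; the dimensional expressivity analysis then asks which of these parameters are redundant and can be frozen.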
In this quantum circuit, the coloured unitary gates turn out to be redundant and thus can be removed from the circuit.

### How to deal with gauge fields and fermions?

The Hamiltonian lattice formulation of the field theories relevant for the Standard Model comprises two major ingredients: fermionic matter and bosonic (gauge) fields. For implementing the fermionic degrees of freedom, most simulations so far have used a Jordan-Wigner transformation to translate them to qubits. While the Jordan-Wigner mapping associates each fermionic degree of freedom to a single qubit, it transforms originally local terms to long-range interactions in the form of Pauli strings for models in (2+1) dimensions and beyond [106, 8, 21, 22, 18]. More local encodings avoiding the long-range Pauli strings are known [107, 108, 109], and there are recent efforts to develop new locality-preserving mappings that are suitable for quantum computing [110, 111, 112]. However, these come in general at the expense of requiring more qubits than a Jordan-Wigner approach. In order to deal with the gauge fields, a variety of methods are available. Early works on digital quantum computing mainly focused on gauge models in (1+1) dimensions on lattices with open boundaries [113, 18], for which the gauge degrees of freedom can be completely integrated out [114]. This allows for obtaining a formulation directly on the gauge-invariant subspace of the theory. In higher dimensions, the gauge fields can no longer be fully eliminated, and the infinite Hilbert spaces associated with the gauge fields have to be truncated to a finite dimension. Several approaches for such a truncation have been proposed in the literature. Using the basis in which the electric field energy is diagonal, the model can be truncated in the irreducible representations of the gauge group, while preserving the gauge symmetry [115]. However, when using this approach, one is typically restricted to regimes where the (color) electric field is small, and one cannot explore the weak-coupling regime. Alternatively, one can work in the magnetic basis, which renders the magnetic term of the Hamiltonian diagonal, and truncate this basis at a finite number of elements [29, 26]. Similar to the truncation in the irreducible representations, this approach works best in regimes where one is close to an eigenstate of the magnetic part of the Hamiltonian and fails at strong coupling. Another approach that has been put forward is to digitize the gauge degrees of freedom by taking discrete subgroups or subsets of the gauge group [116, 117]. In addition, especially for non-Abelian theories, loop-string-hadron formulations have been suggested, which allow for solving the non-Abelian gauge constraints locally and only impose an Abelian constraint on the gauge links [118, 119, 120]. Moreover, quantum link models [121, 122] provide formulations of gauge theories, where the Hilbert space dimensions are by construction finite dimensional at the expense of an extra dimension which has to be taken to infinity to obtain the continuum limit [123, 124]. For more details and references, see the review in Ref. [125].

### Which field theories have already been simulated on quantum computers?

The ambitious goal of quantum computing the Standard Model is still far away, given the sizes and noise levels of current NISQ devices.
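To make the Jordan–Wigner mapping of the previous subsection explicit for a single spinless hopping term between neighbouring sites, one standard identity (an added worked example, quoted in one common convention with \(\hat{\phi}_{n}=\big(\prod_{m<n}\hat{Z}_{m}\big)\hat{\sigma}^{-}_{n}\); signs and operator ordering depend on the chosen convention) reads

\[\hat{\phi}_{n}^{\dagger}\hat{\phi}_{n+1}+\mathrm{h.c.}=\frac{1}{2}\left(\hat{X}_{n}\hat{X}_{n+1}+\hat{Y}_{n}\hat{Y}_{n+1}\right),\qquad\hat{\phi}_{n}^{\dagger}\hat{\phi}_{m}=\hat{\sigma}^{+}_{n}\left(\prod_{k=n+1}^{m-1}\hat{Z}_{k}\right)\hat{\sigma}^{-}_{m}\quad(m>n+1).\]

The nearest-neighbour term stays two-local, which is why the Jordan–Wigner approach works well in (1+1) dimensions, whereas the \(\hat{Z}\)-string in the second expression is what turns originally local terms into long-range Pauli strings once the one-dimensional qubit ordering has to snake through a higher-dimensional lattice.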
Still, first quantum computations of (1+1)-dimensional gauge theories have already been accomplished, showing some of the relevant characteristics of the SM, including phase transitions [8, 9, 10, 11, 21, 22]. In addition, resource-efficient formulations of gauge theories for quantum computations have been developed (see, e.g., Refs. [23, 24, 25, 26, 27, 28]). To perform lattice field theory computations on NISQ devices, one can use hybrid quantum-classical algorithms, such as VQE (see Sec. 5.1.1) and VQD (see Sec. 5.1.2). These algorithms have already found numerous applications for studying benchmark models of particle physics in (1+1) dimensions (see, e.g., Refs. [98, 6, 99] for reviews). #### 5.4.1 Example: computing hadron masses of (1+1)-dimensional SU(2) gauge theory One example is the recently accomplished implementation of a non-Abelian gauge theory with both gauge and matter fields on a quantum computer [21]. As shown in Fig. 7(a), the hadron masses of the (1+1)-dimensional SU(2) gauge theory were computed using VQE (see Sec. 5.1.1) and the IBM-Q Casablanca quantum processor. The corresponding lattice Hamiltonian reads \[\hat{H}_{\rm SU(2)_{(1+1)D}}=\frac{1}{2a}\sum_{n=1}^{N-1}\left(\hat{\phi}_{n} ^{\dagger}\hat{U}_{n}\hat{\phi}_{n+1}+{\rm h.\;c.}\right)+m\sum_{n=1}^{N}(-1) ^{n}\hat{\phi}_{n}^{\dagger}\hat{\phi}_{n}+\frac{ag^{2}}{2}\sum_{n=1}^{N-1} \hat{L}_{n}^{2}, \tag{10}\] where \(N\) is the number of lattice sites with spacing \(a\), \(g\) is the gauge coupling, \(m\) is the mass of the "quark" fermion, \(\hat{\phi}_{n}=\left(\hat{\phi}_{n}^{1},\hat{\phi}_{n}^{2}\right)^{T}\) is the staggered "quark" fermion field at site \(n\) with two color components (red and green), \(\hat{U}_{n}\) is the gauge link connecting sites \(n\) and \(n+1\), and \(\hat{L}_{n}^{2}=\sum_{a}\hat{L}_{n}^{a}\hat{L}_{n}^{a}=\sum_{a}\hat{R}_{n}^{a} \hat{R}_{n}^{a}\) is the color electric field energy, where \(\hat{L}_{n}^{a}\) and \(\hat{R}_{n}^{a}\) (with \(a=x,y,z\)) are the left and right color electric field components on the link \(n\). For the non-Abelian SU(2) gauge group, the right and left color electric field are different and are related via the adjoint representation \(\hat{R}_{n}^{a}=\sum_{b}(\hat{U}_{n}^{\rm adj})_{ab}\hat{L}_{n}^{b}\), where \((\hat{U}_{n}^{\rm adj})_{ab}=2\mbox{Tr}\left[\hat{U}_{n}\hat{T}^{a}\hat{U}_{n }^{\dagger}\hat{T}^{b}\right]\), \(\hat{T}^{a}=\hat{\sigma}^{a}/2\) are the three generators of the SU(2) algebra, and \(\hat{\sigma}^{a}\) are the Pauli matrices. In Fig. 7(a), the mass \(M_{\rm b}=E_{\rm b}-E_{\rm v}\) of the lightest baryon is shown, which is defined as the energy gap between the lowest baryon state \(E_{\rm b}\) and the vacuum state \(E_{\rm v}\). In order to obtain these results, Ref. [21] prepared both states on the quantum hardware, for the exemplary parameters \(N=4\), \(\tilde{m}\equiv am=1\), and \(x\equiv 1/(ag)^{2}\in[0,5]\). #### 5.4.2 Example: performing real-time evolution for (1+1)-dimensional SU(3) gauge theory Going from SU(2) to SU(3) gauge theory, Ref. [14] recently accomplished the first quantum computation of pure (1+1)-dimensional SU(3) gauge theory dynamics. Instead of discussing the original lattice Hamiltonian considered in Ref. [14], we would like to now give an example of the final Hamiltonian used for the actual quantum computation, after applying the methods discussed in Sec. 5.3. 
Using the lowest non-trivial truncation in the color parity basis, the original lattice Hamiltonian of the SU(3) gauge theory was simplified to [14] \[\hat{H}_{\rm trunc.SU(3)_{(1+1)D}}=\left(\frac{4}{3}g^{2}+\frac{11}{4g^{2}}\right)\hat{\mathds{1}}+\left(-\frac{4}{3}g^{2}+\frac{1}{4g^{2}}\right)\hat{Z}-\frac{1}{\sqrt{2}g^{2}}\hat{X}, \tag{11}\] where \(g\) is the gauge coupling and \(\hat{X}\) and \(\hat{Z}\) are the first and third Pauli operators, respectively. Figure 7(b) shows the results of performing the associated real-time evolution on IBM-Q's Athens quantum processor with \(g=1\), starting in the electric vacuum. While these results have been obtained for pure SU(3) gauge theory without fermions, we note that there has very recently been the first quantum computation of real-time evolution of tetraquark physics in SU(3) gauge theory plus fermionic matter, i.e., (1+1)-dimensional "QCD" [22].

Figure 7: (a) VQE calculation of the baryon mass, for \(N=4\) lattice sites, a “quark” mass of \(\bar{m}=am=1\), and a range of values for the inverse coupling constant \(x=1/(ag)^{2}\in[0,5]\). The baryon is an SU(2)-“proton” (see inset), and the error bars are smaller than the markers. Figure reprinted with permission from Ref. [21], [https://doi.org/10.1038/s41467-021-26825-4](https://doi.org/10.1038/s41467-021-26825-4), under the Creative Commons Attribution 4.0 International License. (b) VQE results for the energy in the electric field of the one-plaquette SU(3) system evolved according to the Hamiltonian in Eq. (11). The data points correspond to the average value and the maximal extent of 68% binomial confidence intervals across four implementations on IBM-Q’s Athens quantum processor. Figure reprinted with permission from Ref. [14], [https://doi.org/10.1103/PhysRevD.103.094501](https://doi.org/10.1103/PhysRevD.103.094501), copyright (2021) by the American Physical Society.

### Outlook: combining quantum computations with classical MCMC computations

In this section, we would like to discuss an example of _combining_ quantum computations with classical MCMC computations in the future. As proposed in Ref. [7], one may combine small-scale quantum computations with large-scale MCMC computations, in order to overcome the MCMC problem of critical slowing down (see Sec. 2.1). In contrast to MCMC methods, quantum computing does not face any obstacles when investigating small lattice spacings. Thus, one may match the results of short-distance quantities obtained from quantum computations with the ones coming from MCMC simulations in the strong and intermediate coupling regime. This matching can then be used, e.g., to compute the \(\Lambda\)-parameter [7]. In order to set the physical value of the lattice spacing, one needs to compute an observable, such as the spectral gap \(\Delta=E_{1}-E_{0}\). Figure 8 shows the results for the spectral gap of (2+1)-dimensional QED for truncation levels up to \(l=3\), using the VQD algorithm (see Sec. 5.1.2) and Qiskit's EfficientSU2 quantum circuit (shown exemplarily for 3 qubits and 2 layers in Fig. 6) with up to 5 layers. The results in Fig. 8 have been obtained using a classical simulator of the quantum hardware without noise and with an infinite number of shots, in order to test the feasibility of the method for a small number of qubits, and the method can be implemented on quantum hardware in the future.
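Since the truncated one-plaquette Hamiltonian in Eq. (11) acts on a single qubit, its real-time evolution can be cross-checked exactly on a classical computer, in the same spirit as the exact-diagonalization comparisons mentioned for Fig. 8. The sketch below is an added illustration: \(g=1\) and the initial state \(|0\rangle\) follow the setup quoted for Fig. 7(b), while the observable plotted there, the electric-field energy, would additionally require the electric part of the Hamiltonian, which is not spelled out separately in Eq. (11); here we simply diagonalize the \(2\times 2\) matrix and track the probability of leaving the initial state.

```python
import numpy as np
from scipy.linalg import expm

g = 1.0
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Truncated one-plaquette SU(3) Hamiltonian of Eq. (11)
H = (4.0 * g**2 / 3.0 + 11.0 / (4.0 * g**2)) * I2 \
    + (-4.0 * g**2 / 3.0 + 1.0 / (4.0 * g**2)) * Z \
    - X / (np.sqrt(2.0) * g**2)

evals = np.linalg.eigvalsh(H)
print("eigenvalues:", evals, " gap:", evals[1] - evals[0])

# Exact evolution |psi(t)> = exp(-i H t)|0>, starting from the electric vacuum |0>
psi0 = np.array([1.0, 0.0], dtype=complex)
for t in np.linspace(0.0, 3.0, 7):
    psi_t = expm(-1j * H * t) @ psi0
    print(f"t = {t:.2f}   P(leave |0>) = {abs(psi_t[1])**2:.4f}")
```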
Other avenues of combining small-scale quantum computations with large-scale MCMC computations have been proposed, e.g., for addressing the MCMC problem of interpolator optimization [126, 127]. In general, for computations that combine lattice results from the Hamiltonian and Lagrangian formulations, several challenges need to be addressed. First, lattice field theories are usually expressed in the Lagrangian formulation, and one needs to derive and optimize the corresponding Hamiltonian formulation (see, e.g., Refs. [128, 129, 130, 131]). Second, one needs to match the different bare parameters and observables between these two formulations (see, e.g., Refs. [132, 133, 134, 135]). Third, one needs to implement the same lattice fermion formulation, but lattice computations in the Hamiltonian formulation have so far mainly focused on staggered fermions. Recently, there has been progress with the implementation of Wilson fermions in the Hamiltonian formulation (see, e.g., Refs. [136, 137]) and with determining the resulting mass shift [61].

## 6 Summary and outlook: where do we stand, where will we go?

### Short summary: quantum hardware and algorithms

Quantum computing offers the prospect to overcome challenges of MCMC methods, including the sign problem and the problem of critical slowing down. On the hardware side, digital gate-based quantum computers with \(\mathcal{O}(10-100)\) noisy qubits are currently available. On the algorithms side, first resource-efficient quantum algorithms for gauge theories in (1+1) and (2+1) dimensions have been developed, and first proof-of-concept quantum computations of gauge theories in (1+1) dimensions have been accomplished.

Figure 8: Classically simulated VQD results of spectral gap \(\Delta E=E_{1}-E_{0}\) for (2+1)-dimensional QED in the electric basis, as a function of the gauge coupling \(g\) at vanishing fermion mass, \(m=0\). Both the VQD results (dots) and the exact diagonalization (ED) results (lines) are shown for truncation levels up to \(l=3\). The shaded area corresponds to the region where the results are not precise enough to estimate the gap reliably. Figure and caption adapted from Ref. [7].

Thus, the research field of quantum computing for lattice field theory is still in its very early stages of development. The path towards quantum computations of lattice field theories in (3+1) dimensions, in particular Lattice QCD, requires numerous steps of quantum algorithm and hardware improvement.

### Discussion and outlook: the future of quantum computing

The field of quantum computing recently entered the era of noisy, intermediate-scale quantum devices, allowing for first computations of lattice gauge theories on lattice sizes comparable to the ones in the pioneering work by Creutz [138] more than 40 years ago (see Fig. 9). The investigation of lattice field theories with classical MCMC algorithms has seen tremendous advances in these 40 years, currently allowing the simulation of QCD with physical values of the quark masses for the first two generations of quarks. To reach the current era of large-scale, high-precision computations on today's supercomputers, which allows the computation of the light QCD spectrum and much more, both the classical algorithms and the classical hardware had to be improved to a level that might have been considered impossible from the perspective of the 1980's.
Thus, one may ask: can the field of quantum computing for lattice field theory develop similarly to the field of classical computing for lattice field theory, reaching a similar level of precision in the upcoming decades? Of course, this question cannot be answered, because such an answer strongly depends on the future development of quantum hardware and algorithms, which are both unknown. However, one can make a few rough estimates regarding the requirements of quantum computing for Lattice QCD. The spatial lattice volumes that can currently be simulated classically are large, up to \(96^{3}\), and would require \({\cal O}(10^{7}-10^{8})\) qubits on the quantum computing side [139]. Considering fault-tolerant logical qubits, this number would get multiplied by a factor of \(\sim 1000\), as explained in Sec. 4.3.2. The quantum computing roadmap of various companies suggests that fault-tolerant quantum computing might become reality in the current or next decade (see, e.g., Ref. [72, 140]). Following these roadmaps and extrapolating them in time (which poses various challenges for quantum hardware and error correction), a quantum computation of Lattice QCD with spatial lattice volumes of \(96^{3}\) might become feasible in the 2040s or 2050s. This time period would be long but comparable to the time since the first days of classical lattice field theory computations. At this point, we wish to emphasize that for quantum computing to become useful for lattice field theory computations, spatial lattice volumes as large as \(96^{3}\) are not required. For simulating sign-problem-afflicted regimes that can neither be studied with MCMC nor with classical methods beyond MCMC, such as TN methods, it would be sufficient to perform quantum computations with much smaller lattice volumes. Examples of outperforming the best current classical algorithms in sign-problem-afflicted regimes have already been provided, e.g., in the context of (1+1)-dimensional lattice field theories, as explained in Sec. 3. ### Discussion and outlook: the race between quantum and classical computing In the previous section, we brought up a comparison between the history of classical computing and the possible future of quantum computing for lattice field theory. This comparison raises an important conceptual question: if the classical algorithms and hardware have progressed so tremendously in the past, might they develop quickly enough in the future to catch up with any progress in quantum computing? There have been several examples in the history of quantum computing, where a quantum advantage turned into a "classical advantage" after the classical algorithms had been improved. For example, Ref. [141] demonstrated a quantum computation on 53 qubits taking 200s, substantially shorter than the run time of 10,000y of the classical algorithm. Due to a loophole in the classical algorithm, the run time was later reduced to a few days, also when slightly increasing the qubit number [142], and was further reduced to 304s in Ref. [143], thus comparable to the run time of the quantum computation. Would it be possible that classical algorithms for lattice field theory, including MCMC-based methods, TN methods, machine-learning methods, and other methods to overcome or mitigate the sign problem, might advance quickly enough in the upcoming decades that quantum computing would not be needed anymore? 
This is again a question that cannot be answered, because the answer strongly depends on the future development of classical hardware and algorithms, which are both unknown. Even when classical and quantum computations can compete, quantum computing could still give advantages, e.g., in specific parameter regimes of the lattice field theory or, more generally, due to lower power consumption. Crucially, whenever one encounters an exponentially hard classical problem, a small quantum step corresponds to a giant classical leap. For example, to simulate out-of-equilibrium dynamics, the errors of the best known classical algorithms grow exponentially in time, as discussed in Sec. 3. For highly entangled quantum systems, we expect that quantum computing will be able to outperform classical computing for lattice field theories in (3+1) dimensions in the future, once sufficient resources are available.

Figure 9: First lattice field theory computation using classical MC methods. Wilson loops of pure SU(2) gauge fields at \(\beta\) = 3 as a function of lattice size. Figure reprinted with permission from Ref. [138], [https://doi.org/10.1103/PhysRevD.21.2308](https://doi.org/10.1103/PhysRevD.21.2308), copyright (1980) by the American Physical Society.

## Acknowledgments

L.F. is partially supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C\({}^{2}\)QA) under contract number DE-SC0012704, by the DOE QuantiSED Consortium under subcontract number 675352, by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, [http://iaiffi.org/](http://iaiffi.org/)), and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under grant contract numbers DE-SC0011090 and DE-SC0021006. S.K. acknowledges financial support from the Cyprus Research and Innovation Foundation under projects "Future-proofing Scientific Applications for the Supercomputers of Tomorrow (FAST)", contract no. COMPLEMENTARY/0916/0048, and "Quantum Computing for Lattice Gauge Theories (QC4LGT)", contract no. EXCELLENCE/0421/0019.
2308.15757
Phase transition to chaos in complex ecosystems with non-reciprocal species-resource interactions
Non-reciprocal interactions between microscopic constituents can profoundly shape the large-scale properties of complex systems. Here, we investigate the effects of non-reciprocity in the context of theoretical ecology by analyzing a generalization of MacArthur's consumer-resource model with asymmetric interactions between species and resources. Using a mixture of analytic cavity calculations and numerical simulations, we show that such ecosystems generically undergo a phase transition to chaotic dynamics as the amount of non-reciprocity is increased. We analytically construct the phase diagram for this model and show that the emergence of chaos is controlled by a single quantity: the ratio of surviving species to surviving resources. We also numerically calculate the Lyapunov exponents in the chaotic phase and carefully analyze finite-size effects. Our findings show how non-reciprocal interactions can give rise to complex and unpredictable dynamical behaviors even in the simplest ecological consumer-resource models.
Emmy Blumenthal, Jason W. Rocks, Pankaj Mehta
2023-08-30T04:56:43Z
http://arxiv.org/abs/2308.15757v2
# Phase transition to chaos in complex ecosystems with non-reciprocal species-resource interactions ###### Abstract Non-reciprocal interactions between microscopic constituents can profoundly shape the large-scale properties of complex systems. Here, we investigate the effects of non-reciprocity in the context of theoretical ecology by analyzing a generalization of MacArthur's consumer-resource model with asymmetric interactions between species and resources. Using a mixture of analytic cavity calculations and numerical simulations, we show that such ecosystems generically undergo a phase transition to chaotic dynamics as the amount of non-reciprocity is increased. We analytically construct the phase diagram for this model and show that the emergence of chaos is controlled by a single quantity: the ratio of surviving species to surviving resources. We also numerically calculate the Lyapunov exponents in the chaotic phase and carefully analyze finite-size effects. Our findings show how non-reciprocal interactions can give rise to complex and unpredictable dynamical behaviors even in the simplest ecological consumer-resource models. Many complex systems operate out of equilibrium where components generically interact non-reciprocally. Significant current research aims to untangle the implications of non-reciprocal interactions for self-organization and pattern formation. While much progress has been made towards understanding non-reciprocity in systems composed of a few types of species or fields, the consequences of non-reciprocity in more complex systems composed of many interacting components are less clear [1; 2; 3; 4]. Large, diverse ecosystems with many types of species and resources provide a natural setting for exploring this open problem. Over the last decade, researchers have adapted methods from the statistical physics of disordered systems (e.g., replicas, the cavity method, Random Matrix Theory) to analyze such ecosystems [5; 6; 7; 8; 9; 10; 11; 12]. Much of this work has focused on systems with reciprocal interactions in which dynamics are often implicitly governed by an optimization function and reach a fixed point [13]. One notable exception are recent studies of the random Generalized Lotka-Volterra model in which species interact non-reciprocally [14; 15; 16; 17; 18; 19]. These systems can exhibit novel behaviors such as dynamic fluctuations and chaos, including unpredictable "boom-and-bust" dynamics where low-abundance species suddenly bloom to high abundance [20]. These observations suggest that non-reciprocal interactions can qualitatively change ecological dynamics in species-only models. However, the generalization of these observations to more complex ecosystems with multiple trophic layers or environmentally-mediated interactions remains unexplored. Here, we introduce a generalization of the classic MacArthur Consumer Resource Model (MCRM) that includes non-reciprocal interactions between species and resources. Consumer-resource models, first introduced by MacArthur and Levins [21; 22; 23], have played a foundational role in modern theoretical ecology and undergird many powerful theoretical frameworks for understanding ecological competition, including contemporary niche theory and Tilman's R* principle [24; 25]. **Theoretical Setup**. 
We consider an ecosystem with \(i=1,\ldots,S\) species which may consume \(\alpha=1,\ldots,M\) distinct self-replenishing resources with dynamics governed by the equations, \[\frac{\mathrm{d}N_{i}}{\mathrm{d}t} =N_{i}\Biggl{(}\sum_{\alpha=1}^{M}c_{i\alpha}R_{\alpha}-m_{i} \Biggr{)}, \tag{1}\] \[\frac{\mathrm{d}R_{\alpha}}{\mathrm{d}t} =R_{\alpha}(K_{\alpha}-R_{\alpha})-\sum_{i=1}^{S}N_{i}e_{i\alpha} R_{\alpha}, \tag{2}\] where \(N_{i}\) is the population size of species \(i\), \(R_{\alpha}\) is the abundance of resource \(\alpha\), \(c_{i\alpha}\) is the relative consumption preference of species \(i\) for resource \(\alpha\), \(e_{i\alpha}\) describes the impact of species \(i\) on resource \(\alpha\), \(m_{i}\) is the natural mortality rate of species \(i\), and \(K_{\alpha}\) is the carrying capacity of resource \(\alpha\) in the absence of consumption. We call this model the asymmetric MacArthur Consumer Resource Model (aMCRM) with a schematic provided in Fig. 1. When \(e_{i\alpha}=c_{i\alpha}\) the species-resource interactions become reciprocal, or symmetric, and the aMCRM reduces to the classical MacArthur Consumer Resource Model (MCRM). To develop intuition for the role of non-reciprocity in the aMCRM, we consider the limit where the resource dynamics are fast and the resource abundances become entrained to species dynamics. In this case, we take the RHS of Eq. (2) to be zero and solve to find \(R_{\alpha}=\max\{0,K_{\alpha}-\sum_{i}N_{i}e_{i\alpha}R_{\alpha}\}\). Substituting this result into the equation for species dynamics yields an ef fective Lotka-Volterra equation, \[\begin{split}\frac{\mathrm{d}N_{i}}{\mathrm{d}t}=& N_{i}\Bigg{(}\kappa_{i}-\sum_{j=1}^{S}A_{ij}N_{j}\Bigg{)},\\ \kappa_{i}=&\sum_{\alpha=1}^{M}c_{i\alpha}K_{\alpha} -m_{i},\\ A_{ij}=&\sum_{\alpha=1}^{M}c_{i\alpha}e_{j\alpha} \Theta(R_{\alpha}),\end{split} \tag{3}\] where \(\kappa_{i}\) is the effective carrying capacity for species \(i\) and \(A_{ij}\) is the effective species-species interaction matrix, encoding how species \(j\) impacts species \(i\) (\(\Theta\) is the Heaviside function). Although typically not quantitatively accurate, this approximation provides useful qualitative insight into the nature of the non-reciprocal interactions. In MacArthur's original consumer-resource model, impacts and benefits are identical, \({e_{i\alpha}=c_{i\alpha}}\). In this case, \(A_{ij}\) is symmetric, all interactions are reciprocal, the ecosystem has a unique fixed point, and the resulting steady state can be derived using an optimization principle [13]. Such behavior is expected because choosing \({c_{i\alpha}=e_{i\alpha}}\) implicitly assumes that each species consumes resources proportional to the marginal utility conferred to that species (in the context of game theory and microeconomics, this is a "rational strategy"). When the resource-species interactions are non-reciprocal, \({e_{i\alpha}\neq c_{i\alpha}}\), \(A_{ij}\) is no longer symmetric, the resulting dynamics can no longer be described using an optimization principle, and there is no guarantee that the dynamics will reach a stable fixed point. **Thermodynamic Limit.** To investigate the aMCRM, we work in the thermodynamic limit where the numbers of species \(S\) and resources \(M\) become very large while their ratio \(M/S\) is held fixed. We assume that parameters are drawn randomly from a fixed distribution analogous to quenched disorder. 
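Before the parameter ensemble is specified below, a minimal numerical sketch of Eqs. (1)–(2) may be useful. This is an added illustration: the community size, the positive random parameters, and the integration horizon are arbitrary choices and are not the ensembles or system sizes used for the results in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S, M = 20, 20                                    # small community, for illustration only
c = np.abs(rng.normal(1.0 / M, 0.5 / np.sqrt(M), size=(S, M)))  # consumption preferences c_{i,alpha}
e = np.abs(rng.normal(1.0 / M, 0.5 / np.sqrt(M), size=(S, M)))  # impacts e_{i,alpha} (e != c: non-reciprocal)
m = rng.normal(1.0, 0.1, size=S)                 # mortality rates m_i
K = rng.normal(5.0, 1.0, size=M)                 # carrying capacities K_alpha

def amcrm(t, y):
    N, R = y[:S], y[S:]
    dN = N * (c @ R - m)                         # Eq. (1)
    dR = R * (K - R) - (N @ e) * R               # Eq. (2)
    return np.concatenate([dN, dR])

y0 = np.concatenate([np.full(S, 0.1), np.full(M, 1.0)])
sol = solve_ivp(amcrm, (0.0, 200.0), y0, method="LSODA", rtol=1e-8, atol=1e-10)
print("surviving species:     ", int(np.sum(sol.y[:S, -1] > 1e-6)))
print("non-depleted resources:", int(np.sum(sol.y[S:, -1] > 1e-6)))
```

The ratio of these two counts is the species-packing ratio that, as shown later in the paper, controls where the transition to chaos occurs.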
To ensure a proper thermodynamic limit, parameters are drawn as follows: \[\begin{split} K_{\alpha}=K+\sigma_{K}\delta K_{\alpha},\quad m_{ i}=m+\sigma_{m}\delta m_{i},\\ c_{i\alpha}=\frac{\mu_{c}}{M}+\frac{\sigma_{c}}{\sqrt{M}}d_{i \alpha},\\ e_{i\alpha}=\frac{\mu_{e}}{M}+\frac{\sigma_{e}}{\sqrt{M}}\Big{(} \rho d_{i\alpha}+\sqrt{1-\rho^{2}}x_{i\alpha}\Big{)}\end{split} \tag{4}\] where \(\delta K_{\alpha},\delta m_{i},d_{i\alpha},x_{i\alpha}\) are independent standard random variables (i.e., zero mean and unit variance) and \(|\rho|\leq 1\) is the interaction reciprocity parameter. For simplicity, we take \(\mu_{c}=\mu_{e}\equiv\mu\) and \(\sigma_{c}=\sigma_{e}\equiv\sigma\) in all figures and simulations. The central limit theorem ensures that, in the thermodynamic limit, our results are agnostic to the exact form of the underlying distributions and depend only on first and second moments. Therefore, we sample all parameters from normal distributions unless otherwise specified. With this parameterization, \(\rho\) controls the level of reciprocity of species-resource interactions through the correlation of consumption benefits and impacts: \[\mathrm{Cor}(c_{i\alpha},e_{j\beta})=\rho\;\delta_{ij}\delta_{\alpha\beta}. \tag{5}\] When \(\rho=1\), the aMCRM reduces to the fully symmetric MCRM; when \(\rho=0\), the aMCRM models completely non-reciprocal species-resource interactions. By tuning \(\rho\), we can systematically explore the effects of non-reciprocity. **Cavity Method.** Just as in the original MCRM, we can analytically calculate the thermodynamic-limit behavior using the cavity method [11; 12; 26; 27]. Unlike replicas, the cavity method does not require the existence of an energy function and therefore can be extended to the aMCRM. We assume dynamics are self-averaging and described by a replica-symmetric ansatz. Using this ansatz, we derive self-consistent mean-field equations for the fraction of surviving species, the fraction of non-depleted resources, the first and second moments of the steady-state species and resource abundances, and the trace of two relevant susceptibility matrices (see Appendix A for detailed derivations). As seen in Figs. A6 and A8, numerical simulations and analytical predictions agree remarkably well for moderate non-reciprocity. **Transition to Dynamic Phase.** Without reciprocal interactions, the aMCRM has no guarantee of reaching a steady state. In fact, we find that when the interaction reciprocity \(\rho\) is less than a critical \(\rho^{\star}\), the aMCRM exhibits a phase transition from a unique self-averaging steady state to a chaotic dynamic phase. Fig. 2 shows numerical simulations of typical resource and species dynamics observed in each phase (see Appendix D for simulation details [28; 29; 30; 31; 32; 33]). Using the cavity method, we can analytically compute the phase boundary between the stable and dynamic phases [26]. We perturb the non-zero steady-state species and resource abundances, \({N_{i}\to N_{i}+\varepsilon\eta_{i}^{(N)}}\) and \({R_{\alpha}\to R_{\alpha}+\varepsilon\eta_{\alpha}^{(R)}}\), where \(\varepsilon\) is a small parameter and Figure 1: Schematic of the asymmetric MacArthur Consumer Resource Model (aMCRM). Species \(i\) benefits with relative weight \({c_{i\alpha}}\) from consuming resource \(\alpha\) and impacts the abundance of the resource with relative weight \({e_{i\alpha}}\). 
\(\eta_{i}^{(N)},\eta_{\alpha}^{(R)}\) are independent standard random variables, and calculate the susceptibilities \(\mathrm{d}N_{i}/\mathrm{d}\varepsilon\), \(\mathrm{d}R_{\alpha}/\mathrm{d}\varepsilon\). Because of the disordered nature of the perturbation, the expectations of the first moments of the susceptibilities are zero, but the second moments, \(\langle\,(\mathrm{d}N_{i}/\mathrm{d}\varepsilon)^{2}\rangle\), \(\langle(\,\mathrm{d}R_{\alpha}/\mathrm{d}\varepsilon)^{2}\rangle\), are non-zero (see Appendix B for details). The phase transition to the dynamic phase is signaled by the divergence of the these susceptibilities' second moments (see Fig. 3). Surprisingly, we find that \(\rho^{\star}\), the critical value marking the phase transition to chaos, depends on model parameters only through the species packing fraction, the ratio of surviving species to non-depleted resources, via the expression (see Appendix B): \[(\rho^{\star})^{2}=\frac{(\text{\# of surviving species})}{(\text{\# of non-depleted resources})}. \tag{6}\] When \(\rho<\rho^{\star}\) the ecosystem undergoes a phase transition to chaos. As the number of surviving species and non-depleted resources are fixed by model parameters, the above equation defines a co-dimension-one phase boundary in the parameter space. Beyond this boundary in the dynamic phase, the second moments of the susceptibilities become negative, indicating that the replica-symmetric ansatz no longer holds, and its results are unstable to any perturbation. Fig. 3(a) shows a phase diagram overlain on a heatmap of the fraction of simulations that reach steady state within a chosen finite runtime. We highlight the locations of the simulations in the stable and dynamic phases in Fig. 2 with a circle and a star, respectively. In Fig. 3(b), we plot the second moments of the susceptibilities as a function of \(\rho\) with fixed \(\sigma\) along the slice of phase space indicated by the dashed line in Fig. 3(a). The susceptibilities' variances diverge at the phase transition and become invalidly negative in the dynamic phase. As the phase transition is approached, the fraction of simulations that reach steady state in a finite simulation time sharply decreases. An alternative phase diagram with parameters drawn from uniform distributions is shown in Fig. A10. Finally, we note that for certain choices of parameters, the replica-symmetric self-consistent equations do not have a solution. This transition to infeasibility has an interesting interpretation but is not physically realized because it occurs within the dynamic phase where the replica symmetric solution is unstable (see Appendix A.5). **Chaos.** In order to better understand the transition to chaos, we numerically computed the maximal Lyapunov Figure 3: Phase diagram of the aMCRM and diverging susceptibility. (a) Heatmap of the fraction of simulations which reached steady state in finite simulation time for various values of \(\rho\), the level of reciprocity of species-resource interactions, and \(\sigma\), the magnitude of fluctuations in species-resource interactions. Overlain is the cavity method-calculated phase boundary. (b) Variances of susceptibilities of mean-field species and resources as a function of \(\rho\), with \(\sigma\) fixed at the value indicated by the dashed line in (a). Figure 2: Example dynamics of the aMCRM in a community of \(S=M=256\) species and resources. Left: dynamics in the stable phase; species-resource interactions are nearly reciprocal. 
Right: dynamics in the dynamic phase; species-resource interactions are less reciprocal. The parameter values for the stable-phase and dynamic-phase simulations are respectively marked with a circle and star in Fig. 3(a). exponent \(\lambda_{1}\) of the aMCRM in the dynamic and stable phases using the "H2" method of Geist [34, 35, 36, 37]. The maximal Lyapunov exponent characterizes how quickly trajectories from nearby initial conditions diverge (positive exponent) or converge (negative exponent). As seen in Fig. 4(a), in the dynamic phase, \(\lambda_{1}>0\), while in the stable phase, \(\lambda_{1}<0\). For the parameters used in Fig. 2, \(|\lambda_{1}|\approx 5\times 10^{-3}\), indicating that the divergence or convergence of nearby trajectories occurs on a timescale of \(\lambda_{1}^{-1}=2\times 10^{2}\) time units. We further confirmed the existence of chaos by analyzing the generalized alignment index (GALI) which measures how a volume element formed by tangent vectors to a trajectory changes over time [36, 37, 38] (see Fig. C14). Further details are given in Appendix C. A direct signature of chaotic dynamics is high sensitivity to initial conditions as observed in Fig. 4(b). The red and blue lines show the simulated trajectory of a single species (top) and resource (bottom) started from initial conditions with slight differences. Initially, the trajectories are almost identical before diverging from each other significantly after a few Lyapunov times. **Finite-Size Effects.** Like most phase transitions, the transition between the stable and dynamic phases is a thermodynamic-limit phenomenon. In small ecosystems, the aMCRM may approach steady state even when in the dynamic phase due to finite-size effects. As a result, it is not clear in Fig. 3 what the true probability of steady state is in the thermodynamic limit. In Appendix E, we quantify these affects by performing a numerical analysis to extrapolate the steady-state probabilities to infinite system size for each of the two points highlighted in Fig. 3. For both sets of parameters, we measure the distribution of steady-state times for many simulations for a variety of system sizes. Using a custom method based on maximum-likelihood estimation, we then perform a finite-size scaling collapse on these distributions, allowing us to approximately determine the steady-state probabilities as a function of system size. Our scaling collapses provide strong evidence that the probability of reaching steady state in the thermodynamic limit approaches exactly zero in the dynamic phase and one in the stable phase. **Discussion.** In this letter, we analyzed the effects of non-reciprocal species-resource interactions on the stability of ecosystems. We introduced the asymmetric MacArthur Consumer Resource Model (aMCRM), a generalization of the MacArthur Consumer Resource Model (MCRM). Using the cavity method, we identified a phase transition between a stable phase in which a unique, uninvadable, self-averaging steady state exists and a dynamic phase with chaotic fluctuations. Remarkably, the phase boundary depends on model parameters only through the species-packing ratio--the ratio of surviving species to non-depleted resources. We found that the chaotic regime is generic and occurs robustly, in contrast with some recent works on Generalized Lotka-Volterra models with completely antisymmetric interactions where chaos is often hard to nucleate [14, 15]. 
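A rough way to reproduce the sensitivity to initial conditions shown in Fig. 4(b) — not the "H2" method of Geist used for the reported exponents, but a simple two-trajectory proxy — is sketched below; it assumes the right-hand side `amcrm`, the initial condition `y0`, and the generator `rng` defined in the illustrative snippet after Eq. (2), and fits the early-time growth rate of the separation between two nearby trajectories.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-8                                   # size of the initial perturbation
y0_a = y0.copy()
y0_b = y0 + eps * rng.normal(size=y0.size)

t_eval = np.linspace(0.0, 200.0, 400)
sol_a = solve_ivp(amcrm, (0.0, 200.0), y0_a, t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(amcrm, (0.0, 200.0), y0_b, t_eval=t_eval, rtol=1e-10, atol=1e-12)

# The log-separation grows roughly linearly in time (slope ~ lambda_1) before saturating
sep = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
window = slice(10, 200)                      # early-time window, before saturation
lam1_estimate = np.polyfit(t_eval[window], np.log(sep[window]), 1)[0]
print("estimated maximal Lyapunov exponent:", lam1_estimate)
```

A positive slope signals the dynamic phase and a negative one the stable phase; for quantitative values and careful finite-size control, the tangent-space methods cited in the text are needed.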
In addition, the chaotic dynamics in consumer-resource models generically occurs when the systems are well below the competitive exclusion bound, while the dynamics in Generalized Lotka-Volterra systems can violate the competitive exclusion principle. Collectively, these works suggest that non-reciprocal interactions can lead to complex, chaotic dynamics in systems with many different types of species/fields. In particular, like Generalized Lotka-Volterra models, we also find that species and resources often jump rapidly between low and high abundances. In the future, it will be interesting to see if the methods developed in Ref. [20] in the context of Lotka-Volterra systems generalize to explain such boom-and-bust dynamics in consumer-resource models. Finally, it will also be interesting to understand these phenomena in the context of ecological processes such as immigration, alternative resource dynamics [39], the addition of network and metabolic structure into interactions [40; 41; 42], the inclusion of additional trophic structure [43], and spatial and temporal structure [44].

Figure 4: Chaos in the dynamic phase of the aMCRM. (a) Dot plot of \(\lambda_{1}\), maximal Lyapunov exponents, for simulations classified by whether they reach a steady state for various values of \(\rho\). (b) Two trajectories (red and blue) with slightly different initial conditions in the dynamic phase of the aMCRM. A species and a resource are highlighted to emphasize the chaotic dynamics; all other species and resources are shown at low opacity for clarity. The units of time are given by the inverse of the maximal Lyapunov exponent, \(\lambda_{1}^{-1}=190\).

**Acknowledgements.** We would like to thank Zhijie (Sarah) Feng, Claudio Chamon, and Chris Laumann for useful discussions. Additionally, we thank the Boston University Research Computing Services for managing computational resources. This work was funded by NIH NIGMS R35GM119461 to P.M. and the Boston University Undergraduate Research Opportunities Program to E.B.
2302.01779
Relativistic analytical R-matrix (ARM) theory for strong-field ionization
The analytical R-matrix (ARM) theory has been known for an efficient description of the Coulomb effects of the atomic core in strong-field ionization in the nonrelativistic regime. We generalize the ARM theory into the relativistic domain aiming at the application to strong-field ionization of highly-charged ions in ultrastrong laser fields. Comparison with the relativistic Coulomb-corrected strong field approximations (SFA) is provided, highlighting the advantages and disadvantages. The weakly relativistic asymptotics and its accordance with the nondipole Coulomb-corrected SFA are examined. As an example of a physical application of the relativistic ARM, the Coulomb enhancement of tunneling ionization probability for highly-charged ions at the cutoff of the direct channel is discussed.
Michael Klaiber, Karen Z. Hatsagortsyan, Christoph H. Keitel
2023-02-03T14:35:25Z
http://arxiv.org/abs/2302.01779v1
# Relativistic analytical R-matrix (ARM) theory for strong-field ionization ###### Abstract The analytical R-matrix (ARM) theory has been known for an efficient description of the Coulomb effects of the atomic core in strong-field ionization in the nonrelativistic regime. We generalize the ARM theory into the relativistic domain aiming at the application to strong-field ionization of highly-charged ions in ultrastrong laser fields. Comparison with the relativistic Coulomb-corrected strong field approximations (SFA) is provided, highlighting the advantages and disadvantages. The weakly relativistic asymptotics and its accordance with the nondipole Coulomb-corrected SFA are examined. As an example of a physical application of the relativistic ARM, the Coulomb enhancement of tunneling ionization probability for highly-charged ions at the cutoff of the direct channel is discussed. ## I Introduction Advances in the experimental technique for high-resolution measurements of the photoelectron and ion momentum distributions [1; 2] allowed the recent extension of experimental investigations of strong-field ionization beyond the dipole regime [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. The leading nondipole effect is the radiation pressure which is responsible for the partitioning of the absorbed photon momentum between the photoelectron and the parent ion in strong-field ionization [3; 14; 15; 16; 17; 18; 19]. Interesting dynamical properties arise due to the interplay between Coulomb effects of the atomic core and the nondipole effects [4; 5; 6; 20; 21; 22; 23; 24; 25; 26]. The nondipole theory has been developed for the interpretation of experimental results, including the strong field approximation (SFA) [14; 18; 19; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37], the numerical solution of the time-dependent Schrodinger equation (TDSE) [30; 38; 39], as well as the classical trajectory Monte Carlo (CTMC) simulations [40; 41; 12; 42]. While presently strong laser fields up to the intensity of \(10^{23}\) W/cm\({}^{2}\) are achievable [43], the relativistic regime of the laser-atom interaction in ultrastrong fields is far from deep experimental scrutiny. This is because the most interesting dynamics, including electron correlations, is expected when the atomic and laser fields are of the same magnitude. The latter necessitates dealing with an atomic system of highly-charged ions (HCI), which are extremely difficult to handle experimentally. The pioneering experiment in this field by Moore _et al._[44] more than 20 years ago at an intensity of \(3\times 10^{18}\) W/cm\({}^{2}\), has been followed by a series of fine experiments aimed at the observation of signatures of the atomic bound dynamics in the photoelectron momentum distribution (PMD) during the ionization process in external fields of relativistic intensity [45; 46; 47; 48; 49; 50; 51; 52; 53]. It was clearly shown that the drift of the electron induced by the laser magnetic field suppresses the usual electron correlation channel - the recollision, and related phenomena of high-order harmonic generation, above-threshold ionization, and nonsequential double ionization, see e.g. [40; 46; 54]. However, still it is not clarified whether specific electron correlations in the relativistic regime of strong-field ionization, such as shake-up, shake-off processes, collective tunneling, _etc._[55], would exist at suppressed recollisions. 
The workhorses of analytical investigations in strong-field physics, the strong-field approximation (SFA) [56; 57; 58], and the quasiclassical imaginary time method (ITM) [59; 60] have been generalized into the relativistic regime in [61; 62], and [63; 64; 65; 66], respectively. However, in the standard SFA, the influence of the Coulomb field of the atomic core is neglected in the electron continuum dynamics. This approximation is especially unsuitable in the case of HCI. In the non-relativistic regime the ITM has been improved to treat Coulomb field effects during the ionization and the well-known quantitatively correct Perelomov-Popov-Terent'ev (PPT) ionization rates have been derived [67] (in the adiabatic regime also known as Ammosov, Delone, Krainov (ADK) rates [68]). The PPT theory uses the quasi-classical wave function for the description of the tunneling part of the electron wave packet through the nonadiabatic barrier formed by the laser and the atomic field. The continuum wave function is matched to the exact bound state [59; 69], in this way removing the singularity in the phase of the quasi-classical wave function at the Coulomb center. The PPT theory does not address Coulomb effects during the photoelectron dynamics in the continuum. The latter is very important for the forming of various features of the photoelectron momentum distribution (PMD) and has been treated within different versions of the Coulomb-corrected SFA (CCSFA). The simplest approach is the Coulomb-Volkov ansatz when the Volkov wave function [70] in the SFA matrix element is replaced by the Coulomb-Volkov wave function [71; 72], which incorporates the asymptotic phase of the exact Coulomb continuum wave function into the phase of the Volkov state. While the Coulomb-Volkov approach can be formulated rigorously as an S-matrix expansion [73], it accounts for the coupling between the Coulomb and the laser field perturbatively and the approach fails when the electron appears in the continuum after tunneling close to the atomic core [74]. The extension of the nonrelativistic PPT theory to treat the Coulomb effects in the continuum also employs the electron continuum wave function in the eikonal approximation [75; 76; 77]. The CCSFA via the eikonal approximation has been rigorously formulated in [78; 79], evaluating the eikonal phase of the continuum electron wave function along the exact classical electron trajectories driven by the laser and the Coulomb field. The same approximation has been worked out in [80; 81] via the Feynman path integration concept. Higher-order contributions in CCSFA have been discussed in [82], by removing the Coulomb singularity with the use of the saddle-point approximation [83], rather than with the matching procedure to the bound state. An innovative way of matching the electron eikonal wave function for the continuum to the atomic bound state within the nonrelativistic SFA approach has been advanced in the analytical R-matrix (ARM) theory [84; 85; 86; 87]. Here, it has been shown that the rigorous matching procedure is equivalent to a particular (imaginary) shift of the starting point of the complex time-integration in the phase of the eikonal wave function in the SFA amplitude. The ARM theory provides the most efficient version of CCSFA. While the employed eikonal approximation in different versions of CCSFA allows the treatment of rescattering effects, however, restricts the rescattering only to soft ones [88]. 
The relativistic regime of strong-field ionization can be characterized by the following parameters. For the sub-barrier dynamics, the parameter \(\upsilon\equiv\kappa/c\sim 1\) indicates the relativistic domain, with the atomic momentum \(\kappa=\sqrt{2I_{p}}\), the ionization energy \(I_{p}\), and the speed of light \(c\). For the continuum dynamics, the relativistic domain is achieved when the relativistic invariant field parameter \(\xi\equiv E_{0}/(c\omega)\sim 1\), with the laser field amplitude \(E_{0}\), and the frequency \(\omega\). Recollisions in the relativistic regime are suppressed when the Lorentz deflection parameter \(\Gamma_{R}\gtrsim 1\), with \(\Gamma_{R}\equiv(1/16)\kappa\xi^{3}(c^{2}/\omega)\)[40; 89]. Atomic units are used throughout. The relativistic domain of strong field ionization is accessible with HCI driven by ultrastrong lasers fields. The ITM including Coulomb corrections during ionization has been extended into the relativistic regime [63; 64; 65; 66], allowing to calculate quantitatively relevant ionization rates in the relativistic case. The relativistic version of the plain SFA has been put forward by Howard Reiss in [61; 62]. The CCSFA, based on the relativistic eikonal-Volkov wave function for the continuum electron [90], has been proposed in [91]. The calculation of spin-resolved ionization probabilities in the relativistic regime using relativistic CCSFA has been provided in Ref. [92], showing the equivalence of the CCSFA to the Coulomb corrected ITM. We indicate also the significant efforts in the numerical investigations of the relativistic ionization dynamics via the Dirac equation, in particular with HCIs and superstrong laser fields, carried out in Refs. [93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105]. In this paper the ARM theory is extended into the relativistic domain aiming at the application of strong-field ionization of HCIs in ultrastrong laser fields. The ARM theory [84; 85; 86; 87] is a version of the eikonal approximation in the description of the Coulomb field of the atomic core for the electron during its continuum dynamics after ionization in a strong laser field. The main advantage of the ARM theory is that the explicit matching procedure of the continuum wave function to the bound state is replaced by the specific shift of the border of the time integration into the complex plane in the eikonal. The consequence of this procedure is that the singularity of the wave function at the saddle-point of the time integration (corresponding to the center of the Coulomb potential) is eliminated. For the extension of the ARM theory into the relativistic regime we make use of the latter property, namely, derive such a shift of the time integration border in the complex domain which eliminates the singularity of the phase of the relativistic CCSFA wave function at the time saddle-point. Finally, we apply the relativistic ARM theory for the investigation of the Coulomb enhancement effect at the cutoff of the direct ionization channel in the relativistic domain with HCIs. This effect is known in the nonrelativistic regime, described in [88; 106]. The structure of the paper is the following. In Sec. II we begin with the nonrelativistic regime, elucidating our approach for the derivation of the ARM theory amplitude, then apply it for the relativistic case in Sec. III. Examples of the application of the derived relativistic ARM-theory are discussed in Sec. VI, and our conclusions are formulated in Sec. VII. 
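As a quick orientation, the dimensionless parameters defined above are straightforward to evaluate numerically. The sketch below computes \(\upsilon=\kappa/c\), \(\xi=E_{0}/(c\omega)\), \(\Gamma_{R}=(1/16)\kappa\xi^{3}(c^{2}/\omega)\), and \(E_{0}/E_{a}\) (with \(E_{a}=\kappa^{3}\) as used in Sec. II) in atomic units. The numerical inputs correspond to the Ar\({}^{9+}\) example of Sec. VI (\(I_{p}=17.63\) a.u., intensity \(1.75\times 10^{18}\) W/cm\({}^{2}\), \(\omega=0.07\) a.u.); the intensity-to-field conversion via the atomic unit of intensity, \(3.51\times 10^{16}\) W/cm\({}^{2}\), is a standard relation and not taken from the paper, and the nonrelativistic \(\kappa=\sqrt{2I_{p}}\) of this section is used.

```python
# Dimensionless parameters characterizing the relativistic regime of
# strong-field ionization (atomic units, c ~ 137.036), as defined in the
# Introduction.  Example inputs: the Ar^9+ case of Sec. VI.
import math

C = 137.036                                    # speed of light in a.u.

def regime_parameters(I_p, E0, omega):
    kappa = math.sqrt(2.0 * I_p)               # atomic momentum (nonrelativistic form)
    upsilon = kappa / C                        # sub-barrier relativity parameter
    xi = E0 / (C * omega)                      # relativistic invariant field parameter
    Gamma_R = kappa * xi**3 * C**2 / (16.0 * omega)   # Lorentz deflection parameter
    E_a = kappa**3                             # atomic field strength (Sec. II)
    return {"kappa": kappa, "upsilon": upsilon, "xi": xi,
            "Gamma_R": Gamma_R, "E0/E_a": E0 / E_a}

# Intensity (W/cm^2) -> field amplitude (a.u.); 3.51e16 W/cm^2 is the atomic
# unit of intensity (standard conversion, not taken from the paper).
E0 = math.sqrt(1.75e18 / 3.51e16)              # ~ 7.07 a.u.
for name, val in regime_parameters(I_p=17.63, E0=E0, omega=0.07).items():
    print(f"{name:8s} = {val:.3g}")
```

For these inputs the script reproduces the values \(\upsilon\approx 0.043\) and \(\xi\approx 0.74\) quoted for this case in Sec. VI.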
## II Nonrelativistic theory In this section we elucidate our approach for the derivation of the ARM-theory amplitude in the nonrelativistic regime. Note that the ionization amplitude in the nonrelativistic ARM-theory has been derived in [84; 85; 86; 87] by dividing the interaction region into two sub-regions (inner region and outer region), and by rigorously matching the eikonal wave function for the continuum electron in the outer region to the bound state wave function in the inner-region using a R-matrix approach. Here, we use an operational approach, namely, taking into account that the above mentioned matching procedure of the wave functions is equivalent to a shift of the time integration border in the complex domain, and we derive such a shift of the time integration border in the eikonal that eliminates the singularity of the SFA amplitude. Firstly, we derive with our operational approach the strong-field ionization amplitude in the case of a short-range atomic potential and then discuss the case with the Coulomb field. We apply SFA for the description of the laser driven ionization process of the atomic bound electron. The SFA ionization amplitude of the electron with the asymptotic outgoing momentum \(\mathbf{p}\) is given by [107] \[m_{\mathbf{p}}=-i\int dt\langle\psi_{\mathbf{p}}(t)|H_{\mathrm{s}}(t)|\phi(t)\rangle, \tag{1}\] where \(\phi(t)\) is the bound state wave function, \(\psi_{\mathbf{p}}(t)\) the electron outgoing continuum state, and the interaction Hamiltonian in the length gauge \[H_{\mathrm{s}}(t)=\mathbf{r}\cdot\mathbf{E}(t), \tag{2}\] with the laser electric field \(\mathbf{E}=-\partial_{t}\mathbf{A}\). ### Short-range potential Let us firstly derive the analytical expression of the ionization amplitude in the leading order \(E_{0}/E_{a}\) term in the case of a short-range atomic potential, where \(E_{a}=\kappa^{3}\) is the atomic field. In this case the continuum state in the laser field in Eq. (1) is described by the Volkov-state [70], \(\psi_{\mathbf{p}}(\mathbf{r},t)\rightarrow\psi_{\mathbf{p}}^{(0)}(\mathbf{r},t)\): \[\psi_{\mathbf{p}}^{(0)}(\mathbf{r},t)=\frac{1}{\sqrt{(2\pi)^{3}}} \exp\left[i(\mathbf{p}+\mathbf{A}(t))\cdot\mathbf{r}+i\int_{t}\!ds\frac{( \mathbf{p}+\mathbf{A}(s))^{2}}{2}\right], \tag{3}\] with the laser vector potential \(\mathbf{A}(t)\), here we consider a linearly polarized laser field. The bound state in the case of the short-range potential is \(\phi(\mathbf{r},t)\rightarrow\phi^{(0)}(\mathbf{r},t)\): \[\phi^{(0)}(\mathbf{r},t)=\sqrt{\frac{\kappa}{2\pi r^{2}}}\exp\left[-\kappa r+ i\frac{\kappa^{2}}{2}t\right]. \tag{4}\] In this case we straightforwardly arrive at the amplitude \(m_{\mathbf{p}}\to m_{\mathbf{p}}^{(0)}\): \[m_{\mathbf{p}}^{(0)} =-\frac{i\sqrt{\kappa}}{4\pi^{2}}\int d^{3}\mathbf{r}\int dt \frac{\mathbf{r}\cdot\mathbf{E}(t)}{r}\exp\left[-i\left(\mathbf{p}+\mathbf{A} (t)\right)\cdot\mathbf{r}\right.\] \[-i\left.\int_{t}\!ds\frac{(\mathbf{p}+\mathbf{A}(s))^{2}}{2}+i \frac{\kappa^{2}}{2}t-\kappa r\right]. \tag{5}\] In the next step we approximate the \(t\)-integration via the saddle point approximation (SPA). Here the solution of the \(t\)-saddle point equation \[(\mathbf{p}+\mathbf{A}(t))^{2}+\kappa^{2}+2\mathbf{r}\cdot\mathbf{E}(t)=0, \tag{6}\] is found perturbatively with respect to the last term which is eqivalent to an expansion in the parameter \(E_{0}/E_{a}\). 
It yields \[t_{s}=\tilde{t}_{0}+i\frac{\mathbf{r}\cdot\mathbf{E}(\tilde{t}_{0})}{| \mathbf{E}(\tilde{t}_{0})|\tilde{\kappa}}\approx\tilde{t}_{0}+i\frac{\mathbf{r }\cdot\mathbf{E}(t_{0})}{|\mathbf{E}(t_{0})|\kappa}, \tag{7}\] where \(\tilde{\kappa}=\sqrt{\kappa^{2}+p_{\perp}^{2}}\), and \(\tilde{t}_{0}\) is the common zeroth-order solution [108] via \[\left(\mathbf{p}+\mathbf{A}(\tilde{t}_{0})\right)^{2}+\kappa^{2}=0. \tag{8}\] Here, we distinguish between \(\tilde{t}_{0}\) and \(t_{0}=\tilde{t}_{0}(\mathbf{p}_{max})\), with \(\mathbf{p}_{max}=(-A(t_{max}),0,0)\) the most probable quasi-classical momentum in the nonrelativistic case, corresponding to the ionization at the time \(t_{max}\). In the perturbation term in Eq. (7), we approximate \(\tilde{\kappa}\approx\kappa\) and \(\tilde{t}_{0}\approx t_{0}\), because otherwise higher order terms with respect to \(E_{0}/E_{a}\) would be included. Note that the time dependence of the pre-exponential in Eq. (5) \(\partial_{t}\ln(E(t))\sim\omega\) is small and can be neglected with an accuracy of \(\omega/I_{p}\). With the solution Eq. (7), the amplitude in SPA yields \[m_{\mathbf{p}}^{(0)} =-i\int d^{3}\mathbf{r}\;\mathcal{M}^{(0)}(\mathbf{r}), \tag{9}\] \[\mathcal{M}^{(0)}(\mathbf{r}) =\frac{1}{4\pi^{2}}\sqrt{\frac{2\pi}{|\mathbf{E}(t_{0})|}}\frac{ \mathbf{r}\cdot\mathbf{E}(t_{0})}{r}\] (10) \[\times\exp\left[-i\left(\mathbf{p}_{max}+\mathbf{A}(t_{0}) \right)\cdot\mathbf{r}-\frac{(\mathbf{r}\cdot\mathbf{E}(t_{0}))^{2}}{2\kappa| \mathbf{E}(t_{0})|}-\kappa r\right.\] \[-i\left.\int_{t_{0}}\!ds\frac{1}{2}\left(\mathbf{p}+\mathbf{A}( s)\right)^{2}+i\frac{\kappa^{2}}{2}\tilde{t}_{0}\right],\] where the terms up to the first order in \(E_{0}/E_{a}\) in the exponent are kept, and with the same accuracy, the pre-factor is estimated at \(\mathbf{p}=\mathbf{p}_{max}\). The remaining \(\mathbf{r}\)-integral is then calculated analytically: \[m_{\mathbf{p}}^{(0)} =\frac{i}{\sqrt{2\pi|\mathbf{E}(t_{0})|}}\exp\left[-i\int_{t_{0} }\!ds\frac{(\mathbf{p}+\mathbf{A}(s))^{2}}{2}+i\frac{\kappa^{2}}{2}\tilde{t}_ {0}\right]. \tag{11}\] ### Coulomb potential Now with our operational approach we derive the strong-field ionization amplitude in the case of the atomic Coulomb potential. In this case, the bound state as well as the continuum state in the eikonal approximation obtain exponential corrections proportional to \(\nu=Z/\kappa\), with the charge \(Z\) of the atomic core. The bound state in the Coulomb potential in \(r\rightarrow\infty\) asymptotics reads \[\phi(\mathbf{r},t) =\phi^{(0)}(\mathbf{r},t)\phi^{(1)}(\mathbf{r}), \tag{12}\] \[\phi^{(1)}(\mathbf{r}) =\mathcal{C}\exp\left[\nu\ln(\kappa r)\right],\] where \(\mathcal{C}\) is the normalization constant (for hydrogen like ions with \(Z=\kappa\), it is \(\mathcal{C}=\sqrt{2}\)) and the continuum state in the eikonal approximation is \[\psi(\mathbf{r},t) =\psi^{(0)}(\mathbf{r},t)\psi^{(1)}(\mathbf{r}), \tag{13}\] \[\psi^{(1)}(\mathbf{r},t) =\exp\left[i\nu\int_{t}\!ds\frac{\kappa}{|\mathbf{r}+\mathbf{p}( s-t)+\mathbf{\alpha}(s)-\mathbf{\alpha}(t)|}\right],\] where \(\mathbf{\alpha}(t)=\int dt\mathbf{A}(t)\). The \(t\)-dependence in the Coulomb correction (CC) terms is weak and can be neglected with an accuracy of \(\omega/I_{p}\). In this case the CC momentum amplitude of Eq. (9) reads, cf. [85]: \[m_{\mathbf{p}}^{(1)} =-i\int d^{3}\mathbf{r}\;\mathcal{M}^{(0)}(\mathbf{r})\, \mathcal{C}\exp\left[\nu\left[\ln(\kappa r)\right.\right. 
\tag{14}\] \[+i\left.\int_{t_{s}}\!ds\frac{\kappa}{|\mathbf{r}+\mathbf{p}(s-t_{s})+ \mathbf{\alpha}(s)-\mathbf{\alpha}(t_{s})|}\right]\right\}.\] The \(s\)-integral in the phase of Eq. (14) is diverging at the low limit \(t=t_{s}\) at the Coulomb center \(\mathbf{r}=0\). However, the diverging term can be canceled with the bound CC term \(\phi^{(1)}(\mathbf{r})\), when using appropriate approximations. We separate the diverging term in integral of Eq. (14) \(\int_{t_{s}}=\int_{t_{s}}^{t_{0}-i\tilde{\kappa}}+\int_{t_{0}-i\tilde{\kappa}}\), and show that with appropriate choice of the parameter \(\delta\), the diverging term \(\int_{t_{s}}^{t_{0}-i\tilde{\kappa}}\) will be canceled by the bound CC term when using the following approximations. Firstly, in the analytical calculation of CC in the continuum state, we neglect all higher order corrections with respect to \(\sqrt{E_{0}/E_{a}}\), in the spirit of ARM [85]. To apply the given approximation, we estimate the variables in the integrand of Eq. (14), with the result summarized in Table 1. We refer to [83] [below Eq. (21)] for the estimation \(r_{E}=\sqrt{E_{a}/E_{0}}/\kappa\), which is the scaling of the coordinate saddle point of the integrand in Eq.(9), i.e., the point, where the ionizing trajectory starts; \(r_{k}\approx r_{k}(t_{0})=0\) is approximated, assuming that the most probable trajectory has zero impact parameter at \(t_{0}\) near the core. Further, we express \(p_{k,E,B}=p_{k,E,B0}+\Delta p_{k,E,B}\), with the most probable value of the momentum \(p_{k,E,B0}\) [in the nonrelativistic theory \(p_{k,E,B0}=0\), and in the relativistic one \(p_{k0}=c(\lambda^{2}-1)/(2\lambda)\), \(p_{E,B0}=0\), see Eq. (28) below], and the new variables \(\Delta p_{k,E,B}\) corresponding to the momentum width of the tunneling wave packet. The latter are estimated as \(\Delta p_{E}\sim\sqrt{E_{0}/E_{a}}E_{0}/\omega\), \(\Delta p_{k}=\Delta p_{B}\sim\sqrt{E_{0}/E_{a}}\kappa\) according to the PPT theory [60]. In the relativistic estimations an additional factor \(g(I_{p}/c^{2}\) depending on \(I_{p}/c^{2}\) arises. Here, \({\bf r}=(r_{k},r_{E},r_{B})\), \({\bf p}=(p_{k},\,p_{E},\,p_{B})\), and the components of the vectors are defined along the laser propagation direction \(r_{k}\equiv{\bf r}\cdot{\bf k}/\), \(p_{k}\equiv{\bf p}\cdot{\bf k}/\), along the laser electric field \(r_{E}\equiv-{\bf r}\cdot{\bf E}/E_{0}\), \(p_{E}\equiv-{\bf p}\cdot{\bf E}/E_{0}\), and along the laser magnetic field \(r_{B}\equiv-{\bf r}\cdot{\bf B}/B_{0}\), \(p_{B}\equiv-{\bf p}\cdot{\bf B}/B_{0}\). We introduce dimensionless variables \(R_{E},\,P_{k},\,P_{E},\,P_{B}\), dividing the given variable over its estimated value in Table 1: \(R_{E}\equiv r_{E}\kappa\sqrt{E_{0}/E_{a}}\),\(P_{k}\equiv\Delta p_{k}/(\sqrt{E_{0}/E_{a}}\kappa)\), \(P_{E}\equiv\Delta p_{E}/(\sqrt{E_{0}/E_{a}}E_{0}/\omega)\), \(P_{B}\equiv\Delta p_{B}/(\sqrt{E_{0}/E_{a}}\kappa)\). 
Further, we apply a variable transformation \(s=t_{s}+\sigma(t_{0}-i\delta-t_{s})\) and expand the integrand up to leading order in \(E_{0}/E_{a}\) in a quasistatic approximation arriving at: \[\exp\left[i\nu\int_{t_{s}}^{t_{0}-i\delta}ds\frac{\kappa}{|{\bf r }+{\bf p}(s-t_{s})+\mathbf{\alpha}(s)-\mathbf{\alpha}(t_{s})|}\right]\] \[\approx \exp\left[\nu\int_{0}^{1}d\sigma\left(\frac{\sqrt{E_{0}/E_{a}} \big{(}2\delta\kappa^{2}-(\sigma-1)^{2}R_{E}^{2}\big{)}}{2(\sigma-1)^{2}R_{E} }+\frac{1}{\sigma-1}\right)\right]\] \[+\mathcal{O}(\sqrt{E_{0}/E_{a}})\] \[\approx \big{[}\delta\kappa^{2}\left(1/R_{E}-\sqrt{E_{0}/E_{a}}\right) \sqrt{E_{0}/E_{a}}\big{]}^{\tau}+\mathcal{O}(\sqrt{E_{0}/E_{a}})\] \[\approx (\delta\kappa/r_{E})^{\tau}+\mathcal{O}(\sqrt{E_{0}/E_{a}}). \tag{15}\] The singular term at \(r\to 0\) in Eq. (II.2) will be canceled with the similar term in the Coulomb correction of the bound state, see Eq. (14), if we choose \(\delta=1/\kappa^{2}\). Secondly, we approximate \[|{\bf r}+{\bf p}(s-t_{s})+\mathbf{\alpha}(s)-\mathbf{\alpha}(t_{s})|\approx|{\bf p}_ {max}(s-t_{0})+\mathbf{\alpha}(s)-\mathbf{\alpha}(t_{0})\] \[+\mathcal{O}(\sqrt{E_{0}/E_{a}})|, \tag{16}\] which again follows from the scaling laws given in Table 1. Consequently, we obtain that the correction terms are approximately independent of the coordinates and arrive at the momentum amplitude: \[m_{\bf p}^{(1)} =-i\mathcal{C}\int d^{3}{\bf r}\mathcal{M}^{(0)}({\bf r}) \tag{17}\] \[\times\exp\left[i\int_{t_{0}-\frac{i}{\sigma}}\frac{Z\ ds}{|{\bf p }_{max}(s-t_{0})+\mathbf{\alpha}(s)-\mathbf{\alpha}(t_{0})|}\right].\] The latter after the final coordinate integration yields \[m_{\bf p}^{(1)}=Cm_{\bf p}^{(0)}\exp\left[i\int_{t_{0}-\frac{i}{ \sigma}}\frac{Z\ ds}{|{\bf p}_{max}(s-t_{0})+\mathbf{\alpha}(s)-\mathbf{\alpha}(t_{0})| }\right]. \tag{18}\] ## III Relativistic theory In the relativistic regime we employ SFA based on the Dirac equation [92]. The ionization SFA amplitude is again formally given by Eq. (1), where the interaction Hamiltonian in the Goppert-Mayer gauge within the dressed partition [92] reads: \[H_{i}({\bf r},t) ={\bf r}\cdot{\bf E}(\eta), \tag{19}\] \[H_{0} =H_{a}-{\bf r}\cdot{\bf E}(\eta)\alpha_{k} \tag{20}\] where \(\alpha_{k}\equiv\mathbf{\alpha}\cdot\hat{\bf k}\), \(\mathbf{\alpha}\) are Dirac matrices, \(\hat{\bf k}\) is the unit vector along the laser propagation direction, and \(\eta=t-\hat{\bf k}\cdot{\bf r}/c\). The spin quantization axis is chosen along the laser magnetic field. We employ the four vector potential of the laser field in the Goppert-Mayer gauge: \(A^{\mu}=-\left(\hat{\bf k}({\bf r}\cdot{\bf E}(\eta),{\bf r}\cdot{\bf E}(\eta)\right)\). In [92] we have shown that the relativistic SFA provides more close expressions to the relativistic PPT theory [66] for the total ionization rate if the dressed partition is applied. In the dressed partition, the unperturbed bound state is corrected [92] by a factor \[\mathcal{S}=\exp\left(i\frac{A}{2c-I_{p}/c}\right). 
\tag{21}\] ### Short-range potential In the case of a short-range potential the outgoing state is the relativistic Volkov state \(\psi_{\bf p}({\bf r},t)\rightarrow\psi_{\bf p}^{(0)}({\bf r},t)\): \[\psi_{\bf p}^{(0)}({\bf r},t)=\left(1+\frac{(1+\alpha_{k})\mathbf{ \alpha}\cdot{\bf A}(\eta)}{2c\tilde{\Lambda}}\right)\frac{cu_{f}}{\sqrt{(2 \pi)^{3}\tilde{\varepsilon}}} \tag{22}\] \[\times\exp\left[i({\bf p}+{\bf A}(\eta))\cdot{\bf r}-i\tilde{i}t+i \int_{\eta}ds\left(\frac{{\bf p}\cdot{\bf A}(s)+A(s)^{2}/2}{\tilde{\Lambda}} \right)\right],\] \begin{table} \begin{tabular}{|c|c|c|} quantity & non-relativisitic estimate & relativistic estimate \\ \hline \(r_{E}\) & \(\sqrt{E_{a}/E_{0}}/\kappa\) & \(g(I_{p}/c^{2})\sqrt{E_{a}/E_{0}}/\kappa\) \\ \(r_{k}\) & 0 & 0 \\ \(r_{B}\) & 0 & 0 \\ \(p_{k0}\) & 0 & \(c(\lambda^{2}-1)/(2\lambda)\) \\ \(p_{E0}\) & 0 & 0 \\ \(p_{k0}\) & 0 & 0 \\ \(\Delta p_{k}\) & \(\sqrt{E_{0}/E_{a}}\kappa\) & \(g(I_{p}/c^{2})\sqrt{E_{0}/E_{a}}\kappa\) \\ \(\Delta p_{E}\) & \(\sqrt{E_{0}/E_{a}}\,E_{0}/\omega\) & \(g(I_{p}/c^{2})\sqrt{E_{0}/E_{a}}\,E_{0}/\omega\) \\ \(\Delta p_{B}\) & \(\sqrt{E_{0}/E_{a}}\,\kappa\) & \(g(I_{p}/c^{2})\sqrt{E_{0}/E_{a}}\,\kappa\) \\ \end{tabular} \end{table} Table 1: Estimation of the variables and parameters via SFA for the approximate calculation of the integral in Eq. (II.2), where \({\bf r}=(r_{k},r_{E},r_{B})\), \({\bf p}=(p_{k},\,p_{E},\,p_{B})\), the components of the vectors are defined along the laser propagation direction, the laser electric field, and along the the laser magnetic field, the function \(g(I_{p}/c^{2})\) depends on \(I_{p}/c^{2}\). with the asymptotic energy \(\tilde{\varepsilon}=\sqrt{c^{4}+c^{2}\mathbf{p}^{2}}\), the constant of motion \(\tilde{\Lambda}=\tilde{\varepsilon}/c^{2}-p_{k}/c\), and the bispinor \[u_{f}=\left(\sqrt{\frac{c^{2}+\tilde{\varepsilon}}{2c^{2}}}\chi_{f},\,\frac{ \boldsymbol{\sigma}\cdot\mathbf{p}}{\sqrt{2(c^{2}+\tilde{\varepsilon})}} \chi_{f}\right)^{T}, \tag{23}\] where \(\chi_{+}=(1,0)^{T}\) and \(\chi_{-}=(0,1)^{T}\). The bound state of the short-range potential is \[\phi^{(0)}(\mathbf{r},t)=\sqrt{\frac{\kappa}{2\pi r^{2}}}\exp\left[-\kappa r+i( I_{p}-c^{2})t\right]v_{i}, \tag{24}\] with the atomic momentum \(\kappa=\sqrt{2I_{p}\left(1-\frac{I_{p}}{2c^{2}}\right)}\), and the bispinor \[v_{i}=\left(\chi_{i},i\frac{c(\kappa r+1)\boldsymbol{\sigma}\cdot\mathbf{r}}{ (2c^{2}-I_{p})r^{2}}\chi_{i}\right)^{T}. \tag{25}\] We consider two cases, firstly, when there is no spin flip: \(\chi_{f}=\chi_{i}=\chi_{+}\), and secondly, when there is a spin flip during ionization, i.e. \(\chi_{f}=\chi_{-}\) and \(\chi_{i}=\chi_{+}\). 
In the first case we have \[m_{\mathbf{p}+}^{(0)} =-\frac{i\left(2I_{p}\right)^{1/4}}{4\pi^{2}}\int d^{3}\mathbf{r} d\eta\,\mathbf{S}(\eta)\,\mathcal{P}_{+}\frac{\mathbf{r}\cdot\mathbf{E}(\eta)}{r} \tag{26}\] \[\times\exp\left\{-i\left[\mathbf{p}+\mathbf{A}(\eta)+\frac{c^{2}- I_{p}-\varepsilon}{c}\tilde{\mathbf{k}}\right]\cdot\mathbf{r}\right.\] \[\left.-i\int_{\eta}ds\left[\tilde{\varepsilon}+\frac{\mathbf{p} \cdot\mathbf{A}(s)+A(s)^{2}/2}{\tilde{\Lambda}}\right]+i(I_{p}-c^{2})\eta- \kappa r\right\}\] with \[\mathcal{P}_{+} =\frac{2c\Lambda\left(-c\sqrt{I_{p}\left(2c^{2}-I_{p}\right)}p_{ k0}-c^{2}\left(-2\varepsilon+I_{p}\right)+2c^{4}-I_{p}\varepsilon\right)+ \sqrt{2\Lambda(\varepsilon-c^{2}+I_{p})}\left(\sqrt{-I_{p}\left(-2c^{2}+I_{p }\right)}+2c^{2}-I_{p}\right)\left(\varepsilon+c\left(c+p_{k0}\right)\right)} {2c^{3/2}\Lambda\left(4c^{2}-2I_{p}\right)^{3/4}\sqrt{\varepsilon\left(c^{2}+ \varepsilon\right)}}\] \[\quad+\mathcal{O}(\sqrt{E_{0}/E_{a}}), \tag{27}\] where the amplitude is evaluated at the most probable quasi-classical momentum \(\mathbf{p}_{max}=(0,0,p_{k0})\), with \[p_{k0}=c(\lambda^{2}-1)/(2\lambda), \tag{28}\] \(\lambda=(\sqrt{\epsilon^{2}+8}-\epsilon)/2\), \(\epsilon=1-I_{p}/c^{2}\) and the notations \(\Lambda\equiv\tilde{\Lambda}(\mathbf{p}_{max})\) and \(\varepsilon\equiv\tilde{\varepsilon}(\mathbf{p}_{max})\). In the second case the amplitude reads: \[m_{\mathbf{p}-}^{(0)} =-\frac{i\left(2I_{p}\right)^{1/4}}{4\pi^{2}}\int d^{3}\mathbf{r} d\eta\,\mathcal{S}(\eta)\,\mathcal{P}_{-}\frac{\mathbf{r}\cdot\mathbf{E}(\eta)}{r} \tag{29}\] \[\times\exp\left\{-i\left[\mathbf{p}+\mathbf{A}(\eta)+\frac{c^{2}- I_{p}-\varepsilon}{c}\tilde{\mathbf{k}}\right]\cdot\mathbf{r}\right.\] \[-\left.i\int_{\eta}ds\left[\tilde{\varepsilon}+\frac{\mathbf{p} \cdot\mathbf{A}(s)+A(s)^{2}/2}{\tilde{\Lambda}}\right]+i(I_{p}-c^{2})\eta- \kappa r\right\},\] with \(\mathcal{P}_{-}=0+\mathcal{O}(\sqrt{E_{0}/E_{a}})\). We have expanded the expressions in the parameter \(E_{0}/E_{a}\) with the relativstic atomic field \(E_{a}=\kappa^{3}\) and \(\kappa\) along the lines of SFA. From the expansion it follows that in leading order in this parameter no spin flip occurs and, consequently, we focus only on the spin flip free process. In the next step we approximate the \(\eta\)-integration via SPA. Here, the \(\eta\)-saddle point equation is \[\varepsilon+\frac{\mathbf{p}\cdot\mathbf{A}(\eta)+A(\eta)^{2}/2}{\Lambda}+I_{ p}-c^{2}+\mathbf{r}\cdot\mathbf{E}(\eta)=0, \tag{30}\] which is solved perturbatively with respect to the last term, yielding the solution \[\eta_{s}=\tilde{\eta}_{0}+\frac{\mathbf{r}\cdot\mathbf{E}(\eta_{0})\Lambda}{ \left[(\mathbf{p}_{max}+\mathbf{A}(\eta_{0}))\cdot\mathbf{E}(\eta_{0})\right]}, \tag{31}\] with the zeroth order solution \(\tilde{\eta}_{0}\) and \(\eta_{0}=\tilde{\eta}_{0}(\mathbf{p}_{max})\)[92]. The time dependence of the pre-exponential \(\partial_{\eta}\ln(E(\eta))\sim\omega\) is small and can be neglected. 
After the \(\eta\)-SPA, the amplitude is \[m_{\mathbf{p}+}^{(0)} =-i\int d^{3}\mathbf{r}/\mathcal{M}^{(0)}(\mathbf{r}) \tag{32}\] \[\mathcal{M}^{(0)}(\mathbf{r}) =\frac{\left(2I_{p}\right)^{1/4}}{4\pi^{2}}\mathcal{S}(\eta_{0}) \mathcal{P}_{+}\sqrt{\frac{-2i\pi\Lambda}{(\mathbf{p}_{max}+\mathbf{A}(\eta_{ 0}))\cdot\mathbf{E}(\eta_{0})}}\frac{\mathbf{r}\cdot\mathbf{E}(\eta_{0})}{r} \exp\Big{\{}-i\left[\mathbf{p}_{max}+\mathbf{A}(\eta_{0})+\frac{\varepsilon-c ^{2}+I_{p}}{c}\tilde{\mathbf{k}}\right]\cdot\mathbf{r}\] \[\quad+\frac{i\left(\mathbf{r}\cdot\mathbf{E}(\eta_{0})\right)^{2} \Lambda}{2(\mathbf{p}_{max}+\mathbf{A}(\eta_{0}))\cdot\mathbf{E}(\eta_{0})}- \kappa r-i\int_{\tilde{\eta}_{0}}ds\left(\tilde{\varepsilon}+\frac{\mathbf{p} \cdot\mathbf{A}(s)+A(s)^{2}/2}{\tilde{\Lambda}}\right)+i(I_{p}-c^{2})\tilde{ \eta}_{0}\Big{\}}, \tag{33}\] where terms up to the next to leading order in \(E_{0}/E_{a}\) in the exponent are kept. The remaining \(\mathbf{r}\)-integral is then calculated analytically in leading order in \(E_{0}/E_{a}\) \[m_{\mathbf{p}}^{(0)} =\frac{i}{\sqrt{2\pi|\mathbf{E}(\eta_{0})|}}\,\mathcal{S}\mathcal{P} _{+}Q \tag{34}\] \[\times\exp\left\{-i\int_{\eta_{b}}ds\left[\bar{\varepsilon}+\frac{ \mathbf{p}\cdot\mathbf{A}(s)+A(s)^{2}/2}{\bar{\Lambda}}\right]+i(I_{p}-c^{2}) \bar{\eta}_{0}\right\}.\] with the pre-factor \[Q=\sqrt{\frac{\varepsilon-c^{2}+I_{p}}{I_{p}}} \tag{35}\] also evaluated at the most probable quasi-classical momentum. ### Coulomb potential In the relativistic case the corrections to the wave functions of the order of \(Z/\kappa\) due to the Coulomb potential are the following, for the bound state \[\phi(\mathbf{r},\eta) =\phi^{(0)}(\mathbf{r},\eta)\phi^{(1)}(\mathbf{r},\eta) \tag{36}\] \[\phi^{(1)}(\mathbf{r}) =\mathcal{C}\exp\left[\nu\ln(\kappa r)\right],\] (37) \[\nu =\frac{(c^{2}-I_{p})Z}{c^{2}\sqrt{I_{p}(2-I_{p}/c^{2})}} \tag{38}\] with the normalization constant \(\mathcal{C}=2^{\gamma-\frac{1}{2}}\sqrt{\frac{\gamma+1}{\Gamma(2\gamma+1)}}\) for hydrogen-like systems with \(Z=\kappa\), and the continuum state \[\psi(\mathbf{r},\eta) =\psi^{(0)}(\mathbf{r},\eta)\psi^{(1)}(\mathbf{r},\eta) \tag{39}\] \[\psi^{(1)}(\mathbf{r},\eta) =\exp\left\{i\int_{\eta}ds\frac{Z\bar{\varepsilon}(s)}{\bar{ \Lambda}c^{2}}\right.\] (40) \[\times\left.\frac{1}{|\mathbf{r}+(\mathbf{p}(s-\eta)+\mathbf{\alpha}( s)-\mathbf{\alpha}(\eta))/\bar{\Lambda}+\mathbf{r}_{k}(s,\eta)|}\right\}\] with \(\mathbf{r}_{k}(s,\eta)=\hat{\mathbf{k}}[\mathbf{p}\cdot(\mathbf{\alpha}(s)-\mathbf{ \alpha}(\eta))+\beta(s)-\beta(\eta)]/(\bar{\Lambda}^{2})\), \(\beta=\int ds\mathbf{\Lambda}^{2}/2\) and \(\bar{\varepsilon}(\eta)=\bar{\varepsilon}+(\mathbf{p}\cdot\mathbf{A}(\eta)+ A(\eta)^{2}/2)/\bar{\Lambda}\). The singularity in the \(s\)-integral in the phase of Eq. (39) is removed using the same procedure as in the nonrelativistic case. The first term in the integral \(\int_{\eta_{s}}=\int_{\eta_{s}}^{\eta_{s}-i\delta}+\int_{\eta_{s}-i\delta}\) is divergent, which is canceled with the bound CC term \(\phi^{(1)}(\mathbf{r})\), when using an appropriate value for the parameter \(\delta\). Taking into account that the \(\eta\)-dependence in the CC terms is weak, the momentum amplitude for the Coulomb potential is approximated by Eq. 
(32) including extra CC terms: \[m_{\mathbf{p}+}^{(1)} =-i\mathcal{C}\int d^{3}\mathbf{r}\mathcal{M}^{(0)}(\mathbf{r}) \exp\left\{\nu\ln(\kappa r)\right\} \tag{41}\] \[+\left.i\int_{\eta_{s}}ds\frac{Z\bar{\varepsilon}(s)}{\bar{ \Lambda}c^{2}}\frac{1}{|\mathbf{r}+(\mathbf{p}(s-\eta_{s})+\mathbf{\alpha}(s)- \frac{\mathbf{\alpha}(\eta_{s})}{\bar{\Lambda}}+\mathbf{r}_{k}(s,\eta_{s})|} \right\}.\] The choice of \(\delta\) for the singularity removal is possible when the following approximations are applied. We integrate the Coulomb correction in the continuum state analytically around \(\eta_{s}\): \[\exp\left[i\int_{\eta_{s}}^{\eta_{0}-i\delta}\frac{Z\bar{ \varepsilon}(s)ds}{\bar{\Lambda}c^{2}|\mathbf{r}+(\mathbf{p}(s-\eta_{s})+\mathbf{ \alpha}(s)-\mathbf{\alpha}(\eta_{s}))/\bar{\Lambda}+\mathbf{r}_{k}(s,\eta_{s})|}\right]\] \[\simeq \left(\frac{\sqrt{3}\lambda\kappa\delta}{\sqrt{(4-\lambda^{2})}r_ {E}}\right)^{\nu}, \tag{42}\] where the same method as in the derivation of Eq. (15) is applied, i.e. the same scaled variables are introduced \(R_{E}\), \(P_{k}\), and \(P_{E}\)\(P_{B}\), the variable transformation \(s=\eta_{s}+\sigma(\eta_{0}-i\delta-\eta_{s})\) is used, and then, before analytical integration the integrand is expanded in \(\sqrt{E_{0}/E_{a}}\) in a quasistatic approximation, using estimations of Table 1. Further, the atomic correction term is in leading order in \(E_{0}/E_{a}\) \[(\kappa r)^{\nu}=\left(\kappa\frac{\sqrt{4-\lambda^{2}}r_{E}}{\sqrt{3}}\right)^ {\nu}. \tag{43}\] With the choice \(\delta=1/(\lambda\kappa^{2})=\Lambda/\kappa^{2}\approx(1-\kappa^{2}/(6c^{2}))/ \kappa^{2}\) we cancel the singular term of the Coulomb correction to the bound state. Then, we approximate in leading order in \(E_{0}/E_{a}\): \[|\mathbf{r}+(\mathbf{p}(s-\eta_{s})+\mathbf{\alpha}(s)-\mathbf{\alpha}( \eta_{s}))/\bar{\Lambda}+\mathbf{r}_{k}(s,\eta_{s})| \tag{44}\] \[\approx|(\mathbf{p}_{max}(s-\eta_{0})+\mathbf{\alpha}(s)-\mathbf{\alpha}( \eta_{0}))/\Lambda+\mathbf{r}_{k}(s,\eta_{0})+\mathcal{O}(\sqrt{E_{0}/E_{a}})|.\] Thus, we conclude that the correction terms are approximately independent of the coordinates, and arrive at the momentum amplitude: \[m_{\mathbf{p}}^{(1)} =-i\mathcal{C}\int d^{3}\mathbf{r}\mathcal{M}^{(0)}(\mathbf{r}) \tag{45}\] \[\times\exp\left\{i\int_{\eta_{0}-i\Lambda/\kappa^{2}}\frac{Z \varepsilon(s)/(c^{2}\Lambda)\ ds}{|\mathbf{p}_{max}(s-\eta_{0})+\mathbf{\alpha}(s)- \mathbf{\alpha}(\eta_{0})+\mathbf{r}_{k}(s,\eta_{0})|}\right\}\] after the final coordinate integration this yields \[m_{\mathbf{p}}^{(1)} =Cm_{\mathbf{p}}^{(0)} \tag{46}\] \[\times\exp\left\{i\int_{\eta_{0}-i\Lambda/\kappa^{2}}\frac{Z \varepsilon(s)/(c^{2}\Lambda)\ ds}{|\mathbf{p}_{max}(s-\eta_{0})+\mathbf{\alpha}(s)- \mathbf{\alpha}(\eta_{0})+\mathbf{r}_{k}(s,\eta_{0})|}\right\}.\] The equation (46) is the main result of the paper, providing the Coulomb corrected strong-field ionization amplitude for the relativistic regime using the ARM approach. ## IV Comparison with the relativistic CCSFA The comparison of the total ionization probability in relativistic ARM (RARM) with CCSFA of [92] and PPT theories is provided in Fig. 1. The RARM probability coincides with the PPT theory, whereas the CCSFA overestimates slightly the PPT theory. 
Here the rate is calculated in leading order in \(E_{0}/E_{a}\) and the final momentum integration is accomplished via SPA at the most probable momentum after a transformation from the asymptotic momentum \(\mathbf{p}\) to the tunnel exit distribution \((\eta_{e},p_{e},B,p_{e,k})\) with \(p_{E}+A(\eta_{e})=0,p_{B}=p_{e,B}\) and \(p_{k}=p_{k,e}+A(\eta_{e})^{2}/2/c/\bar{\Lambda}\). In Fig. 1, the comparison of the results for the total probability per laser cycle via RARM, CCSFA, and PPT with the numerical calculation of Hafizi et al. using the Klein-Gordon equation [104], is shown. While the theoretical results almost coincide with each other, there is a significant deviation from the numerical calculation especially at high values of \(I_{p}/c^{2}\). There are two reasons for the deviation of the analytical quasiclassical theories, CCSFA and RARM, with respect to the numerical result. We compare the total ionization rate via RARM based on the Dirac equation with the numerical solution of the Klein-Gordon equation, intuitively assuming that for the total ionization rate spin effects would not matter much. However, this assumption is valid only for \(I_{p}/c^{2}\ll 1\). The results of Ref. [60; 109] show that at large \(I_{p}/c^{2}\sim 1\) spin asymmetry arises in the ionization (difference in ionization probability of different spin states) which will lead to a modification of the spin averaged probability. However, this effect is of the order of at most 1% even for hydrogenlike uranium and cannot account for the large discrepancy apparent in Fig. 2. The main source of the deviation (by a factor of \(\sim 20\)) possibly comes from the Stark-shift and polarization of the atomic state in strong fields near the threshold of the over-the-barrier ionization. These corrections are especially relevant in the near-threshold regime of tunneling ionization at \(E_{0}/E_{a}\sim 1/10\), which is the case in the numerical data of Hafizi (\(E_{0}/E_{a}\sim 1/16\)). We assume that this deviation could be corrected, at least partly, via the next order quasiclassical CCs to the eikonal approximation. As is shown in Ref. [82] with 1D CCSFA for the nonrelativistical theory [see Eq. (47) in this reference], this kind of corrections lead to a decrease of the tunneling ionization probability. The high-order quasiclassical CCs within ARM is generally possible, but it would require the change of the matching procedure to the bound state and, consequently, the change of the complex shift of the time integration. From a technical point of view RARM has a clear advantage with respect to relativistic eikonal CCSFA of [92; 92] when applying SPA. While in RARM the ionization amplitude is found via a one-dimensional \(\eta\)-SPA, in CCSFA at least the two-dimensional (in the case of linear polarization), or four-dimensional (in the case of ellipictical polarization) SPA for \((\mathbf{r},\eta)\) integrations are required. Disadvantage of RARM is that it includes accurately the CC near the tunnel exit, but overestimates those due to rescatterings. To account for CC at hard recollisions, the generalized eikonal approximation (GEA) has been developed on the basis of CCSFA. Generalization to elliptical polarization in both cases (RARM/CCSFA) is possible. ## V Comparison with the nondipole CCSFA To test the derived RARM theory, it will be useful to compare its results with those of the nondipole CCSFA describing Coulomb effects in strong field ionization in the nondipole regime. 
In the nondipole CCSFA only the first relativistic correction to the dipole theory of the order of 1/c is included. We expect that the fully relativistic theory will coincide with the nondipole one in the limit \(I_{p}/c^{2}\ll 1\), with significant deviations at \(I_{p}\sim c^{2}\), as \(1/c^{2}\) terms are neglected in the nondipole theory. While in the nonrelativistic theory the electron transverse momentum distribution at the tunnel exit has a peak at zero momentum, in the relativistic treatment the peak is shifted to \(p_{i0}=I_{p}/3c\) along the laser propagation direction due to the sub-barrier effect of the laser magnetic field [14]. Recently, we showed [19] within nondipole CCSFA that the sub-barrier Coulomb effect increases counter-intuitively the nondipole Figure 2: The nondipole shift of the peak of the longitudinal momentum due to the sub-barrier CC for hydrogenlike highly charged ions: via RARM (blue solid) via Eq. (49), via nondipole CCSFA [19] (orange short-dashed), and via the approximate Eq. (50) with the leading correction \(\sim I_{p}/c^{2}\) (green long-dashed) Figure 1: Comparison of the theoretical data for the total probability \(W_{T}\) per the laser period with the result of the numerical calculation [104]: via RARM (yellow short-dashed line with squares) with CCSFA of [83] (green long-dashed line with diamonds) using dressed partition, ADK (blue solid line) theories at \(Z/\kappa=1\), and (red dash-dotted line with triangles) the numerical calculations via the Klein-Gordon equation [104]. Numerical calculations in [104] have been carried out for the ionization energies \(I_{p}/c^{2}=0.00866,\,0.0351,\,0.0809,\,0.158,\,0.259\), using \(E_{0}/E_{a}\approx 1/16\). shift of the longitudinal momentum \(p_{k}\) at the tunnel exit: \[p_{k} =p_{k0}+\delta p_{k} \tag{47}\] \[\delta p_{k} =6(E_{0}/E_{a})p_{k0}.\] The CC effect induces an additional dependence of the longitudinal momentum shift on \(E_{0}/E_{a}\). Let us compare the RARM result for the relativistic shift of the peak of the longitudinal momentum due to the sub-barrier CC with the nondipole theory of [19], see Fig. 2, where \(\delta p_{k}/[(E_{0}/E_{a})p_{k0}]\) for hydrogenlike highly charged ion is presented. According to the nondipole approximate theory [19], \(\delta p_{k}/[(E_{0}/E_{a})p_{k0}]=6\) [Eq. (47)]. The relativistic shift of the peak of the longitudinal momentum presented in Fig. 2 is calculated analytically using the exact RARM theory. To this end, the atomic as well as the laser action is expanded up to the next to leading order with respect to \(E_{0}/E_{a}\): \[S_{0}(p_{k}) =S_{0}(p_{k0})-\frac{2\kappa\lambda\left(\lambda^{2}+2\right)}{ \sqrt{12-3\lambda^{2}}\left(\lambda^{2}+1\right)^{2}}\frac{(p_{k}-p_{k0})^{2} }{E_{0}}.\] \[S_{1}(p_{k}) =S_{1}(p_{k0})+\frac{2\sqrt{-\lambda^{4}+5\lambda^{2}-4}\left(1- \frac{\lambda^{2}+\lambda-2}{\lambda}\right)}{(\lambda^{2}+1)\kappa}(p-p_{k0}). \tag{48}\] The longitudinal momentum distribution is given by \(\exp[S]=\exp[S_{0}(p_{k})+S_{1}(p_{k})]\). 
It has a maximum at \(p_{k}-p_{k0}=-S_{1}^{\prime}(p_{k0})/S_{0}^{\prime\prime}(p_{k0})\), which reads after using the expansion over \(\lambda-1\sim I_{p}/c^{2}\): \[p_{k}-p_{k0} =-\frac{\sqrt{3}\left(4-\lambda^{2}\right)^{3/2}\left(\lambda^{2} -2\right)\left(\lambda^{2}+1\right)}{\lambda^{2}\left(\lambda^{2}+2\right)} \frac{E_{0}}{\kappa^{3}}p_{k0} \tag{49}\] \[\approx\left[6-28(\lambda-1)\right]\frac{E_{0}}{\kappa^{3}}p_{k0 }+\mathcal{O}(\lambda-1) \tag{50}\] with \(p_{k0}=c(\lambda^{2}-1)/2\lambda\approx I_{p}/3c\). Thus, the first term of the shift of the most probable momentum in the propagation direction due to the sub-barrier CC corresponds to the nondipole result, and the second term \(\sim\lambda-1=I_{p}/(3c^{2})\) is the relativistic CC. The momentum shift coincides with the nondipole result at small \(I_{p}/c^{2}\ll 1\). It is reduced when taking into account the relativistic corrections \(\sim I_{p}/c^{2}\). This is because the sub-barrier CC originates from the bound state CC, as discussed in [19]. The decrease of the parameter \(\nu\approx 1-I_{p}/c^{2}\) with higher \(I_{p}/c^{2}\), see Eq. (38), yields larger width of the bound state in momentum space. Then, the most probable sub-barrier tunneling trajectory begins at the atomic core with larger \(p_{k}\) ending up at the tunnel exit with a smaller one, because the magnetic field-induced momentum drift along the propagation direction is fixed. Heuristically, the momentum shift can be estimated via \(\beta_{a}(p_{k})\sim\partial_{p_{k}}[\ln[\kappa\sqrt{\mathbf{r}(\eta)^{2}}]] \sim\partial_{p_{k}}[\ln[\kappa\sqrt{\mathbf{p}_{i}^{2}(s-t_{i})^{2}}/\Lambda ]]\sim\partial_{p_{k}}[\ln(\kappa^{2}(s-t_{i})/\Lambda)]\sim\partial_{p_{k}}[ \ln(\kappa)]\sim\partial_{p_{k}}[p_{k}/c]\sim 1/c\). With \(p_{k,0}\sim\kappa^{2}/(6c)\) and \(S_{0}^{\prime\prime}(p_{k0})\sim\kappa/E_{0}\), the momentum shift \(-S_{a}^{\prime}(p_{k0})/S_{0}^{\prime\prime}(p_{k0})\sim 6E_{0}/\kappa^{3}p_{k0}\) follows. ## VI High-energy Coulomb enhancement in the case of Hcis We apply the relativistic ARM theory for the investigation of the Coulomb enhancement effect at the cutoff of the direct ionization channel in the relativistic domain with HCIs. This effect of the high-energy Coulomb enhancement (HECE) in the nonrelativistic regime is known, described in [88; 106]. The effect emerges due to the Coulomb momentum transfer in the continuum. The electron trajectory that ends up at the cutoff of the direct channel starts at the tunnel exit at relatively weak fields and stay near the exit long time, obtaining rather large Coulomb momentum transfer [106]. The parameter which quantifies HECE is \(Z\omega/E_{0}\)[88]. In the calculation of the continuum CC, the continuum action is expanded in \(E_{0}/E_{a}\), which yields an expansion in the imaginary part of the complex trajectory: \[S_{1}({\bf r}(\eta))=S_{1}({\rm Re}[{\bf r}(\eta)])+i{\rm Im}[{ \bf r}(\eta)]\cdot\mathbf{\nabla}S_{1}({\rm Re}[{\bf r}(\eta)]). \tag{51}\] We calculated PMD for three cases via Eq. (46), presented in Fig. 3. In the first case we consider HECE for \({\rm Ar}^{9+}I_{p}=479.76\) eV, \({\rm Z}_{eff}=14.008\), \(\nu=2.34\), laser intensity \(1.75\times 10^{18}\) W/cm\({}^{2}\) (\(E_{0}=7.07\) a.u.) using IR laser beam with \(\omega=0.07\) a.u. (\(\nu=0.043\), \(\xi=0.74\), \(Z\omega/E_{0}=0.14\)). In the second case the same atomic species are used with XUV laser beam (\(\omega=0.5\) a.u.) 
of the same high intensity \(1.75\times 10^{18}\) W/cm\({}^{2}\) (\(\upsilon=0.043\), \(\xi=0.1\), \(Z\omega/E_{0}=0.99\)). And in the third example we consider \({\rm Xe}^{36+}\) (\(I_{p}=2556\) eV, \(\nu=2.626\)) exposed to the strong X-ray field (\(\omega=2\) a.u.) \(E_{0}=65.4\) a.u. (\(\upsilon=0.1\), \(\xi=0.24\), \(Z\omega/E_{0}=1.13\)). To elucidate the HECE effect we compare PMD via RARM with the plain relativistic SFA. The transverse width of PMD is \(p_{B}=\sqrt{E_{0}/\kappa}/2\). In the first example, the continuum relativistic parameter \(\xi\) is the largest. Consequently, we see the parabolic dependence of \(p_{k}\) with respect to \(p_{E}\), which is typical for the electron relativistic dynamics in the continuum, and absent in the nonrelativistic consideration (3rd column in Fig. 3). However, the Coulomb enhancement (HECE) parameter \(Z\omega/E_{0}\) is the smallest in the first example, and we do not see a significant Coulomb effect, HECE, as the integrated spectrum over \(p_{k}\) coincides with the plain SFA result. The HECE parameter increases for the second and the third cases, which results in appearance of significant shoulders in PMD at \(2U_{p}\) energies. The continuum relativistic features in PMD also enhance. The relativistic and nonrelativistic PMDs via ARM are clearly distinguishable (2nd and 3rd columns) by the parabolic feature in \(p_{k}\) dependence of \(p_{E}\), however, after \(p_{k}\) integration the HECE features are the same (Fig. 4). The bound state relativistic character is not very pronounced in the given examples as \(\upsilon<0.1\). ## VII Conclusion We have generalized ARM theory for the relativistic regime of strong-field ionization. The CCSFA based on the eikonal wave function for the continuum electron (accounting for the Coulomb interaction of the outgoing electron with the atomic core) has a singularity in the eikonal phase at the Coulomb center, where the strong-field ionization starts in the imaginary time. While in the PPT theory the singularity is remedied via matching the continuum wave function to the undisturbed Figure 4: The HECE spectra of Fig. 3 integrated over \(p_{k}\), for the same species and the laser fields: (blue dotted) via relativistic plain SFA (RSFA), (orange solid) via RARM, (green dashed) via nonrelativistic ARM; (first line) for \({\rm Ar}^{9+}\), \(\nu=2.34\), \(I_{p}=17.63\) a.u., laser intensity \(1.75\times 10^{18}\) W/cm\({}^{2}\) (\(E_{0}=7.07\) a.u.), and \(\omega=0.07\) a.u. (\(\nu=0.043\), \(\xi=0.74\), \(Z\omega/E_{0}=0.14\)); (second line) for \({\rm Ar}^{9+}\), and XUV beam \(\omega=0.5\) a.u. of intensity \(1.75\times 10^{18}\) W/cm\({}^{2}\) (\(\upsilon=0.043\), \(\xi=0.1\), \(Z\omega/E_{0}=0.99\)); (third line) for \({\rm Xe}^{36+}\), \(\nu=2.626\), \(I_{p}=93.94\) a.u., and X-ray beam of intensity \(8.6\times 10^{19}\) W/cm\({}^{2}\) (\(E_{0}=65.4\) a.u.) and \(\omega=2\) a.u. (\(\upsilon=0.1\), \(\xi=0.24\), \(Z\omega/E_{0}=1.13\)). The distributions are rescaled to the peak value. bound state one, in the ARM theory this procedure is equivalent to the shift of the starting point of the time integration in the ionization amplitude by an appropriate imaginary value. In this paper we have found how the value of the corresponding imaginary time shift is modified in the relativistic regime, which eliminate the singularity of the relativistic CCSFA amplitude for ionization. The advantage of RARM with respect to CCSFA is that it simplifies the calculations of the ionization amplitude using SPA. 
However, CCSFA offers a possibility for systematic second order Coulomb corrections when using SPA in the coordinate integration, rather than the matching procedure with the bound state. Moreover, CCSFA allows for the development of the generalized eikonal approximation to treat CC at hard recollisions. For sub-barrier CC, the RARM provides results similar to the nondipole CCSFA. Finally, we employed RARM theory to calculate the Coulomb enhancement of the above-threshold ionization yield at the cutoff of the directly ionized electrons in the relativistic regime.
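For readers who wish to reproduce the comparison shown in Fig. 2, the sketch below simply re-evaluates the closed-form expressions of Sec. V: \(\lambda=(\sqrt{\epsilon^{2}+8}-\epsilon)/2\) with \(\epsilon=1-I_{p}/c^{2}\), \(p_{k0}=c(\lambda^{2}-1)/(2\lambda)\), the exact shift coefficient of Eq. (49) in units of \((E_{0}/\kappa^{3})\,p_{k0}\), its weakly relativistic expansion \(6-28(\lambda-1)\) of Eq. (50), and the nondipole CCSFA value 6. It is only a numerical transcription of formulas quoted in the text, not an independent derivation; the grid of \(I_{p}/c^{2}\) values is arbitrary.

```python
# Sub-barrier Coulomb-induced shift of the most probable longitudinal momentum
# (Sec. V): exact RARM coefficient of Eq. (49), weakly relativistic expansion
# of Eq. (50), and the nondipole CCSFA value 6.
import numpy as np

C = 137.036  # speed of light in atomic units

def lam(ip_over_c2):
    eps = 1.0 - ip_over_c2
    return 0.5 * (np.sqrt(eps**2 + 8.0) - eps)

def shift_coeff_exact(l):
    # coefficient of (E0/kappa^3) * p_k0 in Eq. (49)
    return (-np.sqrt(3.0) * (4.0 - l**2)**1.5 * (l**2 - 2.0) * (l**2 + 1.0)
            / (l**2 * (l**2 + 2.0)))

def shift_coeff_approx(l):
    # leading relativistic correction, Eq. (50)
    return 6.0 - 28.0 * (l - 1.0)

for x in [0.0, 0.05, 0.1, 0.2, 0.3]:
    l = lam(x)
    p_k0 = C * (l**2 - 1.0) / (2.0 * l)        # most probable longitudinal momentum
    print(f"I_p/c^2={x:4.2f}  lambda={l:.4f}  p_k0={p_k0:7.3f}  "
          f"exact={shift_coeff_exact(l):.3f}  approx={shift_coeff_approx(l):.3f}  nondipole=6")
```

At \(I_{p}/c^{2}\to 0\) the exact coefficient evaluates to 6, recovering the nondipole result, and it decreases with growing \(I_{p}/c^{2}\), as stated above.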
2304.05691
Vers: fully distributed Coded Computing System with Distributed Encoding
Coded computing has proved to be useful in distributed computing. We have observed that almost all coded computing systems studied so far consider a setup of one master and some workers. However, recently emerging technologies such as blockchain, internet of things, and federated learning introduce new requirements for coded computing systems. In these systems, data is generated in a distributed manner, so central encoding/decoding by a master is not feasible and scalable. This paper presents a fully distributed coded computing system that consists of $k\in\mathbb{N}$ data owners and $N\in\mathbb{N}$ workers, where data owners employ workers to do some computations on their data, as specified by a target function $f$ of degree $d\in\mathbb{N}$. As there is no central encoder, workers perform encoding themselves, prior to computation phase. The challenge in this system is the presence of adversarial data owners that do not know the data of honest data owners but cause discrepancies by sending different data to different workers, which is detrimental to local encodings in workers. There are at most $\beta\in\mathbb{N}$ adversarial data owners, and each sends at most $v\in\mathbb{N}$ different versions of data. Since the adversaries and their possibly colluded behavior are not known to workers and honest data owners, workers compute tags of their received data, in addition to their main computational task, and send them to data owners to help them in decoding. We introduce a tag function that allows data owners to partition workers into sets that previously had received the same data from all data owners. Then, we characterize the fundamental limit of the system, $t^*$, which is the minimum number of workers whose work can be used to correctly calculate the desired function of data of honest data owners. We show that $t^*=v^{\beta}d(K-1)+1$, and present converse and achievable proofs.
Nastaran Abadi Khooshemehr, Mohammad Ali Maddah-Ali
2023-04-12T08:31:06Z
http://arxiv.org/abs/2304.05691v1
# Vers: fully distributed Coded Computing System with Distributed Encoding ## I Abstract Coded computing has proved to be useful in distributed computing, and has addressed challenges such as straggler workers. We have observed that almost all coded computing systems studied so far consider a setup of one master and some workers. However, recently emerging technologies such as blockchain, internet of things, and federated learning introduce new requirements for coded computing systems. In these systems, data is generated (and probably stored) in a distributed manner, so central encoding/decoding by a master is not feasible and scalable. This paper presents a fully distributed coded computing system that consists of \(k\in\mathbb{N}\) data owners and \(N\in\mathbb{N}\) workers, where data owners employ workers to do some computations on their data, as specified by a target function \(f\) of degree \(d\in\mathbb{N}\). As there is no central encoder, workers perform encoding themselves, prior to computation phase. The challenge in this system is the presence of adversarial data owners that do not know the data of honest data owners but cause discrepancies by sending different versions of data to different workers, which is detrimental to local encodings in workers. There are at most \(\beta\in\mathbb{N}\) adversarial data owners, and each distributes at most \(v\in\mathbb{N}\) different versions of data. Since the adversaries and their possibly colluded behavior are not known to workers and honest data owners, workers compute tags of their received data, in addition to their main computational task, and send them to data owners in order to help them in decoding. We introduce a tag function that allows data owners to partition workers into sets that previously had received the same data from all data owners. Then, we characterize the fundamental limit of this fully distributed coded computing system, denoted by \(t^{*}\), which is the minimum number of workers whose work can be used to correctly calculate the desired function of data of honest data owners. We show that \(t^{*}=v^{\beta}d(K-1)+1\), and present converse and achievable proofs. ## II Introduction Coded computing utilizes coding theory techniques to address the fundamental challenges in distributed computing systems such as straggler mitigation, security, and privacy preservation, and communication bandwidth issues [1]. The core idea in coded computing is to inject data or computation redundancy in order to accomplish the mentioned goals. For instance, coded computing can be used to accomplish a computation task by using the work of faster worker nodes, and not being dependent on any specific set of workers (straggler mitigation). This line of research has received so much attention, and many aspects of it have been explored, see [2] for a survey on coded distributed computing. However, almost all coded computing researches consider a setup of one master (also called other names like parameter server) and several workers, where the master encodes the data, distributes the data and computation tasks among workers, collects the partial results from workers, and constructs the final result. The following reasons motivate us to pursue a different path from the centralized, single-master coded computing systems. * In a distributed computing system where a single master encodes the whole initial data and then decodes all of the results, the master would be a bottleneck for scalability. 
As the number of workers grows, the master should get more powerful, in terms of computation and communication. * In some applications like federated learning [3], the data for computation tasks are not congregated in one place, for reasons like the privacy of the data owners, and/or the huge volume of data. Thus, one master does not have access to the whole data. * Computation schemes that are specifically designed for distributed sources of data are better suited for applications where the production of data is distributed by nature, like sharded blockchains, federated learning, and internet of things. * In practice, a cluster of servers that act as workers, would serve more than one master, and do computation tasks for several masters in parallel. In this way, the resources of workers would be efficiently utilized, and not left idle. A few works have studied _masterless_ coded computing or coded computing with several masters. In [4], a fully-decentralized and masterless setup, consisting of only \(N\in\mathbb{N}\) workers, has been considered. In such a setup, decentralized coding strategies for two particular computation tasks, matrix multiplication, and fast Fourier transform, have been introduced. In matrix multiplication of two arbitrary matrices \(\mathbf{A}\) and \(\mathbf{B}\), each worker has a portion of \(\mathbf{A}\) and \(\mathbf{B}\) initially. Then, each worker performs local encoding on its initial data, communicates with other workers, and then multiplies two matrices. At the end of the algorithm, each worker has a portion of the calculated \(\mathbf{A}\mathbf{B}\). Note that the encodings done by workers, are on their own initial data, and not on data received from other workers. We will argue that distributed encoding of data received from external sources introduces new challenges. In [5], several masters, each with a matrix-vector multiplication task, share some workers, and the workers have heterogeneous computing powers. The problem is to assign computation tasks to workers such that the overall completion time is minimum. Each worker either serves one master or several masters and divides its computation power between them. In the latter case, each worker simply performs the computation tasks of the masters separately, and no coding is involved. In [6], there are \(N\in\mathbb{N}\) workers, where each has a piece of data, and wants to have a linear combination of all data. There is no master and workers exchange messages with one another in a communication-efficient way, to calculate their target linear combinations. [7] considers a network graph of \(N\in\mathbb{N}\) nodes each with an initial value, where a subset of them wants to calculate some arbitrary functions of the initial values. The target function of each node in that subset could be different. In order to achieve this goal, an iterative linear strategy is deployed in which nodes receive messages from their neighbours and obtain a linear combination of the received values and their own value, without any leader. Given the graph of nodes has some particular connectivity properties, a finite number of rounds suffices for nodes to calculate the desired functions of initial values, using the linear combinations obtained in the previous iterations. A similar problem with the presence of adversaries is studied in [8], where adversaries may calculate the linear combinations incorrectly and send incorrect values to their neighbours. 
The adversaries do not send different values to different neighbours, and may only deviate in calculating the linear combination of the messages of their neighbours and their own message.

Coded computing can be designed to guarantee security in an adversarial environment, as in [9, 10, 11, 12]. In the conventional setup of a single master and many workers, the security of the coded computing scheme translates to the resilience of the scheme against adversarial workers, because in such setups the master is assumed to be honest. Adversarial workers can freely deviate from the protocol, e.g. return an arbitrary result, send different data to different workers, refuse to send data, and so forth. However, when there are several masters, it is reasonable to assume that some masters might be adversarial as well.

In this paper, we study a coded computing system composed of several masters and workers, where masters employ workers to accomplish computational jobs for them. We refer to masters as _data owners_, as each of them has a piece of data, and aims to obtain a polynomial target function of that data by using workers. Some of the data owners are adversarial and try to create discrepancies in the system, in order to cause errors in the computation tasks of honest data owners. There are communication links between data owners and workers, and also between adversarial data owners, since they can collude with one another. However, like many coded computing systems [1, 2], there is no communication means between workers in our system. The system works as follows. The data owners distribute their data among workers, but adversarial data owners may distribute different contradictory data to different workers, to corrupt the results. The adversarial data owners are free to cooperate in choosing what they send to workers. Upon the reception of data, each worker encodes the received data of all data owners, applies the target function on the encoded data, and then returns the result back to data owners, along with a small _tag_ of the received data, to inform the honest data owners about the adversarial data owners indirectly. Tags allow honest data owners to partition workers into sets that have received the same message from the adversaries in the first step, and enable honest data owners to decode the results. Finally, the honest data owners can extract their required information from the returned results of the workers. We name the described system _Vers_, which is short for versatile. The reason for this naming is that the workers are indeed versatile, and perform a range of different tasks: they encode the received data, calculate the target function of the coded data, and calculate tags.

We study the fundamental limit of Vers in the case where Lagrange encoding is deployed in workers. This fundamental limit, which we denote by \(t^{*}\), is the minimum number of workers such that honest data owners can correctly and reliably extract their required information from the results of any set of \(t^{*}\) workers. In other words, for any adversarial behavior, any set of \(t^{*}\) workers should be enough for honest data owners to calculate the target function of their data correctly. On the other hand, for \(t^{*}\) to indeed be the minimum, there should exist an adversarial behavior and a set of \(t^{*}-1\) workers whose results do not determine the target of honest data owners uniquely.
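As a quick numerical illustration, the short snippet below evaluates the closed-form threshold \(t^{*}=v^{\beta}d(K-1)+1\) stated in the abstract (and established later as the main result) for a small set of assumed parameters; everything except the formula itself, namely the concrete values of \(K\), \(\beta\), \(v\), \(d\) and the helper name, is an illustrative assumption.

```python
# Illustrative evaluation of the threshold t* = v^beta * d * (K - 1) + 1
# from the abstract; the concrete parameter values below are assumptions.

def fundamental_limit(K: int, beta: int, v: int, d: int) -> int:
    """Closed-form threshold for Vers with Lagrange encoding."""
    return v ** beta * d * (K - 1) + 1

if __name__ == "__main__":
    K, beta, v, d = 3, 1, 2, 2                 # assumed example parameters
    t_star = fundamental_limit(K, beta, v, d)  # 2^1 * 2 * (3 - 1) + 1 = 9
    print("t* =", t_star)
    # The number of workers must satisfy N >= t* for decoding to be possible.
    print("N = 10 sufficient?", 10 >= t_star)
```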
It is worth noting that in our previous work [13], we studied the fundamental limit of a system in which some data owners want to store their data in some storage nodes, such that the data can be correctly retrieved later. In that system, some data owners are adversarial and may send different data to different storage nodes. The type of adversarial behavior in our problem, i.e. adversaries sending inconsistent data to other nodes, has been explored from different points of view in a wide range of distributed systems, such as byzantine agreement [14, 15, 16], byzantine broadcast [17, 18, 19], verifiable information dispersal [20, 21, 22], and distributed key generation [23]. Even though this adversarial behaviour is well known, addressing it in the context of distributed computing systems like blockchain, federated learning, and the internet of things is largely unexplored. The distinction between this work and the mentioned ones that consider a similar adversarial model is that we deal with computation, rather than consensus. Our goal is to correctly calculate functions of a subset of data, and this does not necessarily require consensus on that data before calculation. Generally, inconsistencies caused by the adversaries need to be resolved somehow, for which there are two main approaches: nodes can either resolve the inconsistencies between themselves by communicating with one another, or they can postpone dealing with inconsistencies to subsequent steps such as decoding. The first approach is pursued in many works such as [20], where nodes exchange enough information with each other to extract consistent data and then use that consistent data in their main task. Since in our model there is no communication link between workers, they cannot detect the adversarial data owners and then mitigate their effect by exchanging information with one another. Therefore, the first approach alone is not sufficient. In this work, we choose a combination of these approaches for our particular model: we deal with inconsistencies in the final step, in decoding, with the help of tags that workers had previously produced and communicated to the data owners.

The structure of this paper is as follows. We formulate the problem in Section III, and state the main result in Section V. Then, we explain the concept of tag functions in Section VI and introduce a suitable tag function. We define some notions and analyze the system in Section VII. Finally, we establish the fundamental limit of the system in Section VIII.

## III Problem formulation

In this section, we formally introduce \(\text{Vers}(f,N,K,\beta,v,\text{Enc})\), a fully distributed linear coded computing system with parameters \(f,N,K,\beta,v\), and encoding algorithm Enc. Consider a system of \(K\in\mathbb{N}\) data owners and \(N\in\mathbb{N}\) worker nodes, indexed \(1,\ldots,K\), and \(1,\ldots,N\), where \(K\leq N\). We denote the set \(\{1,\ldots,m\},m\in\mathbb{N}\) with \([m]\), and \(\{0,1,\ldots,m\}\) with \([m^{+}]\). The data owner \(k\in[K]\) has \(X_{k}\in\mathbb{U}\) and wants to have \(f(X_{k})\), where \(f:\mathbb{U}\rightarrow\mathbb{V}\) is an arbitrary polynomial function of degree \(d\in\mathbb{N}\), called the _target function_, and \(\mathbb{U},\mathbb{V}\) are vector spaces over a finite field \(\mathbb{F}\). Data owners employ workers to do the calculations for them, in a distributed manner. Among the data owners, there are at most \(\beta\in\mathbb{N}\) adversarial nodes, \(\beta<K\).
We denote the set of adversarial data owners by \(\mathcal{A}\subseteq[K]\), and the set of honest data owners by \(\mathcal{H}=[K]\setminus\mathcal{A}\). The adversarial data owners try to cause discrepancies in order to mislead honest data owners about the correct values of \(f(X_{k})\), \(k\in\mathcal{H}\). The adversarial data owners sacrifice their chance to use workers for their calculations, and hence do not want any specific correct computation at the end. The adversarial data owners are free to cooperate with each other, but they do not know the data of the honest nodes. We assume that all workers are honest. Moreover, honest data owners and workers do not know the adversarial data owners.

We design Vers drawing inspiration from [12]. In the following, we describe the workflow of Vers in detail. The workflow is also shown in Algorithm 1.

In the first step, data owners transmit their data to the workers. Let \(X_{k,n}\) be the message sent from data owner \(k\in[K]\) to worker node \(n\in[N]\). An honest data owner \(k\in\mathcal{H}\) sends \(X_{k}\) to all workers, i.e. \(X_{k,n}=X_{k}\) for all \(n\in[N]\). We assume that \(X_{k},k\in\mathcal{H}\), are chosen independently and uniformly at random from \(\mathbb{U}\). An adversarial data owner \(k\in\mathcal{A}\) generates at most \(v\in\mathbb{N}\) different messages, which we call _versions_, \(X_{k}^{(1)},\ldots,X_{k}^{(v)}\), and sends one of them to each worker. In other words, \(\{X_{k,n},n\in[N]\}=\{X_{k}^{(1)},\ldots,X_{k}^{(v)}\}\), for \(k\in\mathcal{A}\). Adversarial data owners can generate up to \(v\) messages of their choice and can send any one of them to any worker they choose. The adversarial data owners may also collude so that their aggregate adversarial behavior is as detrimental as possible.

In the second step, worker node \(n\), which has received \(X_{1,n},\ldots,X_{K,n}\) from the data owners, computes a linear combination of the \(K\) received messages, denoted by \(W_{n}\in\mathbb{U}\), using the encoding algorithm Enc. Let \(\gamma_{k,n}\in\mathbb{F},k\in[K]\), be the \(K\) encoding coefficients of worker \(n\) that the encoding algorithm Enc determines. Therefore,

\[W_{n}=\sum_{k\in[K]}\gamma_{k,n}X_{k,n},\quad n\in[N]. \tag{1}\]

We emphasize that encoding in Vers is done in a decentralized fashion, locally in each worker. This is the main idea of Vers that differentiates it from other coded computing systems that incorporate a central encoder. After local encoding of the received messages, in step 3, worker \(n\) computes \(f(W_{n})\), where \(f\) is the target function. In addition to \(W_{n}\) and \(f(W_{n})\), worker node \(n\) computes \(\mathtt{tag}_{n}=J(X_{1,n},\ldots,X_{K,n})\) in step 4, where \(J:\mathbb{U}^{K}\rightarrow\mathbb{W}\) is a tag function, and \(\mathbb{W}\) is a vector space over \(\mathbb{F}\). We will formally introduce the tag function in Section VI. Intuitively, a tag function compresses its \(K\) inputs and outputs a _fingerprint_ of the inputs, so that the outputs of two different sets of \(K\) inputs are almost always different. In the fifth step, worker node \(n\) sends \(\mathtt{tag}_{n}\) and \(f(W_{n})\) to all data owners. Honest data owners can use tags to identify workers whose received messages were the same in the first step, and therefore, whose returned computation results are consistent.
In the sixth step, the honest data owner \(k\in[K]\) uses \(\{f(W_{n}),n\in\mathcal{T}\}\cup\{\mathtt{tag}_{n},n\in\mathcal{T}\}\), where \(\mathcal{T}\) is an arbitrary subset of \([N]\) of size \(t\in\mathbb{N}\), to recover \(f(X_{k})\). An example \(\text{Vers}(f,N=5,K=3,\beta=1,v=2,\text{Enc})\) is shown in Fig. 1.

```
1: The honest data owner \(k\in\mathcal{H}\) sends \(X_{k}\) to all workers. The adversarial data owner \(k\in\mathcal{A}\) generates at most \(v\in\mathbb{N}\) different messages \(X_{k}^{(1)},\ldots,X_{k}^{(v)}\), and sends one of them to each worker.
2: Worker node \(n\in[N]\) computes \(W_{n}\in\mathbb{U}\) as a linear combination of the \(K\) received messages from data owners, using the encoding algorithm Enc, and according to (1).
3: Worker node \(n\in[N]\) computes \(f(W_{n})\).
4: Worker node \(n\in[N]\) computes \(\mathtt{tag}_{n}=J(X_{1,n},\ldots,X_{K,n})\), where \(J:\mathbb{U}^{K}\rightarrow\mathbb{W}\) is a tag function, and \(\mathbb{W}\) is a vector space over \(\mathbb{F}\).
5: Worker node \(n\in[N]\) sends \(\mathtt{tag}_{n}\) and \(f(W_{n})\) to all data owners.
6: An honest data owner \(k\) recovers \(f(X_{k})\) from \(\{f(W_{n}),n\in\mathcal{T}\}\cup\{\mathtt{tag}_{n},n\in\mathcal{T}\}\), where \(\mathcal{T}\) is an arbitrary subset of \([N]\) of size \(t\in\mathbb{N}\).
```
**Algorithm 1** The workflow of Vers

**Remark 1**.: Note that each worker node calculates a single tag in the fourth step of Vers. Therefore, the total communication of Vers is \(O(N)\), i.e. linear in \(N\). Data owners use the indirect information from these tags to partition workers into sets that have received the same adversarial data from the adversarial data owners. Another possible design is having each worker calculate \(K\) tags, one for each message received from the \(K\) data owners. The direct information from such tags reveals the identities of the adversarial data owners to the honest data owners. However, such a design results in an \(O(KN)\) communication load, and hinders scalability when \(K\) grows. As previously stated in Section II, one of the main purposes of Vers is to compensate for the scalability bottlenecks of single-master systems. Therefore, in Vers, we incorporate the communication-efficient single-tag method, to allow \(K\) to scale. Another, intermediate, approach is to have each worker calculate a constant number of tags of the \(K\) received messages. This method probably gives more information about the adversarial behaviour to honest data owners. However, due to its complexity, we do not address this approach in this paper, and leave it for future research.

The formal definition of the fundamental limit is as follows.

**Definition 1** (The fundamental limit of Vers).: The fundamental limit of \(\text{Vers}(f,N,K,\beta,v,\text{Enc})\), which we denote by \(t^{*}(f,N,K,\beta,v,\text{Enc})\), is the minimum \(t\) required such that \(f(X_{k}),k\in\mathcal{H}\), can be correctly computed from \(\{f(W_{n}),n\in\mathcal{T}\}\) and \(\{\mathtt{tag}_{n},n\in\mathcal{T}\}\), under any adversarial behavior, where \(\mathcal{T}\) is any arbitrary subset of \([N]\) of size \(t\).

Fig. 1: \(\text{Vers}(f,N=5,K=3,\beta=1,v=2,\text{Enc})\) is shown as an example (only some of the messages between data owners and workers are shown). The leftmost data owner is adversarial.
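To make the workflow of Algorithm 1 concrete, the following minimal Python sketch simulates one run of Vers over a prime field with scalar data: a single adversarial data owner alternates between two versions, each worker performs the local linear encoding of (1) with Lagrange coefficients (the choice of Enc analyzed later in the paper), applies an assumed degree-2 target function, and attaches a toy stand-in for the tag; the honest owners then group workers by tag as in step 6. The helper names, the field size, and the toy tag are illustrative assumptions, not the constructions of the paper.

```python
# A minimal sketch of the Vers workflow of Algorithm 1 over a prime field,
# assuming scalar data, Lagrange encoding coefficients, and a toy tag.
import random

P = 2**31 - 1                 # assumed prime modulus for the field F
def f(x):                     # assumed degree-2 target function
    return x * x % P

def lagrange_coeff(k, alpha, omegas):
    """gamma_{k,n}: Lagrange basis polynomial of owner k evaluated at alpha."""
    num, den = 1, 1
    for j, w in enumerate(omegas):
        if j != k:
            num = num * (alpha - w) % P
            den = den * (omegas[k] - w) % P
    return num * pow(den, P - 2, P) % P     # modular inverse via Fermat

def toy_tag(received):
    """Stand-in for the tag function J: identical inputs give identical tags."""
    acc = 0
    for i, x in enumerate(received, start=1):
        acc = (acc + x * pow(31, i, P)) % P
    return acc

def run_vers(K=3, N=5, adversary=0, versions=(11, 99)):
    omegas = list(range(1, K + 1))           # evaluation points of data owners
    alphas = list(range(K + 1, K + 1 + N))   # evaluation points of workers
    honest = [random.randrange(P) for _ in range(K)]
    groups = {}
    for n in range(N):
        # Step 1: worker n receives one message from every data owner; the
        # adversarial owner alternates between its two versions.
        received = list(honest)
        received[adversary] = versions[n % len(versions)]
        # Step 2: local Lagrange encoding, as in (1).
        W = sum(x * lagrange_coeff(k, alphas[n], omegas)
                for k, x in enumerate(received)) % P
        # Steps 3-5: compute f(W_n) and the tag, and report both.
        fW, tag = f(W), toy_tag(received)
        # Step 6 (partial): honest owners group workers by tag before decoding.
        groups.setdefault(tag, []).append((n, fW))
    return groups

print(run_vers())   # workers that got the same adversarial version share a tag
```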
No matter how the adversaries choose and distribute their \(v\) different messages \(X_{k}^{(1)},\ldots,X_{k}^{(v)},k\in\mathcal{A}\), to workers, \(f(X_{k}),k\in\mathcal{H}\), should be correctly extractable from \(\{f(W_{n}),n\in\mathcal{T}\}\) and \(\{\mathtt{tag}_{n},n\in\mathcal{T}\}\). This should hold for any subset of workers of size \(t^{*}(f,N,K,\beta,v,\text{Enc})\), for the system to be robust against stragglers and node failures, which are common in distributed systems. Note that we do not set any requirement on being able to find all or any of \(f(X_{k}^{(1)}),\ldots,f(X_{k}^{(v)})\), \(k\in\mathcal{A}\), from \(\{f(W_{n}),n\in\mathcal{T}\}\) and \(\{\mathtt{tag}_{n},n\in\mathcal{T}\}\). This is because the adversaries give up their chance of using workers for computation in order to cause inconsistency in the system. In fact, there exist adversarial behaviors for which it is not possible to find all of \(f(X_{k}^{(1)}),\ldots,f(X_{k}^{(v)})\). For example, consider the case where all adversarial data owners \(k\in\mathcal{A}\) send \(X_{k}^{(1)}\) to workers \(1,\ldots,N-1\), and \(X_{k}^{(2)}\) to worker \(N\). In this case, there is not enough information to find \(f(X_{k}^{(2)}),k\in\mathcal{A}\), from any set of equations. Therefore, in order to avoid complex scenarios, we define the fundamental limit based on the requirement to find only \(f(X_{k}),k\in\mathcal{H}\).

In Section VIII, we characterize the fundamental limit of \(\text{Vers}(f,N,K,\beta,v,\text{Lagrange})\). In other words, we choose Lagrange encoding as the Enc algorithm in \(\text{Vers}(f,N,K,\beta,v,\text{Enc})\), set \(\gamma_{k,n}\) in (1) accordingly, and then find \(t^{*}(f,N,K,\beta,v,\text{Lagrange})\).

**Remark 2**.: The challenging part of Vers is the last step, where the data owners decode the results of the workers, because the adversarial data owners had injected inconsistencies into the received data of workers. In this work, data owners deal with inconsistencies in decoding and use tags to determine subsets of workers whose results are consistent. Another approach is to resolve the inconsistencies right in the first step, by using a _reliable broadcast_ algorithm, e.g. [24], for the message of each data owner, instead of having data owners simply send messages to workers. This approach requires communication between workers to help them reach a consensus on the received data from each data owner.

## IV Applications of Vers

The motivation for studying Vers is that we can use it to model a variety of emerging computing systems in the presence of adversaries, such as internet of things networks and blockchains. In this section, we elaborate on some applications of Vers.

In an IoT network, since sensors and devices are resource-constrained, they cannot process the data generated in them. Therefore, they need to offload the computations to external nodes, i.e. workers in Vers, to calculate complex target functions of their data. Thus, Vers can easily be used to model an IoT network of resource-constrained devices. Data owners in Vers are equivalent to devices in the IoT network. Workers in Vers are equivalent to external computational nodes in the IoT network. The target function in Vers is equivalent to any processing function that devices in the IoT network require. For example, suppose that in agriculture, IoT is used for smart irrigation. Some devices need to know the required amount of water, which is a function of soil moisture, weather conditions and ambient temperature, crop type, and possibly more features.
Such a function can be modeled by the target function in Vers. The adversarial data owners in Vers are equivalent to adversarial IoT devices, which include tampered and infected ones (e.g. see [25]). The adversarial devices may deliberately send inconsistent data to computational nodes in order to mislead the decision-making process of other devices. For example, suppose that IoT is used to deploy a smart home, and some devices have been tampered with and try to affect the correct functionality of the smart lock on the door. Since IoT devices are resource-constrained, the upper limit on the different messages of adversarial data owners in Vers applies to them. The fundamental limit of the Vers model of an IoT network specifies the number of external computational nodes that are required for the system to be robust against the adversarial devices.

Vers can also be used to model blockchains, and in particular, sharded blockchains. In a blockchain network of \(N\in\mathbb{N}\) nodes, nodes collectively process transactions, produce blocks of transactions, distribute the blocks, and also validate them. The blockchain is indeed the agreed-upon chain of blocks, and whenever nodes reach consensus on a new block, that block is appended to the chain. In a fully replicated blockchain system, such as Bitcoin, each node repeats what other nodes do. Such a decentralized approach provides high security against adversaries, because no node relies on another node, but its throughput (the number of confirmed transactions per second) fails to scale due to its replication. Over the past few years, there has been extensive research into how to make blockchains scalable [26, 27]. A notable line of research in this regard is _sharding_, which is inspired by the idea of parallel computing. Recently, Ethereum, the most popular blockchain platform, introduced _Danksharding_ [28], which is a novel sharding protocol, and is going to deploy it in the near future. In vanilla sharding, e.g. [29, 30], blockchain nodes are divided into groups called shards, say \(M\in\mathbb{N}\) shards. Shards run in parallel, such that each shard of \(\frac{N}{M}\) nodes has a local chain and produces its own blocks. The allocation of nodes to shards can be random or based on other methods. The purpose of sharding is to increase the throughput by \(M\) times. However, a shard of \(\frac{N}{M}\) nodes, and thus the whole sharded system, is more vulnerable to security attacks in comparison to \(N\) nodes. For example, carrying out a 51% attack on a shard of \(\frac{N}{M}\) nodes is easier for an adversary, compared to a system of \(N\) nodes.

In order to preserve security and scale the blockchain simultaneously, the concept of coded sharding was first introduced in [31] and named _PolyShard_. Unlike vanilla sharding, where nodes in a shard work on their own blocks, in PolyShard all nodes in all shards work on coded blocks. PolyShard works as follows. In round \(t\in\mathbb{N}\), each shard \(m\in[M]\) needs to validate its new block \(B_{m}^{t}\). Shards broadcast their blocks to all nodes. Each node \(n\) calculates a coded block \(\tilde{B}_{n}^{t}\) from the received blocks \(B_{1}^{t},\ldots,B_{M}^{t}\), using Lagrange coding. To put it more clearly, each node \(n\) forms a Lagrange polynomial whose coefficients are constructed from \(B_{1}^{t},\ldots,B_{M}^{t}\), and then evaluates it at a point \(\alpha_{n}\) to obtain \(\tilde{B}_{n}^{t}\).
Then, nodes apply the block verification function \(g\) to the coded blocks that they calculated previously, instead of applying it \(M\) times on the \(M\) blocks separately. Validation of blocks in round \(t\) requires data of blocks in previous rounds \(1,2,\ldots,t-1\). Each node \(n\) stores a coded chain of blocks in each round, evaluated at point \(\alpha_{n}\), rather than only the blocks of its own shard. Therefore, it can use its stored coded local chain up to round \(t-1\) when verifying its coded block in round \(t\). Nodes broadcast their verification results in the network. In this step, adversaries may broadcast arbitrary erroneous values. Nodes can obtain the verification of \(B_{1}^{t},\ldots,B_{M}^{t}\) by decoding the verifications of coded blocks, and the adversaries' effect translates into errors in decoding the data. The main result of [31] is that the throughput and security of this scheme scale linearly with \(N\), while the storage requirement of nodes does not need to scale. The adversarial model in [31] is limited to adversaries that broadcast an incorrect value instead of the verification result of their coded block. Even in that case, the adversaries send the same incorrect value to all nodes, and the consistency of the system is maintained. However, in a decentralized system like blockchain, adversarial nodes can freely deviate from the protocol. In [32], we introduced the discrepancy attack on PolyShard, in which the adversaries take control of some shards and the adversarial shards disseminate different blocks to different nodes. This causes errors in the coded blocks of nodes, disrupts the validation process, and breaks the scalability of security in PolyShard.

Vers is a comprehensive system that can model a coded sharding blockchain in the presence of adversaries as follows. Data owner \(k\in[K]\) in Vers is equivalent to the representative of shard \(k\) in the blockchain that propagates the new block of its shard in the network. Note that under the discrepancy attack, the representative of shard \(k\) may be adversarial. The data \(X_{k}\) of data owner \(k\) in Vers is equivalent to the block \(B_{k}\) of shard \(k\) in the current round. Workers in Vers are equivalent to the nodes of the network in the blockchain. The target function \(f\) in Vers is equivalent to the block verification function \(g\) in the blockchain. In Vers, data owner \(k\) needs \(f(X_{k})\), and in the blockchain, nodes of shard \(k\) need the result of the verification function of \(B_{k}\). The fundamental limit of the Vers model of a coded sharding blockchain specifies the number of nodes that are required for the system to be robust against the adversarial shards.

## V Main result

The following theorem states the main result.

**Theorem 1**.: The fundamental limit of \(\text{Vers}(f,N,K,\beta,v,\text{Lagrange})\) is \(t^{*}=v^{\beta}d(K-1)+1\).

This result implies that \(N\geq v^{\beta}d(K-1)+1\) should hold. Otherwise, even when there is no straggler or faulty worker, and all workers send their computations back to the data owners, the data owners would not be able to recover \(f(X_{k}),k\in\mathcal{H}\). Since the fundamental limit is the minimum number of workers whose results are enough to correctly recover \(f(X_{k}),k\in\mathcal{H}\), two proofs are needed for Theorem 1. First, in the converse proof, we need to prove that \(t^{*}\) is truly the minimum, which means there should exist a particular adversarial behaviour and a set of \(t^{*}-1\) workers whose results do not determine \(f(X_{k}),k\in\mathcal{H}\), uniquely.
But this is not as simple as showing that a system of equations is underdetermined. Note that the honest data owners do not know the adversarial behaviour, so they do not know the correct way to form equations from the results of workers, even with the help of tags. Consequently, honest data owners need to consider every possible system of equations from the results of workers. Moreover, since Lagrange coding is used, \(f(X_{k}),k\in\mathcal{H}\), are not directly the unknowns of the equations of the workers; rather, they are linear combinations of the unknowns. In the converse proof, we show that there exists a possible interpretation of the results of workers that leads to at least one incorrect output for \(f(X_{k}),k\in\mathcal{H}\). We provide the converse proof in Subsection VIII-A. Second, in the achievability proof, we need to prove that any set of \(t^{*}\) workers is enough to correctly recover \(f(X_{k}),k\in\mathcal{H}\). Again, this should hold for any adversarial behavior, despite the existence of many possible interpretations of the results of workers. We provide the achievability proof in Subsection VIII-B.

**Remark 3**.: The fundamental limit of \(\text{Vers}(f,N,K,\beta=0,v,\text{Enc})\), i.e. when all data owners are honest, is not different from the fundamental limit of a similar system in which a central entity encodes the data and sends the encoded data to workers, because the encoded data in the workers would be the same. Indeed, the challenge in Vers is due to the adversarial data owners that inject contradictory data into the system, and the resulting error that propagates through the local encodings in workers. The coded computing system studied in [12] includes a central encoder, but it is essentially similar to \(\text{Vers}(f,N,K,\beta=0,v,\text{Lagrange})\), and its so-called _recovery threshold_ is equivalent to the fundamental limit of \(\text{Vers}(f,N,K,\beta=0,v,\text{Lagrange})\). Therefore, using the result of [12], we know that the fundamental limit of \(\text{Vers}(f,N,K,\beta=0,v,\text{Lagrange})\) is \(t^{*}(f,N,K,\beta=0,v,\text{Lagrange})=d(K-1)+1\), where \(d\) is the degree of the target polynomial function \(f\).

## VI Tag functions

In Algorithm 1, step 4, we explained that workers compute a _tag_ of their received messages. In this section, we introduce the concept of tags, and prove the existence of a tag function that can be deployed in \(\text{Vers}(f,N,K,\beta,v,\text{Enc})\). Recall that each adversarial data owner \(k\in\mathcal{A}\) distributes \(X_{k}^{(1)},\ldots,X_{k}^{(v)}\) among workers. The honest data owners that receive \(f(W_{n})\) from a worker \(n\in[N]\) after step 5 of Algorithm 1 do not know which one of the \(v\) messages of the adversaries this worker has received and has used in \(W_{n}\). Therefore, honest data owners do not know which workers have used the same set of input messages in their calculations. The purpose of the tag is to inform the honest data owners about the data used by workers. Tags allow data owners to partition \([N]\) into some sets, at most \(v^{\beta}\) sets, where each set contains workers that have received the same messages from all data owners in the first step. Let us denote the concatenation of all \(X_{k},k\in\mathcal{H}\), i.e. the data of the honest data owners, by \(X_{\mathcal{H}}\). Also let \(X_{\mathcal{A},n}\) be the concatenation of all \(X_{k,n},k\in\mathcal{A}\), i.e. the messages of the adversarial data owners received by worker \(n\in[N]\).
Worker \(n\in[N]\) applies the tag function \(J\) on \(X_{1,n},\ldots,X_{K,n}\), but since \(\{X_{1,n},\ldots,X_{K,n}\}=\{X_{\mathcal{H}},X_{\mathcal{A},n}\}\) for all \(n\in[N]\), we abuse notation and write \(J(X_{\mathcal{H}},X_{\mathcal{A},n})\). We define a tag function formally as follows.

**Definition 2** (Tag Function).: A function \(J:\mathbb{U}^{K}\rightarrow\mathbb{U}^{l}\), \(l<K\), is a tag function if, for adversaries that choose two different \(X_{\mathcal{A},n_{1}}\) and \(X_{\mathcal{A},n_{2}}\) independently of \(X_{\mathcal{H}}\) and send them to workers \(n_{1},n_{2}\in[N]\),

\[\text{Pr}\big{(}J(X_{\mathcal{H}},X_{\mathcal{A},n_{1}})=J(X_{\mathcal{H}},X_{\mathcal{A},n_{2}})\big{)}<\epsilon \tag{2}\]

holds, where \(\epsilon\in(0,1)\) is a negligible value.

There are some notable remarks about this definition.

* The probability in (2) is over \(X_{\mathcal{H}}\), because the data of honest data owners come from an i.i.d. uniform distribution on \(\mathbb{U}\).
* As mentioned in Section III, adversaries are unaware of the data of the honest data owners. Therefore, they cannot choose \(X_{\mathcal{A},n_{1}}\) and \(X_{\mathcal{A},n_{2}}\) based on knowledge of \(X_{\mathcal{H}}\), and they have to choose \(X_{\mathcal{A},n_{1}}\) and \(X_{\mathcal{A},n_{2}}\) independently of \(X_{\mathcal{H}}\).
* The condition \(l<K\) is imposed because we want the tag to be lightweight and communication-efficient. If we do not impose such a constraint, each worker \(n\) could use the concatenation of all \(X_{k,n}\), \(k\in[K]\), as a tag, and data owners could use such a tag very easily to spot the inconsistencies between workers. However, this trivial solution is not communication-efficient at all. We need a tag function that compresses the \(K\) received messages in each worker so that the discrepancies between workers caused by adversaries become evident to honest data owners.

Suppose that we have a tag function \(J\). Any data owner can compare the tags of two workers \(n_{1},n_{2}\in[N]\), \(\mathtt{tag}_{n_{1}}=J(X_{\mathcal{H}},X_{\mathcal{A},n_{1}})\) and \(\mathtt{tag}_{n_{2}}=J(X_{\mathcal{H}},X_{\mathcal{A},n_{2}})\), and by Definition 2, conclude that \(\mathtt{tag}_{n_{1}}=\mathtt{tag}_{n_{2}}\) means \(n_{1}\) and \(n_{2}\) had received the same data from all data owners with high probability, i.e. \(X_{k,n_{1}}=X_{k,n_{2}}\) for all \(k\in[K]\). On the other hand, when \(\mathtt{tag}_{n_{1}}\neq\mathtt{tag}_{n_{2}}\), it is obvious for data owners that \(X_{k,n_{1}}\neq X_{k,n_{2}}\) for at least one \(k\in\mathcal{A}\) (this is simply because \(J\) is a function). Therefore, tags can be used by data owners to partition \(\mathcal{T}\) into sets of workers that had received the same data from the data owners in the first step of Algorithm 1, and hence have used the same initial data in their subsequent computations.

Tag functions are a type of general _fingerprinting function_. Fingerprinting is used to map arbitrarily large data into a small, fixed-length digest serving as its unique identifier, for many practical purposes like avoiding the comparison of bulky data.
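Before recalling the formal fingerprinting definitions, the following small experiment illustrates, purely empirically, why a function \(J\) drawn uniformly at random from all functions \(\mathbb{F}^{K}\rightarrow\mathbb{F}\) tends to behave as required by Definition 2: for two fixed adversarial tuples chosen independently of the honest data, a collision occurs with probability roughly \(1/|\mathbb{F}|\) over the random honest data. This is the random-coding intuition behind Theorem 2 below; the tiny field size, the trial counts, and the helper names are assumptions made only for tractability.

```python
# Empirical illustration (not a proof) of the tag property of Definition 2 for
# a uniformly random function J: F^K -> F, with a tiny assumed field size q.
import random

q = 101                       # assumed small field size |F|
random.seed(0)

table = {}                    # lazily sampled random function J: F^3 -> F
def J(x):
    if x not in table:
        table[x] = random.randrange(q)
    return table[x]

pairs, total = 500, 0
for _ in range(pairs):
    # Two distinct adversarial message pairs, chosen independently of X_H.
    a = (random.randrange(q), random.randrange(q))
    a2 = (random.randrange(q), random.randrange(q))
    while a2 == a:
        a2 = (random.randrange(q), random.randrange(q))
    # Count collisions over all possible values of the honest data X_H.
    for x_h in range(q):
        if J((x_h,) + a) == J((x_h,) + a2):
            total += 1

print(f"empirical collision rate: {total / (pairs * q):.4f}  (1/q = {1/q:.4f})")
```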
According to [21], an \(\epsilon\)-fingerprinting function \(fp:\mathcal{K}\times\mathbb{F}^{\delta}\to\mathbb{F}^{\gamma}\) satisfies

\[\max_{\begin{subarray}{c}d,d^{\prime}\in\mathbb{F}^{\delta}\\ d\neq d^{\prime}\end{subarray}}\text{Pr}\left[fp(r,d)=fp(r,d^{\prime}):r\stackrel{{ R}}{{\leftarrow}}\mathcal{K}\right]\leq\epsilon, \tag{3}\]

where the data is a length-\(\delta\) vector with elements in \(\mathbb{F}\), and \(r\in\mathcal{K}\) is a random seed, \(\mathcal{K}\subset\mathbb{R}\). One of the well-known fingerprinting functions is Rabin's fingerprint [33], together with its derivatives, which use random polynomials over the finite field to generate the fingerprint. Another class of fingerprinting functions are cryptographic hash functions. A hash function \(h\) is collision resistant if

\[\text{Pr}\big{(}(x_{0},x_{1})\gets A,x_{0}\neq x_{1}:h(x_{0})=h(x_{1})\big{)}\leq\epsilon, \tag{4}\]

where \(A\) is a probabilistic polynomial adversary (this is just a rough definition; for a formal definition, refer to [34]). We cannot use a collision-resistant hash function as a tag function, for two reasons.

* The probability of collision is small only when \(x_{0}\) and \(x_{1}\) are chosen by a probabilistic polynomial adversary. However, the tag function has to be resilient against an information-theoretic adversary.
* According to (2), the inputs to the tag function have a common part, \(X_{\mathcal{H}}\), but there is no such constraint on \(x_{0}\) and \(x_{1}\) in (4). Even if collision resistance were defined as \(\text{Pr}\big{(}(x_{0},x_{1},x_{2})\gets A:x_{1}\neq x_{2},h(x_{0}||x_{1})=h(x_{0}||x_{2})\big{)}\leq\epsilon\), such an \(h\) could not be used as a tag, because the adversaries could be any of the \(\beta\) data owners and not necessarily the last \(\beta\) data owners. Note that \(J(X_{\mathcal{H}},X_{\mathcal{A},n})\) in (2) is just an abuse of notation, and in fact, worker \(n\in[N]\) calculates \(J(X_{1,n},\ldots,X_{K,n})\), in which the \(\beta\) adversarial messages could be dispersed among any of the positions.

In the following theorem, we prove that among all functions from \(\mathbb{U}^{K}\) to \(\mathbb{U}\), there exists a tag function.

**Theorem 2**.: There exists a tag function \(J^{*}:\mathbb{U}^{K}\rightarrow\mathbb{U}\) that satisfies (2), provided that \(|\mathbb{U}|\) is large enough.

The proof of this theorem is similar to Shannon's achievability proof for the capacity of discrete memoryless channels, and can be found in the Appendix.

**Remark 4**.: Tags help data owners partition workers, but they cannot resolve all the ambiguity about the data that workers previously received from adversarial data owners. For example, assume that

\[\mathtt{tag}_{1}=J^{*}(X_{1},X_{2}^{(1)},X_{3}^{(1)}),\]
\[\mathtt{tag}_{2}=J^{*}(X_{1},X_{2}^{(1)},X_{3}^{(1)}),\]
\[\mathtt{tag}_{3}=J^{*}(X_{1},X_{2}^{(2)},X_{3}^{(1)}),\]
\[\mathtt{tag}_{4}=J^{*}(X_{1},X_{2}^{(2)},X_{3}^{(2)}),\]

where \(J^{*}\) is a tag function. An honest data owner compares these tags and finds that \(\mathtt{tag}_{1}=\mathtt{tag}_{2}\), \(\mathtt{tag}_{2}\neq\mathtt{tag}_{3}\), \(\mathtt{tag}_{2}\neq\mathtt{tag}_{4}\), and \(\mathtt{tag}_{3}\neq\mathtt{tag}_{4}\). So it concludes that with high probability, workers \(1\) and \(2\) have received the same data from the adversaries, while workers \(3\) and \(4\) have each received different data. This honest data owner has no way to find out that workers \(3\) and \(4\) have received the same data from the second data owner, i.e.
\(X_{2,3}=X_{2,4}=X_{2}^{(2)}\), or that workers \(2\) and \(3\) have received the same data from the third data owner, i.e. \(X_{3,2}=X_{3,3}=X_{3}^{(1)}\).

## VII Analysis of \(\mathtt{Vers}(f,N,K,\beta,v,\textsc{Lagrange})\)

In this section, we introduce some tools and use them to analyze the system. These tools help us understand the system better, and will be useful in the next section when we study the fundamental limit of Vers. In the following, we define the coefficient vector, monomial vector, characteristic matrix, and relation matrix, all of which are constructed from parts associated with every possible adversarial behavior from the point of view of the workers. In other words, the notions we introduce in this section are inclusive of all adversarial behaviors. In the converse proof of the fundamental limit of the system in the next section, we will show that in the worst-case scenario, all adversarial behaviors from the point of view of workers can be present in the system, and therefore need to be addressed.

Recall that there are at most \(\beta\) adversarial data owners, and each of them may send different messages to different workers, but honest data owners send one single message to all workers. The _version vector_ of a worker, as defined in the following, indicates which version of each adversarial message that worker has received.

**Definition 3** (version vector).: For worker \(n\in[N]\), we define a version vector \(\mathbf{v}\in[v^{+}]^{K}\) of length \(K\), whose \(k\)-th element, if \(k\in\mathcal{A}\), is in \([v]\), and denotes the version of the message of adversarial data owner \(k\in\mathcal{A}\) received by worker \(n\). Moreover, if \(k\in\mathcal{H}\), the \(k\)-th element of \(\mathbf{v}\) is \(0\), indicating that all workers receive the same message from the honest data owner \(k\). In other words, worker node \(n\) receives \(X_{k}^{(\mathbf{v}|k)}\) from the adversarial data owner \(k\).

Note that worker \(n\) does not know \(\mathbf{v}\), because it knows neither the adversaries nor the versions of the adversarial messages. Worker \(n\) receives \(K\) messages from data owners, without knowing anything else about the adversarial or honest data owners, and performs some computations on them. The version vector of worker \(n\) is equivalent to the adversarial behavior that this worker observes. Since there are at most \(v\) different adversarial messages from each adversarial data owner, there exist \(v^{\beta}\) different version vectors. This brings us to the next definition. It is worth noting that we did not include a subscript \(n\) for the version vector of worker \(n\) in Definition 3 because we will not refer to version vectors by the workers that received them. Rather, we use subscripts in version vectors to enumerate them, as will become clear in the following definition.

**Definition 4** (version vector set).: The version vector set, denoted by \(\mathcal{V}\coloneqq\{\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{v^{\beta}}\}\subset[v^{+}]^{K}\), is the set of all the \(v^{\beta}\) possible version vectors.

According to the system model described in Section III, each worker computes a linear combination of the received messages, as in (1). Since we are studying Vers with Lagrange encoding, each worker \(n\in[N]\) whose version vector is \(\mathbf{v}\) forms the following Lagrange polynomial when encoding the received data from data owners.
\[q^{(\mathbf{v})}(z)\coloneqq\sum_{k\in\mathcal{A}}X_{k}^{(\mathbf{v}|k)}\prod_{j\neq k}\frac{z-\omega_{j}}{\omega_{k}-\omega_{j}}+\sum_{k\in\mathcal{H}}X_{k}\prod_{j\neq k}\frac{z-\omega_{j}}{\omega_{k}-\omega_{j}}, \tag{5}\]

where \(\omega_{k}\in\mathbb{F},k\in[K]\), are \(K\) distinct elements assigned to data owners. This can be rewritten as

\[q^{(\mathbf{v})}(z)=\sum_{i=0}^{K-1}L_{i}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A}\big{)}z^{i}, \tag{6}\]

where \(L_{0},\ldots,L_{K-1}:\mathbb{F}^{K}\rightarrow\mathbb{F}\) are linear maps. Then, \(f(q^{(\mathbf{v})}(z))\) can be rewritten as

\[f(q^{(\mathbf{v})}(z))=\sum_{i=0}^{d(K-1)}u_{i}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A},f(.)\big{)}z^{i}, \tag{7}\]

where \(u_{0},\ldots,u_{d(K-1)}:\mathbb{F}^{K}\rightarrow\mathbb{F}\) are polynomials of degree \(d\) in \(X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A}\), and \(X_{s},s\in\mathcal{H}\). Worker \(n\in[N]\) evaluates \(f(q^{(\mathbf{v})}(z))\) at \(\alpha_{n}\), i.e. \(f(W_{n})=f(q^{(\mathbf{v})}(\alpha_{n}))\), where \(\alpha_{n},n\in[N]\), are \(N\) distinct elements of \(\mathbb{F}\) assigned to workers.

In the following, we define the coefficient vector and coefficient sub-vector, so that we can express the computations of workers as matrix multiplications.

**Definition 5** (coefficient vector).: Let the version vector set be \(\mathcal{V}=\{\mathbf{v}_{1},\ldots,\mathbf{v}_{v^{\beta}}\}\). The coefficient vector for a \(\text{Vers}(f,N,K,\beta,v,\text{Lagrange})\) system is defined as

\[\mathbf{U}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(i)},r\in\mathcal{A},i\in[v],f(.)\big{)}\coloneqq\begin{bmatrix}\mathbf{u}_{\mathbf{v}_{1}}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}_{1}|r)},r\in\mathcal{A},f(.)\big{)}\\ \vdots\\ \mathbf{u}_{\mathbf{v}_{v^{\beta}}}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}_{v^{\beta}}|r)},r\in\mathcal{A},f(.)\big{)}\end{bmatrix}, \tag{8}\]

where \(\mathbf{u}_{\mathbf{v}}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A},f(.)\big{)},\mathbf{v}\in\mathcal{V}\), is a coefficient sub-vector defined as

\[\mathbf{u}_{\mathbf{v}}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A},f(.)\big{)}\coloneqq\left[\begin{array}{c}u_{d(K-1)}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A},f(.)\big{)}\\ \vdots\\ u_{0}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A},f(.)\big{)}\end{array}\right],\quad\mathbf{v}\in\mathcal{V}. \tag{9}\]

The length of each coefficient sub-vector is \(d(K-1)+1\), so the length of the coefficient vector \(\mathbf{U}\) is \(v^{\beta}(d(K-1)+1)\). As mentioned before, the coefficient vector covers all of the \(v^{\beta}\) possible adversarial behaviors. Using the definition above, it is clear that for worker \(n\in[N]\) whose version vector is \(\mathbf{v}\in\mathcal{V}\),

\[f(W_{n})=\big{[}\alpha_{n}^{d(K-1)}\ \ldots\ \alpha_{n}\ 1\big{]}\,\mathbf{u}_{\mathbf{v}}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}|r)},r\in\mathcal{A},f(.)\big{)}. \tag{10}\]

Therefore, the coefficient vector helps us express the computations of the workers properly. This motivates us to study the coefficient vector more in depth. The following example clarifies the above definition.

**Example 1**.: Consider the following parameters.

| \(K\) | \(N\) | \(\beta\) | \(v\) | \(\mathcal{A}\) | \(\mathcal{H}\) | \((\omega_{1},\omega_{2},\omega_{3})\) | \(f(x)\) |
|---|---|---|---|---|---|---|---|
| 3 | 3 | 2 | 2 | \(\{1,2\}\) | \(\{3\}\) | \((1,2,3)\) | \(x^{2}\) |

Since \(v=2\) and \(\beta=2\), the version vector set is \(\mathcal{V}=\{[1,1,0],[1,2,0],[2,1,0],[2,2,0]\}\).
The Lagrange polynomial for the version vector \([i_{1},i_{2},0]\in\mathcal{V}\) is

\[\begin{split} q^{([i_{1},i_{2},0])}(z)&=\frac{(z-2)(z-3)}{2}X_{1}^{(i_{1})}+\frac{(z-1)(z-3)}{-1}X_{2}^{(i_{2})}+\frac{(z-1)(z-2)}{2}X_{3}\\ &=z^{2}\Big{(}\frac{X_{1}^{(i_{1})}}{2}-X_{2}^{(i_{2})}+\frac{X_{3}}{2}\Big{)}+z\Big{(}-\frac{5X_{1}^{(i_{1})}}{2}+4X_{2}^{(i_{2})}-\frac{3X_{3}}{2}\Big{)}+\Big{(}3X_{1}^{(i_{1})}-3X_{2}^{(i_{2})}+X_{3}\Big{)},\quad[i_{1},i_{2},0]\in\mathcal{V}.\end{split}\]

Consequently,

\[\begin{split} f(q^{([i_{1},i_{2},0])}(z))&=\Big{(}\frac{(X_{1}^{(i_{1})})^{2}}{4}-X_{1}^{(i_{1})}X_{2}^{(i_{2})}+\frac{X_{1}^{(i_{1})}X_{3}}{2}+(X_{2}^{(i_{2})})^{2}-X_{2}^{(i_{2})}X_{3}+\frac{X_{3}^{2}}{4}\Big{)}z^{4}\\ &\quad+\Big{(}-\frac{5(X_{1}^{(i_{1})})^{2}}{2}+9X_{1}^{(i_{1})}X_{2}^{(i_{2})}-4X_{1}^{(i_{1})}X_{3}-8(X_{2}^{(i_{2})})^{2}+7X_{2}^{(i_{2})}X_{3}-\frac{3X_{3}^{2}}{2}\Big{)}z^{3}\\ &\quad+\Big{(}\frac{37(X_{1}^{(i_{1})})^{2}}{4}-29X_{1}^{(i_{1})}X_{2}^{(i_{2})}+\frac{23X_{1}^{(i_{1})}X_{3}}{2}+22(X_{2}^{(i_{2})})^{2}-17X_{2}^{(i_{2})}X_{3}+\frac{13X_{3}^{2}}{4}\Big{)}z^{2}\\ &\quad+\Big{(}-15(X_{1}^{(i_{1})})^{2}+39X_{1}^{(i_{1})}X_{2}^{(i_{2})}-14X_{1}^{(i_{1})}X_{3}-24(X_{2}^{(i_{2})})^{2}+17X_{2}^{(i_{2})}X_{3}-3X_{3}^{2}\Big{)}z\\ &\quad+\Big{(}9(X_{1}^{(i_{1})})^{2}-18X_{1}^{(i_{1})}X_{2}^{(i_{2})}+6X_{1}^{(i_{1})}X_{3}+9(X_{2}^{(i_{2})})^{2}-6X_{2}^{(i_{2})}X_{3}+X_{3}^{2}\Big{)}.\end{split} \tag{11}\]

Therefore,

\[\mathbf{u}_{[i_{1},i_{2},0]}(X_{1}^{(i_{1})},X_{2}^{(i_{2})},X_{3},f)=\left[\begin{array}{c}\frac{(X_{1}^{(i_{1})})^{2}}{4}-X_{1}^{(i_{1})}X_{2}^{(i_{2})}+\frac{X_{1}^{(i_{1})}X_{3}}{2}+(X_{2}^{(i_{2})})^{2}-X_{2}^{(i_{2})}X_{3}+\frac{X_{3}^{2}}{4}\\ -\frac{5(X_{1}^{(i_{1})})^{2}}{2}+9X_{1}^{(i_{1})}X_{2}^{(i_{2})}-4X_{1}^{(i_{1})}X_{3}-8(X_{2}^{(i_{2})})^{2}+7X_{2}^{(i_{2})}X_{3}-\frac{3X_{3}^{2}}{2}\\ \frac{37(X_{1}^{(i_{1})})^{2}}{4}-29X_{1}^{(i_{1})}X_{2}^{(i_{2})}+\frac{23X_{1}^{(i_{1})}X_{3}}{2}+22(X_{2}^{(i_{2})})^{2}-17X_{2}^{(i_{2})}X_{3}+\frac{13X_{3}^{2}}{4}\\ -15(X_{1}^{(i_{1})})^{2}+39X_{1}^{(i_{1})}X_{2}^{(i_{2})}-14X_{1}^{(i_{1})}X_{3}-24(X_{2}^{(i_{2})})^{2}+17X_{2}^{(i_{2})}X_{3}-3X_{3}^{2}\\ 9(X_{1}^{(i_{1})})^{2}-18X_{1}^{(i_{1})}X_{2}^{(i_{2})}+6X_{1}^{(i_{1})}X_{3}+9(X_{2}^{(i_{2})})^{2}-6X_{2}^{(i_{2})}X_{3}+X_{3}^{2}\end{array}\right], \tag{12}\]

and

\[\mathbf{U}(X_{1}^{(i_{1})},X_{2}^{(i_{2})},i_{1},i_{2}\in[2],X_{3},f)=\left[\begin{array}{c}\mathbf{u}_{[1,1,0]}(X_{1}^{(1)},X_{2}^{(1)},X_{3},f)\\ \mathbf{u}_{[1,2,0]}(X_{1}^{(1)},X_{2}^{(2)},X_{3},f)\\ \mathbf{u}_{[2,1,0]}(X_{1}^{(2)},X_{2}^{(1)},X_{3},f)\\ \mathbf{u}_{[2,2,0]}(X_{1}^{(2)},X_{2}^{(2)},X_{3},f)\end{array}\right]. \tag{13}\]

**Remark 5**.: In Definition 5, we implied a particular order on the coefficient sub-vectors in (8). The first and topmost sub-vector in the coefficient vector is \(\mathbf{u}_{\mathbf{v}_{1}}\), the second is \(\mathbf{u}_{\mathbf{v}_{2}}\), and so forth, until \(\mathbf{u}_{\mathbf{v}_{v^{\beta}}}\). In Example 1, we chose \(\mathbf{v}_{1}=[1,1,0]\), \(\mathbf{v}_{2}=[1,2,0]\), \(\mathbf{v}_{3}=[2,1,0]\), and \(\mathbf{v}_{4}=[2,2,0]\). We could have used another assignment for \(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3},\mathbf{v}_{4}\), as different assignments only lead to different permutations of the same coefficient vector. Therefore, the order of the coefficient sub-vectors in the coefficient vector is optional but fixed.

Every element of the coefficient vector is a summation of some monomials.
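The expansion in Example 1 can be reproduced mechanically. The sketch below uses sympy (assumed to be available) to rebuild the Lagrange polynomial of (5) for one version vector with the evaluation points of Example 1, expand \(f(q(z))=q(z)^{2}\), and print the coefficient sub-vector of (9), which can be compared entry by entry with (11) and (12). The computation is done over the rationals rather than a finite field, which yields the same symbolic coefficients; the variable names are illustrative.

```python
# Symbolic check of Example 1: rebuild q(z) of (5) for one version vector,
# expand f(q(z)) = q(z)^2, and print the coefficients of z^4, ..., z^0.
import sympy as sp

z, X1, X2, X3 = sp.symbols("z X1 X2 X3")   # X1, X2 stand for X_1^{(i_1)}, X_2^{(i_2)}
omegas = [1, 2, 3]                         # evaluation points of Example 1
data = [X1, X2, X3]

# Lagrange interpolation polynomial of (5) for K = 3 data owners.
q = sum(
    x * sp.Mul(*[(z - w) / (omegas[k] - w) for j, w in enumerate(omegas) if j != k])
    for k, x in enumerate(data)
)
fq = sp.expand(q**2)                       # target function f(x) = x^2

# Coefficient sub-vector u_v of (9): coefficients of z^4 down to z^0.
for i in range(4, -1, -1):
    print(f"z^{i}:", fq.coeff(z, i))
```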
For example, in equation (12) from Example 1, \(\mathbf{u}_{[i_{1},i_{2},0]}(X_{1}^{(i_{1})},X_{2}^{(i_{2})},X_{3},f)\) shows that elements of the coefficient vector are made of some monomials. Therefore, in order to understand the coefficient vector better, we define a new vector that contains the monomials that appear in the coefficient vector. Before defining the monomial vector, we need to define the degree set of a function. The degree set of a single variable function \(f:\mathbb{U}\rightarrow\mathbb{V}\), denoted by \(\mathcal{D}(f)\), contains the degrees of the monomials in \(f\). For example, \(\mathcal{D}(f)=\{0,1,3\}\) for \(f(x)=x^{3}+x+1\). **Definition 6** (monomial vector).: The _monomial vector_ for a \(\mathrm{Vers}(f,N,K,\beta,v,\mathrm{Lagrange})\) system contains all degree \(e\in\mathcal{D}_{f}\) monomials of \(X_{k},k\in\mathcal{H}\) and \(X_{k}^{(i)},k\in\mathcal{A},i\in[v]\), and is defined as \[\mathbf{X}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(i)},r\in\mathcal{A},i\in[v],f(.)\big{)}\coloneqq\left[\prod_{k\in\mathcal{A}}(X_{k}^{(\mathbf{v}[k])})^{i_{ k}}\prod_{k\in\mathcal{H}}(X_{k})^{j_{k}},\text{ s.t.}\sum_{k\in\mathcal{A}}i_{k}+\sum_{k\in\mathcal{H}}j_{k}\in\mathcal{D}(f), \text{and }\mathbf{v}\in\mathcal{V}\right]^{\intercal}. \tag{14}\] In the following example, we determine the monomial vector for Vers system introduced in Example 1. **Example 2**.: Recall that in Example 1, \(f(x)=x^{2}\), and \(\mathcal{D}(f)=\{2\}\). Therefore, the monomial vector contains all degree \(2\) monomials of \(X_{k},k\in\mathcal{H}\) and \(X_{k}^{(i)},k\in\mathcal{A},i\in[v]\). In particular, * for \(i_{1}=2,i_{2}=0,i_{3}=0\), we get the monomials \((X_{1}^{(1)})^{2},\ (X_{1}^{(2)})^{2}\), * for \(i_{1}=0,i_{2}=2,i_{3}=0\), we get the monomials \((X_{2}^{(1)})^{2},\ (X_{2}^{(2)})^{2}\), * for \(i_{1}=0,i_{2}=0,i_{3}=2\), we get the monomial \((X_{3})^{2}\), * for \(i_{1}=1,i_{2}=1,i_{3}=0\), we get the monomials \(X_{1}^{(1)}X_{2}^{(1)},\ X_{1}^{(2)}X_{2}^{(1)},X_{1}^{(1)}X_{2}^{(2)},\ X_{1}^{(2 )}X_{2}^{(2)}\), * for \(i_{1}=1,i_{2}=0,i_{3}=1\), we get the monomials \(X_{1}^{(1)}X_{3},\ X_{1}^{(2)}X_{3}\), * for \(i_{1}=0,i_{2}=1,i_{3}=1\), we get the monomials \(X_{2}^{(1)}X_{3},\ X_{2}^{(2)}X_{3}\). Therefore, \[\begin{split}\mathbf{X}\big{(}X_{1}^{(i_{1})},X_{2}^{(i_{2})},X _{3},i_{1},i_{2}\in[2],f(x)=x^{2}\big{)}=&\Big{[}(X_{1}^{(1)})^{2 },\ (X_{1}^{(2)})^{2},\ (X_{2}^{(1)})^{2},\ (X_{2}^{(2)})^{2},\ (X_{3})^{2},X_{1}^{(1)}X_{2}^{(1)},X_{1}^{(2 )}X_{2}^{(1)},\\ &\ X_{1}^{(1)}X_{2}^{(2)},\ X_{1}^{(2)}X_{2}^{(2)},X_{1}^{(1)}X_{ 3},\ X_{1}^{(2)}X_{3},X_{2}^{(1)}X_{3},\ X_{2}^{(2)}X_{3}\Big{]}^{\intercal}. \end{split} \tag{15}\] **Remark 6**.: Note that the order of the monomials in monomial vector is optional, but fixed in all subsequent equations. For example, we could have formed \(\mathbf{X}\big{(}X_{1}^{(i_{1})},X_{2}^{(i_{2})},X_{3},i_{1},i_{2}\in[2],f(x)=x ^{2}\big{)}\) in (15) as \[\begin{split}\Big{[}(X_{1}^{(2)})^{2},\ (X_{1}^{(1)})^{2},\ (X_{2}^{(2)})^{2},\ (X_{2}^{(1)})^{2},\ (X_{3})^{2},\ X_{1}^{(2)}X_{2}^{(1)},\ X_{1}^{(1)}X_{2}^{(1)},\ X_{1}^{(2 )}X_{2}^{(2)},X_{1}^{(1)}X_{2}^{(2)},\\ &\ X_{1}^{(1)}X_{3},\ X_{1}^{(2)}X_{3},X_{2}^{(1)}X_{3},\ X_{2}^{(2 )}X_{3}\Big{]}^{\intercal}.\end{split}\] However, we chose the order in (15), and will keep using that in the follow-up of Example 2. 
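To double-check the bookkeeping of Example 2 against the counting formula established in Lemma 1 below, the following sketch enumerates the distinct monomials of (14) for the parameters of Example 1 (\(K=3\), \(\beta=2\), \(v=2\), \(f(x)=x^{2}\)) and compares the count with the closed-form expression (16). The enumeration routine and its names are illustrative helpers; only the two formulas come from the text.

```python
# Enumerate the distinct monomials of the monomial vector (14) for Examples 1-2
# and compare the count with the closed-form expression (16) of Lemma 1 below.
from itertools import product
from math import comb

K, beta, v = 3, 2, 2
A, H = [1, 2], [3]          # adversarial and honest data owners (1-indexed)
D_f = {2}                   # degree set of f(x) = x^2

monomials = set()
for version in product(range(1, v + 1), repeat=len(A)):      # a version vector
    for exps in product(range(max(D_f) + 1), repeat=K):      # candidate exponents
        if sum(exps) not in D_f:
            continue
        key = []
        for idx, k in enumerate(A):
            if exps[k - 1] > 0:                               # factor (X_k^{(i)})^e
                key.append(("adv", k, version[idx], exps[k - 1]))
        for k in H:
            if exps[k - 1] > 0:                               # factor X_k^e
                key.append(("hon", k, exps[k - 1]))
        monomials.add(tuple(sorted(key)))

length_formula = sum(
    comb(beta, m) * comb(K - beta - 1 + e, e - m) * v ** m
    for e in D_f
    for m in range(min(e, beta) + 1)
)

print(len(monomials), length_formula)   # both evaluate to 13, as listed in (15)
```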
**Lemma 1**.: The length of the monomial vector \(\mathbf{X}\big{(}X_{r}^{(i)},X_{s},r\in\mathcal{A},s\in\mathcal{H},i\in[v]\big{)}\) in a \(\mathsf{Vers}(f,N,K,\beta,v,\text{Lagrange})\) system is

\[\mathsf{Len}(\mathbf{X})\coloneqq\sum_{e\in\mathcal{D}(f)}\sum_{m=0}^{\min(e,\beta)}\binom{\beta}{m}\binom{K-\beta-1+e}{e-m}v^{m}. \tag{16}\]

Proof.: For each \(e\in\mathcal{D}(f)\), we need to count the number of distinct monomials \(\prod_{k\in\mathcal{A}}(X_{k}^{(\mathbf{v}[k])})^{i_{k}}\prod_{k\in\mathcal{H}}(X_{k})^{j_{k}}\) such that \(\sum_{k\in\mathcal{A}}i_{k}+\sum_{k\in\mathcal{H}}j_{k}=e\) and \(\mathbf{v}\in\mathcal{V}\). We count as follows:

* We choose a subset \(\mathcal{M}\subseteq\mathcal{A}\), \(|\mathcal{M}|=m\), of adversarial messages to be in the monomial, so \(m\leq\beta\) and \(m\leq e\), thus \(0\leq m\leq\min(e,\beta)\). This gives the term \(\binom{\beta}{m}\) in (16).
* There are \(v^{m}\) different version values for the \(m\) adversarial messages in the monomial, which gives the term \(v^{m}\) in (16).
* We need to count the number of possible cases for the powers of the chosen \(m\) adversarial messages and the \(K-\beta\) honest messages. This is the number of solutions of \(\sum_{k\in\mathcal{M}}i_{k}+\sum_{k\in\mathcal{H}}j_{k}=e\), where \(1\leq i_{k}\) for \(k\in\mathcal{M}\), and \(0\leq j_{k}\) for \(k\in\mathcal{H}\). From combinatorics, we know that the number of such solutions is \(\binom{K-\beta-1+e}{e-m}\).

This completes the proof.

Now that the monomial vector is defined, we can express each element of the coefficient vector as a linear combination of the monomials in the monomial vector. We do so by defining the characteristic matrix.

**Definition 7** (characteristic matrix).: The _characteristic matrix_ of a \(\text{Vers}(f,N,K,\beta,v,\text{Lagrange})\) system, denoted by \(\mathbf{M}(\mathcal{H},\mathcal{A},v,f)\), describes the relation between the coefficient vector and the monomial vector, i.e.

\[\mathbf{U}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(i)},r\in\mathcal{A},i\in[v],f(.)\big{)}=\mathbf{M}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}\mathbf{X}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(i)},r\in\mathcal{A},i\in[v],f(.)\big{)}. \tag{17}\]

The dimension of the characteristic matrix is \(\big{(}v^{\beta}(d(K-1)+1)\big{)}\times\mathsf{Len}(\mathbf{X})\). It is worth noting that, similar to the coefficient vector, the characteristic matrix covers all the \(v^{\beta}\) adversarial behaviors.

**Example 3**.: For the Vers system introduced in Example 1, we can easily form \(\mathbf{M}\big{(}\mathcal{H}=\{3\},\mathcal{A}=\{1,2\},v=2,f(x)=x^{2}\big{)}\) by using (12), (13), and also (15) from Example 2. The resulting characteristic matrix is shown in (19).

We can decompose the characteristic matrix as

\[\mathbf{M}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}=\begin{bmatrix}\mathbf{M}_{\mathbf{v}_{1}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\\ \vdots\\ \mathbf{M}_{\mathbf{v}_{v^{\beta}}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\end{bmatrix}, \tag{18}\]

where \(\{\mathbf{v}_{1},\ldots,\mathbf{v}_{v^{\beta}}\}=\mathcal{V}\). The dimension of each sub-matrix \(\mathbf{M}_{\mathbf{v}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\), \(\mathbf{v}\in\mathcal{V}\), is \(\big{(}d(K-1)+1\big{)}\times\mathsf{Len}(\mathbf{X})\).
In fact, the sub-matrix \(\mathbf{M}_{\mathbf{v}}\) is dedicated to the coefficient sub-vector \(\mathbf{u}_{\mathbf{v}}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(\mathbf{v}[r])},r\in\mathcal{A},f(.)\big{)}\), and expresses its elements as linear combinations of the elements of the monomial vector. Consequently, the order of the version vectors that we had chosen in the coefficient vector, as discussed in Remark 5, determines the order of the sub-matrices in the characteristic matrix. We emphasize that this order is optional but fixed.

**Definition 8** (relation matrix).: The _relation matrix_ in a \(\text{Vers}(f,N,K,\beta,v,\text{Lagrange})\) system, which we denote by \(\mathbf{P}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}\), consists of a basis of the left null space of \(\mathbf{M}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}\), i.e.

\[\mathbf{P}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}\mathbf{M}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}=\mathbf{0}. \tag{20}\]

The dimension of the relation matrix is \(\big{(}v^{\beta}(d(K-1)+1)-\text{rank}(\mathbf{M})\big{)}\times\big{(}v^{\beta}(d(K-1)+1)\big{)}\). According to Definition 7, \(\mathbf{P}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}\mathbf{M}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}=\mathbf{0}\) is equivalent to

\[\mathbf{P}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}\mathbf{U}\big{(}X_{s},s\in\mathcal{H},X_{r}^{(i)},r\in\mathcal{A},i\in[v],f(.)\big{)}=\mathbf{0}. \tag{21}\]

This is why we have named \(\mathbf{P}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}\) the relation matrix: it reveals the linear relations among the elements of the coefficient vector. Similar to the characteristic matrix, we can decompose the relation matrix as

\[\mathbf{P}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}=\Big{[}\ \mathbf{P}_{\mathbf{v}_{1}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\ \big{|}\ \ldots\ \big{|}\ \mathbf{P}_{\mathbf{v}_{v^{\beta}}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\ \Big{]}, \tag{22}\]

where \(\{\mathbf{v}_{1},\ldots,\mathbf{v}_{v^{\beta}}\}=\mathcal{V}\). The dimension of each sub-matrix \(\mathbf{P}_{\mathbf{v}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\), \(\mathbf{v}\in\mathcal{V}\), is \(\big{(}v^{\beta}(d(K-1)+1)-\text{rank}(\mathbf{M})\big{)}\times(d(K-1)+1)\). Due to (18) and (20),

\[\mathbf{P}_{\mathbf{v}_{1}}\mathbf{M}_{\mathbf{v}_{1}}+\cdots+\mathbf{P}_{\mathbf{v}_{v^{\beta}}}\mathbf{M}_{\mathbf{v}_{v^{\beta}}}=\mathbf{0},\]

and due to (17),

\[\mathbf{P}_{\mathbf{v}_{1}}\mathbf{u}_{\mathbf{v}_{1}}+\cdots+\mathbf{P}_{\mathbf{v}_{v^{\beta}}}\mathbf{u}_{\mathbf{v}_{v^{\beta}}}=\mathbf{0}.\]

Now, we introduce _effective_ and _non-effective_ permutations of the relation matrix.

**Definition 9** (effective permutations of the relation matrix).: Suppose that

\[\mathbf{P}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}=\Big{[}\ \mathbf{P}_{\mathbf{v}_{1}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\ \big{|}\ \ldots\ \big{|}\ \mathbf{P}_{\mathbf{v}_{v^{\beta}}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\ \Big{]}\]

is the relation matrix of a \(\text{Vers}(f,N,K,\beta,v,\text{Lagrange})\) system.
A permutation \(\Pi:[v^{\beta}]\rightarrow[v^{\beta}]\) of the relation matrix is \(\mathbf{P}^{\Pi}\big{(}\mathcal{H},\mathcal{A},v,f\big{)}=\big{[}\,\mathbf{P}_{\mathbf{v}_{\Pi(1)}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\ \ \cdots\ \ \mathbf{P}_{\mathbf{v}_{\Pi(v^{\beta})}}\big{(}\mathcal{H},\mathcal{A},f\big{)}\,\big{]}\); it is effective if \(\mathbf{P}^{\Pi}\mathbf{M}\neq\mathbf{0}\), and non-effective if \(\mathbf{P}^{\Pi}\mathbf{M}=\mathbf{0}\).

The intuition behind this definition is the special structure of the characteristic matrix, which is also evident in (19). This special structure will become clear in the following example.

**Example 4**.: Recall that for the Vers system introduced in Example 1, we had set \(\mathbf{v}_{1}=[1,1,0]\), \(\mathbf{v}_{2}=[1,2,0]\), \(\mathbf{v}_{3}=[2,1,0]\), \(\mathbf{v}_{4}=[2,2,0]\), and \(\mathbf{P}\big{(}\mathcal{H}=\{3\},\mathcal{A}=\{1,2\},v=2,f(x)=x^{2}\big{)}=[\mathbf{P}_{\mathbf{v}_{1}}\ \mathbf{P}_{\mathbf{v}_{2}}\ \mathbf{P}_{\mathbf{v}_{3}}\ \mathbf{P}_{\mathbf{v}_{4}}]\). We show that in this Vers system, besides the identity, there are \(3\) non-effective permutations of \(\mathbf{P}\), say \(\Pi_{1},\Pi_{2},\) and \(\Pi_{3}\), as follows.
\[\mathbf{P}^{\Pi_{1}}=[\mathbf{P}_{\mathbf{v}_{4}}\ \mathbf{P}_{\mathbf{v}_{3}}\ \mathbf{P}_{\mathbf{v}_{2}}\ \mathbf{P}_{\mathbf{v}_{1}}] \tag{23}\]
\[\mathbf{P}^{\Pi_{2}}=[\mathbf{P}_{\mathbf{v}_{3}}\ \mathbf{P}_{\mathbf{v}_{4}}\ \mathbf{P}_{\mathbf{v}_{1}}\ \mathbf{P}_{\mathbf{v}_{2}}] \tag{24}\]
\[\mathbf{P}^{\Pi_{3}}=[\mathbf{P}_{\mathbf{v}_{2}}\ \mathbf{P}_{\mathbf{v}_{1}}\ \mathbf{P}_{\mathbf{v}_{4}}\ \mathbf{P}_{\mathbf{v}_{3}}] \tag{25}\]
In other words, \(\mathbf{P}^{\Pi_{1}}\mathbf{M}=\mathbf{P}^{\Pi_{2}}\mathbf{M}=\mathbf{P}^{\Pi_{3}}\mathbf{M}=\mathbf{0}\), where \(\mathbf{M}\) is given in (19). Let us consider \(\Pi_{1}\) and show that \(\mathbf{P}^{\Pi_{1}}\mathbf{M}=\mathbf{0}\). The arguments for the other two permutations are similar. We denote column \(i\), \(1\leq i\leq 13\), of \(\mathbf{M}\) by \(\mathbf{c}_{i}=[c_{i,1}^{\intercal}\ c_{i,2}^{\intercal}\ c_{i,3}^{\intercal}\ c_{i,4}^{\intercal}]^{\intercal}\), where the length of \(c_{i,1},c_{i,2},c_{i,3},c_{i,4}\) is \(5\). We know that \(\mathbf{P}\mathbf{c}_{i}=\mathbf{P}_{\mathbf{v}_{1}}c_{i,1}+\mathbf{P}_{\mathbf{v}_{2}}c_{i,2}+\mathbf{P}_{\mathbf{v}_{3}}c_{i,3}+\mathbf{P}_{\mathbf{v}_{4}}c_{i,4}=\mathbf{0}\). We consider all \(1\leq i\leq 13\) in the following.
* \(i=1,i=2\) A close look at the first two columns of \(\mathbf{M}\) in (19) reveals that \(c_{1,1}=c_{1,2}=c_{2,3}=c_{2,4}\coloneqq c_{\{1,2\}}^{*}\), and \(c_{1,3}=c_{1,4}=c_{2,1}=c_{2,2}=\mathbf{0}\).
Therefore \[\mathbf{P}\mathbf{c}_{1}=\mathbf{P}_{\mathbf{v}_{1}}c_{1,1}+\mathbf{P}_{ \mathbf{v}_{2}}c_{1,2}+\mathbf{P}_{\mathbf{v}_{3}}c_{1,3}+\mathbf{P}_{\mathbf{ v}_{4}}c_{1,4}=\mathbf{0},\] is equivalent to \[(\mathbf{P}_{\mathbf{v}_{1}}+\mathbf{P}_{\mathbf{v}_{2}})c_{\{1,2\}}^{*}= \mathbf{0},\] (26) and \[\mathbf{P}\mathbf{c}_{2}=\mathbf{P}_{\mathbf{v}_{1}}c_{2,1}+\mathbf{P}_{ \mathbf{v}_{2}}c_{2,2}+\mathbf{P}_{\mathbf{v}_{3}}c_{2,3}+\mathbf{P}_{ \mathbf{v}_{4}}c_{2,4}=\mathbf{0},\] is equivalent to \[(\mathbf{P}_{\mathbf{v}_{3}}+\mathbf{P}_{\mathbf{v}_{4}})c_{\{1,2\}}^{*}= \mathbf{0}.\] (27) Consequently, for the permuted relation matrix in (23), \[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{1}=\mathbf{P}_{\mathbf{v}_{4}}c_{1,1}+\mathbf{ P}_{\mathbf{v}_{3}}c_{1,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{1,3}+\mathbf{P}_{ \mathbf{v}_{1}}c_{1,4}=(\mathbf{P}_{\mathbf{v}_{4}}+\mathbf{P}_{\mathbf{v}_{3} })c_{\{1,2\}}^{*},\] and due to (27), we conclude that \(\mathbf{P}^{\Pi_{1}}\mathbf{c}_{1}=\mathbf{0}\). Similarly, \[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{2}=\mathbf{P}_{\mathbf{v}_{4}}c_{2,1}+\mathbf{ P}_{\mathbf{v}_{3}}c_{2,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{2,3}+\mathbf{P}_{\mathbf{ v}_{1}}c_{2,4}=(\mathbf{P}_{\mathbf{v}_{2}}+\mathbf{P}_{\mathbf{v}_{1}})c_{\{1,2\}}^{*},\] and due to (26), we conclude that \(\mathbf{P}^{\Pi_{1}}\mathbf{c}_{2}=\mathbf{0}\). * \(i=3,i=4\) According to \(\mathbf{M}\) in (19), \(c_{3,1}=c_{3,3}=c_{4,2}=c_{4,4}\coloneqq c_{\{3,4\}}^{*}\), and \(c_{3,2}=c_{3,4}=c_{4,1}=c_{4,3}=\mathbf{0}\). Therefore, \[\mathbf{P}\mathbf{c}_{3}=\mathbf{P}_{\mathbf{v}_{1}}c_{3,1}+\mathbf{P}_{ \mathbf{v}_{2}}c_{3,2}+\mathbf{P}_{\mathbf{v}_{3}}c_{3,3}+\mathbf{P}_{\mathbf{ v}_{4}}c_{3,4}=(\mathbf{P}_{\mathbf{v}_{1}}+\mathbf{P}_{\mathbf{v}_{3}})c_{\{3,4\}}^{*}= \mathbf{0},\] (28) \[\mathbf{P}\mathbf{c}_{4}=\mathbf{P}_{\mathbf{v}_{1}}c_{4,1}+\mathbf{P}_{\mathbf{v}_ {2}}c_{4,2}+\mathbf{P}_{\mathbf{v}_{3}}c_{4,3}+\mathbf{P}_{\mathbf{v}_{4}}c_{4,4 }=(\mathbf{P}_{\mathbf{v}_{2}}+\mathbf{P}_{\mathbf{v}_{4}})c_{\{3,4\}}^{*}= \mathbf{0}.\] (29) As a result, \[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{3}=\mathbf{P}_{\mathbf{v}_{4}}c_{3,1}+ \mathbf{P}_{\mathbf{v}_{3}}c_{3,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{3,3}+\mathbf{ P}_{\mathbf{v}_{1}}c_{3,4}=(\mathbf{P}_{\mathbf{v}_{4}}+\mathbf{P}_{\mathbf{v}_{2}} )c_{\{3,4\}}^{*},\] and due to (29), we conclude that \(\mathbf{P}^{\Pi_{1}}\mathbf{c}_{3}=\mathbf{0}\). Similarly, \[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{4}=\mathbf{P}_{\mathbf{v}_{4}}c_{4,1}+ \mathbf{P}_{\mathbf{v}_{3}}c_{4,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{4,3}+ \mathbf{P}_{\mathbf{v}_{1}}c_{4,4}=(\mathbf{P}_{\mathbf{v}_{3}}+\mathbf{P}_{ \mathbf{v}_{1}})c_{\{3,4\}}^{*},\] and due to (28), we conclude that \(\mathbf{P}^{\Pi_{1}}\mathbf{c}_{4}=\mathbf{0}\). * \(i=5\) Since \(c_{5,1}=c_{5,2}=c_{5,3}=c_{5,4}\coloneqq c_{\{5\}}^{*}\) in the fifth column of \(\mathbf{M}\) according to (19), \[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{5}=\mathbf{P}_{\mathbf{v}_{4}}c_ {5,1}+\mathbf{P}_{\mathbf{v}_{3}}c_{5,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{5,3}+ \mathbf{P}_{\mathbf{v}_{1}}c_{5,4} =(\mathbf{P}_{\mathbf{v}_{4}}+\mathbf{P}_{\mathbf{v}_{3}}+ \mathbf{P}_{\mathbf{v}_{2}}+\mathbf{P}_{\mathbf{v}_{1}})c_{\{5\}}^{*}\] \[=\mathbf{P}_{\mathbf{v}_{1}}c_{5,1}+\mathbf{P}_{\mathbf{v}_{2}}c_ {5,2}+\mathbf{P}_{\mathbf{v}_{3}}c_{5,3}+\mathbf{P}_{\mathbf{v}_{4}}c_{5,4}= \mathbf{P}\mathbf{c}_{5}=\mathbf{0}.\] * \(i=6,i=7,i=8,i=9\) According to \(\mathbf{M}\) in (19), \(c_{6,1}=c_{7,3}=c_{6,2}=c_{9,4}\coloneqq c_{\{6,7,8,9\}}^{*}\). 
Thus,
\[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{6}=\mathbf{P}_{\mathbf{v}_{4}}c_{6,1}+\mathbf{P}_{\mathbf{v}_{3}}c_{6,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{6,3}+\mathbf{P}_{\mathbf{v}_{1}}c_{6,4}=\mathbf{P}_{\mathbf{v}_{4}}c_{\{6,7,8,9\}}^{*}=\mathbf{P}\mathbf{c}_{9}=\mathbf{0},\]
\[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{7}=\mathbf{P}_{\mathbf{v}_{4}}c_{7,1}+\mathbf{P}_{\mathbf{v}_{3}}c_{7,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{7,3}+\mathbf{P}_{\mathbf{v}_{1}}c_{7,4}=\mathbf{P}_{\mathbf{v}_{2}}c_{\{6,7,8,9\}}^{*}=\mathbf{P}\mathbf{c}_{8}=\mathbf{0},\]
\[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{8}=\mathbf{P}_{\mathbf{v}_{4}}c_{8,1}+\mathbf{P}_{\mathbf{v}_{3}}c_{8,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{8,3}+\mathbf{P}_{\mathbf{v}_{1}}c_{8,4}=\mathbf{P}_{\mathbf{v}_{3}}c_{\{6,7,8,9\}}^{*}=\mathbf{P}\mathbf{c}_{7}=\mathbf{0},\]
\[\mathbf{P}^{\Pi_{1}}\mathbf{c}_{9}=\mathbf{P}_{\mathbf{v}_{4}}c_{9,1}+\mathbf{P}_{\mathbf{v}_{3}}c_{9,2}+\mathbf{P}_{\mathbf{v}_{2}}c_{9,3}+\mathbf{P}_{\mathbf{v}_{1}}c_{9,4}=\mathbf{P}_{\mathbf{v}_{1}}c_{\{6,7,8,9\}}^{*}=\mathbf{P}\mathbf{c}_{6}=\mathbf{0}.\]
* \(i=10,i=11\) This case is exactly like \(i=1,i=2\).
* \(i=12,i=13\) This case is exactly like \(i=3,i=4\).
We proved that in Example 1, \(\mathbf{P}^{\Pi_{1}}\) in (23) is a non-effective permutation of \(\mathbf{P}\), i.e., \(\mathbf{P}^{\Pi_{1}}\mathbf{M}=\mathbf{0}\).

**Remark 7**.: Any non-effective permutation \(\Pi:[v^{\beta}]\rightarrow[v^{\beta}]\) of the relation matrix, \(\mathbf{P}^{\Pi}\), is in the row span of the relation matrix \(\mathbf{P}\). Since the rows of \(\mathbf{P}\) form a basis of the left null space of \(\mathbf{M}\) by definition, \(\mathbf{P}^{\Pi}\mathbf{M}=\mathbf{0}\) means that \(\mathbf{P}^{\Pi}\) must be in the row span of \(\mathbf{P}\). In other words, any row of \(\mathbf{P}^{\Pi}\) is a linear combination of the rows of \(\mathbf{P}\).

**Lemma 2**.: In a Vers\((f,N,K,\beta,v,\text{Lagrange})\) system, there are \((v!)^{\beta}\) non-effective permutations of the relation matrix, out of all \((v^{\beta})!\) permutations.

We know that the relation matrix has \(v^{\beta}\) sub-matrices and hence there are \((v^{\beta})!\) possible permutations of the relation matrix, effective and non-effective. Moreover, each sub-matrix is associated with a version vector with \(\beta\) entries from \([v]\). For each of the \(\beta\) entries (one per adversarial data owner), the \(v\) version values can be relabeled in \(v!\) ways, which induces \((v!)^{\beta}\) permutations of the version vectors in total. This lemma states that only those \((v!)^{\beta}\) permutations are non-effective. The proof of this lemma is in Appendix B.

## VIII Fundamental Limit of \(\text{Vers}(f,N,K,\beta,v,\text{Lagrange})\)

In this section, we prove Theorem 1, and provide achievability and converse proofs.

### _Converse proof_

In this proof, we show that there exists a particular adversarial behavior for which \(t=v^{\beta}d(K-1)\) messages from \(t\) workers are not enough to find \(f(X_{k}),k\in\mathcal{H}\), correctly. In other words, we show that \(\{f(W_{n}),\text{tag}_{n},\ n\in\mathcal{T}\}\), where \(\mathcal{T}\subseteq[N]\) and \(|\mathcal{T}|=v^{\beta}d(K-1)\), leads to more than one possible value for at least one \(f(X_{k}),k\in\mathcal{H}\). Consider the adversarial behavior where the \(\beta\) adversarial data owners collude and distribute their messages to workers in \(\mathcal{T}\) such that for every \(\mathbf{v}\in\mathcal{V}\), there exist exactly \(d(K-1)\) workers in \(\mathcal{T}\) that receive adversarial messages whose versions are according to \(\mathbf{v}\).
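Before walking through the steps, a small numerical sketch (ours; the parameters \(d=2\), \(K=3\) are illustrative assumptions, not values from the paper) shows why each such group of \(d(K-1)\) workers is insufficient on its own: the composed polynomial has degree \(d(K-1)\), hence \(d(K-1)+1\) unknown coefficients, so a group's \(d(K-1)\) evaluations leave at least one degree of freedom in the corresponding coefficient sub-vector. The steps below make precise how this residual freedom, taken over all \(v^{\beta}\) groups, produces more than one consistent value for some \(f(X_{k})\).

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 2, 3                             # assumed toy parameters
deg = d * (K - 1)                       # degree of f(q(x)); deg + 1 unknown coefficients
alphas = rng.choice(np.arange(1, 100), size=deg, replace=False).astype(float)
Q = np.vander(alphas, deg + 1)          # d(K-1) x (d(K-1)+1) Vandermonde system of one group
print(np.linalg.matrix_rank(Q))         # 4 < 5: the group's equations do not pin down u_v
```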
For a set \(\mathcal{S}\subseteq\mathbb{F}\), and an integer \(D\in\mathbb{N}\), let \(\text{Van}_{\mathcal{S}}^{D}\) be a \(|\mathcal{S}|\times(D+1)\) Vandermonde matrix that has \(|\mathcal{S}|\) rows, and each row consists of powers \(0,1,\ldots,D\) of an element of \(\mathcal{S}\). For example, \[\text{Van}_{\{\alpha_{1},\alpha_{2}\}}^{3}=\begin{bmatrix}\alpha_{1}^{3}& \alpha_{1}^{2}&\alpha_{1}&1\\ \alpha_{2}^{3}&\alpha_{2}^{2}&\alpha_{2}&1\end{bmatrix}.\] **Step \(1\)**. Suppose that an honest data owner \(k\in\mathcal{H}\) receives \(y_{n}=f(W_{n})\), and \(\text{tag}_{n}\), from worker \(n\in\mathcal{T}\). The honest data owner uses tags from workers in \(\mathcal{T}\) to make up \(v^{\beta}\) disjoint sets \(\mathcal{T}_{1},\ldots,\mathcal{T}_{v^{\beta}}\subseteq\mathcal{T}\), such that each set contains workers whose tags are equal, but different from tags of workers in other sets. According to the adversarial behavior that we consider in this proof, \(N_{i}\coloneqq|\mathcal{T}_{i}|=d(K-1)\), \(i\in[v^{\beta}]\). The properties of the tag function ensure the honest data owner that the \(\beta\) adversarial messages used by workers \(n\in\mathcal{T}_{i}\) and \(n^{\prime}\in\mathcal{T}_{j}\) from different sets are different in version of at least one message. Recall that the honest data owner does not know the adversarial data owners. Moreover, the honest data owner cannot know which workers have previously received the same message from the adversarial data owners, and thus have used the same messages in their computations, or, which workers have previously received different messages from the adversarial data owners, and thus have used different messages in their computations. **Step \(2\)**. Let \(\mathbf{y}_{i}=[y_{j}]_{j\in\mathcal{T}_{i}}\), and \(\mathbf{Q}_{i}=\text{Van}_{\{\alpha_{n},n\in\mathcal{T}_{i}\}}^{d(K-1)}\), for \(i\in[v^{\beta}]\). There is an underlying mapping \(\phi:[v^{\beta}]\rightarrow[v^{\beta}]\), that indicates the adversarial messages that workers in \(\mathcal{T}_{1},\ldots,\mathcal{T}_{v^{\beta}}\) have previously received from adversarial data owners. In particular, \(\phi(i)=j\) means \(\mathbf{y}_{i}=\mathbf{Q}_{i}\mathbf{u}_{\mathbf{v}_{j}}\), where \(i,j\in[v^{\beta}]\) and \(u_{\mathbf{v}_{j}}\), \(\mathbf{v}_{j}\in\mathcal{V}\) is a coefficient sub-vector defined in (9). In other words, \(\mathbf{y}_{i}=\mathbf{Q}_{i}\mathbf{u}_{\mathbf{v}_{j}}\) means that workers \(n\in\mathcal{T}_{i}\) have used adversarial messages whose versions are in \(\mathbf{v}_{j}\). Without loss of generality, assume that \(\phi\) is the identity permutation, i.e. \(\mathbf{y}_{i}=\mathbf{Q}_{i}\mathbf{u}_{\mathbf{v}_{i}}\), for all \(i\in[v^{\beta}]\). The honest data owner \(k\) considers a coefficient vector \(\mathbf{U}^{\prime}\) as unknown, and tries to solve equations to find \(\mathbf{U}^{\prime}\). However, the honest data owner does not know \(\phi\), so it needs to consider all different possible cases for \(\phi\), and make sure that all cases result in a single \(f(X_{k})\). Suppose that the honest data owner considers an effective permutation \(\Pi:[v^{\beta}]\rightarrow[v^{\beta}]\), and assumes that \(\mathbf{y}_{i}=\mathbf{Q}_{i}\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}}\). Therefore, \[\mathbf{Q}_{i}(\mathbf{u}_{\mathbf{v}_{i}}-\mathbf{u}^{\prime}_{\mathbf{v}_{ \Pi(i)}})=\mathbf{0},\quad i\in[v^{\beta}]. \tag{30}\] **Step \(3\)**. 
Recall that \(f(X_{k})=f(q^{(\mathbf{v})}(\omega_{k}))\), \(k\in\mathcal{H}\), for any \(\mathbf{v}\in\mathcal{V}\), so
\[\left[f(X_{k})\right]_{k\in\mathcal{H}}=\big{(}\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\big{)}\mathbf{u}_{\mathbf{v}_{i}},\quad i\in[v^{\beta}]. \tag{31}\]
Consequently, all values of \(\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\mathbf{u}_{\mathbf{v}_{i}}\) for \(i\in[v^{\beta}]\) are equal. Similarly, all values of \(\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}}\), for \(i\in[v^{\beta}]\), are equal as well. We will show that any effective permutation \(\Pi\) results in different values for \(\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\mathbf{u}_{\mathbf{v}_{i}}\) and \(\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}}\), \(i\in[v^{\beta}]\). If there exists one \(i^{*}\in[v^{\beta}]\) such that
\[\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}(\mathbf{u}_{\mathbf{v}_{i^{*}}}-\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i^{*})}})=\mathbf{0}, \tag{32}\]
then we can conclude that
\[\left[f(X_{k})\right]_{k\in\mathcal{H}}=\big{(}\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\big{)}\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}},\quad i\in[v^{\beta}], \tag{33}\]
because
\[\left[f(X_{k})\right]_{k\in\mathcal{H}}=\big{(}\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\big{)}\mathbf{u}_{\mathbf{v}_{i^{*}}} \tag{34}\]
and
\[\big{(}\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\big{)}\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}}=\big{(}\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\big{)}\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i^{*})}},\quad i\in[v^{\beta}]. \tag{35}\]
In other words, if there exists such an \(i^{*}\), permutation \(\Pi\) results in the correct answer for \(f(X_{k}),k\in\mathcal{H}\). By contradiction, we suppose that there exists such an \(i^{*}\). Therefore,
\[\mathbf{Q}_{i}(\mathbf{u}_{\mathbf{v}_{i}}-\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}})=\mathbf{0},\quad i\in[v^{\beta}], \tag{36}\]
\[\big{(}\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\big{)}(\mathbf{u}_{\mathbf{v}_{i}}-\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}})=\mathbf{0},\quad i\in[v^{\beta}].\]
The matrix \(\mathbf{Q}_{i}\) only contains powers of \(\alpha_{n},n\in\mathcal{T}_{i}\), and \(\text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\) contains powers of \(\omega_{k},k\in\mathcal{H}\), and these elements are distinct and chosen uniformly at random from \(\mathbb{F}\). Therefore, the matrix
\[\begin{bmatrix}\mathbf{Q}_{i}\\ \text{Van}_{\{\omega_{k},k\in\mathcal{H}\}}^{d(K-1)}\end{bmatrix}\]
is MDS, its dimension is \((d(K-1)+h)\times(d(K-1)+1)\), and its rank is \(d(K-1)+1\). The length of \(\mathbf{u}_{\mathbf{v}_{i}}-\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}}\) is \(d(K-1)+1\), therefore any \(d(K-1)+1\) equations of (36) result in
\[\mathbf{u}_{\mathbf{v}_{i}}=\mathbf{u}^{\prime}_{\mathbf{v}_{\Pi(i)}},\quad i\in[v^{\beta}]. \tag{37}\]
**Step \(4\)**. We know that \(\mathbf{PU}=0\) and \(\mathbf{PU}^{\prime}=0\), thus,
\[\mathbf{P}_{\mathbf{v}_{1}}\mathbf{u}_{\mathbf{v}_{1}}+\cdots+\mathbf{P}_{\mathbf{v}_{v^{\beta}}}\mathbf{u}_{\mathbf{v}_{v^{\beta}}}=\mathbf{0}, \tag{38}\]
\[\mathbf{P}_{\mathbf{v}_{\Pi(1)}}\mathbf{u}_{\mathbf{v}_{\Pi(1)}}^{\prime}+\cdots+\mathbf{P}_{\mathbf{v}_{\Pi(v^{\beta})}}\mathbf{u}_{\mathbf{v}_{\Pi(v^{\beta})}}^{\prime}=\mathbf{0}.
\tag{39}\] We substitute (37) in (39), \[\mathbf{P}_{\mathbf{v}_{\Pi(1)}}\mathbf{u}_{\mathbf{v}_{1}}+\cdots+ \mathbf{P}_{\mathbf{v}_{\Pi(\mathbf{v}^{\beta})}}\mathbf{u}_{\mathbf{v}^{ \beta}}=\mathbf{0}, \tag{40}\] Recall that \(\Pi\) is an effective permutation, so according to definition, \([\mathbf{P}_{\mathbf{v}_{\Pi(1)}}\ \ldots\ \mathbf{P}_{\mathbf{v}_{\Pi(\mathbf{v}^{ \beta})}}]\) is not in row span of \(\mathbf{P}\). Therefore, equation (40) exerts additional constraint on \(\mathbf{U}\) and consequently all messages of data owners. However, messages of data owners, honest and adversarial, can have any value and any constraint on them means contradiction. This concludes the converse proof. ### _Achievability Proof_ In the achievability proof, we suppose that the honest data owner \(k\in\mathcal{H}\) receives \(t^{*}=v^{\beta}d(K-1)+1\) messages from \(t^{*}\) workers. The honest data owner uses the received tags to make up \(v^{\beta}\) disjoint sets of workers, such that each set contains workers whose tags are equal but different from tags of workers in other sets. Since \(t^{*}>v^{\beta}d(K-1)\), one of these sets definitely contains at least \(d(K-1)+1\) workers, which we call \(\mathcal{T}^{*}\). There exists \(i^{*}\in[v^{\beta}]\) such that \[\big{(}\text{Van}_{\{\alpha_{n},n\in\mathcal{T}^{*}\}}^{d(K-1)} \big{)}\mathbf{u}_{\mathbf{v}_{i^{*}}}=[y_{j}]_{j\in\mathcal{T}^{*}}. \tag{41}\] This equation determines \(\mathbf{u}_{\mathbf{v}_{i^{*}}}\) uniquely. Then, the honest data owner can find \(f(X_{k})=\big{(}\text{Van}_{\omega_{k}}^{d(K-1)}\big{)}\mathbf{u}_{\mathbf{v} _{i^{*}}}\). This completes the proof. One of the assumptions in Theorem 1 is that workers are honest and send the correct values of \(f(W_{n}),\text{tag}_{n},\ n\in[N]\) to data owners. Suppose that \(a\in\mathbb{N},a\leq N\) workers are malicious and may send incorrect \(f(W_{n})\) or \(\text{tag}_{n}\) back to data owners. Incorrect tag values are detrimental, because they mislead data owners in partitioning. Following the notations in converse proof, assume that \(\mathbf{y}_{i}=\mathbf{Q}_{i}\mathbf{u}_{\mathbf{v}_{i}}+\mathbf{e}_{i}\) is the truth, and \(\mathbf{y}_{i}=\mathbf{Q}_{i}\mathbf{u}_{\Pi(\mathbf{v}_{i})}^{\prime}+ \mathbf{e}_{i}^{\prime}\) is an assumption that an honest data owner makes because of not knowing the truth, and \(\mathbf{e}_{i},\mathbf{e}_{i}^{\prime}\) are error vectors. Therefore, in the presence of adversarial workers, equation (30) does not hold anymore, and the approach used in the previous converse proof can not be used here. However, we know from coding theory that results from \(t=v^{\beta}d(K-1)+1+2a\) workers are enough to decode \(f(X_{k}),k\in\mathcal{H}\) correctly. ## Appendix A Proof of Theorem 2 In order to show that a tag function exists, we consider all \(J\) functions from \(\mathbb{U}^{K}\) to \(\mathbb{U}\) to be equiprobable and calculate the average probability of the event that two arbitrary workers produce the same tag. Consider two workers \(n_{1},n_{2}\in[N]\) that receive different \(X_{\mathcal{A},n_{1}}\) and \(X_{\mathcal{A},n_{2}}\) from adversarial data owners, and \(X_{\mathcal{H}}\) from honest data owners. We show that the average probability of \(\mathtt{tag}_{n_{1}}=\mathtt{tag}_{n_{2}}\), taken over all \(J\) functions, and the messages of honest data owners, is very small. 
Then, we conclude that there exists a function for which the average probability of \(\mathtt{tag}_{n_{1}}=\mathtt{tag}_{n_{2}}\), taken over the messages of honest data owners, is very small, which means a tag function exists.

Any function \(J\) from \(\mathbb{U}^{K}\) to \(\mathbb{U}\), i.e., from a large space to a small space, would map some different inputs to the same output. For a function \(J\), and two adversarial messages \(X_{\mathcal{A},n_{1}}\) and \(X_{\mathcal{A},n_{2}}\), we define a function \(g\) such that \(\mathcal{S}\coloneqq g(J,X_{\mathcal{A},n_{1}},X_{\mathcal{A},n_{2}})\subseteq\mathbb{U}^{h}\) is the set such that
\[X_{\mathcal{H}}\in\mathcal{S}:J(X_{\mathcal{H}},X_{\mathcal{A},n_{1}})=J(X_{\mathcal{H}},X_{\mathcal{A},n_{2}}),\]
\[X_{\mathcal{H}}\in\mathbb{U}^{h}\setminus\mathcal{S}:J(X_{\mathcal{H}},X_{\mathcal{A},n_{1}})\neq J(X_{\mathcal{H}},X_{\mathcal{A},n_{2}}).\]
Since the data of honest data owners are uniformly and independently chosen from \(\mathbb{U}^{h}\), and are independent of \(X_{\mathcal{A},n_{1}}\) and \(X_{\mathcal{A},n_{2}}\),
\[\text{Pr}\big{(}J(X_{\mathcal{H}},X_{\mathcal{A},n_{1}})=J(X_{\mathcal{H}},X_{\mathcal{A},n_{2}})\big{)}=\frac{|\mathcal{S}|}{|\mathbb{U}|^{h}}, \tag{42}\]
where \(\mathcal{S}=g(J,X_{\mathcal{A},n_{1}},X_{\mathcal{A},n_{2}})\subseteq\mathbb{U}^{h}\). Since all functions \(J\) are equiprobable, i.e., \(J\) is uniformly distributed over the \(|\mathbb{U}|^{|\mathbb{U}|^{K}}\) functions from \(\mathbb{U}^{K}\) to \(\mathbb{U}\),
\[\sum_{J:g(J,X_{\mathcal{A},n_{1}},X_{\mathcal{A},n_{2}})=\mathcal{S}}\text{Pr}(J)=(1-\frac{1}{|\mathbb{U}|})^{(|\mathbb{U}|^{h}-|\mathcal{S}|)}(\frac{1}{|\mathbb{U}|})^{|\mathcal{S}|}. \tag{43}\]
Using (42) and (43),
\[\begin{split}\text{Pr}(\mathtt{tag}_{n_{1}}=\mathtt{tag}_{n_{2}})&=\sum_{\mathcal{S}}\sum_{J:g(J,X_{\mathcal{A},n_{1}},X_{\mathcal{A},n_{2}})=\mathcal{S}}\text{Pr}(J)\,\text{Pr}\big{(}J(X_{\mathcal{H}},X_{\mathcal{A},n_{1}})=J(X_{\mathcal{H}},X_{\mathcal{A},n_{2}})\big{)}\\ &=\sum_{\mathcal{S}}(1-\frac{1}{|\mathbb{U}|})^{(|\mathbb{U}|^{h}-|\mathcal{S}|)}(\frac{1}{|\mathbb{U}|})^{|\mathcal{S}|}\frac{|\mathcal{S}|}{|\mathbb{U}|^{h}}\\ &=\sum_{s=1}^{|\mathbb{U}|^{h}}\binom{|\mathbb{U}|^{h}}{s}(1-\frac{1}{|\mathbb{U}|})^{(|\mathbb{U}|^{h}-s)}(\frac{1}{|\mathbb{U}|})^{s}\frac{s}{|\mathbb{U}|^{h}}\\ &=\sum_{s=1}^{|\mathbb{U}|^{h}}\binom{|\mathbb{U}|^{h}-1}{s-1}(1-\frac{1}{|\mathbb{U}|})^{(|\mathbb{U}|^{h}-s)}(\frac{1}{|\mathbb{U}|})^{s}=\frac{1}{|\mathbb{U}|}.\end{split} \tag{44}\]
If \(\mathbb{U}\) is large enough that \(\frac{1}{\epsilon}\leq|\mathbb{U}|\), then \(\text{Pr}(\mathtt{tag}_{n_{1}}=\mathtt{tag}_{n_{2}})\leq\epsilon\), and the proof is complete.

## Appendix B Proof of Lemma 2

In this appendix, we prove that there are \((v!)^{\beta}\) non-effective permutations of the relation matrix. In Section VII, we examined an example and proved the non-effectiveness of a permutation of its relation matrix. Understanding that example is helpful for understanding the proof in this section. We show that all permutations of the relation matrix of the form \(\Pi=\pi_{1}\times\pi_{2}\times\cdots\times\pi_{\beta}\) are non-effective, i.e., \(\mathbf{P}^{\Pi}\mathbf{M}=\mathbf{0}\), where \(\pi_{i}:[v]\rightarrow[v]\), \(i\in[\beta]\), is a permutation on \([v]\). We parse the proof into the following steps.
_Step 1_ Consider column \(i\in[L]\), \(\mathbf{c}_{i}=[c_{\mathbf{v}_{1},i}^{\intercal}\ \ldots\ c_{\mathbf{v}_{\varnothing},i}^{\intercal}]^{\intercal}\) of \(\mathbf{M}\), where the length of each so called _sub-column_\(c_{\mathbf{v}_{i}},i\in[v^{\beta}]\) is \(d(K-1)+1\), \(\{\mathbf{v}_{1},\ldots,\mathbf{v}_{v^{\beta}}\}=\mathcal{V}\), and \(L\) is the length of monomial vector \(\mathbf{X}\). The sub-column \(c_{\mathbf{v},i}\), \(\mathbf{v}\in\mathcal{V},i\in[L]\), contains the coefficients of monomial \(\mathbf{X}[i]\) in coefficient sub-vector \(\mathbf{u}_{\mathbf{v}}\). If coefficient sub-vector \(\mathbf{u}_{\mathbf{v}}\) does not have the monomial \(\mathbf{X}[i]\), then \(c_{\mathbf{v},i}=\mathbf{0}\). For two coefficient sub-vectors \(\mathbf{u}_{\mathbf{v}}\) and \(\mathbf{u}_{\mathbf{v}^{\prime}}\) that both have \(\mathbf{X}[i]\), \(c_{\mathbf{v},i}=c_{\mathbf{v}^{\prime},i}\coloneqq c_{i}^{*}\), because these coefficient sub-vectors only differ in adversarial messages, and not the constant coefficients. Therefore, the nonzero sub-columns in a column of \(\mathbf{M}\) are the same. This is evident in \(\mathbf{M}\) of Example 1 in (19). We define two neighbour columns of \(\mathbf{M}\) as following. Consider two columns \(i,i^{\prime}\in[L]\), where the adversarial and honest messages in monomials \(\mathbf{X}[i]\) and \(\mathbf{X}[i^{\prime}]\) are the same, but the versions of adversarial messages in them are different. For example, \(i=6,i^{\prime}=7\) in (19) correspond to monomials \(X_{1}^{(1)}X_{2}^{(1)}\) and \(X_{1}^{(2)}X_{2}^{(1)}\), which differ in the version of \(X_{1}\), but both are made of the same messages \(X_{1}\) and \(X_{2}\). For two neighbour columns, \(c_{i}^{*}=c_{i^{\prime}}^{*}\), i.e. the nonzero sub-columns in two neighbour columns are the same. For example, the nonzero sub-columns of columns \(6\) and \(7\) in (19) are the same. _Step 2_ Let \(\mathcal{V}_{\mathbf{c}_{i}}\), \(i\in[L]\), be the set of version vectors of coefficient sub-vectors that have the monomial \(\mathbf{X}[i]\), or in other words, version vectors that correspond to nonzero sub-columns of \(\mathbf{c}_{i}\). For instance, in Example 1, using (13) and (19), we find that \(\mathcal{V}_{\mathbf{c}_{1}}=\{[1,1,0],[1,2,0]\}\), \(\mathcal{V}_{\mathbf{c}_{2}}=\{[2,1,0],[2,2,0]\}\), \(\mathcal{V}_{\mathbf{c}_{3}}=\{[1,1,0],[2,1,0]\}\), and so forth. From the previous step, we know that \(c_{\mathbf{v},i}=c_{i}^{*}\) for \(\mathbf{v}\in\mathcal{V}_{\mathbf{c}_{i}}\). For each column \(\mathbf{c}_{i}\) of \(\mathbf{M}\), there are two possible cases. * **Case 1**. The monomial \(\mathbf{X}[i]\) is comprised of honest messages only, e.g. \((X_{3})^{2}\). In this case, all sub-columns of \(\mathbf{c}_{i}\) are equal and nonzero, i.e. \(c_{\mathbf{v}_{1},i}=\cdots=c_{\mathbf{v}_{v^{\beta}},i}\coloneqq c_{i}^{*}\), and \(\mathcal{V}_{\mathbf{c}_{i}}=\mathcal{V}\). In Example 1, \(\mathbf{c}_{5}\) of \(\mathbf{M}\) in (19) is of this kind, as \[c_{\mathbf{v}_{1},5}=c_{\mathbf{v}_{2},5}=c_{\mathbf{v}_{3},5}=c_{\mathbf{v}_ {4},5}=\begin{bmatrix}\frac{1}{4}\\ -\frac{3}{2}\\ \frac{13}{4}\\ -3\\ 1\end{bmatrix}\] (45) * **Case 2**. The monomial \(\mathbf{X}[i]\) contains adversarial messages, e.g. \(X[i]=(X_{1}^{(1)})^{2}\). In this case, version vectors \(\mathbf{v}\in\mathcal{V}_{\mathbf{c}_{i}}\) have the same version value for the adversarial messages in \(X[i]\). 
For example, \(\mathcal{V}_{\mathbf{c}_{1}}=\{[1,1,0],[1,2,0]\}\), and version vectors in \(\mathcal{V}_{\mathbf{c}_{1}}\) indicate version \((1)\) for \(X_{1}\) because \(X[i]=(X_{1}^{(1)})^{2}\). As another example, for \(X[11]=X_{1}^{(2)}X_{3}\) in (19), \(\mathcal{V}_{\mathbf{c}_{11}}=\{[2,1,0],[2,2,0]\}\), and version vectors in \(\mathcal{V}_{\mathbf{c}_{11}}\) indicate version \((2)\) for \(X_{1}\). _Step 3_ We know \(\mathbf{P}\mathbf{c}_{i}=\mathbf{0}\), \(i\in[L]\). Recall that \(\mathbf{P}=[\mathbf{P}_{\mathbf{v}_{1}}\dots\mathbf{P}_{\mathbf{v}_{v^{\beta}}}]\), where \(\{\mathbf{v}_{1},\dots,\mathbf{v}_{v^{\beta}}\}=\mathcal{V}\). Therefore, \[\mathbf{P}\mathbf{c}_{i}=\sum_{\mathbf{v}\in\mathcal{V}_{\mathbf{c}_{i}}} \mathbf{P}_{\mathbf{v}}c_{\mathbf{v},i}=\bigg{(}\sum_{\mathbf{v}\in\mathcal{V }_{\mathbf{c}_{i}}}\mathbf{P}_{\mathbf{v}}\bigg{)}c_{i}^{*}=\mathbf{0} \tag{46}\] Consider a permutation of form \(\Pi=\pi_{1}\times\pi_{2}\times\dots\times\pi_{\beta}\), where \(\pi_{i}:[v]\rightarrow[v]\), \(i\in[\beta]\). Assume that this permutation is as following. * For a particular adversarial message \(X_{k},k\in\mathcal{A}\), and two particular version values \(m,m^{\prime}\in[v]\), \[\pi_{k}[m] =m^{\prime},\] \[\pi_{k}[m^{\prime}] =m,\] \[\pi_{k}[l] =l,l\neq m,m^{\prime}.\] * For all \(j\in\mathcal{A},j\neq k\), \[\pi_{j}[l]=l,\quad l\in[v].\] It suffices to prove the non-effectiveness of this permutation, because any other permutation is built from units of permutations like this. For each column \(\mathbf{c}_{i}\), there are two possible cases following the cases in the previous step. * **Case 1**. If \(\mathcal{V}_{\mathbf{c}_{i}}=\mathcal{V}\), then \[\mathbf{P}^{\Pi}\mathbf{c}_{i}=(\sum_{\mathbf{v}\in\mathcal{V}}\mathbf{P}_{ \mathbf{v}})c_{i}^{*}=\mathbf{P}\mathbf{c}_{i}=\mathbf{0}.\] (47) * **Case 2**. Assume that monomial \(\mathbf{X}[i]\) contains version \(m\) of the adversarial message \(k\). There is a monomial \(\mathbf{X}[i^{\prime}]\), \(i^{\prime}\in[L]\) that differs from \(\mathbf{X}[i]\) only in the version of \(X_{k}\), which is \(m^{\prime}\) in \(\mathbf{X}[i^{\prime}]\). Clearly, \[\mathbf{P}\mathbf{c}_{i^{\prime}}=\sum_{\mathbf{v}\in\mathcal{V}_{\mathbf{c}_ {i^{\prime}}}}\mathbf{P}_{\mathbf{v}}c_{\mathbf{v},i^{\prime}}=\bigg{(}\sum_{ \mathbf{v}\in\mathcal{V}_{\mathbf{c}_{i^{\prime}}}}\mathbf{P}_{\mathbf{v}} \bigg{)}c_{i^{\prime}}^{*}=\mathbf{0}.\] (48) It is easy to verify that for each \(\mathbf{v}^{\prime}\in\mathcal{V}_{\mathbf{c}_{i^{\prime}}}\), there is a \(\mathbf{v}\in\mathcal{V}_{\mathbf{c}_{i}}\), where \(\mathbf{v}^{\prime}[k]=m^{\prime}\) and \(\mathbf{v}[k]=m\). From the definition in Step 1, we know that columns \(i\) and \(i^{\prime}\) are neighbours and thus, \(c_{i}^{*}=c_{i^{\prime}}^{*}\). Therefore, \[\mathbf{P}^{\Pi}\mathbf{c}_{i}=\sum_{\begin{subarray}{c}\mathbf{v}^{\prime} \in\mathcal{V}_{\mathbf{c}_{i^{\prime}}}\\ \mathbf{v}\in\mathcal{V}_{\mathbf{c}_{i}}\end{subarray}}\mathbf{P}_{\mathbf{v} ^{\prime}}c_{\mathbf{v},i}=\bigg{(}\sum_{\mathbf{v}\in\mathcal{V}_{\mathbf{c}_ {i^{\prime}}}}\mathbf{P}_{\mathbf{v}}\bigg{)}c_{i}^{*}=\bigg{(}\sum_{\mathbf{v }\in\mathcal{V}_{\mathbf{c}_{i^{\prime}}}}\mathbf{P}_{\mathbf{v}}\bigg{)}c_{i^ {\prime}}^{*}=\mathbf{0},\] (49) where the last equality is due to (48). In summary, we proved \(\mathbf{P}^{\Pi}\mathbf{c}_{i}=\mathbf{0}\) using \(\mathbf{P}\mathbf{c}_{i^{\prime}}=\mathbf{0}\).
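The following sketch (ours) checks Lemma 2 numerically on a toy stand-in for the characteristic matrix of Example 1. It reproduces only the support pattern and the shared "neighbour" sub-columns described in Steps 1 and 2, with random entries in place of the actual Lagrange coefficients, so the matrix below is an assumption rather than the matrix in (19). The relation matrix is taken as a basis of the left null space (Definition 8), and all \(4!\) block permutations are tested; the \((v!)^{\beta}=4\) permutations induced by relabeling versions, i.e., the identity together with \(\Pi_{1},\Pi_{2},\Pi_{3}\) of Example 4, should be reported as non-effective.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

# Version vectors in order: v1=[1,1], v2=[1,2], v3=[2,1], v4=[2,2] (blocks 0..3).
# For each of the 13 columns, list the blocks in which its sub-column is nonzero.
supports = [
    {0, 1}, {2, 3},        # (X1^(1))^2, (X1^(2))^2
    {0, 2}, {1, 3},        # (X2^(1))^2, (X2^(2))^2
    {0, 1, 2, 3},          # (X3)^2
    {0}, {2}, {1}, {3},    # X1^(.) X2^(.)
    {0, 1}, {2, 3},        # X1^(.) X3
    {0, 2}, {1, 3},        # X2^(.) X3
]
# Neighbour columns (same messages, different versions) share the same base sub-column.
groups = [(0, 1), (2, 3), (4,), (5, 6, 7, 8), (9, 10), (11, 12)]
base = {}
for g in groups:
    b = rng.standard_normal(5)          # random stand-in for the shared sub-column c_i^*
    for i in g:
        base[i] = b

M = np.zeros((4 * 5, 13))
for i, supp in enumerate(supports):
    for j in supp:
        M[5 * j:5 * (j + 1), i] = base[i]

# Relation matrix P: rows form a basis of the left null space of M (Definition 8).
U, S, _ = np.linalg.svd(M)
rank = int(np.sum(S > 1e-10))
P = U[:, rank:].T

# Test every block permutation of P (Definition 9).
blocks = [P[:, 5 * j:5 * (j + 1)] for j in range(4)]
non_effective = [perm for perm in permutations(range(4))
                 if np.linalg.norm(np.hstack([blocks[p] for p in perm]) @ M) < 1e-8]
print(len(non_effective), non_effective)
# Should report 4: (0,1,2,3), (1,0,3,2), (2,3,0,1), (3,2,1,0), matching (v!)^beta = 4.
```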
2308.13326
ExD: Explainable Deletion
This paper focuses on a critical yet often overlooked aspect of data in digital systems and services: deletion. Through a review of existing literature we highlight the challenges that users face when attempting to delete data from systems and services, the lack of transparency in how such requests are handled or processed, and the lack of clear assurance that the data has been deleted. We highlight that this not only impacts users' agency over their data but also poses issues with regard to compliance with fundamental legal rights such as the right to be forgotten. We propose a new paradigm, explainable deletion, to improve users' agency and control over their data and enable systems to deliver effective assurance, transparency and compliance. We discuss the properties required of such explanations and their relevance and benefit for various individuals and groups involved or having an interest in data deletion processes and implications. We discuss various design implications pertaining to explainable deletion and present a research agenda for the community.
Kopo M. Ramokapane, Awais Rashid
2023-08-25T11:59:37Z
http://arxiv.org/abs/2308.13326v1
# ExD: Explainable Deletion ###### Abstract This paper focuses on a critical yet often overlooked aspect of data in digital systems and services--_deletion_. Through a review of existing literature we highlight the challenges that user face when attempting to delete data from systems and services, the lack of transparency in how such requests are handled or processed and the lack of clear assurance that the data has been deleted. We highlight that this not only impacts users' agency over their data but also poses issues with regards to compliance with fundamental legal rights such as the _right to be forgotten_. We propose a new paradigm - _explainable deletion_ - to improve users' agency and control over their data and enable systems to deliver effective assurance, transparency and compliance. We discuss the properties required of such explanations and their relevance and benefit for various individuals and groups involved or having an interest in data deletion processes and implications. We discuss various design implications pertaining to explainable deletion and present a research agenda for the community. data deletion, explainability, privacy, usable security, user agency + Footnote †: _NSPW ’23, September 18–21, 2023, Segovia, Spain_ © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN xxxx.. popularity. Explanations are seen as a means to achieve transparency and accountability. The AI literature posits that explanations enable users (or humans) to understand and interpret the system, providing insights into the rationale behind certain decisions and predicting their consequences. This perspective adopts a goal-oriented approach, focusing on the intended achievements of explanations. Consequently, the main goal of an explanation is to attain interpretability and comprehension of a system and its decision-making processes. This suggests that explanations should include pertinent information that is easy to understand. Considering these arguments, we propose a novel paradigm in security known as _Explainable Deletion_ (ExD). The core objective of explainable deletion is to enhance transparency and accountability around data deletion and facilitate user understanding of data deletion processes. This approach empowers users to make informed decisions regarding systems and how they handle data deletion. Current approaches to deletion often leave users with limited or no understanding of the actual consequences of their actions [54, 59], the extent or completeness of the deletion process [40, 54, 60, 63], and the potential risks of system failures in effectively removing their data [63, 66, 87]. By addressing the need for explainability in data deletion, we aim to contribute to advancing security practices, and fostering user trust in technology systems. Prior works [54, 59, 60] show that aspects of deletion are often missing yet users desire to know some of them. Moreover, Liu et al. [45] and Habib et al. [29] showed that when users are given extra bit of information on deletion their privacy-related responses change. By providing explanations, users' mental models can be enhanced, the gap between the "actual data deletion process" and the "perceived deletion process" can be bridged. For service providers, the realization of explainable deletion holds significant potential in fostering trust between them and their users. Furthermore, it facilitates accountability in data deletion practices vis-a-vis regulatory requirements. 
By shedding light on the various aspects of data deletion, service providers can cultivate a sense of trust and reliability in their users, assuring them that their data will be appropriately managed and safeguarded. While our work focuses on data deletion, we also acknowledge that the principle of explainability holds significant importance for other facets of the data cycle. The tenet can be applied to other data management aspects such as data storage, portability, archiving, de-identification, and accessibility. Data deletion should not be perceived as an isolated process but rather intricately connected to other facets of the data lifecycle. In fact, understanding the explainability of deletion can have profound implications for broader data management processes and may help contribute towards Equitable Privacy. Our work focuses specifically on explainable deletion because we recognize a significant gap in the literature concerning this critical aspect of the data lifecycle. While there is no doubt that a holistic view of explainability is essential, it is also important to address individual components in-depth to develop a more comprehensive understanding of the data management process. Addressing the explainability of data deletion is a foundational step toward tackling more complex challenges in the data lifecycle. This paper makes the following contributions: * **Introduction of Explainable Deletion:** We introduce the concept of Explainable Deletion as a novel approach to enhance transparency in data deletion processes. We highlight the significance of transparency in addressing the challenges associated with data deletion. * **Exploration of the Benefits of Explainable Deletion:** We shed light on the potential benefits of Explainable Deletion in addressing data deletion challenges. We emphasize the importance of deletion transparency in fostering trust, ensuring accountability, and empowering users to make informed decisions about their data. * **Framework for Implementing Explainable Deletion:** We provide a framework for implementing Explainable Deletion. This framework offers design considerations for practitioners and developers to effectively incorporate transparency mechanisms into data deletion processes. * **Provision of a Research Agenda:** We identify key research areas that require further investigation to advance the understanding and practical application of Explainable Deletion. This research agenda aims to promote deletion transparency in systems and guides researchers and practitioners to contribute to the ongoing development of Explainable Deletion. The rest of the paper proceeds as follows. Section 2 discusses the dimensions of Explainable Deletion, utilizing examples to demonstrate specific situations that emphasize the importance and relevance of ExD. In Section 3, we review the current state of the art on data deletion, while Section 4 introduces the concept of ExD, including its definition, the aspects of Deletion, and the beneficiaries of ExD. Section 5 delves into design considerations for Explanation Deletion, outlining what designers should prioritize. In Section 6, we illustrate with an example how various aspects of deletion can help realize dimensions of ExD. Section 7 presents a research agenda for the community, and finally, we conclude in Section 8. ## 2 Dimensions of Explainability In this section, we present illustrative examples that highlight various dimensions of explainable deletion. 
These examples are intended to demonstrate specific situations or scenarios emphasizing the importance and relevance of explainable deletion in practice. While multiple dimensions may be applicable in each case, we utilize each example to emphasize the significance of a particular dimension. ### Agency [style=] _Instant messaging apps._ In 2021, WhatsApp introduced a "view once" \(a\) feature for photos and videos. With this feature, the sender can mark a photo as view once, and the photo will disappear after the receiver has opened it. While this type of media sharing promises privacy, it remains unclear whether the disappearance means the deletion is permanent or not. Another feature offered by WhatsApp allows users to delete messages for themselves or everyone involved in the conversation. However, when users choose the "Delete for me" option, they lose control over the message they have sent. If they later desire to delete the message for everyone, they will no longer have access to it, though other participants in the chat still have access. The consequences of the deleting messages in this manner is not made clear or well-communicated to users [67]. Footnote 1: [https://faq.whatsapp.com/1077018839582332](https://faq.whatsapp.com/1077018839582332) Providing explanations regarding data deletion serves multiple purposes. Firstly, it empowers users with a comprehensive understanding of the available mechanisms for deleting data and the processes involved during and after deletion. By knowing where these mechanisms are located, users can effectively exercise their right to delete data. Secondly, explanations will assist users in developing a deeper understanding of how different systems handle data deletion and the extent to which data is deleted. This is particularly significant as users often possess incomplete mental models of deletion. Moreover, by gaining insights into the variations among systems regarding deletion procedures and the scope of data deletion, users can make more informed choices regarding their privacy. This includes decisions concerning selecting or avoiding specific systems or services that align with their desired level of data deletion or privacy through deletion. ### Assurance [style=] _In 2017, there were reports that some deleted files and folders from Dropbox \(a\) unexpectedly reappeared in users' accounts [20]. According to the reports [13, 20], some of these files had been deleted for over six years, despite Dropbox's stated retention policy of 30 days. The risks around the method used to delete data from Dropbox were not fully disclosed to users. The procedures handling data deletion are usually undisclosed to users, leaving them uncertain about the specific mechanisms employed and the outcomes to expect following the deletion of their data._ The actual workings of data deletion usually take place away from users' eyes. In most cases, they only get to see the outcome of the deletion process. Consequently, they usually rely on trust that the system or service provider has fulfilled their commitment to delete their data. By explaining and being transparent about the procedures and outcomes associated with data deletion, users can obtain a more comprehensive understanding of the underlying mechanisms at work. This, as a result, will foster a sense of assurance and trust that their data is managed responsibly and ethically. 
Moreover, providing feedback may act as verification or reassurance, confirming that the users' intended actions have been fulfilled as expected. ### Control [style=] _Accidental Deletion._ In 2014, a Dropbox user named Jan "Curn lost 8,343 photos and videos due to accidental deletion [17]. Jan discovered this unfortunate incident two months later while searching for a presentation for his Ph.D. thesis defense. Despite seeking help from Dropbox, he was informed that he had deleted his files several months ago, and since the retention period had expired (60 days). Unfortunately, most of the files were permanently deleted. The accidental deletion occurred when Jan attempted to desynchronize some folders to create space in his computer, but his computer Dropbox client crashed. Upon restart, the client considered all files deleted and removed them from the server. Explainable deletion affords users freedom and control over their deletion actions while interacting with systems. Users often inadvertently delete their data, and it is crucial for them to be aware of the available options for recovering from such mistakes. By providing explanations regarding possible courses of action, such as data recovery, their anxieties can be alleviated, and they can continue to explore and utilize the system with confidence. Moreover, understanding the various types of deletions available to them will give them the autonomy to choose the deletion type that is right and beneficial to them and their businesses. Understanding deletion can also act as a proactive strategy for users; it will help users reflect on their choices and behaviour before choosing or while interacting with various technologies. Overall, explaining deletion will help users understand how much control they have over their data while using the system. ### Compliance [style=] _Privacy regulations, for example, the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) afford users (i.e., data subjects) the fundamental right to erasure. This right empowers users to request the deletion of their personal information from service providers. However, despite being able to exercise this right, users often encounter significant challenges in understanding the process of initiating such requests. Crucial guidance about the specific methods for requesting data erasure is frequently either absent or deliberately obscured, leaving users uninformed about the necessary steps to take. Regulations around deletion, for example, GDPR Article 17 requires service providers to provide processes and mechanisms to ensure that they can fulfil users' data deletion requests promptly and effectively. Therefore, explaining deletion can demonstrate that service providers are committed to complying with deletion regulations and have measures to handle deletion requests from their customers. Furthermore, explainable deletion will assist service providers in meeting users' right to be informed. Lastly, by being transparency about data deletion practices, service providers can demonstrate their commitment to regulatory compliance. ### Transparency _Smart home technologies._ Smart home technologies present a complex ecosystem to users, encompassing interconnected devices and involving multiple parties [2, 14]. The usage of these devices entails the sharing or storing of data by various parties, often using various technologies, such as the cloud, to serve a range of purposes. 
Data deletion processes in this context are not always confined to the devices themselves but may instead happen through web or mobile app interfaces. However, in most cases, how data is deleted, the specific data being deleted, retention policies and the underlying mechanisms for deletion remain undisclosed and non-transparent to users. Deleting data from complex systems, such as smart home technologies and cloud computing systems, is a multifaceted task that often requires the execution of multiple processes. Ensuring complete data deletion from such ecosystems is inherently challenging. For instance, data stored in the cloud can be distributed across various layers (including logical layers) and multiple servers located worldwide. Consequently, when a deletion request is made, the deletion process must propagate through all these layers and services to remove the data effectively. Unfortunately, this process is generally unknown to users. Explaining (e.g., whitebox explanations [31]) such a process would not only assist users in comprehending that their data may not be entirely deleted but also help them understand the reasons behind this limitation. Furthermore, explaining the deletion process can help service providers show that their actions are not malicious or compromising users' data. ## 3 Current State of the Art ### Nominal Data Deletion Modern computing systems allow users to delete their data when deemed unnecessary. The deletion process typically involves user-initiated actions, such as selecting the desired item for deletion and executing the corresponding command. In the case of computer systems, the deleted item is often moved to the trash can or recycle bin. If the user wishes to delete the item permanently, they can select it from the recycle bin and use the "delete permanently" option. However, while the deleted item may no longer be visible, it is technically not fully deleted and can still be recovered [18]. When a user requests the deletion of an item from the recycle bin, the item is first marked as deleted and made inaccessible through any user interface. The system then proceeds to remove the file from the storage medium. Depending on the medium used, the system may mark the memory blocks previously occupied by the data as available for reuse. The item's metadata is also updated to ensure it is no longer linked to other data. However, the entire contents of the deleted item remain in the system. The complexity of the process increases when data is replicated across multiple systems or accessible by multiple devices. Regardless of the system, remnants of the supposedly deleted data may be left behind, and using certain tools makes it possible to recover the supposedly "deleted" data [18, 63, 70]. ### Assured deletion To address the limitations of nominal deletion and facilitate complete data deletion, researchers have proposed various methodologies. Some approaches [35, 63, 64] focus on the physical destruction of the storage medium, which provides a high degree of assurance but is expensive and rarely employed. Alternatively, other techniques [18, 47] involve overwriting the memory blocks previously used for storing the data or filling the storage media with new insensitive data, thus concealing the original data. Other strategies [57, 76, 77] concentrate on rendering the data unusable or inaccessible. Rather than removing the data from the media, it is encrypted, and the encryption key is securely disposed of. 
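As a rough, self-contained illustration of the gap between nominal and assured deletion discussed above (our sketch, not taken from any of the cited works; the function names and the single-pass default are illustrative assumptions), the first routine below only unlinks a file, while the second overwrites its contents before unlinking. As the literature above notes, journaling filesystems, copy-on-write storage, and SSD wear-levelling can still retain old copies, so even the second routine is best-effort rather than a guarantee.

```python
import os

def nominal_delete(path: str) -> None:
    """Nominal deletion: the directory entry disappears, but the file's bytes
    typically remain on the medium until the blocks are reused."""
    os.remove(path)

def overwrite_then_delete(path: str, passes: int = 1) -> None:
    """Best-effort assured deletion: overwrite the file's contents in place,
    flush to the medium, then unlink."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```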
Another way to satisfy deletion requirements without removing data is through privacy concepts such as differential privacy, where identifiable data is removed or disassociated from the dataset [26, 68]. In the case of protecting "deleted" tweets, Minaei et al. [51] developed a tool to hide damaging tweet deletions. Paul and Saxena [56] proposed a scheme to proof deletion in the cloud. Regarding emails, Monson et al. [53] evaluated two secure email prototypes with users on how deletion can be achieved in secure email communications. Numerous studies have pointed out several obstacles that make it difficult to delete data completely. Complete deletion methods can be expensive and may render storage unusable. For instance, securely overwriting data on magnetic media may require performing 35 specific-pattern overwrites on each block [28]. Scrubbing methods, which involve draining the electrical charge from cells, as explored by Wei et al. [82], render the cells unusable after deletion. Moreover, other technologies present their own challenges. For instance, Solid State Disks (SSDs) implement a wear-leveling technique that can prevent the complete overwriting of all cells, potentially leaving deleted data vulnerable to recovery [71]. Machine learning models can memorize the data they have been trained on, but making them forget it is challenging [72, 78]. Almuhimedi et al. [6] also revealed that Twitter posts are challenging to delete due to replies, comments, and the presence of internet archives. Similarly, Zhou et al. [49] showed that users' historical deletion patterns could reveal regrettable tweets. Reardon et al. [64] also highlighted that secure deletion is challenging because of many different adversarial capabilities. In cloud deletion, Ramokapane et al. [58] found that despite the intentions of service providers, the inherent characteristics of cloud infrastructure pose significant challenges to achieving secure data deletion. Shu et al. [70] also demonstrated that data erasure flaws in the Android system are inherited from the underlying Linux kernel. Encryption-based solutions [58, 76, 77] for cloud storage often introduce computational overhead and raise key management concerns. ### Users' Perceptions and Practices _Understanding of deletion._ Regarding users' perceptions and practices of data deletion, numerous studies have revealed that users generally lack understanding or awareness that nominal deletion does not entirely delete data. For example, Murillo [54] found that users' understanding of deletion was limited to the interface; they believed that data was deleted if it was not visible to them. Moreover, they found that some users believed data was retained after deletion for business purposes rather than due to technical constraints. Gutmann and Mark Warner [27] argued that users often conflate the terms 'deleted' and 'erased.' Furthermore, after analyzing the operating systems macOS 10.14 and Windows 10, they discovered unclear or incomprehensible information regarding delete and erase functions. This lack of clarity puts data subjects at risk of accidental data breaches when decommissioning storage devices. In a study by Liu et al. [45], participants displayed varied understandings of account deletion, such as revoking authorization. In the context of instant messaging apps, Schnitzler et al. [67] argued that the term "deleting messages" was ambiguous for users, leading them to estimate the consequences of their deletion actions inaccurately. 
This was also alluded to by Abu-Salma et al. [4], who attributed this misconception to misleading feedback from devices. They pointed out that the warning messages users receive from their devices (e.g., iPhone and Nexus) do not specify whether "all" the data being deleted refers to application-related data stored on the phone or the data associated with the account on the provider's servers. Also, Liu et al. [45] found that users leave their mobile app accounts undeleted because they are unaware of their existence after deleting the apps or that they even need to delete accounts. Regarding cloud deletion, Ramokapane et al. [59] found that some users faced difficulties in deletion due to incomplete mental models of the deletion process or how the cloud works. Also, Wermke et al. [83] found that cloud users are always unsure of the number of copies in the cloud or the procedures to delete such copies. _Reasons for Deletion._ In terms of reasons for deletion, prior studies [40, 54, 59] have indicated that users employ deletion as a means to safeguard their privacy. Some users delete data to free up storage [5, 59], rectify mistakes [37, 67], or eliminate unwanted or outdated information [40, 59]. Others [49, 81, 86] delete as a coping strategy for regrettable content. _Consequences of Deletion._ Deletion actions can give rise to conflicts and distress. Yilmaz et al. [84] discovered that when a social media post evoked positive memories, other users deemed it unacceptable for the owner to delete it. Minaei et al. [50, 51] found that third-party archival services could deduce damaging content from deleted tweets. Regarding deleted posts about themselves, social media users desired privacy for their deleted posts, particularly from large-scale data collectors [49]. In a survey by Alhelali et al. [5], 13% of conflicts were linked to deletion, especially where folder members deleted files without consulting others. Ramokapane et al. [59] explained that conflicts around shared folders can also arise from members' lack of awareness that deleting an item from the folder removes it for everyone. _Users' Deletion Challenges._ Several studies have identified usability as a hindering factor in data deletion from services. For instance, Minaei et al. [49] reported that users found selective deletion mechanisms on social media ineffective in protecting their sensitive deletions. Schaffner et al. [65] found that many social media account deletion interfaces incorporated dark patterns, such as confusing terminologies that could result in accounts not being fully deleted. Previous works [45, 59] have shown that users struggle to delete data due to poorly designed deletion mechanisms. Furthermore, Ramokapane et al. [60] argued that current cloud deletion mechanisms fail to accommodate users' diverse deletion preferences and emphasized the lack of standardization in deletion-related information. Liu et al. [45] found that most individuals who deleted their accounts only read the introduction to account deletion information in privacy policies. Other studies [29, 30, 45, 65, 75] have highlighted users' challenges in locating deletion controls across various technologies. For instance, Schaffner et al. [65] and Liu et al. [45] found that while account deletion is a common desire, users may abandon the process if they cannot easily locate the deletion option. 
_Empowerment of Users._ Experts have suggested that users need knowledge in six areas to understand data deletion effectively: backend processes, time duration, backups, derived information, anonymization, and shared copies [54]. Schnitzler et al. [67] reported that users could make more informed decisions about the effects of deletion mechanisms in instant messaging apps if the deletion types available were explained more clearly. Ramokapane et al. [60] have advocated for developing improved ways of communicating data deletion to users. ## 4 New Paradigm ### Explainable Deletion (ExD) Explainable Deletion is _a collection of methods and techniques that provide agency, control, compliance, assurance, and transparency concerning data deletion in a given system._ As per our definition, we argue that the dimensions of ExD are not orthogonal; they are interconnected and dependent on each other. This means that achieving explainable deletion requires finding the right balance or optimal point that considers all of these dimensions simultaneously rather than focusing on them in isolation. For instance, simply providing more technical details may not guarantee user agency, while offering clearer assurance markers with less technical complexity may be more helpful [52]. Drawing parallels to e-voting systems [16], one of the main challenges they face is the lack of trust markers. In traditional physical voting, numerous trust markers such as booths, sealed boxes, and observers during unesaling boxes and counting contribute to transparency and trust. On the other hand, e-voting systems rely on transparency through open algorithms and mathematical proofs. However, these technical aspects are often not easily understandable by experts and citizens, leading to trust issues. _Explainable Deletion_ represents a novel paradigm that seeks to provide deletion explanations that must address related yet distinct questions concerning data deletion, including aspects such as what, how, when, who, where, and why, tailored to a specific entity. Our vision for deletion explanations is to offer diverse perspectives that cater to specific deletion aspects that are meaningful to the user. It is not necessary for deletion explanations to cover all these aspects, as the explanations themselves may prioritize certain dimensions over others. Moreover, not all users will require or benefit from all explanations, so providing every explanation for every system is unnecessary. We argue that deletion explanations should have a clear objective and be provided to relevant users only when necessary to avoid complicating their interaction with the system. These explanations should be valuable to the user, enabling them to make informed decisions and take appropriate actions based on the information. For instance, consider the scenario of deleted files resurfacing. A regular user may wonder, "Why have my files reappeared after I deleted them years ago?" This experience may create a conflict between their expectations and reality, contradicting their existing knowledge. However, this phenomenon may not surprise an expert user, who may attribute it to the system's deletion method or the level of completeness in the deletion process. In this case, both parties need to know _why_ it happened, but providing them with the same explanation may not be ideal. It is essential to balance comprehensibility and usefulness, which may differ for each party. 
Next, we will discuss the specific aspects of data deletion that deletion explanations may aim to address. _What - Explanations._ The "what" component of deletion explanations focuses on the type of data the system is capable of acting upon, intends to act upon, or has already acted upon. Under this component, explanations provide the user with information regarding the data that can potentially be deleted within a system, the data that a deletion request will impact, or the data the system has already deleted. _How - Explanations._ The "how" aspect outlines the deletion procedure, including the specific methods employed by the system to delete data. Explanations pertaining to the "how" explain the system's internal mechanisms responsible for fulfilling the deletion request. These explanations provide the user with insights into deletion procedures, such as how all copies of the data and related metadata are deleted or whether the deletion is irreversible with no possibility of recovery. _When - Explanations._ These explanations aim to address questions regarding time, providing clarity on when the data will be deleted, the duration of the retention period, and the estimated time required to complete the deletion of data successfully. By offering insights into the timing aspects, the explanations effectively bridge the gap between when the user requests deletion and when the actual deletion procedure takes place or completes.
Figure 1. Dimensions of ExD.
_Who - Explanations._ The "who" explanations provide information about the parties involved in the data deletion process and their responsibilities. They clarify who is authorized to request data deletion and who will be responsible for the necessary actions. For instance, an explanation may provide information on individuals or entities with the authority to delete data permanently, or information on the entities which will need to execute the deletion procedure for deletion to complete successfully. _Where - Explanations._ The "where" explanations are centered on location, providing information about the whereabouts of certain elements. This could include data, controls, or guides. "Where" explanations may offer insights into where specific deletion controls can be found or indicate where deleted data may be stored or found (e.g., the recycle bin). _Why - Explanations._ "Why" explanations seek to assist users in understanding the behavior of the system prior to, during, and following the deletion request. They shed light on the underlying reasons why the system behaves in a particular way. For instance, a user might get a message explaining why the free cloud storage quota has not changed after deleting some files, or the reason why other users are still able to interact with a deleted social media post. ### Explainable Deletion for whom? We envision explainable deletion as having relevance and benefit for various individuals and groups involved in, or having an interest in, data deletion processes and their implications. This includes not only users but also other entities involved in developing or using technology. In this section, we discuss these different groups. By doing so, we emphasize that explanations for data deletion can vary depending on who is involved and their specific needs. For example, explanations intended for system users may differ in context and length compared to those aimed at regulatory bodies. _Users._ Users of various systems that collect and delete data are critical stakeholders in explainable deletion.
We see system users as having a direct interest in understanding data deletion and the mechanisms to delete data from systems. _Service Providers._ Entities and organizations that develop technology or provide services to users should also be interested in explainable deletion. Service providers are responsible for complying with regulations, particularly those related to data deletion [10, 25]. Moreover, they are also responsible for providing deletion mechanisms in their systems to allow users to delete their data. Lastly, providing a transparent system would benefit them business-wise, fostering trust and confidence in their customers and partners [62]. _Examples._ According to Statista [73], one of the most challenging obligations for companies in the U.S. and Europe to comply with in 2019 was the GDPR right to be forgotten. More recently, there have been debates about how ChatGPT handles inaccurate or misleading information and inquiries regarding its compliance with the GDPR right to be forgotten. Reports (e.g., [24]) continue to say that currently, ChatGPT does not provide procedures for individuals to request data deletion. _Regulators._ We also anticipate that regulatory bodies responsible for ensuring that organizations and companies offering services comply with data protection and privacy regulations will be interested in explainable deletion. These entities would want to ensure that organizations and companies adhere to the prescribed guidelines concerning data deletion practices. _Examples._ Many countries are now following the E.U. (i.e., GDPR) and California (i.e., CCPA) by establishing laws focusing on data deletion. As regulators enact these laws, they will be interested in how companies comply with the requirements or the mechanisms they employ to show compliance. Australia is the latest to consider the right to delete in its privacy act [38]. _Developers._ Designers and application developers who build systems should also be interested in explainable deletion as they need to build compliant systems. They need to know how to implement deletion explanations or what to do to meet the regulatory requirements. _Examples._ Considering the need to build compliant applications, developers will be interested in ExD. Previous research [1, 48, 74] has argued that while most systems are not compliant with regulations and many developers lack an understanding of compliance, most developers are interested in complying with regulations. ExD would provide developers with methods and techniques to achieve compliance. In his article for Reuters, Bellamy [8] argues that the new era requires an understanding of laws and that this understanding will create a foundation from which to analyze and comprehend requirements. _Researchers._ We also see academics and industry researchers having an interest in explainable deletion. They may be interested in exploring and developing novel approaches, frameworks, and best practices for implementing deletion explanations. _Example._ There is a significant amount of research focusing on explainable AI; however, there is a need for empirical evidence, particularly on ExD. In Section 7 of this paper, we discuss the gaps that should spark interest from researchers. _Legal Experts._ Deletion explanations can be useful to legal professionals specializing in data protection and privacy laws. They can use explanations to ensure clients adhere to regulations and respect users' rights.
_Example._ In 2014, there was a legal case involving Google Spain and Google Inc. versus the Spanish Data Protection Agency (AEPD) regarding Mario Costeja Gonzalez's search results. The AEPD requested that Google remove inaccurate data related to Mario from the Google search results. Commentators have pointed out that while search results may be delisted in the EU, this does not necessarily mean the results are completely deleted. This case highlights the importance of understanding the legal aspects and how they translate into technical implementation. _Privacy Advocates._ Lastly, deletion explanations can be valuable to individuals and organizations advocating data privacy rights. These advocates emphasize the importance of transparency and accountability, and deletion explanations may serve to achieve these objectives. _Example._ Recently, many advocacy groups have supported various groups and individuals in understanding how to delete their data. For instance, in 2023, privacyrights.org sponsored the California Delete Act (S.B.) [7], which aims to provide residents of California with essential tools to take control of their personal information and protect their privacy. Other organizations, such as PrivacyDuck, would also benefit from ExD since they offer free and paid resources to users. Media outlets like Business Insider [36] have also published guides on deleting data online, further promoting awareness and education about deletion. Such advocates need to know how to delete data in order to help others do the same.
Footnote: California Delete Act bill, [https://privacyrights.org/resources/california-delete-act-bill-give-californians-more-control-over-their-personal-data](https://privacyrights.org/resources/california-delete-act-bill-give-californians-more-control-over-their-personal-data)
## 5. How to Explain Deletion We posit that the foundation of designing deletion explanations lies in providing high-quality information about data deletion in a meaningful and useful manner to the intended audience. A useful deletion explanation is one that meets a certain standard of what constitutes a satisfactory explanation based on the recipient, usage conditions, and the specific task at hand. The criteria for a satisfactory explanation are typically qualitative and not quantifiable. Previous research [9, 11, 12, 23, 33, 44] has proposed several desirable properties of satisfactory explanations. For instance, Chazette et al. [12] emphasized that explanations should fulfill criteria such as accessibility, usability, informativeness, understandability, and auditability to achieve system transparency. Other studies [23, 34, 62] suggest that the quality of explanations is dependent on the user and the specific context.
This implies that deletion explanations should not aim to be one-size-fits-all; rather, they should be tailored to the individual user and the technology they interact with. In terms of deployment, the implementation of explainable deletion should be tailored to accommodate the distinct characteristics of various systems. Variations in interfaces and technical constraints might necessitate the utilization of diverse methods for delivering explanations. Nevertheless, it is crucial to ensure that the core topics and concepts covered in the explanations remain consistent across all systems. This section discusses various design aspects that should be considered when developing deletion explanations. We recognize that efforts must be broadly focused on three design aspects: content, presentation, and usability. ### Content This category pertains to the information that should be included in deletion explanations. Deletion explanations should primarily address the process of data deletion or how the system handles the deletion of data. They should be communicated using language that is accessible and understandable to users, enabling them to make informed decisions regarding data deletion and privacy. The content should be informative, reducing ambiguity and clarifying any uncertainties about the actual deletion process or user expectations. Users normally focus on high-level functionality rather than intricate technical details, so the explanations should be tailored to the user and the specific context. Consistency in content is crucial to establish reliability and trust. For example, an explanation regarding data retention for a particular service should remain consistent regardless of how users access it. Designers should also ensure that the explanations are user- and context-specific to provide only the necessary information for understanding. ### Presentation This category focuses on how deletion explanations should be delivered to the receiver, including the delivery method and modality. To promote inclusivity, deletion explanations should be made as accessible as possible. Designers should prioritize making users aware of the existence of these explanations and ensuring their accessibility. Explanations can be presented to the receiver in different modalities, such as textual, audio, or graphical formats. For example, this could be facilitated through existing privacy label frameworks [22, 39, 43, 69]. While mobile apps and IoT device labeling frameworks are still in their infancy stages, they can be used to present deletion explanations. Various symbols can be designed to represent various deletion concepts and types. Integrating data deletion indicators into existing label frameworks will allow for a more holistic representation of data management practices. The choice of modality should be based on the user and the specific context, focusing on what will effectively explain deletion concepts to the receiver. For example, using privacy labels to represent deletion explanations for IoT devices might work better for users. Deletion explanations can be delivered to users automatically or upon request. Automatically delivered explanations can be integrated into the user's interaction with the technology and presented when needed. It is essential to ensure deletion explanations do not disrupt user journeys or interfere with their primary tasks. The other option is to allow users to request or invoke an explanation. 
Explanations should be placed where users can easily find them when needed. In the earlier example of Dropbox, users could have been allowed to visit a specific deletion page to find additional information about how Dropbox handles data deletion. Identifying the appropriate locations for these explanations is crucial, and designers should choose places where users are most likely to search for such information. Currently, users struggle to find information about data deletion [30, 54, 59, 60]. Signposting them to these locations is also essential. Allowing users to request explanations when needed can prevent them from hindering their primary tasks. The mechanisms used to relay the information should give the user control, allowing them to skip an explanation if desired. When determining how much information should be provided, explanations should be concise and succinct to prevent cognitive overload and overwhelming the user. Reusing explanations, when applicable, can help reinforce understanding and beliefs, as well as maintain consistency. Moreover, by reusing explanations, fidelity is preserved, and users can develop a consistent mental model of deletion processes. ### Usability This category emphasizes the importance of considering the usefulness and comprehensibility of deletion explanations from the users' perspective. The goal of an explanation is to facilitate users' understanding of data deletion, which requires tailoring the content to their specific needs and level of expertise. Therefore, transparency in deletion will become meaningful if it enhances users' comprehension, ensuring that the information provided is understandable and accessible to the intended audience. Furthermore, the usefulness of the information lies in its potential to influence users' perceptions and guide their actions. Deletion explanations should empower information receivers to make informed decisions and act appropriately. To achieve this, explanations should address relevant questions and omit irrelevant or redundant information that users may already be familiar with. For instance, an explanation designed to guide users on locating deletion controls should not go into detailed instructions on how to use those controls. We are not advising against multiple explanations being presented together. We are only suggesting that when multiple explanations co-exist, they should be logically connected and presented to the user for a specific purpose. Consideration should also be given to users' existing background knowledge and beliefs. Building upon their existing understanding of deletion processes can enhance acceptance and prevent misconceptions. Previous research [79] suggests that introducing new information that contradicts or does not align with users' beliefs can create resistance. Therefore, explanations should be designed to align with users' mental models and facilitate the development of proficiency in deleting data from various systems. Usability also entails providing users with control and interactivity when possible and appropriate. Users should be able to engage with the explanation according to their needs, such as requesting additional information for clarification. By allowing users to interact with the explanation, it becomes more personalized and responsive to their specific requirements. 
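To make the design space above more concrete, the following sketch shows one possible way of representing a deletion explanation as a simple data structure that combines the what/how/when/who/where/why dimensions from Section 4.1 with the presentation attributes discussed in this section. It is only an illustration: the field names, the enumerations, the selection helper, and the example wording are our own assumptions rather than part of any existing framework or API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class Dimension(Enum):
    """Aspect of deletion that an explanation addresses (Section 4.1)."""
    WHAT = "what"
    HOW = "how"
    WHEN = "when"
    WHO = "who"
    WHERE = "where"
    WHY = "why"


class Modality(Enum):
    TEXT = "text"
    AUDIO = "audio"
    GRAPHICAL = "graphical"      # e.g. privacy-label style icons


class Delivery(Enum):
    AUTOMATIC = "automatic"      # shown proactively at the relevant step
    ON_REQUEST = "on_request"    # invoked by the user when needed


@dataclass
class DeletionExplanation:
    dimension: Dimension
    audience: str                # e.g. "end user", "regulator", "developer"
    content: str                 # concise, audience-appropriate wording
    modality: Modality = Modality.TEXT
    delivery: Delivery = Delivery.AUTOMATIC


def explanations_for(audience: str,
                     catalogue: List[DeletionExplanation]) -> List[DeletionExplanation]:
    """Return only the explanations tailored to a given audience."""
    return [e for e in catalogue if e.audience == audience]


# Hypothetical example: a "when" explanation for end users, shown on request.
catalogue = [
    DeletionExplanation(
        dimension=Dimension.WHEN,
        audience="end user",
        content="Copies of your files may persist in backups for up to 30 days.",
        delivery=Delivery.ON_REQUEST,
    ),
]
print([e.content for e in explanations_for("end user", catalogue)])
```

Keeping the dimension and the audience explicit in such a representation makes it straightforward to check, for a given system, which aspects of deletion remain unexplained for which stakeholders.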
Overall, deletion explanations should be designed using a user-centric approach, considering users' needs and mental gaps, as their acceptance and usefulness depend not only on their accuracy (or correctness) but also on other various factors such as users' background, the context of use, experience, and the specific technology they seek to understand. Recognizing the diverse range of users and their unique requirements is crucial in creating explanations that effectively convey information and foster understanding. By tailoring explanations to align with users' perspectives and providing information that resonates with their cognitive abilities and prior knowledge, the overall quality and effectiveness of the explanations can be significantly enhanced. ## 6. Operationalizing Explainable Deletion We envision ExD to be an integral component of the system development cycle. Designers and developers should consider the concept of explainable deletion during the initial stages of system conceptualization and creation. This approach not only allows for the integration of user-centric design principles but also promotes effective data management and proactive compliance efforts. For example, designers can leverage this opportunity to create interfaces and processes that align with users' needs and preferences right from the outset. While we believe that the optimal time to operationalize ExD is during the design and developmental phases emphasizing privacy-, explainable-, and transparency-by-design, it can also be effectively deployed in existing systems. We do not advocate for dismantling existing systems; rather, we emphasize that ExD can be seamlessly integrated into established systems. Integrating ExD into existing systems ensures ongoing good data management, compliance with regulations, and the rebuilding of user trust. Furthermore, deploying ExD is not a one-off effort but rather an iterative process that requires revisiting and improvement as requirements and regulations around data deletion evolve. Figure 2 depicts the different steps involved in operationalizing ExD. The following sections provide a comprehensive description of these steps: _(1) Identifying the explanation audience and stakeholders._ The initial step entails identifying the target audience or the specific group for whom the explanations are being crafted. The selection process is crucial because explanations should be personalized and not too generalized. However, this is not to say that explanations cannot serve more than one group or be generalized. The second category to identify comprises the experts who will actively engage in designing and implementing deletion explanations. It is important to note that this process is not exclusive to system designers; it encompasses anyone with data deletion expertise, such as legal teams, developers, and user experience (UX) experts. _(2) Defining Explainable deletion goals._ The second step revolves around outlining the objectives of the deletion explanations. These goals should outline what the explanations aim to achieve by making data deletion processes more transparent and comprehensible. It is vital that these goals are aligned with the objectives of the audience engaging with the system or service. _(3) Data inventory and Mapping._ Progressing to the subsequent stage involves carefully identifying and compiling various data types collected, shared, or generated by the system. 
This inventory should also encompass the data storage locations or potential points of passage. During this stage, it is also vital to identify the corresponding deletion controls and mechanisms employed for deleting such data (a minimal illustrative sketch of such an inventory follows these steps). _(4) Assess Current Deletion Practices._ For deployed systems, this stage centers on evaluating the existing deletion practices and associated informational elements. It begins with assessing how the data identified in the preceding stage is currently being deleted. It also involves scrutinizing the alignment between deletion controls, policies, and the data that can be deleted. Furthermore, it aims to determine if the provided deletion information sufficiently explains data deletion and associated controls, thereby identifying any informational gaps. In cases where the system is yet to be launched, this phase may extend to ensuring all data within the system is accounted for in terms of deletion controls, mechanisms, and information. _(5) Develop Explanations Framework._ Guided by the data inventory and data deletion assessment outcomes, the next step involves creating an encompassing explanation framework that addresses various dimensions of data deletion. This should be a collaborative effort that includes all stakeholders (i.e., experts) and aims to clarify the deletion of all collected and shared data. Explanations should attempt to explain the 'what,' 'how,' 'when,' 'who,' 'where,' and 'why' aspects of deletion. During this stage, prioritizing the comprehensibility and usability of content (i.e., explanations) is essential, and this should be based on the intended target audience. As explanations are developed, they should undergo testing with the intended audience to ascertain their clarity. Introducing new features post-explanation deployment should trigger the development of corresponding new explanations. _(6) Design and Integrate Explanations into Interfaces._ Following the formulation of the framework, the next stage involves crafting user-friendly interfaces (e.g., voice, graphical) that meet the audience's expectations, concerns, and preferences concerning data deletion explanations. During this phase, decisions regarding the presentation and usability of the interfaces should be made. The designs should seamlessly integrate with the existing user experience while causing minimal disruption to established system functionality. _(7) Testing and Evaluating Explanations with the Relevant Audience._ This phase is intertwined with the previous two steps: the design, development, and testing of the explanations and interfaces are conducted with the intended audience. Should testing not yield satisfactory outcomes (e.g., explanations being hard to understand or the interfaces affecting the system's usability), revisiting the design and development of the explanations becomes necessary.
Figure 2. Summary of various steps for operationalizing ExD.
_(8) Deployment and Continuous Improvement._ Upon successfully meeting testing requirements, the explanations can then be deployed. Nonetheless, our viewpoint emphasizes that developing and deploying data deletion explanations should not be a singular exercise. Teams must continuously refine and enhance deletion explanations, informed by user feedback, integration of new features, evolving best practices, and shifts in data protection regulations.
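As flagged in step (3) above, the following minimal sketch illustrates how a data inventory might be recorded and checked for explanation gaps during steps (3)-(5). All field names and example values are illustrative assumptions and are not drawn from any particular system.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class DataInventoryEntry:
    """One row of the data inventory compiled in steps (3) and (4)."""
    data_type: str                 # e.g. "profile photo", "chat history"
    storage_locations: List[str]   # primary store, backups, caches, third parties
    deletion_control: str          # the user-facing control that triggers deletion
    deletion_mechanism: str        # how the system actually deletes the data
    retention_days: int            # how long copies may persist after a request
    explained: bool = False        # is this entry already covered by an explanation?


def explanation_gaps(inventory: List[DataInventoryEntry]) -> List[str]:
    """Steps (4)-(5): list data types whose deletion is not yet explained."""
    return [entry.data_type for entry in inventory if not entry.explained]


inventory = [
    DataInventoryEntry(
        data_type="shared folder files",
        storage_locations=["primary store", "backups", "other members' views"],
        deletion_control="Delete from folder",
        deletion_mechanism="soft delete, later purged from backups",
        retention_days=30,
    ),
]
print(explanation_gaps(inventory))   # -> ['shared folder files']
```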
#### 6.2.2 Example: Permanently Deleting a Cloud Account A simplified example of an explainable deletion framework for permanently deleting a cloud account is presented in Figure 3. In this illustration, the intended audience is the users of the cloud platform, and the aim is to provide them with agency, control, and assurance when engaging with the deletion process offered by the cloud service provider. We assume the user intends to delete their account entirely, including all associated data. It is also assumed that the user has successfully located the deletion controls to initiate the process of permanent account erasure. The exemplar provides high-level information in textual form, designed to help users comprehend the procedure and outcomes of deleting their cloud account. In practice, this content would be identified through a series of empirical inquiries involving relevant stakeholders such as system designers, legal teams, and users. ## 7 Research Agenda While providing explanations for deletion can benefit users, it may also introduce new challenges. In this section, we explore the potential challenges that can arise due to explaining deletion. Based on these challenges, we draw up a research agenda for future studies aimed at promoting deletion transparency in systems. ### Adversarial Tactics Adversarial tactics pose challenges to the realization of explainable deletion by exploiting vulnerabilities in the system and undermining the transparency and effectiveness of the deletion process. Certain techniques may hinder deletion, making deletion explanations untrustworthy and unreliable. For example, adversaries may attempt to evade deletion by exploiting system vulnerabilities or manipulating the logic (using dark patterns to discourage deletion [65]). _Research Agenda._ Researchers should focus on developing new threat models that consider various adversaries and their motivations regarding deletion. This will help identify potential vulnerabilities, develop robust security measures to protect systems, and ensure the credibility of deletion explanations. Future research could also explore methods for detecting and preventing deletion evasion. ### Commercial constraints or proprietary technologies Explaining the internal workings of a system, even for deletion purposes, may not be easy to achieve due to commercial constraints and proprietary reasons [62]. Service providers may risk losing their competitive edge against other companies. Moreover, ExD might be challenging to deliver, given that data often cross many system boundaries. _Research Agenda._ Future research is needed to investigate methods that can provide explainability in deletion processes without compromising the provider's competitive position. Research should identify which information is suitable to share and determine the appropriate depth of that information. ### New Concerns Providing detailed explanations about data deletion, for instance, information about what data is or is not deleted, can raise new privacy concerns. The introduction of new information might inadvertently reinforce misunderstandings and biases about systems, which could discourage users from wanting to interact with these systems [3, 19, 21]. Rather than fostering trust, misunderstanding of certain disclosures may raise doubts or suspicions among users, undermining their confidence in the system's integrity and reliability.
Figure 3. A sample of what deletion explanations may look like for the deletion of a cloud account.
With the application of ExD, the user has a better chance of understanding what will happen when they request that their account be deleted.
_Research Agenda._ Further research is required to examine the influence of deletion explanations on users' trust in systems and to develop strategies for establishing and sustaining trust. Studies can assess trust levels before and after providing explanations, providing valuable insights into the impact of transparency. This calls for more research concerning misconceptions around data deletion; prior work [59] found that users fail to delete data in the cloud because of their incomplete mental models. There is a need to explore users' expectations, concerns, and requirements concerning deletion processes in various technologies (e.g., smart home devices). A comprehensive understanding of users' expectations, concerns, and needs will inform the design and implementation of effective deletion mechanisms across various technologies. While there are existing efforts aimed at understanding deletion in various technologies, such as the cloud [40, 59, 60] and web technologies [45, 54], a knowledge gap exists concerning users' comprehension of deleted data, associated risks, and preferences. Future studies should investigate users' understanding of deleted data, the risks involved, and their preferences regarding it. ### Cost implications Explaining deletion can be resource-intensive and costly for system owners. Designing and implementing comprehensive explanations may require significant investments in terms of time, effort, and financial resources [80]. Moreover, there may be a need to acquire new skills or expertise to effectively communicate the complexities of data deletion to users. These factors contribute to the potential cost implications of implementing explainable deletion practices. _Research Agenda._ Future research should assess the skills and expertise needed to provide deletion explanations effectively. There is also a need for interdisciplinary research to identify and promote best practices for cost-efficient and effective implementation of explainable deletion. There could be collaborative efforts with various stakeholders to develop guidelines, standards, or frameworks to help organizations navigate the cost implications while ensuring transparency and accountability in data deletion practices. Lastly, system owners should evaluate the benefits of gaining trust and regulatory compliance against the cost of implementing ExD. ### User overload Introducing deletion explanations within modern systems may aggravate existing complexities and overload users with additional information, potentially worsening their overall user experience. Offering detailed explanations might overwhelm users, affecting their ability to navigate and comprehend the system effectively [15, 62]. _Research Agenda._ Future research could focus on strategies to mitigate information overload and ensure effective user understanding. This can include exploring techniques such as friendly visualizations or summaries to reduce cognitive load and enhance comprehension. Adopting a user-centered design approach for deletion explanations can ensure intuitive interfaces that meet user needs and fit into the overall user experience [85]. Additionally, there is a need to determine the appropriate context and timing for delivering explanations to maximize their usefulness to users.
### Challenges of explaining technical concepts Explaining technical operations accurately, comprehensively, and briefly in a simple and non-technical way presents a significant challenge [15, 61, 62]. Deletion processes, in particular, can be inherently complex and may require specialized technical knowledge to comprehend fully and to explain effectively to others. Additionally, the complex nature of certain infrastructures, including their data storage and deletion mechanisms, further complicates explaining their behavior to designers and end users. _Research Agenda._ Future work could aim to understand the complexities of different data storage and deletion mechanisms to identify key challenges and develop easy-to-understand strategies for explaining their behavior. Research could analyze existing infrastructures, protocols, and standards to uncover areas where explanation gaps exist and propose solutions. There is also a need to explore methods that can simplify technical operations and complex processes, making them comprehensible to relevant parties. ### Criteria for Good Explanation Defining the criteria for a good explanation might pose another serious challenge [32, 42]. Should explanations prioritize providing the correct answer or ensuring that users can understand the answer? The former often disregards the listener's level of understanding and assumes there is only one correct explanation. However, the latter is more pragmatic, considers the audience, and aims to provide explanations that are easy to understand [41]. While we argue for pragmatic deletion explanations, it is essential to acknowledge that such explanations may introduce confusion and potentially include incorrect information. Resolving these issues presents a significant challenge in achieving explainable deletion. _Research Agenda._ There is a need for user-centric explanations; research should engage users to understand their preferences and needs regarding deletion explanations. Ramokapane et al. [60] investigated users' needs concerning deletion information for the cloud. Future works should extend that work and investigate how various groups perceive and evaluate explanations. There is also a need to balance correctness and understandability. Explanations should contain information that is both correct and understandable by users. ### Implementation Challenges Lastly, deletion explanations might be technically challenging to implement. Incorporating these explanations into existing systems can be technically demanding. Designing new interfaces and user interactions to accommodate deletion explanations may pose challenges for developers. Moreover, integrating explanations seamlessly into user journeys may result in increased complexity. _Research Agenda._ Future research should investigate how deletion explanations could be integrated into existing systems and user journeys without disrupting system functionality and user experience. Future works also need to address scalability and performance issues. Lastly, developers need to be supported; tools, libraries, APIs, documentation, and tutorials are all crucial for realizing ExD. ## 8. Conclusion Data has a lifecycle. It is created, stored, used/reused, shared, combined, analysed and synthesised. And, finally, it is deleted. Or it should be. After all, the right to removal of one's data from digital systems is enshrined in many legal regulations such as the GDPR.
Yet deletion remains shrouded in mystery - from complexity and usability issues regarding initiating data deletion, to understanding what happens as a consequence, through to gaining assurance that it has indeed happened and transparency about the processes utilised by the requisite party (or parties) in deleting the data. This leads to an inherent and fundamental asymmetry between providers and users. Provision of data is easy, yet removal is difficult, nigh impossible. In this paper, we have highlighted these very challenges and issues and argued that explainability of data deletion is as critical as clarity over data collection, use and sharing. Explainable deletion is, however, not without its challenges. Many key facets need to be addressed to deliver this new paradigm - from the design and properties of the explanations, to suitable means of presenting and communicating them to different stakeholder groups, through to technical measures that deliver transparency, control and assurance. The research agenda we propose highlights a number of directions to develop the empirical basis, technical underpinnings, socio-economic insights and user-centred designs for explainable deletion. These research advances are critical in redressing the aforementioned asymmetry between service providers and users. Without explainable deletion, users will never have true agency over their data. ## Acknowledgments The authors would like to express their gratitude to Dr. Partha Das Chowdhury for providing thorough feedback on the initial draft of this paper. This work is part of our efforts to make privacy equitable; it has been supported by EPSRC EP/W025361/1.
2310.09134
Coexistence of Anomalous Hall Effect and Weak Net Magnetization in Collinear Antiferromagnet MnTe
Anomalous Hall effect (AHE) plays important role in the rapidly developing field of antiferromagnetic spintronics. It has been recently discussed that it can be a feature of not only uncompensated magnetic systems but also in altermagnetic materials. Hexagonal MnTe belongs to this appealing group of compounds exhibiting AHE and is commonly perceived as magnetically compensated. Here, we demonstrate that bulk form of MnTe exhibits small but detectable magnetic moment correlating with hysteretic behaviour of the AHE. We formulate a phenomenological model which explains how this feature allows to create a disbalance between states with opposite N\'eel vector and prevent the AHE signal from averaging out to zero. Moreover, we show how the dependence of AHE on the N\'eel vector arises on microscopical level and highlight the differences in Berry curvature between magnetically compensated and uncompensated systems.
K. P. Kluczyk, K. Gas, M. J. Grzybowski, P. Skupiński, M. A. Borysiewicz, T. Fąs, J. Suffczyński, J. Z. Domagala, K. Grasza, A. Mycielski, M. Baj, K. H. Ahn, K. Výborný, M. Sawicki, M. Gryglas-Borysiewicz
2023-10-13T14:23:17Z
http://arxiv.org/abs/2310.09134v1
# Coexistence of Anomalous Hall Effect and Weak Net Magnetization in Collinear Antiferromagnet MnTe ###### Abstract Anomalous Hall effect (AHE) plays important role in the rapidly developing field of antiferromagnetic spintronics. It has been recently discussed that it can be a feature of not only uncompensated magnetic systems but also in altermagnetic materials. Hexagonal MnTe belongs to this appealing group of compounds exhibiting AHE and is commonly perceived as magnetically compensated. Here, we demonstrate that bulk form of MnTe exhibits small but detectable magnetic moment correlating with hysteretic behaviour of the AHE. We formulate a phenomenological model which explains how this feature allows to create a disbalance between states with opposite Neel vector and prevent the AHE signal from averaging out to zero. Moreover, we show how the dependence of AHE on the Neel vector arises on microscopical level and highlight the differences in Berry curvature between magnetically compensated and uncompensated systems. + Footnote †: These two authors contributed equally. ## I Introduction Antiferromagnetic materials attract a lot of attention due to the groundbreaking demonstrations of current-induced control of the spin axis [1; 2; 3], recently observed long-distance spin transport [4; 5] and numerous fundamental findings related to the role of the local atomic site asymmetries. They can give rise to the Neel order spin orbit torques [6; 7] or the recently recognized new group of magnetic materials called altermagnets [8; 9], in which different spin sublattices are coupled by certain rotation symmetry operations, giving rise to multiple interesting phenomena including large band spin-splitting. The intensive interest in both ordinary antiferromagnets (AFs) and altermagnets results in the rapid development of an array of experimental methods capable of detecting the reorientation of the Neel vector [10; 11; 12]. Electrical methods such as anisotropic magnetoresistance (AMR) [13; 14; 15], spin Hall magnetoresistance (SMR) [16; 17; 18; 19] or the anomalous Hall effect (AHE) give insight into the magnetic properties. Remarkably, the AHE was shown to be present in systems that are magnetically compensated, such as non-collinear antiferromagnets [20; 21; 22] or altermagnets [23; 24], including the very compelling, semiconducting hexagonal MnTe. The very fact that MnTe is conducting allows its properties to be probed directly in electronic transport experiments. In this report, we have addressed high-quality free-standing bulk samples, eliminating the influence of epitaxial strain on the magnetic anisotropy [25] and the domain structure [26; 27] and opening a way for precise quantitative volume magnetometry due to the absence of bulky substrates. Also, for bulk samples the impact of possible parasitic substrate or interface conductivity may be neglected. The magnetization and resistivity studies, together with Raman spectroscopy, evidenced the antiferromagnet-paramagnet phase transition. Moreover, not only do we show that the anomalous Hall effect is present also in bulk samples, complementing the recent study of thin layers epitaxially grown on InP(111)A substrates [24], but we also demonstrate the existence of a relatively weak ferromagnetic-like (WFL) signal, whose presence correlates with the AHE and the antiferromagnetic (AF) state.
The anomalous Hall signal recently detected in MnTe [24], is believed to originate from the interplay of non-relativistic [28] symmetry-related spin splitting of bands and spin-orbit interaction. This can result in: (1) the dependence of AHE on the direction of the Neel vector and (2) the effective control of magnetic moments in the basal plane \(\perp c\) by external magnetic field applied along the \(c\)-axis. To substantiate this notion we formulate a phenomenological macrospin model of this material system. The model explains the existence of finite magnetic moment (even in the absence of an external magnetic field) by the interaction of the Dzyaloshinskii-Moriya type and for a reasonable set of parameters it can reproduce well the hysteretic behavior of AHE. While in real samples the finite net magnetization \(\vec{M}_{net}\) is essential to achieve an uncompensated domain structure (where the AHE does not average out to zero), a hypothetical single-domain MnTe system can exhibit AHE even if the magnetic moments lie exactly in-plane (i.e. \(\vec{M}_{net}=0\)), as shown by Berry curvature (BC) analysis. ## II Samples Bulk MnTe crystals were produced from vapour phase. 5N tellurium and manganese powders were placed at two ends of quartz ampule, which was subsequently sealed under vacuum and heated. After evaporation of tellurium at tempera ture of 600\({}^{\circ}\)C, at about 950\({}^{\circ}\)C tellurium vapors reacted with manganese, forming few mm large MnTe crystals of irregular shape. The composition of the crystals was studied with X-ray diffraction experiment which revealed the desired NiAs-type MnTe (Fig. 1) and no alien phases (see Sec. S. I. in Supplementary Materials [29]). Preliminary transport experiments were performed on samples of irregular shapes. For quantitative transport experiments a small bar structure was prepared (with the width and heights of about 0.2 mm, and length of about 2 mm). Two pairs of contacts in a six terminal Hall configuration (Fig. 2) were made on each side of the bar by gold pre-deposition and silver-based electrically conductive epoxy. Both orientation of this hallbar and its homogeneity were determined in high resolution X-ray diffraction experiment (HR XRD), showing that \(c\) axis was perpendicular to the current, which was along [11\(\overline{2}\)0] direction (see Fig.2 for sample scheme), with the accuracy of 5\({}^{\circ}\). Besides, a formation of grains with a slight disorientation of \(\sim 0.20^{\circ}\) was also evidenced. The calculated lattice parameters are: \(c=6.7116\) A, \(a=4.1483\) A, as expected for the hexagonal MnTe and slightly different than for the MBE-grown samples [30; 31]. Additionally, we characterise our samples using Raman spectroscopy (see Sec IIIA of the Supplementary Information). A dependence on temperature of spectral properties of \(E_{2g}\) phonon transition (intensity, energy and linewidth) exhibits a clear peculiarity in the vicinity of the Neel temperature. This allows us to independently confirm the \(T_{\rm N}\) value. ## III Experimental results ### Magnetic Properties Magnetic measurements were performed using superconducting quantum interference device (SQUID) magnetometer MPMS XL5 of Quantum Design. Two specimens were investigated. For temperature dependent studies at weak magnetic fields exactly the same sample was used as for the transport measurements, however after removing the electrical connections. 
Since some residues of silver epoxy remained on the specimen, for high field studies another, nearly cube-shaped sample was prepared of sub-millimeter dimensions, cut from a neighboring location to the transport sample, with crystallographic orientation confirmed by HRXRD. For the magnetic studies the samples are fixed to the sample holders formed from about 20 cm long and 1.7 mm wide Si sticks by means of strongly diluted GE varnish. We strictly follow all rules formulated for precision magnetometry and quantitative determination of weak magnetic signals in commercial magnetometers [32]. The samples do exhibit typical basic magnetic characteristics expected for highly oriented hexagonal AF crystals. Namely, a sizable magnetic anisotropy [33] with respect to the hexagonal \(c\)-axis was observed, as presented in Fig. 3(a). At low temperatures the magnetic susceptibility, \(\chi\), measured along the \(c\)-axis \(\chi_{\parallel}\) is substantially smaller than that measured with magnetic field \(H\) perpendicular to \(c\)-axis, \(\chi_{\perp}\), revealing the in-plane orientation of the Neel vector. Both \(\chi_{\parallel}\) and \(\chi_{\perp}\) coincide just above 300 K and follow same trend on further warming. Those results stay in perfect agreement with results of Komatsubara et al. [34]. Despite the fact that both dependencies pass through a weak maximum at about 330 K Figure 1: Crystal structure of hexagonal MnTe, with cyan and yellow balls representing Mn atoms with arrows indicating magnetic moments. Violet arrow shows Néel vector orientation. Figure 3: (a) Resistivity and magnetic susceptibility dependence on temperature of the studied hexagonal MnTe samples. (b) The peak in the first derivative of susceptibility \(\mathrm{d}\chi/\mathrm{d}T\) and resistivity \(\mathrm{d}\rho_{xx}/\mathrm{d}T\) at the Néel temperature. Figure 2: Configuration of the measuring system in relation to the crystal structure of the sample. we assign the temperature the two curves merge to the Neel temperature, which magnitude is more precisely determined from temperature derivatives of both \(\chi\). Both results, collected in Fig. 3(b), indicate the existence of a clear maximum at \(T=(307\pm 1)\) K. We assign this temperature to the Neel temperature of the material. A clear mark of magnetic phase transition is visible also in the Raman scattering spectra (see Fig. S6 in Supplementary Materials [29]), as well as in the resistivity dependence on the temperature, where a kink in the vicinity of \(T_{N}\) was observed on the rising slope of \(\rho_{xx}(T)\) (Fig. 3(a)). Interestingly, the peak in d\(\rho_{xx}\)/d\(T\) occurs at the same temperature and exhibits striking similarity to the peaks in d\(\chi_{\perp}\)/d\(T\) and d\(\chi_{\parallel}\)/d\(T\) at \(T_{\rm N}\) (see Fig. 3(b)) together reflecting the phase transition [35; 36; 37]. Hex-MnTe has been regarded as a collinear AF [38; 24; 31], so no magnetization is expected to be seen at \(H=0\). First, to avoid any unintentional magnetization of the sample by a residual field in the superconducting magnet, we use the so-called,,magnet reset option" to completely quench \(H\). As established independently the magnitude of \(\mu_{0}H\) during collecting of \(M_{\rm REM}\) has been maintained below 0.01 mT. Second, for all three orientations a non-zero magnetic moment was registered at all temperatures, including \(T>T_{\rm N}\). This feature is found in all samples studied and its origin stays currently unknown. 
Upon this reasoning we can treat magnitudes of both \(M_{\rm REM}^{a}\) and \(M_{\rm REM}^{m}\) as a baseline to assess true magnitudes of \(M_{\rm REM}^{c}\), as indicated in Fig. 4 by the hatched area. In following we conclude that the magnitude of \(M_{\rm REM}\) attains \(3\times 10^{-5}\)\(\mu_{\rm B}\)/Mn, or the angle of canting of Mn spins from the \(c\)-plane towards the \(c\) axis is about 0.05 deg (or 3 arc minutes). Such a very small magnitude of this \(M_{\rm REM}\) is most likely the reason it has not been revealed in the previous studies of the thin films [38; 24]. Now we note briefly the other important point that this \(M_{\rm REM}^{c}\) vanishes exactly at \(T_{\rm N}\). Thus it must be related to the system of antiferromagnetically coupled Mn moments and, whatever causes the Mn spins to cant, the rapidly vanishing excess \(M_{\rm REM}^{c}\) magnetization offers a great opportunity to directly study the properties of the Neel transition in this material. ### Anomalous Hall effect Fig. 5 presents magnetic field dependence of the Hall resistivity \(\rho_{yx}\), measured below the Neel temperature. The overall linear slope of the Hall resistivity has a small hysteresis loop superimposed, which however in contrast to [24] is of an opposite sign (see Fig. S3 in the Supplementary Materials [29]). If this linear Hall resistivity reflects the ordinary Hall effect, its sign points out holes as dominant charge carriers. This is consistent with the sign of thermopower voltage checked at room temperature. By subtracting the linear part, anomalous Hall component \(\rho_{yx}^{AHE}\) is obtained (for details see Sec. S. III. in the Supplementary Materials [29]), where the hysteresis loop is clearly visible, see Fig. 5(b). The two curves come from two different pairs of Hall contacts. Both the magnitude and the shapes are very similar, pointing to the high uniformity of the sample. The AHE component stays saturated above 2 T, and at lower magnetic fields some steps are observed. Possibly they reflect locally different antiferromagnetic domain structure. It is interesting to trace temperature evolution of the Hall coefficient \(R_{\rm H}(T)\) (\(R_{\rm H}=\rho_{yx}/B\)), the saturation value of \(\rho_{yx}(T)\) and hysteresis width \(w(T)\) of \(\rho_{yx}^{AHE}\). The results are presented in Fig. 6(a)-(c). The evolution of \(R_{\rm H}(T)\) shows striking resemblance to magnetic remanence dependence \(M_{REM}\) on temperature, recalled for convenience on the right axis in Fig. 6(a), with a clear critical character at the Neel temperature. Although magnetic ions and holes are coupled systems, this Hall coefficient evolution Figure 4: Temperature dependence of the remnant magnetization, \(M_{\rm REM}\), acquired for all 3 major orientations of the sample. Colors of the symbols match the picture represent the experimental configuration given in the inset. The hatched area represents the temperature dependent magnitude of the WFL \(M_{\rm REM}^{c}\). Figure 5: (a) Hall resistivity \(\rho_{yx}\) below \(T_{\rm N}\) with clearly visible hysteresis loop. Arrows indicate direction of field sweep. (b) The hysteresis \(\rho_{yx}^{AHE}\) extracted from \(\rho_{yx}\) with its characteristic parameters: width \(w\) and saturation \(\rho_{sat}\). is astonishing, revealing that the linear Hall signal cannot be exclusively related to the ordinary Hall effect dependent solely on free carrier concentration, but must be somehow influenced by the magnetic properties of the material. 
The results show even a change of sign of \(\rho_{yx}\) close to the Neel temperature (see Fig. S2 in the Supplementary Materials [29]), which interpreted in a straightforward way would signify electron conductivity in the paramagnetic phase. Similar sign variation of \(R_{\rm H}\) close to the magnetic phase transition has been observed in manganese-based perovskites [39] and ascribed to effects related to spin-orbit splitting. Those effects are certainly also vital here, as the top of the valence band (including curvature of the bands) is strongly influenced by spin-orbit interaction [40] but microscopic mechanism could be very different in MnTe (we elaborate on this in the Supplementary Materials [29]). The temperature impacts also the shape of the hysteresis loop. First of all, the hysteresis vanishes (stays below our detection limits) above 307 K (see Fig. 6(b)-(c)), clearly indicating relation with the magnetically ordered system of Mn ions. The \(\rho_{sat}(T)\) and \(w(T)\) initially follow \(M_{REM}\), however in a more limited temperature range. Similar observation have just been made by R. D. Gonzalez Betancourt et al. [24] for thin MnTe films and by Y. Liang et al. [41] for metal/antiferromagnetic insulator heterostructures, where a model of antiferromagnetic topological textures was proposed with colinear AF deflecting due to both AF exchange coupling and Dzyaloshinskii-Moriya interactions (DMI). Although the studied system here is different, same phenomena occur as well, with the DMI-like interactions playing an important role. We argue below that \(M_{REM}\) is in principle not necessary for the AHE to occur, were we somehow able to prepare the system in a single-domain state. However, if the \(\vec{N}\) and \(-\vec{N}\) states (in other words, \(\uparrow\downarrow\) and \(\downarrow\uparrow\)) have the same energy and occur with equal probability then the AHE will average out to zero as demonstrated in Fig. 1(e) of Ref. [24]. Combined effect of DMI and external magnetic field can lift the degeneracy of \(\uparrow\downarrow\) and \(\downarrow\uparrow\) in energy, thereby allowing for AHE that does not average out to zero, but symmetry of NiAs does not allow for DMI. Finite value of \(M_{REM}\), as found experimentally suggests that a more complicated interaction between Mn magnetic moments is present instead, that lifts the degeneracy of \(\uparrow\downarrow\) and \(\downarrow\uparrow\) when magnetic field along \(c\)-axis is applied and we elaborate on this argument in the subsequent section. ## IV Discussion of the origin of AHE The origin of the anomalous Hall effect is complex, depending on the studied system [42; 43; 44]. Generally, the diverse mechanisms behind it are classified into intrinsic ones, related to the symmetry of the band structure and extrinsic, related to the scattering mechanism of charge carriers, all leading to an additional correction \(\rho^{AHE}\) to Hall component of the resistivity tensor \[\rho_{\rm H}=R_{\rm H}\cdot B+\rho^{AHE}, \tag{1}\] whereas traditionally, \(\rho^{AHE}\propto m\) was assumed -- based on phenomenology for most ferromagnets [45] -- and the excitement about AHE in antiferromagnets has been coming from the realisation that non-zero \(\rho^{AHE}\) can indeed occur even if the magnetization \(m\) vanishes. The observed coexistence of finite magnetic remanence \(M_{REM}\) and AHE may lead to the attempt of describing the AHE as originating solely from \(M_{REM}\). We believe however that the situation is more complex. 
There are numerous systems in which, in spite of \(M_{REM}=0\), the AHE is still predicted (see e.g. [46]) and explained by wavefunction symmetry considerations and non-vanishing Berry curvature. Below we show that for some Neel vector directions the Berry curvature of the MnTe valence band stays finite, thus contributing to the AHE [42]. Then we switch to modelling the impact of this weak magnetic signal within a macrospin model, where the finite magnetic moment is evoked by an interaction of the Dzyaloshinskii-Moriya type, and show that it can reproduce well the hysteretic behavior of the AHE. ### Berry curvature Symmetry arguments [24] regarding the existence of AHE are valid irrespective of its detailed microscopic mechanism. If we assume that the origin of this effect in MnTe is intrinsic, we can attain additional understanding regarding the on/off switching of the AHE (depending on Neel vector direction) by analysing the Berry curvature \(\Omega_{z}(\vec{k})\).
Figure 6: Comparison of the Hall effect characteristic parameters and remnant magnetization as a function of temperature: (a) Hall coefficient, (b) hysteresis width and (c) saturation.
In Fig. 7, we consider a section of the crystal-momentum space (i.e. \(\vec{k}\) space) in a plane perpendicular to the \(\Gamma\Lambda\) line so that the situation around the valence band top is shown (this choice is explained in Supplementary Materials [29], Sec. S. V.). Unlike in ferromagnets, where the intrinsic AHE is believed to originate from hot spots of Berry curvature [47], its structure in MnTe is richer, comprising minima and maxima (such character has recently been analysed in terms of _multipoles_ [48] as opposed to the monopole character of the aforementioned hot spots), and their balance depends on the orientation of the Neel vector. We now turn our attention to this feature of the BC. On the left panel of Fig. 7, the integral of \(\Omega_{c}^{k_{x}}(k_{x},k_{y})\) is zero: even if the BC is non-trivial also for this direction of the Neel vector, the Fermi sea integral is still zero and the AHE is forbidden. For other directions of the Neel vector (except the \(120^{\circ}\) rotated cases), this symmetry is broken and the BC has a non-zero integral. As an example we show \(\vec{N}\parallel y\) on the right panel. We conclude that while the BC in ferromagnets has a monopole character in reciprocal space [45], its structure in AFs is less trivial. Since collinear magnetic order is simpler than e.g. kagome-lattice AF systems (such as Mn\({}_{3}\)Sn [20]), the symmetry of the BC plots can be straightforwardly analysed: from the case of the Neel vector parallel to the \(c\)-axis (not shown), where three-fold symmetry is preserved, to \(\vec{N}\parallel x\), where the symmetry is lowered but the integrated BC remains zero, and finally to cases as shown in Fig. 7(b), where the AHE becomes allowed. ### Modelling of AHE hysteresis loop To demonstrate how an external field applied along the \(c\)-axis lifts the degeneracy of energies of \(\uparrow\downarrow\) and \(\downarrow\uparrow\) domains, we now attempt to describe the behavior of the weak non-zero magnetization found in the experimental data of Fig. 4. Such a situation can arise due to the DMI that is known to be responsible for weak magnetization in many systems such as hematite [49]. The symmetry of NiAs-type MnTe allows for a higher-order interaction [50] that shares with the DMI the property of creating an uncompensated net magnetic moment.
In its basic form \(\mathbf{D}_{12}\cdot(\mathbf{M}_{1}\times\mathbf{M}_{2})\), the DMI is forbidden in the NiAs-type structure, but we use a more general approach as follows. We construct a 2D macrospin model in which we consider two antiferromagnetic moments coupled by an exchange interaction \(J\), in an external magnetic field \(\mathbf{B}\), with uniaxial anisotropy \(K_{u}\) and an interaction related to spin-orbit coupling parametrised by \(D_{12y}\). The set of coordinates is chosen so that the magnetic moments lie in the \(xz\)-plane, which corresponds to a cross section through the magnetic-easy \(xy\)-plane of MnTe, and the magnetic field is applied out of plane. The energy expression has the following form:

\[\varepsilon=J\mathbf{M}_{1}\cdot\mathbf{M}_{2}-\mathbf{B}\cdot(\mathbf{M}_{1}+\mathbf{M}_{2})+K_{u}V\left(\sin^{2}\alpha_{1}+\sin^{2}\alpha_{2}\right)+D_{12y}f(\phi,\gamma) \tag{2}\]

where \(\alpha_{1,2}\) are the angles between the corresponding magnetic moments and a common crystalline direction within the \(xz\)-plane. The first three terms correspond to the usual [51] Stoner-Wohlfarth model of an antiferromagnet with two magnetic sublattices. If \(\mathbf{D}_{12}\|\hat{y}\) were allowed by symmetry, then \(f\) would only depend on the canting angle \(\gamma=\alpha_{2}=\pi-\alpha_{1}\). In our case, the basic form of the Dzyaloshinskii-Moriya interaction is replaced by a more complicated functional dependence [50] (including the azimuthal angle \(\phi\)), but when the moments are confined to the \(xz\)-plane and \(|\mathbf{M}_{1}|=|\mathbf{M}_{2}|=M_{0}\), only \(f=M_{0}^{2}\sin(2\gamma)\) remains, as the canting angle \(\gamma\) is the same for both sublattices. The energy can be explicitly expressed as a function of these two angles, \(E(\alpha_{1},\alpha_{2})\), and consequently its minimization brings information about the expected equilibrium state of the system. It is convenient to divide Eq. (2) by the sublattice saturation magnetization \(M_{0}\), as the resulting \(E=\varepsilon/M_{0}\) is equivalent to \(\varepsilon\) from the minimization point of view. We perform a numerical calculation of the energy landscape for all possible angles \(\alpha_{1,2}\in[0,2\pi]\) and a chosen set of parameters expressed in teslas: \(B_{J},B_{K},B_{D}\), where \(B_{J}=JM_{0}\) is the exchange field, \(B_{K}=K_{u}V/M_{0}\) is the anisotropy field, \(B_{D}=D_{12y}M_{0}\) is the DMI-like field, and \(V\) is the volume of the system. We notice that for \(B=0\) we obtain two equivalent energy minima both for zero and non-zero \(B_{D}\). Moreover, for the chosen set of parameters, it is enough to minimize \(\varepsilon\) with respect to one of the angles, because \(\alpha_{2}\) depends on \(\alpha_{1}\) and can be unequivocally extracted: the two are coupled via the exchange interaction, which is very strong compared to the external magnetic field. The energy landscape may exhibit multiple minima for small values of the external magnetic field. An example of such a case is presented in Fig. 8, where the energy colour map is shown as a function of the magnetic field along the \(z\)-axis and \(\alpha_{1}\).

Figure 7: Berry curvature (in Å\({}^{2}\)) of the topmost valence band in the vicinity of its maximum (section \(k_{z}=\) const. is shown, \(k_{x,y}\) in Å\({}^{-1}\); see Sec. S. V. in Supplementary Materials [29] for further details). Position of the Néel vector is indicated at the foot of the image.
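A minimal numerical sketch of this minimization, written for illustration, is given below. It uses the in-plane reduction discussed above -- so the DMI-like term is taken as \(B_{D}\sin(\alpha_{1}-\alpha_{2})\), which equals \(B_{D}\sin 2\gamma\) for \(\alpha_{1}=\pi-\gamma\), \(\alpha_{2}=\gamma\) -- together with the effective fields quoted for Fig. 8 (\(B_{J}=100\) T, \(B_{K}=0.04\) T, \(B_{D}=10\) T), and it follows a local minimum while sweeping \(B_{z}\), seeding each minimization with the previous solution. It is a sketch of the procedure rather than the exact code behind Figs. 8 and S4-S5; whether a given parameter set produces an open loop follows the trends discussed in the text.

```python
import numpy as np
from scipy.optimize import minimize

B_J, B_K, B_D = 100.0, 0.04, 10.0          # effective fields (T), values quoted for Fig. 8

def energy(angles, B_z):
    """Reduced energy E = eps/M0 of the two-sublattice model, in-plane reduction of Eq. (2).
    alpha_1,2 are measured from the in-plane easy direction; the field B_z is applied along z."""
    a1, a2 = angles
    exchange   = B_J * np.cos(a1 - a2)                 # J * M1.M2 / M0
    zeeman     = -B_z * (np.sin(a1) + np.sin(a2))      # -B.(M1 + M2) / M0
    anisotropy = B_K * (np.sin(a1)**2 + np.sin(a2)**2)
    dmi_like   = B_D * np.sin(a1 - a2)                 # reduces to B_D * sin(2*gamma)
    return exchange + zeeman + anisotropy + dmi_like

def sweep(B_values, start):
    """Track a local energy minimum along a field sweep; return the net moment m_z per site."""
    angles, m_z = np.asarray(start, dtype=float), []
    for B_z in B_values:
        res = minimize(energy, angles, args=(B_z,), method="Nelder-Mead",
                       options={"xatol": 1e-9, "fatol": 1e-12})
        angles = res.x
        m_z.append(0.5 * (np.sin(angles[0]) + np.sin(angles[1])))
    return np.array(m_z)

n = 241
B_up = np.linspace(-12.0, 12.0, n)                 # up sweep
m_up = sweep(B_up, start=(np.pi + 0.1, -0.1))      # start in the m_z < 0 canted AF state
m_dn = sweep(B_up[::-1], start=(0.1, np.pi - 0.1)) # down sweep, starting in the m_z > 0 state
print("m_z at B = 0 on the up / down branches:",
      round(m_up[n // 2], 4), round(m_dn[n // 2], 4))
```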
It can easily be seen that for \(B_{z}=0\) the two states producing magnetization of opposite sign along \(z\) are equivalent; although a strong magnetic field favours one of them, there is still a region of moderate fields in which the magnetization can occupy either of two competing positions due to a local energy minimum (the minima are marked by red dots in Fig. 8). This observation can reproduce the appearance of hysteresis in experiments when the magnetic field is swept, as shown by the red dots. The red dashed arrows indicate the magnetic field at which an abrupt Neel vector reorientation occurs. The disappearance of the local energy minimum can be clearly seen in Fig. S4 in the Supplementary Materials [29], where a few chosen energy curves for different fields are presented. A non-zero external magnetic field changes the number of available energy minima. Upon increasing the magnitude of the magnetic field, the local energy minimum around \(\alpha_{2}=0\) disappears. This is expected to cause a reorientation of the magnetic moments, since \(\alpha_{2}\) will change by nearly \(180^{\circ}\) to reach the new energy minimum. As a result, the net magnetic moment will reorient to the direction parallel to the magnetic field if it was originally antiparallel. This scenario produces the hysteretic behavior of the magnetization. We note that the hysteresis can be reproduced for a certain range of parameters, and that the width of the hysteresis depends on \(B_{D}\) and \(B_{K}\) (Fig. S5 in the Supplementary Materials [29]). Typically, values of the exchange fields are orders of magnitude higher than the anisotropy fields [52; 53; 54]. We keep \(B_{J}=100\) T, which is a common general assumption [52; 53; 54], in very reasonable agreement with [55] and of the same order of magnitude as [31]. The anisotropy field values used for the simulations are below 0.1 T (a common limit in the literature is \(<1\) T [52; 53; 54]; see also [56]). For such a set of parameters we find that \(B_{D}\) has to reach values at the level of single teslas for the onset of hysteretic behavior of the magnetic orientation in the external magnetic field. Compared to reported values of DMI fields, we note that while interfacial DMI tends to have significantly lower values [57], the DMI field in hematite, which is a canted antiferromagnet, was determined to be 2.72 T [56], of the same order of magnitude as in our case. When \(B_{D}\) is too small compared to \(B_{J}\), the two energy minima persist for all tested magnetic fields, similarly to the case of a fully compensated antiferromagnet, where the two opposite in-plane states of the Neel vector remain degenerate in an out-of-plane magnetic field. When \(B_{D}\) is large, the magnetic moments may still reorient close to zero field, but the hysteresis may become invisible. Additionally (as shown in Fig. S5 in the Supplementary Materials [29]), increasing the anisotropy may enhance the hysteresis width, which may explain the temperature evolution of the observed hysteresis (Fig. 6). We also emphasise that our model reveals an interesting mechanism in which the Neel vector rotates by \(180^{\circ}\) upon sweeping the magnitude of the magnetic field applied in the perpendicular direction.

## V Summary

We report on the observation of a hysteretic behavior of the transverse resistivity in a magnetic field that can be ascribed to the anomalous Hall effect, as well as the detection of a small but measurable magnetic moment, in bulk antiferromagnetic hexagonal MnTe.
The AHE hysteresis width, magnitude, and the remnant magnetization exhibit similar dependences on temperature in the high-temperature range, i.e., they decrease monotonically with temperature and vanish above \(T_{\text{N}}\), which is carefully determined by multiple independent experimental methods. This clear correlation between the magnetization and transport data paved the way to constructing a macrospin model with an additional, phenomenological energy term inspired by the Dzyaloshinskii-Moriya interaction, in which the uncompensated magnetic moment is found to be responsible for the hysteretic behavior within a certain range of material parameters. Direct discrimination between the different magnetic and non-magnetic contributions to the magnetotransport data, and a detailed discussion of their magnitudes, remains challenging; therefore, the existence of other mechanisms giving rise to the AHE in our samples cannot be unequivocally ruled out.

###### Acknowledgements.

This work was partially supported by the Polish National Centre for Research and Development through grant No. TECHMATSTRATEG1/346720/8/NCBR/2017 and by the National Science Centre, Poland under Grant 2021/40/C/ST3/00168. The Czech Science Foundation (GACR) provided support under grant 22-219745. JS acknowledges support within the "New Ideas 2B in POB II" IDUB project financed by the University of Warsaw. _MG-B_ and _JS_ express their gratitude to prof. W. Natorf. We are indebted to Prof. U. Zuelicke for stimulating discussions, and KV thanks O. Sedlacek for assistance in the numerical analysis of the Berry curvature.

Figure 8: Energy landscape as a function of the external magnetic field applied along the \(z\) axis and the orientation of one magnetic moment with respect to the easy plane, described by \(\alpha_{1}\), calculated for \(B_{J}=100\) T, \(B_{K}=0.04\) T and \(B_{D}=10\) T. The red points indicate the positions of the energy minima.

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2307.12865
Observation of $π$ solitons in oscillating waveguide arrays
Floquet systems with periodically varying in time parameters enable realization of unconventional topological phases that do not exist in static systems with constant parameters and that are frequently accompanied by appearance of novel types of the topological states. Among such Floquet systems are the Su-Schrieffer-Heeger lattices with periodically-modulated couplings that can support at their edges anomalous $\pi$ modes of topological origin despite the fact that the lattice spends only half of the evolution period in topologically nontrivial phase, while during other half-period it is topologically trivial. Here, using Su-Schrieffer-Heeger arrays composed from periodically oscillating waveguides inscribed in transparent nonlinear optical medium, we report experimental observation of photonic anomalous $\pi$ modes residing at the edge or in the corner of the one- or two-dimensional arrays, respectively, and demonstrate a new class of topological $\pi$ solitons bifurcating from such modes in the topological gap of the Floquet spectrum at high powers. $\pi$ solitons reported here are strongly oscillating nonlinear Floquet states exactly reproducing their profiles after each longitudinal period of the structure. They can be dynamically stable in both one- and two-dimensional oscillating waveguide arrays, the latter ones representing the first realization of the Floquet photonic higher-order topological insulator, while localization properties of such $\pi$ solitons are determined by their power.
Antonina A. Arkhipova, Yiqi Zhang, Yaroslav V. Kartashov, Sergei A. Zhuravitskii, Nikolay N. Skryabin, Ivan V. Dyakonov, Alexander A. Kalinkin, Sergei P. Kulik, Victor O. Kompanets, Sergey V. Chekalin, Victor N. Zadkov
2023-07-24T15:03:42Z
http://arxiv.org/abs/2307.12865v1
# Observation of \(\pi\) solitons in oscillating waveguide arrays ###### Abstract Floquet systems with periodically varying in time parameters enable realization of unconventional topological phases that do not exist in static systems with constant parameters and that are frequently accompanied by appearance of novel types of the topological states. Among such Floquet systems are the Su-Schrieffer-Heeger lattices with periodically-modulated couplings that can support at their edges anomalous \(\pi\) modes of topological origin despite the fact that the lattice spends only half of the evolution period in topologically nontrivial phase, while during other half-period it is topologically trivial. Here, using Su-Schrieffer-Heeger arrays composed from periodically oscillating waveguides inscribed in transparent nonlinear optical medium, we report experimental observation of photonic anomalous \(\pi\) modes residing at the edge or in the corner of the one- or two-dimensional arrays, respectively, and demonstrate a new class of topological \(\pi\) solitons bifurcating from such modes in the topological gap of the Floquet spectrum at high powers. \(\pi\) solitons reported here are strongly oscillating nonlinear Floquet states exactly reproducing their profiles after each longitudinal period of the structure. They can be dynamically stable in both one- and two-dimensional oscillating waveguide arrays, the latter ones representing the first realization of the Floquet photonic higher-order topological insulator, while localization properties of such \(\pi\) solitons are determined by their power. keywords: Floquet topological insulators, \(\pi\) states, edge solitons, SSH model + Footnote †: journal: Science Bulletin ## 1 Introduction Photonic topological insulators [1; 2] are unique materials hosting localized topologically protected states at their edges by analogy with edge modes in electronic topological insulators, first predicted in solid-state physics [3; 4]. Various mechanisms of formation of the photonic topological edge states were discovered, most of which are associated with breakup of certain symmetries of the underlying system possessing specific degeneracies in the linear spectrum. The most representative feature of topological edge states is their remarkable robustness with respect to deformations of the structure, disorder, and their persistence for different geometries of the edge between topologically distinct materials. Their formation and robustness has been predicted and demonstrated for various photonic systems with broken time-reversal symmetry, for valley-Hall systems with broken inversion symmetry, and in higher-order topological insulators [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Particularly nontrivial situation is realized when topological phase is induced by periodic modulations of system parameters in the evolution variable [21], for example in the direction of light propagation. Characterization of such driven Floquet systems usually requires special topological invariants, as shown in [22; 23; 24]. Among the most intriguing manifestations of topological effects in Floquet systems is the formation of unidirectional edge states as proposed in [8] and observed at optical frequencies in [9], observation of anomalous topological states [25; 26; 27], and of so-called anomalous \(\pi\) modes associated with nonzero \(\pi\) gap invariant and studied in [28; 29; 30; 31; 32]. 
Recent surge of interest to topological pumping in near-solitonic regime should be mentioned too [33; 34; 35]. \(\pi\) modes are unique topological states that may appear in a quasi-energy spectrum of the Floquet system that due to modulation of its parameters spends half of the evolution period in "instantaneous" nontopological phase, while on the other half of the period it is topologically nontrivial. In tight-binding models describing Floquet systems, \(\pi\) modes usually appear at the edges of the "longitudinal" Brillouin zone with quasi-energies equal to \(\pm\pi/T\) (where \(T\) is period of the Floquet system), in contrast to conventional "zero-energy" edge states in static topolog ical systems, hence the notion of \(\pi\) modes that we also use in this work for convenience. So far, the photonic \(\pi\) modes have been observed only in linear regime in one-dimensional modulated Su-Schrieffer-Heeger (SSH) arrays in a microwave range [36] and at optical frequencies in non-Hermitian or plasmonic SSH arrays [37; 38; 39] with high refractive index contrast, where, however, considerable losses limit propagation distances to hundreds of micrometers. \(\pi\) modes may have applications in the design of systems supporting high-quality cavity modes [40; 41], for realization of low-threshold lasers [42; 43; 44; 45], in strongly correlated electron-photon systems [46], and other areas. They have been also encountered beyond the realm of optics, e.g., in acoustics [47; 48]. Nevertheless, to date the photonic \(\pi\) modes remain unobserved in higher-dimensional conservative systems and their nonlinear analogs were never reported experimentally. At the same time, photonic Floquet systems offer unique testbed for the exploration of nonlinear effects in specifically designed low-loss topological guiding structures, where observation of so far elusive class of \(\pi\) solitons is possible. It should also be mentioned that nonlinearity is playing an increasingly important role in all-optical control of topological systems, see recent reviews [2; 49]. In particular, nonlinearity may stimulate modulational instability of the nonlinear edge states [50; 51; 52], it leads to rich bistability effects for edge states in pumped dissipative resonator structures [53; 54; 55], it may cause power-controlled topological transitions [56], and enables the formation of topological solitons both in the bulk of the insulator [57; 58] and at its edges [51; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70]. The important property of such solitons is that they remain localized due to nonlinearity, and at the same time they inherit topological protection from linear edge modes, from which they usually bifurcate. Corner solitons in higher-order topological insulators have been reported too [71; 72; 73]. Very recently it was theoretically predicted [74] that Floquet topological systems may support a new class of topological solitons, qualitatively different from previously observed unidirectional states [67] - namely, \(\pi\) soliton - that represents dynamically oscillating nonlinear Floquet state with a quasi-energy in the topological bandgap that exactly reproduces its intensity distribution after each longitudinal period of the structure. Even strongly localized \(\pi\) solitons are practically free from radiative losses that usually restrict propagation distances for unidirectional edge solitons in Floquet waveguiding systems [64; 67]. 
In this work, we report on the experimental observation of \(\pi\) solitons in one- and two-dimensional Floquet waveguide arrays, where nontrivial topological properties arise due to \(z\)-periodic oscillations of waveguide centers in each unit cell of the structure. The arrays considered here are inscribed in a transparent nonlinear dielectric medium (fused silica) using the technique of direct femtosecond laser writing [75; 76; 77] and represent SSH-like structures, which, however, are not static, but spend half of the \(z\)-period in "instantaneous" topological phase, while during other half of the period they are "instantaneously" non-topological, as defined by periodically varying intra- and inter-cell coupling strengths. In two dimensions such arrays represent the realization of the photonic Floquet higher-order insulator. Floquet spectrum of such arrays is characterized by the presence of in-gap topological \(\pi\) modes, from which robust \(\pi\) solitons can bifurcate in the nonlinear regime. We observe such solitons using single-site excitations, study their periodic evolution with distance, and dependence of their localization properties on the amplitude of the waveguide oscillations and power. ## 2 Results and discussions We consider paraxial propagation of a light beam along the \(z\) axis of the medium with focusing cubic nonlinearity and shallow transverse modulation of the refractive index that can be described by the nonlinear Schrodinger-like equation for the dimensionless light field amplitude \(\psi\): \[i\frac{\partial\psi}{\partial z}=-\frac{1}{2}\left(\frac{\partial^{2}}{ \partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right)\psi-\mathcal{R}(x, y,z)\psi-|\psi|^{2}\psi. \tag{1}\] Here \(x,y\) are the scaled transverse coordinates, \(z\) is the propagation distance that plays in Eq. (1) the same role as time in the Schrodinger equation describing a quantum particle in a potential, and the function \(\mathcal{R}(x,y,z)\) describes array of periodically oscillating waveguides. For details of normalization of Eq. (1) see Section S1 in the Supplementary materials. ### 1D \(\pi\) solitons First of all, for observation of 1D \(\pi\) solitons we consider the SSH-like arrays of oscillating waveguides containing 7 unit cells. Refractive index distribution in such arrays can be described by the following function: \[\mathcal{R}=p\sum_{m}[e^{-(x_{1m}^{2}/a_{x}^{2}+y^{2}/a_{y}^{2})}+e^{-(x_{2m} ^{2}/a_{x}^{2}+y^{2}/a_{y}^{2})}],\] where \(x_{1m}=x_{m}+d/2+r\cos(\omega z)\) and \(x_{2m}=x_{m}-d/2-r\cos(\omega z)\) are the \(x\)-coordinates of the waveguide centers in each unit cell containing two waveguides, \(\omega=2\pi/Z\) is the spatial frequency of oscillations of the waveguide centers, \(Z\) is the longitudinal period of the array, \(x_{m}=x-2md\), \(m\) is the integer index of the cell, \(r\) is the amplitude of the waveguide oscillations, which was varied from 1 to 11 \(\mu\)m, \(d=30\,\mu\)m is the spacing between waveguides at \(r=0\) (i.e. unit cell size is \(2d\)), \(a_{x}=2.5\)\(\mu\)m and \(a_{y}=7.5\)\(\mu\)m are the widths of waveguides that are elliptical due to writing process, and \(p\) is the array depth proportional to the refractive index contrast \(\delta n\) in the structure (see Section S1 in the Supplementary materials ). Schematic illustration of such array is presented in Fig. 1a. 
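The array geometry entering \(\mathcal{R}\) is simple enough to reproduce directly. The short sketch below evaluates the refractive-index profile of the 1D oscillating array on a grid using the waveguide parameters quoted above, so that the two alternating nearest-neighbour separations \(d\pm 2r\cos(\omega z)\) can be checked at the distances shown in Fig. 1b; the grid and the depth value \(p\) are illustrative placeholders rather than the values used in our simulations.

```python
import numpy as np

# Waveguide parameters quoted in the text (lengths in micrometres; Z converted to micrometres)
d, r, Z = 30.0, 7.0, 33.0e3
a_x, a_y = 2.5, 7.5
p, n_cells = 1.0, 7                    # depth p is a placeholder scale factor here
omega = 2 * np.pi / Z

def R(x, y, z):
    """Refractive-index profile of the 1D oscillating SSH-like array (formula for R above)."""
    total = np.zeros_like(x, dtype=float)
    shift = d / 2 + r * np.cos(omega * z)
    for m in range(n_cells):
        x_m = x - 2 * d * m
        total += np.exp(-((x_m + shift) ** 2 / a_x ** 2 + y ** 2 / a_y ** 2))
        total += np.exp(-((x_m - shift) ** 2 / a_x ** 2 + y ** 2 / a_y ** 2))
    return p * total

x = np.linspace(-2 * d, 2 * d * n_cells, 4001)
y = np.zeros_like(x)
for z in (0.0, Z / 4, Z / 2):          # the three phases shown in Fig. 1b
    sep_a = d + 2 * r * np.cos(omega * z)   # the two alternating nearest-neighbour separations
    sep_b = d - 2 * r * np.cos(omega * z)
    print(f"z = {z/1e3:5.2f} mm: separations = {sep_a:4.1f} um / {sep_b:4.1f} um, "
          f"peak R = {R(x, y, z).max():.2f}")
```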
As one can see, the separation \(d-2r\cos(\omega z)\) between two waveguides in the unit cell (intracell separation) of this structure varies dynamically, leading to periodic transformation between "instantaneously" topological (inter-cell coupling exceeds intracell one) and non-topological (inter-cell coupling is weaker than intra-cell one) SSH configurations. Microphotographs of such 1D fs-laser written arrays in fused silica at different distances within the sample are presented in Fig. 1b. The array period \(Z=33\,\)mm was selected such that our samples contained three full \(z\)-periods of this Floquet structure (see Section S2 in the Supplementary materials ). Nontrivial topological properties in this system arise due to longitudinal variations of the structure (oscillations of the waveguides). Its modes are the Floquet states \(\psi=u(x,y,z)e^{ibz}\), where \(b\) is a quasi-propagation constant [for first Brillouin zone \(b\in[-\omega/2,+\omega/2)]\), and \(u(x,y,z)=u(x,y,z+Z)\) is the \(Z\)-periodic complex field that satisfy the equation \[bu=\frac{1}{2}\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{ \partial y^{2}}\right)u+\mathcal{R}u+i\frac{\partial u}{\partial z}+|u|^{2}u. \tag{2}\] Neglecting nonlinear term in Eq. (2) we calculate linear spectrum of 1D array using the method proposed in [(27)] (see Section S5 in the Supplementary materials ). The transformation of linear spectrum with increase of the amplitude \(r\) of waveguide oscillations is shown in Fig. 1c. Quasi-propagation constant \(b\) is defined modulo \(\omega\) and in Fig. 1c we show the spectrum within three longitudinal Brillouin zones. Gray lines correspond to the delocalized bulk modes, while the red lines correspond to the linear topological \(\pi\) modes [(28; 29; 30; 31; 32)]. Notice that they emerge around the points, where Floquet replicas of the bands spectrally overlap, and longitudinal modulation hybridizes states at the band edges lifting their degeneracy and opening the topological gap. Because our structure is symmetric, \(\pi\) modes appear near both edges of the array. Quasi-propagation constants of \(\pi\) modes are located in the forbidden gap in the Floquet spectrum that guarantees absence of coupling with bulk states. Their localization near a given edge increases with increase of the gap width, cf Fig. 1d and e. Such modes show strong shape transformations within longitudinal period, but exactly reproduce their shape after each period \(Z\). Remarkably, Fig. 1e clearly shows that global intensity maximum of the \(\pi\) mode is not always located in the edge waveguide. For instance, at \(z=Z/2\), where the array is in instantaneous nontopological phase, the intensity maximum switches into next to edge waveguide, while at \(z=Z\), exactly after one oscillation period, where structure returns into instantaneous topological configuration, the light also switches back to the edge waveguide. Already for \(r=7\,\mu\)m the \(\pi\) mode contracts practically to single waveguide in some points within evolution period that enables its efficient excitation in the experiment [this determined our choice of the initial "phase" of the waveguide oscillations in Fig. 1a; linear spectrum clearly does not depend on this phase]. 
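The structure of this Floquet spectrum can be illustrated with a toy calculation: the sketch below assembles the one-period (monodromy) evolution operator of a finite coupled-mode SSH chain whose intra- and inter-cell couplings are modulated in antiphase, and reads the quasi-propagation constants off the phases of its eigenvalues. The coupled-mode model and the coupling values are stand-ins for the full continuous computation of Eq. (2) performed with the method of Ref. [27], so only the qualitative picture -- bands folded modulo \(\omega\) and \(\pi\) modes detaching near \(b=\pm\omega/2\) when the gap opens -- should be expected to carry over.

```python
import numpy as np
from scipy.linalg import expm

n_cells, Z = 7, 2.0                  # 7 unit cells; longitudinal period in arbitrary units
N, omega = 2 * n_cells, 2 * np.pi / Z
c0, dc, steps = 1.0, 0.8, 600        # mean coupling and modulation depth (illustrative values)

def H(z):
    """Coupled-mode Hamiltonian of the modulated SSH chain: i d(psi)/dz = -H(z) psi."""
    c_a = c0 + dc * np.cos(omega * z)        # the two couplings alternate in antiphase
    c_b = c0 - dc * np.cos(omega * z)
    h = np.zeros((N, N))
    for i in range(N - 1):
        c = c_a if i % 2 == 0 else c_b
        h[i, i + 1] = h[i + 1, i] = c
    return h

# One-period evolution operator assembled step by step
U = np.eye(N, dtype=complex)
dz = Z / steps
for k in range(steps):
    U = expm(1j * H((k + 0.5) * dz) * dz) @ U

b = np.sort(np.angle(np.linalg.eigvals(U)) / Z)   # quasi-propagation constants, defined modulo omega
print("quasi-propagation constants b (first Brillouin zone):")
print(np.round(b, 3))
print("pi modes, if present, sit in the gap opening around b = +/- omega/2 =", round(omega / 2, 3))
```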
Very similar results and Floquet spectrum were obtained also for arrays with odd number of waveguides (where one of the unit cells is incomplete), because even in this case due to waveguide oscillations both edges periodically pass through stages when truncation becomes topological or nontopological and therefore support \(\pi\) modes. The appearance of topological \(\pi\) modes in the spectrum of this Floquet system is associated with nonzero value of the \(\pi\) gap invariant \(w_{\pi}\) (see Section S4 in the Supplementary materials for details of its calculation and literature [(30; 36)]). One can observe the formation of \(\pi\) modes in the spectrum of truncated array when \(w_{\pi}=1\) Figure 1: (Color online) (a) Schematic image of the 1D oscillating waveguide array (three longitudinal \(z\)-periods) containing 7 unit cells. (b) Microphotographs of the fs-laser written oscillating waveguide array at different distances in topological phase (\(z=0\)), uniform phase (\(z=Z/4\)), and trivial phase (\(z=Z/2\)). (c) Quasi-propagation constants of the Floquet modes of the oscillating array versus amplitude of the waveguide oscillations \(r\) within three longitudinal Brillouin zones. (d) and (e) show intensity distributions of the linear \(\pi\) modes at different distances \(z\) for two selected oscillation amplitudes \(r\) corresponding to the red dots in (c). In all cases \(Z=33\,\)mm. (for sufficiently large oscillation amplitudes \(r\)), while these modes are absent when \(w_{\pi}=0\) (e.g., at \(r\to 0\)). Inspecting spectrum in Fig. 1c one can see that while longitudinal modulation with frequency \(\omega\) creates Floquet replicas of bands and localized \(\pi\) modes, by itself it does not induce parametric resonances between localized and bulk modes, which nevertheless can occur, if additional weak modulation of optical potential at frequencies \(2\omega\), \(3\omega\), \(\cdots\) is added that keeps \(Z\)-periodicity of array. The \(\pi\) solitons are the topological nonlinear Floquet states bifurcating from the linear \(\pi\) modes. To find their profiles we iteratively solve Eq. (2) with the last nonlinear term included (see Section S6 in the Supplementary materials ), by varying soliton power \(U=\iint|\psi|^{2}dxdy\) and calculating for each \(U\) corresponding \(Z\)-periodic soliton profile \(u(x,y,z)\), quasi-propagation constant \(b\), and averaged amplitude \(A=Z^{-1}\int_{z}^{z+2}\max\lvert\psi\rvert dz\). The \(\pi\) solitons bifurcate from linear \(\pi\) modes, as evident from the representative \(b(U)\) dependence in Fig. 2a, where quasi-propagation constant of linear mode is shown by the dashed line. Their amplitude \(A\) increases with the power (Fig. 2b). Importantly, nonlinearity changes the location of \(b\) inside the topological gap, gradually shifting it towards the bulk band (gray region). This is accompanied by changes in soliton localization in the \((x,y)\) plane (it may first increase and then decrease depending on the value of \(r\)), especially when \(b\) shifts into the band, where coupling with the bulk modes occurs. Periodic transformation of soliton intensity distribution with \(z\) is illustrated in Fig. 2c. It should be stressed that for our parameters even for the amplitude of oscillations \(r\sim 9\)\(\mu\)m the \(\pi\) solitons obtained here are robust objects that practically do not radiate and survive over hundreds of \(Z\) periods that is beneficial in comparison with the previously observed unidirectional edge solitons [67]. 
We tested stability of such states by adding broadband small noise (typically, 5% in amplitude) into input field distributions and propagating such perturbed \(\pi\) solitons over distances \(\sim 500Z\) that allows to detect the presence of even very weak instabilities. Such stability analysis has shown that for our parameters and for amplitudes of oscillations \(r>5\)\(\mu\)m 1D solitons belonging to forbidden gap are stable, while they become unstable, when they shift into band. Notice that stabilization of such states that have powers well below power of Townes soliton in uniform cubic medium is consistent with arguments of Ref. [78]. To demonstrate 1D \(\pi\) solitons experimentally we inscribed in a fused silica sample the series of SSH-like arrays with the different amplitudes \(r\) of waveguide oscillations ranging from 1 to 11 \(\mu\)m, with a step in \(r\) of 2 \(\mu\)m using fs-laser writing technique (see Fig. 1b with exemplary microphotographs of the array and Section S2 in the Supplementary materials for the details of inscription). While full sample length contains three \(Z\)-periods of the structure, to demonstrate dynamics in the internal points of the last period, we additionally inscribed arrays with fractional lengths \(2.25Z\), \(2.50Z\), \(2.75Z\) with the same parameters (see Section S2 in the Supplementary materials for the details). In experiments we excited the waveguide at the left edge using the fs-laser pulses of variable energy \(E\) (for correspondence between pulse energy \(E\) and input peak power in the waveguide see Section S3 in the Supplementary materials ). Output intensity cross-sections at \(y=0\) (red lines) and 2D distributions (blue insets) are compared in Fig. 3 with the results of theoretical simulations of the single-site excitation with different input powers \(U\) in the frames of Eq. (1) (black-red insets). In all cases theoretical results well agree with the experimental observations. In Fig. 3a and top right image in Fig. 3c we show how output beam localization progressively increases in linear regime (low pulse energies \(E=20\) nJ) with increase of the amplitude \(r\) of the waveguide oscillations. Efficient excitation of well-localized linear \(\pi\) modes is evident for \(r\geq 7\)\(\mu\)m, while for \(r=5\)\(\mu\)m the excitation efficiency is lower and some fraction of power penetrates into the bulk of the array. First rows in Fig. 3b and c illustrate that \(\pi\) mode undergoes strong oscillations on one \(Z\) period, main intensity maximum switches into second waveguide at \(z=2.50Z\) (consistently with the dynamics of exact state in Fig. 1e), but it returns to the edge one at \(z=3.00Z\) illustrating periodic evolution. By increasing pulse energy we observe the formation of the 1D \(\pi\) solitons. As mentioned above, they are in-gap topological states bifurcating from \(\pi\) modes under the ac Figure 2: (Color online) Quasi-propagation constant (a) and \(z\)-averaged peak amplitude (b) of the \(\pi\) soliton versus its power \(U\). Gray region in (a) corresponds to the bulk band, while white region corresponds to the forbidden gap in Floquet spectrum. Horizontal dashed line shows quasi-propagation constant \(b\) of linear \(\pi\) mode. (c) Intensity distributions in the \(\pi\) soliton with power \(U=0.5\) at different distances within one oscillation period \(Z\). The state is shown near left edge only. Here \(r=6\)\(\mu\)m, \(Z=33\) mm. 
The dependencies \(b(U)\) and \(A(U)\) for other values of \(3\)\(\mu\)m \(\leq r\leq 9\)\(\mu\)m are qualitatively similar. Figure 3: (Color online) (a) Formation of the \(\pi\) modes with increase of the amplitude of waveguide oscillations \(r\) in linear regime. The impact of nonlinearity on such states and formation of \(\pi\) solitons is illustrated in (b) for \(r=7\,\mu\)m and in (c) for \(r=9\,\mu\)m at different propagation distances. In all panels red lines show experimental 1D intensity cross-sections at \(y=0\), blue inset shows 2D experimental intensity distributions for a given pulse energy \(E\), and black-red insets show corresponding theoretical 2D intensity distributions for different input powers \(U\). tion of nonlinearity. Nonlinearity leads to soliton reshaping (in particular, for \(r=7-9\,\mu\)m it slightly broadens with increase of \(U\)), but when its quasi-propagation constant shifts into the band, strong radiation into the bulk occurs. This is most clearly visible for \(r=7\,\mu\)m (Fig. 3b), where solitons were observed well-localized near the edge for the pulse energies \(E<350\,\)nJ, but radiating around \(E\sim 430\,\)nJ (notice that at this energy the level of radiation becomes visible only after three \(Z\) periods). Further increase of the pulse energy leads to stronger radiation. At \(r=9\,\mu\)m the range of the pulse energies, where formation of robust \(\pi\) solitons is observed substantially increases (Fig. 3c). Well localized \(\pi\) solitons performing \(Z\)-periodic oscillations are observed for the pulse energies \(E<900\,\)nJ (rows 1-3) and only at \(E\sim 1000\,\)nJ small radiation due to the coupling with the bulk modes appears (row 4). Simulations over much larger distances (\(z>100Z\)) confirm robustness of such dynamically excited nonlinear Floquet states with in-gap quasi-propagation constants. It should be stressed that excitation of the waveguide in the bulk of the above arrays does not yield localization for considered pulse energies. ### 2D \(\pi\) solitons For observation of 2D \(\pi\) solitons we utilize 2D generalization of the SSH array with oscillating waveguides. Unit cell of such an array (quadrimer) contains four waveguides, whose centers oscillate with period \(Z\) along the diagonals of the unit cell. We consider sufficiently large structure containing \(5\times 5\) unit cells. Refractive index distribution in this Floquet structure is described by the function \[\mathcal{R}=p\sum_{m,n}[e^{-(x_{1m}^{2}/a_{x}^{2}+y_{1n}^{2}/a_{y} ^{2})}+e^{-(x_{2m}^{2}/a_{x}^{2}+y_{1n}^{2}/a_{y}^{2})}+\] \[e^{-(x_{1m}^{2}/a_{x}^{2}+y_{2n}^{2}/a_{y}^{2})}+e^{-(x_{2m}^{2} /a_{x}^{2}+y_{2n}^{2}/a_{y}^{2})}],\] where \(x_{1m,2m}=x_{m}\pm d/2\pm r\cos(\omega z)\) and \(y_{1n,2n}=y_{n}\pm d/2\pm r\cos(\omega z)\) are the coordinates of centers of four waveguides in the unit cell with \(x_{m}=x-2md\) and \(y_{n}=y-2md\), \(m,n\) are the integers. In 2D case the oscillation period was taken as \(Z=49.5\,\)mm, so that sample contained two full longitudinal array periods. Spacing between waveguides at \(r=0\)\(\mu\)m was set to \(d=32\,\mu\)m, and to achieve more uniform coupling between elliptic waveguides, their longer axes were oriented along the diagonal of the array (see schematics in Fig. 4a and microphotographs of inscribed structure at different distances in Fig. 4b). As one can see, such structure realizes the photonic Floquet higher-order insulator periodically switching between "instantaneous" topological and non-topological phases. 
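Dynamical excitation of such a 2D structure can be emulated with a standard split-step Fourier integration of Eq. (1); the sketch below launches a Gaussian beam into one corner waveguide of a small oscillating quadrimer lattice and propagates it over one longitudinal period. All numbers (reduced lattice constants, waveguide widths, depth, grid, step and input power) are illustrative stand-ins for the dimensionless parameters of our actual simulations, and only \(3\times 3\) cells are kept to limit the cost.

```python
import numpy as np

# Illustrative dimensionless parameters for a small oscillating quadrimer lattice
d, r, Z, p = 3.2, 0.7, 49.5, 5.0
a_x, a_y, n_cells = 0.4, 0.8, 3
omega = 2 * np.pi / Z

def R2(X, Y, z):
    """2D oscillating SSH-type potential: four waveguides per cell moving along the cell diagonals."""
    s = d / 2 + r * np.cos(omega * z)
    out = np.zeros_like(X)
    for m in range(n_cells):
        for n in range(n_cells):
            for sx in (+1.0, -1.0):
                for sy in (+1.0, -1.0):
                    xc, yc = 2 * d * m + sx * s, 2 * d * n + sy * s
                    out += np.exp(-((X - xc)**2 / a_x**2 + (Y - yc)**2 / a_y**2))
    return p * out

# Transverse grid and Fourier-space kinetic factor
Npts = 384
x = np.linspace(-2 * d, 2 * d * n_cells + 2 * d, Npts, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(Npts, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
dz = 0.1
half_kinetic = np.exp(-1j * 0.25 * (KX**2 + KY**2) * dz)      # exp(-i k^2/2 * dz/2)

# Gaussian input on the lower-left corner waveguide, normalized to power U = 0.5
x0 = -(d / 2 + r)
psi = np.exp(-((X - x0)**2 + (Y - x0)**2)).astype(complex)
psi *= np.sqrt(0.5 / (np.sum(np.abs(psi)**2) * dx**2))

z = 0.0
while z < Z - 1e-9:                                           # one longitudinal period
    psi = np.fft.ifft2(half_kinetic * np.fft.fft2(psi))
    psi *= np.exp(1j * dz * (R2(X, Y, z + dz / 2) + np.abs(psi)**2))   # potential + Kerr step
    psi = np.fft.ifft2(half_kinetic * np.fft.fft2(psi))
    z += dz
print("power after one period:", np.sum(np.abs(psi)**2) * dx**2)
```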
Dependence of quasi-propagation constants \(b\) of the Floquet eigenmodes of the 2D array, obtained from linear version of Eq. (2), on amplitude of waveguide oscillations \(r\) shown in Fig. 4c reveals the formation of 2D \(\pi\) modes (red lines) that reside in the corners of the structure, but in comparison with 1D case they appear in sufficiently narrow range of oscillation amplitudes. This is a consequence of substantially more complex spectrum of static 2D SSH structures [73] featuring four bands in topological phase (in contrast to only two bands in 1D SSH arrays), that in our case experience folding due to longitudinal array modulation, resulting in a very complex Floquet spectrum. For Figure 4: (Color online) (a) Schematic illustration of the 2D oscillating waveguide array (for illustrative purposes we show only \(3\times 3\) unit cells). (b) Microphotographs of laser-written 2D array with \(5\times 5\) unit cells at different distances. (c) Quasi-propagation constants of the Floquet modes of the 2D array with oscillating waveguides versus amplitude of waveguide oscillations \(r\) within three longitudinal Brillouin zones. (d) Intensity distribution in linear 2D \(\pi\) mode at different distances \(z\) for \(r=6\,\mu\)m. In all cases \(Z=49.5\,\)mm. instance, quasi-propagation constants of 2D \(\pi\) modes may overlap with the band, as it also happens with eigenvalues of usual corner modes in static higher-order insulators [73]. To obtain such spectrum, where topological gap can be opened by longitudinal array modulation, we had to select not too small depth of potential \(p=5\) to ensure that the width of the bulk bands is comparable with the width of the longitudinal Brillouin zone and no too strong band folding occurs. Example of the \(\pi\) mode performing periodic oscillations (only one corner is shown) is depicted in Fig. 4d. The properties of 2D \(\pi\) solitons, whose family was obtained from nonlinear Eq. (2) using iterative method, are summarized in Fig. 5. As in the 1D case, quasi-propagation constant of 2D solitons crosses the gap with increase of power \(U\) and enters into the band (Fig. 5a). For the selected amplitude \(r=7\,\mu\)m the soliton exists practically in the entire gap, because \(b\) of linear \(\pi\) mode from which it bifurcates is located near the lower gap edge (we have checked that this linear \(\pi\) mode indeed falls into forbidden gap of bulk system by calculating quasienergy spectrum of periodic, i.e. infinite in the transverse direction, Floquet array). The average amplitude \(A\) increases with \(U\) (Fig. 5b). The intensity distributions at different distances illustrating periodic \(\pi\) soliton evolution in \(z\) are presented in Fig. 5c. Despite the fact that this state is 2D and oscillates strongly, the collapse is suppressed and one observes very robust propagation for all powers, when soliton resides in the gap. This conclusion was also supported by the results of propagation of weakly perturbed 2D \(\pi\) solitons over large distances. For \(r=7\,\mu\)m all such states in the gap were found stable. To observe 2D \(\pi\) solitons we inscribed the series of 2D oscillating arrays with various amplitudes of waveguide oscillations up to \(r=9\)\(\mu\)m. Excitations in the right corner of the array were used, but it should be stressed that exci Figure 5: (Color online) Quasi-propagation constant (a) and \(z\)-averaged peak amplitude (b) versus power \(U\) for the \(\pi\) solitons in 2D array. 
(c) Intensity distributions at different distances for a soliton with \(U=0.2\). In all cases \(r=7\,\mu\)m, \(Z=49.5\) mm. Figure 6: (Color online) Excitation of the \(\pi\) solitons in 2D oscillating waveguide arrays for different amplitudes of waveguide oscillations \(r\) at \(z=2Z\). Top rows (blue background) show experimentally measured intensity distributions for different pulse energies \(E\), while bottom rows (black background) show theoretically calculated output patterns for different powers \(U\). tations of other corners yields nearly identical results due to high symmetry and uniformity of the array. At low pulse energies \(E=10\,\)nJ one observes strong diffraction into the bulk at \(r=3\,\mu\)m (Fig. 6a), while efficient excitation of linear \(\pi\) modes takes place at amplitudes \(r\geq 5\,\mu\)m (Fig. 6b and c). To the best of our knowledge, this constitutes the first observation of 2D \(\pi\) modes in photonics. Increasing pulse energy at low \(r\sim 3\,\mu\)m results first in concentration of light in the bulk of the sample and then its gradual displacement toward the corner (Fig. 6a). By contrast, for \(r\geq 5\,\mu\)m one observes the formation of \(\pi\) solitons, whose range of existence in terms of input power grows with increase of \(r\). Thus, at \(r=5\,\mu\)m the well-localized solitons form at pulse energies \(E<300\) nJ, while at \(E\sim 400\) nJ strong radiation into the bulk occurs (Fig. 6b) due to nonlinearity-induced shift into the allowed band. At \(r=7\,\mu\)m one observes the formation of the \(\pi\) solitons even at \(E\sim 600\) nJ (Fig. 6c) with tendency for slight increase of secondary intensity maxima in soliton profile at highest power levels that is observed also in exact soliton solution of Eq. (2). Excitations in other corners of the array (e.g., top one) yield similar results confirming the \(\pi\) soliton formation, while excitations in the bulk strongly diffract at these pulse energies. ## 3 Conclusion We presented experimental observation of a new type of \(\pi\) solitons in nonlinear Floquet system, where nontrivial topology arises from periodic modulation of the underlying photonic structure in evolution variable (along the light propagation path). Such solitons exist both in 1D and 2D geometries and they show exceptionally robust evolution due to practically absent radiative losses at considered periods and amplitudes of oscillations. The results obtained here may be used in the design of a class of topological Floquet lasers based on \(\pi\) modes, for the control and enhancement of parametric processes, such as generation of new harmonics assisted by topology of the Floquet system, and for design of new types of on-chip all-optically controlled topological devices. ## Conflict of interest The authors declare that they have no conflict of interest. ## Acknowledgments This research is funded by the research project FFUU-2021-0003 of the Institute of Spectroscopy of the Russian Academy of Sciences and partially by the RSF grant 21-12-00096. Y. Z. acknowledges funding by the National Natural Science Foundation of China (Grant Nos.: 12074308). S. Z. acknowledges support by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS" (Grant No.: 22-2-2-26-1). ## Author contributions Yiqi Zhang and Yaroslav V. Kartashov formulated the problem. Sergei A. Zhuravitskii, Nikolay N. Skryabin, Ivan V. Dyakonov, and Alexander A. Kalinkin fabricated the samples. Antonina A. Arkhipova, Victor O. Kompanets, and Sergey V. 
Chekalin performed experiments. Yiqi Zhang performed numerical modeling. Yaroslav V. Kartashov, Sergei P. Kulik, and Victor N. Zadkov supervised the work. All co-authors took part in discussion of results and writing of manuscript.
2310.06207
Experimental neutrino physics in a nuclear landscape
There are profound connections between neutrino physics and nuclear experiments. Exceptionally precise measurements of single and double beta-decay spectra illuminate the scale and nature of neutrino mass and may finally answer the question of whether neutrinos are their own antimatter counterparts. Neutrino-nucleus scattering underpins oscillation experiments and probes nuclear structure, neutrinos offer a rare vantage point into collapsing stars and nuclear fission reactors, and techniques pioneered in neutrino nuclear-physics experiments are advancing quantum-sensing technologies. In this article, we review current and planned efforts at the intersection of neutrino and nuclear experiments.
D. S. Parno, A. W. P. Poon, V. Singh
2023-10-09T23:22:52Z
http://arxiv.org/abs/2310.06207v2
# Experimental neutrino physics in a nuclear landscape ###### Abstract There are profound connections between neutrino physics and nuclear experiments. Exceptionally precise measurements of single and double beta-decay spectra illuminate the scale and nature of neutrino mass and may finally answer the question of whether neutrinos are their own antimatter counterparts. Neutrino-nucleus scattering underpins oscillation experiments and probes nuclear structure, neutrinos offer a rare vantage point into collapsing stars and nuclear fission reactors, and techniques pioneered in neutrino nuclear-physics experiments are advancing quantum-sensing technologies. In this article, we review current and planned efforts at the intersection of neutrino and nuclear experiments. ## 1 Introduction Nuclear physics is pivotal in the story of the neutrino, the lightest known matter particle in the universe. Wolfgang Pauli first proposed the neutrino's existence in 1930 to solve the longstanding mystery of the nuclear beta-decay spectrum [1], and Clyde Cowan and Frederick Reines discovered the particle experimentally at a nuclear reactor in 1956 [2]. At the same time, the neutrino is a tool for understanding nuclear physics, illuminating fusion reactions inside the Sun [3] and beta decays on Earth. In the past century of work, physicists have established [4] that the neutrino is a neutral, left-handed lepton; the anti-neutrino, conversely, is right-handed. The three neutrino flavour states (\(\nu_{e},\nu_{\mu},\nu_{\tau}\)), associated with charged-lepton flavours, are linear superpositions of the three neutrino mass states (\(\nu_{1},\nu_{2},\nu_{3}\)), with resultant flavour oscillation [5, 6, 7] dictated by the mixing angles of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix \(U\), by the splittings \(\Delta m^{2}_{ij}\) between mass states, and by the ratio of the source-detector distance to the neutrino energy. Since the neutrino's only Standard Model interaction is via the weak force, its interaction cross sections are intimidatingly small, but are measurable. Despite this progress, however, many basic questions are unanswered. For example, what is the absolute scale and nature (Dirac or Majorana) of the neutrino mass? What is the ordering of the mass values, and is there CP violation in the neutrino sector? Do the three known neutrino flavours constitute the whole neutrino sector, or are there "sterile" neutrinos that do not feel the weak force? This review is conceived as a snapshot of current experimental efforts that relate nuclear and neutrino physics. We begin with an examination of two nuclear laboratories for exploring neutrino physics: single- and double-beta decay. As explored in Sec. 2, nuclear beta decays (including electron-capture decays) permit a direct, kinematic probe of the absolute neutrino-mass scale. Recent work has significantly narrowed the laboratory limits on this quantity, a crucial input to both particle theory and cosmology. Meanwhile, in Sec. 3, we see that searches for neutrinoless double beta decay - a never-before-seen variant on the rare double beta-decay process, which is possible only if neutrinos are Majorana particles - are taking nuclear-physics experiments to new scales and levels of background control. In Sec. 4, we briefly survey additional intersections of neutrino and nuclear physics, including the nuclear physics of high-energy neutrino interactions, essential for interpreting long-baseline neutrino-oscillation experiments (Sec. 
4(a)); the use of low-energy neutrino scattering to illuminate nuclear properties and supernova nucleosynthesis (Sec. 4(b)); neutrino probes of fission reactors (Sec. 4(c)); searches for sterile neutrinos (Sec. 4(d)); and applications in quantum sensing (Sec. 4(e)). ## 2 Absolute neutrino-mass measurement Neutrino-oscillation experiments have established that neutrinos cannot all be massless, but are insensitive to the individual mass eigenvalues \(m_{i}\). To date, oscillation data are consistent with two options for neutrino-mass ordering: \(m_{3}>m_{2}>m_{1}\) (normal ordering) and \(m_{2}>m_{1}>m_{3}\) (inverted ordering). In either ordering, the mass of the lightest neutrino sets a scale for the others, which can be probed via studies of \(\beta\) and \(\beta\beta\) decays (Sec. 3), and via cosmological observations. Some of these measurements are model-dependent. For example, the measured sum of the neutrino masses from cosmological studies -- \(\sum_{i}m_{i}\) -- varies conspicuously under different model assumptions and data inputs, albeit achieving remarkable constraints [4]. The measurements described in this section are essentially model-independent. Since Fermi established the kinematic relationship between the electron energy spectrum of the neutrino mass and \(\beta\) decay [8], many experiments have tried to determine the neutrino mass via \(\beta\) and electron-capture decays in different nuclei. Here, we focus on recent efforts and refer the reader to Ref. [9] for complete historical context. Such probes of the neutrino-mass scale derive their sensitivity from precise spectral-shape measurements near the kinematic endpoint of the decay spectrum, where the presence of a non-zero neutrino mass appreciably changes the energy available to other particles in the final state. In the quasi-degenerate regime, where the mass scale is large compared to the mass splittings, kinematic experiments measure an effective neutrino mass \(m_{\beta}\): \[m_{\beta}^{2}=\sum_{i}|U_{ei}|^{2}\ m_{i}^{2}. \tag{1}\] Since the fraction of \(\beta\)-decays in the small, sensitive energy interval \(\delta E\) below the endpoint energy \(E_{0}\approx Q\) (the \(Q\)-value of the decay), is proportional to \(\left(\frac{\delta E}{Q}\right)^{3}\), an ideal nucleus for this type of measurement has a small \(Q\)-value, a relatively short half-life for the enhancement of source intensity, and a well-understood decay structure. The isotopes used in current neutrino-mass measurement efforts are \({}^{3}\)H (Sec. (a)) and \({}^{163}\)Ho (Sec. (b)). Each has some complexity in the measured spectrum. In \(\beta\)-decay experiments with molecular \({}^{3}\)H, the final-state electronic, vibrational, and rotational excitations modify the beta spectrum significantly and are obtained from theory. In \({}^{163}\)Ho electron-capture experiments, similarly intricate theoretical calculations are needed to account for X-ray, Auger-Meitner, and Coster-Kronig transitions, as well as nuclear recoil. Recent efforts have identified other possible isotopes with ultralow \(Q\)-values (\(<1\) keV), and thus enhanced statistical sensitivity; see Ref. [10] for a review. These isotopes, however, typically have extremely long lifetimes, complex nuclear structures, and very small branching ratios for the specific decay modes with ultralow \(Q\), rendering them impractical targets for future precise experiments. 
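The effective mass in Eq. (1) can be evaluated directly from the measured mixing angles and mass splittings; the sketch below computes \(m_{\beta}\) (and the mass sum relevant to cosmology) as a function of the lightest mass for both orderings. The oscillation parameters are rounded, representative values of the kind tabulated in Ref. [4], not a new global fit.

```python
import numpy as np

# Representative oscillation parameters (rounded)
s12_sq, s13_sq = 0.307, 0.022          # sin^2(theta_12), sin^2(theta_13)
dm21_sq = 7.5e-5                       # eV^2
dm3l_sq = 2.5e-3                       # eV^2, magnitude of the large splitting (approx.)

def masses(m_lightest, ordering="NO"):
    if ordering == "NO":               # normal ordering: m1 < m2 < m3
        m1 = m_lightest
        m2 = np.sqrt(m1**2 + dm21_sq)
        m3 = np.sqrt(m1**2 + dm3l_sq)
    else:                              # inverted ordering: m3 < m1 < m2
        m3 = m_lightest
        m1 = np.sqrt(m3**2 + dm3l_sq)
        m2 = np.sqrt(m1**2 + dm21_sq)
    return m1, m2, m3

def m_beta(m_lightest, ordering="NO"):
    m1, m2, m3 = masses(m_lightest, ordering)
    Ue1_sq = (1 - s12_sq) * (1 - s13_sq)   # |U_e1|^2
    Ue2_sq = s12_sq * (1 - s13_sq)         # |U_e2|^2
    Ue3_sq = s13_sq                        # |U_e3|^2
    return np.sqrt(Ue1_sq * m1**2 + Ue2_sq * m2**2 + Ue3_sq * m3**2)

for ordering in ("NO", "IO"):
    for m_l in (0.0, 0.01, 0.05, 0.1):
        print(ordering, f"m_lightest = {m_l:5.3f} eV ->",
              f"m_beta = {m_beta(m_l, ordering):.4f} eV,",
              f"sum m_i = {sum(masses(m_l, ordering)):.4f} eV")
```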
### \({}^{3}\)H The best current kinematic limit on the neutrino-mass scale arises from the decay of molecular tritium, \({}^{3}\)H\({}_{2}\): \[{}^{3}\mathrm{H}_{2}\,\longrightarrow\,^{3}\mathrm{He}\,^{3}\mathrm{H}^{+}+ \mathrm{e}^{-}+\bar{\nu}_{\mathrm{e}}\,+\,Q(^{3}\mathrm{H}_{2})\,. \tag{2}\] The differential decay rate, summed over all final molecular states \(f\) in the daughter molecule, each with energy \(V_{f}\) and weighted by the transitional probability \(P_{f}\) to that state, is [11]: \[\frac{\mathrm{d}\Gamma}{\mathrm{d}E} = \frac{G_{\mathrm{F}}^{2}\,|V_{\mathrm{ud}}|^{2}}{2\pi^{3}}\,|M_{ \mathrm{nuc}}|^{2}\,F(Z,E)\cdot p\,(E+m_{e}) \tag{3}\] \[\cdot\sum_{f}\,P_{f}\,\epsilon_{f}\,\sqrt{\epsilon_{f}^{2}-m_{ \beta}^{2}}\,\Theta(\epsilon_{f}-m_{\beta})\,,\] where \(G_{\mathrm{F}}\) is the Fermi coupling constant, and \(|V_{\mathrm{ud}}|=0.97373\pm 0.00031\) is the CKM matrix element [4]. \(M_{\mathrm{nuc}}\) is the nuclear transition matrix element. \(F(Z,E)\) is the Fermi function that accounts for the Coulomb interaction between the outgoing electron with kinetic energy \(E\) and momentum \(p\), and the daughter nucleus with atomic charge \(Z\); \(\epsilon_{f}\) is the neutrino energy (\(=E_{0}-V_{f}-E\)); \(E_{0}\) is the maximum \(\beta\) energy if the neutrino mass is zero, and the Heaviside step function \(\Theta(\epsilon_{f}-m_{\beta})\) ensures energy conservation. The KArlsruhe TRItium Neutrino (KATRIN) experiment [12] is the most sensitive operating direct neutrino-mass experiment, and the current best limit of \(m_{\beta}<0.8\) eV/\(c^{2}\) (90% C.L.) is based on its first two measurement campaigns [13]. In the KATRIN apparatus, cold \({}^{3}\)H\({}_{2}\) gas is injected into the windowless source section. The decay electrons are guided by magnetic fields to the main spectrometer for energy analysis. Differential and cryogenic pumping stages along the beamline reduce the tritium flow by 14 orders of magnitude. The electrons' transverse momentum is adiabatically transformed into longitudinal momentum in a slowly varying magnetic field, which reaches a minimum in the "analysing" plane of the main spectrometer. Only electrons with enough kinetic energy to pass the potential barrier of \(\sim-18.6\) kV are transmitted to the detector. Essentially, the main spectrometer acts as a high-pass filter so that the detector records an integral spectrum (Fig. 1, left). KATRIN continues to take data toward a design sensitivity goal of \(m_{\beta}<0.2\) eV. The statistical sensitivity of a KATRIN-type experiment follows [14]: \[\delta m_{\beta}^{2}\propto\frac{b^{\frac{1}{b}}}{r^{\frac{2}{3}}t^{\frac{1}{2}}} \tag{4}\] where \(b\) is the background rate, \(r\) is the radius of the spectrometer, and \(t\) is the measurement time. KATRIN observes higher-than-expected backgrounds arising from low-energy electrons generated in the large volume of the main spectrometer; even without this issue, the diameter of a spectrometer to improve on KATRIN's sensitivity by an order of magnitude would be unrealistically large. Instead of enlarging the spectrometer, improvements to a KATRIN-like experiment would thus require reducing the background or changing the role of the main spectrometer in the measurement, e.g. by operating in a time-of-flight mode [15]. It is clear from Eq. 3 that the molecular final-state distribution (FSD) populated by \({}^{3}\)H\({}_{2}\) decay affects the measured \(\beta\) spectrum, and hence \(m_{\beta}^{2}\). 
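A simplified version of the differential rate in Eq. (3) is easy to evaluate numerically: keeping a single final state and treating the nuclear matrix element and Fermi function as constant over the last few eV, the shape near the endpoint reduces to the phase-space factor \(p\,(E+m_{e})\,\epsilon\sqrt{\epsilon^{2}-m_{\beta}^{2}}\). The sketch below compares two neutrino-mass hypotheses over the last tens of eV; it is meant only to illustrate how little of the rate lives near the endpoint, not to reproduce the full KATRIN spectral model.

```python
import numpy as np

m_e = 510_998.95        # electron mass, eV
E0 = 18_574.0           # approximate endpoint energy of molecular tritium, eV

def shape(E, m_beta):
    """Endpoint shape: p*(E+m_e)*eps*sqrt(eps^2 - m_beta^2); single final state, no Fermi function."""
    eps = E0 - E
    p = np.sqrt(E**2 + 2.0 * E * m_e)                     # electron momentum (eV, natural units)
    return p * (E + m_e) * eps * np.sqrt(np.clip(eps**2 - m_beta**2, 0.0, None))

E = np.linspace(E0 - 40.0, E0, 4001)
for m_beta in (0.0, 1.0):                                 # neutrino-mass hypotheses, eV
    s = shape(E, m_beta)
    last5 = np.trapz(s[E > E0 - 5.0], E[E > E0 - 5.0])
    total = np.trapz(s, E)
    print(f"m_beta = {m_beta:.1f} eV: fraction of the last-40-eV rate within 5 eV "
          f"of the endpoint = {last5 / total:.3e}")
```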
Near the spectral endpoint, any unaccounted-for Gaussian broadening \(\Delta\sigma\), including that of the FSD, changes the extracted \(m_{\beta}^{2}\) by [17]: \[\Delta m_{\beta}^{2}\approx-2\Delta\sigma^{2}. \tag{5}\] In fact, it has been demonstrated using Eq. 5 that the use of a more sophisticated, modern FSD [18] is enough to render negative \(m_{\beta}^{2}\) results from the 1980s consistent with zero [19]. Significant theoretical effort has been invested in improved FSD calculations, consistent with the limited available experimental tests [19, 20], and the resulting uncertainty is now negligible for KATRIN. However, the spectral broadening induced by the FSD would become a significant limiting factor for experiments with sufficiently sharp energy resolution.

Cyclotron Radiation Emission Spectroscopy (CRES), an alternative to the KATRIN strategy, is a frequency technique for determining \(m_{\beta}\) by precisely measuring the cyclotron radiation from the relativistic electron in atomic \({}^{3}\)H \(\beta\) decay [21]. The power radiated by an 18-keV electron in a 1 T field is approximately 1 fW. The Project 8 experiment aims to realise this concept by measuring the differential energy spectrum of atomic \({}^{3}\)H, thereby eliminating the FSD broadening of molecular \({}^{3}\)H\({}_{2}\) and the need for a mammoth spectrometer. In its first milestone, the Project 8 experiment observed single electrons from \({}^{83m}\)Kr decay [22]. Recently, the experiment has demonstrated CRES as a viable technique for a low-background neutrino-mass measurement with \({}^{3}\)H\({}_{2}\) in a small trap, setting a Bayesian upper limit of \(m_{\beta}<\)155 eV/\(c^{2}\) (90% C.L.). No background was observed after 82 days of running, and an adequate resolution was demonstrated using \({}^{83m}\)Kr 17.8-keV internal-conversion electrons [23, 24]. The collaboration is now following two parallel research-and-development (R&D) tracks: scaling up the CRES technique to larger volumes with resonant cavities, and developing an atomic tritium source in which magnetic trapping of \({}^{3}\)H atoms prevents recombination. The ultimate goal of the Project 8 experimental programme is a sensitivity of \(m_{\beta}<40\) meV/c\({}^{2}\) (90% C.L.) with atomic \({}^{3}\)H.

Figure 1: Measured integral \({}^{3}\)H \(\beta\) spectrum near the endpoint from the KATRIN experiment [13] (left) and the differential electron-capture calorimetric spectrum of \({}^{163}\)Ho from the ECHo experiment [16] (right).

### \({}^{163}\)Ho

Serious non-tritium-based efforts to measure the neutrino mass currently centre on the electron-capture decay of \({}^{163}\)Ho to \({}^{163}\)Dy\({}^{\star}\), as first proposed in Ref. [25]. Earlier efforts to use the beta decay of \({}^{187}\)Re have been abandoned due to difficulties in designing a suitable, scalable detector [26]. In \({}^{163}\)Ho-based measurements, microcalorimeters entirely capture the de-excitation energy of the daughter \({}^{163}\)Dy\({}^{\star}\) and convert it to heat. The synthetic isotope \({}^{163}\)Ho has a low \(Q\!=\!2.833\) keV [27] and a half-life of 4570 years. Since neutrinos, rather than the antineutrinos of \({}^{3}\)H decay, are emitted in electron-capture decays, the two types of measurement are complementary. The \({}^{163}\)Ho spectrum has a complex shape (Fig. 1), requiring a careful treatment of resonance features, shake-off electrons, and solid-state effects from the absorber [28].
The pile-up rate in the microcalorimeters can be very high as they measure the full differential spectrum. Different \({}^{163}\)Ho experiments deploy different SQUID-based detector technologies to read out minute temperature changes. HOLMES [29] uses Transition Edge Sensor (TES) arrays, while ECHo uses arrays of metallic magnetic calorimeters (MMCs) and has set a limit of \(m_{\beta}<150\) eV/c\({}^{2}\) (95% C.L.) [16].

## 3 Neutrinoless double-beta decay

It is not currently understood how the existence of neutrino mass should be incorporated into the Standard Model. Since the neutrino has no electric charge, both Dirac and Majorana mass terms are possible. The search for neutrinoless double-beta decay (\(0\nu\beta\beta\)) is the only practical means to establish which best describes the neutrino nature. In the Standard Model, double-beta decay (\(2\nu\beta\beta\)) is an allowed second-order decay process in which two uncorrelated nucleons decay simultaneously and emit two electrons and two anti-neutrinos: \((Z,A)\rightarrow(Z+2,A)+2e^{-}+2\bar{\nu}\). However, if neutrinos are their own antiparticles (Majorana particles [30]), the emitted anti-neutrino from one of the nucleons can be absorbed in the second interaction so that there are no neutrinos in the final state: \[(Z,A)\rightarrow(Z+2,A)+2e^{-}. \tag{3.1}\] This is the much sought-after \(0\nu\beta\beta\) decay mode, which violates lepton-number conservation - an accidental symmetry of the Standard Model - and could help explain the matter-antimatter asymmetry in our Universe [31]. The \(0\nu\beta\beta\) decay rate can be expressed as \[[T_{\frac{1}{2}}^{0\nu}]^{-1}=\sum_{i}G_{i}^{0\nu}(Z,Q)\cdot\left|M_{i}^{0\nu}\right|^{2}\cdot\zeta_{i}^{2} \tag{3.2}\] where \(G^{0\nu}(Z,Q)\) is the phase-space factor that depends on the proton number (\(Z\)) of the decaying nucleus and the \(Q\)-value of the decay, \(M_{i}^{0\nu}\) is the nuclear matrix element (NME), and \(\zeta_{i}\) depends on the mechanism and mode of the lepton-number-violating process. The phase-space factors have been calculated [32, 33] and the \(Q\)-values have been measured precisely for several isotopes actively pursued by experiments [34, 35, 36, 37]. If we assume that the decay is mediated by the exchange of light Majorana neutrinos, \(\zeta\) reduces to an effective Majorana neutrino mass \(m_{\beta\beta}\), which is a coherent sum of neutrino mass eigenvalues defined as \[|m_{\beta\beta}|=|\sum_{i=1}^{3}U_{ei}^{2}m_{i}|. \tag{3.3}\] Fig. 2 shows the relationship between \(m_{\beta\beta}\) and \(m_{\beta}\) (Sec. 2). In the scenario of light-neutrino exchange, the decay rate is written as [39, 40]: \[[T_{\frac{1}{2}}^{0\nu}]^{-1}=G^{0\nu}(Z,Q)\cdot(g_{A})^{4}\cdot\left|M^{0\nu}\right|^{2}\cdot\frac{m_{\beta\beta}^{2}}{m_{e}^{2}} \tag{3.4}\] where \(g_{A}\) is the axial-vector coupling constant factored out of the nuclear matrix element \(|M^{0\nu}|^{2}\), and \(m_{e}\) is the mass of the electron. The corresponding NMEs are calculated using various macroscopic and microscopic nuclear models dealing with complex nuclear structures (Sec. (e)). However, the predictions from these nuclear models disagree by more than a factor of two [39], which results in a significant uncertainty on the predicted value of \(m_{\beta\beta}\).
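The effective mass in Eq. 3.3 is straightforward to evaluate numerically. The sketch below scans the unknown Majorana phases for both mass orderings, using representative central values of the oscillation parameters; it approximates the bands shown in Fig. 2 but is not a reproduction of Ref. [38].

```python
import numpy as np

# Effective Majorana mass (Eq. 3.3) as a function of the lightest neutrino
# mass, for normal (NO) and inverted (IO) ordering.  Mixing angles and mass
# splittings are representative central values from global oscillation fits;
# the unknown Majorana phases are scanned to obtain the allowed range.

s12sq, s13sq = 0.31, 0.022          # sin^2(theta_12), sin^2(theta_13) (approx.)
dm21sq = 7.4e-5                     # eV^2
dm3lsq = 2.5e-3                     # |Delta m^2| of the large splitting, eV^2 (approx.)

def mbb_range(m_lightest, ordering="NO", n_phase=200):
    if ordering == "NO":
        m1 = m_lightest
        m2 = np.sqrt(m1**2 + dm21sq)
        m3 = np.sqrt(m1**2 + dm3lsq)
    else:  # IO
        m3 = m_lightest
        m2 = np.sqrt(m3**2 + dm3lsq)
        m1 = np.sqrt(m2**2 - dm21sq)
    c13sq = 1.0 - s13sq
    w1, w2, w3 = (1 - s12sq) * c13sq, s12sq * c13sq, s13sq   # |U_ei|^2 weights
    a = np.exp(1j * np.linspace(0, 2 * np.pi, n_phase))      # Majorana phase 1
    b = np.exp(1j * np.linspace(0, 2 * np.pi, n_phase))      # Majorana phase 2
    A, B = np.meshgrid(a, b)
    mbb = np.abs(w1 * m1 + w2 * m2 * A + w3 * m3 * B)
    return mbb.min(), mbb.max()

for ml in (1e-4, 1e-2):
    for order in ("NO", "IO"):
        low, high = mbb_range(ml, order)
        print(f"m_lightest = {ml:.0e} eV, {order}: "
              f"m_bb in [{low*1e3:.1f}, {high*1e3:.1f}] meV")
```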
Figure 2: The effective Majorana-mass observable \(m_{\beta\beta}\) in neutrinoless double-beta decay searches vs. the direct kinematic observable \(m_{\beta}\). The neutrino mixing parameters \(U_{\alpha i}\) are varied within their ranges from oscillation experiments. The blue area is for the normal mass ordering, while the red area is for the inverted mass ordering. The next generation of \(0\nu\beta\beta\) experiments aims to probe the entire inverted mass ordering through \(m_{\beta\beta}\). Adapted from [38].

While the constraint on neutrino mass through \(0\nu\beta\beta\) is model-dependent, establishing the Majorana character is not; the Black-Box Theorem [41, 42, 43] states that the observation of neutrinoless double-beta decay would directly imply lepton-number violation. The fundamental concept behind a \(0\nu\beta\beta\) search involves detecting two emitted electrons and identifying their summed energy peak at the \(Q\)-value (\(Q_{\beta\beta}\)) of the energy spectrum. Typically, the \(Q\)-values of relevant isotopes are precisely known and the search for a \(0\nu\beta\beta\) peak is limited to a narrow energy range determined by the detector's energy resolution (\(\Delta E\)) at \(Q_{\beta\beta}\). The search sensitivity is also limited by background events that mimic the signal signature in the region of interest (ROI). The half-life sensitivity of an experiment can be expressed as [44]: \[T_{1/2}^{0\nu}\propto\epsilon\cdot\sqrt{\frac{M\cdot t}{B\cdot\Delta E}}\quad\text{(background-limited)},\qquad T_{1/2}^{0\nu}\propto\epsilon\cdot M\cdot t\quad\text{(background-free)}, \tag{3.5}\] where \(\epsilon\) is the detector's efficiency, \(M\) is the mass of the isotope deployed, and \(B\) is the background index, typically expressed as the number of background events expected in a certain energy range within the live-time of the experiment (\(t\)) for a given detector mass. It is often reported in units of counts per detector mass, energy, and time, _e.g._ \(\text{counts}/(\text{keV}\cdot\text{kg}\cdot\text{year})\). Given the extreme rarity of this decay, the experimental challenge lies in detecting this process amidst a background of other radioactive decays and cosmic rays [45]. Experiments are carried out in deep underground facilities [46] that provide a natural barrier against cosmic-ray interference. The dominant sources of radioactive background typically include \(\alpha\), \(\beta\), and \(\gamma\) radiation from primordial decay chains, together with neutron-induced reaction products in underground labs. Experiments using isotopes with \(Q_{\beta\beta}\) > 2615 keV benefit from a lower background by avoiding the \({}^{208}\)Tl line. Furthermore, isotopes with high \(Q_{\beta\beta}\) value offer an additional advantage since the phase space factor \(G^{0\nu}(Z,Q)\) is proportional to \(Q_{\beta\beta}^{5}\) and radioactive backgrounds tend to be smaller at higher energy. However, it is not always possible to deploy these isotopes on a large scale due to their low isotopic abundance or a lack of suitable detector technology with low enough background levels. Some of the world's leading limits on \(0\nu\beta\beta\) decay were obtained with experiments using \({}^{76}\)Ge, \({}^{136}\)Xe, and \({}^{130}\)Te - all with \(Q_{\beta\beta}\) < 2615 keV - using very powerful detection and background-rejection techniques. These experiments use materials with low radioactive content in detector construction, minimising internal background sources while employing layers of passive shielding to reduce external backgrounds. Active background rejection, using such techniques as timing, event topology, fiducialisation of the active detector volume, and particle identification, complements passive methods.
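The following back-of-the-envelope sketch shows how the background index \(B\), exposure, and resolution in Eq. 3.5 translate into expected counts in the ROI and hence into the background-limited or background-free regime. The numbers are round illustrative values, not the parameters of any particular experiment.

```python
# Rough bookkeeping for Eq. (3.5): given a background index B in
# counts/(keV.kg.yr), detector mass, energy resolution and live-time,
# estimate the expected number of background counts in the region of
# interest and decide which sensitivity regime applies.

def roi_background_counts(B, mass_kg, fwhm_keV, live_time_yr, roi_fwhm=1.0):
    """Expected background counts in an ROI of width roi_fwhm * FWHM."""
    return B * mass_kg * (roi_fwhm * fwhm_keV) * live_time_yr

cases = {
    "current-generation": dict(B=1e-3, mass_kg=100.0, fwhm_keV=3.0, live_time_yr=5.0),
    "ton-scale goal":     dict(B=1e-5, mass_kg=1000.0, fwhm_keV=3.0, live_time_yr=10.0),
}

for name, c in cases.items():
    n_bkg = roi_background_counts(**c)
    regime = ("background-free (T ~ M*t)" if n_bkg < 1.0
              else "background-limited (T ~ sqrt(M*t/(B*dE)))")
    print(f"{name:22s}: ~{n_bkg:6.2f} background counts in ROI -> {regime}")
```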
Figure 3 shows the recently completed, running, and proposed experiments in terms of achieved or projected background indices, detector resolution, and the amount of target isotope used. A comparison of the lower half-life limits and corresponding \(m_{\beta\beta}\) values is given in Table 1.

Figure 3: Relevant experimental parameters – background index, detector resolution (\(\sigma\)), and isotopic moles – for recently completed (\(\ominus\)), currently running (\(\dagger\)), and proposed (\(\ddagger\)) \(0\nu\beta\beta\)-decay search experiments.

Although \(0\nu\beta\beta\) decay has not been observed, the current generation of experiments has successfully deployed several hundreds of kilogrammes of isotopes to push the \(T_{1/2}^{0\nu}\) lower limit to the order of \(10^{25}\)--\(10^{26}\) years despite being background-limited. The next generation of experiments seeks to increase the isotope mass significantly (\(\approx\) tonne-scale). Since the sensitivity in a background-free experiment scales linearly with measurement time \(t\) instead of \(\sqrt{t}\), these experiments aim to reduce background levels by several orders of magnitude. Deploying a "ton-scale" detector and considerably decreasing the background will allow next-generation experiments to probe half-lives in the range of \(10^{26}\)--\(10^{28}\) years with typically ten years of data taking, setting a limit of \(m_{\beta\beta}\!<\!(18.4\pm 1.3)\) meV [62] and ruling out the inverted hierarchy of neutrino mass spectra. It is essential to search for \(0\nu\beta\beta\) in multiple isotopes not only for confirmation of discovery but also to identify the underlying mechanism of decay [63, 64, 65, 66]. In addition, precise measurements of diverse two-neutrino processes are needed to refine NME calculations and provide a comprehensive grasp of the Standard Model background in \(0\nu\beta\beta\) searches. Consequently, technologies that can reconstruct individual energies and topologies (e.g., SuperNEMO [60]) will have a crucial role even if they may not allow significant scaling up of isotopic mass. Analogously, the light \(\beta\beta\) emitter \({}^{48}\)Ca, which has extremely low isotopic abundance (0.2%) and will be extremely cost-intensive to scale up, is an interesting nucleus to probe since it is an ideal target for benchmarking various NME calculations (Sec. (e)). Numerous review articles summarise the massive experimental and theoretical efforts over the last couple of decades [67, 68, 69, 70, 71]. Here, we take a bird's-eye view of the experimental landscape for \(0\nu\beta\beta\)-decay searches, focusing on efforts toward future ton-scale experiments. We refer the reader to Fig. 3 and Table 1 for the relevant experimental parameters.

### \({}^{76}\)Ge (\(Q_{\beta\beta}\) = 2039.0 keV)

The GERmanium Detector Array (GERDA) experiment [47] was located at the Laboratori Nazionali del Gran Sasso (LNGS), Italy.
\begin{table}
\begin{tabular}{l l l l l}
Experiment & Status & Isotope & \(T_{1/2}^{0\nu}\) [yr] & \(m_{\beta\beta}\) [meV] \\ \hline
GERDA [47] & Completed & \({}^{76}\)Ge & \(\mathbf{1.8\times 10^{26}}\) & **79—180** \\
MAJORANA [48] & Completed & \({}^{76}\)Ge & \(\mathbf{8.5\times 10^{25}}\) & **113—269** \\
LEGEND-200 [49] & Taking Data & \({}^{76}\)Ge & \(1.5\times 10^{27}\) & 34—78 \\
LEGEND-1000 [49] & Proposed & \({}^{76}\)Ge & \(8.5\times 10^{28}\) & 9—21 \\
CDEX-300\(\nu\) [50] & Proposed & \({}^{76}\)Ge & \(3.3\times 10^{27}\) & 18—43 \\
KamLAND-Zen [51] & Taking Data & \({}^{136}\)Xe & \(\mathbf{2.3\times 10^{26}}\) & **36—156** \\
EXO-200 [52] & Completed & \({}^{136}\)Xe & \(\mathbf{3.5\times 10^{25}}\) & **93—286** \\
nEXO [53] & Proposed & \({}^{136}\)Xe & \(1.3\times 10^{28}\) & 6.1—27 \\
NEXT-100 & Construction & \({}^{136}\)Xe & \(7.0\times 10^{25}\) & 66—281 \\
CUORE [54] & Taking Data & \({}^{130}\)Te & \(\mathbf{2.2\times 10^{25}}\) & **90—305** \\
SNO+ [55] & Construction & \({}^{130}\)Te & \(2.1\times 10^{26}\) & 37—89 \\
AMoRE-II [56] & Proposed & \({}^{100}\)Mo & \(5.0\times 10^{26}\) & 17—29 \\
CUPID-Mo [57] & Completed & \({}^{100}\)Mo & \(\mathbf{1.8\times 10^{24}}\) & **280—490** \\
CUPID [58] & Proposed & \({}^{100}\)Mo & \(1.5\times 10^{27}\) & 10—17 \\
CUPID-0 [59] & Completed & \({}^{82}\)Se & \(\mathbf{4.6\times 10^{24}}\) & **263—545** \\
SuperNEMO-D [60] & Construction & \({}^{82}\)Se & \(4.0\times 10^{24}\) & 260—500 \\
CANDLES-III [61] & Taking Data & \({}^{48}\)Ca & \(\mathbf{5.6\times 10^{22}}\) & **2900—16000** \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of lower half-life limits \(T_{1/2}^{0\nu}\) (90% CL) and corresponding \(m_{\beta\beta}\) limits for the recently completed, currently running, and next-generation proposed experiments. Each range of \(m_{\beta\beta}\) upper limits is as reported by that experiment and depends on their choice of multiple matrix elements. The measured sensitivities are reported in bold for contrast with projected sensitivities.

Its high-purity germanium detectors (HPGe) were enriched to 87% of the isotope \({}^{76}\)Ge and immersed and cooled directly in ultra-pure liquid argon (LAr), whose scintillation also actively vetoed radioactive backgrounds. GERDA achieved the lowest background sensitivity (BI\(\times\sigma\)) obtained by any \(0\nu\beta\beta\)-decay search. The Majorana Demonstrator (MJD) [48] at the Sanford Underground Research Facility (SURF), USA, took a more conventional approach with layered ultralow-background electroformed copper (EFCu) and lead shielding for HPGe detectors housed in EFCu vacuum cryostats. MJD reported the best detector resolution among all \(0\nu\beta\beta\)-decay searches with \(\sim\)1.1 keV (\(\sigma\)) at \(Q_{\beta\beta}\). The next-generation Large Enriched Germanium Experiment for Neutrinoless \(\beta\beta\) Decay (LEGEND) [49] aims to achieve a sensitivity of \(T_{\frac{1}{2}}^{0\nu}>10^{28}\) yr by combining the best of GERDA and MJD technologies. The goal is to operate 1 ton of enriched germanium detectors for 10 years at a background index of \(\sim 1\times 10^{-5}\) counts/kg\(\cdot\)keV\(\cdot\)yr. The programme is being pursued in phases, with LEGEND-200 currently deployed and taking data in an upgraded GERDA infrastructure.
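As an aside before continuing with LEGEND-200's projections, the short sketch below shows how \(m_{\beta\beta}\) ranges of the kind quoted in Table 1 follow from half-life limits via Eq. 3.4. The phase-space factor, \(g_{A}\), and the NME spread used here are representative literature values for \({}^{76}\)Ge, included only to demonstrate the arithmetic; each experiment's published range uses its own choice of matrix elements.

```python
import math

# m_bb = m_e / (g_A^2 |M0v|) / sqrt(G0v * T1/2), from Eq. (3.4).
# The inputs below are representative literature values for 76Ge.

m_e_eV = 0.511e6          # electron mass in eV
g_A = 1.27                # unquenched axial-vector coupling
G0v = 2.36e-15            # phase-space factor for 76Ge in 1/yr (approx.)
NME_range = (2.7, 6.1)    # representative spread of 76Ge NME calculations

def mbb_limit(T_half_yr, nme):
    return m_e_eV / (g_A**2 * nme * math.sqrt(G0v * T_half_yr))

T_limit = 1.8e26          # GERDA-like half-life limit in yr (cf. Table 1)
low = mbb_limit(T_limit, max(NME_range))
high = mbb_limit(T_limit, min(NME_range))
print(f"T1/2 > {T_limit:.1e} yr  ->  m_bb < {low*1e3:.0f}-{high*1e3:.0f} meV")
```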
LEGEND-200's projected background index is about three times lower than GERDA, mainly due to fewer cables and electronic components per unit mass of HPGe detectors, an improved light readout for the liquid-argon veto, and improvements in the radiopurity of construction materials. LEGEND-1000 envisages reducing the background index by another factor of 20 by using underground argon (with a reduced level of radioactive \({}^{42}\)Ar) for shielding, and further reducing the radioactivity levels in components in the vicinity of the detectors. A similar large-scale Ge effort is being pursued for the CDEX-300\(\nu\) experiment [50]. Like GERDA and LEGEND, the detectors will be immersed in liquid argon that serves as both a cooling medium and veto detector. CDEX-300\(\nu\) is expected to demonstrate a half-life sensitivity of \(T_{\frac{1}{2}}^{0\nu}>3.3\times 10^{27}\) yr with an effective runtime of 10 years. Small prototypes with a detector mass of \(\sim\)1 kg have been deployed. ### \({}^{136}\)Xe (\(Q_{\beta\beta}\) = 2457.8 keV) The KamLAND-Zen [51] (Kamioka Liquid Scintillator Anti-Neutrino Detector Zero-Neutrino) series of experiments is located at the Kamioka Observatory, Japan. Its KamLAND-Zen 800 phase uses 745 kg of Xe gas, enriched to 90-91% and dissolved at 3% by weight into liquid scintillator in a nylon inner balloon, which in turn is surrounded by 1 kilo-ton of liquid scintillator acting as an active shield. \({}^{214}\)Bi, cosmogenic spallation products such as \({}^{10}\)C, and \(2\nu\beta\beta\) itself were found to be the dominant backgrounds for its predecessor KamLAND-Zen 400, which had about half of the enriched Xe mass. While KamLAND-Zen now has the best background index compared to all active and past experiments, it also has the worst detector resolution of experiments mentioned here (Fig. 3). Nevertheless, KamLAND-Zen provides a world-leading \({}^{136}\)Xe limit of \(T_{\frac{1}{2}}^{0\nu}\)(Table 1). KamLAND-Zen plans to upgrade to KamLAND2-Zen, where 1 ton of enriched Xe will be deployed with a much brighter liquid scintillator and photomultiplier tubes of higher quantum efficiency to improve the energy resolution by a factor of two. KamLAND2-Zen is projected to have a half-life sensitivity of \(T_{\frac{1}{2}}^{0\nu}>1.1\times 10^{27}\) yr in 5 years. The Enriched Xenon Observatory 200 (EXO-200) [52] was a liquid xenon (LXe), cylindrical time-projection chamber (TPC) located at the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico, USA. The liquid-phase TPC provided good energy resolution and low background due to its ability to reconstruct event topology. The next-generation Enriched Xenon Observatory (nEXO) [53], a successor to EXO-200, will also use an LXe TPC with approximately 5 tons of xenon enriched to 90% in \({}^{136}\)Xe. nEXO is projected to reach a half-life sensitivity of \(T_{\frac{1}{2}}^{0\nu}>1.35\times 10^{28}\) yr with 10 years of data collection. nEXO aims to have an energy resolution of \(<\)1% (\(\sigma\)) in ROI and plans to reduce the EXO-200 background by a factor of \(\sim\)1000. The background projections for nEXO are based on its established radioassay data for most component materials and comprehensive particle tracking and event reconstruction simulations. The nEXO collaboration is currently exploring the feasibility of identifying and labelling the daughter atomic element Ba from the double-beta decay of \({}^{136}\)Xe. 
If this endeavour proves successful, it has the potential to significantly reduce nEXO's background to almost zero in its second phase. The Neutrino Experiment with a Xenon TPC (NEXT) [72] located at Canfranc Underground Laboratory (LSC), Spain, will use high-pressure xenon-gas time-projection chambers. The experiment aims to capitalise on the naturally low fluctuations in the production of ionisation pairs in xenon gas - combined with electroluminescence to amplify the ionisation signal - resulting in an energy resolution of <0.4% (\(\sigma\)) at \(Q_{\beta\beta}\). Moreover, the tracks left in gaseous xenon have distinct topological features for \(0\nu\beta\beta\) events that can be used for background rejection. NEXT-White, a prototype for NEXT-100 and NEXT-1t, has recently successfully demonstrated the TPC technology with a small fiducial volume. NEXT-100 is currently under construction at LSC. NEXT-1t has an estimated half-life sensitivity of >\(1\times 10^{27}\) yr in less than 5 yr with \(\sim\)1 ton of \({}^{136}\)Xe. NEXT also has its own programme exploring ways to tag the daughter Ba\({}^{++}\) ions from \({}^{136}\)Xe decay. ### \({}^{130}\)Te (\(Q_{\beta\beta}\) = 2527.5 keV) The Cryogenic Underground Observatory for Rare Events (CUORE) [54] is the first ton-scale experiment searching for \(0\nu\beta\beta\) decay using low-temperature calorimeters. The detector, located at the LNGS, Italy, consists of an array of 988 \({}^{nat}\)TeO\({}_{2}\) crystals; each crystal is equipped with an neutron-transmutation doped germanium (NTD-Ge) thermistor and operated at close to 10 mK. In 2021, CUORE released results corresponding to a ton-year of \({}^{nat}\)TeO\({}_{2}\), the largest amount of data ever acquired with a solid-state detector. Low-temperature calorimeters have intrinsically low detector noise, and their detector resolution is comparable to semiconductor detectors. CUORE's background is dominated by the energy-degraded alpha events emanating from detector holders, giving it a high background index. The SNO+ [55] experiment, located at the SNOLAB facility in Canada, will use \({}^{130}\)Te-loaded liquid scintillator. SNO+ developed a novel metal-loading technique using an organic scintillator that keeps the loading stable and enhances the light yield. The initial loading in SNO+ will be 0.5% \({}^{nat}\)Te by mass, providing a \(T_{\frac{1}{2}}^{0\nu}>2.1\times 10^{26}\) yr after 3 years of data taking. The detector-related backgrounds have been measured in two data-taking phases using only water and only liquid scintillator. Deployment of Te-loaded scintillator is planned for 2024. Moreover, recent R&D efforts have shown that the \({}^{130}\)Te loading can be increased to 3% by mass with an acceptable scintillator light yield, which would significantly increase SNO+ sensitivity in the future. ### \({}^{100}\)Mo (\(Q_{\beta\beta}\) = 3034.4 keV) The AMoRE [56] project aims to search for the neutrinoless double beta decay of \({}^{100}\)Mo using molybdate-based crystals as low-temperature calorimeters. It is located in the Yangyang Underground Laboratory (Y2L) in South Korea. The crystals use metallic magnetic calorimeters (MMCs) to read out phonon signals at milli-Kelvin temperatures. AMoRE-Pilot successfully demonstrated the technology. In 2021, the project moved on to the next phase AMoRE-I, which is currently running with a total of approximately 3 kg of \({}^{100}\)Mo mass. AMoRE-I is housed in the same cryostat used for AMoRE-Pilot. 
Preliminary results indicate a background rate of \(4\times 10^{-2}\) counts/kg\(\cdot\)keV\(\cdot\)yr in the ROI, and a lower limit of \(T_{\frac{1}{2}}^{0\nu}>1.2\times 10^{24}\) yr at 90% C.L. The next phase, AMoRE-II, is currently under preparation and will operate at Yemilab in South Korea. AMoRE-II will comprise approximately 400 molybdate crystals (\(\sim\)100 kg of \({}^{100}\)Mo), using both calcium molybdate (CMO) and lithium molybdate (LMO) crystals. The target sensitivity is \(T_{\frac{1}{2}}^{0\nu}>5\times 10^{26}\) yr. CUPID-Mo [57] was a demonstrator experiment that employed a dual readout of phonon and scintillating light signals to remove the \(\alpha\) background, which is sensitivity-limiting for large-array, low-temperature calorimeter searches like CUORE. This effort followed the success of CUPID-0 [59], the first medium-scale experiment to discriminate \(\alpha\) from \(\gamma/\beta\) backgrounds with scintillating crystals (Zn\({}^{82}\)Se for CUPID-0). CUPID-Mo took data with 20 LMO crystals flanked by 20 auxiliary low-temperature germanium calorimeters that served as light detectors. Similar to CUORE, all the calorimeters were read out by NTD-Ge thermistors. CUPID-Mo achieved an \(\alpha\) rejection efficiency of \(>\)99.9% and an energy resolution similar to the CUORE detectors. The high \(Q_{\beta\beta}\), above most environmental \(\gamma\) lines, and its \(\alpha\) rejection ability enabled CUPID-Mo to establish the feasibility of a larger LMO-based experiment. CUORE Upgrade with Particle IDentification (CUPID) [58] is a next-generation, tonne-scale bolometric experiment that will combine the best of the cryogenic infrastructure developed for CUORE and the detector technology developed by CUPID-Mo. A total of 1596 LMO crystals will be installed inside the CUORE cryostat, for a total of 240 kg of \({}^{100}\)Mo. Each crystal will be flanked by two light detectors that will enable \(\alpha\) rejection. In the baseline design, the estimated background is \(<\)\(1\times 10^{-4}\) counts/kg\(\cdot\)keV\(\cdot\)yr in the ROI, two orders of magnitude lower than CUORE with an energy resolution similar to CUPID-Mo. The projected half-life sensitivity is \(T_{\frac{1}{2}}^{10\nu}>1.4\times 10^{27}\) yr at 90% C.L. for 10 years of livetime. In the future, CUPID aims to push the background index by another factor of 5 with additional purification of the crystals and nearby components, and by reducing the \(2\nu\beta\beta\) pile-up background. Eventually, a new cryostat with much more radiopure materials could allow a push to the normal-ordering region. A large-scale infrastructure hosting CUPID-1T would also be uniquely positioned as a multi-isotope observatory, capable of simultaneously deploying multiple cryogenic calorimeters like Zn\({}^{82}\)Se, Li\({}_{2}\)\({}^{100}\)MoO\({}_{4}\), \({}^{116}\)CdWO\({}_{4}\), and \({}^{130}\)TeO\({}_{2}\). ### Nuclear-matrix elements (NMEs) Knowledge of the NME for the light-neutrino exchange mechanism is one of the most crucial inputs needed for extracting \(m_{\beta\beta}\)[73] from a measured decay rate. The NMEs are typically calculated using macroscopic many-body nuclear models like the proton-neutron quasiparticle random-phase approximation method, energy-density functional theory, and the interacting boson model or using microscopic models that employ realistic nuclear forces like those from the Nuclear Shell Model or _ab initio_ methods. Refs. [39, 74] survey each method with its strengths and weaknesses. 
The microscopic models using _ab initio_ methods are computationally complex and have been applied to only light and medium \(\beta\beta\) emitters [75, 76]. While macroscopic models have slightly less computational complexity and can cover a wide range of \(\beta\beta\) nuclei, they rely on fitting model parameters to a set of experimental observables. The most frequently used nuclear-structure models are the interacting-boson model, the quasiparticle random-phase approximation, energy-density-functional methods, the generator-coordinate method, and the nuclear shell model [39, 67]. One of the major concerns for nuclear models was the disagreement between the decay rates predicted for \(\beta\)-decay and \(2\nu\beta\beta\)-decay, which could be resolved by adjusting \(g_{A}\) to a lower value from its nominal value. Since the decay rate for \(0\nu\beta\beta\) has a quartic dependence on \(g_{A}\), a reduced value would significantly affect the half-life sensitivity. However, Ref. [77] seems to have resolved the discrepancy by including previously neglected nuclear correlations using _ab initio_ methods. For many-body theories, calculations continuously improve as our comprehension of various nuclear interactions becomes more refined. Some of these interactions can enhance or suppress the values of the NMEs. For example, Ref. [78] shows that including two-body currents is a key ingredient for calculations and can suppress the \(0\nu\beta\beta\) NMEs by \(\sim\)30%. On the other hand, recently introduced short-range \(0\nu\beta\beta\) NMEs [79] may enhance the rate [80, 81] by as much as \(\sim\)30-50%. Given the complexity of the \(0\nu\beta\beta\) decay process, one needs independent nuclear wavefunction tests to understand and characterise NMEs. There is no one suitable experimental probe that can cover the wide range of momentum components and multipolarities within the nuclear states involved - the Majorana-neutrino exchange between the two nucleons is localised within \(\sim\)2 fm, resulting in a momentum spread of 105 MeV/\(c\)[74]. Nevertheless, experimental data from various sources, including studies on ordinary muon capture [82, 83], nucleon transfer reactions [84, 85], double gamma decay [86], and reactions involving single-charge exchange (SCE) and double-charge exchange (DCE) [87], have been or can be utilised to constrain specific aspects of the calculations related to NMEs in \(0\nu\beta\beta\) decay. A good correlation has also been found between the \(2\nu\beta\beta\) and \(0\nu\beta\beta\) decay NMEs [88, 89]. However, in \(2\nu\beta\beta\)-decay only the low-momentum (few MeV/\(c\)) components of the nuclear wave functions are probed, and hence, they may not be enough to make deductions for the \(0\nu\beta\beta\)-decay NMEs. Recently, DCE reactions have received significant attention as probes for \(0\nu\beta\beta\)-decay NMEs. They share the same initial and final states as \(\beta\beta\)-decay, can probe a broad range of momentum and multipolarities in intermediate odd-odd isobar nuclei, and are sensitive to nucleon-nucleon interactions, thus resembling some aspects of the \(0\nu\beta\beta\)-decay mechanism [90]. The NUMEN (NUclear Matrix Elements for Neutrinoless double beta decay) project aims to systematically investigate various Heavy-Ion-DCE reactions to extract essential information needed for NME calculation [91]. 
The experimental challenges for NUMEN are immense since the relevant cross sections are tiny (few tens of nano-barns), which requires high ion-beam intensities, with excellent particle identification to select the relevant nuclear channel, and high energy and angular resolution to resolve the transitions to different states from the energy spectra. Separate exploratory studies have been performed on different reactions in search of the most promising probe for DCE [92, 93, 94]. Several collaborations also aim to measure ordinary muon capture on double-beta decay-isotopes [95, 83, 96]. ## 4 Additional intersections of neutrino and nuclear physics ### Nuclear physics for high-energy neutrino scattering Accelerator neutrino beams with energies of order 0.1-10 GeV are used to explore fundamental mysteries of neutrino physics ranging from the ordering of the neutrino-mass values, the unitarity of the PMNS mixing matrix, and the presence and scale of any CP violation in the neutrino sector. The upcoming, large-scale, long-baseline neutrino experiments Hyper-Kamiokande [97] (producing neutrinos at J-PARC and observing them at the Kamioka Observatory, Japan) and the Deep Underground Neutrino Experiment (DUNE, producing neutrinos at Fermilab and observing them at SURF, USA) [98] rely on such beams. Nuclear physics enters these measurements both via the neutrino source, typically a decay-in-flight pion beam produced by protons striking a thin target, and via the observation of a neutrino interaction in a detector medium. Conventional neutrino beams are not monoenergetic, although the detector location can be chosen to sample a particular region of the energy distribution. The neutrino energy \(E_{\nu}\) must be reconstructed from the event data and is thus affected not only by the energy-dependent cross section but also by uncertainties in the modelling of the target nucleus, including resonant processes; hadronic final-state interactions within the nuclear medium; secondary interactions outside the nucleus; and the final-state topology [99]. Nuclear processes are now recognised as a major limiting factor on the sensitivity of large neutrino-oscillation experiments, e.g. [100, 101]. In principle, they are included in the Monte-Carlo event generators that simulate neutrino-nucleus interactions [102]. However, the accuracy of these simulations is hindered by a lack of experimental data and precise theoretical calculations, driving substantial investment in both dedicated experiments and sophisticated near-the-source detectors that simultaneously normalise the flux for long-baseline experiments and pursue independent physics measurements. Indeed, standard event generators are discrepant with measured neutrino interactions, e.g. [103, 104]. These discrepancies may point to insufficient constraints on the axial coupling to the nucleon as well as to nuclear effects [105]. Future neutrino-scattering measurements - whether via the near detectors of short- or long-baseline oscillation experiments, or via dedicated experiments such as ANNIE [106], NINJA [107], or nuSTORM [108] - will help illuminate these issues, although in these measurements it is challenging to disentangle the specific physics mechanisms underlying any discrepancies. Neutrino-nucleon scattering measurements on hydrogen or deuterium targets may help disambiguate results [109]. Pion-nucleus scattering measurements probe hadronic final-state interactions within the target nucleus. 
Recent and future experimental efforts in this line focus on the same targets used by current and next-generation neutrino detectors: water [110]; carbon [111, 112]; and argon [113, 114]. World \(\pi^{\pm}\) data is already being used to tune the NEUT intranuclear cascade model [115]. Electron-nucleus scattering measurements exploit both high statistics and precise control of incident electron energies and final-state kinematics to probe the vector part of lepton-nucleus interactions, as reviewed in Ref. [116]. A recent test of neutrino energy-reconstruction techniques against electron scattering data on \({}^{4}\)He, \({}^{12}\)C and \({}^{56}\)Fe revealed significant discrepancies [117]. Current [118, 119] and planned [120] measurements explore nuclear spectral functions and lepton-nucleus cross sections. Extensions of neutrino event-generators to predict electron-scattering observables are underway, _e.g._ Ref. [121]. In addition to constraining the nuclear physics of neutrino interactions, neutrino-scattering experiments also probe core questions in nuclear physics. For example, MINERvA recently made the first direct measurement of the free-proton axial-vector form factor, based on an analysis of \(\bar{\nu}_{\mu}+p\to\mu^{+}+n\) events [122], and the planned DUNE near detector will measure the electroweak mixing angle \(\sin^{2}\theta_{W}\) and probe isospin physics in hydrocarbon and argon targets [123]. ### Nuclear physics from low-energy neutrino scattering At low energies, neutrino scattering becomes a probe of nuclear structure. Coherent elastic neutrino-nucleus scattering (CEvNS), a neutral-current interaction with a relatively large cross section, probes the neutron distribution within a nucleus [124]. The complete COHERENT data set on CsI [125] has been used to determine the averaged neutron radius \(R_{n}\) for Cs and I to within about 6%; the precision can be improved by combination with atomic parity-violation data [126]. Appropriate nuclear targets for CEvNS allow low detection thresholds; by contrast, nuclear targets for \(R_{n}\) via parity-violating electron scattering (\({}^{27}\)Al [127], \({}^{48}\)Ca [128], \({}^{208}\)Pb [129]) are chosen for high-lying nuclear excited states and for robustness under intense irradiation. Both neutrino- and electron-scattering techniques avoid the model dependencies of hadronic probes [130] while illuminating complementary regions of the neutron-distribution landscape. Combined with measurements of the proton radius, these results explore the nuclear symmetry energy and inform our understanding of neutron stars [131]. Neutrinos with energy of order 10 MeV are an important driver of supernova nucleosynthesis, interacting with abundant nuclei in the collapsing star to produce rare, often neutron-poor isotopes [132, 133]. Direct measurements of these charged-current interactions are useful inputs to models of this \(\nu\)-process nucleosynthesis. Supernova neutrinos of \(\mathcal{O}(1-10)\) MeV will appear in worldwide detectors through a variety of detection channels [134], many of which will benefit from dedicated measurements to reduce uncertainties on supernova dynamics and other observables. Two recent charged-current measurements from COHERENT - \({}^{nat}\mathrm{Pb}(\nu_{e},\mathrm{X}n)\)[135] and \({}^{127}\)I(\(\nu_{e},\mathrm{X}n\)) [136] - show significant deficits relative to theoretical predictions in the MARLEY framework [137], highlighting the need for further work. 
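To make the coherent enhancement that drives the CEvNS rate (discussed earlier in this subsection) concrete, the sketch below evaluates the standard tree-level Standard-Model cross section with the nuclear form factor set to unity. The inputs are round physical constants and the results are only order-of-magnitude estimates, not the detailed modelling used in COHERENT analyses.

```python
import numpy as np

# Approximate Standard-Model CEvNS cross section (form factor = 1),
# illustrating the roughly N^2 coherence scaling.

G_F = 1.166e-5          # Fermi constant, GeV^-2
sin2w = 0.231           # weak mixing angle (low-energy value, approx.)
hbarc2 = 0.3894e-27     # (hbar*c)^2 in cm^2 * GeV^2
amu = 0.9315            # atomic mass unit in GeV

def cevns_sigma(E_nu_MeV, Z, N):
    """Total coherent cross section in cm^2 for neutrino energy E_nu."""
    E = E_nu_MeV * 1e-3                      # GeV
    M = amu * (Z + N)                        # nuclear mass, GeV (approx.)
    Qw = N - (1.0 - 4.0 * sin2w) * Z         # weak nuclear charge
    Tmax = 2.0 * E**2 / (M + 2.0 * E)        # maximum recoil energy
    # integral of (G_F^2 M / 4pi) * Qw^2 * (1 - M*T/(2E^2)) dT from 0 to Tmax
    sigma = (G_F**2 * M / (4.0 * np.pi)) * Qw**2 * (Tmax - M * Tmax**2 / (4.0 * E**2))
    return sigma * hbarc2

for name, Z, N in [("Ar", 18, 22), ("Ge", 32, 44), ("Cs", 55, 78)]:
    print(f"{name}: sigma(30 MeV) ~ {cevns_sigma(30.0, Z, N):.1e} cm^2")
```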
### Reactor antineutrinos and nuclear physics Nuclear reactors produce copious amounts of \(\bar{\nu}_{e}\) via beta-decay chains fed by fission reactions in the core. The first experimental discovery of neutrinos was made at the Savannah River reactor [2]. Since then, reactor antineutrinos have been instrumental in completing the picture of three-neutrino oscillation. The large-scale Jiangmen Underground Neutrino Observatory (JUNO), under construction in China, will observe reactor antineutrinos [138]. In the last decade, high-precision reactor experiments independently observed two anomalies sometimes taken as evidence of sterile neutrinos: a \(\sim 5-6\%\) flux deficit relative to the Huber-Muller prediction [139, 140] based on the conversion of summed beta spectra to antineutrino spectra, and an excess of antineutrinos at about 5 MeV [141, 142, 143]. Extensive experimental and theoretical work, including new beta-spectrum measurements [144] and studies of the neutrino flux for different fuel compositions [145, 146, 147], suggest attribution of the flux deficit to biases in the model inputs. Meanwhile, investigations of the 5-MeV excess revealed errors in nuclear databases [148]; the precise origin of this feature remains unclear, but the likely presence of contributions from all primary fission isotopes suggests a common error in the flux prediction [149]. Precise reactor-antineutrino measurements are improving our understanding of nuclear fission. The impossibility of shielding antineutrinos gives them an appealing possible application in nuclear non-proliferation, recently reviewed in Ref. [150]: in principle, measuring characteristic antineutrino spectra allows the detection of a covert fission plant, or non-invasive monitoring of spent fuel or reactor operations. However, neutrino detection (especially in a high-background reactor environment) requires both significant financial investment and exposure time, and is likely impractical without facility cooperation. The Nu Tools study, based on discussions with end users in nuclear energy and nuclear security, found that neutrino monitoring would most likely be useful in the context of future nuclear deals; assay of spent fuel in dry casks; and future advanced reactors where traditional accountancy methods cannot be used [151]. Further development is needed for practical neutrino monitoring. ### Searching for sterile neutrinos with nuclear physics Apart from reactor-based searches (Sec. (c)), nuclear physics is key to non-oscillation-based searches for sterile neutrinos. The spectrum from a beta or electron-capture decay (Sec. 2) is, in principle, a superposition of spectra: one for each neutrino-mass state, the mass value of which shifts the endpoint of the spectrum. Although the splittings of the three known mass states are too small for current-generation measurements to resolve, the presence of a fourth, widely separated neutrino-mass value \(m_{4}\) will generate a kink-like spectral distortion at \(E_{0}-m_{4}\), where \(E_{0}\) is the spectral endpoint. KATRIN has searched for this sterile-neutrino signature at both the eV scale [153], excluding significant portions of the parameter space that could explain the reactor flux deficit, and (using a commissioning data set) at the keV scale [154]. A planned future phase of KATRIN will perform higher-sensitivity searches for keV-scale sterile neutrinos with high-rate, deep spectral measurements enabled by the TRISTAN detector upgrade [15]. 
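The kink-like signature described above is easy to illustrate: the observed spectrum is the superposition of a light-mass branch with weight \(1-|U_{e4}|^{2}\) and a heavy branch of mass \(m_{4}\) with weight \(|U_{e4}|^{2}\), which switches on below \(E_{0}-m_{4}\). In the toy sketch below, the endpoint and the (deliberately exaggerated) mixing are illustrative values, not a fit to any data set.

```python
import numpy as np

# Toy beta spectrum with a sterile admixture: a kink appears at E0 - m4.

E0 = 18575.0          # endpoint energy in eV (illustrative)
m4 = 1000.0           # heavy mass in eV
Ue4_sq = 0.1          # exaggerated mixing for visibility

def branch(E, m):
    """Endpoint phase-space factor for one mass state."""
    eps = E0 - E
    return np.where(eps > m, eps * np.sqrt(np.clip(eps**2 - m**2, 0, None)), 0.0)

E = np.linspace(E0 - 3000.0, E0 - 100.0, 6)
active = branch(E, 0.0)
total = (1.0 - Ue4_sq) * branch(E, 0.0) + Ue4_sq * branch(E, m4)
for e, r in zip(E, total / active):
    marker = "  <- kink region" if abs((E0 - e) - m4) < 300 else ""
    print(f"E0 - E = {E0 - e:7.1f} eV   ratio to no-sterile spectrum = {r:.3f}{marker}")
```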
Planned Project 8 operations would further improve sensitivity at the eV scale [155]. Neutrino-mass experiments favour low \(E_{0}\), setting a ceiling on any observable value of \(m_{4}\). Two experiments aim to push past that ceiling by precisely measuring the kinematics of electron-capture decays with higher \(E_{0}\): \({}^{7}\)Be (BeEST, first limit set in Ref. [156]) and \({}^{131}\)Cs (HUNTER, planned [157]). This allows kinematic reconstruction of the neutrino four-momentum and corresponding sensitivity to a heavy mass state. The presence of a sterile neutrino would also affect the electron kinematics in \(2\nu\beta\beta\) decay; Bolton et al. [158] explore the corresponding sensitivity of \(0\nu\beta\beta\)-decay searches (Sec. 3) to sterile neutrinos. Fig. 4 shows the existing (solid) and projected (dotted) limits on sterile-neutrino mixing \(\left|U_{e4}\right|^{2}\) and mass \(m_{4}\) from beta-decay and double-beta-decay experiments.

Figure 4: Achieved (solid) and projected (dotted) exclusion curves for sterile neutrinos from \(\beta\)-decay experiments, along with the parameter space preferred by the gallium anomaly (\(2\sigma\) contours). Adapted from Ref. [152].

Beyond beta decays, nuclear physics may also be central to the longstanding gallium anomaly. When a high-intensity \(\nu_{e}\) source irradiates a gallium target, \({}^{71}\mathrm{Ga}(\nu_{e},e^{-})^{71}\mathrm{Ge}\) interactions may be counted using radiochemical methods. The combined result of historical (GALLEX [159, 160] and SAGE [161, 162]) and modern (BEST [163]) experiments is a significant deficit in the observed \(\nu_{e}\) rate. An overestimation of the nuclear-interaction cross section has been proposed as an alternative explanation to the oscillation of \(\nu_{e}\) into a sterile flavour; however, a recent recalculation of corrections to the cross section shows only modest effects, and the well-measured ground-state transition prohibits large changes [164]. Followup experiments with intense \(\nu_{e}\) and \(\bar{\nu}_{e}\) sources, or the realisation of a fundamental problem with the nuclear-interaction calculation, could help resolve this anomaly.

### Synergy between neutrino-nuclear physics and quantum sensing

Quantum sensors are gaining prominence in cutting-edge neutrino-nuclear experiments. Transition-edge sensors (TES) are used to achieve precise energy spectra from nuclear-\(\beta\) decay (as seen in HOLMES [29]); explore coherent neutrino nuclear scattering (NUCLEUS [165] and RICOCHET [166]); and pursue next-generation \(0\nu\beta\beta\) searches like CUPID [167]. Simultaneously, superconducting tunnel junctions (STJs) are instrumental in BeEST's [156] investigation of phenomena such as sterile-neutrino states, while ECHo [16] and AMoRE [56] are developing arrays of metallic magnetic calorimeters (MMCs) for a neutrino-mass measurement and a \(0\nu\beta\beta\)-decay search, respectively. Project 8 [23] aims to develop superconducting parametric amplifiers near the quantum limit for a tritium-based \(m_{\beta}\) measurement, and the QTNM project [168] aims to push still further by adding quantum-sensor magnetometry. Experimental needs demand faster sensor response times, expanded channel capacity, and more efficient multiplexing capabilities to enable the simultaneous readout of multiple sensors on a single line. Cryogenic hardware employed for quantum-sensor readout predominantly relies on superconducting microwave resonators and superconducting quantum interference devices (SQUIDs).
These readout technologies have the potential for broader applications beyond quantum sensing, such as interfaces with large qubit arrays. On the flip side, the development of low-radioactivity techniques, primarily designed to explore rare phenomena like \(0\nu\beta\beta\) decays and dark-matter searches, offers a distinctive opportunity to address the impact of ionising radiation on quantum computers. Recent works have explored low-radiation materials, shielding, and underground quantum-circuit locations to reduce the effect of ionising radiation on superconducting qubits [169, 170]. Ionising radiation can lead to correlated errors, which pose a significant challenge for error correction and jeopardise the performance of quantum algorithms [171, 172, 173]. Hence, understanding the physics of how ionising radiation thermalises in qubit devices is crucial for successful mitigation and for advancing quantum error correction at scale. While operating quantum computers deep underground, with extensive shielding material around them, may not be feasible for large-scale applications, deep underground nuclear-physics facilities present a unique opportunity to research radiation effects within a controlled environment. ## 5 Conclusion Since 1930, when Pauli postulated the neutrino's existence to restore energy-momentum conservation in nuclear beta decays, we have learned a great deal about this ghostly particle. The 1956 discovery of neutrinos via inverse beta decay at a nuclear fission plant was a triumph of experimental neutrino physics. However, fundamental questions about neutrinos remain open despite significant theoretical and experimental progress. In this paper, we have reviewed the many connections between nuclear physics and neutrino physics, which illuminate questions in both areas. In the Standard Model, the neutrinos are the only massless fermions. We now know that neutrinos in fact have mass, and we have seen how current and future beta-decay and electron-capture experiments can probe the neutrino mass scale below the current limit via direct kinematic measurements in various nuclei. Although we have observed the second-order \(2\nu\beta\beta\) decay in certain nuclei, we have yet to detect lepton-number-violating \(0\nu\beta\beta\) decay in any nucleus. We have discussed future experiments with ever larger sizes and higher sensitivities that will investigate the possibility that neutrinos are their own antiparticles (i.e., whether they are Majorana fermions). Finally, we discuss additional ways in which nuclear physics and neutrino physics intertwine, from final states in high-energy interactions, to nuclear structure, to searches for sterile neutrinos, to cutting-edge developments in quantum sensing. The quest to understand neutrino properties is a multi-disciplinary effort, and the nucleus is a critical laboratory for many of these endeavours. ###### Acknowledgements. DSP acknowledges support from the U.S. Department of Energy (DOE), Office of Science, under Award Numbers DE-SC0010118, DE-SC0019304 and DE-SC0022125. AP is supported by the U.S. DOE under Federal Prime Agreement DE-AC02-05CH11231. VS is supported by the U.S. DOE, Office of Science, under grant DE-FG02-00ER41138. We thank Alexey Lokhov, Moritz Machatschek, Lisa Schluter, Pranava Teja Surukuchi, and Kathrin Valerius for their useful contributions and suggestions.
2307.06810
Spatio-Temporal Calibration for Omni-Directional Vehicle-Mounted Event Cameras
We present a solution to the problem of spatio-temporal calibration for event cameras mounted on an omni-directional vehicle. Different from traditional methods that typically determine the camera's pose with respect to the vehicle's body frame using alignment of trajectories, our approach leverages the kinematic correlation of two sets of linear velocity estimates from event data and wheel odometers, respectively. The overall calibration task consists of estimating the underlying temporal offset between the two heterogeneous sensors, and furthermore, recovering the extrinsic rotation that defines the linear relationship between the two sets of velocity estimates. The first sub-problem is formulated as an optimization one, which looks for the optimal temporal offset that maximizes a correlation measurement invariant to arbitrary linear transformation. Once the temporal offset is compensated, the extrinsic rotation can be worked out with an iterative closed-form solver that incrementally registers associated linear velocity estimates. The proposed algorithm is proved effective on both synthetic data and real data, outperforming traditional methods based on alignment of trajectories.
Xiao Li, Yi Zhou, Ruibin Guo, Xin Peng, Zongtan Zhou, Huimin Lu
2023-07-13T15:23:24Z
http://arxiv.org/abs/2307.06810v2
# Spatio-Temporal Calibration for Omni-Directional Vehicle-Mounted Event Cameras ###### Abstract We present a solution to the problem of spatio-temporal calibration for event cameras mounted on an omni-directional vehicle. Different from traditional methods that typically determine the camera's pose with respect to the vehicle's body frame using alignment of trajectories, our approach leverages the kinematic correlation of two sets of linear velocity estimates from event data and wheel odometers, respectively. The overall calibration task consists of estimating the underlying temporal offset between the two heterogeneous sensors, and furthermore, recovering the extrinsic rotation that defines the linear relationship between the two sets of velocity estimates. The first sub-problem is formulated as an optimization one, which looks for the optimal temporal offset that maximizes a correlation measurement invariant to arbitrary linear transformation. Once the temporal offset is compensated, the extrinsic rotation can be worked out with an iterative closed-form solver that incrementally registers associated linear velocity estimates. The proposed algorithm is proved effective on both synthetic data and real data, outperforming traditional methods based on alignment of trajectories. Calibration and Identification, SLAM, Event-based Vision. ## Multimedia Material Code: [https://github.com/esheroe/EvCalib.git](https://github.com/esheroe/EvCalib.git) ## I Introduction Extrinsic calibration is a prerequisite to almost any mobile robot application, because measurements from different sensor modalities are sometimes processed and even fused into a unified coordinate system, such as a robot's body frame. For an autonomous ground vehicle equipped with cameras, extrinsic calibration refers to the operation that determines each camera's mounting position and orientation with respect to the vehicle's body frame. Existing solutions designed for standard cameras typically run a pipeline based on alignment of trajectories, which estimates the extrinsic parameters by registering two trajectories recovered from vision information and wheel odometers, respectively. This widely used approach is, however, inapplicable when the recovered trajectory from the vision end is unreliable. Different from its standard counterpart, an event camera is a biologically-inspired novel sensor which reports only brightness changes asynchronously. This unique characteristic leads to better performance in terms of temporal resolution and dynamic range. Thus, event cameras are suitable for dealing with robotic perception [1, 2, 3, 4, 5], localization [6, 7, 8, 9] and control [10, 11] tasks involving aggressive motion and high-dynamic-range (HDR) illumination conditions. However, existing image processing techniques designed for standard vision cannot be applied straightforwardly to event data due to the special output format. Specifically, mature feature detection and matching techniques, based on which reliable and long-term event data association is to be established and maintained, are lacking. Hence, techniques that can effectively eliminate accumulated errors in the recovered trajectory, e.g., local bundle adjustment (BA) [12], are still not available in event-based visual odometry. Consequently, the drifted trajectory will lead to inaccurate extrinsic calibration results when applying trajectory alignment based pipelines. This issue poses a challenge when trying to extrinsically calibrate an event camera to an acceptable level. 
In this paper, we look into the problem of extrinsic calibration of event cameras mounted on an omni-directional mobile platform, as shown in Fig. 1. Omni-directional vehicles, including non-holonomic all-wheel-steering vehicles and holonomic omni-directional vehicles, are mobility systems that are capable of moving in arbitrary direction. Such a flexible and potentially aggressive maneuverability leads to a great need for event-based vision.

Fig. 1: Illustration of the geometry of the calibration problem. The goal is to recover the relative orientation (R\({}_{oe}\)) of the event camera with respect to the mobile platform body frame, while having the temporal offset between the two heterogeneous sensors (i.e., the odometers and the event camera) compensated.

To circumvent the issue of inaccurate trajectory estimates from an event-based visual odometry, we propose a novel calibration method that recovers the temporal offset and the spatial extrinsics by exploiting the kinematic correlation between linear velocity estimates obtained from event data and wheel odometers, respectively. The core of our method is a Canonical Correlation Analysis (CCA) process, which evaluates the underlying linear correlation between the two sets of linear velocity estimates. The applied trace correlation measurement is invariant to arbitrary linear transformation, and thus, the temporal offset can be recovered by maximizing the correlation measurement between the two sets of spatially non-aligned data. With the temporally-aligned data, the extrinsic rotation can be furthermore worked out with an iterative closed-form solver that incrementally registers associated linear velocity estimates. The contribution of this paper consists of the following aspects:

* An efficient and robust method that determines the direction of linear velocity of a moving event camera, using as input the short-term feature correspondences obtained from the proposed speed-invariant image representation of event data.
* A rotation-free solution to the problem of spatio-temporal calibration for omni-directional vehicle mounted event cameras, with special consideration to the mobility system's kinematic characteristics and the limited accuracy of event-based visual odometry. We leverage the kinematic correlation between the linear velocity estimates from two heterogeneous sources, and build the solver on top of a CCA scheme.
* An extensive evaluation of the proposed method using both synthetic and real data, and an open-source implementation of our method.

The paper is organized as follows. A literature review on methods of cameras' extrinsic calibration is provided in Section II. The proposed method is detailed in Section III, followed by experimental evaluation and relevant analysis in Section IV. We draw the conclusion in Section V.

## II Related Work

### _Trajectory Alignment based Methods_

The main category of extrinsic calibration methods is hand-eye calibration [13, 14, 15], which is based on aligning multiple pose estimates (i.e., trajectories) from two independent sensors. The hand-eye calibration method establishes the extrinsic constraint using a closed loop of local transformations: \[\mathbf{AX}=\mathbf{XB}, \tag{1}\] where \(\mathbf{A}\) and \(\mathbf{B}\) represent, respectively, the poses of the two involved sensors at the same time instant, and \(\mathbf{X}\) the unknown extrinsic parameters, all in homogeneous form \(\begin{pmatrix}\text{R}&\mathbf{t}\\ \mathbf{0}^{T}&1\end{pmatrix}\).
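As a small numerical illustration of the constraint in Eq. 1, the hedged sketch below generates synthetic relative motions for a known extrinsic transformation and checks that the hand-eye residual vanishes only at the true \(\mathbf{X}\). The transformations, noise-free setup, and helper names are invented for this example and do not come from the cited works.

```python
import numpy as np

# Minimal numerical check of the hand-eye constraint AX = XB (Eq. 1):
# given a ground-truth extrinsic X, each motion A of sensor a corresponds
# to a motion B = X^-1 A X of sensor b, so the alignment residual is zero
# only for the true X.  Purely synthetic, for illustration.

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def se3(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

X_true = se3(rot_z(0.3), np.array([0.1, -0.05, 0.2]))   # ground-truth extrinsics

rng = np.random.default_rng(0)
A_list = [se3(rot_z(rng.uniform(-0.5, 0.5)), rng.uniform(-1, 1, 3)) for _ in range(20)]
B_list = [np.linalg.inv(X_true) @ A @ X_true for A in A_list]

def residual(X):
    return sum(np.linalg.norm(A @ X - X @ B) for A, B in zip(A_list, B_list))

X_wrong = se3(rot_z(0.25), np.array([0.1, -0.05, 0.2]))
print("residual at true X :", residual(X_true))    # ~0 up to round-off
print("residual at wrong X:", residual(X_wrong))   # clearly non-zero
```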
The hand-eye calibration pipeline typically consists of two steps: 1) Obtaining two sets of time-synchronized pose estimates; 2) Looking for the optimal extrinsic parameters that best align the two trajectories. Given time-synchronized poses \(\xi_{\text{A}}=\{\mathbf{A}_{0},\mathbf{A}_{1},\dots,\mathbf{A}_{N}\}\) and \(\xi_{\text{B}}=\{\mathbf{B}_{0},\mathbf{B}_{1},\dots,\mathbf{B}_{N}\}\), the trajectory alignment task is accomplished by solving a non-linear optimization problem with the objective function: \[\mathtt{R}_{\text{E}}^{*},\mathbf{t}_{\text{E}}^{*}=\arg\min_{\mathtt{R}_{\text{E}},\mathbf{t}_{\text{E}}}\sum_{i=0}^{N}\|\mathbf{A}_{i}\mathbf{X}-\mathbf{XB}_{i}\|, \tag{2}\] where \(\mathtt{R}_{\text{E}}\) and \(\mathbf{t}_{\text{E}}\) denote the extrinsic parameters to be estimated. The hand-eye calibration method and its variants have been witnessed in many works that estimate the extrinsics between two heterogeneous sensors, especially those recovering the pose of an exteroceptive sensor with respect to the body frame of a robot. Censi _et al._ [16] utilize hand-eye calibration to calculate the extrinsic pose of a range finder with respect to a differential-drive robot's body frame. The work is carried out without knowing the odometer parameters (i.e., wheel radii and distance between wheels) as a prior, and thus, it jointly estimates the intrinsic odometry parameters and the extrinsic sensor pose. Assuming all intrinsics are known, Guo _et al._ [17] overcome the limitation that original hand-eye calibration cannot recover all the three degrees of freedom (DoF) in the camera's orientation. Their method formulates a least-squares problem to estimate a subset of the odometer-camera rotation parameters, and furthermore, uses these parameters to formulate a second least-squares problem for estimating the remaining unknown parameters of the odometer-camera transformation. Besides, some globally optimal solutions based on hand-eye calibration, such as [18, 19, 20], are developed with special consideration in parametrization of motion parameters. The accuracy of hand-eye calibration and its variants is largely up to the quality of the input sensor poses in terms of temporal synchronicity and spatial consistency. To improve temporal synchronicity between input signals, the underlying temporal offset between the two sensors needs to be considered as an additional variable to be optimized in Eq. 2. Consequently, the temporal offset and the extrinsic parameters are solved jointly, and thus, a proper initial value is needed. However, hand-eye calibration is not suitable for spatio-temporal calibration of event cameras, because state-of-the-art event-based visual odometry methods [6, 7, 9] cannot suppress drifts in the recovered trajectory effectively.

### _Motion Correlation based Methods_

Different from trajectory alignment based methods, motion correlation based methods can decouple the overall calibration task into sub-problems of recovering temporal offset and spatial transformation. The sub-problem of recovering temporal offset can be solved using cross correlation, which is widely used in signal processing [21, 22]. The optimal temporal offset \(\delta t^{*}\) can be found by maximizing the follow
This cross correlation can be established on any kinematic measurements (or estimates) independent of coordinate system, such as the absolute angular velocity, which has been used to solve the task of IMU-to-Camera temporal calibration [23]. The limitation of the cross-correlation pipeline (Eq. 3) is that it is hardly extended to high-dimensional data that are frame dependent. Consequently, the recovered temporal offset is likely to be inaccurate when data are biased in the missing degrees of freedom. To overcome this limitation, Qiu _et al._[24] propose a unified calibration framework based on 3-D motion correlation, which can efficiently work out the temporal offset and extrinsic rotation between two heterogeneous sensors. The method can evaluate the correlation of two sets of multi-variate random vectors (e.g., 3-D angular velocity), because the applied CCA technique [25] is invariant to the underlying (unknown) linear transformation between the two kinematic signals. Although 3-D angular velocity can be estimated from event data [26], the 3-D motion correlation on angular velocity estimates is, however, inapplicable to our task, because one DoF of the 3D orientation is never observable. Fortunately, we could leverage the omni-directional locomotion property of holonomic vehicles that can generate pure translation in arbitrary directions (within the ground plane). Therefore, our method takes advantage of the correlation between linear velocity estimates from heterogeneous sensors and it is also built on top of the CCA scheme. To estimate the direction of linear velocity (heading direction) from event data, we determine the epipolar geometry using as input the feature correspondences obtained on a novel speed-invariant image representation of event data. ## III Methodology Given two sets of linear velocity estimates from an event camera and odometers respectively, the goal is to recover the temporal offset between the two heterogeneous sensors, and furthermore, determine the orientation of the event camera with respect to the vehicle's body frame. In this section, we first discuss our method for estimating the direction of linear velocity from pure event data (Sec. III-A). Second, we demonstrate the way of acquiring body-frame velocity according to the kinematic model of the omni-directional vehicle used (Sec. III-B). Finally, we disclose the method that recovers the temporal offset and spatial extrinsics by maximizing the kinematic correlation between linear velocities estimates from heterogeneous sources (Sec. III-C). ### _Determining Heading Direction from Event Data_ Assuming the ground is a perfect 2D horizontal plane, the vehicle can move ideally in a straight line under a constant steering command. Thus, the heading direction (i.e., direction of linear velocity) can be well approximated by the normalized translation vector between any two successive time instants. Although there exist several ways for relative pose estimation using event data as input, such as by fitting a homography [27, 28], they are typically computational expensive and require the camera to observe a planar scene. Instead, we leverage a feature-based method for relative pose estimation in standard vision, which calculates the essential matrix from a set of feature correspondences. Detection and matching of event-based features (e.g., corners) have to be carried out on image-like representation of event data. 
Assuming the photometric pattern of the environment is stable, the output of an event-based camera is dependent on the relative speed with respect to the scene. Thus, an ideal representation, on one hand, is supposed to be invariant to speed, and, on the other hand, can be rendered (refreshed) efficiently. To discover the optimal representation, we investigate a number of existing representations, including Time Surface (TS) [29], Discounted Sorted Timestamp Image (DiST) [30], Speed Invariant Time Surface (SILC) [31], and Threshold-Ordinal Surface (TOS) [32]. TS is a 2D representation where each pixel stores a single time value, e.g., the timestamp of the most recent event at that pixel [35]. Using an exponential decay kernel [29], a TS (Fig. 2) emphasizes recent events over past events and has been proved discriminative in pattern recognition tasks. Similarly, DiST [30] (Fig. 2) aims at preserving semantic profiles for object recognition under the camera's motion, and it does a great job in noise suppression. These two representations are, however, not invariant to the event camera's speed. The appearance similarity, even at two close views, will not be preserved in the presence of speed variation, and thus, no guarantee for the success of short-baseline feature matching. To circumvent this issue, some hand-crafted representations (e.g., SILC [31] (Fig. 2) and TOS [32] (Fig. 2)) are designed to keep the 2D spatial gradient of moving edges constant under a variation of camera's speed. This is basically achieved by continuously assigning the most recently firing coordinates Fig. 2: Various image-like representations of event data [33] and results of corner detection [34]. (a) is the intensity image, which is only for visualization. (b)-(e) are state-of-the-art representations of event data. (f) shows our design. to the maximum value, and reducing the magnitude of the adjacent area by a distance-related quantity. The main issue of these two speed-invariant representations is about the relatively low signal-to-noise ratio, which is largely due to the fact that historical information is not recycled in time. To hold the speed-invariant property while maintaining a good signal-to-noise ratio, we propose a novel representation by combining TS and TOS. Given a TS and a TOS rendered by the same time, our representation is specifically obtained by performing a logical AND operation on corresponding pixels of the two maps. As seen in Fig. 2(f), the resulting representation inherits the speed-invariant property from TOS; meanwhile it is much cleaner than the original TOS due to the exponential decay kernel used in the TS. We apply Arc\({}^{\star}\)[34] for corner detection and BRIEF [36] for feature description and matching. As seen from Fig. 2(b) to Fig. 2(f), more true-positive corners can be detected on the resulting representation, and thus, it is beneficial to solving the essential matrix in the following. To calculate the essential matrix, we implement a five-point algorithm [37] inside a RANSAC scheme. We notice that the essential matrix is a skew-symmetric matrix due to the absence of rotation, and therefore, the resulting translation vector can be straightforwardly retrieved from it. Finally, the resulting heading direction from event data, denoted by \(\mathbf{v}_{e}\), can be approximated by the normalized translation vector. ### _Acquiring Body-Frame Velocity from Kinematic Model_ The ground vehicle used in this work is an all-wheel-steering mobility system as illustrated in Fig. 1. 
Different from commonly seen non-holonomic counterparts (e.g., Ackermann mobility systems [38]), an all-wheel-steering vehicle is capable of moving in any direction by simply steering all wheels to a certain angle. Therefore, the direction of the linear velocity is not always along the X-axis of the body frame, and such a kinematic property enables us to obtain linear velocity measurements in various directions in the vehicle's body frame. The vehicle's body frame is defined as the coordinate system in Fig. 3. Note that the location of the origin can be set anywhere, which does not affect the extrinsic rotation result. What matters is the definition of the body frame's orientation. The X-axis is defined to be along the longitudinal direction, and the Y-axis is the lateral direction. The remaining Z-axis can be obtained using the right-hand rule. The four wheels are actuated independently, and the corresponding odometer can report two states, i.e., the steering angle \(\theta_{i}\) and the wheel speed \(v_{wi}\). The linear velocity of the vehicle can be simply derived from \[\mathbf{v}_{o}=\begin{bmatrix}\cos(\bar{\theta})&-\sin(\bar{\theta})&0\\ \sin(\bar{\theta})&\cos(\bar{\theta})&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}\bar{v}_{w}\\ 0\\ 0\end{bmatrix}, \tag{4}\] where \(\mathbf{v}_{o}\) denotes the linear velocity (in metric scale) represented in the body frame \(o\), and \(\bar{v}_{w}\) and \(\bar{\theta}\) denote the average speed and steering angle of the four wheels. Note that we use only the direction of the linear velocity in the following extrinsic calibration. With a slight abuse of mathematical notation, for brevity, we hereafter denote this direction by \(\mathbf{v}_{o}\). ### _Extrinsic Calibration via Correlation Maximization_ The above velocity direction estimates, \(\mathbf{v}_{e}\) and \(\mathbf{v}_{o}\), are linearly correlated through the extrinsic rotation \(\mathsf{R}_{oe}\), as \[\mathbf{v}_{o}=\mathsf{R}_{oe}\mathbf{v}_{e}. \tag{5}\] Obviously, the minimal problem for solving \(\mathsf{R}_{oe}\) requires two pairs of velocity direction estimates, which must not be parallel. In the absence of a temporal offset, the extrinsic rotation calibration can be simply solved as a 3-D point registration problem using the closed-form solution in [39]. However, simply neglecting the temporal offset may lead to inaccurate spatial calibration. Thus, we need a method to determine the temporal offset in the presence of an unknown extrinsic rotation. CCA is an effective tool for evaluating the linear correlation between two random vectors (i.e., discrete samples of two "synchronized" signals). It is typically used to recover the underlying linear transformation from the signals' statistical profiles, such as covariances.
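A minimal sketch of Eq. 4, mapping the averaged wheel-odometry readings to the body-frame velocity and its direction; the averaging over the four wheels follows the definition of \(\bar{v}_{w}\) and \(\bar{\theta}\) above, and all names are illustrative.

```python
import numpy as np

def body_frame_velocity(steer_angles, wheel_speeds):
    """Body-frame linear velocity v_o from the four wheel states (Eq. 4)."""
    theta = float(np.mean(steer_angles))   # average steering angle [rad]
    v_w = float(np.mean(wheel_speeds))     # average wheel speed [m/s]
    R_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    v_o = R_z @ np.array([v_w, 0.0, 0.0])
    # Only the direction is used later for the calibration.
    direction = v_o / max(np.linalg.norm(v_o), 1e-12)
    return v_o, direction
```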
Assuming that we have collected two sets of "synchronized" velocity direction estimates, denoted by \(\mathcal{V}_{o}\doteq\{\mathbf{v}_{o}(t_{i})\}_{i=1}^{N_{o}}\) and \(\mathcal{V}_{e}\doteq\{\mathbf{v}_{e}(t_{i})\}_{i=1}^{N_{e}}\), the cross-covariance and auto-covariances can be approximately calculated, in the presence of a temporal offset \(t_{d}\), by \[\Sigma_{v_{o}v_{e}}(t_{d}) \approx\frac{1}{N-1}\sum_{i=0}^{N-1}(\mathbf{v}_{o}(t_{i})-\bar{ \mathbf{v}}_{o})(\mathbf{v}_{e}(t_{i}+t_{d})-\bar{\mathbf{v}}_{e}^{\prime})^{ \intercal},\] \[\Sigma_{v_{o}v_{o}}(t_{d}) \approx\frac{1}{N-1}\sum_{i=0}^{N-1}(\mathbf{v}_{o}(t_{i})-\bar{ \mathbf{v}}_{o})(\mathbf{v}_{o}(t_{i}+t_{d})-\bar{\mathbf{v}}_{o}^{\prime})^{ \intercal},\] \[\Sigma_{v_{e}v_{e}}(t_{d}) \approx\frac{1}{N-1}\sum_{i=0}^{N-1}(\mathbf{v}_{e}(t_{i})-\bar{ \mathbf{v}}_{e})(\mathbf{v}_{e}(t_{i}+t_{d})-\bar{\mathbf{v}}_{e}^{\prime})^{ \intercal}, \tag{6}\] where \(t_{i}\) refers to the sampling time, \(N\) the number of samples, \(\bar{(\cdot)}\) the mean of the original samples, and \(\bar{(\cdot)}^{\prime}\) the mean of the temporal-offset compensated samples. Fig. 3: Kinematic model of the all-wheel-steering mobile platform. Pure linear motion in an arbitrary direction can be realized by controlling all wheels with the same steering angle and identical rotating speed. For multi-variable random vectors, the underlying linear transformation is made up of several linear combination pairs. In our case, the linear combination pairs are \(\mathbf{s}_{i}\leftrightarrow\mathbf{r}_{i}\) (\(i=\{1,2,3\}\)), which represent respectively the basis vectors of the 3-by-3 identity matrix and the columns of the transpose of the extrinsic rotation matrix, namely \(\mathbf{I}_{3\times 3}=[\mathbf{s}_{1}|\mathbf{s}_{2}|\mathbf{s}_{3}]\) and \(\mathsf{R}_{oe}^{\intercal}=[\mathbf{r}_{1}|\mathbf{r}_{2}|\mathbf{r}_{3}]\). Each column vector \(\mathbf{r}_{i}\) can be determined by maximizing the correlation coefficient \[\rho_{i} \doteq Corr(\mathbf{s}_{i}^{\intercal}\mathbf{v}_{o},\mathbf{r}_ {i}^{\intercal}\mathbf{v}_{e}) \tag{7}\] \[=\frac{\mathbf{s}_{i}^{\intercal}\Sigma_{v_{o}v_{e}}\mathbf{r}_ {i}}{\sqrt{\mathbf{s}_{i}^{\intercal}\Sigma_{v_{o}v_{o}}\mathbf{s}_{i}}\sqrt{ \mathbf{r}_{i}^{\intercal}\Sigma_{v_{e}v_{e}}\mathbf{r}_{i}}},\quad i\in\{1,2,3\}.\] With the three canonical correlation coefficients, the so-called trace correlation between the two sets of estimates is further defined as \[r(\mathcal{V}_{o},\mathcal{V}_{e})=\sqrt{\frac{1}{3}\sum_{i=1}^{3}\rho_{i}^{2 }}=\sqrt{\frac{1}{3}\mathrm{Tr}(\Sigma_{v_{o}v_{o}}^{-1}\Sigma_{v_{o}v_{e}} \Sigma_{v_{e}v_{e}}^{-1}\Sigma_{v_{e}v_{o}})}, \tag{8}\] where \(\mathrm{Tr}(\cdot)\) denotes the trace of the input matrix. It is also a normalized measurement that evaluates the correlation between two signals. One of the great properties of the trace correlation is that it is invariant to the underlying linear transformation between two signals, such as scaling, rotation and translation. In our case, hence, we have \(r(\mathcal{V}_{o},\mathsf{R}_{oe}\mathcal{V}_{e})=r(\mathcal{V}_{o},\mathcal{ V}_{e})\). This property decouples the impact of the temporal offset on the trace correlation from the unknown spatial extrinsics. Therefore, the optimal \(t_{d}\) can be independently obtained by maximizing Eq. 8. Once the temporal offset is compensated in the data, the extrinsic rotation \(\mathsf{R}_{oe}\) can be simply worked out as a by-product of the above CCA process [24].
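A sketch of the temporal step (Eqs. 6–8): the trace correlation is evaluated over a grid of candidate offsets and the maximizer is taken as \(t_{d}\). Because the planar motion can make the covariances nearly singular (an issue discussed below for the rotation recovery), a small ridge term is added for numerical safety; `interp_e`, the sampling grid, and all names are illustrative.

```python
import numpy as np

def trace_correlation(V_o, V_e, ridge=1e-9):
    """Trace correlation of Eq. 8 for two 3xN arrays of time-aligned
    velocity-direction samples (columns are samples)."""
    Vo = V_o - V_o.mean(axis=1, keepdims=True)
    Ve = V_e - V_e.mean(axis=1, keepdims=True)
    N = Vo.shape[1]
    S_oo = Vo @ Vo.T / (N - 1) + ridge * np.eye(3)
    S_ee = Ve @ Ve.T / (N - 1) + ridge * np.eye(3)
    S_oe = Vo @ Ve.T / (N - 1)
    M = np.linalg.solve(S_oo, S_oe) @ np.linalg.solve(S_ee, S_oe.T)
    return float(np.sqrt(max(np.trace(M), 0.0) / 3.0))

def best_offset(t_samples, V_o, interp_e, candidates):
    """Pick the candidate offset t_d maximizing the trace correlation;
    interp_e(t) returns camera velocity directions interpolated at times t."""
    scores = [trace_correlation(V_o, interp_e(t_samples + td)) for td in candidates]
    return candidates[int(np.argmax(scores))]
```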
Using the covariance matrices obtained in the CCA process, the extrinsic rotation can be calculated by \[\mathsf{R}_{oe}^{\intercal}=\mathbf{U}\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&\mathtt{det}(\mathbf{U}\mathbf{V}^{\intercal})\end{bmatrix}\mathbf{V}^{\intercal}, \tag{9}\] where \(\mathbf{U}\) and \(\mathbf{V}\) are obtained from the following singular value decomposition (SVD) \[\Sigma_{v_{e}v_{e}}^{-1}\Sigma_{v_{e}v_{o}}=\mathbf{U}\mathbf{\Sigma}\mathbf{ V}^{\intercal}. \tag{10}\] The determinant operation in Eq. 9 is to guarantee the resulting rotation matrix is not a reflection. However, this quick solver cannot be straightforwardly applied to our case. The kinematic characteristics of the ground vehicle gives rise to zero-velocity measurements in \(z\) axis of the body frame. A big condition number is witnessed for \(\Sigma_{v_{e}v_{e}}^{-1}\Sigma_{v_{e}v_{o}}\), and thus, the decomposition result is always numerically unstable. Although this issue can be more or less mitigated by adding an additional perturbation in the non-stimulated dimension, accurate results are still not guaranteed as seen in our experiments. To this end, we regard the two sets of linear velocity estimates as general "point cloud" on a unit sphere, and assort to a least-squares method [39] for registering two point sets. Consequently, the extrinsic rotation can be calculated by maximizing \[\mathcal{L} =\sum_{i=1}^{N}\mathbf{v}_{o,t_{i}}^{T}\mathsf{R}_{oe}\mathbf{v}_ {e,t_{i}} \tag{11}\] \[=\mathrm{Tr}(\sum_{i=1}^{N}\mathsf{R}_{oe}\mathbf{v}_{e,t_{i}} \mathbf{v}_{o,t_{i}}^{T})=\mathrm{Tr}(\mathsf{R}_{oe}\mathbf{H}),\] where \(\mathbf{v}_{o,t_{i}}\doteq\mathbf{v}_{o}(t_{i})\), \(\mathbf{v}_{e,t_{i}}\doteq\mathbf{v}_{e}(t_{i})\), and \(\mathbf{H}:=\sum_{i=1}^{N}\mathbf{v}_{e,t_{i}}\mathbf{v}_{o,t_{i}}^{T}\). Let the SVD of \(\mathbf{H}\) be \(\mathbf{U}\mathbf{\Lambda}\mathbf{V}^{T}\). The resulting extrinsic rotation matrix is given as \(\mathsf{R}_{oe}=\mathbf{V}\mathbf{U}^{T}\). To deal with noise and outliers, we apply an improved version of Arun's method [40], which replaces the original sum of squares with sum of absolute values. The resulting problem can be solved efficiently using an iteratively re-weighted least square method. The whole spatio-temporal calibration method is summarized in Alg. 1. ## IV Experiments In this section, we evaluate the proposed calibration method. We first show that how synthetic data and real data for evaluation are generated and collected in Sec. IV-A and Sec. IV-B, respectively. Then we quantitatively evaluate the proposed method and compare against trajectory-alignment based pipelines in Sec. IV-C, demonstrating the advantage and efficacy of our method. ### _Generation of Synthetic Data_ In order to demonstrate the negative impact of drifted trajectories on trajectory-alignment based calibration methods, we generate two types of input trajectories to be aligned, which include an arc-shape trajectory (Fig. 4(a)) and a polyline-shape trajectory (Fig. 4(c)). The perfect arc-shape trajectory can be determined solely by one parameter, i.e., the radius of the big circle. The direction of instantaneous linear velocity at any way point on the arc can be determined as the tangential direction, while its magnitude is determined by the instantaneous angular velocity. Without loss of generality, we simply use a constant angular velocity in each trial. As for the perfect polyline-shape trajectory, the design parameters consist of the turning angle and the length of each segment. 
Still, we assume these two parameters are constant for simplicity. Therefore, the direction of instantaneous linear velocity at any way point (except for the corners) is identical to that of the corresponding segment, and its magnitude is set to a constant value. To simulate spatial drifts in the trajectories to be aligned (see Fig. 4(b) and Fig. 4(d)), we add Gaussian noise to the perfect linear velocities and obtain the drifted trajectory by dead reckoning according to the following simple kinematics: \[\begin{split}\hat{\mathbf{v}}_{o,i}&=\mathbf{v}_{o, i}+\mathbf{\epsilon}_{o},\\ \hat{\mathbf{v}}_{e,i}&=\mathtt{R}_{\text{E}} \mathbf{v}_{o,i}+\mathbf{\epsilon}_{e},\\ \mathbf{t}_{o,i}&=\mathbf{t}_{o,i-1}+\hat{\mathbf{ v}}_{o,i-1}\cdot dt,\\ \mathbf{t}_{e,i}&=\mathbf{t}_{e,i-1}+\hat{\mathbf{ v}}_{e,i-1}\cdot dt,\end{split} \tag{12}\] where \(\mathbf{\epsilon}_{o}\) and \(\mathbf{\epsilon}_{e}\) denote the added Gaussian noises, and \(\mathtt{R}_{\text{E}}\) the groundtruth extrinsic rotation. To simulate the temporal offset between data, we furthermore slide one drifted trajectory temporally by a certain time length with respect to the other. As a result, we have temporally non-aligned and spatially perturbed kinematic information (i.e., linear velocities and way points) from two heterogeneous sources. For extensive evaluation, we generate six groups of data for each type of trajectory with different design parameters, and in each group we perform 100 trials with random temporal offset and extrinsic rotation. Fig. 4: Simulation of perfect trajectories and corresponding drifted ones. Trajectories from the two heterogeneous sensors are expressed in a unified coordinate system. Different colors are used to distinguish trajectories from heterogeneous sources. ### _Collection of Real Data_ We also collect real data using an event camera mounted on a non-holonomic all-wheel-steering mobile platform. By periodically adjusting the steering angle of all the wheels, we can obtain finger-shape trajectories which are well suited to our calibration method. An example of a three-finger-shape trajectory and the corresponding linear velocity information is illustrated in Fig. 5. The measurement of linear velocity (i.e., \(\mathbf{v}_{o}\)) can be obtained according to Eq. 4 using as input the control commands of steering angle and wheel speed. We collect four groups of data with various numbers of "fingers", aiming to show that the more directions the data contain, the more accurate the calibration result. An illustration of the visual data is shown in Fig. 6. As for the groundtruth extrinsic rotation, we manage to set the event camera at the desired orientation with the help of two independent laser level meters. Fig. 5: An example of finger-shape trajectories carried out in the collection of real data. (a) The trajectory of three "fingers". (b) Corresponding linear velocity measurements. Fig. 6: Illustration of visual information in the real data. (a) A sample RGB image of the scene for visualization only. (b) Our image-like representation rendered by the same time using events collected. ### _Evaluation Metrics and Results_ We evaluate our method using the above datasets, and compare against the existing pipelines listed in the following: * CGOC: A globally optimal solution method using quadratically constrained quadratic programs (QCQPs) proposed in [20]. * Hand-eye: An implementation based on Eq. 2. * VC: The proposed velocity-correlation based method. * VC-woTA: The proposed method without temporal alignment.
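For reference, the spatial step shared by VC and VC-woTA — the direction-cloud registration of Eq. 11 with the iteratively re-weighted least-squares refinement described in Sec. III-C — can be sketched as below; the weighting scheme and iteration count are illustrative choices, not the exact settings used in the experiments.

```python
import numpy as np

def rotation_from_directions(V_e, V_o, iters=20, eps=1e-6):
    """Estimate R_oe with V_o ≈ R_oe V_e for 3xN arrays of unit direction
    vectors: Arun-style SVD registration inside a simple IRLS loop that
    down-weights samples with large residuals."""
    w = np.ones(V_e.shape[1])
    R = np.eye(3)
    for _ in range(iters):
        H = (V_e * w) @ V_o.T                       # weighted H = sum_i w_i v_e v_o^T
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                          # proper rotation, det(R) = +1
        residuals = np.linalg.norm(V_o - R @ V_e, axis=0)
        w = 1.0 / np.maximum(residuals, eps)        # L1-style reweighting
    return R
```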
The metric used for the quantitative evaluation of the extrinsic rotation is defined as \[e=\arccos\big{(}(\mathrm{Tr}(\mathrm{R}_{g}^{T}\mathrm{R}_{\text{est}})-1)/2\big{)}\cdot 180/\pi, \tag{13}\] where \(\mathrm{R}_{\text{est}}\) and \(\mathrm{R}_{g}\) are the estimation result and the groundtruth orientation, respectively. Using the synthetic dataset, we calculate the calibration error according to Eq. 13 and illustrate the error's statistics using box plots. Fig. 7: Illustration of statistics of the calibration error. As seen in Fig. 7, our methods (VC and VC-woTA) outperform the hand-eye calibration method in terms of accuracy. Specifically, we clearly observe that VC outperforms VC-woTA. This indicates that temporal calibration is vital in calibration tasks involving heterogeneous sensors. Furthermore, VC-woTA outperforms Hand-eye, and this result validates our point: trajectory-alignment based methods suffer from drifted trajectories, while methods based on measurements of instantaneous first-order kinematics (e.g., linear velocity in our case) remain effective. Besides, we find that the calibration results on the polyline-shape dataset are typically more accurate than those on the arc-shape dataset. This is because the temporal offset has less effect on the polyline-shape dataset than on the arc-shape dataset. To evaluate on real data, we first recover trajectories from the data of the event camera and the odometers, respectively. The trajectory from event data is estimated using EVO [7], a state-of-the-art visual odometry pipeline for a monocular event camera. The direction of linear velocity from event data is obtained using our method discussed in Sec. III-A. Meanwhile, the linear velocity from the odometers is obtained as shown in Sec. III-B, and the trajectory is obtained by dead reckoning. Since the estimates from the heterogeneous sensors are typically not synchronized, we obtain data association (samples at common time instants, though still subject to the unknown temporal offset) by data interpolation. Compared to the evaluation on the synthetic dataset, we additionally introduce another trajectory-alignment based solution [20], which can return globally optimal extrinsic estimates. As seen in Table I, where the best results are highlighted in bold, the conclusion is consistent with that of the evaluation on synthetic data. Our methods (VC and VC-woTA) outperform the trajectory-alignment based methods, and more specifically, VC outperforms VC-woTA. Besides, we see a clear trend in Table I that the diversity of linear velocity directions (i.e., the number of "fingers") improves the accuracy of the calibration results. This is due to the fact that the more directions are considered, the more distinctive the kinematic profile becomes. \begin{table} \begin{tabular}{c c c c c} \hline \hline \#Fingers & CGOC [20] & Hand-eye & VC-woTA & VC \\ \hline 2 & 86.22 & 35.59 & 6.29 & **5.97** \\ 3 & 38.98 & 5.82 & 2.02 & **1.98** \\ 4 & 7.96 & 9.19 & 2.11 & **2.01** \\ 5 & 6.21 & 7.20 & 1.82 & **1.68** \\ \hline \hline \end{tabular} \end{table} TABLE I: Calibration errors on real data (°) ## V Conclusion This paper provides a novel solution to the problem of spatio-temporal calibration for omni-directional vehicle mounted event cameras. We argue that trajectory-alignment based methods suffer from drifts in any of the input trajectories. To this end, we propose a two-step method that establishes correlation on first-order kinematics, namely instantaneous linear velocity. In the first step, the optimal temporal offset is estimated by maximizing a correlation measurement invariant to the unknown extrinsic rotation. In the second step, we regard directions of linear velocity estimates as general point clouds on a unit sphere and calculate the extrinsic rotation matrix via a point cloud registration process, which is accurately and robustly solved using an iteratively re-weighted least squares method. Experiments on both synthetic data and real data demonstrate the efficacy of the proposed calibration method. Finally, we hope this work inspires new research on the topic of multi-sensor calibration involving event-based cameras.
2305.19191
Method of Exact Solutions Code Verification of a Superelastic Constitutive Model in a Commercial Finite Element Solver
The superelastic constitutive model implemented in the commercial finite element code ABAQUS is verified using the method of exact solutions (MES). An analytical solution for uniaxial strain is first developed under a set of simplifying assumptions including von Mises-like transformation surfaces, symmetric transformation behavior, and monotonic loading. Numerical simulations are then performed, and simulation predictions are compared to the exact analytical solutions. Results reveal the superelasticity model agrees with the analytical solution to within one ten-thousandth of a percent (0.0001%) or less for stress and strain quantities of interest when using displacement-driven boundary conditions. Full derivation of the analytical solution is provided in an Appendix, and simulation input files and post-processing scripts are provided as supplemental material.
Kenneth I. Aycock, Nuno Rebelo, Brent A. Craven
2023-05-30T16:35:30Z
http://arxiv.org/abs/2305.19191v1
Method of Exact Solutions Code Verification of a Superelastic Constitutive Model in a Commercial Finite Element Solver ###### Abstract The superelastic constitutive model implemented in the commercial finite element code ABAQUS is verified using the method of exact solutions (MES). An analytical solution for uniaxial strain is first developed under a set of simplifying assumptions including von Mises-like transformation surfaces, symmetric transformation behavior, and monotonic loading. Numerical simulations are then performed, and simulation predictions are compared to the exact analytical solutions. Results reveal the superelasticity model agrees with the analytical solution to within one ten-thousandth of a percent (0.0001%) or less for stress and strain quantities of interest when using displacement-driven boundary conditions. Full derivation of the analytical solution is provided in an Appendix, and simulation input files and post-processing scripts are provided as supplemental material. ## 1 Introduction Superelastic nickel titanium (nitinol) alloys are commonly used in medical devices such as guidewires, dental arches, and self-expanding peripheral stents, stent grafts, heart valve frames, and inferior vena cava filters. Because of nitinol's unique material behavior and the complex geometry of most nitinol devices, engineers and scientists often use physics-based computational modeling and simulation to predict device mechanics and fatigue safety factors as part of non-clinical bench performance testing. As described in ASME V&V40-2018 [1], model predictions relied on for decision making should be accompanied by verification and validation (V&V) evidence demonstrating simulation credibility commensurate with the risk associated with the intended model use. However, rigorous code verification evidence for medical device simulations is often omitted (e.g., see Figure 1 in [2]), in part due to the lack of detailed examples to facilitate these studies. Herein, we aim to provide such an example for superelastic nitinol. In previous work, we demonstrated gold-standard method of manufactured solutions (MMS) code verification of the commercial finite element software ABAQUS for various linear and nonlinear elastostatics problems [3]. However, direct MMS verification of the superelastic model commonly used to simulate nitinol was not possible due to the lack of a closed-form representation of the underlying rate- and history-dependent constitutive equations. An approach recommended in the literature for rigorously verifying similarly complex, plasticity-based constitutive models is to perform method of exact solutions (MES) verification on an affine deformation problem with prescribed strains or displacements [4]. In this study, we perform MES code verification of the superelastic constitutive model in ABAQUS. Methods ### Constitutive model summary Superelastic constitutive behavior was first implemented in ABAQUS/Standard as a user-material (UMAT) by Rebelo et al [5, 6] in 2000. In brief, the model is based on the work of Auricchio and Taylor [7, 8] and leverages generalized plasticity theory to model the dependency of the material stiffness on the current stress state. More specifically, the model uses a mixture-based approach to simulate the stress-induced solid-solid phase transformation between cubic (B2) austenite and monoclinic (B19') martensite, tracked by the martensite fraction parameter \(\zeta\). 
Additional details on the constitutive model and the associated transformation flow rule are provided in the Appendix. A notional stress-strain response and associated input parameters are summarized in Figure 1 and Table 1, respectively. ### Simplifying assumptions The superelastic constitutive model in ABAQUS uses pressure-dependent Drucker-Prager-like transformation surfaces and cubic transformation equations to define nonlinear hardening during phase transformation between austenite and martensite. As such, a general analytical solution to the associated rate equations is not easily obtained and to our knowledge has not been derived. Here, we instead derive an analytical solution for linear transformation behavior. Because of the way the nonlinear transformation equations are defined in ABAQUS, the linear and nonlinear transformation solutions should be equivalent at the beginning, mid-point, and end of both the loading (austenite \(\rightarrow\) martensite) and unloading (martensite \(\rightarrow\) austenite) transformations under the following assumptions (see Figure 2 and Table 2): * symmetric transformation behavior in tension and compression, i.e. \(\sigma_{L}^{S}=\sigma_{cL}^{S}\) (constitutive model becomes von Mises-like rather than Drucker-Prager-like) * constant temperature (isothermal) * equal elastic moduli for austenite and martensite, i.e. \(E_{a}=E_{m}\) * pseudo-plasticity/superelasticity behavior only (i.e., the model is not superelastic-plastic) * monotonic and proportional (i.e., radial) loading. With these assumptions, we derive an exact analytical solution for the uniaxial strain of a single cubic element undergoing linear transformation and monotonic loading. ### Problem description Uniaxial strain of a unit cube is considered, i.e., \[\epsilon_{11}=f(t) \tag{1}\] \[\epsilon_{22}=\epsilon_{33}=\epsilon_{12}=\epsilon_{13}=\epsilon _{23}=0 \tag{2}\] where \(\epsilon_{ij}\) are the components of the logarithmic strain tensor and \(t\) is the simulation pseudo-time for the elastostatic analysis. The cube has an initial side length \(L_{0}=1\) and final side length in the direction of the applied strain of \(L_{F}=L_{0}+u\), where \(u\) is the applied displacement (Figure 3). An unloading analysis is additionally performed to return the single element to the reference configuration. During the loading and unloading analyses, the current length \(l\) is defined as a linear ramping between \(L_{0}\) and \(L_{F}\), \[l=\begin{cases}L_{0}+u\,t&\text{loading}\\ L_{0}+u\,(1-t)&\text{unloading}\end{cases} \tag{3}\] with pseudo-time \(t\) in the range \[0<t<1\,. \tag{4}\] We can also quantify the deformation using the non-dimensional stretch, \[\lambda=\frac{l}{L_{0}}=\begin{cases}1+\frac{u\,t}{L_{0}}&\text{loading}\\ 1+\frac{u(1-t)}{L_{0}}&\text{unloading}.\end{cases} \tag{5}\] ### Analytical solution The exact analytical solution is derived in detail in the Appendix. In summary, given * Mises stress \(q\) and martensite fraction \(\xi\) at the verification points in Table 2, * shear and bulk moduli \(G\) and \(K\), * assumptions from Section 2.2, the analytical solutions for pseudo-time and the remaining field variables are summarized in Table 3. ### Numerical simulations Single-element verification simulations were performed in ABAQUS/Standard versions R2016x and 2022 using a single core of an Intel Xeon E5-4627 v4 processor. Commonly used continuum element types C3DB, C3DBR, C3DBI, C3D20, and C3D20R are investigated in both software versions. 
Material constants are prescribed as defined in Table 4. The input files for all cases are provided in supplemental material. ABAQUS solves nonlinear mechanics problems in an incremental fashion for each simulation "step" or load sequence. For the present verification exercise, multiple steps are defined to facilitate extraction of field outputs at the precise pseudo-times where the analytical and numerical solutions agree if the constitutive code is correctly implemented (Tables 2 and 3). Displacement-controlled boundary conditions are prescribed to enforce the uniaxial strain conditions described in Eqn. 1-3. Increment sizes are defined such that each step consists of 100 increments. Numerical calculations of Mises stress \(q\), uniaxial strain \(\epsilon_{11}\), and stress components \(\sigma_{11}\), \(\sigma_{22}\), and \(\sigma_{33}\) are extracted at each pseudo-time point using an ABAQUS/Python script. Note these results could also be obtained from ABAQUS printed output provided such output is requested at appropriate simulation pseudo-times. For elements with multiple integration points, quantities of interest are averaged across all integration points. Percentage error magnitudes between the numerical and analytical solutions are calculated as \(|\phi_{\text{numerical}}-\phi_{\text{analytical}}|/\phi_{\text{analytical}}\times 100\) for each quantity of interest \(\phi\). A second set of single-element verification simulations using traction boundary conditions was also performed. In brief, the moving displacement boundary condition used above was replaced with a pressure boundary condition, with the prescribed pressure equal to the negative of the \(\sigma_{11}\) component of the Cauchy stress tensor from the analytical solution (Table 3). Results Each single-element simulation was completed within approximately 30 seconds with no clear influence of element type or solver version on simulation time (25.7 \(\pm\) 1.3 sec and 35.6 \(\pm\) 3.4 sec for displacement- and traction-controlled simulations, respectively). Simulations converged at each pseudo-time increment without the need for decreasing the increment size. Qualitatively, the numerical and analytical solutions are identical across all element types and software versions considered (Figure 4). The loading and unloading plateaus exhibit relatively large stiffnesses under uniaxial strain conditions compared to those associated with uniaxial stress conditions (Figure 4). Indeed, the \(\sigma_{11}\) Cauchy stresses reach values nearly an order of magnitude larger under uniaxial strain conditions. Quantitatively, for the displacement-controlled simulations, the results are relatively insensitive to the continuum element type or software version used, and extracted stress and strain measures are equivalent to within six to eight significant figures (see Supplemental Materials). One exception is the final equilibrium state after unloading where small residual stresses and strains are observed (Table 5). Comparing the analytical and numerical results, maximum errors in stress and strain are on the order of one ten-thousandth of a percent (Table 5). Although differences in the percent error magnitudes are observed among the various solver versions and element types investigated, the differences are relatively small. Accordingly, only the largest error magnitudes are reported in Table 5 for brevity. 
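The comparison itself reduces to a few lines once the field outputs have been exported (for example, by the ABAQUS/Python extraction script mentioned above) as plain arrays; the array names below are hypothetical.

```python
import numpy as np

def integration_point_average(values):
    """Average a field output over the integration points of the single
    element (relevant for the fully integrated element types)."""
    return float(np.mean(values))

def percent_error(numerical, analytical):
    """|phi_numerical - phi_analytical| / |phi_analytical| * 100 per quantity."""
    numerical = np.asarray(numerical, dtype=float)
    analytical = np.asarray(analytical, dtype=float)
    return np.abs(numerical - analytical) / np.abs(analytical) * 100.0

# e.g. err_q = percent_error(q_numerical_at_verification_points, q_analytical)
```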
Results are similar for traction-controlled conditions, although percent error magnitudes increase to approximately 1e-03 at the end of transformation in both loading and unloading. One exception is encountered using C3DBI elements, where percent error magnitudes reach approximately 1% at the end of transformation in unloading, and finite residual stresses and strains are observed at the end of unloading. For full details, see the supplemental material. ## 4 Discussion and Conclusions Using the method of exact solutions, we demonstrate verification of the superelastic constitutive model implemented in the ABAQUS/Standard implicit finite element solver. Specifically, uniaxial strain conditions are used to facilitate derivation of a closed-form solution for monotonic loading and unloading through the full range of transformation behavior. Although the uniaxial conditions are relatively simple to implement, they generate a nontrivial stress state that differs from the uniaxial stress condition typically used for model calibration. The verification exercise is performed by extracting quantities of interest at specific simulation increments where the numerical and analytical solutions should theoretically agree if the superelastic model is properly implemented. Simulation results reveal maximum errors in quantities of interest are on the order of one ten-thousandth of a percent, providing evidence that the model is indeed correctly implemented. The results are quantitatively similar for all solver versions and continuum element types investigated. Note that, while we observe relatively small error magnitudes, the errors exceed machine precision for the double-precision floating-point operations performed by ABAQUS/Standard. A potential explanation is numerical incrementation. Each ABAQUS solution is computed in 800 increments, and there is a potential for errors to be generated at each increment. The results do not indicate obvious trends in error magnitude with increasing pseudo-time increment during loading. Accordingly, errors generated through incrementation are either negligible compared to the overall observed error magnitudes, or they are offset by subsequent increments such that they do not accumulate in simulation pseudo-time. In contrast, error magnitudes do increase i) during unloading when using C3D8I elements in the 2022 solver under displacement control, and ii) at the end of transformation for all element types and solver versions under traction control (see supplemental material), possibly due to incrementation error. The largest error magnitudes are observed at the end of unloading when using the C3D8I elements under traction control. Based on further investigation, this latter observation is uniquely associated with a combination of C3D8I elements, traction boundary conditions, and the superelastic model and is believed to be caused by the particular formulation of the incompatible mode element type. A few limitations should be noted. First, although the fully integrated and simplified closed-form solution provided in Table 3 is convenient for performing the verification exercise, the solution is limited to strictly monotonic loading. Alternatively, the solution could be generalized to consider the activation and evolution of the martensite fraction \(\xi\) under other loading paths, for example, unloading from the midpoint on the upper plateau (i.e., between verification points 2 and 6 in Figure 2). 
Second, a number of simplifying assumptions were used to derive the closed-form analytical solution. Consideration of tension-compression asymmetry, differences in austenite and martensite elastic moduli, and extensions of the superelasticity model such as the superelastic-plastic implementation could be investigated in future work. Third, the verification tests considered herein only address three-dimensional continuum elements using the implicit solver in ABAQUS. Verification using other element (e.g., beam and tetrahedral) and solver types (e.g., ABAQUS/Explicit) could be investigated in future work. In conclusion, method of exact solutions code verification of the superelasticity model in ABAQUS was successful under uniaxial strain conditions. Code verification evidence like that generated by this study, alongside solution verification, experimental validation, and uncertainty quantification evidence, are useful to support the credibility of computational models, especially when model predictions are used to inform high-risk decision making. To facilitate reproducibility of this study using other hardware systems or software versions, and adaptation of the approach to other rate-based constitutive models, the full derivation of the analytical solution is provided in the Appendix, and simulation input files and post-processing scripts are provided as supplemental material. ## Conflict of interest statement One of the authors (NR) was formerly an employee of Dassault Systemes Simulia, makers of ABAQUS. ## Acknowledgments We thank Snehal S. Shetye and Andrew P. Baumann (U.S. FDA) for reviewing the manuscript. This study was funded by the U.S. FDA Center for Devices and Radiological Health (CDRH) Critical Path program. The research was supported in part by an appointment to the Research Participation Program at the U.S. FDA administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and FDA. The findings and conclusions in this article have not been formally disseminated by the U.S. FDA and should not be construed to represent any agency determination or policy. The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services. Appendix In the following, standard typeface symbols are scalars (e.g., \(\phi\)), boldface symbols denote second-order tenors (e.g., \(\mathbf{F}\) or \(\mathbf{\sigma}\)), and blackboard bold characters denote fourth-order tensors (e.g., \(\mathbb{A}\)). Additionally, the overdot operator (e.g., \(\dot{A}\)) indicates a time derivative. ### General kinematic equations Begin with the problem setup provided in Section 2.3 of the manuscript. The deformation gradient tensor is \[\mathbf{F}=\mathbf{I}+\nabla\mathbf{u} \tag{6}\] or, in matrix form, \[\mathbf{F}=\begin{bmatrix}\lambda&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix} \tag{7}\] where \(\mathbf{I}\) is the second-order identity tensor and \(\lambda=\frac{l}{L_{0}}\) is the stretch. For finite-strain problems, ABAQUS solves the problem incrementally and calculates the strain by integrating the strain increments. The resulting strain measure is thus the logarithmic or "true" strain \[\mathbf{\epsilon}=\ln\mathbf{V} \tag{8}\] where \(\ln\) is the principal matrix logarithm and \[\mathbf{V}=\sqrt{\mathbf{F}\mathbf{F}^{T}} \tag{9}\] is the left Cauchy stretch tensor. 
Since \(\mathbf{F}\) is diagonal here, \(\mathbf{F}=\mathbf{F}^{T}\) and \(\mathbf{V}=\mathbf{F}\). Therefore, the logarithmic strain tensor is simply \(\ln\mathbf{F}\) or \[\mathbf{\epsilon}=\begin{bmatrix}\ln\lambda&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}\,. \tag{10}\] Since the only non-zero strain component for this uniaxial strain problem is \(\epsilon_{11}\), let \[\epsilon =\epsilon_{11}\] \[=\ln\lambda \tag{11}\] to simplify notation. The volumetric strain \(\epsilon_{V}\) is next defined as \[\epsilon_{V} =\operatorname{tr}\mathbf{\epsilon}\] \[=\epsilon_{11}+\epsilon_{22}+\epsilon_{33}\] \[=\epsilon\,, \tag{12}\] which represents a measure of volume change or dilation, where \(\operatorname{tr}\mathbf{A}=\mathbf{A}:\mathbf{I}\) is the trace operator for a second-order tensor and \((:)\) is the double-inner product operator \(\mathbf{A}:\mathbf{B}=A_{ij}B_{ij}\). Note that the volumetric strain is sometimes defined in other literature as \(\epsilon_{V}=\frac{1}{3}\operatorname{tr}\mathbf{\epsilon}\), which represents a measure of mean normal strain. The strain rate tensor \[\dot{\mathbf{\epsilon}}=\begin{bmatrix}\dot{\mathbf{\epsilon}}&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix} \tag{13}\] is obtained by taking the time derivative of \(\mathbf{\epsilon}\). The deviatoric strain rate tensor \(\dot{\mathbf{\epsilon}}\) is \[\dot{\mathbf{\epsilon}}=\dot{\mathbf{\epsilon}}-\frac{1}{3}\operatorname{tr}\dot{\mathbf{ \epsilon}}\,\mathbf{I}\,, \tag{14}\] or, in matrix form, \[\dot{\mathbf{\epsilon}}=\begin{bmatrix}\frac{2}{3}\dot{\mathbf{\epsilon}}&0&0\\ 0&-\frac{1}{3}\dot{\mathbf{\epsilon}}&0\\ 0&0&-\frac{1}{3}\dot{\mathbf{\epsilon}}\end{bmatrix}\,. \tag{15}\] The scalar equivalent strain rate \(\dot{\mathbf{\epsilon}}\) is \[\dot{\mathbf{\epsilon}} =\sqrt{\frac{2}{3}\dot{\mathbf{\epsilon}}:\dot{\mathbf{\epsilon}}}\] \[=\frac{2}{3}\dot{\mathbf{\epsilon}}\,. \tag{16}\] ### Equations for linear, monotonic transformation behavior The superelastic constitutive model in ABAQUS [5; 6] is based on the work of Aurrichio and Taylor [7; 8] and leverages generalized plasticity theory to model the dependency of the material stiffness on the current stress state (see Online SIMULIA User Assistance 2022 >Abaqus >Materials >Elastic Mechanical Properties >Superelasticity). The constitutive model uses the additive strain rate decomposition \[\dot{\mathbf{\epsilon}}^{e}=\dot{\mathbf{\epsilon}}-\dot{\mathbf{\epsilon}}^{\text{tr}}\,, \tag{17}\] where \(\dot{\mathbf{\epsilon}}^{e}\) is the elastic strain rate tensor and \(\dot{\mathbf{\epsilon}}^{\text{tr}}\) is the transformation strain rate tensor. The Cauchy stress rate tensor is then \[\dot{\mathbf{\sigma}}=\mathbf{D}:\dot{\mathbf{\epsilon}}^{e} \tag{18}\] where \(\mathbf{D}\) is the fourth-order elasticity or stiffness tensor and \((:)\) is the operator \(\mathbf{A}:\mathbf{B}=A_{ijkl}B_{kl}\). For an isotropic material, \[\mathbf{D}=2G\,\left(\mathbf{I}-\frac{1}{3}\,\mathbf{I}\otimes\mathbf{I} \right)+K\,\mathbf{I}\otimes\mathbf{I}\,, \tag{19}\] where \(G\) is the shear modulus, \(K\) is the bulk modulus, \((\mathbf{I}\otimes\mathbf{I})_{ijkl}=\delta_{ij}\delta_{kl}\), \((\mathbf{I})_{ijkl}=\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta _{jk}\right)\), and \(\delta_{ij}\) is the Kronecker delta function \(\delta_{ij}=\begin{cases}0&i\neq j\\ 1&i=j\end{cases}\)[9]. Substituting Eqn. 19 into Eqn. 
18 and simplifying, the Cauchy stress rate can be written in Hooke's law form as \[\dot{\mathbf{\sigma}} =2G\left(\dot{\mathbf{\epsilon}}^{e}-\frac{1}{3}\text{tr}\,\dot{\mathbf{ \epsilon}}^{e}\mathbf{I}\right)+K\,\text{tr}\,\dot{\mathbf{\epsilon}}^{e}\mathbf{I}\] \[=2G\dot{\mathbf{\epsilon}}^{e}+K\,\text{tr}\,\dot{\mathbf{\epsilon}}^{e} \mathbf{I}\,. \tag{20}\] In ABAQUS, the flow rule describing the transformation strain rate for superelastic materials is \[\dot{\mathbf{\epsilon}}^{\text{tr}}=\mathbf{\epsilon}^{L}\,\dot{\mathbf{\epsilon}}\,\frac {\partial G^{\text{tr}}}{\partial\mathbf{\sigma}}\,, \tag{21}\] where \(\epsilon^{L}\) is a material constant, \(\xi\) is the martensite fraction, and \(G^{\text{tr}}\) is a Drucker-Prager type transformation potential \[G^{\text{tr}}=q-p\tan\psi\,. \tag{22}\] In the above, \(p\) is hydrostatic pressure, \(\psi\) is a scaling constant, and \(q\) is the von Mises stress \[q=\sqrt{\frac{3}{2}\mathbf{S}:\mathbf{S}}\,, \tag{23}\] where \(\mathbf{S}\) is the deviatoric stress tensor \[\mathbf{S}=\boldsymbol{\sigma}-\frac{1}{3}\,\text{tr}\boldsymbol{\sigma}. \tag{24}\] As stated earlier, here we assume symmetric compression and tension behavior (i.e., \(\sigma_{L}^{S}=\sigma_{cL}^{S}\)). Thus, \(\psi=0\), and the transformation potential takes on a von Mises form as simply \[G^{\text{tr}}=q\,. \tag{25}\] ### An aside: radial return and the direction tensor \(\mathbf{n}\) Because the transformation behavior has been simplified as von Mises-type, the deviatoric stress and deviatoric strain rate tensors point in the same direction (in 6D space). The transformation potential is \(q\), and the transformation strain rate is proportional to (i.e., in the direction of) the gradient of \(q\) with respect to stress \(\frac{\partial q}{\partial\boldsymbol{\sigma}}\), which we denote as \(\mathbf{n}\). Expanding using Eqn. 23 and applying chain rule, \[\mathbf{n} =\frac{\partial q}{\partial\boldsymbol{\sigma}}\] \[=\frac{\partial\sqrt{\frac{3}{2}\mathbf{S}:\mathbf{S}}}{\partial \boldsymbol{\sigma}}\] \[=\frac{\partial\left(\mathbf{S}:\mathbf{S}\right)^{\frac{1}{2}}} {\partial\mathbf{S}}\cdot\frac{\partial\mathbf{S}}{\partial\boldsymbol{ \sigma}}\] \[=\frac{1}{2}\left(\frac{3}{2}\mathbf{S}:\mathbf{S}\right)^{- \frac{1}{2}}\frac{3}{2}\left(2\mathbf{S}\right)\cdot\frac{\partial(\sigma- \frac{1}{3}\,\text{tr}\boldsymbol{\sigma})}{\partial\boldsymbol{\sigma}}\] \[=\frac{3}{2}\frac{\mathbf{S}}{q}\cdot\left(\mathds{1}-\frac{1}{3 }\,\mathbf{I}\otimes\mathbf{I}\right)\] \[=\frac{3}{2}\frac{\mathbf{S}}{q}\,. \tag{26}\] Note the inner product of \(\mathbf{n}\) with itself is \[\mathbf{n}:\mathbf{n} =\frac{3}{2}\left(\frac{3}{2}\frac{\mathbf{S}:\mathbf{S}}{q^{2}}\right)\] \[=\frac{3}{2}\frac{q^{2}}{q^{2}}\] \[=\frac{3}{2}\,. \tag{27}\] The double-inner product between the deviatoric stress tensor \(\mathbf{S}\) and the direction of the deviatoric strain \(\mathbf{n}\) is also useful since \[\mathbf{S}:\mathbf{n} =\frac{3}{2}\frac{\mathbf{S}:\mathbf{S}}{q}\] \[=\frac{q^{2}}{q}\] \[=q\,. \tag{28}\] Radial return algorithms project the trial stress back onto the yield (here transformation) surface by scaling the stress radially with respect to the hydrostatic axis \(\sigma_{1}=\sigma_{2}=\sigma_{3}\), where \(\sigma_{i}\) are principal stresses. 
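The identities in Eqs. 26–28 are easy to confirm numerically, which is a convenient sanity check when implementing the analytical solution; a minimal sketch (the example stress state is arbitrary, chosen only for illustration):

```python
import numpy as np

def deviatoric_direction(sigma):
    """Return S, q and n = (3/2) S / q for a 3x3 Cauchy stress (Eqs. 23-26)."""
    S = sigma - np.trace(sigma) / 3.0 * np.eye(3)    # deviatoric stress, Eq. 24
    q = np.sqrt(1.5 * np.tensordot(S, S))            # von Mises stress,  Eq. 23
    n = 1.5 * S / q                                  # direction tensor,  Eq. 26
    return S, q, n

sigma = np.diag([300.0, 100.0, 100.0])               # arbitrary axisymmetric state
S, q, n = deviatoric_direction(sigma)
assert np.isclose(np.tensordot(n, n), 1.5)           # n : n = 3/2, Eq. 27
assert np.isclose(np.tensordot(S, n), q)             # S : n = q,   Eq. 28
```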
We assume radial return to be exact under the given simplifications and approximations, specifically, equal transformation stresses in compression and tension and thereby von Mises-like transformation behavior, and proportional (radial) loading. Accordingly, the loading direction coincides with the projection direction and the (pseudo)plastic strain rate direction. As derived below, \[\dot{\mathbf{e}}^{tr}=\dot{e}^{tr}\,\mathbf{n} \tag{29}\] where \(\dot{\mathbf{e}}^{tr}\)and \(\dot{e}^{tr}\) are the deviatoric transformation strain rate tensor and equivalent scalar, respectively, and \(\mathbf{n}\) specifies the direction of the deviatoric transformation strain rate (the normal direction to the transformation surface with the given problem description). Following standard Mises plasticity arguments and given proportional loading, \(\dot{\mathbf{e}}\) and \(\dot{\mathbf{e}}^{tr}\)are collinear. Accordingly, using Eqn. 16, the deviatoric strain rate tensor may also be written \[\dot{\mathbf{e}} =\dot{e}\,\mathbf{n}\] \[=\frac{2}{3}\dot{\mathbf{e}}\,\mathbf{n} \tag{30}\] and therefore, given Eqn. 15, \[\mathbf{n}=\begin{bmatrix}1&0&0\\ 0&-\frac{1}{2}&0\\ 0&0&-\frac{1}{2}\end{bmatrix}\,. \tag{31}\] ### Equations for linear, monotonic transformation behavior (continued) Continuing from Eqn. 21, using Eqn. 25 and the definition of \(\mathbf{n}\), the last term on the right-hand side becomes \[\frac{\partial G^{\text{tr}}}{\partial\sigma} =\frac{\partial q}{\partial\sigma}\] \[=\mathbf{n}\,. \tag{32}\] The transformation strain rate from Eqn. 21 can thus be written \[\dot{\mathbf{e}}^{\text{tr}}=\mathbf{\epsilon}^{L}\,\dot{\xi}\,\mathbf{n}\,, \tag{33}\] and the scalar equivalent transformation strain rate becomes \[\dot{\mathbf{\epsilon}}^{\text{tr}} =\sqrt{\frac{2}{3}\dot{\mathbf{\epsilon}}^{\text{tr}}:\dot{ \mathbf{\epsilon}}^{\text{tr}}}\] \[=\mathbf{\epsilon}^{L}\,\dot{\xi}\,. \tag{34}\] Note that \(\mathbf{n}\) is deviatoric and \(\operatorname{tr}\mathbf{n}=0\). Therefore, \[\operatorname{tr}\dot{\boldsymbol{\epsilon}}^{\operatorname{tr}}=0 \tag{35}\] and \[\operatorname{tr}\dot{\boldsymbol{\epsilon}}^{e} =\operatorname{tr}\,\left(\dot{\boldsymbol{\epsilon}}-\dot{ \boldsymbol{\epsilon}}^{\operatorname{tr}}\right)\] \[=\operatorname{tr}\dot{\boldsymbol{\epsilon}}\] \[=\dot{\epsilon}_{V}\,, \tag{36}\] where \(\dot{\boldsymbol{\epsilon}}_{V}\) is the volumetric strain rate. Using Eqns. 14, 17, and 33--36, the Cauchy stress rate from Eqn. 20 becomes \[\dot{\boldsymbol{\sigma}} =2G\left[\left(\dot{\boldsymbol{\epsilon}}-\boldsymbol{\epsilon} ^{L}\,\dot{\boldsymbol{\xi}}\,\mathbf{n}\right)-\frac{1}{3}\operatorname{tr} \,\left(\dot{\boldsymbol{\epsilon}}-\boldsymbol{\epsilon}^{L}\,\dot{\xi}\, \mathbf{n}\right)\,\mathbf{I}\right]+K\,\dot{\boldsymbol{\epsilon}}_{V} \mathbf{I}\] \[=2G\left[\left(\dot{\boldsymbol{\epsilon}}-\boldsymbol{\epsilon} ^{L}\,\dot{\xi}\,\mathbf{n}\right)-\frac{1}{3}\operatorname{tr}\,\left(\dot{ \boldsymbol{\epsilon}}\right)\mathbf{I}\right]+K\,\dot{\boldsymbol{\epsilon}}_ {V}\mathbf{I}\] \[=2G\left(\dot{\boldsymbol{\epsilon}}-\boldsymbol{\epsilon}^{L}\, \dot{\xi}\,\mathbf{n}\right)+K\,\dot{\boldsymbol{\epsilon}}_{V}\,\mathbf{I}\,. 
\tag{37}\] The Cauchy stress rate can be split into deviatoric and hydrostatic components \[\dot{\boldsymbol{\sigma}}=\dot{\mathbf{S}}-\dot{p}\,\mathbf{I}\,, \tag{38}\] where \[\dot{\mathbf{S}}=2G\left(\dot{\boldsymbol{\epsilon}}-\boldsymbol{\epsilon}^{L }\,\dot{\xi}\,\mathbf{n}\right) \tag{39}\] is the deviatoric stress rate and \[\dot{p}=-K\,\dot{\boldsymbol{\epsilon}}_{V} \tag{40}\] is the hydrostatic pressure rate. The martensite fraction rate \(\dot{\xi}\) is a function of the equivalent stress, \[\dot{\xi}=f(q)\,, \tag{41}\] and this function is called the transformation law (equivalent to a work-hardening law in plasticity). In general, the martensite fraction rate must be calculated for each increment, as its sign and magnitude depend on the current stress state as well as the direction and magnitude of the stress rate. However, for the linear hardening approximation and monotonic loading, we can define the martensite fraction directly as a linear function of the von Mises equivalent stress. For loading, we have \[\xi_{\text{L}}\left(q\right)=\begin{cases}0&q\leq\sigma_{L}^{S}\\ \frac{q-\sigma_{L}^{S}}{\sigma_{L}^{E}-\sigma_{L}^{S}}&\sigma_{L}^{S}<q<\sigma _{L}^{E}\\ 1&q\geq\sigma_{L}^{E}\end{cases}\,, \tag{42}\] and for unloading, \[\xi_{\text{U}}\left(q\right)=\begin{cases}0&q\leq\sigma_{U}^{E}\\ \frac{q-\sigma_{U}^{E}}{\sigma_{U}^{S}-\sigma_{U}^{E}}&\sigma_{U}^{E}<q<\sigma _{U}^{S}\\ 1&q\geq\sigma_{U}^{S}\end{cases}\,. \tag{43}\] For strictly monotonic proportional (constant \(\mathbf{n}\)) loading or unloading, we can now integrate the rate tensors in Eqns. 38--40 from \(t=0\) to \(t=t\) to obtain \[\boldsymbol{\sigma}=\mathbf{S}-p\,\mathbf{I} \tag{44}\] \[\mathbf{S}=2G\,\left(\mathbf{e}-\epsilon^{L}\,\xi\,\mathbf{n}\right) \tag{45}\] \[p=-K\,\epsilon_{V}\,. \tag{46}\] Similarly, substituting Eqn. 45 into Eqn. 28 and expanding using Eqns. 30 and 27, the equivalent stress \(q\) becomes \[q =\mathbf{S}:\mathbf{n}\] \[=2G\left(\mathbf{e}:\mathbf{n}-\epsilon^{L}\,\xi\,\mathbf{n}: \mathbf{n}\right)\] \[=2G\left(\frac{2}{3}\epsilon\,\mathbf{n}:\mathbf{n}-\epsilon^{L} \,\xi\,\mathbf{n}:\mathbf{n}\right)\] \[=2G\left(\epsilon-\frac{3}{2}\epsilon^{L}\,\xi\right)\] \[=2G\,\epsilon-3G\epsilon^{L}\,\xi\,. \tag{47}\] Note the integrated equations take the same form for both loading and unloading. Pairings of \(q\) and \(\xi\), however, define specific locations on the stress-strain curve such that the solutions for loading and unloading are unique (see Figure 2 and Table 2). ### Final analytical solution for uniaxial strain conditions Given \(q\) and \(\xi\) (Table 2), solve Eqn. 47 for the total logarithmic strain \(\epsilon\), \[\boxed{\epsilon=\underbrace{\frac{q}{2G}}_{\text{elastic strain}}+\underbrace{ \frac{3}{2}\epsilon^{L}\xi}_{\text{transformation strain}}}. \tag{48}\] Next, use Eqn. 26 to calculate the deviatoric stress tensor, \[\boxed{\mathbf{S}=\frac{2}{3}q\,\mathbf{n}} \tag{49}\] and Eqns. 46 and 12 to calculate the hydrostatic pressure, \[\boxed{p=-K\epsilon}. \tag{50}\] Use Eqn. 44 to calculate the Cauchy stress tensor, \[\boxed{\boldsymbol{\sigma}=\mathbf{S}-p\mathbf{I}}. \tag{51}\] Finally, using Eqns. 5 and 11, calculate the corresponding solution pseudo-time \(t\), \[\boxed{t=\begin{cases}\frac{L_{0}(\lambda-1)}{u}&\text{loading}\\ 1-\frac{L_{0}(\lambda-1)}{u}&\text{unloading}\end{cases}} \tag{52}\] where \(\lambda=\exp\left(\epsilon\right)\).
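For completeness, the boxed relations of Eqs. 48–52 translate directly into a short routine: given a verification pair \((q,\xi)\) from Table 2 and the elastic constants, it returns the strain, stresses, and the pseudo-time at which the simulation output should be compared. The function and variable names are illustrative, and \(G\) and \(K\) are assumed to have been computed from the (equal) elastic moduli in the usual way.

```python
import numpy as np

def analytical_uniaxial_strain_state(q, xi, G, K, eps_L, L0, u, loading=True):
    """Closed-form uniaxial-strain solution at a verification point (Eqs. 48-52)."""
    n = np.diag([1.0, -0.5, -0.5])               # direction tensor, Eq. 31
    eps = q / (2.0 * G) + 1.5 * eps_L * xi       # total log strain, Eq. 48
    S = (2.0 / 3.0) * q * n                      # deviatoric stress, Eq. 49
    p = -K * eps                                 # hydrostatic pressure, Eq. 50
    sigma = S - p * np.eye(3)                    # Cauchy stress, Eq. 51
    lam = np.exp(eps)                            # stretch
    t = L0 * (lam - 1.0) / u                     # pseudo-time, Eq. 52 (loading)
    if not loading:
        t = 1.0 - t
    return {"epsilon_11": eps, "sigma": sigma, "p": p, "q": q, "t": t}
```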
2308.07353
Quantum information entropies for solitonic systems
Particle with position-dependent mass is a useful concept in the context of semiconductor physics. We study a particle with the solitonic mass distribution in two different forms of potential: the quartic and the symmetric potential. We estimate the Shannon entropy and Fisher information associated with the ground state of the particle in these two scenarios by obtaining the wave-function from the Zhu-Kroemer equation. The ground state of the particle in each case satisfies the Bialynicki-Birula-Mycielski inequality. Upon comparing all four models under consideration, we have observed that the Shannon entropy is greater for the solitonic mass distribution when it is subjected to a quartic potential.
Ramkumar Radhakrishnan, Mariyah Ughradar, Vikash Kumar Ojha
2023-08-14T10:15:30Z
http://arxiv.org/abs/2308.07353v1
# Quantum information entropies for solitonic systems ###### Abstract Particle with position-dependent mass is a useful concept in the context of semiconductor physics. We study a particle with the solitonic mass distribution in two different forms of potential: the quartic and the symmetric potential. We estimate the Shannon entropy and Fisher information associated with the ground state of particle in these two scenarios by obtaining the wave-function from Zhu-Kroemer equation. The ground state of the particle in each case satisfies the Bialynicki-Birula-Mycielski inequality. Upon comparing all four models under consideration, we have observed that the Shannon entropy is greater for the solitonic mass distribution when it is subjected to a quartic potential. ## I Introduction Solitons, also known as solitary waves, encompass a range of physical phenomena, including shock waves in sound waves and ultra-short optical pulses in optical fibers [1]. From a mathematical perspective, solitons can be understood as localized solutions to partial differential equations that describe nonlinear systems with infinite degrees of freedom [2]. The groundbreaking contributions of Feynman have significantly influenced and propelled the field of Quantum information processing [3]. Consequently, the study of solitons holds great importance in the realm of Quantum information processing. More precisely, the study of solitons are profound in the field of Quantum communication [4; 5; 6]. Solitons are distinct from elementary particles that emerge from quantized wave-like excitations of the fields [7]. Their characteristics are primarily determined by classical equations. This research took place during the early stages of quantum field theory in the late 1960s. There exist important connections between the solitons of a theory and the wave-like fields that satisfy the linearized field equations. Quantizing the latter results in states of elementary particles, with the interactions between these particles being governed by the nonlinear components [2]. Initially, in regions far from the soliton, the field progressively tends towards the vacuum state, adhering to the rate determined by the linearized field equation. Therefore, if the linearized equation lacks a mass term, leading to the existence of massless elementary particles, the impact of the soliton will extend over a considerable distance, diminishing inversely as distance increases. This comprehension has implications for multiple facets of quantum field theory within condensed matter setups [8]. Initially, in quantum mechanics, the Schrodinger equation focused on physical systems with a constant effective mass. However, advancements in condensed matter physics presented a new challenge: the existence of non-relativistic particles that exhibit a position-dependent effective mass. Subsequently, the study of systems with position-dependent effective mass has garnered significant attention from researchers due to its wide range of applications. In conjunction with the concept of position-dependent effective mass arises the theory of communication, formally known as the Information theory proposed by Claude Shannon around mid-1900s. Entropy, as a measurable physical property, is commonly associated with a state of disorder, randomness, or uncertainty. In the context of thermodynamics [9], entropy represents a measure of irreversibility within a physical system. 
However, it is important to note that this concept of entropy differs from information entropy, which is the focus of our interest. Several parameters highlight the distinctions between thermodynamic entropy and information entropy. Thermodynamic entropy considers the number of possible structural arrangements within a system, while informational entropy pertains to choices made within a communication system. In recent times, there has been a growing interest in utilizing information-theoretic measures for quantum systems. Notably, entropic uncertainty relations have been explored as alternatives to the Heisenberg uncertainty principle [10]. Furthermore, Shannon entropy [11] and entropic uncertainty relations find applications in various areas. These include the study of squeezed localization [12], fractional revivals [13; 14], and reconstruction of charge and momentum densities in atomic and molecular systems using maximum entropy procedures [15]. Additionally, Shannon entropy [11] finds utility in Measure theory [16], the study of open quantum systems involving Markov chains [17], machine learning for decision tree algorithms, and Bayesian inference [18]. The Fisher information, also referred to as Fisher's entropy, is a fascinating concept that emerges from communication theory and is recognized as a precursor to Shannon entropy [19]. In the quantum domain, Fisher information is inherently connected to the uncertainty connected with measurement, as explained in references [20]. Essentially, Fisher's information measures the information content carried by a specific observable concerning a parameter, while considering its inherent probability [19]. This article is structured as follows: In section II, we provide an overview of Shannon entropy, entropic uncertainty, and Fisher's information measure. In section III, we provide a detailed explanation of the quantum mechanics of position-dependent mass. In section IV, V, we calculate Shannon and Fisher's information measure for different solitonic systems and discuss the results in section VI. We have computed all calculations for systems in their ground state and by considering the convention \(\hbar=c=1\). ## II Shannon entropy and Fisher's information measure - A bird's eye view In the realm of information theory, Shannon entropy, originally introduced by Claude Shannon in 1948 in his publication 'A Mathematical Theory of Communication [11], represents an average rate of information generation from a stochastic data source. A higher value of Shannon entropy implies that a new value from the process provides more information. Mathematically, for a given distribution function \(p_{i}\), Shannon entropy can be defined as follows [11]: \[S=-\sum p_{i}\text{ln}(p_{i}). \tag{1}\] Shannon entropy was not derived from pre-established assumptions but rather developed as a means to quantify the uncertainty principle. It bears a resemblance to Gibbs entropy and is occasionally referred to as "Boltzmann-Gibbs-Shannon" entropy [21]. Shannon entropy can be conceptualized as the measure of uncertainty within a probability distribution connected to an information source. In the context of a given probability distribution in position space, Shannon entropy in the \(n\)th state is defined as. \[S_{x}=-\int_{-\infty}^{\infty}dx\bigg{[}(\psi_{n}(x))^{2}\text{ln}(\psi_{n}(x) )^{2}\bigg{]}. 
\tag{2}\] In the reciprocal space (momentum representation), Shannon entropy in the \(n\)th state is defined by \[S_{k}=-\int_{-\infty}^{\infty}dk\bigg{[}(\phi_{n}(k))^{2}\text{ln}(\phi_{n}(k) )^{2}\bigg{]}, \tag{3}\] where, \[\phi_{n}(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-ikx}\psi_{n}(x)dx. \tag{4}\] While the uncertainty principle deals with the limits of measurement in quantum mechanics, Shannon entropy deals with the quantification of information in a probabilistic framework. However, there is a conceptual similarity between the two in terms of the idea of uncertainty and the limits of knowledge or measurement. In quantum mechanics, the uncertainty principle implies that there are inherent limits to the simultaneous knowledge of certain pairs of complementary properties, such as position and momentum, or energy and time. This uncertainty arises from the wave-particle duality of quantum systems and the probabilistic nature of quantum measurements. Shannon entropy, on the other hand, provides a measure of uncertainty or unpredictability in a probability distribution. It quantifies the average amount of information needed to describe or predict the outcomes of a random variable. High entropy implies higher uncertainty or unpredictability, while low entropy implies lower uncertainty or greater predictability. Although these concepts are not directly connected, they both reflect the fundamental limitations on our ability to precisely know or predict certain aspects of physical systems. The uncertainty principle sets limits on the simultaneous knowledge of complementary properties in quantum mechanics, while it quantifies the uncertainty or information content of random variables in a probabilistic framework. This is been briefly discussed in [22; 23], which can be represented as \[S_{x}+S_{k}\geq D(1+\text{ln}\pi), \tag{5}\] where \(D\) denotes the dimension of the system. The computation of Shannon entropy analytically is challenging due to the presence of logarithmic terms in the integrals, especially considering the system's dimension. As a result, it has only been determined for specific lower-energy states of certain systems such as the Harmonic oscillator [24], the Poschl-Teller potential [25], the Morse potential [26], and highly excited states of the Coulomb potential [27]. The foundation of the second law of thermodynamics lies in the concept of entropy [28]. According to this law, the overall entropy of a closed system increases as time progresses and approaches a maximum value in the limit of infinite time. In its classical formulation, entropy is applicable to systems in equilibrium and is defined by Clausius and others (see ref.[29]), can be demonstrated by \[dS=\frac{\delta Q}{T}. \tag{6}\] As mentioned earlier, when viewed through the lens of information theory, entropy serves as a metric for the level of disorder within a system [22]. Consequently, the entropy of a thermodynamic system escalates as the count of accessible microstates for the system expands. In the realm of information theory, there exists an alternative form of information known as Fisher information (\(F\)) [30]. It is of the form, \[F=\sum_{i=1}^{m}\frac{1}{p_{i}}\bigg{[}\frac{dp_{i}}{di}\bigg{]}^{2}, \tag{7}\] where, \(p_{i}\) denotes the probability density for finding the system in micro-state \(i\). In the context of a specific thermodynamic system [9], an increase in the number of microstates leads to a corresponding increase in entropy. 
This occurs because the system becomes more disordered, resulting in higher entropy. For instance, if the system occupies a single microstate, the probability density function exhibits a steep slope around that state, indicating high Fisher information. Conversely, if the system exists in numerous states with approximately equal probabilities, the probability density function becomes flat, with a nearly zero slope and Fisher information. Therefore, Fisher information serves as a measure of order within the system, representing a scenario where the system exists predictably in one or a few high-probability microstates. It should be noted that despite the resemblance of equations (1) and (7), these two quantities do not measure the same property of the system. Shannon entropy is determined by the probability density function, while Fisher information depends on the derivative of the probability density function, representing its slope. Consequently, it can be regarded as a global property, while Fisher information can be considered a local property relative to the probability density function. Therefore, Fisher information and Shannon entropy are not analogous to each other. Fisher's information for an observable \(\zeta\) is defined as \[F_{\zeta}=\int_{-\infty}^{\infty}d\zeta\bigg{[}\rho(\zeta)\bigg{(}\frac{d}{d \zeta}\text{ln}(\rho(\zeta))\bigg{)}^{2}\bigg{]}>0, \tag{8}\] We calculate Fisher's information for an observable \(\zeta\) whose probability density is \(|\psi(\zeta,t)|^{2}\) is given by \[F_{\zeta}=\int_{-\infty}^{\infty}d\zeta\bigg{[}(\psi(\zeta,t))^{2}\bigg{(} \frac{d}{d\zeta}\text{ln}(\psi(\zeta,t))^{2}\bigg{)}^{2}\bigg{]}>0,\] However, in one dimensional stationary quantum systems (solitonic systems), at position space, \(\rho(\zeta)=|\psi(\zeta,t)|^{2}\approx|\psi(\zeta)|^{2}\), is the probability density. It can be obtained by approximating \(\psi(\zeta,t)\) to \(\psi(\zeta)\). \[F_{\zeta}=\int_{-\infty}^{\infty}d\zeta\bigg{[}(\psi(\zeta))^{2}\bigg{(}\frac {d}{d\zeta}\text{ln}(\psi(\zeta))^{2}\bigg{)}^{2}\bigg{]}. \tag{9}\] Similarly, we compute Fisher information for the same soliton in the momentum space. \[F_{k}=\int_{-\infty}^{\infty}dk\bigg{[}(\phi(k,t))^{2}\bigg{(}\frac{d}{dk} \text{ln}(\phi(k,t))^{2}\bigg{)}^{2}\bigg{]}, \tag{10}\] Implementing the same approximation as before we get Fisher information as \[F_{k}=\int_{-\infty}^{\infty}dk\bigg{[}(\phi(k))^{2}\bigg{(}\frac{d}{dk}\text {ln}(\phi(k))^{2}\bigg{)}^{2}\bigg{]}. \tag{11}\] where, \(\phi(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-ik\zeta}\psi(\zeta)d\zeta\). Uncertainty, in the context of Fisher information, refers to the degree of imprecision or lack of knowledge about the true value of a parameter. It captures the spread or variability in the estimates obtained from different samples or measurements. It measures the uncertainty or randomness associated with parameter estimation based on observed data, with higher entropy implying lower uncertainty and better estimation performance. This is represented as follows \[F_{\zeta}F_{k}\geq 4\hbar^{2}. \tag{12}\] where, \(\hbar\) is reduced Planck's constant. ## III Quantum mechanics with position dependent mass In standard quantum mechanics, the mass of a particle is typically treated as a constant. However, there are certain physical systems where the mass of a particle can depend on its position within the system. 
This introduces an additional complexity in the mathematical formalism of quantum mechanics, but it can be accommodated using position-dependent mass quantum mechanics. The Schrodinger equation, which describes the behavior of quantum systems, needs to be modified to account for the position-dependent mass. The general form of the Schrodinger equation for a particle with position-dependent mass is \[\frac{-\hbar^{2}}{2m(x)}\frac{d^{2}\psi}{dx^{2}}+V(x)\psi=E\psi, \tag{13}\] where \(E\) is the energy, \(\psi(x)\) is the wave function and \(V(x)\) is the potential energy. The position-dependent mass function \(m(x)\) introduces a spatial dependence in the kinetic energy term of the Schrodinger equation. This means that the behavior of the particle can vary depending on its position within the system. Different choices for \(m(x)\) can lead to a variety of interesting physical phenomena. Throughout the following sections we take the mass distribution to be \[m(x)=\frac{m_{0}}{1+x^{2}}. \tag{14}\] Thus, for a position-dependent mass, the kinetic energy term (\(\hat{T}\)) of the Schrodinger equation eq. (13) is modified as follows \[\hat{T}=\frac{-\hbar^{2}}{2\sqrt{m(x)}}\frac{d^{2}}{dx^{2}}\frac{1}{\sqrt{m(x)}}. \tag{15}\] The Hamiltonian operator (\(\hat{H}\)), which describes the energy of a stationary system (solitonic system) in quantum mechanics, can be expressed in the following manner \[\hat{H}=\frac{-\hbar^{2}}{2\sqrt{m(x)}}\frac{d^{2}}{dx^{2}}\frac{1}{\sqrt{m(x)}}+V(x). \tag{16}\] The Schrodinger equation then takes the form \[\frac{-\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}}+\frac{-\hbar^{2}}{2}\frac{m^{\prime}(x)}{m(x)^{2}}\frac{d\psi(x)}{dx}+\Bigg{[}\frac{\hbar^{2}}{4}\frac{m^{\prime\prime}(x)}{m(x)^{2}}-\frac{3\hbar^{2}}{8}\frac{m^{\prime}(x)^{2}}{m(x)^{3}}+V(x)\Bigg{]}\psi(x)=E\psi(x). \tag{17}\] This differential equation [eq. (17)] was derived by Zhu and Kroemer [31] and is used in the context of semiconductor physics to explore the electronic properties of semiconductor devices. In the next section, we shall discuss Shannon and Fisher's information measure for different solitonic systems. The motivation for this particular work comes from the context of solitons in field theory, where we look for time-independent, finite-energy, and localized solutions. The majority of research on solitons focuses on theoretical analyses of soliton solutions in quantum field theories that are not applicable to describing our physical universe. When considering particle physics models that accurately represent our world, solitons often possess unconventional properties such as exotic magnetic monopole charges [2], making them relatively heavy. These applications of solitons represent only a small portion of the extensive literature on solitons. Moreover, there are exceptional cases, like the description of quarks and leptons through a dual electromagnetic gauge group [2] using a magnetic monopole framework, which is even rarer in the soliton literature. ## IV Solitons in a quartic potential The double-well potential, also known as the quartic potential, holds significant importance in quantum mechanics, quantum field theory, and other fields for investigating different physical phenomena and mathematical properties. This potential is ubiquitous as it serves as a model to illustrate the concept of instantons [2] and the Feynman path integral formulation in quantum mechanics [32].
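Although the treatment in this work is analytical, the symmetrized Hamiltonian of eq. (16) can also be handled directly on a grid, which is a useful sanity check on the wave functions derived later. The following is a minimal finite-difference sketch (not the method used in this work), assuming \(\hbar=m_{0}=1\), the mass profile of eq. (14), and the quartic potential \(V(x)=x^{2}+x^{4}\) studied below; the grid size and box length are arbitrary illustrative choices.

```python
# Minimal sketch: discretize H = -(hbar^2/2) (1/sqrt(m)) d^2/dx^2 (1/sqrt(m)) + V(x)
# (eq. (16)) on a uniform grid and diagonalize it numerically.
# Illustrative assumptions: hbar = m0 = 1, m(x) = 1/(1+x^2), V(x) = x^2 + x^4.
import numpy as np

n, L = 1200, 10.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

m = 1.0 / (1.0 + x**2)            # eq. (14) with m0 = 1
V = x**2 + x**4                   # quartic potential with a = b = 1
S = np.diag(1.0 / np.sqrt(m))     # multiplication by 1/sqrt(m(x))

# Central-difference second derivative with Dirichlet boundaries.
D2 = (np.diag(-2.0 * np.ones(n)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / h**2

H = -0.5 * S @ D2 @ S + np.diag(V)   # matrix form of eq. (16); symmetric by construction

E, U = np.linalg.eigh(H)
psi0 = U[:, 0] / np.sqrt(np.trapz(U[:, 0]**2, x))   # normalized ground state
rho = psi0**2

# Position-space Shannon entropy and Fisher information of the numerical state,
# using the same definitions as in eqs. (2) and (9).
S_x = -np.trapz(np.where(rho > 0, rho * np.log(np.where(rho > 0, rho, 1.0)), 0.0), x)
grad = np.gradient(rho, x)
F_x = np.trapz(np.where(rho > 1e-12, grad**2 / np.where(rho > 1e-12, rho, 1.0), 0.0), x)

print(f"E0 ~ {E[0]:.4f},  S_x ~ {S_x:.4f},  F_x ~ {F_x:.4f}")
```

Such a direct diagonalization makes no perturbative assumptions, so it can serve as an independent cross-check on the approximate wave functions obtained analytically below.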
Solitons in a quartic potential are formed by the interplay between self-phase modulation and anomalous second- and fourth-order dispersion [33]. This has widespread application in the field of optical communication [34; 35], such as shape invariance in nonlinear wave packets, which enhances optical communication. Recent notable achievements in this area have shown promising outcomes in terms of the potential observation of localized quartic solitons within specifically engineered slot waveguides based on silicon [36]. This has been observed in the case of quartic solitons with a mass distribution of \(\frac{m_{0}}{1+x^{2}}\); thus, studying the quantum information entropies for the same will help us to understand it much better [37]. For the given solitonic mass distribution, the well-known quartic potential with parameters \(a\) and \(b\) can be written as \[V(x)=ax^{2}+bx^{4},\quad b>0, \tag{18}\] ### Constant Mass We start by calculating the wave function for this system with constant mass (i.e., \(m=m_{0}\)); the Schrodinger equation for this system simplifies as follows \[\frac{d^{2}\psi(x)}{dx^{2}}+\bigg{[}A-Bx^{2}-Cx^{4}\bigg{]}\psi(x)=0,\] where \(A=\frac{2m_{0}E}{\hbar^{2}},B=\frac{2m_{0}a}{\hbar^{2}},C=\frac{2m_{0}b}{\hbar^{2}}\). The wave function is computed (briefly discussed in VII.1) and is given by \[\psi(x)=\sqrt{\frac{A}{\pi}}\mathrm{exp}\bigg{[}\frac{-A}{2}x^{2}\bigg{]}, \tag{19}\] In reciprocal space we can write the wave function as \[\phi(k)=\frac{1}{\sqrt{\pi}}\mathrm{exp}\bigg{[}\frac{-k^{2}}{2A}\bigg{]}. \tag{20}\] Figure 1: Solitonic mass distribution for constant \(m_{0}\) Figure 2: Quartic potential for different values of \(a\) and \(b\). #### ii.2.1 Shannon's entropy According to Born [38], the statistical interpretation of the quantum system describes the probability of finding the particle in the state \(\psi(x,t)\) within the spatial interval between \(x\) and \(x+dx\) as \(|\psi(x,t)|^{2}\approx|\psi(x)|^{2}\). Shannon's entropy in position space (\(S_{x}\)) is given by \[S_{x}=-\int_{-\infty}^{\infty}|\psi(x)|^{2}\mathrm{ln}[|\psi(x)|^{2}]dx,\] and we obtain the value for \(S_{x}\) as \[S_{x}=\sqrt{\frac{A}{\pi}}\bigg{(}\frac{1}{2}-\mathrm{ln}\bigg{[}\frac{A}{\pi}\bigg{]}\bigg{)}. \tag{21}\] In reciprocal space, it is given by \[S_{k}=-\int_{-\infty}^{\infty}|\phi(k)|^{2}\mathrm{ln}[|\phi(k)|^{2}]dk,\] and we obtain the value for \(S_{k}\) as \[S_{k}=\sqrt{\frac{A}{\pi}}\bigg{(}\frac{1}{2}+\mathrm{ln}\pi\bigg{)}, \tag{22}\] such that \[S_{x}+S_{k}\geq D(1+\mathrm{ln}\pi),\] where \(D\) is the dimension of the spatial coordinates of the system. The numerical study of Shannon entropy was carried out considering the eigenfunctions in position and momentum space, as well as the definitions of Shannon entropy, i.e., eq. (21) and eq. (22). The numerical result of Shannon entropy for constant mass is presented in the following table: #### ii.2.2 Fisher's Information measure For an observable \(x\), Fisher information (\(F_{x}\)) is given by \[F_{x}=\int_{-\infty}^{\infty}|\psi(x)|^{2}\bigg{[}\frac{d}{dx}\mathrm{ln}|\psi(x)|^{2}\bigg{]}^{2}dx>0,\] and we obtain the value for \(F_{x}\) as \[F_{x}=\frac{2A^{\frac{3}{2}}}{\sqrt{\pi}}.
\tag{23}\] In reciprocal space, Fisher information for an observable \(k\), (\(F_{k}\)) is given by \[F_{k}=\int_{-\infty}^{\infty}|\phi(k)|^{2}\bigg{[}\frac{d}{dk}\mathrm{ln}|\phi(k)|^{2}\bigg{]}^{2}dk>0,\] and we obtain the value for \(F_{k}\) as \[F_{k}=\frac{2}{\sqrt{A\pi}}. \tag{24}\] Let us also recall that the standard deviations of the position and momentum measures are given respectively by \[\sigma_{x}^{2}=<x^{2}>-<x>^{2}, \tag{25}\] \[\sigma_{k}^{2}=<k^{2}>-<k>^{2}. \tag{26}\] where \(<x>,<k>,<x^{2}>,<k^{2}>\) are the expected values of \(x,k,x^{2},k^{2}\), respectively. Using the definitions of Fisher information given in eq. (8) and eq. (10) and the standard deviations given in eq. (25) and eq. (26), the numerical results of Fisher's information measure for the constant mass case are presented in the following table: \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(A\) & \(F_{x}\) & \(F_{k}\) & \(\sigma_{x}^{2}\) & \(\sigma_{k}^{2}\) & \(F_{x}\cdot F_{k}\) \\ \hline 3.5 & 7.386 & 0.6029 & 0.1507 & 1.846 & 4.453 \\ \hline 3.6 & 7.704 & 0.5945 & 0.1486 & 1.926 & 4.580 \\ \hline 3.7 & 8.028 & 0.5864 & 0.1466 & 2.007 & 4.707 \\ \hline 3.8 & 8.356 & 0.5786 & 0.1447 & 2.089 & 4.834 \\ \hline 3.9 & 8.688 & 0.5711 & 0.1428 & 2.172 & 4.962 \\ \hline 4.0 & 9.024 & 0.5640 & 0.1410 & 2.256 & 5.089 \\ \hline 4.2 & 9.709 & 0.5504 & 0.1376 & 2.427 & 5.344 \\ \hline \end{tabular} \end{table} Table 2: Results for the uncertainty relation and Fisher information measure Figure 3: Fisher's information measure for soliton in position space (_constant mass_) in quartic potential. Consequently, by examining the results of the standard deviation, the Heisenberg uncertainty principle was explored, leading to the derivation of the following relationships, \[F_{x}=4\sigma_{k}^{2},\] \[F_{k}=4\sigma_{x}^{2},\] and the Heisenberg uncertainty principle is written as \[F_{x}F_{k}\geq 4\hbar^{2}.\] ### Mass distribution We consider the mass distribution to be \(m(x)=\frac{m_{0}}{1+x^{2}}\). For the sake of simplicity, we take the values of \(a\) and \(b\) to be unity. Thus the modified potential reads \[V(x)=x^{2}+x^{4}.\] The Schrodinger equation for the given mass distribution simplifies to \[\frac{d^{2}\psi}{dx^{2}}-\frac{2m_{0}}{\hbar^{2}}x^{2}\psi=\frac{-2m_{0}E}{\hbar^{2}}\frac{1}{1+x^{2}}\psi,\] The wave function is computed (briefly discussed in VII.1) and is given by \[\psi(x)=\frac{A}{(1+x^{2})^{1/4}}\mathrm{exp}\bigg{[}\frac{-x^{2}}{2(1+x^{2})}\bigg{]}. \tag{27}\] In reciprocal space, we can write the wave function as \[\phi(k)=\frac{A}{\sqrt{2}}\exp\left(\frac{-k^{2}}{2}-ik\right). \tag{28}\] where \(A\) is a constant. This computation is carried out by assuming the value of \(m_{0}\) to be \(1\).
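Before turning to the entropies of the position-dependent-mass state, the constant-mass closed forms in eqs. (23) and (24) can be verified directly. The short sketch below (a consistency check only, not part of the original analysis) tabulates \(F_{x}\), \(F_{k}\) and their product for the same values of \(A\) used in Table 2.

```python
# Quick numerical check of the constant-mass closed forms, eqs. (23)-(24), with hbar = 1.
import numpy as np

for A in [3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.2]:
    F_x = 2 * A**1.5 / np.sqrt(np.pi)      # eq. (23)
    F_k = 2 / np.sqrt(A * np.pi)           # eq. (24)
    print(f"A={A:.1f}  F_x={F_x:.3f}  F_k={F_k:.4f}  F_x*F_k={F_x * F_k:.3f}")
```

Note that the product reduces to \(F_{x}F_{k}=4A/\pi\), which stays above the bound \(4\hbar^{2}\) for all values of \(A\) listed in Table 2.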
#### iii.2.1 Shannon Entropy Shannon's entropy in position space (\(S_{x}\)) is given by \[S_{x}=-\int_{-\infty}^{\infty}|\psi(x)|^{2}\mathrm{ln}|\psi(x)|^{2}dx,\] and we obtain the form for Shannon entropy (\(S_{x}\)) as \[S_{x}=-\int_{-\infty}^{\infty}\Bigg{(}\frac{2A^{2}\mathrm{ln}A} {\sqrt{1+x^{2}}}\mathrm{exp}\bigg{[}\frac{-x^{2}}{1+x^{2}}\bigg{]}-\frac{A^{2} }{2\sqrt{1+x^{2}}}\mathrm{ln}(1+x^{2})\] \[\mathrm{exp}\bigg{[}\frac{-x^{2}}{1+x^{2}}\bigg{]}-\frac{A^{2}x^ {2}}{(1+x^{2})^{3/2}}\mathrm{exp}\bigg{[}\frac{-x^{2}}{1+x^{2}}\bigg{]}\Bigg{)}dx,\] Similarly, Shannon entropy in momentum space is given by \[S_{k}=-\int_{-\infty}^{\infty}|\phi(k)|^{2}\mathrm{ln}|\phi(k)|^{2}dk.\] and is obtained as follows \[S_{k}=\frac{\sqrt{\pi}A^{2}}{4}-\frac{A^{2}\sqrt{\pi}}{2}\mathrm{ln}\bigg{[} \frac{A^{2}}{2}\bigg{]}. \tag{29}\] The numerical result of Shannon entropy for mass distribution is presented in the following table: #### iii.2.2 Fisher's Information measure For an observable \(x\), Fisher information (\(F_{x}\)) is given by \[F_{x}=\int_{-\infty}^{\infty}|\psi(x)|^{2}\bigg{[}\frac{d}{dx}\mathrm{ln}| \psi(x)|^{2}\bigg{]}^{2}dx>0,\] and we obtain the form for the fisher's information measure (\(F_{x}\)) as \[F_{x}=\int_{-\infty}^{\infty}\Bigg{(}\frac{A^{2}}{\sqrt{1+x^{2 }}}\mathrm{exp}\bigg{[}\frac{-x^{2}}{1+x^{2}}\bigg{]}\Bigg{)}\Bigg{[}\frac{d} {dx}\Bigg{(}2\mathrm{ln}A-\] \[\frac{1}{2}\mathrm{ln}(1+x^{2})-\frac{x^{2}}{1+x^{2}}\Bigg{)} \Bigg{]}^{2}dx,\] Figure 4: Fisher’s information measure for soliton in momentum space (_constant mass_) in quartic potential. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(A\) & \(S_{x}\) & \(S_{k}\) & \(S_{x}+S_{k}\) & \(1+\mathrm{ln}\pi\) \\ \hline 0.01 & 20.458 & 0.00092 & 20.459 & 2.1447 \\ \hline 0.02 & 81.523 & 0.0032 & 81.526 & 2.1447 \\ \hline 0.05 & 506.964 & 0.0159 & 506.98 & 2.1447 \\ \hline 0.1 & 2020.11 & 0.0514 & 2020.16 & 2.1447 \\ \hline 0.2 & 8049.46 & 0.1564 & 8049.62 & 2.1447 \\ \hline 0.3 & 18070.50 & 0.2872 & 18070.79 & 2.1447 \\ \hline 0.4 & 32073.9 & 0.4290 & 32074.33 & 2.1447 \\ \hline \end{tabular} \end{table} Table 3: Numerical results of the Shannon entropy In reciprocal space, Fisher information for an observable \(k\), (\(F_{k}\)) is given by \[F_{k}=\int_{-\infty}^{\infty}|\phi(k)|^{2}\bigg{[}\frac{d}{dk}{\rm ln}|\phi(k)|^ {2}\bigg{]}^{2}dk>0.\] and we obtain the value for \(F_{k}\) as \[F_{k}=A^{2}\sqrt{\pi} \tag{30}\] Based on the findings, it is observed that there is a phenomenon of "information propagation" of the solitonic mass distribution when it is exposed to the confining potential \(V(x)=ax^{2}+bx^{4}\). ## V Solitons in a symmetric well The prevalence of solitons is primarily attributed to a limited set of adaptable nonlinear equations that govern various physical and biological systems. Despite the distinct nature of these systems and their nonlinear characteristics, the mathematical equations describing them can exhibit significant similarities or even identical forms. Consequently, the study and analysis of these generic nonlinear models hold significant importance in diverse physical contexts, allowing for a deeper understanding of a wide range of phenomena. We shall consider ultra-fast pulsed fiber lasers as an example. It finds its application in the field of optical and quantum communication. One of the major factors that affect communication in this system is dispersion. Dispersion management is a key route for manipulating optical pulses for some desired output in ultra-fast optics. 
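The \(S_{k}\) column of Table 3 follows directly from the closed form in eq. (29); a short numerical check (illustrative only) is given below.

```python
# Reproduce the S_k column of Table 3 from eq. (29).
import numpy as np

for A in [0.01, 0.02, 0.05, 0.1, 0.2, 0.3, 0.4]:
    S_k = np.sqrt(np.pi) * A**2 / 4 - (A**2 * np.sqrt(np.pi) / 2) * np.log(A**2 / 2)
    print(f"A={A:<5}  S_k={S_k:.4f}")
```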
Minimal dispersion is achieved in the case of symmetric potential. So it's crucial for us to understand its behavior in case of symmetric potential [39]. In this section, we shall discuss solitons in symmetric potential. We know the well-known symmetric potential as \[V(x)=V_{0}\bigg{(}\frac{1-\lambda xcot(\lambda x)}{(\lambda x)^{2}}\bigg{)}, \tag{31}\] where, \(V_{0}\) is a constant parameter, \(\lambda\) corresponds to a parameter in the potential which has a unit of \([Length]^{-1}\). ### Constant mass We start by calculating the wave function for this system for constant mass (i.e., \(m=m_{0}\)), Schrodinger equation for this system is simplified as follows \[\frac{d^{2}\psi(x)}{dx^{2}}+\frac{2m_{0}}{\hbar^{2}}\Bigg{[}E-V_{0}\bigg{(} \frac{1-\lambda xcot(\lambda x)}{(\lambda x)^{2}}\bigg{)}\Bigg{]}\psi(x)=0,\] The wave function is computed (briefly discussed in VII.2) and is given by \[\psi(x)=\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{45}}\bigg{)}^{1/4}\,e^{\frac{- \lambda x^{2}}{2}}\bigg{[}1-0.087(4\lambda x^{2}-2)\bigg{]}. \tag{32}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(A\) & \(F_{x}\) & \(F_{k}\) & \(\sigma_{x}^{2}\) & \(\sigma_{k}^{2}\) & \(F_{x}\cdot F_{k}\) \\ \hline 1.2 & 2.215 & 2.552 & 0.6380 & 0.5538 & 5.652 \\ \hline 1.4 & 3.014 & 3.474 & 0.8685 & 0.7535 & 10.470 \\ \hline 1.6 & 3.937 & 4.538 & 1.134 & 0.9842 & 17.866 \\ \hline 1.8 & 4.982 & 5.742 & 1.436 & 1.246 & 28.607 \\ \hline 2.0 & 6.152 & 7.090 & 1.772 & 1.538 & 43.618 \\ \hline 2.2 & 7.444 & 8.579 & 2.145 & 1.861 & 63.862 \\ \hline 2.4 & 8.858 & 10.209 & 2.552 & 2.214 & 90.431 \\ \hline \end{tabular} \end{table} Table 4: Results for the uncertainty relation and Fisher information measure Figure 5: Fisher’s information measure for soliton in position space (_mass distribution_) in quartic potential. Figure 6: Fisher’s information measure for soliton in momentum space (_mass distribution_) in quartic potential. In reciprocal space, the wave function (\(\phi(k)\)) can be written as \[\phi(k)=\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{45}}\bigg{)}^{1/4}e^{\frac{-k^{2}}{ \lambda}}\bigg{[}\frac{1.174}{\sqrt{\lambda}}-\frac{0.348}{\lambda^{3/2}}( \lambda-k^{2})\bigg{]}. \tag{33}\] #### iii.2.1 Shannon Entropy Shannon's entropy in position space (\(S_{x}\)) is given by \[S_{x}=-\int_{-\infty}^{\infty}|\psi(x)|^{2}\text{ln}|\psi(x)|^{2}dx,\] and we obtain the form for Shannon entropy (\(S_{x}\)) as \[S_{x}=-\int_{-\infty}^{\infty}\Bigg{(}\bigg{(}\frac{1}{\pi}\sqrt {\frac{2}{45}}\bigg{)}^{1/2}e^{-\lambda x^{2}}\bigg{[}1-0.087(4\lambda x^{2}- 2)\bigg{]}^{2}\Bigg{)}\] \[\Bigg{(}\frac{1}{2}\text{ln}\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{4 5}}\bigg{)}-\lambda x^{2}+2\text{ln}\bigg{[}1-0.087(4\lambda x^{2}-2)\bigg{]} \Bigg{)}dx,\] Similarly, Shannon entropy in momentum space is given by \[S_{k}=-\int_{-\infty}^{\infty}|\phi(k)|^{2}\text{ln}|\phi(k)|^{2}dx,\] and we obtain the form for Shannon entropy (\(S_{k}\)) as \[S_{k}=-\int_{-\infty}^{\infty}\bigg{(}\frac{1}{\pi}\sqrt{\frac{ 2}{45}}\bigg{)}^{1/2}e^{\frac{-k^{2}}{\lambda}}\bigg{[}\frac{1.174}{\sqrt{ \lambda}}-\frac{0.348}{\lambda^{3/2}}(\lambda-k^{2})\bigg{]}^{2}\] \[\Bigg{(}\frac{1}{2}\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{45}}\bigg{)} -\frac{k^{2}}{\lambda}+2\text{ln}\bigg{[}\frac{1.174}{\sqrt{\lambda}}-\frac{0. 
348}{\lambda^{3/2}}(\lambda-k^{2})\bigg{]}\bigg{)}dk.\] The numerical result of Shannon entropy for constant mass is presented in the following table: #### iii.2.2 Fisher's Information measure For an observable \(x\), Fisher information (\(F_{x}\)) is given by \[F_{x}=\int_{-\infty}^{\infty}|\psi(x)|^{2}\bigg{[}\frac{d}{dx}\text{ln}|\psi(x)|^{2}\bigg{]}^{2}dx>0,\] and we obtain the form for Fisher information (\(F_{x}\)) as \[F_{x}=\int_{-\infty}^{\infty}\Bigg{(}\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{45}}\bigg{)}^{1/2}e^{-\lambda x^{2}}\bigg{[}1-0.087(4\lambda x^{2}-2)\bigg{]}^{2}\Bigg{[}-2\lambda x+\frac{1.392\lambda x}{1-0.087(4\lambda x^{2}-2)}\bigg{]}^{2}\Bigg{)}dx.\] In reciprocal space, Fisher information for an observable \(k\), (\(F_{k}\)) is given by \[F_{k}=\int_{-\infty}^{\infty}|\phi(k)|^{2}\bigg{[}\frac{d}{dk}\text{ln}|\phi(k)|^{2}\bigg{]}^{2}dk>0,\] and we obtain the form for Fisher information (\(F_{k}\)) as \[F_{k}=\int_{-\infty}^{\infty}\Bigg{(}\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{45}}\bigg{)}^{1/2}e^{\frac{-k^{2}}{\lambda}}\bigg{[}\frac{1.174}{\sqrt{\lambda}}-\frac{0.348}{\lambda^{3/2}}(\lambda-k^{2})\bigg{]}^{2}\Bigg{[}\frac{-2k}{\lambda}+\frac{1.392}{\lambda^{3/2}(\frac{1.174}{\sqrt{\lambda}}-\frac{0.348}{\lambda^{3/2}}(\lambda-k^{2}))}\bigg{]}^{2}\Bigg{)}dk.\] We present the numerical result of Fisher's information measure in the following table: \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\lambda\) & \(F_{x}\) & \(F_{k}\) & \(\sigma_{x}^{2}\) & \(\sigma_{k}^{2}\) & \(F_{x}\cdot F_{k}\) \\ \hline 0.01 & 0.1834 & 557.04 & 139.26 & 0.0458 & 102.16 \\ \hline 0.02 & 0.2595 & 196.94 & 49.235 & 0.0648 & 51.105 \\ \hline 0.05 & 0.4103 & 49.823 & 12.455 & 0.1026 & 20.442 \\ \hline 0.08 & 0.5190 & 24.618 & 6.154 & 0.1298 & 12.777 \\ \hline 0.10 & 0.5803 & 17.615 & 4.403 & 0.1450 & 10.222 \\ \hline 0.15 & 0.7107 & 9.588 & 2.397 & 0.1777 & 6.814 \\ \hline 0.20 & 0.8206 & 6.228 & 1.557 & 0.2051 & 5.110 \\ \hline 0.25 & 0.9175 & 4.456 & 1.114 & 0.2293 & 4.088 \\ \hline \end{tabular} \end{table} Table 6: Results for the uncertainty relation and Fisher information measure Figure 8: Fisher's information measure for soliton in position space (_constant mass_) in symmetric well. ### Mass distribution For a mass distribution \(m(x)=\frac{m_{0}}{1+x^{2}}\), the Schrodinger equation is modified as follows \[\frac{-\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}}+\frac{-\hbar^{2}}{2}\frac{m^{\prime}(x)}{m(x)^{2}}\frac{d\psi(x)}{dx}+\Bigg{[}\frac{\hbar^{2}}{4}\frac{m^{\prime\prime}(x)}{m(x)^{2}}-\frac{3\hbar^{2}}{8}\frac{m^{\prime}(x)^{2}}{m(x)^{3}}+V(x)\Bigg{]}\psi(x)=E\psi(x).\] We adopt stationary perturbation theory and the variational principle to obtain the wave function [40]. The complete derivation of the wave function is presented in Appendix VII.2. We get the wave function as follows \[\psi(x)\approx\frac{1}{\sqrt{\pi}}\frac{e^{\frac{-x^{2}}{2}}}{\sqrt{1+x^{2}}}\bigg{(}1-\frac{\lambda^{2}x^{2}}{10}+\frac{\lambda^{4}x^{4}}{126}-\frac{\lambda^{6}x^{6}}{2520}\bigg{)}, \tag{34}\] In momentum space, the wave function is obtained through Fourier transforming the wave function in position space: \[\phi(k)\approx\frac{1}{\sqrt{\pi}}\frac{e^{\frac{-k^{2}}{2}}}{\sqrt{1+\frac{k^{2}}{\lambda^{2}}}}\bigg{(}1+\frac{\lambda^{2}k^{2}}{10}+\frac{\lambda^{4}k^{4}}{126}+\frac{\lambda^{6}k^{6}}{2520}\bigg{)}. \tag{35}\] #### vi.2.1 Shannon entropy The probability of finding the particle in the state \(\psi(x,t)\) within the spatial interval between \(x\) and \(x+dx\) is given by \(|\psi(x,t)|^{2}\approx|\psi(x)|^{2}\).
Shannon's entropy in position space (\(S_{x}\)) is given by \[S_{x}=-\int_{-\infty}^{\infty}|\psi(x)|^{2}\mathrm{ln}|\psi(x)|^{2}dx,\] and we obtain the form for Shannon entropy (\(S_{x}\)) as \[S_{x}=-\int_{-\infty}^{\infty}\Bigg{(}\frac{1}{\pi}\frac{e^{-x^{2}}}{(1+x^{2})}\bigg{(}1-\frac{\lambda^{2}x^{2}}{10}+\frac{\lambda^{4}x^{4}}{126}-\frac{\lambda^{6}x^{6}}{2520}\bigg{)}^{2}\Bigg{)}\bigg{(}-\mathrm{ln}\pi-x^{2}-\mathrm{ln}(1+x^{2})+2\mathrm{ln}\bigg{(}1-\frac{\lambda^{2}x^{2}}{10}+\frac{\lambda^{4}x^{4}}{126}-\frac{\lambda^{6}x^{6}}{2520}\bigg{)}\bigg{)}dx.\] In reciprocal space, \[S_{k}=-\int_{-\infty}^{\infty}|\phi(k)|^{2}\mathrm{ln}|\phi(k)|^{2}dk,\] and we obtain the form for Shannon entropy (\(S_{k}\)) as \[S_{k}=-\int_{-\infty}^{\infty}\frac{1}{\pi}\frac{e^{-k^{2}}}{(1+\frac{k^{2}}{\lambda^{2}})}\bigg{(}1+\frac{\lambda^{2}k^{2}}{10}+\frac{\lambda^{4}k^{4}}{126}+\frac{\lambda^{6}k^{6}}{2520}\bigg{)}^{2}\bigg{(}-\mathrm{ln}\pi-k^{2}-\mathrm{ln}(1+\frac{k^{2}}{\lambda^{2}})+2\mathrm{ln}\bigg{(}1+\frac{\lambda^{2}k^{2}}{10}+\frac{\lambda^{4}k^{4}}{126}+\frac{\lambda^{6}k^{6}}{2520}\bigg{)}\bigg{)}dk.\] Finding an analytical solution for this particular integral is complicated, so we use numerical techniques to find the Shannon entropy for this system. The numerical result of Shannon entropy is presented in the following table: #### vi.2.2 Fisher's Information measure For an observable \(x\), Fisher information (\(F_{x}\)) is given by \[F_{x}=\int_{-\infty}^{\infty}|\psi(x)|^{2}\bigg{[}\frac{d}{dx}\mathrm{ln}|\psi(x)|^{2}\bigg{]}^{2}dx>0,\] and we obtain the form for Fisher's information (\(F_{x}\)) as \[F_{x}=\int_{-\infty}^{\infty}\Bigg{(}\frac{1}{\pi}\frac{e^{-x^{2}}}{(1+x^{2})}\bigg{(}1-\frac{\lambda^{2}x^{2}}{10}+\frac{\lambda^{4}x^{4}}{126}-\frac{\lambda^{6}x^{6}}{2520}\bigg{)}^{2}\Bigg{)}\Bigg{[}\frac{d}{dx}\bigg{(}-\mathrm{ln}\pi-x^{2}-\mathrm{ln}(1+x^{2})+2\mathrm{ln}\bigg{(}1-\frac{\lambda^{2}x^{2}}{10}+\frac{\lambda^{4}x^{4}}{126}-\frac{\lambda^{6}x^{6}}{2520}\bigg{)}\bigg{)}\Bigg{]}^{2}dx.\] In reciprocal space, Fisher information for an observable \(k\), (\(F_{k}\)) is given by \[F_{k}=\int_{-\infty}^{\infty}|\phi(k)|^{2}\bigg{[}\frac{d}{dk}\mathrm{ln}|\phi(k)|^{2}\bigg{]}^{2}dk>0.\] \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(\lambda\) & \(S_{x}\) & \(S_{k}\) & \(S_{x}+S_{k}\) & \(1+\mathrm{ln}\pi\) \\ \hline 2.0 & 0.6033 & 1.6511 & 2.254 & 2.1447 \\ \hline 2.1 & 0.6034 & 1.8394 & 2.443 & 2.1447 \\ \hline 2.2 & 0.6107 & 2.0343 & 2.645 & 2.1447 \\ \hline 2.3 & 0.6289 & 2.2036 & 2.832 & 2.1447 \\ \hline 2.4 & 0.6630 & 2.2848 & 2.948 & 2.1447 \\ \hline 2.5 & 0.7190 & 2.1661 & 2.885 & 2.1447 \\ \hline 2.6 & 0.8035 & 1.6592 & 2.463 & 2.1447 \\ \hline \end{tabular} \end{table} Table 7: Numerical results of the Shannon entropy Figure 9: Fisher's information measure for soliton in momentum space (_constant mass_) in symmetric well.
The work explored the solutions of the stationary Schrodinger equation, where the mass depends on the position and is exposed to a quartic (\(ax^{2}+bx^{4}\)) and symmetric potential (\(V_{0}\bigg{(}\frac{1-\lambda xcot(\lambda x)}{(\lambda x)^{2}}\bigg{)}\)). After obtaining the analytical solutions of the model, the investigation proceeded to study the Shannon entropy and Fisher's information associated with the ground state energy levels. The observations indicate that the solitonic mass distribution under the quartic potential exhibits higher Shannon entropy values compared to the case with a constant mass. Hence, the combined value of Shannon entropy (i.e., the sum of \(S_{x}\) and \(S_{k}\)) is greater for the solitonic mass distribution than for a constant mass scenario. On the other hand, the Fisher information is higher for the solitonic mass distribution in comparison to the constant mass scenario. In the context of information [41] and communication theories [42], based on our observations, we can deduce that the solitonic system with mass distribution possesses attributes that empower it to manage unpredictability, convey abundant information, demonstrate heightened responsiveness, and function effectively in various communication and data processing scenarios. Moreover, this system also displays greater capacity, resilience, and versatility, leading to enhanced performance and reliability in contemporary communication situations than the system with a constant mass. Thus for effective communication, a solitonic mass distribution can be preferred. When considering solitons in symmetric potential, we notice distinct behavior compared to that in quartic potential. The results show that the solitonic mass distribution within the symmetric well has greater Shannon entropy values compared to a constant mass scenario. As a consequence, the combined Shannon entropy (the sum of \(S_{x}\) and \(S_{k}\)) is greater for the solitonic mass distribution than the constant mass scenario. On the contrary, the Fisher information is higher for constant mass in contrast to mass distribution. In information theory [41], a solitonic system with constant mass distribution reflects a system with increased unpredictability, richer information content, and heightened sensitivity to changes. This combination leads to a greater capacity for data transmission, improved parameter estimation, and enhanced performance in communication and data processing. The system can efficiently handle diverse information and is more robust against noise and dis Figure 11: Fisher’s information measure for soliton in momentum space (_mass distribution_) in symmetric well. Figure 10: Fisher’s information measure for soliton in position space (_mass distribution_) in symmetric well. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\lambda\) & \(F_{x}\) & \(F_{k}\) & \(\sigma_{x}^{2}\) & \(\sigma_{k}^{2}\) & \(F_{x}\cdot F_{k}\) \\ \hline 0.01 & 1.3600 & 50.009 & 12.502 & 0.34 & 68.012 \\ \hline 0.02 & 1.3595 & 25.0406 & 6.260 & 0.3398 & 34.042 \\ \hline 0.03 & 1.3596 & 16.7331 & 4.183 & 0.1942 & 22.750 \\ \hline 0.04 & 1.3597 & 12.5902 & 3.1476 & 0.3399 & 17.119 \\ \hline 0.05 & 1.3597 & 10.1128 & 2.5282 & 0.3399 & 13.750 \\ \hline 0.08 & 1.36003 & 6.4257 & 1.6064 & 0.3400 & 8.739 \\ \hline 0.1 & 1.3606 & 5.2142 & 1.3036 & 0.3402 & 7.094 \\ \hline \end{tabular} \end{table} Table 8: Results for the uncertainty relation and Fisher information measure turbances, making it well-suited for various applications in information theory and communication. However, managing the higher complexity and resource requirements of such a system may be necessary. Nonetheless, in communication systems [42], employing a solitonic system with constant mass leads to greater capacity, improved noise resilience, and enhanced data transmission capabilities, making them well-suited for modern communication scenarios and emerging technologies. Therefore, based on our observations, we can infer that an improved quantum communication can be attained using two types of solitonic systems: one under a symmetric well with constant mass, and the other under a quartic potential with a mass distribution. In conclusion, the extension of the current work to investigate Shannon and Fisher's information measure in the context of Sine-Gordon solitons and solitons in Bose-Einstein condensates (BEC) within the Gross-Pitaevskii equation (GPE) framework will provide a deeper and more comprehensive understanding of the underlying complexities of these intriguing systems. The exploration of information entropy, both from a Shannon and Fisher perspective, will unveil valuable insights into the intrinsic nature of solitonic phenomena. By quantifying the uncertainty and information content associated with these soliton structures, we gain a quantitative measure of the intricate balance between coherence, localization, and the underlying dynamics of the systems under consideration. Studying information entropy in Sine-Gordon solitons will not only elucidate the richness of these topologically nontrivial solutions but will also establish a connection between information theory and field theory [43]. The quantification of information entropy in this context will shed light on the interplay between spatial distribution, energy concentration, and the stability of solitonic configurations. Such insights not only will enhance our theoretical understanding but also paves the way for potential applications in fields such as data encoding, signal processing, and quantum information. Extending the analysis to solitons in BEC gases governed by the GPE will introduce a quantum mechanical dimension to the study. The intricate balance between quantum coherence and nonlinear interactions inherent in BEC systems will add a layer of complexity to the information entropy analysis. The quantification of information content in these systems will offer a unique perspective on the emergence and behavior of solitons in a quantum context, contributing to the broader understanding of quantum coherence and correlations. The culmination of this work will highlight the universality of information entropy as a powerful tool for characterizing and comparing soliton dynamics across different physical systems. 
## VII Appendix ### Solitons in Quartic potential - Wave function computation #### vii.1.1 Constant mass Our aim is to solve the constant-mass Schrodinger equation for the quartic potential given in Section IV, whose ground-state solution is quoted in eq. (19). We employ the technique of assuming a Gaussian form for the solution of this particular differential equation. Previously, this assumption was used only to solve systems with time-dependent quadratic potentials. However, in [44], it is shown that the same method also works in the case of a quartic potential. The main difference is that all the systems which are studied using this formulation work only for time-dependent cases, but we work with solitonic systems which are time-independent. This formulation (assumption) works even in our case. The Gaussian function is assumed to have some parameters \(r,s,q\) and a normalization factor of \(N\). We assume the solution \(\psi(x)\) as \[\psi(x)=N{\rm exp}\bigg{[}rx^{2}+sx+q\bigg{]}.\] Substituting this assumption in the differential equation, we get \[\frac{d^{2}\psi}{dx^{2}}=(2rx+s)^{2}\psi(x)+2r\psi(x),\] Thus we get \[(2rx+s)^{2}+2r=Cx^{4}+Bx^{2}-A,\] By comparing the coefficients, we get the wave function of this particular system to be \[\psi(x)=N{\rm exp}\bigg{[}\frac{-A}{2}x^{2}\bigg{]},\] Upon normalizing, we get the final wave function to be \[\psi(x)=\sqrt{\frac{A}{\pi}}{\rm exp}\bigg{[}\frac{-A}{2}x^{2}\bigg{]}.\] #### vii.1.2 Mass distribution The Schrodinger equation for the given mass distribution is given by \[\frac{-\hbar^{2}}{2m_{0}}(1+x^{2})\frac{d^{2}\psi}{dx^{2}}+x^{2}\psi+x^{4}\psi=E\psi,\] \[\frac{d^{2}\psi}{dx^{2}}-\frac{2m_{0}}{\hbar^{2}}x^{2}\psi=\frac{-2m_{0}E}{\hbar^{2}}\frac{1}{1+x^{2}}\psi,\] \[\frac{d^{2}\psi}{dx^{2}}+\frac{2m_{0}}{\hbar^{2}}\bigg{(}\frac{E}{1+x^{2}}-x^{2}\bigg{)}\psi=0.\] We find the solution analytically by perturbation theory; the total Hamiltonian is given by \(H=H_{0}+H_{1}\). The potential corresponding to \(H_{0}\) is \(V_{0}\), which is equal to \(x^{2}\), and \(H_{1}=\lambda x^{4}\) (the perturbation term), where \(\lambda\) is small. The wave function corresponding to \(H_{0}\) (zeroth-order perturbation theory) is \[\psi_{0}(x)=A\text{exp}\bigg{[}\frac{-x^{2}}{2(1+x^{2})}\bigg{]}H_{n}(x), \tag{36}\] where \(H_{n}\) is the Hermite polynomial for a given state \(n\) and \(A\) is the normalization constant. However, we are interested in computing the wave function only for the ground state (i.e. \(n=0\)). Therefore, we get the Hermite polynomial to be \(H_{0}=1\). Therefore, the wave function for the system in the ground state is given by \[\psi_{0}(x)=A\text{exp}\bigg{[}\frac{-x^{2}}{2(1+x^{2})}\bigg{]},\] Here, the \(\lambda x^{4}\) term is treated as a perturbation to the \(x^{2}\) term. Thus the wave function perturbed to the first order is given by \[\psi_{1}(x)=\sum_{n\neq 0}\frac{\bra{n}H_{1}\ket{0}}{E_{0}-E_{n}}\psi_{n}(x),\] For the ground state, we get the perturbed wave function (\(\psi_{1}(x)\)) as \[\psi(x)=\frac{A}{(1+x^{2})^{1/4}}\text{exp}\bigg{[}\frac{-x^{2}}{2(1+x^{2})}\bigg{]}. \tag{37}\] ### Solitons in Symmetric well - Wave function computation #### iv.2.1 Constant mass For a constant mass, the Schrodinger equation for the symmetric potential is given by \[\frac{d^{2}\psi(x)}{dx^{2}}+\frac{2m_{0}}{\hbar^{2}}\bigg{[}E-V_{0}\bigg{(}\frac{1-\lambda x\cot(\lambda x)}{(\lambda x)^{2}}\bigg{)}\bigg{]}\psi(x)=0,\] We find the solution analytically by perturbation theory; the total Hamiltonian is given by \(H=H_{0}+H_{1}\), where the potential corresponding to \(H_{0}\) is that of a harmonic oscillator.
We use stationary perturbation theory to obtain the wave function and energy levels for this particular system. This potential (up to the \(\mathcal{O}(x^{6})\) term) is expressed as \[V(x)\approx V_{0}\bigg{(}\frac{1}{3}+\frac{1}{45}(\lambda x)^{2}+\frac{2}{945}(\lambda x)^{4}+\frac{1}{4725}(\lambda x)^{6}\bigg{)}.\] The unperturbed Hamiltonian is \(H_{0}=\frac{V_{0}}{45}(\lambda x)^{2}\) and the perturbing Hamiltonian (\(H_{1}\)) is given by \[H_{1}(x)=V_{0}\bigg{(}\frac{1}{3}+\frac{2}{945}(\lambda x)^{4}+\frac{1}{4725}(\lambda x)^{6}\bigg{)}\] The unperturbed wave functions used in first-order perturbation theory are given by \[\psi_{n}^{(0)}(u)=\bigg{(}\frac{\lambda}{\pi}\bigg{)}^{1/4}\frac{1}{\sqrt{2^{n}n!}}H_{n}(u)e^{\frac{-u^{2}}{2}},\] where \(u=\sqrt{\lambda}x\) and \(H_{n}\) is the Hermite polynomial, the first few of which are as follows: \[H_{0}=1\] \[H_{1}=2u\] \[H_{2}=4u^{2}-2\] \[H_{3}=8u^{3}-12u\] \[H_{4}=16u^{4}-48u^{2}+12\] For the ground state (\(n=0\)), we get the wave function (\(\psi_{0}^{(0)}\)) to be \[\psi_{0}^{(0)}(x)=\bigg{(}\frac{\lambda}{\pi}\sqrt{\frac{2}{45}}\bigg{)}^{1/4}\text{exp}\bigg{[}-\lambda\frac{x^{2}}{2}\bigg{]}.\] The eigenvalue of the unperturbed ground state reads \(E_{0}^{(0)}=\frac{1}{2}\). \[\psi_{1}^{(0)}(x)=\psi_{0}^{(0)}+\frac{\bra{\psi_{2}^{(0)}}H_{1}(x^{4})\ket{\psi_{0}^{(0)}}}{E_{0}^{(0)}-E_{2}^{(0)}}\psi_{2}^{(0)}+\frac{\bra{\psi_{4}^{(0)}}H_{1}(x^{4})\ket{\psi_{0}^{(0)}}}{E_{0}^{(0)}-E_{4}^{(0)}}\psi_{4}^{(0)}\] \[\psi_{1}^{(0)}(x)=\psi_{0}^{(0)}+\frac{\bra{2}H_{1}(x^{4})\ket{0}}{E_{0}^{(0)}-E_{2}^{(0)}}\psi_{2}^{(0)}+\frac{\bra{4}H_{1}(x^{4})\ket{0}}{E_{0}^{(0)}-E_{4}^{(0)}}\psi_{4}^{(0)}\] We only retain the unperturbed state and the second excited state. Thus we get the wave function as follows \[\psi(x)=\psi_{0}^{(0)}+\frac{\bra{2}H_{1}\ket{0}}{E_{0}^{(0)}-E_{2}^{(0)}}\psi_{2}^{(0)}.\] Therefore we get the final wave function as \[\psi(u)=\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{45}}\bigg{)}^{1/4}e^{\frac{-u^{2}}{2}}\bigg{[}1-0.087(4u^{2}-2)\bigg{]},\] where \(u=\sqrt{\lambda}x\). We write the final expression as follows \[\psi(x)=\bigg{(}\frac{1}{\pi}\sqrt{\frac{2}{45}}\bigg{)}^{1/4}e^{\frac{-\lambda x^{2}}{2}}\bigg{[}1-0.087(4\lambda x^{2}-2)\bigg{]}. \tag{38}\] #### Mass distribution The potential of the symmetric well is given by \[V(x)=V_{0}\bigg{(}\frac{1-\lambda x\cot(\lambda x)}{(\lambda x)^{2}}\bigg{)},\] For the sake of simplicity, we consider the value of \(V_{0}\) to be \(1\). This potential can be expressed as \[V(x)=-4\sum_{n=1}^{\infty}\frac{(-1)^{n}(2\lambda x)^{2n-2}}{(2n)!}B_{2n},\ \ \ \ \forall\lambda x<\pi.\] The first few terms are given by \[\sum_{n=1}^{\infty}\frac{(-1)^{n}(2\lambda x)^{2n-2}}{(2n)!}B_{2n}=\frac{-B_{2}}{2}+\frac{B_{4}}{6}(\lambda x)^{2}-\cdots\] We use stationary perturbation theory to obtain the wave function and energy levels for this particular system. This potential (up to the \(\mathcal{O}(x^{6})\) term) is expressed as \[V(x)=\frac{1}{3}+\frac{1}{45}(\lambda x)^{2}+\frac{2}{945}(\lambda x)^{4}+\frac{1}{4725}(\lambda x)^{6}.\] For the given mass distribution, \(m(x)=\frac{1}{1+x^{2}}\) (assuming the value of \(m_{0}\) to be \(1\)), the Schrodinger equation becomes \[\frac{d^{2}\psi(x)}{dx^{2}}+\frac{2}{\hbar^{2}}\bigg{[}E-V_{0}\bigg{(}\frac{1-\lambda x\cot(\lambda x)}{(\lambda x)^{2}}\bigg{)}\bigg{]}\bigg{(}\frac{1}{1+x^{2}}\bigg{)}\psi(x)=0,\] We work with perturbation theory for small values of \(\lambda\) and the usual convention \(\hbar=1=V_{0}\).
Thus we get \[\frac{d^{2}\psi(x)}{dx^{2}}+2\Bigg{[}E-\bigg{(}\frac{1}{3}+\frac{1}{45}(\lambda x)^{2}+\frac{2}{945}(\lambda x)^{4}+\frac{1}{4725}(\lambda x)^{6}\bigg{)}\Bigg{]}\bigg{(}\frac{1}{1+x^{2}}\bigg{)}\psi(x)=0,\] The first-order correction to the wave function \(\psi(x)\) can be calculated using the first-order perturbation theory formula \[\psi_{1}(x)=\sum_{n\neq 0}\frac{\left\langle\psi_{n}^{(0)}\right|H_{1}\left|\psi_{0}^{(0)}\right\rangle}{E_{0}^{(0)}-E_{n}^{(0)}}\psi_{n}^{(0)}(x),\] where \(H_{1}\) is the perturbation term; in our case it is given by the higher-order terms of the potential. Now, the ground-state wave function corrected to first order in perturbation theory is \[\psi(x)\approx\frac{1}{\sqrt{\pi}}\frac{e^{\frac{-x^{2}}{2}}}{\sqrt{1+x^{2}}}\bigg{(}1-\frac{\lambda^{2}x^{2}}{10}+\frac{\lambda^{4}x^{4}}{126}-\frac{\lambda^{6}x^{6}}{2520}\bigg{)}.\]
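As a small numerical aside (not part of the original derivation), the truncated series used above can be checked against the exact symmetric-well potential; the sketch below compares the two for one arbitrarily chosen value of \(\lambda\) with \(V_{0}=1\).

```python
# Compare V(x) = V0*(1 - (lam*x)*cot(lam*x))/(lam*x)^2 with its truncated series
# 1/3 + z^2/45 + 2*z^4/945 + z^6/4725, where z = lam*x (valid for |z| < pi).
# lam = 0.5 is an arbitrary illustrative choice.
import numpy as np

lam, V0 = 0.5, 1.0
for x in [0.1, 0.5, 1.0, 2.0, 3.0]:
    z = lam * x
    V_exact = V0 * (1 - z / np.tan(z)) / z**2
    V_series = V0 * (1/3 + z**2/45 + 2*z**4/945 + z**6/4725)
    print(f"x={x:4.1f}  exact={V_exact:.6f}  series={V_series:.6f}")
```

For \(\lambda x\) well below \(\pi\) the truncation error is small, which is the regime assumed in the perturbative treatment above.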
2308.08060
Robust Bayesian Tensor Factorization with Zero-Inflated Poisson Model and Consensus Aggregation
Tensor factorizations (TF) are powerful tools for the efficient representation and analysis of multidimensional data. However, classic TF methods based on maximum likelihood estimation underperform when applied to zero-inflated count data, such as single-cell RNA sequencing (scRNA-seq) data. Additionally, the stochasticity inherent in TFs results in factors that vary across repeated runs, making interpretation and reproducibility of the results challenging. In this paper, we introduce Zero Inflated Poisson Tensor Factorization (ZIPTF), a novel approach for the factorization of high-dimensional count data with excess zeros. To address the challenge of stochasticity, we introduce Consensus Zero Inflated Poisson Tensor Factorization (C-ZIPTF), which combines ZIPTF with a consensus-based meta-analysis. We evaluate our proposed ZIPTF and C-ZIPTF on synthetic zero-inflated count data and synthetic and real scRNA-seq data. ZIPTF consistently outperforms baseline matrix and tensor factorization methods in terms of reconstruction accuracy for zero-inflated data. When the probability of excess zeros is high, ZIPTF achieves up to $2.4\times$ better accuracy. Additionally, C-ZIPTF significantly improves the consistency and accuracy of the factorization. When tested on both synthetic and real scRNA-seq data, ZIPTF and C-ZIPTF consistently recover known and biologically meaningful gene expression programs.
Daniel Chafamo, Vignesh Shanmugam, Neriman Tokcan
2023-08-15T22:25:15Z
http://arxiv.org/abs/2308.08060v1
# Robust Bayesian Tensor Factorization with Zero-Inflated Poisson Model and Consensus Aggregation ###### Abstract. Tensor factorizations (TF) are powerful tools for the efficient representation and analysis of multidimensional data. However, classic TF methods based on maximum likelihood estimation underperform when applied to zero-inflated count data, such as single-cell RNA sequencing (scRNA-seq) data. Additionally, the stochasticity inherent in TFs results in factors that vary across repeated runs, making interpretation and reproducibility of the results challenging. In this paper, we introduce Zero Inflated Poisson Tensor Factorization (ZIPTF), a novel approach for the factorization of high-dimensional count data with excess zeros. To address the challenge of stochasticity, we introduce Consensus Zero Inflated Poisson Tensor Factorization (C-ZIPTF), which combines ZIPTF with a consensus-based meta-analysis. We evaluate our proposed ZIPTF and C-ZIPTF on synthetic zero-inflated count data and synthetic and real scRNA-seq data. ZIPTF consistently outperforms baseline matrix and tensor factorization methods in terms of reconstruction accuracy for zero-inflated data. When the probability of excess zeros is high, ZIPTF achieves up to \(2.4\times\) better accuracy. Additionally, C-ZIPTF significantly improves the consistency and accuracy of the factorization. When tested on both synthetic and real scRNA-seq data, ZIPTF and C-ZIPTF consistently recover known and biologically meaningful gene expression programs. All our data and code are available at: [https://github.com/klarman-cell-observatory/scBTF](https://github.com/klarman-cell-observatory/scBTF) and [https://github.com/klarman-cell-observatory/scbtf_experiments](https://github.com/klarman-cell-observatory/scbtf_experiments). . 2010 Mathematics Subject Classification: Primary: 11E76,11P05,12D15,14N10. \({}^{*}\)These authors contributed equally to this work ## 1. Introduction Tensors are multi-way arrays that extend matrices to higher dimensions and provide a natural way to represent multidimensional data. Traditional matrix methods _matricize_ tensors, limiting their ability to exploit the intrinsic multi-way structure of the data [24]. Tensor factorization extends matrix factorization to higher dimensions while preserving the said intrinsic structure and enabling the discovery of complex interactions within the data. Several variants of tensor factorization methods exist, among which Candecomp/Parafac (CP) and Tucker are the most widely used [20]. Tensor factorizations have found applications in fields such as computer vision, neuroscience, genomics, recommender systems, and social network analysis [24, 32, 42, 14, 43, 20, 21]. Classic tensor factorization methods using maximum likelihood estimation (MLE) can be unreliable when applied to sparse count data [10]. Bayesian Poisson Tensor Factorization (BPTF)--a higher-order extension of Poisson matrix factorization--is used to overcome the limitations of the MLE approach when dealing with high-dimensional count data. BPTF provides advantages such as the ability to incorporate prior knowledge, perform model selection, and quantify uncertainty in parameter estimates [17, 37, 19]. Highly-dispersed count data with excess number of zeros is common in various fields such as healthcare (e.g., hospital readmissions), genomics (e.g., gene expression levels), social sciences (e.g., user behaviors), and insurance claims [11, 39]. 
The Zero-Inflated Poisson (ZIP) distribution is a better model for such data compared to the Poisson distribution [27, 11, 15], and has been successfully used in recommendation systems and other applications [39]. In addition to modeling the distribution of data and noise appropriately, another issue to be addressed is the inherent randomness of tensor factorization algorithms. This leads to varying results for multiple runs and negatively impacts interpretability and reproducibility [24, 41]. In this paper, we propose a novel approach for stable tensor factorization which is robust for high-dimensional sparse count data with excess zeros (Section 3). We claim three **main contributions**: * We propose a novel factorization approach for high-dimensional sparse count data with excess zeros, namely _Zero Inflated Poisson Tensor Factorization (ZIPTF)_, which utilizes the Bayesian ZIP model (Section 3). * To address the discussed randomness issue, we develop a meta-analysis method that generalizes consensus matrix factorization [25] and incorporates novel techniques to improve the stability and interpretability of the factorization results (Section 3.4, Figure 1). We specifically focus on its integration with ZIPTF, namely _Consensus-ZIPTF (C-ZIPTF)_. Nonetheless, our method is generalizable to other factorization approaches. * We provide an extensive evaluation on three different datasets: (1) synthetic zero-inflated count tensors with increasing probability \(\Phi\) of excess zeros (Section 4.1); (2) synthetic multi-sample single-cell RNA sequencing (scRNA-seq) data (Section 4.2); (3) a real scRNA-seq dataset of immune cells stimulated with interferon beta (Section 4.3). We compare ZIPTF and C-ZIPTF against baseline matrix and tensor factorization methods. Our results indicate that ZIPTF outperforms the baselines in terms of reconstruction accuracy for zero-inflated data. Specifically, for \(\Phi=0.8\), ZIPTF achieves an average explained variance of \(0.92\), compared to a maximum of \(0.38\) achieved by the baseline models. Additionally, C-ZIPTF significantly improves the consistency and accuracy of the factorization results. Finally, both ZIPTF and C-ZIPTF successfully capture biologically meaningful gene expression programs (GEPs) and result in factors with higher Pearson correlations to known GEPs. ## 2. Tensor preliminaries This section presents the foundational concepts and notations for tensors, with most of the notation borrowed from [24]. We denote the \((i_{1},i_{2},\ldots,i_{N})\)-th entry of an \(N\)-way tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times\ldots\times I_{N}}\) as \(\mathcal{X}_{i_{1}i_{2}\ldots i_{N}}\). The Frobenius norm of a tensor is similar to the matrix Frobenius norm: \[\|\mathcal{X}\|_{F}=\sqrt{\sum_{i_{1}=1}^{I_{1}}\sum_{i_{2}=1}^{I_{2}}\ldots\sum_{i_{N}=1}^{I_{N}}\mathcal{X}_{i_{1}i_{2}\ldots i_{N}}^{2}}. \tag{2.1}\] An \(N\)-way tensor \(\mathcal{Y}\) is called a rank-\(1\) tensor if it can be written as the outer product of \(N\) vectors, i.e., \(\mathcal{Y}=u^{(1)}\otimes u^{(2)}\otimes\ldots\otimes u^{(N)}\) with \(\mathcal{Y}_{i_{1}i_{2}\ldots i_{N}}=u_{i_{1}}^{(1)}u_{i_{2}}^{(2)}\ldots u_{i_{N}}^{(N)}\). A rank \(R\geq 1\) approximation to the tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\) can be given as: \[\mathcal{X}=\tilde{\mathcal{X}}+\mathcal{E}\text{ where }\tilde{\mathcal{X}}=\sum_{r=1}^{R}a_{r}^{(1)}\otimes a_{r}^{(2)}\otimes\cdots\otimes a_{r}^{(N)}, \tag{2.2}\] \(A^{(i)}=[a_{1}^{(i)}\ldots a_{R}^{(i)}]\in\mathbb{R}^{I_{i}\times R},1\leq i\leq N\) is the _factor matrix_ along the \(i\)-th mode, and \(\mathcal{E}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\). The factorization given in Eqn. (2.2) is often referred to as the CP (Candecomp / Parafac) decomposition, which is a special case of the Tucker decomposition (see [24] for details). The approximation can be concisely expressed as \(\tilde{\mathcal{X}}=[[A^{(1)},\ A^{(2)},\ldots,\ A^{(N)}]].\) In this paper, we impose a non-negativity constraint on factors to improve their interpretability. The primary method for computing the approximation in Eqn. (2.2) involves using the maximum likelihood estimation (MLE) approach, which entails minimizing the following error: \[\min_{A^{(1)},A^{(2)},\ldots,A^{(N)}}||\mathcal{X}-\tilde{\mathcal{X}}||_{F}. \tag{2.3}\] Iterative algorithms such as multiplicative updates, alternating least squares, and gradient descent are commonly utilized for Eqn. (2.3) [1, 24, 41]. The MLE approach often assumes Gaussian noise [24, 41].
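To make the notation above concrete, the following is a small NumPy sketch of Eqs. (2.1)-(2.3) for a 3-way tensor; the dimensions, rank, and random factors are arbitrary illustrative choices rather than fitted quantities.

```python
# Rank-R CP reconstruction (Eq. (2.2)) and Frobenius error (Eqs. (2.1), (2.3))
# for a 3-way tensor; shapes, rank, and factors are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 20, 15, 10, 4

# Non-negative factor matrices A^(1), A^(2), A^(3).
A1 = rng.gamma(2.0, 1.0, size=(I, R))
A2 = rng.gamma(2.0, 1.0, size=(J, R))
A3 = rng.gamma(2.0, 1.0, size=(K, R))

# X_tilde[i, j, k] = sum_r A1[i, r] * A2[j, r] * A3[k, r]
X_tilde = np.einsum('ir,jr,kr->ijk', A1, A2, A3)

# A noisy observation of the low-rank tensor (X = X_tilde + E).
X = X_tilde + 0.1 * rng.standard_normal(X_tilde.shape)

frob_err = np.linalg.norm((X - X_tilde).ravel())      # ||X - X_tilde||_F
print("relative Frobenius error:", frob_err / np.linalg.norm(X.ravel()))
```

In practice the factor matrices would of course be obtained by one of the iterative schemes mentioned above rather than drawn at random.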
A rank \(R\geq 1\) approximation to the tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\) can given as: \[\mathcal{X}=\tilde{\mathcal{X}}+\mathcal{E}\text{ where }\tilde{\mathcal{X}}= \sum_{r=1}^{R}a_{r}^{(1)}\otimes a_{r}^{(2)}\otimes\cdots\otimes a_{r}^{(N)}, \tag{2.2}\] \(A^{(i)}=[a_{1}^{(i)}\ldots a_{R}^{(i)}]\in\mathbb{R}^{I_{i}\times R},1\leq i\leq N\) is the _factor matrix_ along the \(i-\)th mode, and \(\mathcal{E}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\). The factorization given in Eqn. (3.9) is often referred to as the CP (Candecomp / Parafac) decomposition which is a special case of Tucker decomposition (see [24] for details). The approximation can be concisely expressed as \(\tilde{\mathcal{X}}=[[A^{(1)},\ A^{(2)},\ldots,\ A^{(N)}]].\) In this paper, we impose a non-negativity constraint on factors to improve their interpretability. The primary method for solving Eqn. (3.9) involves using the maximum likelihood estimation (MLE) approach, which entails minimizing the following error: \[\min_{A^{(1)},A^{(2)},\ldots,A^{(N)}}||\mathcal{X}-\tilde{\mathcal{X}}||_{F}. \tag{2.3}\] Iterative algorithms such as multiplicative updates, alternating least, and gradient descent are commonly utilized for Eqn. (2.3) [1, 24, 41]. The MLE approach often assumes Gaussian noise [24, 41]. ## 3. Bayesian tensor factorization and consensus aggregation ### Bayesian Poisson tensor factorization Traditional tensor factorization methods using MLE are unstable when applied to zero-inflated count data [10]. Bayesian Poisson Tensor Factorization (BPTF) extends the Poisson Matrix Factorization method to higher dimensions and utilizes Bayesian inference to obtain a point estimate and offers benefits such as uncertainty quantification, realistic noise assumptions, and principled inclusion of prior information [17, 19, 37, 44]. This section presents a general framework for BPTF with Variational Inference (VI) for high-dimensional count data. Let \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}\) be the observed count data drawn from the Poisson distribution and with the CP decomposition as given in Eqn. (3.9). Let \(I=i_{1}i_{2}\ldots i_{N}\in\overline{I}=\{i_{1}i_{2}\ldots i_{N}\ :\ 1\leq i_{j}\leq I_{j},\ 1 \leq j\leq N\},\) then \[X_{I}\approx Poisson(\lambda_{I})\ \text{where}\ \mathcal{X}_{I}\approx\tilde{ \mathcal{X}}_{I}=\sum_{r=1}^{R}a^{(1)}_{i_{1}r}a^{(2)}_{i_{2}r}\ldots a^{(N)} _{i_{N}r}\approx\lambda_{I}. \tag{3.1}\] BPTF uses Gamma priors to regularize the estimation of the latent factors [8, 46, 5]. The Gamma distribution, which is characterized by a shape parameter \(\alpha>0\) and a rate parameter \(\alpha\beta>0,\) is employed as a sparsity-inducing prior [37, 17, 8]. Then for each \(a^{(k)}_{jr}\) in Eqn. (3.1), we have: \[a^{(k)}_{jr}\approx Gamma(\alpha,\alpha\beta^{(k)}),\ 1\leq k\leq N, \tag{3.2}\] with the expectation \(E[a^{(k)}_{jr}]=\frac{1}{\beta^{(k)}}\) and \(Var[a^{(k)}_{jr}]=\frac{1}{\alpha\beta^{(k)^{2}}}\). The posterior distribution given by \(P(A^{(1)},A^{(2)},\ldots,A^{(N)}|\mathcal{X},\mathcal{H})\) is intractable due to the inability to compute the evidence, given a model hyperparameter set \(\mathcal{H}=\{\alpha,B^{(1)},B^{(2)},\ldots,B^{(N)}\}\)[5]. BPTF uses VI and assumes a variational family of distributions \(\mathcal{Q}_{V}=\mathcal{Q}(A^{(1)},A^{(2)},\ldots,A^{(N)};V^{(1)},\ldots,V^{ (N)})\) which is indexed by a set of variational parameters \(V^{(k)},1\leq k\leq N\)[5, 4]. 
We employ a fully-factorized mean-field approximation assuming that \(\mathcal{Q}_{V}(A^{(1)},A^{(2)},\ldots,A^{(N)})=\prod_{k=1}^{N}\mathcal{Q}_{V}(A^{(k)};V^{(k)}),\) where \(\mathcal{Q}(a^{(k)}_{jr};V^{(k)}_{jr})=Gamma(a^{(k)}_{jr};\gamma^{(k)}_{jr},\delta^{(k)}_{jr}),1\leq k\leq N.\) The variational family \(\mathcal{Q}\) used here is similar to the one employed in Bayesian Poisson Matrix Factorization [8, 16, 33]. BPTF fits variational parameters by minimizing the Kullback-Leibler (KL) divergence between the true posterior distribution and \(\mathcal{Q}_{V},\) which is equivalent to maximizing the evidence lower bound (ELBO) [5, 4, 17]: \[ELBO(V)=E_{Q_{V}}[\log P(\mathcal{X},A^{(1)},A^{(2)},\ldots,A^{(N)}|\mathcal{H})]+H(Q_{V}), \tag{3.3}\] where \(H(Q_{V})\) is the entropy of \(Q_{V}.\) Coordinate ascent algorithms are commonly used to maximize the ELBO by iteratively optimizing each variational parameter while fixing the others until convergence, monitored by the relative change in the ELBO [4, 5]. From Eqn. (3.1), we have the total \(n=\sum_{I\in\overline{I}}\mathcal{X}_{I}\sim Poisson(\Lambda)\) where \(\Lambda=\sum_{I\in\overline{I}}\lambda_{I}.\) We can use the Poisson-Multinomial connection to express \(\mathcal{X}\) given \(n\) as \(Multinomial(n,\pi)\) where \((\pi)_{I}=\frac{\lambda_{I}}{\Lambda},\) and update variational parameters using this auxiliary distribution [5, 37, 8, 26]: \[\gamma^{(k)}_{jr}=\alpha+\sum_{\begin{subarray}{c}i_{1}i_{2}\ldots i_{N}\in\overline{I}\\ i_{k}=j\end{subarray}}\mathcal{X}_{i_{1}i_{2}\ldots i_{N}}\frac{\mathbb{G}_{Q_{V}}\big[\prod_{s=1}^{N}a^{(s)}_{i_{s}r}\big]}{\sum_{t=1}^{R}\mathbb{G}_{Q_{V}}\big[\prod_{s=1}^{N}a^{(s)}_{i_{s}t}\big]}, \tag{3.4}\] \[\delta^{(k)}_{jr}=\alpha\beta^{(k)}+\sum_{i_{1}i_{2}\ldots i_{N}\in\overline{I}}E_{Q_{V}}\big[\prod_{1\leq s\neq k\leq N}a^{(s)}_{i_{s}r}\big], \tag{3.5}\] where \(E_{Q_{V}}[.]\) and \(\mathbb{G}_{Q_{V}}[.]=\exp(E_{Q_{V}}[\log(.)])\) denote the arithmetic and geometric expectations, respectively. Since \(Q_{V}\) is fully factorized, the expectations in Equations (3.4) and (3.5) can be expressed as a product of individual expectations [5]. Specifically, for \(a^{(s)}_{i_{s}r}\), \[E_{Q_{V}}[a^{(s)}_{i_{s}r}]=\frac{\gamma^{(s)}_{i_{s}r}}{\delta^{(s)}_{i_{s}r}}\text{ and }\mathbb{G}_{Q_{V}}[a^{(s)}_{i_{s}r}]=\frac{\exp(\Psi(\gamma^{(s)}_{i_{s}r}))}{\delta^{(s)}_{i_{s}r}}, \tag{3.6}\] where \(\Psi\) is the digamma function (logarithmic derivative of the gamma function). An empirical Bayes approach can be used to update the hyperparameters \(\beta^{(k)},1\leq k\leq N,\) in conjunction with the variational parameters [8, 5]: \[\beta^{(k)}=\big(\sum_{j=1}^{I_{k}}\sum_{r=1}^{R}E_{Q_{V}}[a^{(k)}_{jr}]\big)^{-1}. \tag{3.7}\] The variational inference algorithm for BPTF is fully specified by the update equations (3.4), (3.5), and (3.7). ### Zero-inflated Poisson tensor factorization (ZIPTF) Poisson models may not always be sufficient to model count data with excess zeros, and zero-inflated models can often provide a better fit [27, 39]. The Zero-Inflated Poisson (ZIP) model assumes that the counts in the tensor \(\mathcal{X}\) can be modeled as a mixture of a point mass at zero and a Poisson distribution with parameter \(\lambda\).
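Before formalizing this definition, the mixture just described can be simulated directly. The NumPy sketch below draws zero-inflated Poisson counts by masking Poisson draws with a Bernoulli indicator; the rate, the excess-zero probability and the tensor shape are arbitrary illustrative values, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

shape = (10, 20, 300)   # illustrative tensor dimensions
lam = 3.0               # assumed Poisson rate
p_zero = 0.6            # assumed probability of an excess zero

poisson_part = rng.poisson(lam, size=shape)          # Y ~ Poisson(lambda)
excess_zero = rng.binomial(1, p_zero, size=shape)    # Phi ~ Bernoulli(p)
X = np.where(excess_zero == 1, 0, poisson_part)      # point mass at zero mixed with Poisson counts

print("fraction of zeros:", (X == 0).mean())
```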
Let \(\mathcal{X}\) be a count data tensor in \(\mathbb{R}^{I_{1}\times I_{2}\ldots\times I_{N}}.\) We define the index set \(\overline{I}\) as the collection of all possible indices, i.e., \(\overline{I}=\{i_{1}i_{2}\ldots i_{N}\ :\ 1\leq i_{j}\leq I_{j},\ 1\leq j\leq N\}.\) We say \(\mathcal{X}\) has a Zero-Inflated Poisson (ZIP) distribution if for every \(I\in\overline{I}:\) \[P(\mathcal{X}_{I}=x_{I})=p_{I}\mathbb{1}_{x_{I}=0}+(1-p_{I})\frac{e^{-\lambda_{I}}\lambda_{I}^{x_{I}}}{x_{I}!}, \tag{3.8}\] where the outcome variable \(x_{I}\) has non-negative integer values, \(\lambda_{I}\) is the expected Poisson count, and \(p_{I}\) is the _probability of extra zeros_ [27]. As an abbreviation, we write it as \(\mathcal{X}_{I}\sim ZIP(\lambda_{I},p_{I})\). The ZIP variable can be written as the _product_ \(\mathcal{X}_{I}=(1-\Phi_{I})\,\mathcal{Y}_{I}\) of a Poisson random variable \(\mathcal{Y}_{I}\sim Poisson(\lambda_{I})\) and an independent Bernoulli indicator \(\Phi_{I}\sim Bernoulli(p_{I})\) [11]. The Bernoulli variable \(\Phi_{I}\) takes the value \(1\) when \(\mathcal{X}_{I}\) is equal to \(0\) due to the Bernoulli component, and takes the value \(0\) otherwise. We consider the low rank \(R\geq 1\) decomposition of the zero-inflated count tensor \(\mathcal{X}:\) \[\mathcal{X}\approx\sum_{r=1}^{R}a^{(1)}_{r}\otimes a^{(2)}_{r}\otimes\cdots\otimes a^{(N)}_{r}. \tag{3.9}\] Hence, for \(I=i_{1}i_{2}\ldots i_{N}\), the reconstruction \(\sum_{r=1}^{R}a^{(1)}_{i_{1}r}a^{(2)}_{i_{2}r}\ldots a^{(N)}_{i_{N}r}\) can be interpreted as the mean of the distribution from which the observed count \(\mathcal{X}_{I}\) is assumed to be sampled. Then we have: \[\mathcal{X}_{I}\sim ZIP(\lambda_{I}=\sum_{r=1}^{R}a^{(1)}_{i_{1}r}a^{(2)}_{i_{2}r}\ldots a^{(N)}_{i_{N}r},p_{I}). \tag{3.10}\] ### Variational Inference for ZIPTF For a given position \(I=i_{1}i_{2}\ldots i_{N}\), we consider the rank \(R\) decomposition in Eqn. (3.10). In Bayesian Poisson factorizations, the Gamma distribution is utilized as a prior to induce sparsity, and it is assumed that each latent factor matrix \(A^{(k)}=[a_{1}^{(k)}\ldots a_{R}^{(k)}]\in\mathbb{R}_{+}^{I_{k}\times R},\ 1\leq k\leq N\), follows a Gamma distribution [8, 37]. Therefore, for each \(a_{jr}^{(k)}\) in Eqn. (3.10), we have: \[a_{jr}^{(k)}\sim Gamma(\alpha^{(k)},\beta^{(k)}),\quad 1\leq k\leq N, \tag{3.11}\] where \(\alpha^{(k)}>0\) and \(\beta^{(k)}>0\) represent the shape and rate parameters of the distribution, with the expectation \(E[a_{jr}^{(k)}]=\frac{\alpha^{(k)}}{\beta^{(k)}}\) and \(Var[a_{jr}^{(k)}]=\frac{\alpha^{(k)}}{\beta^{(k)^{2}}}\). Additionally, for ZIP models a latent variable \(\xi\) is introduced to capture the hidden state of the probability of extra zeros, which specifies \(\Phi_{I}\sim Bernoulli(p_{I})\) [39, 5]. Let \(S(.)\) denote the _logistic sigmoid_ function, given by \(S(x)=\frac{1}{1+e^{-x}}\); then: \[\xi=S(\zeta)\ \text{where}\ \zeta\sim Normal(\mu,\sigma). \tag{3.12}\] Let \(Z=\{A^{(1)},A^{(2)},\ldots,A^{(N)},\Phi\}\) and consider the posterior distribution \(P(Z|\mathcal{X},\mathcal{H})\), given a model hyperparameter set \(\mathcal{H}=\{\alpha^{(1)},\beta^{(1)},\alpha^{(2)},\beta^{(2)},\ldots,\alpha^{(N)},\beta^{(N)},\mu,\sigma\}\). Variational inference approximates the true posterior distribution using a family of probability distributions \(\mathcal{Q}\) over hidden variables [5]. This family of distributions is characterized by free parameters, and the key assumption is that each latent variable is independently distributed given these parameters.
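As a concrete illustration of the generative model in Eqns. (3.10)–(3.12), the Pyro sketch below places Gamma priors on three factor matrices, a logistic-Normal prior on the excess-zero probability (here a single global gate, a simplification), and a zero-inflated Poisson likelihood on the observed tensor, fitted by stochastic variational inference with an automatic guide instead of the hand-specified mean-field family described next. The function names `ziptf_model` and `fit` and the hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam

def ziptf_model(X, rank, alpha=1.0, beta=0.3):
    # X: float tensor of counts with shape (I, J, K).
    I, J, K = X.shape
    # Gamma priors on the non-negative factor matrices (Eqn. 3.11).
    A = pyro.sample("A", dist.Gamma(alpha, beta).expand([I, rank]).to_event(2))
    B = pyro.sample("B", dist.Gamma(alpha, beta).expand([J, rank]).to_event(2))
    C = pyro.sample("C", dist.Gamma(alpha, beta).expand([K, rank]).to_event(2))
    # Logistic-Normal prior on the gate, i.e. the excess-zero probability (Eqn. 3.12).
    zeta = pyro.sample("zeta", dist.Normal(0.0, 1.0))
    gate = torch.sigmoid(zeta)
    # CP reconstruction gives the Poisson rate (Eqn. 3.10).
    rate = torch.einsum("ir,jr,kr->ijk", A, B, C)
    pyro.sample("obs",
                dist.ZeroInflatedPoisson(rate=rate, gate=gate).to_event(3),
                obs=X)

def fit(X, rank, steps=2000):
    pyro.clear_param_store()
    guide = AutoDiagonalNormal(ziptf_model)
    svi = SVI(ziptf_model, guide, Adam({"lr": 0.05}), loss=Trace_ELBO())
    for _ in range(steps):
        svi.step(X, rank)
    # The fitted guide provides approximate posterior summaries of the factors.
    return guide
```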
We assume a variational family of distributions \(\mathcal{Q}\) indexed by a set of variational parameters \(V=\{\gamma^{(1)},\delta^{(1)},\gamma^{(2)},\delta^{(2)},\ldots,\gamma^{(N)},\delta^{(N)},\overline{\mu},\overline{\sigma}\}\) where \((\gamma^{(k)},\delta^{(k)})\) are variational shape and rate parameters of the Gamma distribution for the latent factor along the \(k\)-th mode, and \((\overline{\mu},\overline{\sigma})\) are the variational parameters for \(\zeta\). We use a fully factorized mean-field approximation [5] and the variational distribution factorizes as follows: \[\mathcal{Q}(A^{(1)},A^{(2)},\ldots,A^{(N)},\Phi)=\mathcal{Q}(\Phi;\overline{\mu},\overline{\sigma})\prod_{k=1}^{N}\mathcal{Q}(A^{(k)};\gamma^{(k)},\delta^{(k)}), \tag{3.13}\] where \(a_{jr}^{(k)}\sim Gamma(\gamma_{jr}^{(k)},\delta_{jr}^{(k)})\) and \(\Phi_{I}\sim Bernoulli(S(\zeta))\) for \(\zeta\sim Normal(\overline{\mu},\overline{\sigma})\). The goal is to choose a member \(q^{*}\) of the variational family of distributions which minimizes the Kullback-Leibler (KL) divergence between the exact posterior and \(\mathcal{Q}\): \[q^{*}(Z)=\arg\min_{q(Z)\in\mathcal{Q}}D_{KL}\left(q(Z)\|P(Z|\mathcal{X},\mathcal{H})\right). \tag{3.14}\] Upon examining the KL divergence, we encounter a significant challenge: it involves the true posterior distribution \(P(Z|\mathcal{X},\mathcal{H})\), which is not known. Nevertheless, we can rewrite the KL divergence as follows: \[D_{KL}\left(q(Z)\|P(Z|\mathcal{X},\mathcal{H})\right)=\int q(Z)\log\left(\frac{q(Z)}{P(Z|\mathcal{X},\mathcal{H})}\right)dZ \tag{3.15}\] \[=\int q(Z)\log\left(\frac{q(Z)P(\mathcal{X},\mathcal{H})}{P(Z,\mathcal{X},\mathcal{H})}\right)dZ \tag{3.16}\] \[=\log\left(P(\mathcal{X},\mathcal{H})\right)\int q(Z)dZ-\int q(Z)\log\left(\frac{P(Z,\mathcal{X},\mathcal{H})}{q(Z)}\right)dZ \tag{3.17}\] \[=\log\left(P(\mathcal{X},\mathcal{H})\right)-\int q(Z)\log\left(\frac{P(Z,\mathcal{X},\mathcal{H})}{q(Z)}\right)dZ. \tag{3.18}\] The second term in Eqn. (3.18) is called the Evidence Lower Bound (ELBO). We know that the KL divergence is non-negative, therefore, \(\log\left(P(\mathcal{X},\mathcal{H})\right)\geq\text{ELBO}(q(Z))=\int q(Z)\log\left(\frac{P(Z,\mathcal{X},\mathcal{H})}{q(Z)}\right)dZ\). \[\text{ELBO}(q(Z))=\int q(Z)\log\left(P(Z,\mathcal{X},\mathcal{H})\right)dZ-\int q(Z)\log\left(q(Z)\right)dZ \tag{3.19}\] \[=E_{q(Z)}[\log P(\mathcal{X},Z,\mathcal{H})]-E_{q(Z)}[\log q(Z)]. \tag{3.20}\] The evidence lower bound serves as a transformative tool that converts intractable inference problems into optimization problems that can be tackled using gradient-based methods [5]. Coordinate ascent algorithms are frequently employed in maximizing the evidence lower bound (ELBO) [5, 39]. However, these algorithms require tedious gradient calculations and may not scale well for very large datasets [18, 34]. Closed-form coordinate-ascent updates are applicable to conditionally conjugate exponential family models, but they necessitate analytic computation of various expectations for each new model [18, 34]. Stochastic Variational Inference (SVI) [18] offers a more efficient algorithm by incorporating stochastic optimization [35]. This technique involves utilizing noisy estimates of the gradient of the objective function. To maximize the evidence lower bound (ELBO), we employ a stochastic optimization algorithm known as the _Black Box Inference Algorithm_ [34].
This algorithm operates by stochastically optimizing the variational objective using Monte Carlo samples from the variational distribution to compute the noisy gradient (see Section 2 of [34] for details). By doing so, it effectively alleviates the burden of analytic computations and provides a more efficient approach to ELBO maximization. ### Generic consensus-based tensor factorization Selecting the number of components in tensor factorization is challenging [24, 1]. The dependence on initial guesses for latent factors can lead to substantially different factor sets across repeated runs, making it difficult to interpret the results [24, 1, 41]. We typically select the minimum value of \(R\) in Eqn. (3.9) that maximizes the explained variance of the approximation, defined as follows: \[\text{explained variance}=1-\frac{||\mathcal{X}-\tilde{\mathcal{X}}||_{F}}{||\mathcal{X}||_{F}}. \tag{3.21}\] Our goal is not solely to improve the explained variance, but also to ensure the interpretability and stability of the factors. We generalize the consensus meta-analysis approach, which has been previously used for matrix factorization [25], and include novel techniques to enhance the stability.

Figure 1. Overview of the consensus meta-analysis approach discussed in Section 3.4 for the 3-way tensor \(\mathcal{X}\).

The overview of the proposed pipeline is depicted in Figure 1. In the remainder of this section, we will refer to Steps 1-5 given in the figure. Running a generic rank \(R\) factorization given in Eqn. (3.9) for \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times\ldots\times I_{N}}\) with \(M\) different random seeds yields the sets of non-negative factor matrices \(\{(A^{(1)})_{m},(A^{(2)})_{m},\ldots,(A^{(N)})_{m}\},\ 1\leq m\leq M\) (Step 1). For a chosen modality \(k\) (\(1\leq k\leq N\)), we can aggregate and normalize the factor matrices from independent runs (Step 2): \[\overline{A^{(k)}}=\Big[\frac{(A^{(k)})_{1}}{||(A^{(k)})_{1}||_{F}}\ \ \frac{(A^{(k)})_{2}}{||(A^{(k)})_{2}||_{F}}\ \ldots\frac{(A^{(k)})_{M}}{||(A^{(k)})_{M}||_{F}}\Big]\in\mathbb{R}^{I_{k}\times(R\cdot M)}. \tag{3.22}\] The cophenetic correlation coefficient, commonly used to select ranks for matrix factorizations [7], assumes a one-to-one mapping between features and factors based on maximum loadings. However, this assumption may not be valid when a feature contributes significantly to multiple factors. Our method for selecting the rank and evaluating factorization stability involves clustering the column factors of the aggregated matrices and fixing the initial guess to ensure reliability. Initially, we perform K-means clustering [31] on the columns of the aggregated factor matrix \(\overline{A^{(k)}}\) with \(K=R\) (Step 3). The resulting cluster sets are given as \(C_{i}^{(k)}=\{\)columns of \(\overline{A^{(k)}}\) assigned to cluster \(i\},1\leq i\leq R\). The Local Outlier Factor algorithm [6] is used to remove outliers by considering the local density deviation of a data point compared to its neighbors. We evaluate the _goodness_ of the clustering with the silhouette coefficient [36], computed as \((b-a)/\max(a,b)\), where \(a\) is the average intra-cluster distance and \(b\) is the average inter-cluster distance. The silhouette coefficient ranges from -1 to 1, with higher values indicating more coherence.
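Steps 2–3 of this pipeline can be sketched with scikit-learn as follows. The inputs are assumed to be the factor matrices for one modality collected from repeated factorization runs, and the cluster count equals the chosen rank; this is an illustrative sketch rather than the paper's exact implementation, and `aggregate_and_cluster` is a hypothetical helper name.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import silhouette_score

def aggregate_and_cluster(factor_matrices, rank):
    """factor_matrices: list of M arrays of shape (I_k, R) from independent runs."""
    # Step 2: normalize each run's factor matrix (Frobenius norm) and stack columns side by side.
    normalized = [A / np.linalg.norm(A) for A in factor_matrices]
    aggregated = np.hstack(normalized)              # shape (I_k, R * M), as in Eqn. (3.22)

    columns = aggregated.T                          # one row per factor column

    # Remove outlying columns with the Local Outlier Factor (-1 marks outliers).
    lof = LocalOutlierFactor(n_neighbors=min(20, len(columns) - 1))
    columns = columns[lof.fit_predict(columns) == 1]

    # Step 3: K-means with K = R, with the silhouette coefficient as a quality check.
    labels = KMeans(n_clusters=rank, n_init=10).fit_predict(columns)
    quality = silhouette_score(columns, labels)
    return columns, labels, quality
```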
After clustering, we obtain the consensus factors \(f_{C_{i}}^{(k)}\), where \(1\leq i\leq R\), by computing the median value of the factors in each cluster (Step 4) and form the consensus matrix: \[\overline{A^{(k)}}_{C}=[f_{C_{1}}^{(k)}f_{C_{2}}^{(k)}\ldots f_{C_{R}}^{(k)}] \in\mathbb{R}^{I_{k}\times R},1\leq k\leq N. \tag{3.23}\] We perform the decomposition using \(\overline{A^{(k)}}_{C}\) as the fixed initial guess for the \(k\)-th modality to obtain the final factor matrices (Step 5). Notice that if ZIPTF is employed as the factorization method in Step 1 described above, we refer to the resulting factorization as C-ZIPTF. ## 4. Experiments Here we present results showing the superior performance of C-ZIPTF across multiple evaluation metrics. We implemented C-ZIPTF in Python, using the probabilistic programming language Pyro [3]. Our presentation in Section 3.1 focused on Bayesian tensor factorization frameworks that utilize Poisson and Zero-Inflated Poisson based models. However, our implementation is designed to be more versatile and can accommodate different types of noise models. We conduct three different evaluations to assess the performance of our proposed method. First, we compare ZIPTF with alternative factorization methods on simulated tensors with known factors and ZIP noise and evaluate the benefits of using a ZIP model and the inclusion of the consensus approach (Section 4.1). Second, we evaluate the performance of our method on simulated single-cell RNA sequencing data and compare it with other matrix and tensor factorization methods at the task of recovering identity and activity gene expression programs (GEPs), Section 4.2. Finally, we demonstrate the ability of our method to capture biologically meaningful gene expression programs by applying it to a real single-cell RNA sequencing dataset of immune cells stimulated with interferon beta (Section 4.3). ### Synthetic tensor experiment To evaluate the performance of C-ZIPTF on synthetic data, we generate tensors using known factors and Poisson noise with varying degrees of zero inflation and measure the accuracy of different methods at recovering the original factors. To generate a tensor \(\mathcal{T}^{\prime}\in\mathbb{R}^{I\times J\times K}\), we first create three factor matrices \(A\in\mathbb{R}_{+}^{I\times R}\), \(B\in\mathbb{R}_{+}^{J\times R}\), and \(C\in\mathbb{R}_{+}^{K\times R}\), with elements drawn from a Gamma distribution with shape \(\alpha=3\) and rate \(\beta=0.3\), where \(R\) is the desired true rank. We then construct a tensor \(\mathcal{T}\) by taking the sum of the outer product of the corresponding columns of the matrices, i.e., \(\mathcal{T}=[[A,B,C]]\). Finally, we generate the tensors \(\mathcal{T}^{\prime}\) by sampling from a ZIP distribution with mean \(\mathcal{T}\) and a given probability of extra zeros, denoted by \(\Phi\) in Section 3.2. **Zero-inflated Poisson model results in higher explained variance** For the first experiment, we ran ZIPTF without consensus aggregation to evaluate the advantages of using the ZIP model alone. For comparison, we considered three alternative tensor factorization methods: Non-Negative CP decomposition via Alternating-Least Squares (NNCP-ALS) [24], Bayesian Tensor Factorization with Truncated Gaussian model (TGTF)[17], and Bayesian Tensor Factorization with Gamma Poisson model (GPTF)[37]. 
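For reference, the NNCP-ALS baseline and the explained-variance metric of Eqn. (3.21) can be computed with Tensorly as in the sketch below. The function names follow Tensorly's public API as we understand it (Tensorly also ships an HALS variant, `non_negative_parafac_hals`), and the tensor contents are illustrative only.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

def explained_variance(X, X_hat):
    # Eqn. (3.21): 1 - ||X - X_hat||_F / ||X||_F
    return 1.0 - tl.norm(X - X_hat) / tl.norm(X)

# Illustrative count tensor; in the experiment this would be the simulated tensor T'.
X = tl.tensor(np.random.poisson(3.0, size=(10, 20, 300)).astype(float))

cp = non_negative_parafac(X, rank=9, n_iter_max=1000)   # non-negative CP baseline
X_hat = tl.cp_to_tensor(cp)

print("explained variance:", explained_variance(X, X_hat))
```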
We conducted 20 trials, generating a new simulated tensor \(\mathcal{T}^{\prime}\) of shape \(10\times 20\times 300\) and rank 9 each time and running each factorization method on the tensor for a fixed maximum number of iterations (\(max\_iter=1000\)). We evaluated the performance of the methods using the explained variance (3.21) of the approximation generated by each factorization. ZIPTF consistently outperformed both NNCP-ALS and the Bayesian Tensor Factorization methods without the ZIP model, as shown in Figure 2(a). At a zero probability of excess zeros, all four methods showed similar and nearly perfect explained variance. However, as the excess zero level increased, the performance of the other methods deteriorated rapidly. At the highest probability of excess zeros simulated, \(\Phi=0.8\), the average explained variance of the ZIPTF approximation was 0.974, with a 95% confidence interval (CI) [0.962, 0.987], about \(2.4\times\) better than the second highest explained variance of 0.338, 95% CI [0.334, 0.342] achieved by the Gamma Poisson model. We also note that the difference in explained variance between NNCP-ALS and the Bayesian methods other than ZIPTF is minimal compared to the difference to ZIPTF. This indicates that the superiority of ZIPTF arises from using the appropriate noise model.

Figure 2. ZIPTF compared to alternative factorization methods on a synthetic tensor with known factors and ZIP noise, and stability comparison between ZIPTF and C-ZIPTF. (a) We calculated the explained variance of ZIPTF and alternative methods for different levels of extra zeros. (b) Cosine similarity between factors obtained on repeat runs for ZIPTF and C-ZIPTF. (c) Cosine similarity between inferred factors and original factors for ZIPTF and C-ZIPTF.

**Consensus aggregation leads to more consistent factorization** After demonstrating ZIPTF's superior performance in modeling zero-inflated count data, we examine the benefits of consensus aggregation. We generate tensors of shape \(40\times 20\times 2000\) and rank 9 with known factors and Zero-Inflated Poisson noise as described above and evaluate the recovered factors by running ZIPTF with and without consensus aggregation. For this experiment, we fix the probability of excess zeros \(\Phi=0.6\). We compare the internal consistency of factors obtained from multiple runs of the decompositions. For our simulated tensor \(\mathcal{T}^{\prime}\), assume that we have two rank \(R\) approximations \([[A,B,C]]\) and \([[D,E,F]]\) corresponding to different randomly initialized runs. To measure the similarity between factorizations, we calculate: \[\text{\emph{cosine score}}([[A,B,C]],[[D,E,F]])=\frac{1}{R}\sum_{i=1}^{R}\max_{1\leq j\leq R}\ \cos(a_{i},d_{j})\cos(b_{i},e_{j})\cos(c_{i},f_{j}). \tag{4.1}\] We evaluate the similarity of factors recovered from 20 randomly initialized runs of both ZIPTF and C-ZIPTF using the _cosine score_ given in Eqn. (4.1). We observe that the factors recovered from C-ZIPTF are more consistent with one another compared to those recovered from ZIPTF, as shown in Figure 2(b). The consensus approach makes C-ZIPTF more robust, reducing the impact of the inherent stochasticity of the factorization process and resulting in a more stable set of factors. **Consensus aggregation leads to more accurate recovery of original factors** We assess the accuracy of both ZIPTF and C-ZIPTF in recovering the original factors used to create the tensor with \(\Phi=0.6\).
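The cosine score of Eqn. (4.1) used in these comparisons can be computed as in the following NumPy sketch, which is a direct transcription of the formula; the helper names are ours.

```python
import numpy as np

def _cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_score(factors_1, factors_2):
    """factors_1 = (A, B, C), factors_2 = (D, E, F); each matrix has R columns."""
    A, B, C = factors_1
    D, E, F = factors_2
    R = A.shape[1]
    total = 0.0
    for i in range(R):
        # For each component i, take the best-matching component j of the other run.
        total += max(
            _cos(A[:, i], D[:, j]) * _cos(B[:, i], E[:, j]) * _cos(C[:, i], F[:, j])
            for j in range(R)
        )
    return total / R
```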
We perform 20 randomly initialized runs of each method and compare the recovered factors to the original factors using the _cosine score_. Figure 2(c) demonstrates that C-ZIPTF outperforms ZIPTF in recovering the original factors. ### Synthetic single-cell RNA-Seq data analysis We test the performance of C-ZIPTF on single-cell RNA sequencing (scRNA-seq) data, which is prone to zero inflation due to technical limitations that result in dropout events [28]. To evaluate the effectiveness of C-ZIPTF, we compared its performance with other matrix and tensor factorization methods using a synthetic scRNA-seq dataset. We used the Splatter simulation framework [45] which was adapted to Python in a previous study [25] to generate the synthetic scRNA-seq dataset. The simulation framework utilizes a Gamma-Poisson hierarchical model with hyper-parameters estimated from real data. Technical dropouts are simulated by randomly replacing some of the simulated counts with zeros using a Bernoulli distribution. The complete details of the simulation framework and parameters used are provided in Section 6. The synthetic dataset consists of 15,000 cells and 5,000 genes from six donors, with five gene expression programs defining cell type identities and three gene expression programs defining donor-specific activity. We evaluated the performance of various factorization methods, including ZIPTF, C-ZIPTF, Non-negative Matrix Factorization (NMF) [29], Consensus NMF (CNMF) [25], and NNCP-ALS, with the goal of recovering the eight gene expression programs embedded in the synthetic scRNA-seq dataset. For NMF and CNMF, the rank 8 decomposition is performed with a maximum of 1000 iterations for convergence after normalizing the cell-by-gene count matrix to counts per million (CPM). For the tensor-based approaches, we construct the observed tensor by pseudobulking the cell-by-gene counts matrix. We cluster the cells to obtain tentative _cell type_ groupings and generate pseudobulk counts by summing all the counts for each donor, cell type, and gene. This creates a tensor of shape \(D\times C\times G\), where D, C, and G represent the number of donors, cell types, and genes, respectively. We then normalize the pseudobulk tensor to CPM and apply tensor factorization methods with rank 8 and perform 1000 iterations for each method. To evaluate the performance of the factorization methods in recovering the eight true gene expression programs (GEPs), we computed the Pearson correlation [2] between each of the eight latent factors in the gene mode obtained via factorization and the original GEPs. This correlation was used to establish a one-to-one alignment between the factors and the GEPs. We calculated the average Pearson Correlation between each factor and its corresponding GEP as the accuracy score of the method. The results of our analysis are presented in Figure 3 for three different levels of simulated intensity for the activity GEPs (mean log2 fold change of differentially expressed genes (log2FC) \(\in\{0.25,0.5,0.75\}\)). We observed that when the signal is strong enough (log2FC \(=0.75\)), CNMF and C-ZIPTF perform comparably. However, when the signal intensities are lower (log2FC \(\in\{0.25,0.5\}\)), C-ZIPTF clearly outperforms all other methods. Additionally, we found that ZIPTF without consensus aggregation also performs better than the other factorization methods, indicating that both the use of the ZIP model and the consensus aggregation improve the accuracy of the method in recovering GEPs. 
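The pseudobulking step used in this experiment (and in the real-data analysis below) can be sketched as follows. The column names `donor` and `cell_type` and the plain array/DataFrame inputs are illustrative assumptions rather than the paper's actual data layout.

```python
import numpy as np
import pandas as pd

def pseudobulk_tensor(counts, obs):
    """counts: (n_cells, n_genes) array; obs: DataFrame with 'donor' and 'cell_type' columns."""
    donors = sorted(obs["donor"].unique())
    cell_types = sorted(obs["cell_type"].unique())
    n_genes = counts.shape[1]

    T = np.zeros((len(donors), len(cell_types), n_genes))
    for d, donor in enumerate(donors):
        for c, ct in enumerate(cell_types):
            mask = ((obs["donor"] == donor) & (obs["cell_type"] == ct)).to_numpy()
            # Sum raw counts over all cells of this donor / cell type pair.
            T[d, c, :] = counts[mask].sum(axis=0)

    # Normalize each (donor, cell type) fiber to counts per million (CPM).
    totals = T.sum(axis=2, keepdims=True)
    return np.divide(T * 1e6, totals, out=np.zeros_like(T), where=totals > 0)
```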
### Real single-cell RNA-Seq data analysis We applied C-ZIPTF to a real-world single-cell RNA sequencing dataset of peripheral blood mononuclear cells (PBMCs) from patients with Lupus, reported in [22]. We obtained the single-cell RNA-Seq data from GEO using accession number GSE96583 [https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE96583](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE96583). As described in [22], the dataset contains 29,065 cells from eight patients, which are divided into stimulated and control groups, with the former being treated with interferon beta (IFN-\(\beta\)), a cytokine that modulates the transcriptional profile of immune cells. As part of the preprocessing step, we filter out multiplets and cells without a cell type assignment. Additionally, we remove samples and cell types that constitute less than 2 percent of cells. After these filtering steps, the dataset contained 14 samples, 7 control and 7 stimulated, and 6 cell types: CD4 T-cells, CD14+ Monocytes, B-cells, CD8 T-cells, NK cells, and FCGR3A+ Monocytes. In order to facilitate biological interpretability of the factors and reduce noise in the tensor formed, we removed genes that are either not provided with HGNC symbols [38], or had a total count of less than 50 across all cells. Finally, we create a pseudobulk tensor by summing up the raw counts for each cell type, sample, and gene.

Figure 3. The average Pearson correlation between the true gene expression programs (GEPs) used in the simulation and the inferred GEPs obtained from various factorization methods. The results are presented for three different signal intensity levels (0.25, 0.5, 0.75), which are indicated by the mean log2 fold change (log2FC) of simulated differentially expressed genes.

The resulting pseudobulk data tensor has dimensions S \(\times\) C \(\times\) G (14 \(\times\) 6 \(\times\) 9,276), where S, C and G denote the number of samples, cell types and genes respectively. We normalize the tensor such that each sample-cell type pair has a total of \(10^{6}\) counts. We first determined the optimal rank for the data by running C-ZIPTF with a range of ranks from 2 to 14 and just 5 restarts. Some of the metrics we considered in deciding the optimal rank, including explained variance and silhouette score, are shown in Figure 4. We select rank 8, which exhibited a high explained variance of 0.969. As depicted in Figure 5, C-ZIPTF successfully identifies both cell type identity and condition-specific gene expression programs (GEPs). Notably, factor 4 represents an identity GEP that remains active in all B-cells, irrespective of the condition. The genes exhibiting the highest loadings for this factor are well-established B-cell markers, such as _MS4A1_, _CD79A_, and _BANK1_ [12]. Furthermore, we performed gene set enrichment analysis [40] of these factors using GSEApy [13] in Python. This analysis revealed enrichment pathways consistent with B-cell characteristics, including B-cell activation and the B-cell receptor signaling pathway. Conversely, factor 1 and factor 6 capture distinct gene expression programs that are specifically activated in IFN-\(\beta\) stimulated samples. Factor 1 captures a cross-cell-type response to IFN-\(\beta\) stimulation, whereas factor 6 represents a Monocyte-specific response. These findings align with previous studies that have reported a Monocyte-specific response to IFN-\(\beta\) stimulation [30].
Furthermore, gene set enrichment analysis revealed enrichment in pathways such as the cellular response to type I interferon and inflammatory response, among others. For a comprehensive list of factors identified by C-ZIPTF and their associated gene expression programs, please refer to Figure 5. ## 5. Conclusion Zero-inflated count data is a common phenomenon in a wide range of fields, including genomics, finance, risk management, healthcare, and social sciences. However, traditional tensor factorization methods have limited effectiveness when dealing with zero-inflated data, often yielding inaccurate and unstable results across runs with different initializations. To overcome these challenges, we propose ZIPTF, a Bayesian tensor factorization model that is specifically tailored to zero-inflated count data. Additionally, we introduce a generic meta-analysis framework for consensus-driven tensor factorization. By combining these two approaches, we develop a novel method called C-ZIPTF that achieves both high accuracy and stability, and outperforms state-of-the-art baselines on synthetic and real data. Our proposed method provides a useful tool for researchers in various fields to gain deeper insights into their data.

Figure 4. Metrics used in selecting the optimal rank for running C-ZIPTF on the real single-cell RNA sequencing dataset [22] of immune cells stimulated with interferon beta (IFN-\(\beta\)).

The Bayesian approach for tensor factorizations offers several other advantages over maximum likelihood estimation-based methods for tensor factorization. These include the ability to incorporate prior knowledge, perform model selection, and quantify uncertainty in the parameter estimates. However, it is important to note that Bayesian methods can be computationally expensive and require careful specification of prior distributions, which may require expert knowledge. Moreover, the tensor methods discussed in this paper rely on a multilinear factorization form and may be inadequate for capturing more complex, nonlinear relations in the data. To overcome this limitation, one possible solution is to integrate a kernelized approach into the factorization. In future work, we plan to focus on the careful design of kernel functions that would enable us to effectively capture nonlinear patterns in the data.

Figure 5. Full set of factors recovered by running C-ZIPTF on the real single-cell RNA sequencing dataset [22] of immune cells stimulated with interferon beta (IFN-\(\beta\)). Each row represents a factor, and the first three columns display the three modes: sample, cell type, and gene. The \(y\)-axis in the sample and cell type modes represents the loading of the sample or cell type on that factor. The gene mode exhibits the top 20 genes associated with the factor. The last column provides the top 3 enriched terms obtained from a gene set enrichment analysis.

## 6. Supplementary Material ### Implementation of ZIPTF and C-ZIPTF We present a Python implementation of a versatile Bayesian Tensor Factorization method using Variational Inference. Our implementation leverages Pyro [3], a probabilistic programming framework built on PyTorch. The BayesianCP class inherits from torch.nn.Module and offers functionalities for model fitting and summarizing the posterior distribution of factor matrices. During model fitting, Stochastic Variational Inference (SVI) is employed with an Adam optimizer [23, 18].
The current implementation supports three models: Zero Inflated Poisson model (ZIPTF), a Gamma Poisson model (GPTF) [37], and a Truncated Gaussian model (TGTF) [17]. ### Implementation of baseline methods As mentioned in Section 6.1, we utilize the same implementation for the other Bayesian tensor factorization approaches (Gamma Poisson Bayesian Tensor Factorization and Truncated Gaussian Bayesian Tensor Factorization) as the ZIPTF method and the code is provided at [https://github.com/klarman-cell-observatory/scBTF](https://github.com/klarman-cell-observatory/scBTF) and [https://github.com/klarman-cell-observatory/scbtf_experiments](https://github.com/klarman-cell-observatory/scbtf_experiments). For the remaining baselines used in our comparisons we use the following implementations: * **Non-negative Matrix Factorization (NMF)**: We use the Python implementation provided in the scikit-learn package. [https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.non_negative_factorization.html](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.non_negative_factorization.html). * **Consensus Non-negative Matrix Factorization (cNMF)**: We use the Python implementation described in [25], and provided on GitHub [https://github.com/dylkot/cNMF/tree/master](https://github.com/dylkot/cNMF/tree/master) * **Non-negative CP via Alternating-Least Squares (NNCP-ALS)**: We use the Python implementation provided in the Tensorly package. [http://tensorly.org/stable/modules/generated/tensorly.decomposition.non_negative_parafac_hals.html](http://tensorly.org/stable/modules/generated/tensorly.decomposition.non_negative_parafac_hals.html) ### Simulation details We use a Python adaptation of the Splatter [45] statistical framework given in [25] to simulate single-cell RNA-Seq data. The core of the simulation is a Gamma-Poisson distribution used to generate a cell-by-gene count matrix. While the original Splatter framework supports the simulation of both expression outlier genes and technical dropout (random knockout of counts), the Python adaptation in [25] only keeps outlier expression simulation. Since our method is specifically adapted to handle dropout noise in single-cell data, we add back the modeling of dropout to the Python adaptation. Specifically, after sampling counts from a Poisson distribution, we simulate dropout noise by calculating the probability of a zero for each gene from its mean expression and using that to randomly replace some of the simulated counts with zeros employing a Bernoulli distribution as described in [45]. The distribution of expression values prior to incorporating differential expression was determined based on parameters estimated from a random sample of 8000 cells from an organoid dataset as described in [25]. Specifically, the library size of a cell is sampled from a Lognormal distribution derived from a Normal distribution with a mean of 7.64 and a standard deviation of 0.78. The mean expression of a gene is sampled from a Gamma distribution with a mean of 7.68 and a shape of 0.34. With the probability of 0.00286, a gene will be an outlier from this Gamma distribution and will instead be sampled from a Lognormal distribution derived from a Normal distribution with a mean of 6.15 and standard deviation of 0.49. Additionally, we set a 5% doublet rate. Doublets are formed by randomly sampling a pair of cells, combining their gene counts, and downsampling such that the total count equals the larger of the two.
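The dropout step described above can be sketched as follows. The logistic relationship between a gene's mean expression and its zero probability follows the Splatter model as we understand it, and the midpoint and shape values below are purely illustrative placeholders, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_dropout(counts, gene_means, midpoint=1.0, shape=-1.0):
    """counts: (n_cells, n_genes) Poisson-sampled counts; gene_means: per-gene mean expression."""
    # Splatter-style logistic dropout probability computed from the (log) mean expression:
    # low-expression genes get a high probability of being zeroed out.
    p_drop = 1.0 / (1.0 + np.exp(-shape * (np.log(gene_means) - midpoint)))
    # Bernoulli mask: with probability p_drop a simulated count is replaced by zero.
    keep = rng.binomial(1, 1.0 - p_drop, size=counts.shape)
    return counts * keep
```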
2304.06398
A Logical Account of Subtyping for Session Types
We study the notion of subtyping for session types in a logical setting, where session types are propositions of multiplicative/additive linear logic extended with least and greatest fixed points. The resulting subtyping relation admits a simple characterization that can be roughly spelled out as the following lapalissade: every session type is larger than the smallest session type and smaller than the largest session type. At the same time, we observe that this subtyping, unlike traditional ones, preserves termination in addition to the usual safety properties of sessions. We present a calculus of sessions that adopts this subtyping relation and we show that subtyping, while useful in practice, is superfluous in the theory: every use of subtyping can be "compiled away" via a coercion semantics.
Ross Horne, Luca Padovani
2023-04-13T11:02:52Z
http://arxiv.org/abs/2304.06398v1
# A Logical Account of Subtyping for Session Types ###### Abstract We study the notion of subtyping for session types in a logical setting, where session types are propositions of multiplicative/additive linear logic extended with least and greatest fixed points. The resulting subtyping relation admits a simple characterization that can be roughly spelled out as the following lapalissade: every session type is larger than the smallest session type and smaller than the largest session type. At the same time, we observe that this subtyping, unlike traditional ones, preserves termination in addition to the usual safety properties of sessions. We present a calculus of sessions that adopts this subtyping relation and we show that subtyping, while useful in practice, is superfluous in the theory: every use of subtyping can be "compiled away" via a coercion semantics. ## 1 Introduction Session types [13, 14, 16] are descriptions of communication protocols supported by an elegant correspondence with linear logic [24, 4, 17] that provides session type systems with solid logical foundations. As an example, below is the definition of a session type describing the protocol implemented by a mathematical server (in the examples of this section, \(\&\) and \(\oplus\) are \(n\)-ary operators denoting external and internal labeled choices, respectively): \[B=\&\{\mathtt{end}:\bot,\mathtt{add}:\mathsf{Num}^{\bot}\,⅋\,\mathsf{Num}^{\bot}\,⅋\,\mathsf{Num}\otimes B\}\] According to the session type \(B\), the server first waits for a label - either \(\mathtt{end}\) or \(\mathtt{add}\) - that identifies the operation requested by the client. If the label is \(\mathtt{end}\), the client has no more requests and the server terminates. If the label is \(\mathtt{add}\), the server waits for two numbers, sends their sum back to the client and then makes itself available again offering the same protocol \(B\). In this example, we write \(\mathsf{Num}^{\bot}\) for the type of numbers being consumed and \(\mathsf{Num}\) for the type of numbers being produced. A client of this server could implement a communication protocol described by the following session type: \[A=\oplus\{\mathtt{add}:\mathsf{Num}\otimes\mathsf{Num}\otimes\mathsf{Num}^{\bot}\,⅋\,\oplus\{\mathtt{end}:\mathbf{1}\}\}\] This client sends the label \(\mathtt{add}\) followed by two numbers, it receives the result and then terminates the interaction with the server by sending the label \(\mathtt{end}\). When we connect two processes through a session, we expect their interaction to be flawless. In many session type systems, this is guaranteed by making sure that the session type describing the behavior of one process is the _dual_ of the session type describing the behavior of its peer. _Duality_, often denoted by \(\cdot^{\bot}\), is the operator on session types that inverts the _direction_ of messages without otherwise altering the structure of the protocol. In the above example it is clear that \(A\) is _not_ the dual of \(B\) nor is \(B\) the dual of \(A\). Nonetheless, we would like such a client and such a server to be declared compatible, since the client is exercising only a subset of the capabilities of the server. To express this compatibility we have to resort to a more complex relation between \(A\) and \(B\), either by observing that \(B\) (the behavior of the server) is a _more accommodating_ version of \(A^{\bot}\) or by observing that \(A\) (the behavior of the client) is a _less demanding_ version of \(B^{\bot}\).
We make these relations precise by means of a _subtyping relation_ \(\leqslant\) for session types. Subtyping enhances the applicability of type systems by means of the well-known substitution principle: an entity of type \(C\) can be used where an entity of type \(D\) is expected if \(C\) is a subtype of \(D\). After the initial work of Gay and Hole [10], many subtyping relations for session types have been studied [5, 21, 18, 22, 11]. Such subtyping relations differ widely in the way they are defined and/or in the properties they preserve, but they all share the fact that subtyping is essentially defined by the branching structure of session types given by labels. To illustrate this aspect, let us consider again the session types \(A\) and \(B\) defined above. We have \[B\leqslant\&\{\mathtt{add}:\mathsf{Num}^{\perp}\,⅋\,\mathsf{Num}^{\perp}\,⅋\,\mathsf{Num}\otimes\&\{\mathtt{end}:\perp\}\}=A^{\perp} \tag{1}\] meaning that a server behaving as \(B\) can be safely used where a server behaving as \(A^{\perp}\) is expected. Dually, we also have \[A\leqslant\oplus\{\mathtt{end}:\mathbf{1},\mathtt{add}:\mathsf{Num}\otimes\mathsf{Num}\otimes\mathsf{Num}^{\perp}\,⅋\,B^{\perp}\}=B^{\perp} \tag{2}\] meaning that a client behaving as \(A\) can be safely used where a client behaving as \(B^{\perp}\) is expected. Note how subtyping is crucially determined by the sets of labels that can be received/sent when comparing two related types. In (1), the server of type \(B\) is willing to accept any label from the set \(\{\mathtt{end},\mathtt{add}\}\), which is a _superset_ of \(\{\mathtt{add}\}\) that we have in \(A^{\perp}\). In (2), the client is (initially) sending a label from the set \(\{\mathtt{add}\}\), which is a subset of \(\{\mathtt{end},\mathtt{add}\}\) that we have in \(B^{\perp}\). This co/contra variance of labels in session types is a key distinguishing feature of all known notions of subtyping for session types.1 Footnote 1: Gay and Hole [10] and other authors [5, 21, 22] define subtyping for session types in such a way that the _opposite_ relations of eqs. (1) and (2) hold. Both viewpoints are viable depending on whether session types are considered to be types of _channels_ or types of _processes_. Here we take the latter stance, referring to Gay [9] for a comparison of the two approaches. In this work we study the notion of subtyping for session types in a setting where session types are propositions of \(\mu\mathsf{MALL}^{\infty}\) [3, 7], the infinitary proof theory of multiplicative additive linear logic extended with least and greatest fixed points. Our investigation has two objectives. First, to understand whether and how it is possible to capture the well-known co/contra variance of behaviors when the connectives used to describe branching session types (\(\&\) and \(\oplus\) of linear logic) have fixed arity. Second, to understand whether there are critical aspects of subtyping that become relevant when typing derivations are meant to be logically sound. At the core of our proposal is the observation that, when session types (hence process behaviors) are represented by linear logic propositions [24, 4, 17], it is impossible to write a process that behaves as \(\mathbf{0}\) and it is very easy to write a process that behaves as \(\top\). If we think of a session type as the set of processes that behave according to that type, this means that the additive constants \(\mathbf{0}\) and \(\top\) may serve well as the least and greatest elements of a session subtyping relation.
Somewhat surprisingly, the subtyping relation defined by these properties of \(\mathbf{0}\) and \(\top\) allows us to express essentially the same subtyping relations that arise from the usual co/contra variance of labels. For example, following our proposal the session type of the client, previously denoted \(A\), would instead be written as \[C=\oplus\{\mathtt{end}:\mathbf{0},\mathtt{add}:\mathsf{Num}\otimes\mathsf{Num}\otimes\mathsf{Num}^{\perp}\,⅋\,\oplus\{\mathtt{end}:\mathbf{1},\mathtt{add}:\mathbf{0}\}\}\] using which we can derive both \[B\leqslant\&\{\mathtt{end}:\top,\mathtt{add}:\mathsf{Num}^{\perp}\,⅋\,\mathsf{Num}^{\perp}\,⅋\,\mathsf{Num}\otimes\&\{\mathtt{end}:\perp,\mathtt{add}:\top\}\}=C^{\perp}\qquad\text{as well as}\qquad C\leqslant B^{\perp}\] without comparing labels and just using the fact that \(\mathbf{0}\) is the least session type and \(\top\) the greatest one. Basically, instead of _omitting those labels_ that correspond to impossible continuations (_cf._ the missing end and add in \(A\)), we use the uninhabited session type \(\mathbf{0}\) or its dual \(\top\) _as impossible continuations_ (_cf._ \(C\)). It could be argued that the difference between the two approaches is mostly cosmetic. Indeed, it is easy to devise (de)sugaring functions to rewrite session types from one syntax to the other. However, the novel approach we propose allows us to recast the well-known subtyping relation for session types in a logical setting. A first consequence of this achievement is that the soundness of the type system _with subtyping_ does not require an _ad hoc_ proof, but follows from the soundness of the type system _without subtyping_ through a suitable coercion semantics. In addition, we find out that the subtyping relation we propose preserves not only the usual _safety properties_ - communication safety, protocol fidelity and deadlock freedom - but also _termination_, which is a _liveness property_. Structure of the paper. In Section 2 we define \(\mu\mathsf{CP}^{\infty}\), a session calculus of processes closely related to \(\mu\mathsf{CP}\) [17] and \(\mathsf{CP}\) [24]. In Section 3 we define the type language for \(\mu\mathsf{CP}^{\infty}\) and the subtyping relation. In Section 4 we define the typing rules for \(\mu\mathsf{CP}^{\infty}\) and give a coercion semantics to subtyping, thus showing that the type system of \(\mu\mathsf{CP}^{\infty}\) is a conservative extension of \(\mu\mathsf{MALL}^{\infty}\) [3, 7]. We wrap up in Section 5. ## 2 Syntax and semantics of \(\mu\mathsf{CP}^{\infty}\) The syntax of \(\mu\mathsf{CP}^{\infty}\) is shown in Table 1 and makes use of a set of _process names_ \(\mathsf{A}\), \(\mathsf{B}\), \(\ldots\) and of an infinite set of _channels_ \(x\), \(y\), \(z\) and so on. The calculus includes standard forms representing communication actions: \(\mathsf{fail}\,x\) models a process failing on \(x\); \(x().P\) and \(x[]\) model the input/output of a termination signal on \(x\); \(\mathsf{case}\,x\{P,Q\}\) and \(x[\mathsf{in}_{i}].P\) model the input/output of a label \(\mathsf{in}_{i}\) on \(x\); \(x(y).P\) and \(x[y](P\,|\,Q)\) model the input/output of a channel \(y\) on \(x\). Note that \(x[y](P\,|\,Q)\) outputs a _new_ channel \(y\) which is bound in \(P\) but not in \(Q\). Free channel output can be encoded as shown in previous works [17].
The form \((x)(P\,|\,Q)\) models a session \(x\) connecting two parallel processes \(P\) and \(Q\) and the form \(\mathsf{A}\langle\overline{x}\rangle\) models the invocation of the process named \(\mathsf{A}\) with parameters \(\overline{x}\). For each process name \(\mathsf{A}\) we assume that there is a unique global definition of the form \(\mathsf{A}(\overline{x})\triangleq P\) that gives its meaning. Hereafter \(\overline{x}\) denotes a possibly empty sequence of channels. The notions of free and bound channels are defined in the expected way. We identify processes up to renaming of bound channels and we write \(\mathsf{fn}(P)\) for the set of free channels of \(P\). The operational semantics of \(\mu\mathsf{CP}^{\infty}\) is shown in Table 2 and consists of a structural pre-congruence relation \(\preccurlyeq\) and a reduction relation \(\to\), both of which are fairly standard. We write \(P\to\) if \(P\to Q\) for some \(Q\) and we say that \(P\) is _stuck_, notation \(P\to\), if not \(P\to\). **Example 2.1**.: We can model client and server described in Section 1 as the processes below. \[\mathsf{Client}(x)\triangleq x[\mathsf{in}_{1}].x[\mathsf{in}_{0}].x[]\qquad \mathsf{Server}(x,z)\triangleq\mathsf{case}\,x\{x().z[],\mathsf{Server}\, \langle x,z\rangle\}\] For simplicity, we only focus on the overall structure of the processes rather than on the actual mathematical operations they perform, so we omit any exchange of concrete data from this model. \begin{table} \begin{tabular}{c c c c c} **Process** & \(P,Q::=\) & & \(|\;(x)(P\,|\,Q)\) & composition \\ & \(\mathsf{A}\langle\overline{x}\rangle\) & invocation & \(|\;\mathsf{fail}\,x\) & failure \\ \(|\;\;\;x().P\) & signal input & \(|\,x[]\) & signal output \\ \(|\;\;\;x(y).P\) & channel input & \(|\,x[y](P\,|\,Q)\) & channel output \\ \(|\;\;\;\mathsf{case}\,x\{P,Q\}\) & choice input & \(|\,x[\mathsf{in}_{i}].P\) & choice output & \(i\in\{0,1\}\) \\ \end{tabular} \end{table} Table 1: Syntax of \(\mu\mathsf{CP}^{\infty}\). We conclude this section with the definitions of the properties ensured by our type system, namely _deadlock freedom_ and _termination_. The latter notion is particularly relevant in our setting since termination preservation is a novel aspect of the subtyping relation that we are about to define. **Definition 2.1** (deadlock-free process).: We say that \(P\) is _deadlock free_ if \(P\Rightarrow Q\nrightarrow\) implies that \(Q\) is not (structurally pre-congruent to) a process of the form \((x)(R_{1}\,|\,R_{2})\). A deadlock-free process either reduces or it is stuck waiting to synchronize on some free channel. **Definition 2.2** (terminating process).: A _run_ of a process \(P\) is a (finite or infinite) sequence \((P_{0},P_{1},\dots)\) of processes such that \(P_{0}=P\) and \(P_{i}\to P_{i+1}\) whenever \(i+1\) is a valid index of the sequence. We say that a run is maximal if either it is infinite or if the last process in it is stuck. We say that \(P\) is _terminating_ if every maximal run of \(P\) is finite. Note that a terminating process is not necessarily free of restrictions. For example, \((x)(\operatorname{\mathsf{fail}}x\,|\,x[])\) is terminated but not deadlock free. It really is the conjunction of deadlock freedom and termination (as defined above) that ensure that a process is "well behaved". 
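To see the reduction semantics at work on the processes of Example 2.1, here is one possible run of the composition of Client and Server, a worked illustration based on the rules of Table 2 (two label communications followed by the exchange of the termination signal on \(x\)):

\[(x)(\mathsf{Client}\langle x\rangle\,|\,\mathsf{Server}\langle x,z\rangle)\preccurlyeq(x)(x[\mathsf{in}_{1}].x[\mathsf{in}_{0}].x[]\,|\,\mathsf{case}\,x\{x().z[],\mathsf{Server}\langle x,z\rangle\})\to(x)(x[\mathsf{in}_{0}].x[]\,|\,\mathsf{Server}\langle x,z\rangle)\]
\[\preccurlyeq(x)(x[\mathsf{in}_{0}].x[]\,|\,\mathsf{case}\,x\{x().z[],\mathsf{Server}\langle x,z\rangle\})\to(x)(x[]\,|\,x().z[])\to z[]\]

Every maximal run of this composition is finite and ends with \(z[]\), which is stuck on the free channel \(z\) but is not a restricted composition, so the process is both terminating and deadlock free in the sense of Definitions 2.1 and 2.2.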
## 3 Types and subtyping The type language for \(\mu\mathsf{CP}^{\infty}\) consists of the propositions of \(\mu\mathsf{MALL}^{\infty}\) [2, 6, 1], the infinitary proof theory of multiplicative/additive linear logic extended with least and greatest fixed points. We start from the definition of _pre-types_, which are linear logic propositions built using type variables taken from an infinite set and ranged over by \(X\) and \(Y\). \[\text{\bf Pre-type}\qquad A,B::=X\mid\bot\mid\mathbf{1}\mid\top\mid\mathbf{0}\mid A\otimes B\mid A\,⅋\,B\mid A\,\&\,B\mid A\oplus B\mid\nu X.A\mid\mu X.A\] The usual notions of free and bound type variables apply. A _type_ is a closed pre-type. We assume that type variables occurring in types are _guarded_. That is, we forbid types of the form \(\sigma_{1}X_{1}\dots\sigma_{n}X_{n}.X_{i}\) where \(\sigma_{1},\dots,\sigma_{n}\in\{\mu,\nu\}\). We write \(A^{\bot}\) for the _dual_ of \(A\), which is defined in the expected way with the proviso that \(X^{\bot}=X\). This way of dualizing type variables is not problematic since we will always apply \(\cdot^{\bot}\) to types, which contain no free type variables. As usual, we write \(A\{B/X\}\) for the (pre-)type obtained by replacing every \(X\) occurring free in the pre-type \(A\) with the type \(B\). Hereafter we let \(\kappa\) range over the constants \(\mathbf{0}\), \(\mathbf{1}\), \(\bot\) and \(\top\), we let \(\star\) range over the connectives \(\&\), \(\oplus\), \(⅋\) and \(\otimes\) and \(\sigma\) range over the binders \(\mu\) and \(\nu\). Also, we say that any type of the form \(\sigma X.A\) is a \(\sigma\)-type. \begin{table} \begin{tabular}{l r c l l} \([\)s-par-comm\(]\) & \((x)(P\,|\,Q)\) & \(\preccurlyeq\) & \((x)(Q\,|\,P)\) & \\ \([\)s-par-assoc\(]\) & \((x)(P\,|\,(y)(Q\,|\,R))\) & \(\preccurlyeq\) & \((y)((x)(P\,|\,Q)\,|\,R)\) & if \(x\not\in\mathsf{fn}(R)\) and \(y\not\in\mathsf{fn}(P)\) \\ \([\)s-call\(]\) & \(\mathsf{A}\langle\overline{x}\rangle\) & \(\preccurlyeq\) & \(P\{\overline{x}/\overline{y}\}\) & if \(\mathsf{A}(\overline{y})\triangleq P\) \\ \([\)r-close\(]\) & \((x)(x[]\,|\,x().P)\) & \(\to\) & \(P\) & \\ \([\)r-comm\(]\) & \((x)(x[y](P\,|\,Q)\,|\,x(y).R)\) & \(\to\) & \((y)(P\,|\,(x)(Q\,|\,R))\) & \\ \([\)r-case\(]\) & \((x)(x[\mathsf{in}_{i}].P\,|\,\mathsf{case}\,x\{Q_{0},Q_{1}\})\) & \(\to\) & \((x)(P\,|\,Q_{i})\) & \\ \([\)r-par\(]\) & \((x)(P\,|\,R)\) & \(\to\) & \((x)(Q\,|\,R)\) & if \(P\to Q\) \\ \([\)r-struct\(]\) & \(P\) & \(\to\) & \(Q\) & if \(P\preccurlyeq P^{\prime}\to Q^{\prime}\preccurlyeq Q\) \\ \end{tabular} \end{table} Table 2: Structural pre-congruence and reduction semantics of \(\mu\mathsf{CP}^{\infty}\). We write \(\preceq\) for the standard _sub-formula_ relation on types. To be precise, the relation \(\preceq\) is the least preorder on types such that \(A\preceq\sigma X.A\) and \(A_{i}\preceq A_{1}\star A_{2}\). For example, consider \(A\stackrel{{\text{\tiny def}}}{{=}}\mu X.\nu Y.(\mathbf{1}\oplus X)\) and its unfolding \(A^{\prime}\stackrel{{\text{\tiny def}}}{{=}}\nu Y.(\mathbf{1}\oplus A)\). We have \(A\preceq\mathbf{1}\oplus A\preceq A^{\prime}\), hence \(A\) is a sub-formula of \(A^{\prime}\). Given a set \(\mathcal{T}\) of types we write \(\min\mathcal{T}\) for the \(\preceq\)-minimum type in \(\mathcal{T}\) when it is defined. Table 3 shows the inference rules for subtyping judgments. The rules are meant to be interpreted coinductively so that a judgment \(A\leqslant B\) is derivable if it is the conclusion of a finite/infinite derivation.
The rules [bot] and [top] establish that \(\mathbf{0}\) and \(\top\) are respectively the least and the greatest session type; the rules [refl] and [cong] establish reflexivity and pre-congruence of \(\leqslant\) with respect to all the constants and connectives; the rules [left-\(\sigma\)] and [right-\(\sigma\)] allow fixed points to be unfolded on either side of \(\leqslant\). **Example 3.1**.: Consider the types \(A\stackrel{{\text{\tiny def}}}{{=}}\mathbf{0}\oplus(\mathbf{1} \oplus\mathbf{0})\) and \(B\stackrel{{\text{\tiny def}}}{{=}}\nu X.(\bot\,\&\,X)\) which, as we will see later, describe the behavior of Client and Server in Example 2.1. We can derive both \(A\leqslant B^{\bot}\) and \(B\leqslant A^{\bot}\) thus: \[\begin{array}{c}\infer{\mathbf{1}\leqslant\mathbf{1}\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ subtype of \(\nu X.(\mathbf{1}\oplus X)\) but \(\nu X.(\mathbf{1}\oplus X)\) is not a subtype of \(\mu X.(\mathbf{1}\oplus X)\). As we will see in Section 4, the application of a subtyping relation \(A\leqslant B\) can be explicitly modeled as a process _consuming_ a channel of type \(A\) while _producing_ a channel of type \(B\). According to this interpretation of subtyping, we can see that clause (1) of Definition 3.1 is just a dualized version of clause (2). In both clauses of Definition 3.1 there is a requirement that the type of the fixed point on each side of the relation is determined by the \(\preceq\)-minimum of the types that appear infinitely often on either side. This is needed to handle correctly alternating fixed points, by determining which one is actively contributing to the infinite path. To see what effect this has consider the types \(A\stackrel{{\text{\tiny def}}}{{=}}\mu X.\nu Y.(\mathbf{1}\oplus X)\), \(A^{\prime}\stackrel{{\text{\tiny def}}}{{=}}\nu Y.(\mathbf{1} \oplus A)\), \(B\stackrel{{\text{\tiny def}}}{{=}}\mu X.\mu Y.(\mathbf{1}\oplus X)\) and \(B^{\prime}\stackrel{{\text{\tiny def}}}{{=}}\mu Y.(\mathbf{1} \oplus B)\). Observe that \(A\) unfolds to \(A^{\prime}\), \(A^{\prime}\) unfolds to \(\mathbf{1}\oplus A\), \(B\) unfolds to \(B^{\prime}\) and \(B^{\prime}\) unfolds to \(\mathbf{1}\oplus B\). We have \(A\leqslant B\) despite \(Y\) is bound by a _greatest_ fixed point on the left and by a _least_ fixed point on the right. Indeed, both \(A\) and \(A^{\prime}\) occur infinitely often in the (only) infinite branch of the derivation for \(A\leqslant B\), but \(A\preceq A^{\prime}\) according to the intuition that the \(\preceq\)-minimum type that occurs infinitely often is the one corresponding to the outermost fixed point. In this case, the outermost fixed point is \(\mu X\) which "overrides" the contribution of the inner fixed point \(\nu Y\). The interested reader may refer to the literature on \(\mu\mathsf{MALL}^{\infty}\)[2, 6] for details. Hereafter, unless otherwise specified, we write \(A\leqslant B\) to imply that \(A\) is a subtype of \(B\) and not simply that the judgment \(A\leqslant B\) is derivable. 
It is possible to show that \(\leqslant\) is a preorder and that \(A\leqslant B\) implies \(B^{\perp}\leqslant A^{\perp}\). Indeed, as illustrated in Example 3.1, we obtain a derivation of \(B^{\perp}\leqslant A^{\perp}\) from that of \(A\leqslant B\) by dualizing every judgment and by turning every application of [left-\(\sigma\)] (respectively [right-\(\sigma\)], [bot], [top]) into an application of [right-\(\sigma^{\perp}\)] (respectively [left-\(\sigma^{\perp}\)], [top], [bot]). ## 4 Typing rules In this section we describe the typing rules for \(\mu\mathsf{CP}^{\infty}\). Typing judgments have the form \(P\vdash\Gamma\) where \(P\) is a process and \(\Gamma\) is a typing context, namely a finite map from channels to types. We can read this judgment as the fact that \(P\) behaves as described by the types in the range of \(\Gamma\) with respect to the channels in the domain of \(\Gamma\). We write \(\mathsf{dom}(\Gamma)\) for the domain of \(\Gamma\), we write \(x:A\) for the typing context with domain \(\{x\}\) that maps \(x\) to \(A\), we write \(\Gamma,\Delta\) for the union of \(\Gamma\) and \(\Delta\) when \(\mathsf{dom}(\Gamma)\cap\mathsf{dom}(\Delta)=\emptyset\). The typing rules of \(\mu\mathsf{CP}^{\infty}\) are shown in Table 4 and, with the exception of [call] and [sub], they correspond to the proof rules of \(\mu\mathsf{MALL}^{\infty}\)[3, 7] in which the context is the sequent being proved and the process is (almost) a syntactic representation of the proof. The rules for the multiplicative/additive constants and for the connectives are standard. The rule \([\sigma]\) where \(\sigma\in\{\mu,\nu\}\) simply unfolds fixed points regardless of their nature. The rule [call] unfolds a process invocation into its definition, checking that the invocation and the definition are well typed in the same context. Finally, [sub] checks that the composition \((x)(P\,|\,Q)\) is well typed provided that \(A\) (the behavior of \(P\) with respect to \(x\)) is a subtype of \(B^{\perp}\) (where \(B\) is the behavior of \(Q\) with respect to \(x\)). In this sense [sub] embeds the substitution principle induced by \(\leqslant\) since it allows a process behaving as \(A\) to be used where a process behaving as \(B^{\perp}\) is expected. Note that the standard cut rule of \(\mu\mathsf{MALL}^{\infty}\) is a special case of [sub] because of the reflexivity of \(\leqslant\). Like in \(\mu\mathsf{MALL}^{\infty}\), the rules are meant to be interpreted coinductively so that a judgment \(P\vdash\Gamma\) is deemed derivable if there is an arbitrary (finite or infinite) derivation whose conclusion is \(P\vdash\Gamma\). **Example 4.1**.: Let us show the typing derivations for the processes discussed in Example 2.1. To this aim, let \(A\stackrel{{\mathsf{def}}}{{=}}\mathbf{0}\oplus(\mathbf{1}\oplus \mathbf{0})\) and \(B\stackrel{{\mathsf{def}}}{{=}}\nu X.(\bot\,\&X)\) and recall from Example 3.1 that \(A\leqslant B^{\perp}\). 
We derive the typing judgments of Client and Server; in the derivation of the latter, the \([\nu]\) rule is applied infinitely often to unfold the type of \(x\).

**Definition 4.3** (valid branch).: Let \(\gamma=(P_{i}\vdash\Gamma_{i})_{i\in\mathbb{N}}\) be an infinite branch of a typing derivation. We say that \(\gamma\) is _valid_ if there is a \(\nu\)-thread \((x_{i})_{i\geq k}\) of \(\gamma\) such that \([\nu]\) is applied to infinitely many of the \(x_{i}\).

Definition 4.3 establishes that a branch is valid if it contains a \(\nu\)-thread in which the \(\nu\)-type occurring infinitely often is also unfolded infinitely often. This happens in Example 4.1, in which the \([\nu]\) rule is applied infinitely often to unfold the type of \(x\). The reader familiar with the \(\mu\mathsf{MALL}^{\infty}\) literature may have spotted a subtle difference between our notion of valid branch and the standard one [2, 6]. In \(\mu\mathsf{MALL}^{\infty}\), a branch is valid only provided that the \(\nu\)-thread in it is not "eventually constant", namely if the greatest fixed point that defines the \(\nu\)-thread is unfolded infinitely many times. This condition is satisfied by our notion of valid branch because of the requirement that there must be infinitely many applications of \([\nu]\) concerning the names in the \(\nu\)-thread. Now we can define the notion of valid typing derivation.

**Definition 4.4** (valid derivation).: A typing derivation is _valid_ if every infinite branch in it is valid.
Following Pierce [22] we provide a _coercion semantics_ to our subtyping relation by means of two translation functions, one on derivations of subtyping relations \(A\leqslant B\) and one on typing derivations \(P\vdash\Gamma\) that make use of subtyping. The first translation is (partially) given in Table 5. The translation takes a derivation \(\pi\) of a subtyping relation \(A\leqslant B\) - which we denote by \(\pi::A\leqslant B\) - and generates a process \(\llbracket\pi\rrbracket_{x,y}\) that transforms (the protocol described by) \(A\) into (the protocol described by) \(B\). The translation is parametrized by the two channels \(x\) and \(y\) on which the transformation takes place: the protocol \(A\) is "consumed" from \(x\) and reissued on \(y\) as a protocol \(B\). In Table 5 we show a fairly complete selection of cases, the remaining ones being obvious variations. It is easy to establish that \(\llbracket\pi\rrbracket_{x,y}\vdash x:A^{\perp},y:B\) if \(A\leqslant B\). In particular, consider an infinite branch \(\gamma\stackrel{{\mathrm{def}}}{{=}}(\llbracket\pi_{i}\rrbracket_{x_{i},y_{i}}\vdash x_{i}:A_{i}^{\perp},y_{i}:B_{i})_{i\in\mathbb{N}}\) in the typing derivation of the coercion where \(A_{0}=A\) and \(B_{0}=B\). This branch corresponds to an infinite branch \((A_{i}\leqslant B_{i})_{i\in\mathbb{N}}\) in \(\pi::A\leqslant B\). According to Definition 3.1, either clause (1) or clause (2) holds for this branch. Suppose, without loss of generality, that clause (1) holds. Then \(\min\{C\mid\exists^{\infty}i\in\mathbb{N}:A_{i}=C\}\) is a \(\mu\)-type. According to Table 5 we have that \((x_{i})_{i\in\mathbb{N}}\) is a \(\nu\)-thread of \(\gamma\), hence \(\gamma\) is a valid branch. Note that in general \(\llbracket\pi\rrbracket_{x,y}\) is (the invocation of) a recursive process.

Table 5: Coercion semantics of subtyping derivations (selected cases); for instance, \(\llbracket\mathbf{0}\leqslant A\rrbracket_{x,y}\triangleq\mathsf{fail}\ x\) and \(\llbracket\mathbf{1}\leqslant\mathbf{1}\rrbracket_{x,y}\triangleq x().y[]\).

Concerning the translation of typing derivations, it is defined by the equation \[\left[\!\!\left[\,\frac{\pi_{1}\qquad\pi_{2}\qquad\pi::A\leqslant B}{(x)(P\,|\,Q)\vdash\Gamma}\,\right]\!\!\right]=\frac{\dfrac{\llbracket\pi_{1}\{y/x\}\rrbracket\qquad\llbracket\pi\rrbracket_{y,x}\vdash y:A^{\perp},x:B}{(y)(P\{y/x\}\,|\,\llbracket\pi\rrbracket_{y,x})\vdash\Gamma_{1},x:B}\qquad\llbracket\pi_{2}\rrbracket}{(x)((y)(P\{y/x\}\,|\,\llbracket\pi\rrbracket_{y,x})\,|\,Q)\vdash\Gamma}\] where \(\pi_{1}\) and \(\pi_{2}\) are the subderivations typing \(P\) and \(Q\), \(\pi::A\leqslant B\) is the subtyping derivation used by [sub], \(\Gamma_{1}\) is the portion of \(\Gamma\) pertaining to \(P\), and \(y\) is a fresh channel.
## 5 Concluding remarks

We have defined a subtyping relation for session types as the precongruence that is insensitive to the (un)folding of recursive types and such that **0** and \(\top\) act as least and greatest elements. Despite the minimalistic look of the relation and the apparent rigidity in the syntax of types, in which the arity of internal and external choices is fixed, \(\leqslant\) captures the usual co/contravariance of labels thanks to the interpretation given to **0** and \(\top\). Other refinement relations for session types with least and greatest elements have been studied in the past [20, 22], although without an explicit correspondence with logic. Unlike subtyping relations for session types [10, 5, 18, 11] that only preserve _safety properties_ of sessions (communication safety, protocol fidelity and deadlock freedom), \(\leqslant\) also preserves termination, which is a _liveness property_. For this reason, \(\leqslant\) is somewhat related to _fair subtyping_[21, 22], which preserves _fair termination_[12, 8]. It appears that \(\leqslant\) is coarser than fair subtyping, although the exact relationship between the two relations is difficult to characterize because of the fundamentally different ways in which recursive behaviors are represented in the syntax of types. The subtyping relation defined in this paper inherits least and greatest fixed points from \(\mu\mathsf{MALL}^{\infty}\)[3, 7], whereas fair subtyping has been studied on session type languages that either make use of general recursion [21] or that use regular trees directly [22]. A more conclusive comparison is left for future work. A key difference between the treatment of fixed points in this work and a related logical approach to session subtyping [15] is that, while both guarantee deadlock freedom, the current approach also guarantees termination. Insight concerning the design of fixed points should be exportable to other session calculi independently of any logical interpretation. In particular, it would be interesting to study subtyping for _asynchronous session types_[18, 11] in light of Definition 3.1. This can be done by adopting a suitable coercion semantics to enable buffering of messages as in simple orchestrators [19].

Acknowledgments. We are grateful to the anonymous reviewers for their thoughtful comments.
2303.02159
Robust Parameter Estimation for Rational Ordinary Differential Equations
We present a new approach for estimating parameters in rational ODE models from given (measured) time series data. In typical existing approaches, an initial guess for the parameter values is made from a given search interval. Then, in a loop, the corresponding outputs are computed by solving the ODE numerically, followed by computing the error from the given time series data. If the error is small, the loop terminates and the parameter values are returned. Otherwise, heuristics/theories are used to possibly improve the guess and continue the loop. These approaches tend to be non-robust in the sense that their accuracy depends on the search interval and the true parameter values; furthermore, they cannot handle the case where the parameters are locally identifiable. In this paper, we propose a new approach, which does not suffer from the above non-robustness. In particular, it does not require making good initial guesses for the parameter values or specifying search intervals. Instead, it uses differential algebra, interpolation of the data using rational functions, and multivariate polynomial system solving. We also compare the performance of the resulting software with several other estimation software packages.
Oren Bassik, Yosef Berman, Soo Go, Hoon Hong, Ilia Ilmer, Alexey Ovchinnikov, Chris Rackauckas, Pedro Soto, Chee Yap
2023-03-02T14:33:06Z
http://arxiv.org/abs/2303.02159v3
# Symbolic-Numeric Parameter Estimation Software Package in Julia

###### Abstract

We present our Julia software package ParameterEstimation.jl, which estimates parameter values in parametric ODE models, using measured data. Our approach, unlike most other approaches, is not based on optimization. Instead, it is based on differential algebra, interpolation of the data using rational functions, and multivariable polynomial system solving. We compare the accuracy and time/memory performance of our software with other estimation software packages, SciMLSensitivity, AMIGO2, and IQM.

_Keywords:_ Parametric ODE Models, Parameter Estimation, Differential Algebra, Symbolic-Numeric Differentiation, Mathematical Software

## 1 Introduction

### Overall problem

Given a parametric ODE, we would like to determine the values of its parameters, which is usually done using measured data. Since the measured data can have some errors (noise), one can only estimate the parameter values. Hence, the overall problem is, given a parametric ODE and some measured data, to estimate the parameters.

### State of the art

Due to its importance, this problem has been the subject of intensive research efforts that yielded various theories and software implementations; to list a few: AMIGO2, COPASI, Data2Dynamics, SBtoolbox2/IQM; see [2, 4, 21, 31, 29, 28, 20, 19, 14, 12, 1, 26, 15, 30] and references to and comparisons with other algorithms there. See [11] for a survey of underlying theories and software packages and [1] for systemic comparisons. In this work, we test our program against [1, 25, 28]. Roughly speaking, the aforementioned approaches formulate the estimation problem as finding parameter values that best fit the measurements. The best fits are found by making initial guesses and iteratively updating the guesses by numerical ODE solving followed by constrained global optimization. These approaches, though efficient in practice, may not produce the desired estimates, mainly due to the heuristic nature of the initial guesses and the use of a local optimizer or a heuristic global optimizer (instead of a rigorous global optimizer) for efficiency. In summary, there has been significant progress in efficiency. However, the correctness of the estimates obtained this way is not guaranteed. Moreover, estimators that are based on optimization techniques may not reflect the possibility of multiple values for locally identifiable parameters.

### Our approach and Novelty

Our approach is based on combining:

* a differential algebra approach from the parameter identifiability analysis developed in [17, 18],
* estimation of the derivatives of the ODE solutions from rational interpolation of the data,
* overdetermined polynomial system solving by squaring the system and filtering the solution set.

We would like to emphasize the following novel features of our approach:

1. Our approach does not involve making initial guesses and global optimization; it instead relies on differential algebraic properties of the ODE model to propose solution candidates.
2. Out of the possible solution candidates, the procedure computes the best fit for the given data, including cases where such a fit may not be unique. That is, accounting for possible local identifiability of the model, we report all valid parameter values.

### Software

The software package is written in the Julia language. All code and data used for benchmarking are available at [https://github.com/iliailmer/ParameterEstimation.jl](https://github.com/iliailmer/ParameterEstimation.jl).
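As a minimal illustration of the second ingredient above, the following sketch (ours, not code from ParameterEstimation.jl; the linearized least-squares formulation, the normalization \(q(0)=1\), and all names are our choices) fits a rational function \(p(t)/q(t)\) to measured data and differentiates it to estimate output derivatives.

```
using LinearAlgebra

# Fit y ≈ p(t)/q(t) with deg(p) = dp, deg(q) = dq by linearized least squares:
# p(t_i) - y_i * q(t_i) ≈ 0 with the normalization q(0) = 1 (our choice).
function rational_fit(ts, ys, dp, dq)
    n = length(ts)
    A = zeros(n, dp + 1 + dq)     # unknowns: p_0..p_dp and q_1..q_dq
    b = copy(ys)
    for (i, t) in enumerate(ts)
        A[i, 1:dp+1]   .= [t^k for k in 0:dp]            # p part
        A[i, dp+2:end] .= [-ys[i] * t^k for k in 1:dq]   # q part, moved to the left
    end
    c = A \ b
    return c[1:dp+1], vcat(1.0, c[dp+2:end])             # coefficients of p and q
end

# Polynomial evaluation and differentiation (coefficients in ascending powers).
polyval(c, t) = isempty(c) ? 0.0 : sum(c[k+1] * t^k for k in 0:length(c)-1)
polyder(c) = [k * c[k+1] for k in 1:length(c)-1]

# First derivative of g = p/q by the quotient rule.
function rational_derivative(p, q, t)
    pd, qd = polyder(p), polyder(q)
    (polyval(pd, t) * polyval(q, t) - polyval(p, t) * polyval(qd, t)) / polyval(q, t)^2
end

# Toy data from Section 2 (output y = x^2 + x of the model x' = -mu*x).
ts = [0.000, 0.333, 0.666, 1.000]
ys = [2.000, 1.563, 1.229, 0.974]
p, q = rational_fit(ts, ys, 2, 1)
g0 = polyval(p, 0.0) / polyval(q, 0.0)     # ≈ y(0)
g1 = rational_derivative(p, q, 0.0)        # ≈ y'(0)
```

Higher-order derivatives can be obtained analogously (or, as in the package, via Taylor-series differentiation of the interpolant); the sketch only shows the first derivative to keep the quotient-rule step explicit.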
## 2 Problem

Here, we state the problem solved by our software and illustrate it using a toy example. We also show how to enter/solve the problem using our software.

**Input:**

1. An ODE **system** \(\Sigma\) \[\begin{cases}\mathbf{x}^{\prime}(t)=\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\mu})\\ \mathbf{y}(t)=\mathbf{g}(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\mu})\\ \mathbf{x}(0)=\mathbf{x_{0}}\end{cases} \tag{1}\] where
   * \(\mathbf{f}\) and \(\mathbf{g}\) are vectors of rational functions describing the model,
   * \(\mathbf{x}\) is a vector of state variables,
   * \(\mathbf{u}\) is a vector of input (control) variables,
   * \(\mathbf{y}\) is a vector of output variables,
   * \(\mathbf{\mu}\) and \(\mathbf{x_{0}}\) are vectors of unknown parameters.
2. The data sample \(D=((t_{1},y_{1}),\ldots,(t_{n},y_{n}))\) where \(y_{i}\) is the measured value of \(y\) at time \(t_{i}\).

**Output:** Estimated values for the parameters \(\boldsymbol{\mu}\) and \(\boldsymbol{x_{0}}\).

_Toy example:_

**Input:** \[\Sigma : \left\{\begin{array}{ccc}x^{\prime}&=&-\mu x\\ y&=&x^{2}+x\\ x\left(0\right)&=&x_{0}\\ \end{array}\right.\] \[D : \left(\left(0.000,2.000\right),\left(0.333,1.563\right),\right.\] \[\left.\left(0.666,1.229\right),\left(1.000,0.974\right)\right)\]

**Output:** \(\left(\mu,x_{0}\right)\ \approx\ \left(0.499,1.000\right)\)

The result above is good, because we have "simulated" the measured output values \(D\) by numerically solving \(\Sigma\) for \(\mu=0.500\) and \(x_{0}=1.000\).

To solve the toy example problem using our software, enter the following lines into the Julia console:

```
using ParameterEstimation
using ModelingToolkit

# Input:
# - Differential model
@parameters mu
@variables t x(t) y(t)
D = Differential(t)
@named Sigma = ODESystem([D(x) ~ -mu * x], t, [x], [mu])
outs = [y ~ x^2 + x]

# - Data
data = Dict(
    "t" => [0.000, 0.333, 0.666, 1.000],
    x^2 + x => [2.000, 1.563, 1.229, 0.974])

# Run
res = estimate(Sigma, outs, data);
```

You should see the following output:

```
# Output:
Parameter(s): mu = 0.499
Initial Condition(s): x(t) = 1.000
```

## 3 Approach

We will briefly describe the approach used in the system, which is also illustrated in Fig. 1.

1. Use the model \(\Sigma\) to find the number of parameter values (see [17, 18] for details, cf. [3]). _Toy example:_ There is only one parameter value.
2. Differentiate the model \(\Sigma\) sufficiently many times (see [17, 18] for details). _Toy example:_ \[\left\{\begin{array}{ccc}y&=&x^{2}+x\\ y^{\prime}&=&2xx^{\prime}+x^{\prime}\\ y^{\prime\prime}&=&2\left(x^{\prime}x^{\prime}+xx^{\prime\prime}\right)+x^{\prime\prime}\\ x^{\prime}&=&-\mu x\\ x^{\prime\prime}&=&-\mu x^{\prime}\end{array}\right.\]
3. Use the data \(D\) to derive a numerical overdetermined polynomial system from the above equations.
   1. Approximate the output functions \(y\) by interpolating the data \(D\) into rational functions \(g\). _Toy example:_ \[y \approx g\ =\ \frac{0.58t^{2}-3.11t+6.82}{t+3.41}\] (We display only two decimal places in the coefficients for brevity.)
   2. Approximate the above system of equations by replacing \(y\) with \(g\). _Toy example:_ \[\left\{\begin{array}{rcl}\frac{0.58t^{2}-3.11t+6.82}{t+3.41}&\approx&x^{2}+x\\ \frac{0.58t^{2}+3.96t-17.41}{(t+3.41)^{2}}&\approx&2xx^{\prime}+x^{\prime}\\ \frac{48.32}{(t+3.41)^{3}}&\approx&2\left(x^{\prime}x^{\prime}+xx^{\prime\prime}\right)+x^{\prime\prime}\\ x^{\prime}&=&-\mu x\\ x^{\prime\prime}&=&-\mu x^{\prime}\end{array}\right.\]
   3. Evaluate the above system at \(t=0\).
_Toy example:_ \[\left\{\begin{array}{rcl}2.00&\approx&x_{0}^{2}+x_{0},\\ -1.50&\approx&2x_{1}x_{0}+x_{1},\\ 1.22&\approx&2\left(x_{1}x_{1}+x_{0}x_{2}\right)+x_{2}\\ x_{1}&=&-\mu x_{0}\\ x_{2}&=&-\mu x_{1}\end{array}\right.\] where a shorthand notation \(x_{i}=x^{(i)}\left(0\right)\) is used. Note that the system consists of \(5\) equations in \(4\) unknowns: \(\mu,x_{0},x_{1},x_{2}\).

Figure 1: The flowchart illustrating steps of the method implemented in our software package.

4. Find all the solutions of the over-determined system of polynomial equations. For this, first we make the system square. Then we find all the real solutions of the square system (for instance, using the homotopy continuation method, see [8]). _Toy example:_ \[\begin{cases}(\mu,x_{0})&\approx\ (0.499,\quad 1.000)\\ (\mu,x_{0})&\approx\ (0.249,-2.000)\end{cases}\]
5. Compute errors of the solutions. For each solution \((\widehat{\mu},\widehat{x}_{0})\):
   1. Find \(\widehat{y}\left(t_{i}\right)\) by solving \(\Sigma\left(\widehat{\mu},\widehat{x}_{0}\right)\) using a numerical ODE solver.
   2. Compute the error \(e\) between the computed values of \(\widehat{y}\) and the measured \(y\). _Toy example:_ \[\begin{cases}(\mu,x_{0})&\approx\ (0.499,\ \ 1.0000)\Rightarrow e=6.87\cdot 10^{-4}\\ (\mu,x_{0})&\approx\ (0.249,-2.0000)\Rightarrow e=2.22\cdot 10^{-2}\end{cases}\]
6. Select the \(k\) solutions with smallest error, where \(k\) is the number of parameter values from Step 1. _Toy example:_ \[(\mu,x_{0})\ \approx\ (0.499,1.0000)\]

## 4 Implementation details

### Steps 1) and 2)

The two steps are done by calling SIAN [17, 18].

### Step 3)

We perform rational interpolation of the dataset \(D\). In this interpolation, we have freedom in choosing the degree of the numerator, not to exceed the number of data points minus \(1\). Our experiments show that different numerator degrees can lead to different estimation quality. In our program, we iterate over multiple degrees and find the best one. Once interpolation is done, we apply the high-order automatic differentiation functionality of the TaylorSeries.jl [6] package to the rational function to estimate derivative values of higher order. We substitute the estimated values of the derivatives of the output \(y\) into the system obtained in Step 2). This typically results in an over-determined system of polynomial equations.

### Step 4)

Since the obtained polynomial system is typically overdetermined, the next step is to square the system carefully. This is achieved by collecting equations one by one, given that a newly added equation increases the rank of the system. If, for a given equation, the rank is unchanged, it is not added. Then we find all solutions of the squared system. By default, we are using homotopy-based solving from HomotopyContinuation.jl. We also support the use of MSolve [7] via the Oscar algebra system in Julia [23, 10]. This method finds all solutions with theoretical guarantees. However, it has been less stable for some of the systems we encountered. All results presented here use homotopy-based solving, the default option.

### Steps 5) and 6)

For all obtained solutions (parameter estimates), we numerically solve the original ODE and collect the corresponding data sample. This new sample is compared with the input sample data. We keep the estimates with minimal errors between the sample data and the new data. Such estimates may exist for every degree during our search for the rational interpolated function. We keep the ones with minimal error across all degrees.
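The filtering in Steps 5) and 6) can be sketched as follows for the toy model of Section 2 (our own illustration, not the package code; a hand-rolled fixed-step RK4 integrator stands in for the numerical ODE solver, and the candidate list is taken from the toy example).

```
# Toy model of Section 2: x' = -mu*x, y = x^2 + x.
f(x, mu) = -mu * x
output(x) = x^2 + x

# Classical RK4 integration of x' = f(x, mu) from t0 to t1 with a fixed number of steps.
function integrate(mu, x0, t0, t1; steps = 100)
    h, x = (t1 - t0) / steps, x0
    for _ in 1:steps
        k1 = f(x, mu)
        k2 = f(x + h/2 * k1, mu)
        k3 = f(x + h/2 * k2, mu)
        k4 = f(x + h * k3, mu)
        x += h/6 * (k1 + 2k2 + 2k3 + k4)
    end
    return x
end

# Squared error of a candidate (mu, x0) against the measured outputs.
function candidate_error(mu, x0, ts, ys)
    err = 0.0
    for (t, y) in zip(ts, ys)
        yhat = output(integrate(mu, x0, ts[1], t))
        err += (yhat - y)^2
    end
    return err
end

ts = [0.000, 0.333, 0.666, 1.000]
ys = [2.000, 1.563, 1.229, 0.974]
candidates = [(0.499, 1.000), (0.249, -2.000)]   # solutions of the squared system

errors = [candidate_error(mu, x0, ts, ys) for (mu, x0) in candidates]
k = 1                                            # number of parameter values (Step 1)
best = candidates[sortperm(errors)[1:k]]         # keep the k best-fitting candidates
```

Running the sketch keeps the candidate \((\mu,x_{0})\approx(0.499,1.000)\), mirroring the selection made in the toy example above.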
#### Extra features

We would like to emphasize the following extra features:

* Our iteration over interpolation degrees and the underlying polynomial solving via HomotopyContinuation.jl [8] are parallelizable and can take advantage of a multicore (or multithreaded) computing environment.
* ParameterEstimation.jl provides an identifiability interface to SIAN [17, 18]. One can assess identifiability of parameters and initial conditions before estimation.

#### Limitations

We list a few limitations of the current implementation:

* Our program supports rational ODE models (the right-hand sides are rational functions), while software such as AMIGO2 and IQM also support ODE models that are not necessarily rational.
* We use SIAN, which is based on a Monte-Carlo algorithm that returns the bound on the number of solutions with high probability, though a detailed analysis is still ongoing.

## 5 Benchmarking and Comparison With Other Estimation Software

In this section, we compare a collection of benchmark models listed in Section 6. We focus on the maximal absolute error and the computational resources (CPU runtime and memory consumption) of each program. We ran the parameter estimation task in 4 programs:

* ParameterEstimation.jl (this work)
* SciMLSensitivity.jl [24, 25]
* AMIGO2 [1]
* IQM [28].

The last three programs in the list above required certain additional information from the user that pertains specifically to the optimization task performed by each program, such as

* the maximal number of iterations for gradient-based optimization,
* specific loss/error functions to be optimized.

For SciMLSensitivity.jl, we used 1000 function evaluations with a learning rate of 0.01. In IQM, we set the maximal number of function evaluations at 2000 and we used a simplexIQM solver. In AMIGO2, we set the solver to the nonlinear least squares method and 20000 maximum function evaluations. All points in our samples were collected equidistantly from a given range. As a heuristic, we recommend using at least \(n+1\) points where \(n\) is the number of unknown initial conditions and parameters. The effect of the distance between time points is not discussed in this work and remains an open question. To evaluate the quality of the estimation and compare the result with the ground truth parameters, we use the _maximum absolute relative error_ defined as \[E(p_{true},p_{est})=\max_{p_{true}\in\mathcal{P}}\left(\left|\frac{p_{true}-p_{est}}{p_{true}}\right|\right),\] where \(\mathcal{P}\) is the set of all parameters in the input system. For all benchmarks, unless otherwise stated, we use a cluster with 192 Intel Xeon Platinum 2.40GHz CPUs, and Ubuntu 20.04.4 LTS (GNU/Linux 5.15.0). The runtimes in Table 2 are the CPU time in seconds. In addition to the times, we present the memory usage and the estimation error. ParameterEstimation.jl and SciMLSensitivity were run using the latest Julia 1.8 release on a single thread; AMIGO2 and IQM were run on Matlab R2022a (9.12.0.1884302) and internally may use multiple threads.

## Discussion

We make the following observations from the benchmarking results with respect to several factors.

* **Maximum absolute error:** For most of the scenarios considered here, our system was able to outperform the other programs in estimation error. Despite generally lower estimation errors, we do observe a few cases where optimization is able to outperform our methods.
* **Computational resources:** Generally, our program uses less memory than SciMLSensitivity.jl, but may require more CPU time to run.
Matlab-based programs considered here use less CPU time and memory than both Julia options, but may be less accurate in estimation.
* **Data size:** We also note that fewer data points may be required by our program in order to provide quality estimation. We present figs. 1(a) to 1(b) to illustrate how the number of points affects the error.
* **Multiple solutions:** Our program's advantage is its ability to recover multiple solutions where it is appropriate (local identifiability), while optimization-based programs may miss a valid value of a parameter.

We do not provide performance data on the following systems:

* Data2Dynamics [26],
* COPASI [19],
* IQRTools/IQDesktop ([https://igrtools.intiquan.com/](https://igrtools.intiquan.com/), [https://igddesktop.intiquan.com/book/](https://igddesktop.intiquan.com/book/))

Table 1: Comparison of the maximal absolute error (in percentages) across three parameter estimation systems. "N/A" represents a missing answer due to a stability error when running the SciMLSensitivity.jl code. The \({}^{*}\) means that our program returned multiple valid results due to local identifiability, but we report the smallest error between the true and estimated parameters.

## 6 List of Differential Models

In this section, we present the ODE systems used in our benchmarks.

### Crauste Model

It is based on the Crauste system [16, Section 4] originally described in [9]. These equations describe a multi-compartment dynamical system simulating the behavior of CD8 T-cells.
\[\begin{cases}\dot{N}=-\mu_{N}\,N-\delta_{NE}\,N\,P\\ \dot{E}=\delta_{NE}\,N\,P-\mu_{EE}\,E^{2}-\delta_{EL}\,E+\rho_{E}\,E\,P\\ \dot{S}=\delta_{EL}\,S-S\,\delta_{LM}-\mu_{LL}\,S^{2}-\mu_{LE}\,E\,S\\ \dot{M}=\delta_{LM}\,S-\mu_{M}\,M\\ \dot{P}=\rho_{P}\,P^{2}-\mu_{P}\,P-\mu_{PE}\,E\,P-\mu_{PL}\,S\,P\\ \end{cases} \tag{2}\]

\[\begin{cases}y_{1}=N,\,y_{2}=E\\ y_{3}=S+M,y_{4}=P\end{cases} \tag{3}\]

#### DAISY 3-Compartment Model 1

\[\begin{cases}\dot{x}_{1}=-p_{1}\,x_{1}+x_{2}+u_{0}\\ \dot{x}_{2}=p_{3}\,x_{1}-p_{4}\,x_{2}+x_{3}\\ \dot{x}_{3}=p_{6}\,x_{1}-p_{7}\,x_{3}\\ \dot{u}_{0}=1.\end{cases} \tag{4}\]

\[\begin{cases}y_{1}=x_{1}\\ y_{2}=u_{0}\end{cases} \tag{5}\]

#### DAISY 3-Compartment Model 2

It comes from examples used in the analysis of the DAISY [5] identifiability software. It represents a 3-compartment model. \[\begin{cases}\dot{x_{1}}=-(a_{21}+a_{31}+a_{01})\,x_{1}+a_{12}\,x_{2}+a_{13}\,x_{3}\\ \dot{x_{2}}=a_{21}\,x_{1}-a_{12}\,x_{2}\\ \dot{x_{3}}=a_{31}\,x_{1}-a_{13}\,x_{3}\end{cases} \tag{6}\]

\[\begin{cases}y_{1}=x_{1}\\ y_{2}=x_{2}\end{cases} \tag{7}\]

#### DAISY 4-Compartment Model

It comes from examples used in the analysis of the DAISY [5] identifiability software. It represents a 4-compartment system. \[\begin{cases}\dot{x_{1}}=-k_{01}\,x_{1}+k_{12}\,x_{2}+k_{13}\,x_{3}+\\ \qquad\quad k_{14}x_{4}-k_{21}\,x_{1}-k_{31}\,x_{1}-k_{41}\,x_{1}\\ \dot{x_{2}}=-k_{12}\,x_{2}+k_{21}\,x_{1}\\ \dot{x_{3}}=-k_{13}\,x_{3}+k_{31}\,x_{1}\\ \dot{x_{4}}=-k_{14}\,x_{4}+k_{41}\,x_{1}\\ \end{cases} \tag{8}\]

\[\begin{cases}y_{1}=x_{1}\\ y_{2}=x_{2}\\ y_{3}=x_{3}+x_{4}\end{cases} \tag{9}\]

#### Fitzhugh-Nagumo Model

Here we list the Fitzhugh-Nagumo model [13, 22], which is derived from the Hodgkin-Huxley equations for spike generation in squid giant axons. \[\begin{cases}\dot{V}=g\,\left(V-\frac{V^{3}}{3}+R\right)\\ \dot{R}=\frac{1}{g}\,\left(V-a+b\,R\right)\end{cases} \tag{10}\]

\[\begin{cases}y_{1}=V\end{cases} \tag{11}\]

#### HIV Dynamics Model

The following example is based on the model of HIV infection dynamics during interaction with the immune system over the course of various treatments. The model equations are from [32, Page 1]. \[\begin{cases}\dot{x}=\lambda-d\,x-\beta\,x\,v\\ \dot{y}=\beta\,x\,v-a\,y\\ \dot{v}=k\,y-u\,v\\ \dot{w}=c\,x\,y\,w-c\,q\,y\,w-b\,w\\ \dot{z}=c\,q\,y\,w-h\,z\\ \end{cases} \tag{12}\]

\[\begin{cases}y_{1}=w,\;y_{2}=z\\ y_{3}=x,\;y_{4}=y+v\end{cases} \tag{13}\]

#### Lotka-Volterra Model

It is a two-species Lotka-Volterra predator-prey model. This particular model has 3 parameters and 2 states. \[\begin{cases}\dot{r}=k_{1}\,r-k_{2}\,r\,w\\ \dot{w}=k_{2}\,r\,w-k_{3}\,w\end{cases} \tag{14}\]

\[\begin{cases}y_{1}=r\end{cases} \tag{15}\]

#### Harmonic Oscillator Model

It represents a harmonic oscillator (pendulum). \[\begin{cases}\dot{x}_{1}=-a\,x_{2}\\ \dot{x}_{2}=\frac{1}{b}\,x_{1}\end{cases} \tag{16}\]

\[\begin{cases}y_{1}=x_{1}\\ y_{2}=x_{2}\end{cases} \tag{17}\]

## Acknowledgements

The authors are grateful to the CCIS at Queens College and CIMS NYU for the computational resources, to Julio Banga, Marisa Eisenberg, Nikki Meshkat, and Maria Pia Saccomani for useful discussions, and to the referees for valuable comments. The authors are also grateful to Xinglong Zhou and Jianing Qi for initial explorations of the role of polynomial interpolation in this problem. The authors are also grateful to Henning Schmidt, developer of IQRTools/IQDesktop, for numerous discussions while we tried to install the software.
## Funding This work was supported by the National Science Foundation [DMS-1760448, CCF-1708884, CCF-2212460, CCF-2212461, CCF-2212462] and City University of New York [PSC-CUNY #65605-00 53].
2305.01906
A Bayesian approach to identify changepoints in spatio-temporal ordered categorical data: An application to COVID-19 data
Although there is substantial literature on identifying structural changes for continuous spatio-temporal processes, the same is not true for categorical spatio-temporal data. This work bridges that gap and proposes a novel spatio-temporal model to identify changepoints in ordered categorical data. The model leverages an additive mean structure with separable Gaussian space-time processes for the latent variable. Our proposed methodology can detect significant changes in the mean structure as well as in the spatio-temporal covariance structures. We implement the model through a Bayesian framework that gives a computational edge over conventional approaches. From an application perspective, our approach's capability to handle ordinal categorical data provides an added advantage in real applications. This is illustrated using county-wise COVID-19 data (converted to categories according to CDC guidelines) from the state of New York in the USA. Our model identifies three changepoints in the transmission levels of COVID-19, which are indeed aligned with the ``waves'' due to specific variants encountered during the pandemic. The findings also provide interesting insights into the effects of vaccination and the extent of spatial and temporal dependence in different phases of the pandemic.
Siddharth Rawat, Abe Durrant, Adam Simpson, Grant Nielson, Candace Berrett, Soudeep Deb
2023-05-03T05:42:08Z
http://arxiv.org/abs/2305.01906v1
A Bayesian approach to identify changepoints in spatio-temporal ordered categorical data: An application to COVID-19 data ###### Abstract Although there is substantial literature on identifying structural changes for continuous spatio-temporal processes, the same is not true for categorical spatio-temporal data. This work bridges that gap and proposes a novel spatio-temporal model to identify changepoints in ordered categorical data. The model leverages an additive mean structure with separable Gaussian space-time processes for the latent variable. Our proposed methodology can detect significant changes in the mean structure as well as in the spatio-temporal covariance structures. We implement the model through a Bayesian framework that gives a computational edge over conventional approaches. From an application perspective, our approach's capability to handle ordinal categorical data provides an added advantage in real applications. This is illustrated using county-wise COVID-19 data (converted to categories according to CDC guidelines) from the state of New York in the USA. Our model identifies three changepoints in the transmission levels of COVID-19, which are indeed aligned with the "waves" due to specific variants encountered during the pandemic. The findings also provide interesting insights into the effects of vaccination and the extent of spatial and temporal dependence in different phases of the pandemic. **Keywords and phrases:** Pandemic modeling, Gibbs Sampling, Slice Sampling, Coronavirus, Process Change, MCMC. ## 1 Introduction The changepoint problem has rich existing literature. The earliest work by Chernoff and Zacks (1964) analyzed a change in the mean structure for an independent and identically distributed (i.i.d.) Gaussian distribution. Ensuing work examined changepoints in the variance and structural changes for the non-Gaussian case (see, for example, Zacks, 1983; Krishnaiah and Miao, 1988). Moving away from the i.i.d. Gaussian setting, changepoints are most often modeled or tested in time series data. In this case, the concept of a changepoint implies a change in the temporal dependence after a specific point in time. The simplest case to account for a changepoint in time is to account for a change in the mean structure after the changepoint, while the autoregressive or dependence structure remains the same (see Brockwell and Davis, 1991). Taylor (1994) implemented a new class of models to account for stochastic volatility, and Kim, Shephard and Chib (1998) analyzed similar research topics in the autoregressive conditional heteroskedasticity (ARCH) framework. In fact, changepoint detection methods in time series data are abounding in extant literature. Our focus in this work is on spatio-temporal data sets; therefore, we provide a brief review of this literature here. Majumdar, Gelfand and Banerjee (2005) proposed a model-based approach under a Bayesian framework and estimated a single changepoint in both the mean and dependence structures. Yu et al. (2008) and Xu et al. (2012) developed multi-level spatio-temporal dual changepoint models using conditional autoregressive (CAR) structure. They applied the model to examine the effect of alcohol outlets' control policy on assaultive violence rates. Altieri et al. (2015) proposed a Bayesian changepoint model for spatio-temporal point processes and fit the model with the help of Integrated Nested Laplace Approximation (INLA) to detect multiple changepoints. Altieri et al. 
(2016) extended the previous work to a Bayesian P-splines framework for examining earthquake point processes. Fiedler et al. (2018) studied a multiple-changepoints model in spatio-temporal seismic data, and implemented it through a Bayesian approach.
class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of a class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of "transmission" feedback mechanisms for the control of a class of a class of "transmission" feedback mechanisms for the control response variables. We elaborate on this in Section 3.2 below. The rest of the paper is organized as follows. Section 2 gives a brief description and outline of the data, along with the motivation of considering a spatio-temporal model. Our proposed method and related discussions are given in Section 3. Application to the COVID-19 data from New York state, United States of America (USA), is presented in Section 4. We finish with a discussion and remarks in Section 5. Some additional exploratory analyses are provided in Appendix A, while thorough mathematical derivations of the Bayesian steps used in the method are provided in Appendix B. ## 2 Data The data used for the analysis are obtained from the GitHub repository maintained by JHU-CSSE (2020). The data is available at a county level. For this study, we concentrate on the data for all the counties from the state of New York. We look at the data at a weekly granularity, from 20th January 2020 to the week of 16th May 2022. This covers most of the COVID-19 "waves" in New York. Note that the data is prepared at a weekly level for two main reasons. First, this takes care of the problems with data reporting over weekends (see, Ricon-Becker et al., 2020). Second, the reduction in the size of the data from daily to weekly provides a significant computational advantage while not losing too much information on the data collection front. Thus, the data set has a total of 7564 observations from 62 counties and 122 weeks. As mentioned in Section 1, the weekly numbers are first converted into categorical data that reflect different transmission levels. We follow the CDC guidelines to define these levels, as provided in Table 1. As a first step of exploratory analysis, Figure 1 shows the transmission level categories of COVID-19 in New York at four different weeks across the entire time period of the data set. At each point in time, there is clear spatial dependence for these categories. The maps show that counties with the same COVID-19 transmission levels tend to cluster, providing evidence of a need to account for or quantify spatial dependence in any modeling technique. We see similar dependence in the counties' transmission levels across time. Because of the categorical nature of the response variable, it is easier to examine the similarities between counties in a smoother, less noisy plot. Specifically, Figure 2 shows the average transmission level for 30-week windows across the observed time period. Each line represents a different county, with four counties - Albany (red), Niagara (green), Queens (blue), and Suffolk (purple) - highlighted in color for illustrative purposes. It is evident that the transmission levels are similar across counties for the entire time period. 
\begin{table} \begin{tabular}{c c} \hline New cases per 100,000 persons in a week & Category \\ \hline 0 to 9.99 & 1 \\ 10 to 49.99 & 2 \\ 50 to 99.99 & 3 \\ \(\geq 100\) & 4 \\ \hline \end{tabular} \end{table}

Table 1: CDC guidelines used to define the ordered categories of COVID-19 transmission levels.

Figure 1: COVID-19 transmission level category at four different time points for New York state counties.

Figure 2: Average transmission level for 30-week windows for all counties (gray) with four counties from across the state highlighted in the four colors.

We also account for two covariates in our model: the number of deaths that occurred in the county in the previous week (which reflects the severity of the virus) and the proportion of vaccinated people with the first dose (which reflects the extent of preventive measures). The rationale behind using these covariates is discussed in more detail in Section 3.3. For the first one, in a similar line as Rawat and Deb (2021), we take the logarithmic transformation for the previous week's new death data and denote it by \(\log d(s_{i},t-1)\) for the \(i^{th}\) county at the \(t^{th}\) week. For the vaccination covariate, we define the vaccination prevalence \(x(s_{i},t)\) as \[x(s_{i},t)=\frac{vc_{s_{i},t}}{p_{s_{i},t}}, \tag{1}\] where \(vc_{s_{i},t}\) is the cumulative number of first doses of the COVID-19 vaccine administered in the \(i^{th}\) county until the \(t^{th}\) week, and \(p_{s_{i},t}\) is the corresponding population. In our model, for both covariates, we use standardized values such that the mean is 0 and the standard deviation is 1.

To understand how the covariate values change over time, we look at Figure 3. Again, each line represents a unique county, with four illustrative counties highlighted in color. Like the transmission levels, the covariates display generally similar patterns in all counties, but there is quite a bit of variation in the values. The trend that the covariate values follow across time is different from the trend of the transmission levels; for example, vaccinations are non-decreasing, but the transmission levels both increase and decrease across the time period. For this reason, a model that accounts for a changing relationship with the covariates across time is imperative. Because of the introduction of changepoints in the model, our methodology can account for this changing relationship in an apt manner. It is worth mentioning that we experimented with linear and quadratic trend functions as well, but since COVID-19 data shows "wave" like patterns, they were not found to be suitable for the study. Furthermore, the prevalence of the second dose of the COVID-19 vaccine was explored as an additional covariate. It was
Additionally, the lack of data for a single county meant that we could also only fit these models to estimate a single changepoint, as there would not be enough information to estimate many (if any) changepoints beyond one. Third, we found that for counties that did converge, estimates for changepoints and coefficients had very large uncertainty ranges and large overlap with other counties. For these reasons, a spatio-temporal model that allows for borrowing information from neighbors is absolutely necessary. ## 3 Methodology ### Model development Consider a categorical response variable \(y(s_{i},t)\) for locations \(s_{i}\in\mathcal{S}\subset\mathbb{R}^{2},\ i=1,\ldots,n\) and for time-points \(t\in\Gamma\). While we primarily work with the regularly spaced set \(\Gamma=\{1,\ldots,T\}\), we emphasize that the methodology would work for irregular spaced data as well. Likewise, although the COVID-19 data is not Figure 3: Plots showing the values of vaccine prevalence (top) and log of the previous week’s new deaths (bottom) across time. The different lines represent the different counties, with four counties highlighted in color. point-level, we find that the methodology works well for aggregate areal data, following similar works by other researchers (Jin, Carlin and Banerjee, 2005; Bradley, Holan and Wikle, 2015). For the response variable, let the total number of possible categories be represented by \(m\). For convenience, we denote these categories as \(1,2,\ldots,m\). Considering that the response is an ordered variable, we adopt the structure of ordinal prob \(|t_{1}-t_{2}|\) is calculated by the lag between the time-points. In other words, with decay parameters \(\phi_{us}\), \(\phi_{ut}\), \(\phi_{vs}\), \(\phi_{vt}\), one may write \[\begin{split}\operatorname{Corr}\left(u(s_{i},t_{1}),u(s_{j},t_{2}) \right)&=\exp\left\{-\phi_{us}\left\|s_{i}-s_{j}\right\|\right\} \times\exp\left\{-\phi_{ut}\left|t_{1}-t_{2}\right|\right\},\\ \operatorname{Corr}\left(v(s_{i},t_{1}),v(s_{j},t_{2})\right)& =\exp\left\{-\phi_{vs}\left\|s_{i}-s_{j}\right\|\right\}\times \exp\left\{-\phi_{vt}\left|t_{1}-t_{2}\right|\right\}.\end{split} \tag{6}\] Throughout this article, we use \(M=nT\) for the total number of observations. Let \(\mathbf{Y}\) be the \(M\)-dimensional vector of \(y(s_{i},t)\) observations, first arranged by time-point and then by the index of the location. The vectors \(\mathbf{\pi}\), \(\mathbf{U}\), \(\mathbf{V}\), \(\mathbf{\epsilon}\), \(\mathbf{\epsilon}^{*}\) are also created in the same fashion. The design matrix of regressor values is expressed as \(X\). The aforementioned model using vector-matrix notation is then written as follows: \[\mathbf{\pi}=\begin{cases}X\mathbf{\theta}+\mathbf{U}+\mathbf{\epsilon},&\text{for }t\leqslant t_{0},\\ X\mathbf{\theta}^{*}+\mathbf{U}+\mathbf{V}+\mathbf{\epsilon}^{*},&\text{for }t>t_{0},\end{cases} \tag{7}\] where \(\mathbf{\theta}\) is the \(k\)-dimensional coefficient vector obtained by combining \((\beta_{0},\beta_{1},\ldots,\beta_{G})^{\prime}\) and the parameter vectors \((\mathbf{\gamma}_{1}^{\prime},\mathbf{\gamma}_{2}^{\prime},\ldots,\mathbf{\gamma}_{H}^{ \prime})^{\prime}\). \(\mathbf{\theta}^{*}\) is similarly defined. Furthermore, from the above discussions, one may write \(\mathbf{U}\sim N_{M}(0,\sigma_{u}^{2}\Sigma_{ut}\otimes\Sigma_{us}))\), such that \((\Sigma_{ut})_{ij}=\exp\{-\phi_{ut}|t_{i}-t_{j}|\}\) and \((\Sigma_{us})_{ij}=\exp\{-\phi_{vs}|\lvert s_{i}-s_{j}\rvert\lvert\}\). 
Similarly, we can define covariance matrices \(\Sigma_{vt}\), \(\Sigma_{vs}\) and write \(\mathbf{V}\sim N_{M}(0,\sigma_{v}^{2}(\Sigma_{vt}\otimes\Sigma_{vs}))\). For the white noise vectors, we know that \(\mathbf{\epsilon}\sim N_{M}(0,\sigma_{\epsilon}^{2}\mathbf{I})\) and \(\mathbf{\epsilon}^{*}\sim N_{M}(0,\sigma_{\epsilon}^{2}\mathbf{I})\), where \(\mathbf{I}\) stands for an identity matrix of appropriate order. ### Bayesian estimation We use a complete Bayesian framework to estimate the model parameters. To do that, suitable prior specifications are necessary. It should be noted that to avoid identifiability issues in the computations, following Higgs and Hoeting (2010), we set the variance parameters \(\sigma_{u}^{2}\), \(\sigma_{v}^{2}\), \(\sigma_{\epsilon}^{2}\), and \(\sigma_{\epsilon^{*}}^{2}\) equal to \(1\). We adopt the principles of Gibbs sampling for the posterior computation of the other parameters. Recall that it is a type of Markov chain Monte Carlo (MCMC) technique where each parameter is updated iteratively, following the full conditional posterior distribution given the values of the other parameters from the previous iteration. Interested readers may refer to Gelfand (2000) for relevant details on the origin and implementation of Gibbs sampling. To set up the steps of the sampler, we use the symbol \(\mathcal{F}\) to indicate the information contained by all the parameters from the previous iteration except the parameter for which the full conditional posterior is calculated. First, consider the latent vector \(\mathbf{\pi}\) and let \(\pi_{i}\) denote the \(i^{th}\) element of it, while \(\mathbf{\pi}_{-i}\) represents the vector \(\mathbf{\pi}\) with all elements but the \(i^{th}\) one. In the same spirit, \(u_{i}\) and \(v_{i}\) are defined for vectors \(\mathbf{U}\) and \(\mathbf{V}\) whereas \(x_{i}^{\prime}\) represents the \(i^{th}\) row of the matrix \(X\) defined in eq. (7). Let \(t(i)\) be the time point corresponding to the same observation. Then, the full conditional distribution for \(\pi_{i}\) is found by marginalizing over \(\epsilon(s_{i},t(i))\) and \(\epsilon^{*}(s_{i},t(i))\). Following the earlier notations, if \(\mathcal{F}\) stands for \((\mathbf{Y},\mathbf{U},\mathbf{V},\mathbf{\theta},\mathbf{\theta}^{*},\mathbf{\delta},t_{0},\sigma_{ \epsilon}^{2},\sigma_{\epsilon^{*}}^{2},\mathbf{\pi}_{-i})\), then it can be shown that \[\pi_{i}|\mathcal{F}\sim\begin{cases}TN(x_{i}^{\prime}\mathbf{\theta}+u_{i},\sigma_ {\epsilon}^{2};\delta_{j-1},\delta_{j}),&\text{if }t(i)\leqslant t_{0},\;y_{i}=j,\\ TN(x_{i}^{\prime}\mathbf{\theta}^{*}+u_{i}+v_{i},\sigma_{\epsilon^{*}}^{2}; \delta_{j-1},\delta_{j}),&\text{if }t(i)>t_{0},\;y_{i}=j,\end{cases} \tag{8}\] where \(TN(a,b;c,d)\) denotes the univariate truncated normal distribution, obtained by truncating a normal distribution with mean \(a\) and variance \(b\) in the interval \((c,d)\). For each component in \(\mathbf{\theta}\) and \(\mathbf{\theta}^{*}\), we assume the uniform flat infinite-support prior distribution. Thus, posteriors for these parameters are dominated by the data, and no information is assumed in the prior distribution. 
The full conditional posterior distribution for these parameter vectors can then be written as

\[\begin{split}\mathbf{\theta}|\mathcal{F}&\sim N_{k}\left(\Sigma_{\mathbf{\theta}}\left[\frac{X^{-^{\prime}}(\mathbf{\pi}^{-}-\mathbf{U}^{-})}{\sigma_{\epsilon}^{2}}\right],\Sigma_{\mathbf{\theta}}\right),\\ \mathbf{\theta}^{*}|\mathcal{F}&\sim N_{k}\left(\Sigma_{\mathbf{\theta}^{*}}\left[\frac{X^{+^{\prime}}(\mathbf{\pi}^{+}-\mathbf{U}^{+}-\mathbf{V}^{+})}{\sigma_{\epsilon^{*}}^{2}}\right],\Sigma_{\mathbf{\theta}^{*}}\right).\end{split} \tag{9}\]

In this equation, \(\mathbf{\pi}^{-}\) and \(\mathbf{\pi}^{+}\) represent the \(\mathbf{\pi}\) vector before and after the changepoint, respectively. \(\mathbf{U}^{-}\), \(\mathbf{U}^{+}\), \(\mathbf{V}^{-}\), \(\mathbf{V}^{+}\), \(\mathbf{Y}^{-}\), \(\mathbf{Y}^{+}\), \(T^{-}\), \(T^{+}\), \(X^{-}\), and \(X^{+}\) are similarly defined. The dispersion matrices \(\Sigma_{\mathbf{\theta}}\) and \(\Sigma_{\mathbf{\theta}^{*}}\) are defined as

\[\Sigma_{\mathbf{\theta}}=\left[\frac{X^{-^{\prime}}X^{-}}{\sigma_{\epsilon}^{2}}+\frac{\Psi^{-1}}{k}\right]^{-1},\;\Sigma_{\mathbf{\theta}^{*}}=\left[\frac{X^{+^{\prime}}X^{+}}{\sigma_{\epsilon^{*}}^{2}}+\frac{\Psi^{*-1}}{k}\right]^{-1}, \tag{10}\]

where \(\Psi\) and \(\Psi^{*}\) are block diagonal matrices with entries in the order \((\mathbf{I}_{G},\Omega_{s_{1}},\Omega_{s_{2}},\ldots,\Omega_{s_{H}})\) and \((\mathbf{I}_{G},\Omega_{s_{1}}^{*},\Omega_{s_{2}}^{*},\ldots,\Omega_{s_{H}}^{*})\), respectively. It is worth noting that the above conditional posteriors would be Gaussian (with different parameter values) even if one considers Gaussian priors for \(\mathbf{\theta}\) and \(\mathbf{\theta}^{*}\), instead of the aforementioned flat distributions.

Next, for calculating the posterior distributions of the cut points \(\delta_{2},\delta_{3},\ldots,\delta_{m-1}\) we use the single variable slice sampling algorithm, as used for other parameterizations of the cut points by Heiner et al. (2022). The R package "diversitree" developed by FitzJohn (2012) is used to implement the algorithm. A brief description of the single variable slice sampling algorithm is warranted here. The algorithm follows three main steps. First, a random number \(z\) is drawn uniformly from the vertical interval \((0,g(r_{0}))\), where \(r_{0}\) is the current value of the random variable \(R\) and \(g\) is a function proportional to its density. Second, an interval of width \(w\) is randomly positioned around \(r_{0}\) and expanded in steps of \(w\) until both ends lie outside the horizontal slice \(\{r:g(r)>z\}\). Third, a new point \(r_{1}\) is sampled uniformly from this interval, shrinking the interval towards \(r_{0}\) whenever a sampled point falls outside the slice, until a point inside the slice is obtained. For further details, please refer to Neal (2003). For the prior distribution of \(\delta_{j}\), the support must be between \(\delta_{j-1}\) and \(\delta_{j+1}\) since we are working with ordinal categorical data as discussed in Section 3.1. We use the uniform distribution between those two bounds as the prior distribution for \(\delta_{j}\). Now, let \(\mathbf{\pi}_{j}^{-}\) and \(\mathbf{\pi}_{j+1}^{-}\) be the vectors that represent only those values from the vector \(\mathbf{\pi}^{-}\) where the corresponding categories for the vector \(\mathbf{Y}^{-}\) are \(j\) and \(j+1\), respectively. Similarly, \(X_{j}^{-}\), \(\mathbf{U}_{j}^{-}\), \(X_{j+1}^{-}\), and \(\mathbf{U}_{j+1}^{-}\) are defined.
In an identical manner, let \(\mathbf{\pi}_{j}^{+}\) and \(\mathbf{\pi}_{j+1}^{+}\) be the vectors representing only those values from \(\mathbf{\pi}^{+}\) where the corresponding categories for the vector \(\mathbf{Y}^{+}\) are \(j\) and \(j+1\), respectively. Similarly, \(X_{j}^{+}\), \(\mathbf{U}_{j}^{+}\), \(\mathbf{V}_{j}^{+}\), \(X_{j+1}^{+}\), \(\mathbf{U}_{j+1}^{+}\), and \(\mathbf{V}_{j+1}^{+}\) are defined. Then, the full conditional posterior for \(\delta_{j}\) can be written as

\[\begin{split} f(\delta_{j}|\mathcal{F})&\propto I(\delta_{j-1}<\delta_{j}<\delta_{j+1})\int\limits_{\delta_{j-1}}^{\delta_{j}}\exp\Big{\{}\frac{-1}{2\sigma_{\epsilon}^{2}}\left\|\mathbf{\pi}_{j}^{-}-X_{j}^{-}\mathbf{\theta}-\mathbf{U}_{j}^{-}\right\|^{2}\Big{\}}\mathrm{d}\mathbf{\pi}_{j}^{-}\\ &\int\limits_{\delta_{j}}^{\delta_{j+1}}\exp\Big{\{}\frac{-1}{2\sigma_{\epsilon}^{2}}\left\|\mathbf{\pi}_{j+1}^{-}-X_{j+1}^{-}\mathbf{\theta}-\mathbf{U}_{j+1}^{-}\right\|^{2}\Big{\}}\mathrm{d}\mathbf{\pi}_{j+1}^{-}\\ &\int\limits_{\delta_{j-1}}^{\delta_{j}}\exp\Big{\{}\frac{-1}{2\sigma_{\epsilon^{*}}^{2}}\left\|\mathbf{\pi}_{j}^{+}-X_{j}^{+}\mathbf{\theta}^{*}-\mathbf{U}_{j}^{+}-\mathbf{V}_{j}^{+}\right\|^{2}\Big{\}}\mathrm{d}\mathbf{\pi}_{j}^{+}\\ &\int\limits_{\delta_{j}}^{\delta_{j+1}}\exp\Big{\{}\frac{-1}{2\sigma_{\epsilon^{*}}^{2}}\left\|\mathbf{\pi}_{j+1}^{+}-X_{j+1}^{+}\mathbf{\theta}^{*}-\mathbf{U}_{j+1}^{+}-\mathbf{V}_{j+1}^{+}\right\|^{2}\Big{\}}\mathrm{d}\mathbf{\pi}_{j+1}^{+}.\end{split} \tag{11}\]

It is easy to observe that the posterior distribution of \(\delta_{j}\) does not have a closed form. To implement the slice sampling procedure, we simplify the above integral by making appropriate substitutions. It is worth mentioning that the multivariate slice sampling algorithm was also explored in this context, but it was computationally more expensive and less efficient in our setting than the single variable slice sampling procedure.

We turn our attention to the space-time process vectors \(\mathbf{U}\) and \(\mathbf{V}\). Recall that the \(M\times 1\) vector \(\mathbf{Y}\) denotes the entire data and that \(\mathbf{U}\) and \(\mathbf{V}\) are the concatenated forms (by column) of the matrices

\[U_{n\times T}=\begin{pmatrix}u_{s_{1},1}&u_{s_{1},2}&\ldots&u_{s_{1},T}\\ u_{s_{2},1}&u_{s_{2},2}&\ldots&u_{s_{2},T}\\ \vdots&\vdots&\vdots&\vdots\\ u_{s_{n},1}&u_{s_{n},2}&\ldots&u_{s_{n},T}\end{pmatrix},\quad V_{n\times T}=\begin{pmatrix}v_{s_{1},1}&v_{s_{1},2}&\ldots&v_{s_{1},T}\\ v_{s_{2},1}&v_{s_{2},2}&\ldots&v_{s_{2},T}\\ \vdots&\vdots&\vdots&\vdots\\ v_{s_{n},1}&v_{s_{n},2}&\ldots&v_{s_{n},T}\end{pmatrix}. \tag{12}\]

It is critical to point out that the conventional posterior distributions for the entire vectors \(\mathbf{U}\) and \(\mathbf{V}\) would entail the inversion of \(M\times M\) dimensional matrices in every iteration for every Markov chain during the Gibbs sampling procedure. This is computationally demanding, and to circumvent the problem, we follow a less expensive procedure by taking advantage of the separability of the spatio-temporal structure. Divide \(\mathbf{V}\) into \(\mathbf{V}_{1},\mathbf{V}_{2},\ldots,\mathbf{V}_{T}\) with \(\mathbf{V}_{t}\) being the \(t^{th}\) column of the \(V\) matrix in eq. (12), and let \(\mathbf{V}_{-t}=(\mathbf{V}_{1},\ldots,\mathbf{V}_{t-1},\mathbf{V}_{t+1},\ldots,\mathbf{V}_{T})\).
Consider the partitioned structure

\[\Sigma_{vt}=\begin{bmatrix}\Sigma_{11}&\Sigma_{12}\\ \Sigma_{21}&\Sigma_{22}\end{bmatrix}\]

for the temporal correlation matrix, with subscript \(1\) denoting the correlation part corresponding to the \(t^{th}\) time point and subscript \(2\) denoting the same for the rest of the time points. Then, letting \(\mathbf{\mu}_{vc}=(\Sigma_{12}\otimes\Sigma_{vs})(\Sigma_{22}^{-1}\otimes\Sigma_{vs}^{-1})\mathbf{V}_{-t}\) and \(\Sigma_{vc}=(\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})\otimes\Sigma_{vs}\), multivariate normal distribution theory indicates that

\[\mathbf{V}_{t}|\mathbf{V}_{-t},\sigma_{v}^{2}\sim N_{n}(\mathbf{\mu}_{vc},\sigma_{v}^{2}\Sigma_{vc}). \tag{13}\]

Following the same idea for \(\mathbf{U}\) as well (with \(\mathbf{\mu}_{uc}\) and \(\Sigma_{uc}\) defined analogously from \(\Sigma_{ut}\) and \(\Sigma_{us}\)), the full conditional posterior distributions are

\[\mathbf{V}_{t}|\mathcal{F}\sim\begin{cases}N_{n}(\mathbf{\mu}_{vc},\sigma_{v}^{2}\Sigma_{vc}),&\text{for }t\leqslant t_{0},\\ N_{n}\bigg{(}\Sigma_{V_{t}}\left(\frac{\Sigma_{vc}^{-1}\mathbf{\mu}_{vc}}{\sigma_{v}^{2}}+\frac{\mathbf{\pi}_{t}-X_{t}\mathbf{\theta}^{*}-\mathbf{U}_{t}}{\sigma_{\epsilon^{*}}^{2}}\right),\Sigma_{V_{t}}\bigg{)},&\text{for }t>t_{0},\end{cases} \tag{14}\]

\[\mathbf{U}_{t}|\mathcal{F}\sim\begin{cases}N_{n}\bigg{(}\Sigma_{U_{t}}\left(\frac{\Sigma_{uc}^{-1}\mathbf{\mu}_{uc}}{\sigma_{u}^{2}}+\frac{\mathbf{\pi}_{t}-X_{t}\mathbf{\theta}}{\sigma_{\epsilon}^{2}}\right),\Sigma_{U_{t}}\bigg{)},&\text{for }t\leqslant t_{0},\\ N_{n}\bigg{(}\Sigma_{U_{t}}\left(\frac{\Sigma_{uc}^{-1}\mathbf{\mu}_{uc}}{\sigma_{u}^{2}}+\frac{\mathbf{\pi}_{t}-X_{t}\mathbf{\theta}^{*}-\mathbf{V}_{t}}{\sigma_{\epsilon^{*}}^{2}}\right),\Sigma_{U_{t}}\bigg{)},&\text{for }t>t_{0},\end{cases} \tag{15}\]

where \(\Sigma_{V_{t}}\) and \(\Sigma_{U_{t}}\) are defined to be

\[\Sigma_{V_{t}}=\left[\frac{\Sigma_{vc}^{-1}}{\sigma_{v}^{2}}+\frac{\mathbf{I}_{n}}{\sigma_{\epsilon^{*}}^{2}}\right]^{-1},\;\Sigma_{U_{t}}=\left[\frac{\Sigma_{uc}^{-1}}{\sigma_{u}^{2}}+\frac{\mathbf{I}_{n}}{\sigma_{\epsilon}^{2}}\right]^{-1}. \tag{16}\]

Next, we look at the four decay parameters in the spatio-temporal covariance functions. One can calculate the posterior distributions of \(\phi_{vt}\), \(\phi_{vs}\), \(\phi_{ut}\), and \(\phi_{us}\), and show that they do not adhere to any known probability distribution. Thus, we use the single variable slice sampling algorithm as applied for \(\delta_{j}\) before. It is particularly of note here that we estimate the decay parameters on a continuous scale, which is in stark contrast to the conventional cross-validation approach used by many other researchers. The problem with using a cross-validation method is that if there are more than two decay parameters being estimated, then only a small set of possible values for the parameters can be considered to handle the computational burden. For example, we look at the work of Sahu, Gelfand and Holland (2006), who used two spatio-temporal processes in their model for analyzing air pollution levels. However, in their study, they had to find the optimal values of the decay parameters from a set of only five values for the spatial decay parameters and only three values for the temporal decay parameters. Clearly, this limits the algorithm's ability to explore all possible values for the decay parameters and runs the risk of not exploring the actual sample space (a four-dimensional cube for our model). We take advantage of the aforementioned slice sampling procedure to alleviate this computational burden in estimating the decay parameters.
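For reference, a minimal version of the single variable slice sampling step described in Section 3.2 (stepping out followed by shrinkage, following Neal, 2003) is sketched below in Python. The actual analysis relies on the R package diversitree, so this is only an illustrative stand-in, and the toy target density and step width are hypothetical.

```python
import numpy as np

def slice_sample_step(x0, log_f, w=1.0, max_steps=50, rng=None):
    """One slice-sampling update (Neal, 2003) for a univariate log-density log_f.

    Assumes x0 lies in the support of the target (log_f(x0) > -inf).
    """
    rng = rng or np.random.default_rng()
    log_z = log_f(x0) + np.log(rng.uniform())      # vertical slice level
    # Stepping out: position an interval of width w at random around x0, expand it.
    left = x0 - w * rng.uniform()
    right = left + w
    for _ in range(max_steps):
        if log_f(left) < log_z:
            break
        left -= w
    for _ in range(max_steps):
        if log_f(right) < log_z:
            break
        right += w
    # Shrinkage: sample until a point inside the slice is found.
    while True:
        x1 = rng.uniform(left, right)
        if log_f(x1) >= log_z:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

# Toy target: a standard normal log-density (stand-in for a cut-point or
# decay-parameter log posterior).
rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x**2
x, samples = 0.1, []
for _ in range(1000):
    x = slice_sample_step(x, log_target, w=1.0, rng=rng)
    samples.append(x)
print(np.mean(samples), np.std(samples))
```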
For the priors, we choose uniform distributions between \(0\) and \(3\) to obtain the full conditional posterior distributions as follows:

\[\begin{split}\phi_{vt}|\mathcal{F}&\propto|\Sigma_{vt}|^{-n/2}\exp\Bigg\{\frac{-\mathbf{V}^{\prime}(\Sigma_{vt}^{-1}\otimes\Sigma_{vs}^{-1})\mathbf{V}}{2\sigma_{v}^{2}}\Bigg\}I(0<\phi_{vt}<3),\\ \phi_{vs}|\mathcal{F}&\propto|\Sigma_{vs}|^{-T/2}\exp\Bigg\{\frac{-\mathbf{V}^{\prime}(\Sigma_{vt}^{-1}\otimes\Sigma_{vs}^{-1})\mathbf{V}}{2\sigma_{v}^{2}}\Bigg\}I(0<\phi_{vs}<3),\\ \phi_{ut}|\mathcal{F}&\propto|\Sigma_{ut}|^{-n/2}\exp\Bigg\{\frac{-\mathbf{U}^{\prime}(\Sigma_{ut}^{-1}\otimes\Sigma_{us}^{-1})\mathbf{U}}{2\sigma_{u}^{2}}\Bigg\}I(0<\phi_{ut}<3),\\ \phi_{us}|\mathcal{F}&\propto|\Sigma_{us}|^{-T/2}\exp\Bigg\{\frac{-\mathbf{U}^{\prime}(\Sigma_{ut}^{-1}\otimes\Sigma_{us}^{-1})\mathbf{U}}{2\sigma_{u}^{2}}\Bigg\}I(0<\phi_{us}<3).\end{split} \tag{17}\]

In an exactly similar fashion, we can also show that for the decay parameters of the spatially-varying coefficients,

\[\begin{split}\omega_{s}^{h}|\mathcal{F}&\propto|\Omega_{s}^{h}|^{-1/2}\exp\Bigg\{\frac{-\mathbf{\gamma}_{h}^{\prime}(\Omega_{s}^{h})^{-1}\mathbf{\gamma}_{h}}{2k}\Bigg\}I(0<\omega_{s}^{h}<3),\\ \omega_{s}^{*h}|\mathcal{F}&\propto|\Omega_{s}^{*h}|^{-1/2}\exp\Bigg\{\frac{-\mathbf{\gamma}_{h}^{*\prime}(\Omega_{s}^{*h})^{-1}\mathbf{\gamma}_{h}^{*}}{2k}\Bigg\}I(0<\omega_{s}^{*h}<3).\end{split} \tag{18}\]

Finally, for calculating the full conditional posterior distribution of the changepoint \(t_{0}\), we consider the set \(S_{T}=\{0,1,2,\ldots,T\}\) and use a discrete uniform prior distribution. Let \(f_{S_{T}}(t_{0})\) denote this prior. Note that the endpoints are included in the sample space so as to enable the algorithm to converge to either of the endpoints, which would effectively suggest no changepoint. Then, the Gibbs sampling step corresponding to the parameter \(t_{0}\) requires the conditional posterior distribution, which is described by the probability mass function

\[\begin{split} f(t_{0}|\mathcal{F})&\propto I(t_{0}\in S_{T})(2\pi\sigma_{\epsilon}^{2})^{\frac{-nT^{-}}{2}}\exp\Bigg\{\frac{-1}{2\sigma_{\epsilon}^{2}}\left\|\mathbf{\pi}^{-}-(X^{-}\mathbf{\theta}+\mathbf{U}^{-})\right\|^{2}\Bigg\}\\ &\quad(2\pi\sigma_{\epsilon^{*}}^{2})^{\frac{-nT^{+}}{2}}\exp\Bigg\{\frac{-1}{2\sigma_{\epsilon^{*}}^{2}}\left\|\mathbf{\pi}^{+}-(X^{+}\mathbf{\theta}^{*}+\mathbf{U}^{+}+\mathbf{V}^{+})\right\|^{2}\Bigg\}.\end{split} \tag{19}\]

### Multiple changepoint detection in COVID-19 Data

In the previous two subsections, we discussed the Bayesian estimation of the proposed changepoint detection model in a general sense. Here, we explain how the model is applied to the COVID-19 data described in Section 2. For this application, as mentioned already, we use the logarithmic transformation of the number of new deaths in the previous week as a non-spatially varying covariate. On the other hand, the prevalence of the first COVID-19 vaccine dose is assigned a spatially-varying coefficient in the model. Let us use \(\log d(s_{i},t-1)\) to denote the first covariate at time \(t\) and location \(s_{i}\), whereas \(\mathbf{x}(t)=(x(s_{1},t),\ldots,x(s_{n},t))^{\prime}\) is the \(n\)-dimensional vector of the vaccination prevalence in the set of locations.
Then, the mean structure of the latent variable can be written as \[\begin{split}\mu(s_{i},t)&=\beta_{0}+\beta_{1}\log d (s_{i},t-1)+\mathbf{\gamma}(s_{i})x(s_{i},t),\quad\text{for}\ \ t\leqslant t_{0},\\ \mu^{*}(s_{i},t)&=\beta_{0}^{*}+\beta_{1}^{*}\log d (s_{i},t-1)+\gamma^{*}(s_{i})x(s_{i},t),\quad\text{for}\ t>t_{0}.\end{split} \tag{20}\] For the covariate related to the previous week's death, it is expected that it contributes positively to the spreading of the disease. It is connected to the idea that death by COVID-19 corresponds to a higher viral count, which increases the chance of spreading the disease to another human (Pujadas et al., 2020). However, as more people are vaccinated, we expect this effect to be negligible, as there will be fewer deaths after vaccination. Regarding the second covariate, we expect a significantly negative coefficient in the first phase of the vaccination, which would provide evidence of the effectiveness of COVID-19 vaccines in decreasing the spread of the pandemic. As the vaccination picks up, herd immunity is likely to be acquired. For example, MacIntyre, Costantino and Trent (2022) demonstrated in their research that Australia achieved herd immunity with the vaccination coverage of around 66% population. Once that status is reached, the effect of vaccines should not be prominent in explaining the spread of the disease. Furthermore, taking inspiration from Utazi et al. (2018) who studied the spatially-varying impact of measles vaccination in different countries, we hypothesize that different counties in New York state may have experienced unequal effects of vaccines in its relationship with the spread of the disease. This is, in fact, the primary motivation behind using the spatially-varying coefficients for the first-dose vaccination. Another crucial aspect of our proposed methodology is the ability to assess whether the estimated changepoint is indeed significant. Following the convention in Bayesian literature, we make use of the Bayes factor. The Bayes factor allows us to compare a model that does not have any changepoint in the structure to the model with a changepoint. Let \(\mathcal{M}_{1}\) denote our proposed model with a changepoint (see eq. (2) and eq. (3)) and \(\mathcal{M}_{2}\) be the other model obtained by fitting the model with no changepoint (i.e., the first line before the changepoint in eq. (3) for the entire data). 
The likelihood of the data given a particular model is then

\[f(\mathbf{Y}|\mathcal{M}_{i}) =\int f(\mathbf{Y},\mathbf{\theta},\mathbf{\theta}^{*},\mathbf{U},\mathbf{V}|\mathcal{M}_{i})\mathrm{d}\mathbf{\theta}\,\mathrm{d}\mathbf{\theta}^{*}\mathrm{d}\mathbf{U}\,\mathrm{d}\mathbf{V}\] \[=\int f(\mathbf{Y}|\mathbf{\theta},\mathbf{\theta}^{*},\mathbf{U},\mathbf{V},\mathcal{M}_{i})f(\mathbf{\theta},\mathbf{\theta}^{*},\mathbf{U},\mathbf{V}|\mathcal{M}_{i})\mathrm{d}\mathbf{\theta}\,\mathrm{d}\mathbf{\theta}^{*}\mathrm{d}\mathbf{U}\,\mathrm{d}\mathbf{V}\] \[=\mathbb{E}_{\mathbf{\theta},\mathbf{\theta}^{*},\mathbf{U},\mathbf{V}}\left[f(\mathbf{Y}|\mathbf{\theta},\mathbf{\theta}^{*},\mathbf{U},\mathbf{V},\mathcal{M}_{i})\right]\] \[\approx\frac{1}{m}\sum_{j=1}^{m}f\left(\mathbf{Y}|\mathbf{\theta}^{(j)},\mathbf{\theta}^{*(j)},\mathbf{U}^{(j)},\mathbf{V}^{(j)},\mathcal{M}_{i}\right),\]

where \(\mathbf{\theta}^{(j)}\sim f(\mathbf{\theta}|\mathcal{M}_{i})\), \(\mathbf{\theta}^{*(j)}\sim f(\mathbf{\theta}^{*}|\mathcal{M}_{i})\), \(\mathbf{U}^{(j)}\sim f(\mathbf{U}|\mathcal{M}_{i})\), \(\mathbf{V}^{(j)}\sim f(\mathbf{V}|\mathcal{M}_{i})\) are realizations from the model-specific distributions. However, this approximation is not directly available when non-informative priors are used, as in this case. As suggested by Newton and Raftery (1994) and Risser et al. (2019), in such cases, one can leverage the posterior distributions of the parameters to estimate the likelihood with the help of the harmonic mean. Specifically, if \(\mathbf{\theta}^{(j)}_{post}\) denotes the realization of the parameter vector from the \(j^{th}\) posterior sample (similarly for the other parameters), then the likelihood is obtained as

\[f(\mathbf{Y}|\mathcal{M}_{i})\approx\Bigg[\frac{1}{m}\sum_{j=1}^{m}\frac{1}{f(\mathbf{Y}|\mathbf{\theta}^{(j)}_{post},\mathbf{\theta}^{*(j)}_{post},\mathbf{U}^{(j)}_{post},\mathbf{V}^{(j)}_{post},\mathcal{M}_{i})}\Bigg]^{-1}. \tag{21}\]

Once the likelihood values are computed, we use the Bayes factor to decide if the estimated changepoint is significant in our problem. Following Kass and Raftery (1995), we use the cutoff of 100 for the Bayes factor to deduce whether there is decisive evidence in favor of the changepoint model. Recall that the Bayes factor for the model \(\mathcal{M}_{1}\) with respect to \(\mathcal{M}_{2}\) is given by

\[BF_{12}=\frac{f(\mathbf{Y}|\mathcal{M}_{1})}{f(\mathbf{Y}|\mathcal{M}_{2})}. \tag{22}\]

The above procedure is not only useful for deciding whether there is a single significant changepoint, but also for finding whether there are multiple changepoints in the ordered categorical spatio-temporal data. To that end, we propose a binary segmentation-type algorithm. This technique has been used in the literature to detect multiple changepoints in different problems (Fryzlewicz, 2014; Cho and Fryzlewicz, 2015). To implement the algorithm in our context, first, we run the aforementioned procedure to find a changepoint, if any, in the entire time period. In all instances of Gibbs samplers in this work, we monitor the convergence through the Gelman-Rubin diagnostic (Gelman and Rubin, 1992), running three Markov chains simultaneously. Now, if the Bayes factor approach provides decisive evidence in favor of the changepoint, we divide the data into two segments with respect to the temporal domain. Then, the same algorithm is run on each of the two segments, and we proceed recursively until no significant changepoint is left in the data.
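As an illustration of the model-comparison step in eqs. (21) and (22), the sketch below computes a harmonic-mean estimate of the log marginal likelihood from per-iteration log-likelihood values and the resulting log Bayes factor. This is not the authors' code, and the log-likelihood draws are simulated placeholders rather than output from the fitted models.

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_harmonic(loglik_draws):
    """Harmonic-mean estimate of log f(Y | M) from log-likelihoods evaluated at
    posterior samples, cf. eq. (21)."""
    loglik_draws = np.asarray(loglik_draws)
    m = loglik_draws.size
    # log of [ (1/m) * sum_j exp(-loglik_j) ]^{-1}, computed stably in log space
    return -(logsumexp(-loglik_draws) - np.log(m))

def log_bayes_factor(loglik_m1, loglik_m2):
    """log BF_12 = log f(Y|M1) - log f(Y|M2), cf. eq. (22)."""
    return log_marginal_harmonic(loglik_m1) - log_marginal_harmonic(loglik_m2)

# Hypothetical per-iteration log-likelihoods from a changepoint and a
# no-changepoint fit; decisive evidence corresponds to log BF > log(100).
rng = np.random.default_rng(2)
ll_changepoint = rng.normal(-5000.0, 3.0, size=2000)
ll_no_changepoint = rng.normal(-5050.0, 3.0, size=2000)
print(log_bayes_factor(ll_changepoint, ll_no_changepoint), np.log(100))
```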
From a pragmatic standpoint, as an additional stopping rule, we impose the restriction that any segment identified by the changepoint(s) must be at least three months (equivalently, 12 weeks) long. Thus, if there are no more than 24 weeks of data for a particular segment, we do not run the algorithm further. Also, if a changepoint is found to be too close to either of the endpoints, it is ignored based on the same restriction. We present the pseudo-code of the entire procedure of multiple changepoint detection in ordered categorical spatio-temporal data in Algorithm 1. Note that the stopping criteria \(\mathcal{C}\) implies that either the Bayes factor approach does not support the existence of a changepoint or that every segment has no more than the required amount of data. We emphasize that this is a subjective choice and can be easily adjusted under other considerations. ## 4 Results As mentioned in Section 2, our objective is to utilize the proposed model to detect the changepoints in the spatio-temporal spread of COVID-19 by considering its ordered categorical nature. Recall that the data used in this study comprise 122 weeks of information (\(20^{th}\) January 2020 to \(16^{th}\) May 2022) from 62 counties in the state of New York. In this section, we first discuss the changepoints identified by the algorithm and discuss their implications. We then provide an in-depth look at the relationship between the covariates and the spread of the pandemic. Finally, we examine the extent of spatial and temporal dependence in the propagation of the disease across different time periods. In this application, using the steps in Algorithm 1, the proposed methodology detects three changepoints. Table 2 shows the estimated changepoints with the corresponding Bayes factors in all stages of the analysis, including those that were not found to be significant according to the criteria. The first changepoint is estimated at the \(57^{th}\) week, and the corresponding Bayes factor is above the cutoff, indicating evidence for a changepoint. In the second stage, the data set is divided into two time periods, and the proposed model is applied to each separate time period. For the first time period, i.e., for the \(1^{st}\) to \(57^{th}\) weeks of the data, because the vaccination started only at the end of the time period (approximately week 50), we had to remove it as a covariate. The Bayes factor here again supports the presence of changepoint, which is obtained at the \(36^{th}\) week. Likewise, for the second time period, i.e., the \(58^{th}\) to \(122^{nd}\) weeks, the \(96^{th}\) week is found to be a significant changepoint. Subsequently, in the third stage of the binary segmentation algorithm, when we fit the model again on the data from \(1^{st}\) to \(36^{th}\) week, the detected changepoint is towards the beginning of the time period with a logarithmic Bayes factor of \(-72\), suggesting no changepoint in the structure. A similar conclusion is reached when the model is run on the data from the \(58^{th}\) to \(96^{th}\) week. Meanwhile, in the time period from \(97^{th}\) to \(122^{nd}\) week, the changepoint is found to be too close to the beginning of the time period and is ignored based on our restriction. Finally, note that we do not run the algorithm for the remaining time period because there is insufficient data according to our assumption mentioned earlier. 
\begin{table} \begin{tabular}{c c c c c} \hline Stage & Time horizon & Changepoint & Date & Log Bayes Factor \\ \hline 1 & \(1^{st}\) to \(122^{nd}\) week & \(\mathbf{57^{th}}\) **week** & \(15^{th}\) Feb 2021 & 44 \\ \hline 2 & \(1^{st}\) to \(57^{th}\) week & \(\mathbf{36^{th}}\) **week** & \(21^{st}\) Sep 2020 & 64 \\ & \(58^{th}\) to \(122^{nd}\) week & \(\mathbf{96^{th}}\) **week** & \(15^{th}\) Nov 2021 & 1118 \\ \hline 3 & \(1^{st}\) to \(36^{th}\) week & \(8^{th}\) week & \(9^{th}\) Mar 2020 & \(-72\) \\ & \(37^{th}\) to \(57^{th}\) week & Insufficient data & & \\ & \(58^{th}\) to \(96^{th}\) week & \(79^{th}\) week & \(19^{th}\) Jul 2021 & \(-102\) \\ & \(97^{th}\) to \(122^{nd}\) week & Not found & & \\ \hline \end{tabular} \end{table}

Table 2: Changepoints and respective Bayes factors for the proposed algorithm. The first column indicates the stages of binary segmentation. Bolded changepoints were identified as present/significant in the data.

Thus, there are three changepoints in this data set, and they are on \(21^{st}\) September 2020, \(15^{th}\) February 2021, and \(15^{th}\) November 2021. To put this into context, Figure 4 shows the three changepoints (represented by black dots) with the total number of new cases in the state. This figure shows that the changepoint on \(15^{th}\) February 2021 is towards the end of the winter alpha wave of COVID-19 in the USA. It is also the time when vaccinations picked up rapidly in the country, and around 11% of the New York state population received the first dose. In contrast, the changepoint on \(21^{st}\) September 2020 coincides with the period when the same wave was about to start in the state. It was undoubtedly one of the most challenging times for the state, and that is well reflected by the two changepoints. Subsequently, we notice that the period of the delta variant did not experience any significant change in the way the disease spread, but the beginning of the wave of the Omicron variant, which peaked around the end of January, aligns perfectly well with the third changepoint, \(15^{th}\) November 2021. This clearly demonstrates strong support for the proposed model to correctly detect the different types of COVID-19 waves in New York.

Next, we discuss the effect sizes of the covariates, obtained from fitting the proposed model to the different time periods. We report the posterior means and the 95% credible intervals in Table 3. The spatially-varying coefficients for vaccination are shown in Figure 5 and discussed in a later paragraph. These values are obtained based on the time periods formed by the changepoints, as discussed before. Notably, for both the intercept and the coefficient for \(\log d(s_{i},t-1)\), there are quite different estimates in the four segments. The intercept for the first phase, when the pandemic was gradually picking up, is much lower. This makes sense, given the relatively lower number of reported COVID-19 cases in this period. Then, the alpha wave started running wild during the \(37^{th}\) to \(57^{th}\) week period. As vaccines did not provide much intervention until then, the intercept's estimated value is found to be relatively high. In the following segment, with more people getting vaccinated, the number of new cases decreased, thereby justifying a lower value for the estimated intercept. Contrary to that, in the last period, a generally higher average in the latent variable is observed.
This can be substantiated by the rise in new COVID-19 cases, with the omicron variant escaping the vaccination-induced immunity to take over the entire country. Interested readers may refer to Tian et al. (2021) and Ao et al. (2022) for some related discussions in this regard. For the coefficient of \(\log d(s_{i},t-1)\), initially, when there was no available vaccine, the variable was significantly related to a rise in the new cases of COVID-19 infections. In fact, for the period of \(1^{st}\) to \(36^{th}\) week, the coefficient for the previous week's number of deaths is the highest among all periods, indicating a strong positive relationship to the transmission level of COVID-19. During this phase, the state of New York reported around 6000 deaths on a weekly basis. In the next segment, the state was experiencing the alpha wave. The number of deaths came down in this stage, which points to less severity of the disease, and naturally explains the lower value of the estimated coefficient. Furthermore, once a considerable proportion of the population was vaccinated in the third time period, we see the coefficient estimates for the death numbers going down significantly. This may be perceived as reflecting the effectiveness of the vaccines. Finally, for the period between \(97^{th}\) and \(122^{nd}\) week, the coefficient for the death numbers is found to be insignificant, as the omicron variant was milder in terms of causing severe illness but was highly infectious (explaining the large intercept term in this segment). Thus, in addition to capturing changepoints, the proposed model has appropriately captured this effect as well.

Figure 4: COVID-19 waves corresponding to new cases for New York State. The graph's black dots denote the proposed model's estimated changepoints.

Let us now focus on the effect of the prevalence of vaccination. It is known that the vaccine was offered only to hospital workers and older people in the initial days. In that light, relevant data are unavailable before the \(57^{th}\) week, and this covariate has to be omitted from the model for the earlier segments. Therefore, we present the spatially varying effect of the vaccination in the other two segments in Figure 5. The plots show the parameters' posterior means, and stripes indicate that the credible interval is sufficiently away from zero. When vaccination rates picked up in the period of \(58^{th}\) to \(96^{th}\) week, the estimated coefficient was negative for all counties in New York, indicating a decrease in the transmission level of COVID-19; for 33 of the 62 counties, this coefficient is found to be significant. These include almost all counties in the eastern part of the state, which are typically more populous than the rest. One can thus argue that the vaccination was more effective in the initial phase in more populated counties. In contrast, for the period of \(97^{th}\) to \(122^{nd}\) week, we see that vaccination is generally not significant. One possible justification is the aspect of herd immunity. By \(22^{nd}\) November 2021, \(75\%\) of the state population had received their first dose, and by \(16^{th}\) May 2022, it was more than \(85\%\) of the population. As discussed in Section 3.3, around two-thirds of population coverage is required to achieve herd immunity. According to World Health Organization (2021) too, \(70\%\) of the population should be vaccinated to ensure herd immunity against COVID-19.
Therefore, after achieving this level of population coverage, the effect of the vaccination became irrelevant, especially as it offered little resistance against the omicron variant (Ao et al., 2022).

Figure 5: Spatially varying coefficients (posterior means) for the vaccination prevalence in the last two segments obtained via the changepoints. Significance is equivalent to the credible interval being sufficiently away from zero.

\begin{table} \begin{tabular}{l c c c} \hline Variable & Week Range & Posterior mean & \(95\%\) credible interval \\ \hline Intercept & 1-36 & 1.079 & \((0.467,1.615)\) \\ & 37-57 & 7.713 & \((5.483,9.525)\) \\ & 58-96 & 6.279 & \((5.124,7.406)\) \\ & 97-122 & 7.978 & \((6.520,9.321)\) \\ \hline log(Previous week’s death) & 1-36 & 1.101 & \((0.679,1.497)\) \\ & 37-57 & 0.296 & \((0.071,0.533)\) \\ & 58-96 & 0.273 & \((0.157,0.389)\) \\ & 97-122 & \(-0.146\) & \((-0.357,0.061)\) \\ \hline Spatial decay parameter & 1-57 & 0.0108 & \((0.0081,0.0129)\) \\ & 58-122 & 0.0021 & \((0.0016,0.0026)\) \\ \hline Temporal decay parameter & 1-57 & 0.264 & \((0.206,0.315)\) \\ & 58-122 & 0.203 & \((0.165,0.245)\) \\ \hline \end{tabular} \end{table}

Table 3: Parameter estimates and corresponding credible intervals when the proposed model is fitted in different segments.

As a last piece of the discussion, we look at the spatio-temporal dependence in the four segments created by the changepoints. To this end, heatmaps of the estimated spatio-temporal correlation for the latent process \(\pi(s_{i},t)\) for different periods are displayed in Figure 6. The correlations are computed based on the posterior means of the spatial and temporal decay parameters and are presented as functions of distance and time-lag. The correlation is estimated to be the lowest in the first time period compared to all other time periods. A probable reason is that lockdowns were enforced in New York very early, when the COVID-19 prevalence was quite low. After that, restrictions were relaxed slowly in a four-phase approach (Husch Blackwell, 2021). Thus, as the pandemic progressed and the prevalence kept increasing during the alpha wave (\(37^{th}\) to \(57^{th}\) weeks), we see that there is a strong spatio-temporal dependence. It was, in fact, the largest correlation among all time periods. Afterward, between the \(58^{th}\) and \(96^{th}\) week, it can be seen from the plots that there is a drop in spatial and temporal dependence. This can be attributed to the fact that during this time period, vaccination rates started picking up everywhere, as many vaccines were made available to adults of all age groups. Finally, we see a further drop in the spatial and temporal dependence pattern for the last time period. As pointed out before, the omicron variant prevailing in this phase was milder than its previous counterparts and did not cause severe illness. The population had also arguably reached herd immunity. This may explain the drop in spatial dependence, whereas the drop in the temporal correlation can be connected to the fact that illness due to the omicron variant persisted for a shorter duration than the delta variant, as explicated by Menni et al. (2022) and Vitiello et al. (2022).

Figure 6: Estimated correlation for the latent process \(\pi(s_{i},t)\) in four segments. The contour lines represent the points that have the same correlation values.

## 5 Conclusion

In this work, we have developed a novel methodology to detect changepoints in an ordinal categorical spatio-temporal data set. We also provided complete estimation details and considerations for the various model parameters.
The model is applied to COVID-19 data from the state of New York, by converting weekly new cases for every county into ordered categories (following CDC guidelines), signifying the transmission levels of COVID-19. Our model is defined through an appropriately structured latent process that can quantify the spatial and temporal dependence in the spread of the disease. It is implemented through a complete Bayesian framework. Our analysis identifies interesting changes in the spatial and temporal patterns of the pandemic, which are closely aligned with the infectivity of specific virus variants. Taking advantage of our proposed model, we also illustrate how vaccination is related to the transmission levels in different counties over the time period. One interesting finding is the evidence of varying spatial dependence in the various time segments, which may be attributed to the movement of people and the vaccination rate. Due to the unavailability of movement data across different counties of New York state, this is beyond the scope of the paper. Nonetheless, this could be a valuable extension of our work to understand the infection dynamics better. In a similar vein, restrictions and lockdowns imposed by the government are expected to impact the spread of COVID-19. Such covariates related to policy implementation have not been incorporated into our analysis, once again because of the lack of relevant data at an appropriate granularity. Naturally, this suggests another possible extension of the current approach. It is imperative to point out that not only was the application of our methodology to COVID-19 data useful for providing additional insights into a world-changing event, but the proposed model would also be helpful in other applications. For example, it can be utilized in epidemiological (e.g., other infectious disease transmission levels), environmental (e.g., precipitation levels or soil organic carbon concentration levels), education (e.g., school grades over time), and social network (e.g., level of connectedness of individuals or companies) studies, among others. Finally, we note that the proposed methodology in this article has limitations in terms of computational burden in the case of a bigger data set, e.g., if one wants to look at all counties of the USA in a single model. In particular, it would be an intriguing exercise to estimate the changepoints for different regions across the country and assess how similar or different the spread of COVID-19 was in different periods. Although our approach works in theory in such cases, applying the model at this scale would require a significant approximation in the Bayesian algorithm.
2310.14603
Betelgeuse as a Merger of a Massive Star with a Companion
We investigate the merger between a 16 solar mass star, on its way to becoming a red supergiant (RSG), and a 4 solar mass main-sequence companion. Our study employs three-dimensional hydrodynamic simulations using the state-of-the-art adaptive mesh refinement code Octo-Tiger. The initially corotating binary undergoes interaction and mass transfer, resulting in the accumulation of mass around the companion and its subsequent loss through the second Lagrangian point (L2). The companion eventually plunges into the envelope of the primary, leading to its spin-up and subsequent merger with the helium core. We examine the internal structural properties of the post-merger star, as well as the merger environment and the outflow driven by the merger. Our findings reveal the ejection of approximately 0.6 solar mass of material in an asymmetric and somewhat bipolar outflow. We import the post-merger stellar structure into the MESA stellar evolution code to model its long-term nuclear evolution. In certain cases, the post-merger star exhibits persistent rapid equatorial surface rotation as it evolves in the H-R diagram towards the observed location of Betelgeuse. These cases demonstrate surface rotation velocities of a similar magnitude to those observed in Betelgeuse, along with a chemical composition resembling that of Betelgeuse. In other cases, efficient rotationally-induced mixing leads to slower surface rotation. This pioneering study aims to model stellar mergers across critical timescales, encompassing dynamical, thermal, and nuclear evolutionary stages.
Sagiv Shiber, Emmanouil Chatzopoulos, Bradley Munson, Juhan Frank
2023-10-23T06:15:23Z
http://arxiv.org/abs/2310.14603v1
# Betelgeuse as a Merger of a Massive Star with a Companion ###### Abstract We investigate the merger between a \(16M_{\odot}\) star, on its way to becoming a red supergiant (RSG), and a \(4M_{\odot}\) main-sequence companion. Our study employs three-dimensional hydrodynamic simulations using the state-of-the-art adaptive mesh refinement code Octo-Tiger. The initially corotating binary undergoes interaction and mass transfer, resulting in the accumulation of mass around the companion and its subsequent loss through the second Lagrangian point (L2). The companion eventually plunges into the envelope of the primary, leading to its spin-up and subsequent merger with the helium core. We examine the internal structural properties of the post-merger star, as well as the merger environment and the outflow driven by the merger. Our findings reveal the ejection of approximately \(\sim 0.6\ M_{\odot}\) of material in an asymmetric and somewhat bipolar outflow. We import the post-merger stellar structure into the MESA stellar evolution code to model its long-term nuclear evolution. In certain cases, the post-merger star exhibits persistent rapid equatorial surface rotation as it evolves in the H-R diagram towards the observed location of Betelgeuse. These cases demonstrate surface rotation velocities of a similar magnitude to those observed in Betelgeuse, along with a chemical composition resembling that of Betelgeuse. In other cases, efficient rotationally-induced mixing leads to slower surface rotation. This pioneering study aims to model stellar mergers across critical timescales, encompassing dynamical, thermal, and nuclear evolutionary stages. binaries: close -- stars: evolution -- hydrodynamics -- methods: numerical 0000-0002-4181-8088]Sagiv Shiber 0000-0002-4880-7886]Emmanouil Chatzopoulos 0000-0002-4880-0888]Bradley Munson 0000-0002-4883-0888]Juhan Frank ## 1 Introduction It has been firmly established that the majority of massive stars exist within binary systems (Mason et al., 2009; Dunstall et al., 2015). Furthermore, a significant portion of these systems undergo binary interactions at some point during their evolution (Sana et al., 2012; de Mink et al., 2013; Renzo et al., 2019). In extreme cases, these interactions can result in the complete merging of the two stars (Bonnell & Bate, 2005; de Mink et al., 2014). Depending on the initial conditions that trigger a stellar merger, there can be various outcomes (Podsiadlowski et al., 2006; Schneider et al., 2016). The merger event may be accompanied by a short-lived, relatively faint electromagnetic transient referred to as a "merge-burst" (Soker & Tylenda, 2006), as well as mass loss occurring along the orbital plane of the binary progenitor system (Bodenheimer & Taam, 1984; Taam & Bodenheimer, 1991; Terman et al., 1994). If the secondary star gets past the initial common-envelope (CE) phase and spirals inward into the extended envelope of a primary star that is evolved past the main-sequence (MS) with a helium (He) core, the outcome will depend on which component has the highest central density. If mass transfer started in Case B, when the primary had already developed a compact core, the secondary may experience tidal disruption outside the core of the primary and subsequently merge with it. If no hydrogen-rich material is mixed into the primary core, we have what we may classify as a Case B-_quiet_ merger. 
The extent to which the disrupted secondary material mixes with the core of the primary, known as the penetration depth of the stream (Ivanova et al., 2002; Ivanova & Podsiadlowski, 2002), determines whether there is sufficient fresh fuel to rejuvenate the nuclear evolution of the secondary star, leading to a new phase of core hydrogen burning (Ivanova, 2002; Ivanova & Podsiadlowski, 2003). Rejuvenated post-merger stars are likely to evolve into compact, hot blue supergiant (BSG) supernova (SN) progenitors, similar to Sk - 69\({}^{\circ}\) 202, the progenitor of SN1987A (Menon and Heger, 2017; Menon et al., 2019). Recent studies have demonstrated that the degree of core penetration and rejuvenation of the primary star strongly depends on the original binary mass ratio, denoted as \(q=M_{2}/M_{1}\), where \(M_{1}\) and \(M_{2}\) are the masses of the primary star (initially the donor) and the secondary star (initially the accretor) respectively. Mergers following Case B mass transfer, in systems where \(q\lesssim 0.25\) tend to yield a quiet merger and evolve towards the red supergiant (RSG) phase (Podsiadlowski et al., 1990; Ivanova et al., 2002). Even in the case of a Case B-quiet merger scenario, the post-merger star's evolution is unlikely to resemble that of a single massive star. Depending on the initial period (or binary separation) of the system when the CE phase begins, the secondary star may deposit significant amounts of angular momentum into the envelope of the primary star, resulting in spin-up and enhanced rotation-induced chemical mixing (Wheeler et al., 2017). These effects can give rise to a rapidly rotating RSG with elevated surface nitrogen (\({}^{14}\)N) abundances. While a few observations of evolved, lower-mass red giant stars support this notion (Costa et al., 2015), the most intriguing case is that of Betelgeuse (\(\alpha\) Orionis), a massive, evolved RSG star exhibiting evidence of fast surface equatorial rotation (Uitenbroek et al., 1998; Gilliland and Dupree, 1996; Wheeler and Chatzopoulos, 2023). Recent measurements from the _Atacama Large Millimeter Array_ (_ALMA_) indicate a surface equatorial rotational velocity of 5-15 km s\({}^{-1}\) for Betelgeuse (Kervella et al., 2018), while measurements of its surface abundances reveal nitrogen-to-carbon (N/C) and nitrogen-to-oxygen (N/O) ratios of 2.9 and 0.6, respectively (Lambert et al., 1984) far in excess of standard solar values. Recent theoretical work has shown that early Case B mergers with mass ratio in the range \(0.07<q<0.25\), primary mass 15-16\(M_{\odot}\), secondary masses 1-4 \(M_{\odot}\) where the CE phase is triggered during the expansion of the primary to the RSG stage (the "Hertzsprung gap" crossing phase), at primary radii 50-300 \(R_{\odot}\) can reproduce the current observed state of Betelgeuse, indicating that a past merger is a viable explanation for this extraordinary star (Chatzopoulos et al., 2020). While Chatzopoulos et al. (2020) present a compelling argument for explaining some of the observed characteristics of Betelgeuse, their approach is constrained by several simplifying assumptions that rely on basic approaches, including a spherically-symmetric 1D treatment and an analytical expression for the specific angular momentum deposition into the envelope of the primary star. 
These assumptions assume that the structure of the secondary star remains unaffected during the common-envelope (CE) in-spiral phase and that the in-spiral time-scale of the secondary is significantly shorter than the thermal adjustment time-scale of the primary's envelope. Additionally, there is some disagreement regarding whether the high measured rotational velocity of Betelgeuse reflects actual surface rotation or other phenomena, such as large-scale convective plume motion (Lopez Ariste et al., 2018; Jadlovsky et al., 2023). Furthermore, certain properties of the star, such as the enhanced surface abundance of \({}^{14}\)N, remain puzzling within the context of the evolution of single massive stars. The objective of this research is to comprehensively and consistently investigate the merger scenario for Betelgeuse and, more broadly, rapidly-spinning red supergiant (RSG) stars, using three-dimensional (3D) hydrodynamics merger simulations. In particular, we make use of the updated and bench-marked 3D adaptive mesh refinement (AMR) Octo-Tiger(Kadam et al., 2018; Marcello et al., 2021) code to model the CE in-spiral and final secondary-primary core merger of a \(q=0.25\), 16 \(M_{\odot}\) primary right after the end of its core H-burning phase and expanded at a radius of \(\sim 50\)\(R_{\odot}\) with a 4 \(M_{\odot}\) main-sequence (MS) secondary. Additionally, we track the 3D evolution of the post-merger star for several orbits after the merger to obtain a realistic structure of the post-merger star, including the distribution of its internal angular momentum. We also monitor the expansion of dynamically unbound mass loss caused by the merger. The resulting 3D post-merger structure is then spherically averaged and incorporated into the 1D stellar evolution code known as _Modules for Experiments in Stellar Astrophysics_(MESA; Paxton et al., 2011, 2013, 2015, 2018, 2019). This integration enables us to explore the long-term nuclear evolution of the post-merger star until it reaches the RSG phase and compare our findings with observations of Betelgeuse. To the best of our knowledge, this is the first study in the literature that self-consistently simulates the 3D dynamics of the CE and merger phase, as well as the long-term nuclear evolution of the post-merger star in 1D. Given the extensive simulation box used to track the expansion of the mass loss induced by the merger, and the high resolution required to accurately depict the structure of the secondary star and ensure its structural stability, this simulation is the first in its kind. It took nearly an entire calendar year to complete and involved multiple successive compute time allocations. The investment of time was justified, as we were able to confirm that, under certain conditions, "quiet" high mass ratio mergers of this nature can indeed produce RSGs with properties similar to those observed in Betelgeuse. Moreover, the significant mass loss that can occur during the merger process can lead to non-spherically symmetric circumstellar (CS) environments with properties and geometries reminiscent of those observed in other systems, such as SN1987A (Plait et al., 1995). In Section 2, we provide a comprehensive description of our numerical setup and the initial conditions of the binary system in detail. 
Subsequently, in Section 3, we concisely outline the outcomes derived from our three-dimensional (3D) dynamical merger simulations employing the Octo-Tiger model, as well as the long-term nuclear evolution simulations conducted in one dimension (1D) using the MESA code. Lastly, in Section 4, we succinctly summarize the findings of our study and engage in a thorough discussion of the principal conclusions drawn, along with their implications for the evolution of massive stars following a merger event. ## 2 Numerical Set-Up Our primary and secondary models are constructed based on one-dimensional profiles that were simultaneously evolved using the MESA stellar evolution code from the Zero Age Main Sequence (ZAMS). The primary star originates with a ZAMS mass of \(16M_{\odot}\), while the secondary star begins with a ZAMS mass of \(4M_{\odot}\). These masses align with the \(q=0.25\) binary mergers examined in Chatzopoulos et al. (2020), which have been identified as fitting Betelgeuse's inferred surface rotation with surface velocities ranging from \(4-12\) km/sec. These velocities persist for time spans of \(7-8\times 10^{5}\) yr. Both stellar models were initialized assuming solar metallicity and utilized standard mass-loss prescriptions appropriate for their respective masses and evolutionary phases (de Jager et al., 1988; Vink et al., 2001). Following the methodology employed in Chatzopoulos et al. (2020), our models were terminated after the primary star completed its evolution off the main sequence and traversed the Hertzsprung gap (HG) phase. During this transitional stage, the primary's envelope expands on a thermal time scale, progressing towards the RSG state. The primary star reaches a radius of \(R_{1}=50R_{\odot}\), which is smaller in comparison to the primary radius ranges investigated in Chatzopoulos et al. (2020) (typically \(>200R_{\odot}\)). This deliberate choice was made to minimize computational time, as a larger envelope would result in longer dynamical timescales (proportional to \(\sim R^{3/2}\)) and require a higher cell count for adequate resolution. At this stage, the age of the binary system is \(1.11\times 10^{7}\) yr, and the primary star's mass has decreased to \(M_{1}=15.48M_{\odot}\) due to wind-driven mass loss. The primary star exhibits a luminosity of \(L_{1}=55,000L_{\odot}\) and an effective temperature of \(T_{\rm eff,1}=12,600\) K. The secondary star, on the other hand, remains in the main sequence phase at this age, with a radius of \(R_{2}=2.43\)\(R_{\odot}\). To incorporate the MESA structures into Octo-Tiger's grid, we first approximate the internal structures of the two stars by fitting them to single polytropes. We determine that polytropic indices of \(n_{1}=4.1\) and \(n_{2}=3.4\) provide the best fit for the primary star (referred to as star 1) and the secondary star (referred to as star 2) respectively. Subsequently, we utilize Octo-Tiger's self-consistent field (SCF) technique (Kadam et al., 2018) to construct an equilibrium binary system with synchronous rotation. The SCF method initializes each star as a polytrope and iterates until the desired level of equilibrium is attained (for further details, refer to Marcello et al., 2021). The level of equilibrium is quantified by deviations from the virial relation. Specifically, the SCF produces models in which the virial error, defined as \(\text{VE}=\left|\left(2E_{k}+E_{g}+2E_{i}\right)/E_{g}\right|\), is as small as \(10^{-3}\). 
Here \(E_{k}\), \(E_{g}\), and \(E_{i}\), are the total kinetic, gravitational and internal (thermal) energies of the binary, respectively. The SCF process ensures the conservation of mass for each star, the ratio of core mass to the total star mass (particularly relevant in cases where there is a difference in mean molecular weight or composition between the core and the envelope, as seen in star 1), as well as the initial Roche Lobe (RL) filling factor. Additionally, the structures generated by the SCF technique incorporate the effects of tidal distortion and rotational flattening of the stars as intentional design features. In our simulation, we assign the masses to be identical to those specified in the original MESA 1D profiles. Additionally, we opt for a RL filling factor of 1, indicating that the primary star precisely occupies its RL. These choices establish the initial separation at \(99.6R_{\odot}\). Furthermore, to thoroughly investigate the structure, kinematics, and energetics of merger-driven outflows, we utilize an expansive simulation box with dimensions approximately 40 times the initial orbital separation, resulting in a box size of \(\left(4000R_{\odot}\right)^{3}\). Our AMR grid has 10 levels of refinement that are controlled by a density refinement criterion, and two extra refinement levels for the secondary star. These two extra refinements are done by exploiting Octo-Tiger's ability to define different species, which we use to differentiate between primary and secondary star densities. Specifically, we refine by two extra levels cells in which the secondary mass fraction exceeds the value of half. The two extra refinement levels are required for the correct computation of the gravitational force onto the sec ondary star's cells (see subsection 2.2 for more details). This yields a minimal cell size of \((\Delta x)_{\rm min}^{\rm primary}=0.49~{}R_{\odot}\) and \((\Delta x)_{\rm min}^{\rm secondary}=0.12~{}R_{\odot}\), for the primary and secondary stars, respectively. In Figure 1 we show the initial density (left) and pressure (right) maps of the binary that were produced by the SCF. We plot density and pressure at the equatorial plane. The right panel also illustrates the adaptive mesh in our simulation (gray and black squares). Each such a square represents a subgrid consisting of \(8^{3}=512\) cells. We resolve \(\sim 200\) cells across the primary star's diameter (along the x axis), and \(\sim 70\) across the secondary star's diameter. The left panel presents contours of effective potential which is the sum of the gravitational and rotational potentials, demonstrating the teardrop shape of the primary star occupying its RL. The overall initial cell count in the grid is \(6.6\times 10^{6}\). During the SCF iterations, a polytropic equation of state (EoS) is assumed, with each star having a different polytropic index. However, once the simulation enters the evolution phase after initialization, an ideal gas equation of state is utilized. Radiation pressure is disregarded, and the temperature is calculated using the ideal gas law, taking into consideration the mean molecular weight of each cell. This approach may result in an overestimation of the temperature, as the inclusion of radiation pressure in the EoS would have likely resulted in lower temperatures. In the future, a revised version of Octo-Tiger will be employed, incorporating a more comprehensive stellar equation of state (EoS) to effectively address temperature inconsistencies. 
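As a simple illustration of this temperature assignment (not Octo-Tiger's actual implementation), the sketch below evaluates the ideal-gas relation for a single cell, given its pressure, density, and mean molecular weight; the numerical values are placeholders chosen only for demonstration.

```python
# Physical constants (cgs)
K_B = 1.380649e-16    # Boltzmann constant, erg / K
M_H = 1.6735575e-24   # hydrogen-atom mass, g

def ideal_gas_temperature(pressure, density, mu):
    """Cell temperature from the ideal-gas law P = rho * k_B * T / (mu * m_H),
    ignoring radiation pressure as described in the text."""
    return pressure * mu * M_H / (density * K_B)

# Illustrative envelope-cell values in cgs; mu ~ 0.63 for the primary's envelope.
print(ideal_gas_temperature(pressure=1.0e10, density=1.0e-6, mu=0.63))
```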
It is crucial to emphasize that Octo-Tiger handles the evolution of inertial frame quantities within a rotating grid (for a detailed and comprehensive code description, refer to Marcello et al.2021). This implementation involves a grid that constantly rotates at the same frequency as the initial binary frequency. In the rotating frame, the stars begin without initial velocities, while the dilute gas surrounding the stars does possess velocities. This deliberate approach significantly diminishes the impact of viscosity effects that could potentially hinder the accurate progression of the binary system. The upcoming two subsections focus on individual examinations of the initial structures and dynamical stability of each star. It is demonstrated that, despite inherent limitations in resolution and computational resources, we successfully replicate the structure of both stars with a commendable level of accuracy. Notably, our simulations closely replicate the primary star's envelope as per the original MESA structure. This enables us to investigate the impact of envelope spin-up, which stands as a significant objective of this study. ### Primary star's structure and stability As previously mentioned, our primary star initiates the Octo-Tiger binary simulation with a mass of \(15.48M_{\odot}\). Leveraging Octo-Tiger's capability to track different species, we initialize the primary star with two distinct species: one for the core and one for the envelope. To determine the core mass, we locate the mass coordinate in the MESA model where the abundance of Helium falls below that of Hydrogen, which corresponds to \(M_{1,\rm core}=6.04M_{\odot}\). Subsequently, we set the molecular weight of the core to the average molecular weight computed over the mass coordinate \(m\leq M_{1,\rm core}\), resulting in \(\mu_{1,\rm core}=1.21\). For the envelope, we calculate the average molecular weight over the mass coordinate \(m>M_{1,\rm core}\), yielding \(\mu_{1,\rm env}=0.63\). It is important to note that various definitions exist for the core of a star. However, we adopt a definition that primarily captures the characteristics of the envelope rather than the core, as our primary focus lies in investigating the envelope of the primary star and its acquisition of angular momentum through the merger process. In Figure 2 we compare the primary's structure in Octo-Tiger (thick lines) with the MESA model (thin lines). This figure is similar to Figure 1 of Lau et al. (2022), where they plot composition (dashed and dash-dotted lines) and density (blue solid lines) as a function of mass coordinate, but we also plot the initial composition of our 3D model as well as the radius vs mass coordinates (red dotted lines). The density and composition in our simulation align well with the MESA values in the outermost 5 solar masses of the star. There is also a good agreement in an intermediate region spanning from \(6M_{\odot}\) to \(10M_{\odot}\). However, due to our approximation of the star as a polytrope, we face a limitation in resolving the high-density regions near the core. This limitation arises from numerical constraints and the considerable computational cost associated with our 3D simulation. The dense and compact core cannot be adequately resolved without encountering small time steps that would render the merger simulation unfeasible within a reasonable timeframe. 
As a consequence of this resolution limit, in Octo-Tiger the "core" extends to approximately \(5~{}R_{\odot}\), which is approximately \(2.7\) times larger than the core in the MESA model. In alternative studies (such as Lau et al. 2022), the dense core is replaced with a point particle to circumvent these challenges. However, the use of a point particle introduces a softening length that may influence the evolution when the secondary star approaches the core particle closely, and it remains uncertain whether a merger could occur under such circumstances. Nevertheless, ongoing work is being conducted to implement a point particle treatment within the Octo-Tiger code, which will be beneficial for future investigations. In summary, the structure of our primary star adequately represents the corresponding MESA structure in the regions of primary interest for this study. This enables us to examine the post-merger spin-up and envelope composition. Figure 3 illustrates the comparison between our simulation and the MESA model, depicting the initial primary star's density (left), pressure (center), and temperature (right) as functions of radius. Because the primary star fills its RL, it has a teardrop shape and is more extended towards the binary axis, \(x_{\rm Binary}\). Therefore we show three line plots in each panel: along the line that goes from the center of the primary toward the center of the secondary star (\(\parallel x_{\rm binary}\); blue circles), perpendicular to this binary axis (\(\perp x_{\rm binary}\); orange x's), as well as a mass-averaged profile ('Avg.'; green pentagons). The figure demonstrates a close match between the two models in most spatial regions. In order to assess the stability of the primary star, we conducted single star simulations using polytropic models with an index of \(n=4.1\). Our investigations revealed that resolution plays a crucial role in maintaining stability, and we determined that a minimum resolution of 128 cells across the diameter of the primary star is necessary to ensure stability over an extended period (see Appendix A for further details). At this resolution, the star experiences only a slight collapse after approximately 140 dynamical orbits. Although this collapse may potentially reduce mass transfer during the binary simulation and consequently prolong the time to merger, we mitigate this effect by employing a higher resolution in the binary simulation. In fact, we use 3/2 times as many cells across the diameter of the primary star in the binary simulation, thereby minimizing the impact of this minor collapse and maintaining the star's stability throughout the binary evolution. Furthermore, due to the limited lifetime of a star with coarser grid resolution, it is not feasible to run a binary simulation using a coarser grid, as the star would collapse Figure 1: Density (left) and pressure (right) slices at the equatorial plane at \(t=0\) of our Octo-Tiger simulation. The right panel also shows the AMR grid and the extra refinement placed on the secondary star. The left panel also presents contours of effective potential (gravitational plus rotational). Both panels zoom in on the central 1 AU of the grid (\(\sim 1/20\) of the simulation domain size) Figure 2: Comparison of the primary star structure when imported to the 3D grid (thick lines) with the original MESA model (thin lines). 
Density (blue solid lines), radius (red dotted lines), Hydrogen mass fraction (orange dashed lines), and Helium mass fraction (green dashed-dotted lines) are plotted as a function of the mass coordinate \(m\). The adequate representation of the envelope (\(m=6-15.5\;M_{\odot}\)) enables us to study the post-merger spin-up and envelope composition before the merger could take place. Conversely, a finer grid simulation would incur a prohibitively high computational cost. Consequently, we were unable to conduct a comprehensive resolution study for the specific binary merger simulation presented in this paper. ### Secondary star's structure In accordance with the SCF technique, the size of the secondary star is determined by its RL filling factor. However, it is important to note that the resolution around the secondary star can impact its size and the SCF may not converge for certain small filling factor values if the resolution is insufficient. Insufficient resolution around the secondary star can also lead to significant fluctuations in the calculated gravitational forces, which can greatly hinder the conservation of angular momentum, and can cause the stars to wiggle around the center of mass. To address this issue, we apply two additional refinement levels around the secondary star. Additionally, we select the smallest filling factor that still converges at this resolution. In Figure 4 we plot density (left), pressure (center), and temperature (right) as a function of radius of the resulted secondary star structure and we compare it to the MESA model. The secondary star obtained by the SCF is larger than the corresponding MESA model but remains within a reasonable range for a main sequence star. Increasing the resolution around the secondary star could have resulted in a smaller star size. However, it would also slow-down the calculation by at least a factor two, causing the simulation to become too expensive to run. Since the drag force depends on the accretion radius, it will not be affected by the actual size of the secondary star unless the orbital separation becomes very small. However, by that stage, ablation may have already changed the effective size of the secondary star. Nonetheless, a smaller star might accrete less mass and more artificial driving by extraction of angular momentum would be required for the dynamical plunge-in to occur. Overall, resolving the secondary star is already an improvement from previous studies which utilize the secondary star as a point-particle that does not have an actual size (except maybe a numerical size like a softening length). While the secondary star's central density is slightly smaller than in the MESA model, the overall structure, characterized by an \(n=3.4\) polytrope, is similar. Importantly, this structure exhibits greater stability compared to that of the primary star. Lastly, we assign a molecular weight of \(\mu_{2}=0.62\) to the secondary star, based on the average molecular weight of the MESA model. ## 3 Results In this section, we present the findings from our simulations of the merger and post-merger evolution of a 16+4 \(M_{\odot}\) binary system. Our analysis focuses on three key aspects. First, in subsection 3.1, we examine the binary's evolution leading up to the merger. This includes the behavior of the binary system as it approaches the merging stage. Second, in subsection 3.2, we investigate the amount of mass that becomes unbound due to the binary interaction. 
We analyze the properties of the mass loss resulting from the merger, including the morphology, geometry, and dynamics of the outflows. Finally, in subsection 3.3, we explore the characteristics of the post-merger phase. We perform long-term nuclear evolution simulations using the MESA code, utilizing the averaged three-dimensional post-merger structure from Octo-Tiger to generate a one-dimensional profile. This allows us to study the properties of the post-merger system in terms of its nuclear evolution. By examining these three aspects, we gain a comprehensive understanding of the merger and post-merger processes in the 16+4 \(M_{\odot}\) merger model. Figure 3: Density (left), pressure, (center), and temperature (right) as a function of radius of the initial primary star in our simulation. As the primary star fills its Roche Lobe, it is not completely spherically symmetric, and we therefore show lines that goes along the binary axis, \(x_{\rm Binary}\) (lines that go from the center of the primary to the direction of the secondary star center; blue circles), and in perpendicular to the binary axis (orange x’s) as well as mass averaged profiles (green pentagons). We also plot the 1D MESA profile (solid red line), showing again that the envelope structure in our simulation is well matched to the MESA model ### Orbital Evolution To accelerate the merger process and minimize computational time, we employ a driving mechanism in the binary system. Following the approach outlined in Marcello et al. (2021), we extract angular momentum from the grid at a constant rate of 1% per initial orbit. This manipulation leads to a reduction in the orbital separation and facilitates a more significant mass transfer between the stars. Without this driving mechanism, the system would undergo numerous orbits, potentially several hundreds, before a merger occurs. Given our current resources and time constraints, simulating such a prolonged evolution would be impractical. In Figure 5 we plot the conservation of mass (left), energy (center), and angular momentum (right) in the Octo-Tiger merger simulation. In this plot, we examine the changes in total mass, total energy, and total z-angular momentum over time, denoted as \(\Delta M\), \(\Delta E\), and \(\Delta J_{z}\) respectively. These quantities are calculated by comparing the current values to their initial values in the grid: \(\Delta M=M(t)+M_{\rm out}(t)-M_{0}\), \(\Delta E=E(t)+E_{\rm out}(t)-E_{0}\), and \(\Delta J_{z}=J_{z}(t)-J_{z}^{0}\). Here, \(M_{0}\), \(E_{0}\), and \(J_{z}^{0}\) represent the initial total mass, total energy, and total z-angular momentum in the grid. Additionally, \(M_{\rm out}(t)\) and \(E_{\rm out}(t)\) account for the mass and energy that have left the grid, respectively. For energy and angular momentum, we assess their conservation only after discontinuing the driving mechanism at \(t=46P_{0}\). Our findings reveal that mass and energy are conserved at a precision level of \(10^{-6}\). However, due to numerical viscosity-induced torques during and after the merger, the total z-angular momentum increases by 5%. Furthermore, as mass leaves the grid, the angular momentum decreases by 20%. Although quantifying the exact amount of z-angular momentum leaving the grid is challenging, our results indicate that angular momentum is still conserved at a level of approximately 5%. As described in Marcello et al. (2021) we use an iterative post-processing method to identify the cells that belong to each star. 
This diagnostics scheme allows us to calculate the system's orbital properties, like the orbital separation and orbital angular momentum, as well as the mass, energy, and spin of each star. We improved the technique: instead of iterating over a fixed number of iterations, we iterate until the stars' masses (\(M_{1}\) and \(M_{2}\)) and the orbital separation (\(a\)) converge, i.e., until the values in the current iteration are within \(10^{-5}\) of the values in the previous one. Note that when the system approaches the final merger, it becomes increasingly difficult to differentiate between the two stars and the diagnostics method cannot be reliably used. We typically find that this occurs when the binary frequency, as calculated from the center of mass \((x^{1,2},y^{1,2},z^{1,2})\) and center-of-mass velocity \((v_{x}^{1,2},v_{y}^{1,2},v_{z}^{1,2})\) of each star, \(\Omega_{\rm orb}=j_{z,\rm orb}/a^{2}=[(x^{2}-x^{1})(v_{y}^{2}-v_{y}^{1})-(y^{2}-y^{1})(v_{x}^{2}-v_{x}^{1})]/a^{2}\), falls below half of the nominal Keplerian frequency, \(\Omega_{\rm kep}=\sqrt{G(M_{1}+M_{2})/a^{3}}\). This is to be expected as the donor's envelope is tidally disrupted, and much of its mass increasingly lags behind its center of mass. Therefore we may define the binary merging time as the time when the binary frequency falls below half of the nominal Keplerian frequency (a minimal sketch of this criterion is given below). In Figure 6 we plot the orbital separation, orbital angular momentum, mass losses (\(\Delta M=M(t)-M(0)\)), and mass transfer rates as a function of time (in initial orbital period units, \(P_{0}=26\) days). All these quantities were calculated using the diagnostics scheme discussed in the previous paragraph. The mass transfer rates are smoothed by the Savitzky-Golay filter (Savitzky and Golay, 1964) with a window size of \(2P_{0}\). As anticipated, the presence of driving in the system leads to an initial decrease in the orbital separation (panel a) and orbital angular momentum (panel b) that follows a nearly linear trend. At this stage, the primary star overflows its RL, resulting in a high mass loss rate of \(10^{-4}-5\times 10^{-1}~M_{\odot}~{\rm yr}^{-1}\) (panel d, indicated by the dashed orange line). However, only a small portion of Figure 4: Density (left), pressure (center), and temperature (right) as a function of radius of the initial secondary star in our simulation. We plot lines along the binary axis, \(x_{\rm Binary}\) (lines that go from the center of the secondary star toward the center of the primary star; blue circles), as well as the 1D MESA profile (solid red line) for comparison Figure 5: Conservation of mass, energy, and angular momentum. Note that in the angular momentum plot we do not take into account angular momentum losses due to gas that flows outside of the simulation domain. Figure 6: Binary diagnostics plots as a function of time. (a) Orbital separation; (b) Orbital angular momentum; (c) mass losses, where \(\Delta M=M(t)-M(0)\); and (d) mass transfer rates. The primary star (star 1) loses mass (dashed orange lines); however, the secondary star accretes only a fraction of it (dashed-dotted green lines), and the rest leaks through L2 and leaves the system (blue solid lines). As the mass transfer is otherwise stable, we continuously extract angular momentum at a rate of \(1\%\) per initial orbit to expedite the merger (see discussion on the stability of mass transfer in appendix B). 
The system merges after 45.96 initial periods (\(P_{0}=26\) days) this lost mass is accreted by the secondary star (green dash-dotted line). Surprisingly, the system experiences a net loss of mass (blue solid line) rather than a net gain by the secondary. As a result, the mass transfer rate does not increase by much, necessitating continuous driving to bring the system into contact and trigger the merger. This continuous driving causes the system's center of mass to spiral-out of the center of the grid to a distance of \(\sim 15R_{\odot}\) by the time of the merger. In the vertical direction, the system's center of mass is not affected by this driving and it moves less than the size of one grid cell. The mass loss from the primary star intensifies as the orbital separation decreases and its RL shrinks. The accretion rate onto the secondary star increases over time as well, however, it settles down on a level of \(\sim 10^{-2}\ M_{\odot}\ \mathrm{yr}^{-1}\), probably because the secondary star cannot accommodate a faster accretion, and the mass lost by the primary star escapes through L2 rather than accreted. These accretion rates are consistent with other studies that conduct 3D hydrodynamic simulations of CEE, although with a red giant and less massive primary star, where they find accretion rates onto a MS secondary star of \(10^{-3}-10^{1}\ M_{\odot}\ \mathrm{yr}^{-1}\)(Chamandy et al., 2018; Shiber et al., 2019; Lopez-Camara et al., 2022). A MS star that accretes mass at a high rate might launch jets (Shiber et al., 2016), which can help in unbinding the envelope and perhaps even postpone the fast plunge-in (grazing envelope evolution; Soker, 2015, Shiber and Soker, 2018, Shiber et al., 2019). However, we do not include jets in our simulation and disregard the jets' energy which can be approximated by taking a small factor \(\eta\) of the accretion energy onto the secondary star, \(E_{\mathrm{jets}}=\eta GM_{2}\Delta M_{2}/R_{2}\simeq\eta\times 2\times 10^{47}\) ergs. We also note that jets may act as a positive feedback that increases the accretion rate onto the companion, either by removing energy and decreasing the pressure in the vicinity of the accreting star, or by pushing mass towards the orbital plane where most of the accretion takes place. If this positive feedback is substantial, the increase in the accretion rate would result in higher jets' energy and the role of jets can become more important. During the later stages of the simulation, when the secondary star penetrates the envelope of the primary star (approximately at \(t=30P_{0}\)), the hydrodynamic and gravitational drag forces cause the secondary star to spiral further into the primary's envelope. Eventually, it merges with the helium (He) core of the primary star, which occurs within a timescale of approximately \(15P_{0}\) or around 1 year. By this point, a total mass of approximately \(\lesssim 1.3M_{\odot}\) is ejected from the system (panel (c)), where some of it actually originated from the secondary star. This mass carries an energy of \(6.5\times 10^{46}\) ergs, and z-angular momentum of \(0.3J_{\mathrm{orb}}^{0}\), where \(J_{\mathrm{orb}}^{0}\), is the initial orbital angular momentum and equals \(8.5\times 10^{53}\) ergs s. However, not all of this mass is necessarily unbound or remains unbound. 
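For concreteness, the merging-time criterion defined above reduces to a short computation; the sketch below is a schematic re-implementation, not the diagnostics code itself.

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def merger_criterion(r1, v1, r2, v2, m1, m2):
    """Compare the binary frequency computed from the two stars' centers of
    mass with half the nominal Keplerian frequency at the same separation.
    r1, v1, r2, v2 : 3-vectors of position [cm] and velocity [cm/s]
    m1, m2         : stellar masses [g]"""
    dr = np.asarray(r2, float) - np.asarray(r1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    a = np.linalg.norm(dr)                 # orbital separation
    jz = dr[0]*dv[1] - dr[1]*dv[0]         # z component of the specific orbital angular momentum
    omega_orb = jz / a**2
    omega_kep = np.sqrt(G*(m1 + m2)/a**3)
    merged = omega_orb < 0.5*omega_kep     # merging-time definition used in the text
    return omega_orb, omega_kep, merged
```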
In the subsequent subsection (subsection 3.2), we will demonstrate that approximately half of this ejected mass becomes unbound during the merger process and ultimately flows out of the computational grid, contributing to the formation of a merger-burst transient event. In Figures 7 and 8 we show density map slices along the equatorial (orbital) and meridional planes, respectively, at nine different times, as denoted at the lower-left corner of each panel: \(2.5P_{0}\), \(17.5P_{0}\), \(25.0P_{0}\), \(30.0P_{0}\), \(35.0P_{0}\), \(40.0P_{0}\), \(44.2P_{0}\), \(45.9P_{0}\), and \(47.0P_{0}\). We also plot the velocity field scaled by velocity magnitude (in red; key shown in the upper right corner) to highlight details of the flow\({}^{1}\). The plotted velocities in the first six panels (i.e., \(t\leq 30P_{0}\)) are measured with respect to a frame that rotates at the momentary orbital frequency (i.e., a frame of reference that rotates with the binary). In the last three panels (\(t>30P_{0}\); closer to the merger time and afterwards), the plotted velocities are measured with respect to a frame that rotates at the initial orbital frequency (i.e., a frame of reference that rotates with the grid). The binary, and as a consequence the grid itself, rotate in the counter-clockwise direction. Each panel is centered around the system's center of mass, and the green cross symbol denotes the location of the primary star's center of mass. Footnote 1: Movies of the simulation can be obtained via this link These plots illustrate the accumulation of mass around the secondary star and the system's overall mass loss through L2. Initially, mass flow originating from the primary star begins to envelop the secondary star, while mass escaping through L2 forms a spiral-shaped arm (\(t=0-30P_{0}\)). As the secondary star spirals inward and approaches the envelope of the primary star, a common envelope configuration is established (\(t=30P_{0}\)). Over time, hydrodynamical drag and gravitational forces act upon the system, driving the secondary star deeper into the envelope of the primary star, until the secondary tidally disrupts the primary's helium (He) core at \(t=45.96P_{0}\). Subsequently, the secondary plunges into and mixes with the remnant helium core of the primary over the following couple of \(P_{0}\). However, this final evolution is a consequence of the choices we had to make for numerical reasons, which resulted in a secondary core denser than the primary core. Since, according to the MESA models, the central density of the primary star is \(\sim 30\) times higher than that of the secondary star, in a more realistic evolution it would be the primary core that disrupts the secondary. Therefore we are not able to say anything definite about the potential rejuvenation of the primary core. Our finding that a substantial portion of the mass leaving the primary star is expelled through the L2 Lagrange point and is subsequently lost from the system, instead of being accreted onto the secondary star, could drastically affect the stability of the mass transfer. In its simplest form, conservative mass transfer dictates that mass passing from the more massive to the less massive star leads to dynamically unstable mass transfer and to the formation of a CE. A more sophisticated analysis includes the response of each star to its mass loss or mass gain, respectively. 
In our specific case, the primary star possesses a predominantly radiative envelope that we approximate using a polytropic index of \(n_{1}=4.1\). Consequently, its adiabatic mass-radius exponent, denoted as \(\zeta_{\rm ad}\) and calculated as Figure 7: Equatorial density slices at nine different times throughout the evolution up to the dynamical merger phase: \(2.5P_{0}\), \(17.5P_{0}\), \(25.0P_{0}\), \(30.0P_{0}\), \(35.0P_{0}\), \(40.0P_{0}\), \(44.2P_{0}\), \(45.9P_{0}\), and \(47.0P_{0}\). We also plot the velocity field scaled by velocity magnitude (in red; key shown on the upper right corner) to highlight details of the flow. Velocities at the six first panels are measured in the frame rotating with momentary binary frequency, while in the three last panels they are measured in the frame rotating with the _initial_ binary frequency. The panels are centered around the system’s center of mass, and the green cross symbol denotes the location of the primary’s star center of mass. The system rotates counter clock-wise \((d\log R/d\log M)_{\rm ad}\), is approximately 2.82 and the primary radius will shrink when it looses mass. This needs to be compared with the mass-radius exponent associated with the Roche lobe, denoted as \(\zeta_{\rm RL}\), which is estimated to be around \(2.13(M_{1}/M_{2})-1.67\approx 6.5\) according to Tout et al. (1997) for conservative mass transfer. Therefore, based on this criterion, the RL will shrink faster than the primary star, and the mass transfer between the stars should be dynamically unstable. However, our simulations reveal that the assumption that all the mass lost from the donor star is accreted by the secondary does not hold. This phenomenon can greatly enhance the stability of the mass transfer process, depending on the fraction of mass accreted, \(f\), as well as on angular momentum losses through L2. A careful calculation of the stability of mass transfer for non-conservative cases shows that in the absence of angular momentum losses, \(f<f_{\rm max}=0.53\) would be dynamically stable. Angular momentum losses act to destabilize the mass transfer and thus effectively reduces \(f_{\rm max}\) (for a detailed calculation we refer to appendix B). It is therefore plausible that a mass transfer on a thermal timescale is expected to occur under these conditions. Once the primary's envelope expands further and reaches the size of the orbit, the mass transfer will transition into a dynamically unstable regime. Figure 8: Meridional density slices at nine different times throughout the evolution up to the dynamical merger phase. Times and symbols are like in Figure 7 Moreover, even if the conditions are such that the mass transfer is dynamically unstable, it could be that the mass transfer rate will only slowly increase, implying that numerous orbits are still required to simulate before the actual merger will take place. It is not trivial to know a priori the amount of driving required to achieve a specific mass transfer rate nor to predict the subsequent behavior of the mass transfer rate once the driving is stopped. In fact we tried to stop the driving at several instances during the evolution, finding that the following evolution would be still too long to simulate. In practice we applied the driving mechanism continuously until the system merged. 
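The conservative-case comparison above can be reproduced directly from the quoted numbers; in the sketch below the secondary mass is taken at its nominal \(4M_{\odot}\), which is an assumption for illustration.

```python
# Mass-transfer stability estimate for the conservative case, using the values
# quoted in the text.
m1, m2 = 15.48, 4.0            # donor and accretor masses [M_sun] (m2 assumed nominal)
zeta_ad = 2.82                 # adiabatic mass-radius exponent of the n = 4.1 donor
zeta_RL = 2.13*(m1/m2) - 1.67  # Roche-lobe exponent (Tout et al. 1997), ~6.5

# If the Roche lobe shrinks faster than the donor (zeta_RL > zeta_ad),
# fully conservative mass transfer is dynamically unstable.
print(f"zeta_ad = {zeta_ad:.2f}, zeta_RL = {zeta_RL:.2f}, "
      f"conservative transfer unstable: {zeta_RL > zeta_ad}")

# For non-conservative transfer the text quotes f_max ~ 0.53: accreted
# fractions below this keep the transfer dynamically stable in the absence of
# angular momentum losses; L2 losses lower this threshold (see appendix B).
f_max_no_jloss = 0.53
```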
Notably, the behavior observed while driving the system to contact (and beyond) suggests that the initial binary separation should have been chosen smaller, around \(50R_{\odot}\) instead of the \(100R_{\odot}\) estimated from the point at which the primary fills its RL. However, a smaller initial separation can result in a lower final equatorial velocity, as demonstrated in Chatzopoulos et al. (2020) through Equation 5 and Figure 1. ### Mergerburst and outflow During the common envelope evolution, as the secondary star interacts with the envelope of the primary, there is an exchange of orbital and thermal energy. This process ultimately leads to the merger of the secondary star with the helium core of the primary. As the merger occurs, a powerful dynamical pulse is triggered that propagates through the envelope of the primary. Under certain conditions, this pulse can induce significant mass loss from the system. The gravitational energy released during the merger is primarily converted into kinetic energy, driving the high-speed outflow of gas from the primary's envelope. Additionally, a portion of the released energy is transformed into internal or thermal energy. As a result, the gas within the primary's envelope is expelled at high velocities, and a substantial fraction of it attains enough energy to surpass the gravitational potential barrier. Consequently, this gas becomes unbound from the system and is no longer gravitationally bound. Figure 9 illustrates the quantity of mass that becomes unbound within the grid over time. We focus on the ten orbits preceding the merger time, \(T_{\rm merge}=45.96P_{0}\), and the subsequent period, as only a small amount of mass becomes unbound during earlier times. We establish two criteria for identifying unbound mass. The first criterion considers gas as unbound when its kinetic energy plus gravitational energy, \(E_{k}+E_{g}\), is positive (depicted by solid lines). The second criterion incorporates internal energy, where gas is considered unbound when its total energy, \(E_{k}+E_{g}+E_{\rm int}\), is positive (represented by dashed lines). It is important to note that due to the finite-volume nature of our grid, the unbound mass eventually flows out of the grid, despite its relatively large size. Consequently, we include in the plot the quantity of mass that exits the grid (indicated by dash-dotted lines). We observe that approximately \(\sim 0.6M_{\odot}\) of mass becomes unbound within the grid. The majority of this unbinding occurs at the moment of merger and during a time scale of one initial orbital period (\(P_{0}=26\) days). When accounting for internal energy in the energy balance, the amount of unbound mass is only slightly higher. Assuming that all the mass leaving the grid remains unbound, the maximum estimate of unbound mass is approximately \(\sim 0.68\)\(M_{\odot}\) (indicated by brown-colored lines in Figure 9). This corresponds to approximately \(\sim 0.07\) of the mass of the primary's envelope or \(\sim 0.03\) of the total stellar mass of the system. Furthermore, a smaller phase of unbinding, approximately \(\sim 0.02\)\(M_{\odot}\), occurs prior to the merger (beginning at \(t=T_{\rm merge}-5P_{0}\) and decaying until \(t=T_{\rm merge}\), allowing all the unbound gas to escape the grid before the merging time). Interestingly, during the pre-merger unbinding phase, all the unbound mass originates from the primary star (depicted by green lines in Figure 9, labeled as'star 1' in the legend). 
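The two unbinding criteria used above translate into a simple per-cell check; the sketch below is schematic, and the array names are illustrative rather than actual Octo-Tiger output fields.

```python
import numpy as np

def unbound_mass(rho, v2, phi, e_int, dV):
    """Total unbound mass on the grid under the two criteria used in the text.
    rho   : cell density [g/cm^3]
    v2    : squared velocity magnitude [cm^2/s^2]
    phi   : gravitational potential (negative for bound gas) [erg/g]
    e_int : specific internal energy [erg/g]
    dV    : cell volume [cm^3]"""
    m_cell = rho * dV
    e_kg = 0.5*v2 + phi                # E_k + E_g per unit mass
    kinetic = e_kg > 0.0               # first criterion
    thermal = (e_kg + e_int) > 0.0     # second criterion, includes E_int
    return m_cell[kinetic].sum(), m_cell[thermal].sum()
```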
However, after the merger, when the majority of the unbinding occurs, only \(\sim 2/3\) of the unbound mass originates from the primary star, while the remaining \(\sim 1/3\) originates from the secondary star (indicated by red lines, labeled as 'star 2' in the legend). Figure 9: The unbound mass as a function of time in our simulation (solid lines). The mass that left the grid is also plotted (magenta dashed-dotted line). The cumulative unbound mass (brown lines) sums the unbound gas in the grid plus the mass that left the grid, assuming that all the mass that left the grid left it unbound and remains unbound afterwards. Including internal (thermal) energy only slightly increases the unbound mass (dashed lines) This finding suggests that the common envelope interaction can unbind a portion of the spiraling-in secondary star's mass (approximately \(0.2M_{\odot}\) out of the \(4M_{\odot}\), which is 5%), highlighting the need for future studies to consider this effect rather than relying solely on the simple \(\alpha\) formalism often used. Such results, which cannot be captured when representing the secondary star as a point particle, warrant further investigation. The fraction of the primary's envelope that becomes unbound (7%) is smaller than found in previous studies on common envelope evolution of massive stars. However, those studies include extra sources of energy that our simulation does not account for, such as recombination energy, radiation energy, and jets (Lau et al., 2022, 2022; Moreno et al., 2022; Schreier et al., 2021). Other previous works that do not include additional energy sources find ejection values more similar to ours (although for a low-mass primary star; e.g., Passy et al., 2012; Iaconi et al., 2017; Reichardt et al., 2019; Shiber et al., 2019, and Sand et al., 2020). Another possible explanation stems from the fact that our primary has not yet evolved to become an RSG, and thus its envelope possesses a much higher binding energy than it will have once it evolves into an RSG star (Klencki et al., 2021). Other studies deliberately chose primary stars during their RSG phase and at maximal expansion to ease the envelope removal (Lau et al., 2022, 2022; Moreno et al., 2022; Ricker et al., 2019). In Figures 10-12 we explore the geometrical structure and kinematics of the unbound merger-driven outflow. Figure 10 shows the average (inertial) velocities of these unbound outflows as a function of time, where we focus on the same time span as in Figure 9. We average by mass over gas with positive kinetic plus gravitational energy. These velocities can be compared with the nominal escape velocity from the system, \(v_{\rm esc}(r)=\sqrt{2G(M_{1}+M_{2})/r}\simeq 270\ {\rm km/s}\cdot(r/100\ R_{\odot})^{-1/2}\), where \(r\) is the distance from the system's center of mass. Prior to the merging time, \(T_{\rm merge}\), the amount of unbound mass is small and therefore does not contribute much to the mergerburst. The unbound gas immediately after the merger escapes with radial velocities (solid blue line) that peak at 310 km/s, while also rotating with a tangential velocity of 140 km/s (dashed orange line). As the outflow propagates further out, it decelerates due to the gravitational pull of the central merged object. The tangential velocity decays faster than the radial velocity component. 
We assume for simplicity that this unbound mass originated from a distance of the initial orbital separation, \(r_{0}=a_{0}=100\ R_{\odot}\) and integrate its radial velocity from the merging time and onward to derive its radial distance, \(r_{\rm out}\) as a function of time (red dotted line), \(r_{\rm out}=r_{0}+\int_{T_{\rm merge}}^{t}v_{r}dt\). We also calculate the escape velocity at the instantaneous radial distance of the outflow and show that the propagation speed of the outflow remains greater than the escape velocity at any time. A purple dashed vertical line at the post-merger time of \(3.6P_{0}=93\) days denotes the time when the outflow arrives to the boundary of the grid. Note that the fastest flow reach the boundary earlier indicated by the peak in unbound mass (orange line in Figure 9) at \(t\simeq 2P_{0}\simeq 52\) days. In Figure 11, we plot three-dimensional renderings of the unbound gas density at six different times after the merger. In each panel we plot densities at the range of \(10^{-8}-10^{-12}\ {\rm g/cm^{3}}\) (colored red to dark blue, respectively, according to the color map). The point of view is of an observer at the corner of the grid, i.e, at \((2000\ R_{\odot},2000\ R_{\odot},2000\ R_{\odot})\), looking towards the grid's center. Our simulation reveals that the outflow exhibits an asymmetric expansion pattern and possesses a complex inner structure. In the early stages (panels (a) and (b)), a distinctive two-ring structure emerges, with one ring located above and another below the equatorial plane Figure 10: Average unbound gas velocities and inferred radial distance as a function of time. The (solid) blue curve denotes spherical radial velocity, \(v_{r}\), and the (dashed) orange curve azimuthal spherical velocity, \(v_{g}\). The (dotted) red curve denotes the outflow average distance, \(r_{\rm out}\), as calculated by integrating \(v_{r}\), \(r_{\rm out}=r_{0}+\int v_{r}dt\) with \(r_{0}\) being equal to the initial separation \(100\ R_{\odot}\), while the (dashed) green curve denotes the escape velocity at this distance, \(v_{\rm esc}(r_{\rm out})\). At each time the average velocity is a mass-averaged calculated only for unbound gas. The purple vertical line denotes the time in which the outflow arrives to the computational grid boundary (depicted in red). This structure expands over time and remains visible in later stages (panel (e), represented by orange). During the process of the secondary star's spiraling-in (prior to its merger), gas accumulates around it, causing the envelope to become inflated and concentrated primarily towards lower latitudes. Consequently, when the rapid outflow bursts out, it escapes and expands more rapidly towards the poles, resulting in the formation of a bipolar outflow structure. The bipolar and clumpy ring-like morphology of the outflow is reminiscent of circumstellar (CS) environments observed around some massive star systems (i.e., \(\eta\)-Carina, Smith & Ferland, 2007) as well as supernova (SN) remnants like SN1987A (Plait et al., 1995). A clumpy ring structure has been also observed in several planetary nebulae (e.g., the necklace nebula; Corradi et al., 2011). However, in our simulation, the unbound outflow forms two clumpy rings, one above and another below the equatorial plane, which expand to higher latitudes rather than an equatorial knotted-ring. 
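The distance and escape-velocity comparison shown in Figure 10 amounts to a simple time integration of the mass-averaged radial velocity; a minimal sketch (with the total mass and launch radius as assumed round numbers) is:

```python
import numpy as np

G, M_SUN, R_SUN = 6.674e-8, 1.989e33, 6.957e10  # cgs

def outflow_distance(t, v_r, m_total=19.5*M_SUN, r0=100.0*R_SUN):
    """Integrate the mass-averaged radial velocity of the unbound gas,
    r_out(t) = r0 + int v_r dt, and compare with the escape velocity at r_out.
    t   : times since merger [s]
    v_r : mass-averaged radial velocity of the unbound gas [cm/s]"""
    t = np.asarray(t, float)
    v_r = np.asarray(v_r, float)
    dr = np.cumsum(0.5*(v_r[1:] + v_r[:-1])*np.diff(t))   # trapezoidal integral
    r_out = r0 + np.concatenate(([0.0], dr))
    v_esc = np.sqrt(2.0*G*m_total/r_out)
    return r_out, v_esc, v_r > v_esc     # True while the outflow outruns escape
```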
Lastly, Figure 12 presents a phase plot that displays the distribution of unbound mass (in solar mass units) within specific density and velocity ranges at three different time intervals following the merger: \(0.37P_{0}\) (left column), \(0.79P_{0}\) (middle column), and \(2.5P_{0}\) (right column). The first row of plots illustrates the mass distribution in bins of density and positive z-velocity, while the second row depicts the mass distribution in bins of density and cylindrical radial velocity. These plots provide insights into the kinematics of the outflow. It is important to note that since only the positive z-direction is depicted, the mass shown in these plots (first row) represents approximately half of the total unbound mass within the grid. Figure 11: Outflow structure. Three-dimensional renderings of unbound gas density (i.e., gas with \(E_{k}+E_{g}>0\)), at six different times: \(0.26~{}P_{0}\) (\(7\) days), \(0.37~{}P_{0}\) (\(10\) days), \(0.54~{}P_{0}\) (\(14\) days), \(0.79~{}P_{0}\) (\(21\) days), \(1.05~{}P_{0}\) (\(27\) days), and \(1.54~{}P_{0}\) (\(40\) days), after the stars merged. Each panel presents densities in the range of \(10^{-8}-10^{-12}\) g/cm\({}^{3}\) (red to dark blue, respectively, according to the color map) as observed by a viewer at the corner of the grid looking towards the grid’s center. These renderings extend up to an inner sphere of radius \(6.5~{}\mathrm{AU}\). A length scale of \(1~{}\mathrm{AU}\) is also plotted. The positive z-axis is directed upward. During the early period up to \(t=T_{\rm merge}+0.37P_{0}\) (left column), the outflow expands steadily without decelerating. This expansion can be observed in the phase plots as a structure shifting to the left, to lower densities, without changing its shape. Subsequently, the expansion continues, but the outflow starts to decelerate (the structure shifts to lower densities and lower velocities in the middle column compared to the left column). At this stage, the fastest velocities are still primarily in the z-direction. As the merger-burst reaches the boundary of the grid, less unbound mass remains within the grid. The fast outflow in the z-direction exits the grid first, resulting in a more uniform velocity distribution (right column). This can be seen in the phase plots of the right column, where a similar shape is observed between the upper and lower panels. This analysis indicates that the fastest velocities are predominantly aligned with the z-direction. Based on these results we can extrapolate how a nebula produced by such a mergerburst will appear at later stages. In Figure 13 we show the outflow peak density (left) and its distance from the merger (right) as a function of time. The blue points are data points from the hydrodynamic simulation, while the orange curves are extrapolations assuming homologous expansion, i.e., \(\rho_{\rm peak}\sim t^{-3}\) and \(r\sim t\), disregarding interaction with the interstellar medium. In addition, we consider interaction of the outflow with previous winds blown by the massive primary star prior to the merger process. For that we use the equations of Chevalier and Fransson (1994). They developed analytical formulas for the interaction of supernova-driven outflows with winds, and we downscale them for mergerburst-driven outflows. We first fit the outflow density profile in the simulation domain to two power-laws. 
This yields a flat slope of \(\delta=1.8\) within the inner 6.7 AU, and a steep slope of \(n=11.9\) beyond this radius, which we round up to \(n=12\). The mass (\(M\)) and energy (\(E\)) of the outflow is the total mass and energy lost from the grid, i.e., \(0.68~{}M_{\odot}\) and \(1.8\times 10^{47}\) ergs, respectively. As an estimation of the mass loss rate to winds we use the last value as prescribed by the MESA model of our primary star, \(\dot{M}=2\times 10^{-7}~{}M_{\odot}\) yr\({}^{-1}\). We assume circumstellar density that decreases as \(r^{-s}\), where \(s=2\), and plug two possible wind velocities of 10 km/sec and 100 km/sec to equation 2.7 from Chevalier and Fransson (1994) to derive the radius of peak densities shells, the green and red curves on the right panel, respectively. Lastly, we consider an interaction with a more powerful mass-loss of \(\dot{M}=0.1~{}M_{\odot}\) yr\({}^{-1}\) and assuming a uniform wind profile, \(s=0\). As in Chatzopoulos et al. (2012) (appendix B), we use a density scaling in the immediate vicinity of the primary star and therefore obtain \(r_{1}=R_{p}=50~{}R_{\odot}\). We plug wind velocity of 10 km/sec and the right constants for \(s=0\) and \(n=12\) from table 1 of Chevalier (1982) in equation B2 of Chatzopoulos et al. (2012) to obtain the forward shock location as an estimation for the radius of peak densities in this case. We plot this radius as a function of time in dotted purple line in the right panel of Figure 13. From this figure we learn that if the merger will explode in a Core-collapse Supernova within the first few years after the merger, the nebula can affect the supernova lightcurve via interaction. At later times the nebula dissolves and would be difficult to detect nor to play any important role. ### Merger properties and further evolution As mentioned in the Introduction (Section 1), observational evidence supports the presence of rapid surface velocities in a few giant and supergiant stars. For instance, studies on Betelgeuse have demonstrated (Chatzopoulos et al., 2020; Sullivan et al., 2020) that a previous merger between a pre-main sequence massive star with a mass of approximately \(15-17M_{\odot}\) and a low-mass main-sequence companion with a mass of around \(1-4M_{\odot}\) could account for its implied high rotation rate. In this study, we aim to investigate whether the merger product in our simulation has the potential to evolve into a star similar to Betelgeuse. In this subsection, we explore the interaction between the secondary star and the primary star's envelope during the spiral-in process, which leads to the transfer of orbital angular momentum to the envelope. As a result, immediately following the merger, the envelope becomes inflated and rotates faster than its initial state. However, in order to determine whether this dynamical spin-up has any long-term effects on the significantly longer thermal and nuclear timescales that cannot be simulated hydrodynamically, we need to utilize a 1D representation of the post-merger object in a stellar evolution code. In Subsection 3.3.1, we analyze the post-merger structure within the hydrodynamic simulation, while in Subsection 3.3.2, we investigate the long-term evolution by importing the post-merger structure into the MESA stellar evolution code. #### 3.3.1 Immediate dynamical properties Figure 14 illustrates the one-dimensional averaged structure of the merger. 
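The late-time extrapolation described above is a free-expansion scaling anchored to the last reliable snapshot; the sketch below encodes it, together with the generic self-similar scaling for ejecta-wind interaction, \(R\propto t^{(n-3)/(n-s)}\), which we take as the relevant limiting behavior (our reading of the Chevalier 1982 / Chevalier and Fransson 1994 framework, not a reproduction of their exact formulas).

```python
def homologous_extrapolation(t, t_ref, r_ref, rho_peak_ref):
    """Ballistic (homologous) extrapolation: r grows linearly with time and
    the peak density drops as t^-3, anchored at (t_ref, r_ref, rho_peak_ref)."""
    r = r_ref * (t / t_ref)
    rho_peak = rho_peak_ref * (t / t_ref)**-3
    return r, rho_peak

def interaction_radius(t, t_ref, R_ref, n=12, s=2):
    """Self-similar scaling of the shocked-shell radius for steep outer ejecta
    (rho_ej ~ r^-n) sweeping up a wind (rho_wind ~ r^-s): R ~ t^((n-3)/(n-s)).
    With n = 12 this gives t^0.9 for s = 2 and t^0.75 for s = 0."""
    return R_ref * (t / t_ref)**((n - 3.0)/(n - s))
```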
To obtain the profiles shown in Figure 14, we performed a mass average over the azimuthal angle around the point of maximum density, which we define as the center of the merger, and only considered data from the equatorial plane. The following quantities are plotted as a function of the distance from the merger's center (from upper left to lower right): specific angular momentum, angular velocity, density, and temperature. It is important to note that the specific angular momentum and angular velocity are computed with respect to the merger's center, meaning that the radius represents the distance from this point, and the velocities are corrected by subtracting the velocities of this point from the inertial velocities. Averaging only over the equatorial plane may lead to an overestimation of rotation values since the deposition of orbital angular momentum primarily occurs around the orbital plane. Consequently, it is expected that rotation would be slower at higher latitudes. However, as the post-merger object undergoes dynamic relaxation within a few orbits, the bound gas gradually becomes more spherically symmetric, causing the redistribution of angular momentum accordingly. Thus, only minor differences between latitudes are expected during this phase. In each panel, two profiles corresponding to two different times, namely \(0.5P_{0}\) (13 days; blue crosses) and \(5P_{0}\) (130 days; orange solid circles) after the merging time, are displayed. 
As the post-merger object undergoes dynamic relaxation, its internal structure gradually becomes smoother, as evident in the smoother temperature profile. The orange lines in the plots start from further out due to grid de-refinement in the central regions following the merger. This de-refinement leads to a drop in central density but also facilitates the redistribution of angular momentum. To assess the effects of the merger on the post-merger object, we compare the specific angular momentum and density profiles with those of the original MESA primary star's model (shown as green dashed lines, denoted as "pre" in the legend). The specific angular momentum gain of the envelope resulting from the merger is clearly visible in the specific angular momentum plot. Moreover, the envelope becomes inflated, and the density contour of \(10^{-9}\) g/cm\({}^{3}\) expands from \(50R_{\odot}\) to 430 \(R_{\odot}\). In the plot, we also include the specific angular momentum profile computed analytically (using Equations 13 and 14 from Chatzopoulos et al. 2020), shown as a dashed-dotted green line and denoted as "post" in the legend. This analytical profile was used in Chatzopoulos et al. (2020) to simulate the merger starting from the pre-merger 1D profile. Here, we include it for comparison purposes. It is worth noting that this profile rapidly increases at \(10^{12}\) cm and further out, due to the steep density decline in these regions in the pre-merger primary profile. The specific angular momentum plot conclusively highlights the benefit of simulating the merger in a dynamical three-dimensional simulation over a prescribed 1D model, which could not account for important features of the merger such as envelope expansion. It is worth mentioning, though, that as a consequence of the continuous driving we apply to the system, the merger possesses less angular momentum than it would have without the driving. Specifically, the driving mechanism absorbs almost 50% of the initial orbital angular momentum. Outflows carried an additional \(\sim 20\%\) of the angular momentum, and the merger therefore acquires only 30% of the orbital angular momentum that is initially available. 
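The angular momentum budget quoted here is easy to check against the initial circular orbit; the short sketch below recovers \(J_{\rm orb}^{0}\simeq 8.5\times 10^{53}\) erg s (with the secondary mass taken at its nominal \(4M_{\odot}\), an assumption for illustration).

```python
import numpy as np

G, M_SUN, R_SUN = 6.674e-8, 1.989e33, 6.957e10  # cgs

# Initial orbital angular momentum of the circular binary.
m1, m2 = 15.48*M_SUN, 4.0*M_SUN
a0 = 99.6*R_SUN
j_orb0 = m1*m2*np.sqrt(G*a0/(m1 + m2))
print(f"J_orb^0 ~ {j_orb0:.2e} erg s")        # ~8.5e53 erg s

# Approximate split quoted in the text.
j_driving = 0.50*j_orb0      # extracted by the artificial driving
j_outflow = 0.20*j_orb0      # carried away by the outflow
j_merger = j_orb0 - j_driving - j_outflow     # ~30% retained by the merger product
```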
This affects the specific angular momentum of the merger (and its angular velocity), which are probably underestimated and could be higher. In particular, this is true for the profiles at \(t=T_{\rm merge}+0.5P_{0}\) before viscosity torques and grid de-refinement introduce some numerical angular momentum increase (at the level of \(\sim\)5%) To emphasize the dynamic spin-up of the envelope, we present the angular velocity profile of the envelope over time in Figure 15. Prior to the merger, the angular velocity is calculated relative to the point of maximal density, which belongs to the primary star and is determined using the diagnostics scheme described in Section 2. Following the merger, the angular velocity is calculated relative to the center of the post-merger star. Additionally, we include the orbital separation as a function of time, represented by a gray line. This figure reveals that the envelope in regions near the secondary star undergoes a spin-up process during the slow spiraling-in phase. As the binary system approaches contact and enters the fast spiraling-in phase, the rate of angular momentum gain increases. Subsequently, during grid de-refinements, the angular momentum redistributes within the system. As a result, the inner region of approximately 50 \(R_{\odot}\) exhibits a significant increase in rotational velocities, exceeding their initial rotation by more than an order of magnitude. Figure 13: Extrapolation of the mergerburst to later stages post-merger. Left panel: Peak density. Right panel: Distance of the peak density from the post-merger object. The blue points are data points from the hydrodynamic simulation while the orange curves are extrapolation assuming homologous expansion. We also consider interaction of the mergerburst with winds blown by the massive primary star prior to the merger process with wind velocity \(v_{w}\) and a profile that decreases as \(r^{-s}\) (see details in the text) #### 3.3.2 Long term nuclear evolution To study the long-term evolution of the merger on a nuclear timescale, we import the averaged 1D structure of the merger into the MESA stellar evolution code. This is achieved by performing a mass-weighted spherical averaging procedure and importing the system 5 initial orbits after the merger event. The resulting profiles of entropy, specific angular momentum, and composition serve as inputs for MESA. However, due to insufficient resolution in the inner regions of the hydrodynamical simulation, we replace the inner \(9.6M_{\odot}\) of the entropy profile with a core structure from a similar star evolved in MESA (representing the primary's pre-merger/original input structure) that matches in terms of age, mass, and radius. Assuming that the chemical composition remains unchanged by the merger, we also import the abundance profile from the MESA pre-merger model. The specific angular momentum profile is directly imported from the Octo-Tiger model using cylindrical shell averages. After spherically averaging the bound mass of the post-merger object, the resulting mass of the post-merger is \(18.6M_{\odot}\) Figure 14: Post-merger star’s structure azimuthally averaged over the equatorial plane at 0.5 (blue curves) and 5 (orange curves) orbits past merger. From the upper-left panel to the lower-right panel presented specific angular momentum; Angular velocity; Density; and Temperature. The density and specific angular momentum of the primary initial (MESA) model is also plotted (dashed green lines). 
We compare the specific angular momentum of the merger (as a result of our hydrodynamic simulation; upper-left panel, blue and orange lines) with the post-merger profile as calculated by Equations 13 and 14 in Chatzopoulos et al. (2020) and based on the pre-merger one-dimensional profile of the primary (green dashed-dotted line in the same panel). Figure 15: Illustration of the primary envelope spin-up during the merger process. The color scale corresponds to the angular velocity with respect to the primary's core position averaged at the equatorial plane. The color map is in units of rad/s. Time (in units of initial periods, \(P_{0}\)) is plotted on the x-axis. Distance from the primary core (in units of cm) is plotted on the y-axis. The orbital separation as a function of time is overplotted as a gray curve. Once we have obtained the necessary profiles, we use the external model relaxation procedures in MESA to create a single star with a structure similar to our post-merger object. Figure 16 illustrates the comparison between our input data and the resulting relaxed structure. It can be observed that all three profiles relax to a state that closely matches the input structure. However, it is important to note that the same entropy profile in MESA will yield a different temperature and density profile compared to Octo-Tiger due to the differences in equation of state (EoS) and the requirement of hydrostatic equilibrium (HSE) in MESA. Additionally, the inclusion of radiation effects and tabulated opacity in MESA leads to a larger relaxed structure compared to the post-merger structure observed in Octo-Tiger. The disparities in the density and temperature profiles are evident in Figure 16. It should be mentioned that due to the aforementioned core resolution issue, the Octo-Tiger core does not have a sufficient temperature for helium burning. Conversely, the MESA profile suggests that the primary should be undergoing helium burning. It is important to acknowledge that although the post-merger object also possesses a core that is currently too cool for helium burning, the structure has not yet fully settled after the relaxation procedure, and the star will contract, resulting in a hotter core. After relaxing the post-merger into MESA, we allow the star to dynamically settle and then evolve over nuclear timescales. During the evolution we use the Dutch wind prescription, which uses the de Jager prescription for effective temperatures less than 8000K (de Jager et al., 1988), and the Vink wind prescription for higher effective temperatures (Vink et al., 2001). Typically, for non-rotating models, a standard value of the wind scaling factor is 0.8. This could be lower for rotating models, so we use a value of 0.4. Heger et al. (2000) give an analytical expression for enhanced mass loss due to rotation, which is utilized by default in MESA version r21.12.1, of the following form: \[\dot{M}=\dot{M}_{0}\left[\frac{1}{1-v/v_{crit}}\right]^{\xi} \tag{1}\] where \(\dot{M}_{0}\) is the mass loss without rotation, \(v\) is the surface rotation, \(v_{crit}\) is the critical surface rotation, and \(\xi\) is the power factor (the default value is 0.43). Following some of the parameters used for the MESA test suite options, we use the Cox MLT option (Cox & Giuli, 1968). We include the Ledoux criterion and use a mixing length parameter of 1.6. We also include the effects of semiconvection and thermohaline mixing, but do not focus on overshooting. 
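As a quick numerical illustration of the rotational enhancement in Equation 1, the short sketch below evaluates the boost factor \([1/(1-v/v_{crit})]^{\xi}\) for a few surface rotation rates; the velocities used here are placeholder values for illustration only, not quantities taken from our MESA models.

```python
# Minimal sketch of the rotationally enhanced mass-loss factor in Equation 1
# (Heger et al. 2000). All numbers below are illustrative placeholders.

def mass_loss_boost(v_surf, v_crit, xi=0.43):
    """Factor multiplying the non-rotating mass-loss rate Mdot_0."""
    if v_surf >= v_crit:
        raise ValueError("Equation 1 is only valid for v_surf < v_crit")
    return (1.0 / (1.0 - v_surf / v_crit)) ** xi

v_crit = 60.0  # km/s, placeholder critical surface rotation
for v in (1.0, 5.0, 15.0, 45.0):  # km/s, placeholder surface rotations
    print(f"v = {v:5.1f} km/s  ->  Mdot/Mdot_0 = {mass_loss_boost(v, v_crit):.3f}")
```

The enhancement is negligible for slow rotation and grows steeply as \(v\) approaches \(v_{crit}\).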
Some of the most important parameters for this study are the effects of rotationally induced mixing for both chemical mixing and angular momentum diffusion. Because the surface of the star is spun up due to the coalescence of the secondary into the envelope of the primary, it is important that not too much angular momentum is diffused downward toward the core. The types of rotationally induced mixing we use are Solberg-Høiland (SH), Secular Shear Instability (SSI), Eddington-Sweet Circulation (ES), and Goldreich-Schubert-Fricke Instability (GSF), which are all described in Heger et al. (2000). We also use the Spruit-Taylor dynamo action (ST) of Heger et al. (2005), which includes the effects of magnetic fields, but we note that there is a lot of uncertainty regarding the strength and effect of magnetic fields for giant stars. Finally, the overall coefficient which is multiplied by the sum of the diffusion due to all of the above mixing effects is set to 1/30 according to the recommendation of Heger et al. (2000). When evolving the post-merger in MESA we vary the efficiency of rotationally induced diffusion. The reason for this is that these effects are inherently 3-dimensional, and their use in MESA is based on the work of Heger et al. (2000), who approximated 1D diffusion coefficients for each process. In addition, we acknowledge that in the case of a star merger, most of the angular momentum deposition occurs over the equatorial plane and non-homogeneously throughout the primary star's envelope. The efficiency factors we vary are used in order to account for the uncertainty of the effectiveness of diffusion for each of these mechanisms. This method of simulating mixing is known as the diffusion approximation, and Paxton et al. (2013) note that there is another method they refer to as the diffusion-advection approach. While these two methods have nearly identical chemical mixing, the transport of angular momentum can vary significantly. More details regarding this other method of angular momentum diffusion can be found in Maeder & Zahn (1998) and Zahn (1992). A summary of our model parameters, the results, and Betelgeuse observations is shown in Table 1. The post-merger evolution is found to be primarily influenced by the viscosity coefficients associated with the ST and ES mechanisms. These mechanisms play a dominant role in the outer regions of the post-merger star's envelope, facilitating the efficient transport of excess angular momentum towards the inner regions. As a result, they significantly reduce the equatorial rotation rate of the star. However, we have reasons to consider lower efficiencies for the ST and ES processes. This is motivated by the non-spherically symmetric deposition of angular momentum during the 3D common envelope phase. The high rotation is mainly concentrated around the equator and not evenly distributed in the azimuthal layers of the model, thereby reducing the effectiveness of the ES mechanism. In addition, as we discussed earlier, the artificial driving by angular momentum extraction during the pre-merger phase in Octo-Tiger may lead to an underestimate of the actual angular momentum that would have been deposited due to drag forces driving the in-spiraling phase alone. These uncertainties justify our choice of lower efficiencies for the ES and ST mechanisms. However, more comprehensive and computationally intensive 3D numerical simulations are necessary to accurately quantify these effects over time. 
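To make the role of the efficiency factors in Table 1 concrete, the following schematic sketch combines per-mechanism diffusion coefficients in the diffusion approximation described above: each coefficient is scaled by its efficiency factor, the contributions are summed, and the total is multiplied by the overall factor of 1/30. The coefficient values and variable names are illustrative assumptions, not MESA inlist controls or model outputs.

```python
# Schematic of the diffusion approximation for rotationally induced mixing:
# per-mechanism coefficients are scaled by the efficiency factors varied in
# Table 1, summed, and multiplied by the overall 1/30 factor recommended by
# Heger et al. (2000). All numbers are placeholders for illustration.

OVERALL_FACTOR = 1.0 / 30.0

def total_diffusion(coeffs, efficiencies):
    """Combined angular momentum diffusion coefficient (cm^2/s) at one zone."""
    return OVERALL_FACTOR * sum(efficiencies[m] * coeffs[m] for m in coeffs)

# Placeholder per-mechanism coefficients for a single envelope zone (cm^2/s).
coeffs = {"ST": 1e14, "SH": 1e10, "GSF": 1e9, "ES": 1e13, "SSI": 1e9}

# Efficiency factors in the style of Table 1 (cf. Models 1 and 7).
model_1_like = {"ST": 1.0, "SH": 1.0, "GSF": 1.0, "ES": 1.0, "SSI": 1.0}
model_7_like = {"ST": 1e-7, "SH": 1.0, "GSF": 1.0, "ES": 1e-7, "SSI": 1.0}

print(f"Model 1-like total: {total_diffusion(coeffs, model_1_like):.3e} cm^2/s")
print(f"Model 7-like total: {total_diffusion(coeffs, model_7_like):.3e} cm^2/s")
```

Suppressing the ST and ES contributions in this way lowers the combined coefficient by several orders of magnitude, which is the qualitative effect explored in Models 4-8 of Table 1.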
After the relaxation, the post-merger first goes through a contraction phase as it settles from the initial structure of the merger. This contraction phase lasts until the core becomes hot enough for He-burning, on the order of \(10^{4}\) years. The evolution we are interested in and that we show in Figure 17 is during the He-burning phase and later, which takes place for another \(10^{6}\) years and is indicated by the solid lines in Figure 17. During these \(10^{6}\) years, the star evolves into the box representing the observed surface temperature and luminosity range of Betelgeuse. We additionally plot black dots placed 10,000 years apart and starting roughly 30,000 years before the model enters this box, for models 4 and 8 (see Table 1). This illustrates the star's color evolution, as there are some indications, according to historical records, that Betelgeuse has evolved rapidly in color (Neuhauser et al., 2022). We also list the time, \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Model & ST & SH & GSF & ES & SSI & \(v_{\rm surf}\) (km/s) & \(\epsilon_{C}\) & \(\epsilon_{N}\) & \(\epsilon_{O}\) & \(t_{cool}\) (years) \\ \hline 1 & 1 & 1 & 1 & 1 & 1 & 0.036 & 8.48 & 8.62 & 8.90 & 14700 \\ 2 & 0 & 1 & 1 & 1 & 1 & 0.29 & 8.46 & 8.74 & 8.89 & 17400 \\ 3 & 1 & 1 & 1 & 0 & 1 & 0.041 & 8.48 & 8.63 & 8.90 & 14900 \\ 4 & 0.1 & 1 & 1 & 0.1 & 1 & 0.063 & 8.48 & 8.65 & 8.90 & 14800 \\ 5 & 0.01 & 1 & 1 & 0.01 & 1 & 0.12 & 8.48 & 8.64 & 8.90 & 14600 \\ 6 & 0.001 & 1 & 1 & 0.001 & 1 & 0.23 & 8.48 & 8.65 & 8.90 & 14500 \\ 7 & \(10^{-7}\) & 1 & 1 & \(10^{-7}\) & 1 & 1.9 & 8.48 & 8.65 & 8.90 & 25600 \\ 8 & 0 & 1 & 1 & 0 & 1 & 5.8 & 8.32 & 8.55 & 8.74 & 22200 \\ BG & & & & & & 5-15 & 8.25-8.55 & 8.45-8.75 & 8.65-8.95 & \\ \hline \end{tabular} \end{table} Table 1: A summary of our model parameters and results for eight different models. BG refers to observed surface values of Betelgeuse taken from Kervella et al. (2018) and Lambert et al. (1984). Columns 2 through 6 are the efficiency factors used for each diffusion mechanism. \(v_{surf}\) is the equatorial surface velocity as calculated in MESA averaged over the time spent in the HR diagram box. \(\epsilon_{i}\) is the surface value of element \(i\) calculated with Equation 2. \(t_{cool}\) is the number of years it takes for the star to cool from 5000K to 4000K (yellow to red in color). Figure 16: A comparison of the inputs from Octo-Tiger to the relaxed output of MESA. The left panels show the input and relaxed profiles of the H and He composition (top), angular momentum (middle), and entropy (bottom). The x-axis “q” is the normalized exterior mass coordinate, namely, \(q=0\) is the star’s surface and \(q=1\) is the star’s center. The interface in the entropy between the MESA and Octo-Tiger initial models occurs at \(q=0.5\). The panel on the right shows the Temperature-Density diagram for the Octo-Tiger post-merger, the primary star from MESA, and the relaxed output from MESA. \(t_{cool}\), which takes for each model to change its effective temperature from 5000K to 4000K in Table 1. We find a slower color evolution of several 10,000 years compared to the suggested fast evolution of two millennia by Neuhauser et al. (2022). We further analyze the surface composition and rotation rate of our models. 
The surface composition is calculated using the standard equation (see, for example, Asplund et al., 2009): \[\epsilon_{i}=\log(X_{i}/X_{H}\mu_{i})+12 \tag{2}\] In order to compare our results to the observations of Betelgeuse, we plot the evolution of the surface composition and velocity and use a box that represents the target values from observations. The top and bottom edges of the box represent the range of acceptable values from observations taken from Lambert et al. (1984), while the left and right edges represent the time that the model spends with a surface temperature and luminosity matching Betelgeuse. This analysis is demonstrated in Figure 18. We can see from the upper left panel that the surface rotational velocity decreases rapidly by nearly three orders of magnitude. The primary reason for such a decrease in surface rotation is the diffusion of angular momentum towards the core. We note that the strongest contributors are specifically the ST and ES diffusion coefficients, with ST being about an order of magnitude stronger, as demonstrated in Table 1 by comparing Models 2 and 3. After identifying the two biggest contributors to the loss of surface rotation, we ran Models 4-8 in order to explore the effects of decreasing the efficiency of ST and ES diffusion. The results of this experiment are shown in Figure 18. We note that while the range of efficiency factors we explored does not have a significant effect on the surface composition, the less efficient diffusion of angular momentum is critical to the evolution of the surface rotation. We find that, for our models, we do not see a sufficiently fast rotation rate unless the efficiency is reduced to \(\sim 10^{-7}\) of the default value of 1.0 or lower. We also note that as the efficiency is reduced and angular momentum cannot diffuse towards the core as rapidly, the model becomes more unstable during the contraction phase. ## 4 Summary and Discussion This paper presents an attempt to model the complete evolution of a binary star merger, spanning from the onset of the interaction to the common envelope phase, tidal disruption phase, and subsequent aftermath. Our study focuses on the merger between a 16 \(M_{\odot}\) primary star and a 4 \(M_{\odot}\) secondary star during the post-main sequence evolution of the primary towards the RSG branch. The common envelope phase occurs when the primary's expanding envelope comes into contact with the secondary, leading to the decay of the secondary's orbit within the primary's extended envelope. The secondary star merges with the He core of the primary, resulting in a mergeburst transient phase and the ejection of approximately 0.6 \(M_{\odot}\) of material in a bipolar outflow. To conduct this study, we employed various methods and numerical techniques, including 3D hydrodynamical simulations with the Octo-Tiger code and nuclear evolution modeling with the MESA code. The transition between the two codes posed challenges due to differences in assumptions, such as the equation of state, treatment of nuclear burning, and inclusion of radiation effects. The 3D Octo-Tiger simulation required significant resolution (to ensure that the structures of both the primary and secondary stars are adequately resolved for dynamical stability) and a large simulation box (to allow for the tracking of the merger-induced mass-loss in order to investigate the properties of the unbound material). 
Finally, to facilitate a reasonable computation time, we drove the merger by removing angular momentum from the orbit at a low rate of 1% per (initial) orbital period. Despite the limitations in our approach, we successfully achieved our main goal of studying the post-merger evolution and comparing it to observations of stars with Figure 17: HR diagram for Models 4-8, which use smaller diffusion coefficients for the ST and ES mechanisms. The black dots are placed approximately 10,000 years apart starting roughly 30,000 years before the model enters the observed effective temperature range for Betelgeuse. The values in the parentheses are the coefficients for ES and ST mechanisms. The box represents the observed range of effective temperature and luminosity of Betelgeuse. peculiar envelope properties, such as Betelgeuse. Key findings of our study include the following: * During the common envelope phase, which lasts for 340 days after contact at 50 \(R_{\odot}\), the in-spiral of the secondary into the primary's envelope results in significant angular momentum deposition along the equatorial plane (Figure 15). * Due to resolution limitations, we couldn't investigate the details of the secondary's tidal disruption, stream-core interaction, and the potential rejuvenation of the primary. However, based on the mass ratio studied (q = 0.25), we assume a "quiet merger" scenario (Chatzopoulos et al., 2020), where the post-merger star continues to evolve toward the RSG phase. * The dynamical merger leads to the ejection of approximately 0.6 \(M_{\odot}\) of material at velocities of 200-300 km/s, characteristic of mergeburst events (see Figures 9 and 10, respectively). Interestingly, 1/3 of this unbound gas originated from the spiralling-in secondary star (Figure 9). The unbound gas exhibits a distinct bipolar geometry with most of its mass concentrated in two clumpy rings (Figure 11). * If the post-merger star explodes as a Type IIP supernova within the next 50,000 to 500,000 years, the circumstellar material formed by the previous merger would be located at a distance of \(1-100\) pc, depending on the specific modeling of wind interaction (Figure 13). The densities of this material are unlikely to significantly affect the radiative properties of the supernova. * The long-term nuclear evolution of the post-merger star is consistent with the well-known RSG Figure 18: Evolution of the post merger object for five MESA runs with different rotation mixing coefficients: 0 (magenta), \(10^{-7}\) (red), \(10^{-3}\) (green), \(10^{-2}\) (orange), and \(10^{-1}\) (blue). Plotted are (from top left to bottom right): surface equatorial rotational velocity in km/s, Carbon, Nitrogen, and Oxygen solar scaled surface abundances. The dashed black square frames the time when the models are within the 3\(\sigma\) error bars of the observed luminosity and effective temperature of Betelgeuse (on the x-axis) and the observed Betelgeuse ranges (on the y-axis). star Betelgeuse, as evidenced by the observed surface enhancement of \({}^{14}\)N (see Figure 18) and its location in the HR diagram (see Figure 17). * The long-term surface rotation rate of the post-merger star depends on the efficiency of angular momentum transport mechanisms in the 1D evolution calculation (Figure 18). Rapidly spinning post-mergers are found when the efficiency of meridional circulation (Eddington-Sweet mechanism) is low and magnetically-driven angular momentum transport (Spruit-Taylor mechanism) is minimal (see Table 1). 
We argue that the anisotropic deposition of angular momentum concentrated along the equatorial plane in a merger scenario suggests low efficiency of inward angular momentum transport. Other factors influencing the surface rotation rate include the angular momentum carried away by mass loss during the merger and post-merger evolution, as well as the underestimation of the total angular momentum deposited in the primary envelope due to our merger driving method. Additionally, the limited size of the primary's envelope expansion up to a radius of 50 \(R_{\odot}\) restricts the number of secondary orbits during the common envelope phase and, consequently, the spin-up potential. Previous studies have shown that contact at higher radii (100-500 \(R_{\odot}\)) can lead to higher post-merger spin-up rates (Chatzopoulos et al., 2020). Our results shed light on the post-merger evolution and provide valuable insights into the properties of merger events and their impact on stellar evolution. However, further improvements are necessary, including refining the codes and techniques used, investigating the details of the tidal disruption and stream-core interaction, and addressing uncertainties related to angular momentum transport and mass loss during the merger and post-merger phases. Moving forward, our future work will focus on enhancing the Octo-Tiger code, incorporating a more suitable equation of state and radiation effects, and exploring the interaction between supernovae and merger-driven circumstellar environments. By delving deeper into the physics of stellar mergers, we aim to advance our understanding of massive star evolution, the properties of supernova progenitors, and the role of mergers in shaping the astrophysical transient landscape. We are grateful to Athira Menon and Craig Wheeler for discussions on Betelgeuse and SN1987A-like progenitors. We also thank Geoffrey Clayton, Dominic Marcello, and Orsola De Marco for useful feedback and discussions. The research of EC was supported by the Department of Energy Early Career Award DE-SC0021228. The numerical work was carried out using the computational resources (QueenBee2) of the Louisiana Optical Network Initiative (LONI) and Louisiana State University's High Performance Computing (LSU HPC). Our use of BigRed3 at Indiana University was supported by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute. This work also required the use and integration of a Python package for astronomy, yt ([http://yt-project.org](http://yt-project.org), Turk et al., 2011). ## Supplementary Materials Octo-Tiger is available on GitHub2 and was built using the following build chain3. On Queen-Bee and BigRed, the Octo-Tiger version of Marcello et al. (2021) was used. All necessary files to reproduce the Betelgeuse post-merger evolution in MESA are available on Zenodo4. Footnote 2: [https://github.com/STEllAR-GROUP/octotiger](https://github.com/STEllAR-GROUP/octotiger) Footnote 3: [https://github.com/STEllAR-GROUP/OctoTigerBuildChain](https://github.com/STEllAR-GROUP/OctoTigerBuildChain) Footnote 4: [https://doi.org/10.5281/zenodo.8422498](https://doi.org/10.5281/zenodo.8422498) #### Software Octo-Tiger (Marcello et al., 2021), MESA-r21.12.1 (Paxton et al., 2011, 2013, 2015, 2018, 2019), Python (available from python.org), Matplotlib (Hunter, 2007), Numpy (van der Walt et al., 2011), yt (Turk et al., 2011)
2303.03247
Safety-Critical Control with Bounded Inputs via Reduced Order Models
Guaranteeing safe behavior on complex autonomous systems -- from cars to walking robots -- is challenging due to the inherently high dimensional nature of these systems and the corresponding complex models that may be difficult to determine in practice. With this as motivation, this paper presents a safety-critical control framework that leverages reduced order models to ensure safety on the full order dynamics -- even when these models are subject to disturbances and bounded inputs (e.g., actuation limits). To handle input constraints, the backup set method is reformulated in the context of reduced order models, and conditions for the provably safe behavior of the full order system are derived. Then, the input-to-state safe backup set method is introduced to provide robustness against discrepancies between the reduced order model and the actual system. Finally, the proposed framework is demonstrated in high-fidelity simulation, where a quadrupedal robot is safely navigated around an obstacle with legged locomotion by the help of the unicycle model.
Tamas G. Molnar, Aaron D. Ames
2023-03-06T16:04:36Z
http://arxiv.org/abs/2303.03247v1
# Safety-Critical Control with Bounded Inputs via Reduced Order Models ###### Abstract Guaranteeing safe behavior on complex autonomous systems--from cars to walking robots--is challenging due to the inherently high dimensional nature of these systems and the corresponding complex models that may be difficult to determine in practice. With this as motivation, this paper presents a safety-critical control framework that leverages reduced order models to ensure safety on the full order dynamics--even when these models are subject to disturbances and bounded inputs (e.g., actuation limits). To handle input constraints, the backup set method is reformulated in the context of reduced order models, and conditions for the provably safe behavior of the full order system are derived. Then, the input-to-state safe backup set method is introduced to provide robustness against discrepancies between the reduced order model and the actual system. Finally, the proposed framework is demonstrated in high-fidelity simulation, where a quadrupedal robot is safely navigated around an obstacle with legged locomotion by the help of the unicycle model. ## I Introduction Real-life engineering systems often exhibit complicated, nonlinear and high-dimensional dynamic behavior. This is especially true of autonomous (robotic) systems, where dynamics play a key role in achieving desired behaviors. This makes them challenging to control, and to attain formal guarantees of stable or safe evolution for the closed control loop. To tackle such complex control problems, simplified, reduced order models (ROMs) of the dynamic behavior are often utilized during controller synthesis with great practical success [1, 2]. Yet there is often a theoretic gap between behaviors certifiable on the ROM and the resulting behaviors observed on the full order system (FOS). In this paper, we focus on the role of ROMs in safety-critical control. Given an accurate dynamical model, there exist tools to synthesize controllers that provide formal guarantees of safety. For example, _control barrier functions (CBFs)_[3] have been proposed to achieve this goal, and they have been proven to be successful in a wide variety of applications from multi-robot systems [4] to spacecraft docking [5]. In many applications, a significant challenge is maintaining safety with limited actuation: most physical systems have finite actuation capability, which manifests itself in the underlying models as constraints on the control input. Several methods have been proposed for input constrained safety-critical control, including the backup set method [6], input constrained CBFs [7], and neural CBFs [8]. While these approaches have shown success in various domains, a general approach remains elusive. Another important challenge in safety-critical control is that no ROM is ever fully accurate: there is always discrepancy between the ROM and the actual FOS. Consequently, robustness is of key importance, and one needs to ensure that safety is preserved even under such discrepancies--and with limited actuation. There exist CBF formulations that provide robustness against disturbances, such as the approaches of input-to-state safety [9, 10] and robust CBFs [11]. These formulations, however, have not yet accommodated input constraints. Meanwhile, the above approaches that address input constraints have not yet been endowed with robustness. 
On the other hand, there exist robust reachability approaches that handle both input constraints and disturbances [12, 13, 14, 15], but these methods typically suffer from the curse of dimensionality and become intractable for higher dimensional ROMs. This paper presents a robust safety-critical control framework, illustrated in Fig. 1, wherein input constrained ROMs and CBFs are leveraged to achieve formal safety guarantees on systems with complex full order dynamics. To this end, we make the following three key contributions. First, the backup set method is reformulated in the context of ROMs, and conditions for provably safe behavior are given that account for the discrepancy between the ROM and the FOS that tracks it. Second, the _input-to-state safe backup set method_ is introduced to provide robustness against the discrepancy with less restrictive conditions. Third, the method is implemented in the context of an obstacle avoidance problem, wherein safe walking on a quadrupedal robot using the unicycle ROM is demonstrated in high-fidelity simulation. Fig. 1: Overview of the proposed safety-critical control framework. ## II Safety Under Input Constraints Consider the control-affine system: \[\dot{x}=f(x)+g(x)u, \tag{1}\] with state \(x\in\mathbb{R}^{n}\), input \(u\in\mathcal{U}\), convex admissible input set \(\mathcal{U}\subseteq\mathbb{R}^{m}\), and locally Lipschitz continuous functions \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\), \(g:\mathbb{R}^{n}\to\mathbb{R}^{n\times m}\). Consider a controller \(k:\mathbb{R}^{n}\to\mathcal{U}\), \(u=k(x)\), that yields the closed control loop: \[\dot{x}=f(x)+g(x)k(x), \tag{2}\] associated with the initial condition \(x(0)=x_{0}\in\mathbb{R}^{n}\). If \(k\) is locally Lipschitz continuous, the closed-loop system has a unique solution \(\phi(t,x_{0})\) over an interval of existence. For simplicity, we assume that the solution exists for all \(t\geq 0\). Our goal is to design the controller \(k\) such that the closed-loop system is safe. Specifically, we consider the system to be safe if its state \(x\) is located within a safe set \(\mathcal{S}\subset\mathbb{R}^{n}\). For the safe evolution of the closed control loop, we require the forward invariance of the safe set \(\mathcal{S}\) along (2). **Definition 1**.: Given \(k:\mathbb{R}^{n}\to\mathcal{U}\), set \(\mathcal{S}\subset\mathbb{R}^{n}\) is _forward invariant_ along (2) if \(x_{0}\in\mathcal{S}\implies\phi(t,x_{0})\in\mathcal{S}\), \(\forall t\geq 0\). This requirement can be met only if \(\mathcal{S}\) is control invariant. **Definition 2**.: Set \(\mathcal{S}\subset\mathbb{R}^{n}\) is _control invariant_ if there exists \(k:\mathbb{R}^{n}\to\mathcal{U}\) such that \(\mathcal{S}\) is forward invariant along (2). ### _Control Barrier Functions_ Control barrier functions [3] provide a powerful tool for safe control design, hence we briefly revisit this method. Throughout the paper, we consider safe sets defined as the 0-superlevel set of a function \(h:\mathbb{R}^{n}\to\mathbb{R}\): \[\mathcal{S}=\{x\in\mathbb{R}^{n}:h(x)\geq 0\}, \tag{3}\] such that \(h\) is continuously differentiable and zero is a regular value of \(h\), i.e., \(h(x)=0\implies\nabla h(x)\neq 0\). 
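As a simple concrete instance of the safe-set construction in (3), the sketch below encodes a planar keep-out disk as the 0-superlevel set of a smooth function \(h\) and evaluates \(h\) and \(\nabla h\) at a test point. The particular \(h\), the obstacle location, and the radius are illustrative choices (loosely anticipating the obstacle-avoidance example later in the paper), not constructions prescribed by the method.

```python
import numpy as np

# Illustrative safe set in the sense of (3): remain outside a disk of radius R
# centered at x_O. Away from x_O, h is continuously differentiable, and on the
# boundary h(x) = 0 the gradient has unit norm, so zero is a regular value.

x_O = np.array([2.0, 0.0])   # keep-out center (placeholder)
R = 0.5                      # keep-out radius (placeholder)

def h(x):
    return np.linalg.norm(x - x_O) - R

def grad_h(x):
    d = x - x_O
    return d / np.linalg.norm(d)

x_test = np.array([0.0, 0.0])
print("h(x) =", h(x_test), "-> safe" if h(x_test) >= 0 else "-> unsafe")
print("grad h(x) =", grad_h(x_test))
```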
**Definition 3**.: Function \(h\) is a _control barrier function (CBF)_ for (1) on \(\mathcal{S}\) if there exists \(\alpha\in\mathcal{K}_{\infty}\) such that1: Footnote 1: Function \(\alpha:\mathbb{R}_{\geq 0}\to\mathbb{R}_{>0}\) is of class-\(\mathcal{K}_{\infty}\) (\(\alpha\in\mathcal{K}_{\infty}\)) if it is continuous, \(\alpha(0)=0\) and \(\lim_{r\to\infty}\alpha(r)=\infty\). Note that extended class-\(\mathcal{K}_{\infty}\) functions defined over \(\mathbb{R}\) are also used to ensure the attractivity of the safe set. \[\sup_{u\in\mathcal{U}}\dot{h}(x,u)>-\alpha\big{(}h(x)\big{)} \tag{4}\] holds \(\forall x\in\mathcal{S}\), where: \[\dot{h}(x,u)=\nabla h(x)(f(x)+g(x)u). \tag{5}\] Given a CBF, [3] established the following safety result. **Theorem 1** ([3]).: _If \(h\) is a CBF for (1) on \(\mathcal{S}\), then any locally Lipschitz continuous controller \(k:\mathbb{R}^{n}\to\mathcal{U}\) satisfying:_ \[\dot{h}\big{(}x,k(x)\big{)}\geq-\alpha\big{(}h(x)\big{)} \tag{6}\] \(\forall x\in\mathcal{S}\) _renders \(\mathcal{S}\) forward invariant along (2)._ Condition (6) can be used as constraint when synthesizing safe controllers. For example, given a desired controller \(k_{\text{d}}:\mathbb{R}^{n}\to\mathcal{U}\), the following quadratic program-based controller can be used for safety-critical control: \[\begin{split} k(x)=\operatorname*{argmin}_{u\in\mathcal{U}}& \|u-k_{\text{d}}(x)\|_{\Gamma}^{2}\\ \text{s.t.}&\dot{h}(x,u)\geq-\alpha\big{(}h(x) \big{)},\end{split} \tag{7}\] where \(\|u\|_{\Gamma}^{2}=u^{\top}\Gamma u\) and \(\Gamma\in\mathbb{R}^{m\times m}\) is a positive definit weight matrix that can be tuned. ### _Backup Set Method_ While CBFs provide safe behavior, it is nontrivial to verify that a certain choice of \(h\) is indeed a CBF satisfying (4), especially with bounded inputs (\(\mathcal{U}\subset\mathbb{R}^{m}\)). An arbitrary \(h\) may not have control invariant 0-superlevel set, it may not be a CBF, and safe inputs satisfying (6) may not exist. Consequently, optimization problems like (7) may be infeasible with input bounds. The backup set method [6] was proposed to solve this problem, by synthesizing control invariant sets and corresponding safe controllers via the CBF framework. The backup set method is described as follows; while examples are given below and in [6, 16]. First, one must specify a control invariant subset of \(\mathcal{S}\), called the _backup set_: \[\mathcal{S}_{\text{b}}=\{x\in\mathbb{R}^{n}:h_{\text{b}}(x)\geq 0\}, \tag{8}\] such that \(h_{\text{b}}:\mathbb{R}^{n}\to\mathbb{R}\) is continuously differentiable, zero is a regular value of \(h_{\text{b}}\), i.e., \(h_{\text{b}}(x)=0\implies\nabla h_{\text{b}}(x)\neq 0\), and \(\mathcal{S}_{\text{b}}\subseteq\mathcal{S}\). Furthermore, one must define a _backup controller_\(k_{\text{b}}:\mathbb{R}^{n}\to\mathcal{U}\) that renders the backup set forward invariant along the closed-loop system: \[\dot{x}=f(x)+g(x)k_{\text{b}}(x)\triangleq f_{\text{b}}(x). \tag{9}\] We denote the solution of (9) with \(x(0)=x_{0}\in\mathbb{R}^{n}\) by \(\phi_{\text{b}}(t,x_{0})\). To summarize, the choice of backup set and backup controller must satisfy the following assumption. 
**Assumption 1**.: The backup set \(\mathcal{S}_{\text{b}}\subseteq\mathcal{S}\) is control invariant, and the backup controller \(k_{\text{b}}\) renders \(\mathcal{S}_{\text{b}}\) forward invariant along (9) while satisfying the input constraints: \[x\in\mathcal{S}_{\text{b}}\implies\phi_{\text{b}}(\theta,x)\in\mathcal{S}_{ \text{b}}\subseteq\mathcal{S},\ \forall\theta\geq 0, \tag{10}\] and \(k_{\text{b}}(x)\in\mathcal{U}\), \(\forall x\in\mathcal{S}\). Finding a control invariant subset \(\mathcal{S}_{\text{b}}\) is considerably less difficult than verifying that a given \(\mathcal{S}\) is control invariant. With this, by construction, we have a control invariant set \(\mathcal{S}_{\text{b}}\) and a safe controller \(k_{\text{b}}\) at our disposal. However, methods for constructing \(\mathcal{S}_{\text{b}}\) (see examples in [6, 16]) may result in a very small set, hence operating the system directly within \(\mathcal{S}_{\text{b}}\) may make the behavior overly conservative. To reduce this conservatism, we enlarge \(\mathcal{S}_{\text{b}}\) to the set \(\mathcal{S}_{\text{I}}\subseteq\mathcal{S}\): \[\mathcal{S}_{\text{I}}=\left\{x\in\mathbb{R}^{n}:\begin{array}{l}\phi_{ \text{b}}(\theta,x)\in\mathcal{S},\ \forall\theta\in[0,T],\\ \phi_{\text{b}}(T,x)\in\mathcal{S}_{\text{b}}\end{array}\right\}, \tag{11}\] with \(T\geq 0\); cf. Fig. 1. Note that \(T\) is a design parameter, the size of \(\mathcal{S}_{\text{I}}\) increases with \(T\), and \(T=0\) yields \(\mathcal{S}_{\text{I}}=\mathcal{S}_{\text{b}}\). **Lemma 1** ([6]).: _The set \(\mathcal{S}_{\text{I}}\) is control invariant, and the backup controller \(k_{\text{b}}\) renders \(\mathcal{S}_{\text{I}}\) forward invariant along (9):_ \[x\in\mathcal{S}_{\text{I}}\implies\phi_{\text{b}}(\vartheta,x)\in\mathcal{S}_{ \text{I}},\ \forall\vartheta\geq 0. \tag{12}\] For the proofs of Lemmas 1 and 2, see the Appendix. Thus, (11) yields a larger, practically more useful control invariant set \(\mathcal{S}_{\mathrm{I}}\) than the backup set \(\mathcal{S}_{\mathrm{b}}\); see [16] for an analysis about the size of \(\mathcal{S}_{\mathrm{I}}\). We use \(\mathcal{S}_{\mathrm{I}}\) to provide safety, based on the framework of CBFs. We rely on the derivatives: \[\dot{h}\big{(}\phi_{\mathrm{b}}(\theta,x),u\big{)} =\frac{\partial h\big{(}\phi_{\mathrm{b}}(\theta,x)\big{)}}{ \partial x}\big{(}f(x)+g(x)u\big{)}, \tag{13}\] \[\dot{h}_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x),u\big{)} =\frac{\partial h_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x)\big{)} }{\partial x}\big{(}f(x)+g(x)u\big{)}.\] Then, we can state that the backup controller \(k_{\mathrm{b}}\) satisfies safety conditions analogous to (6). **Lemma 2** ([6]).: _There exist \(\alpha,\alpha_{\mathrm{b}}\in\mathcal{K}_{\infty}\) such that \(\forall x\in\mathcal{S}_{\mathrm{I}}\):_ \[\dot{h}\big{(}\phi_{\mathrm{b}}(\theta,x),k_{\mathrm{b}}(x)\big{)} \geq-\alpha\big{(}h(\phi_{\mathrm{b}}(\theta,x))\big{)},\ \forall\theta\in[0,T], \tag{14}\] \[\dot{h}_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x),k_{\mathrm{b}}( x)\big{)} \geq-\alpha_{\mathrm{b}}\big{(}h_{\mathrm{b}}(\phi_{\mathrm{b}}(T,x))\big{)}.\] This leads to the main result of the backup set method. **Theorem 2** ([6]).: _Consider system (1), set \(\mathcal{S}\) in (3), set \(\mathcal{S}_{\mathrm{b}}\) in (8), Assumption 1, and set \(\mathcal{S}_{\mathrm{I}}\) in (11). 
Then, there exist \(\alpha,\alpha_{\mathrm{b}}\in\mathcal{K}_{\infty}\) such that a controller \(k:\mathbb{R}^{n}\to\mathcal{U}\) satisfying:_ \[\dot{h}\big{(}\phi_{\mathrm{b}}(\theta,x),k(x)\big{)} \geq-\alpha\big{(}h(\phi_{\mathrm{b}}(\theta,x))\big{)},\ \forall\theta\in[0,T], \tag{15}\] \[\dot{h}_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x),k(x)\big{)} \geq-\alpha_{\mathrm{b}}\big{(}h_{\mathrm{b}}(\phi_{\mathrm{b}}(T,x))\big{)}.\] \(\forall x\in\mathcal{S}_{\mathrm{I}}\) _is guaranteed to exist. Moreover, any locally Lipschitz continuous controller \(k:\mathbb{R}^{n}\to\mathcal{U}\) that satisfies (15) \(\forall x\in\mathcal{S}_{\mathrm{I}}\) renders \(\mathcal{S}_{\mathrm{I}}\subseteq\mathcal{S}\) forward invariant along (2)._ Proof.: The existence of a controller \(k\) satisfying (15) follows from Lemma 2, since \(k_{\mathrm{b}}\) is such a controller. The forward invariance of \(\mathcal{S}_{\mathrm{I}}\) is the consequence of Theorem 1. ### _Implementation in Optimization Problems_ Theorem 2 can be directly used for controller synthesis, for example, by using (15) in optimization problems like (7): \[k(x)=\operatorname*{argmin}_{u\in\mathcal{U}}\ \|u-k_{\mathrm{d}}(x)\|_{\Gamma}^{2} \tag{16}\] \[\text{s.t. }\dot{h}\big{(}\phi_{\mathrm{b}}(\theta,x),u\big{)} \geq-\alpha\big{(}h(\phi_{\mathrm{b}}(\theta,x))\big{)},\ \forall\theta\in[0,T],\] \[\dot{h}_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x),u\big{)} \geq-\alpha_{\mathrm{b}}\big{(}h_{\mathrm{b}}(\phi_{\mathrm{b}}(T,x))\big{)}.\] Note that the constraints are affine in \(u\), cf. (13), hence the optimization problem is convex, and it leads to a quadratic program if \(u\in\mathcal{U}\) is also described by affine constraints. Moreover, unlike (7), the optimization problem (16) is guaranteed to be feasible even if \(h\) is not verified to be a CBF. The constraints of (16) contain the terms in (13), where: \[\frac{\partial h\big{(}\phi_{\mathrm{b}}(\theta,x)\big{)}}{\partial x} =\nabla h\big{(}\phi_{\mathrm{b}}(\theta,x)\big{)}\frac{\partial\phi_{\mathrm{b}}(\theta,x)}{\partial x}, \tag{17}\] \[\frac{\partial h_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x)\big{)}}{\partial x} =\nabla h_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x)\big{)}\frac{\partial\phi_{\mathrm{b}}(T,x)}{\partial x}.\] Here, \(Q(\theta,x)\triangleq\partial\phi_{\mathrm{b}}(\theta,x)/\partial x\) is the sensitivity of the flow \(\phi_{\mathrm{b}}(\theta,x)\) to its initial condition \(x\). \(\phi_{\mathrm{b}}(\theta,x)\) and \(Q(\theta,x)\) can be calculated together by solving the initial value problem: \[\phi_{\mathrm{b}}^{\prime}(\theta,x) =f_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(\theta,x)\big{)}, \phi_{\mathrm{b}}(0,x) =x, \tag{18}\] \[Q^{\prime}(\theta,x) =\frac{\partial f_{\mathrm{b}}}{\partial x}\big{(}\phi_{\mathrm{b}}(\theta,x)\big{)}Q(\theta,x), Q(0,x) =I,\] where prime denotes derivative with respect to \(\theta\), \(f_{\mathrm{b}}\) is as in (9), and \(I\) is the \(n\times n\) identity matrix. The optimization problem (16) contains infinitely many constraints parameterized by \(\theta\in[0,T]\). For computational tractability, they are usually discretized into finitely many, \(N_{\mathrm{c}}\) constraints at \(\theta_{i}=iT/N_{\mathrm{c}}\), \(i\in\mathcal{I}=\{0,1,\ldots,N_{\mathrm{c}}\}\), yielding: \[k(x) = \operatorname*{argmin}_{u\in\mathcal{U}}\ \|u-k_{\mathrm{d}}(x)\|_{\Gamma}^{2} \tag{19}\] \[\text{s.t. 
}\dot{\bar{h}}_{i}(x,u) \geq-\alpha\big{(}\bar{h}_{i}(x)\big{)},\ \forall i\in\mathcal{I},\] \[\dot{\bar{h}}_{\mathrm{b}}(x,u) \geq-\alpha_{\mathrm{b}}\big{(}\bar{h}_{\mathrm{b}}(x)\big{)}.\] Here, the shorthand notations \(\bar{h}_{i}(x)=h(\phi_{\mathrm{b}}(\theta_{i},x))\) and \(\bar{h}_{\mathrm{b}}(x)=h_{\mathrm{b}}(\phi_{\mathrm{b}}(T,x))\) are used. In what follows, we implement controller (19) in an example. **Example 1** (Unicycle model).: Consider the unicycle model: \[\dot{\xi} =v\cos\psi, \tag{20}\] \[\dot{\eta} =v\sin\psi,\] \[\dot{\psi} =\omega,\] where the planar position \(p=\begin{bmatrix}\xi&\eta\end{bmatrix}^{\top}\in\mathbb{R}^{2}\) and yaw angle \(\psi\in\mathbb{R}\) constitute the state \(x=\begin{bmatrix}p^{\top}&\psi\end{bmatrix}^{\top}\), while the speed \(v\in[v_{\min},v_{\max}]\subset\mathbb{R}\) and yaw rate \(\omega\in[-\omega_{\max},\omega_{\max}]\subset\mathbb{R}\) form the control input \(u=\begin{bmatrix}v&\omega\end{bmatrix}^{\top}\). We seek to drive the unicycle in the \(\xi\) direction at a goal position \(\eta_{\mathrm{g}}\) with a speed \(v_{\mathrm{g}}\), while avoiding a circular obstacle of radius \(R_{\mathrm{O}}>0\) at position \(p_{\mathrm{O}}\). First we consider a stationary obstacle, then a moving obstacle with velocity \(\dot{p}_{\mathrm{O}}\) and acceleration \(\ddot{p}_{\mathrm{O}}\). Note that this latter problem is well-studied [17], and closed-form expressions of control invariant sets exist [18]. We realize the target motion by the desired controller: \[k_{\mathrm{d}}(x)=\begin{bmatrix}v_{\mathrm{g}}\\ K_{\eta}(\eta_{\mathrm{g}}-\eta)-K_{\psi}\sin\psi\end{bmatrix}, \tag{21}\] that is to be modified to obtain a safe controller \(k(x)\). To characterize safety, we first introduce the Euclidean distance \(D\) from the obstacle center, the normal vector \(n\) pointing away from the obstacle, and a related projection matrix \(P\): \[D=\|p-p_{\mathrm{O}}\|,\quad n=\frac{p-p_{\mathrm{O}}}{\|p-p_{\mathrm{O}}\|},\quad P=I-nn^{\top}. \tag{22}\] Notice that \(\partial D/\partial p=n^{\top}\) and \(\partial n/\partial p=P/D\) hold. Furthermore, let us describe the heading direction by: \[q=\begin{bmatrix}\cos\psi\\ \sin\psi\end{bmatrix},\quad r=\begin{bmatrix}-\sin\psi\\ \cos\psi\end{bmatrix}. \tag{23}\] With these preliminaries, we introduce the following function from [19] to characterize safety: \[h(x,t)=D-R_{\mathrm{O}}+\delta n^{\top}q, \tag{24}\] where a tunable parameter \(\delta\geq 0\) penalizes heading towards the obstacle. The corresponding derivatives read: \[\nabla h(x,t) =\left[n^{\top}+\delta q^{\top}P/D\quad\delta n^{\top}r\right], \tag{25}\] \[\frac{\partial h}{\partial t}(x,t) =-\big{(}n^{\top}+\delta q^{\top}P/D\big{)}\dot{p}_{\mathrm{O}}.\] Note that \(h\) explicitly depends on time through \(p_{\mathrm{O}}\) if the obstacle is moving, and one must include \(\partial h/\partial t\) in \(\dot{h}\). For a stationary obstacle, this dependence on \(t\) can be omitted. Without input bounds, \(h\) could be used as a CBF and controller (7) would ensure safe behavior. The result of executing (7) while excluding the input bounds (i.e., taking \(\mathcal{U}\!=\!\mathbb{R}^{2}\)) is illustrated in Fig. 2(a) for the parameters in Table I and \(x_{0}\!=\!0\). For \(\delta\!=\!0\), i.e., when the heading direction is not penalized by the CBF, the unicycle stops in front of the obstacle (dashed line), which is safe but overly conservative. For \(\delta>0\), the unicycle safely executes the task (solid line). However, since the input bounds are not incorporated into the optimization problem, the lower and upper speed limits are violated. 
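To make the quantities in (22)-(25) concrete, the short sketch below evaluates \(D\), \(n\), \(P\), \(q\), \(r\), the safety function \(h\) of (24), and its gradient for one state of the unicycle and a stationary obstacle; the numerical values are placeholders and do not correspond to the parameters of Table I.

```python
import numpy as np

# Sketch of the unicycle safety data in (22)-(25) for a stationary obstacle.
# The state is x = (xi, eta, psi); all numbers are illustrative placeholders.

p_O = np.array([4.0, 0.0])   # obstacle position (placeholder)
R_O, delta = 1.0, 0.5        # obstacle radius and heading penalty (placeholders)

def safety_data(x):
    p, psi = x[:2], x[2]
    D = np.linalg.norm(p - p_O)               # distance to obstacle center
    n = (p - p_O) / D                         # unit vector away from obstacle
    P = np.eye(2) - np.outer(n, n)            # projection orthogonal to n
    q = np.array([np.cos(psi), np.sin(psi)])  # heading direction
    r = np.array([-np.sin(psi), np.cos(psi)])
    h = D - R_O + delta * (n @ q)             # safety function (24)
    grad_h = np.concatenate((n + delta * (q @ P) / D, [delta * (n @ r)]))  # (25)
    return h, grad_h

h_val, grad = safety_data(np.array([0.0, 0.0, 0.0]))
print("h =", h_val)
print("grad h =", grad)
```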
On the other hand, \(h\) is not necessarily a valid CBF in the presence of input bounds. To address input bounds, we rely on the backup controller: \[k_{\mathrm{b}}(x,t)=\begin{bmatrix}v_{\mathrm{max}}\\ \omega_{\mathrm{max}}\tanh(n^{\top}r/\varepsilon)\end{bmatrix}, \tag{26}\] that seeks to turn the unicycle away from the obstacle as fast as possible and drive away with maximum speed. Parameter \(\varepsilon\) tunes the aggressiveness of turning, and the yaw rate \(\pm\omega_{\mathrm{max}}\) is achieved as \(\varepsilon\!\to\!0\). Hence, this controller allows us to keep safety against obstacles that move slower than \(v_{\mathrm{max}}\) and turn slower than \(\omega_{\mathrm{max}}\). The backup controller is associated with: \[h_{\mathrm{b}}(x,t)=n^{\top}(qv_{\mathrm{max}}-\dot{p}_{\mathrm{O}}), \tag{27}\] whose derivatives are: \[\nabla h_{\mathrm{b}}(x,t) =\left[(qv_{\mathrm{max}}-\dot{p}_{\mathrm{O}})^{\top}P/D\quad n^{\top}rv_{\mathrm{max}}\right], \tag{28}\] \[\frac{\partial h_{\mathrm{b}}}{\partial t}(x,t) =-(qv_{\mathrm{max}}-\dot{p}_{\mathrm{O}})^{\top}P\dot{p}_{\mathrm{O}}/D-n^{\top}\ddot{p}_{\mathrm{O}}.\] We remark that the backup set that is kept invariant by \(k_{\mathrm{b}}\) is in fact given by both \(h_{\mathrm{b}}(x,t)\!\geq\!0\) and \(h(x,t)\!\geq\!0\) (rather than just \(h_{\mathrm{b}}(x,t)\!\geq\!0\)), but both of these functions are involved in (15). The efficacy of the backup set method with controller (19) is shown in Fig. 2(b) for parameters in Table I and \(\delta\!=\!0\). The controller maintains safety while satisfying the input bounds, and note that even \(\delta=0\) yields desired behavior. The same controller is tested for the case of a moving obstacle in Fig. 2(c). The obstacle moves in the \(\eta\) direction sinusoidally, with \(p_{\mathrm{O}}(t)\!=\!\left[\xi_{\mathrm{O}}\quad\eta_{\mathrm{O}}(t)\right]^{\top}\), \(\eta_{\mathrm{O}}(t)\!=\!\bar{\eta}_{\mathrm{O}}-A_{\eta}\sin(\Omega t)/\Omega\). The end result is still safety with bounded inputs. Fig. 2: Safety-critical control of the unicycle model for obstacle avoidance. (a) The CBF-based controller (7) maintains safety, without limits on the inputs (speed and yaw rate). (b) The backup set method-based controller (19) enforces safety with input constraints. (c) Controller (19) handles a moving obstacle. ## III Reduced Order Models Model (1) is often a simplified representation of a real control system. The actual dynamics may be more complicated, higher dimensional, involving unmodeled phenomena. Hence, we call (1) a _reduced order model (ROM)_. The backup set method is able to control the ROM with formal safety guarantees while respecting input constraints. Yet, the safety of the actual _full order system (FOS)_ is not necessarily ensured. Next, we investigate the effect of unmodeled dynamics on safety, and derive conditions for the safety of the FOS by following our previous work [19]. Then, we propose a robustified backup set method. We consider the ROM to be given, while approaches to construct ROMs are out of the scope of this paper. Finally, we demonstrate our framework on an example, in which the locomotion of a quadruped (FOS) is controlled to follow the unicycle model (ROM). Consider a FOS given by state \(X\in\mathbb{R}^{N}\), input \(U\in\mathbb{R}^{M}\), locally Lipschitz continuous functions \(F:\mathbb{R}^{N}\to\mathbb{R}^{N}\) and \(G:\mathbb{R}^{N}\to\mathbb{R}^{N\times M}\), and dynamics: \[\dot{X}=F(X)+G(X)U. 
\tag{29}\] Furthermore, let a _reduced order state_\(x\in\mathbb{R}^{n}\) be defined by a continuously differentiable map \(P:\mathbb{R}^{N}\to\mathbb{R}^{n}\): \[x=P(X). \tag{30}\] The reduced order state is selected such that it describes safety-critical behavior. Specifically, consider the safe set: \[\mathcal{C}=\{X\in\mathbb{R}^{N}:h(P(X))\geq 0\} \tag{31}\] for the FOS with \(h:\mathbb{R}^{n}\to\mathbb{R}\) given as before. To achieve safe FOS behavior, one may construct a ROM like (1), design a safety-critical ROM controller \(u=k(x)\), and utilize a tracking controller \(K:\mathbb{R}^{N}\times\mathbb{R}^{m}\to\mathbb{R}^{M}\), \(U=K(X,u)\) so that the closed-loop FOS: \[\dot{X}=F(X)+G(X)K(X,u) \tag{32}\] tracks the ROM. With appropriate ROM and tracking controller, the true dynamics of the reduced order state \(x\) track the ROM accurately. The true reduced order dynamics are: \[\dot{x}=f(x)+g(x)u+d, \tag{33}\] where \(d\in\mathbb{R}^{n}\) is the deviation from the ROM, given by: \[d=\nabla P(X)\big{(}F(X)\!+\!G(X)K(X,u)\big{)}\!-\!f(P(X))\!-\!g(P(X))u. \tag{34}\] Note that while \(d\) acts as disturbance on the ROM, it can be viewed as tracking error that the FOS seeks to eliminate. If the discrepancy \(d\) is zero, the ROM captures the safety-critical behavior of the FOS accurately, and the backup set method can be used directly with the control invariant set: \[\mathcal{C}_{\text{I}}\!=\!\left\{\!X\!\in\!\mathbb{R}^{N}:\,\begin{matrix}h \big{(}\phi_{\text{b}}(\theta,P(X))\big{)}\!\geq\!0,&\forall\theta\!\in\![0,T],\\ h_{\text{b}}\big{(}\phi_{\text{b}}(T,P(X))\big{)}\!\geq\!0\end{matrix}\!\right\}, \tag{35}\] for which \(X\in\mathcal{C}_{\text{I}}\iff x\in\mathcal{S}_{\text{I}}\). Then, per Theorem 2, there exists a controller \(k\) that satisfies (15) and renders \(\mathcal{C}_{\text{I}}\subseteq\mathcal{C}\) forward invariant along (29). However, nonzero discrepancy \(d\) may lead to safety violations. Below we discuss conditions under which safety is preserved, and we investigate how to provide robustness against \(d\). During robustification, (33) is considered while the discrepancy \(d\) is viewed as an unknown but bounded term (see assumptions below) that represents modeling errors and disturbances associated with the ROM. ### _Safety with Ideal Tracking_ If the ROM and the tracking controller are well-designed, the true reduced order dynamics converges to the ROM and the discrepancy \(d\) vanishes. First, we consider this ideal scenario as reflected by the following assumption. **Assumption 2**.: The tracking controller \(U=K(x,u)\) drives the discrepancy between the true reduced order dynamics and the ROM to zero exponentially. That is, there exist \(A\geq 0\) and \(\lambda>0\) such that \(\forall t\geq 0\): \[\|d\|\leq A\mathrm{e}^{-\lambda t}. \tag{36}\] For simplicity, we assume exponential convergence, although one could also consider asymptotic stability with a class-\(\mathcal{KL}\) function on the right-hand side. Similarly, to simplify our discussion, we choose linear class-\(\mathcal{K}_{\infty}\) functions: \(\alpha(r)=\gamma r\), \(\alpha_{\text{b}}(r)=\gamma_{\text{b}}r\), with \(\gamma,\gamma_{\text{b}}>0\). 
Furthermore, we assume that the gradients of \(h(\phi_{\text{b}}(\theta,x))\) and \(h_{\text{b}}(\phi_{\text{b}}(\theta,x))\) are bounded, i.e., there exist \(D,D_{\text{b}}\geq 0\) such that \(\|\partial h\big{(}\phi_{\text{b}}(\theta,x)\big{)}/\partial x\|\leq D\), \(\forall\theta\in[0,T]\) and \(\|\partial h_{\text{b}}\big{(}\phi_{\text{b}}(\theta,x)\big{)}/\partial x\| \leq D_{\text{b}}\) hold \(\forall x\in\mathcal{S}_{\text{I}}\) with the Euclidean norm \(\|.\|\). These assumptions are relaxed in the next section. Under these assumptions, we show that a time-varying subset of \(\mathcal{C}_{\text{I}}\) is control invariant. We define this set \(\mathcal{C}_{\text{d}}(t)\) by: \[\mathcal{C}_{\text{d}}(t)=\left\{X\!\in\!\mathbb{R}^{N}:\begin{array}{l}H( \theta,X,t)\geq 0,\ \forall\theta\!\in\![0,T],\\ H_{\text{b}}(T,X,t)\geq 0\end{array}\right\}, \tag{37}\] with: \[H(\theta,X,t) =h\big{(}\phi_{\text{b}}(\theta,P(X))\big{)}-\frac{DA\mathrm{e}^{- \lambda t}}{\lambda-\gamma}, \tag{38}\] \[H_{\text{b}}(T,X,t) =h_{\text{b}}\big{(}\phi_{\text{b}}(T,P(X))\big{)}-\frac{D_{\text {b}}A\mathrm{e}^{-\lambda t}}{\lambda-\gamma_{\text{b}}}.\] **Theorem 3**.: _Consider the ROM (1), set \(\mathcal{S}_{\text{I}}\) in (11) and a locally Lipschitz continuous controller \(k:\mathbb{R}^{n}\to\mathcal{U}\) that satisfies (15) with \(\alpha(r)=\gamma r\), \(\alpha_{\text{b}}(r)=\gamma_{\text{b}}r\), \(\forall x\in\mathcal{S}_{\text{I}}\). Furthermore, consider the FOS (29), reduced order state (30), set \(\mathcal{C}\) in (31), set \(\mathcal{C}_{\text{d}}(t)\) in (37)-(38), and Assumption 2. If \(\gamma,\gamma_{\text{b}}<\lambda\), then \(\mathcal{C}_{\text{d}}(t)\subseteq\mathcal{C}\) is forward invariant along (32)._ Proof.: For \(\gamma,\gamma_{\text{b}}<\lambda\), the terms \(DA/(\lambda-\gamma)\mathrm{e}^{-\lambda t}\) and \(D_{\text{b}}A/(\lambda-\gamma_{\text{b}})\mathrm{e}^{-\lambda t}\) in (38) are nonnegative \(\forall t\geq 0\). Hence, \(\mathcal{C}_{\text{d}}(t)\subseteq\mathcal{C}_{\text{I}}\), \(\forall t\geq 0\), and \(X\in\mathcal{C}_{\text{d}}(t)\) implies \(x\in\mathcal{S}_{\text{I}}\). Then, Theorem 2 can be applied, and a controller \(k\) satisfying (15) is guaranteed to exist for all \(X\in\mathcal{C}_{\text{d}}(t)\) since \(x\in\mathcal{S}_{\text{I}}\). Given (38) and (15), the derivative of \(H\) along (32) satisfies: \[\dot{H}(\theta,X,t,k(P(X)),d) \tag{39}\] \[\geq-\gamma h(\phi_{\text{b}}(\theta,x))-DA\mathrm{e}^{-\lambda t }+\frac{\lambda DA\mathrm{e}^{-\lambda t}}{\lambda-\gamma}\] \[\geq-\gamma H(\theta,X,t).\] Similarly, \(H_{\mathrm{b}}(T,X,t,k(P(X)),d)\geq-\gamma_{\mathrm{b}}H_{\mathrm{b}}(T,X,t)\) can be proven. Thus, by Theorem 1 we can conclude the forward invariance of \(\mathcal{C}_{\mathrm{d}}(t)\subseteq\mathcal{C}_{\mathrm{I}}\), that implies a safe FOS. **Remark 1**.: Theorem 3 states that with fast enough tracking of the ROM, the FOS stays safe and evolves in a region where backup set method-based controllers are guaranteed to exist. However, this result is conditioned on ideal exponential tracking (and the technical assumption about the bounded gradients of \(h\) and \(h_{\mathrm{b}}\)). Next, we relax these restrictions. ### _Input-to-State Safe Backup Set Method_ Let us use the following weaker assumption on tracking. **Assumption 3**.: The tracking controller \(U=K(x,u)\) drives the discrepancy between the true reduced order dynamics and the ROM to a _neighborhood_ of zero exponentially. 
That is, there exist \(A,B\geq 0\) and \(\lambda>0\) such that \(\forall t\geq 0\): \[\|d\|^{2}\leq A\mathrm{e}^{-\lambda t}+B. \tag{40}\] Note that this assumption includes the case \(A=0\), i.e., when the discrepancy does not necessarily decay but stays bounded below \(B\). We also remark that the square after the norm of \(d\) is introduced for algebraic convenience only. The assumption is shown to hold for the quadruped example below. When the discrepancy does not decay to zero (\(B\neq 0\)), safety can no longer be formally guaranteed by (15). To remedy this, some CBF approaches add extra robustifying terms to their safety constraints [9, 11]. For example, the approach of _input-to-state safe_ CBFs modifies (6) to \(\dot{h}\big{(}x,k(x)\big{)}\geq-\alpha\big{(}h(x)\big{)}+\sigma\|\nabla h(x) \|^{2}\) with \(\sigma>0\) (where \(\nabla h(x)\) could be replaced with \(\nabla h(x)g(x)\) in case of matched disturbances) [10]. We propose to extend this approach to the _input-to-state safe backup set method_, by modifying (15) to: \[\dot{h}\big{(}\phi_{\mathrm{b}}(\theta,x),k(x)\big{)} \geq-\alpha\big{(}h(\phi_{\mathrm{b}}(\theta,x))\big{)} \tag{41}\] \[+\sigma\bigg{\|}\frac{\partial h(\phi_{\mathrm{b}}(\theta,x))}{ \partial x}\bigg{\|}^{2},\ \forall\theta\in[0,T],\] \[\dot{h}_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}(T,x),k(x)\big{)} \geq-\alpha_{\mathrm{b}}\big{(}h_{\mathrm{b}}(\phi_{\mathrm{b}}(T, x))\big{)}\] \[\quad+\sigma_{\mathrm{b}}\bigg{\|}\frac{\partial h_{\mathrm{b}}( \phi_{\mathrm{b}}(T,x))}{\partial x}\bigg{\|}^{2},\] with tunable parameters \(\sigma,\sigma_{\mathrm{b}}>0\). The approach of input-to-state safe CBFs is able to keep a neighborhood of the safe set invariant even with disturbances, and this neighborhood can be tuned as small as desired by parameter \(\sigma\). We seek to achieve the same results with input constraints using the backup set method. Accordingly, we consider a neighborhood \(\mathcal{S}_{\mathrm{d}}\) of the control invariant set \(\mathcal{S}_{\mathrm{I}}\): \[\mathcal{S}_{\mathrm{d}}\!=\!\bigg{\{}\!x\!\in\!\mathbb{R}^{n}\!\!:\,\,h(\phi_ {\mathrm{b}}(\theta,x))\!\!\geq\!\!-\!B/(4\sigma\gamma),\,\forall\theta\!\in\! [0,T],\bigg{\}}, \tag{42}\] determined by \(\sigma\), \(\sigma_{\mathrm{b}}\), and we redefine set \(\mathcal{C}_{\mathrm{d}}(t)\) in (37) with: \[H(\theta,X,t)\!=\!h\big{(}\phi_{\mathrm{b}}(\theta,P(X))\big{)}\!- \!\frac{A\mathrm{e}^{-\lambda t}}{4\sigma(\lambda-\gamma)}\!+\!\frac{B}{4 \sigma\gamma}, \tag{43}\] \[H_{\mathrm{b}}(T,X,t)\!=\!h_{\mathrm{b}}\big{(}\phi_{\mathrm{b}}( T,P(X))\big{)}\!-\!\frac{A\mathrm{e}^{-\lambda t}}{4\sigma_{\mathrm{b}}(\lambda- \gamma_{\mathrm{b}})}\!+\!\frac{B}{4\sigma_{\mathrm{b}}\gamma_{\mathrm{b}}}.\] Then, we state the invariance of set \(\mathcal{C}_{\mathrm{d}}(t)\) that can be made arbitrarily close to the safe set \(\mathcal{C}\) by increasing \(\sigma\), \(\sigma_{\mathrm{b}}\). **Theorem 4**.: _Consider the ROM (1), set \(\mathcal{S}_{\mathrm{d}}\) in (42) and a locally Lipschitz continuous controller \(k:\mathbb{R}^{n}\to\mathcal{U}\) that satisfies (41) with \(\alpha(r)=\gamma r\), \(\alpha_{\mathrm{b}}(r)=\gamma_{\mathrm{b}}r\), \(\forall x\in\mathcal{S}_{\mathrm{d}}\). Furthermore, consider the FOS (29), reduced order state (30), set \(\mathcal{C}_{\mathrm{d}}(t)\) in (37)-(43), and Assumption 3. 
If \(\gamma,\gamma_{\mathrm{b}}<\lambda\), then \(\mathcal{C}_{\mathrm{d}}(t)\) is forward invariant along (32)._ Proof.: First, we show that the following inequality holds: \[\sigma\bigg{\|}\frac{\partial h(\phi_{\mathrm{b}}(\theta,x))}{ \partial x}\bigg{\|}^{2}-\bigg{\|}\frac{\partial h(\phi_{\mathrm{b}}(\theta,x))} {\partial x}\bigg{\|}\|d\| \tag{44}\] \[\geq\bigg{(}\sqrt{\sigma}\bigg{\|}\frac{\partial h(\phi_{\mathrm{b} }(\theta,x))}{\partial x}\bigg{\|}-\frac{\|d\|}{2\sqrt{\sigma}}\bigg{)}^{2}- \frac{\|d\|^{2}}{4\sigma}\] \[\geq-\frac{A\mathrm{e}^{-\lambda t}+B}{4\sigma}.\] Then, the rest of the proof follows that of Theorem 3: \[\dot{H}(\theta,X,t,k(P(X)),d) \tag{45}\] \[=\dot{h}\big{(}\phi_{\mathrm{b}}(\theta,x),k(x)\big{)}\!+\!\frac{ \partial h\big{(}\phi_{\mathrm{b}}(\theta,x)\big{)}}{\partial x}d\!+\!\frac{ \lambda A\mathrm{e}^{-\lambda t}}{4\sigma(\lambda-\gamma)}\] \[\geq-\gamma h(\phi_{\mathrm{b}}(\theta,x))-\frac{A\mathrm{e}^{- \lambda t}+B}{4\sigma}+\frac{\lambda A\mathrm{e}^{-\lambda t}}{4\sigma(\lambda- \gamma)}\] \[\geq-\gamma H(\theta,X,t),\] cf. (39), where in the second step we used the Cauchy-Schwartz inequality and substituted (44). **Remark 2**.: Theorem 4 states that input-to-state stable tracking of the ROM (i.e., when the discrepancy \(d\) decays to or is within a neighborhood of zero) makes the FOS stay in the set \(\mathcal{C}_{\mathrm{d}}(t)\). This set can be tuned to be as close to the safe set \(\mathcal{C}\) as desired using \(\sigma\) and \(\sigma_{\mathrm{b}}\), cf. (43), and it no longer depends on the bounds of the gradients of \(h\) and \(h_{\mathrm{b}}\). However, we cannot claim the existence of a controller \(k\) satisfying (41) \(\forall x\in\mathcal{S}_{\mathrm{d}}\) anymore. Hence, instead of (19), one may implement a relaxed optimization problem: \[k(x)\!=\!\operatorname*{\arg\min_{\begin{subarray}{c}u\in\mathcal{U}\\ \delta_{i},\delta_{\mathrm{b}}\geq 0\end{subarray}}} \|u-k_{\mathrm{d}}(x)\|_{\mathrm{I}}^{2}+\sum_{i\in\mathcal{I}}p_{i }\delta_{i}^{2}+p_{\mathrm{b}}\delta_{\mathrm{b}}^{2} \tag{46}\] \[\text{s.t.} \dot{h}_{i}(x,u)\!\geq\!-\!\alpha\big{(}\dot{h}_{i}(x)\big{)}\!+ \!\sigma\bigg{\|}\frac{\partial\ddot{h}_{i}(x)}{\partial x}\bigg{\|}^{2}\!\!-\! \delta_{i},\] \[\forall i\!\in\!\mathcal{I},\] \[\dot{h}_{\mathrm{b}}(x,u)\!\geq\!-\alpha_{\mathrm{b}}\big{(}\dot{h}_ {\mathrm{b}}(x)\big{)}\!+\!\sigma_{\mathrm{b}}\bigg{\|}\frac{\partial\ddot{h}_{ \mathrm{b}}(x)}{\partial x}\bigg{\|}^{2}\!\!-\!\delta_{\mathrm{b}},\] with slack variables \(\delta_{i},\delta_{\mathrm{b}}\geq 0\) and penalties \(p_{i},p_{\mathrm{b}}\gg 1\), \(i\in\mathcal{I}\). Formulating provably safe and feasible controllers without this relaxation is subject to future research. **Example 2** (Quadrupedal locomotion).: Consider the Unitree A1 quadrupedal robot shown in Fig. 3. We seek to execute legged locomotion and accomplish the obstacle avoidance task of Example 1. We consider the quadruped as FOS, and we rely on an existing walking controller for locomotion with given speed and yaw rate. As such, the walking tracks the unicycle model in Example 1, which serves as ROM. The quadruped has 18 degrees of freedom and 12 actuated joints. Its motion is described by the configuration \(q\in\mathbb{R}^{18}\), velocities \(\dot{q}\in\mathbb{R}^{18}\), inputs \(U\in\mathbb{R}^{12}\) and holonomic constraints \(c(q)\equiv 0\in\mathbb{R}^{n_{\mathrm{c}}}\) at the \(n_{\mathrm{c}}\) number of contacts between the feet and the ground. 
The dynamics are governed by the Euler-Lagrange equations: \[\begin{split} D(q)\ddot{q}+H(q,\dot{q})&=BU+J(q)^{ \top}\lambda,\\ J(q)\ddot{q}+\dot{J}(q,\dot{q})\dot{q}&=0,\end{split} \tag{47}\] with mass matrix \(D(q)\in\mathbb{R}^{18\times 18}\), Coriolis and gravity terms \(H(q,\dot{q})\in\mathbb{R}^{18}\), Jacobian \(J(q)=\partial c(q)/\partial q\in\mathbb{R}^{n_{\mathrm{c}}\times 18}\), and constraint wrench \(\lambda\in\mathbb{R}^{n_{\mathrm{c}}}\). This yields the FOS (29) with the state \(X=\begin{bmatrix}q^{\top}&\dot{q}^{\top}\end{bmatrix}^{\top}\in\mathbb{R}^{36}\) and expressions: \[F(X)\!\!=\!\!\!\begin{bmatrix}\dot{q}\\ -D(q)^{-1}\!\!\left(\!H(q,\dot{q})\!-\!J(q)^{\top}\!\lambda\!\right)\!\end{bmatrix} \!,\,G(X)\!\!=\!\!\!\begin{bmatrix}0\\ D(q)^{-1}B\end{bmatrix}\!\!. \tag{48}\] During obstacle avoidance, safety is determined by the planar body position \(\xi\) and \(\eta\) and the yaw angle \(\psi\), leading to the reduced order state \(x\in\mathbb{R}^{3}\) of Example 1. These states are elements of the full state \(X\). The corresponding equations in the FOS (47) reduce to the unicycle model (20) if roll and pitch are neglected, thus the unicycle is chosen as ROM. For legged locomotion, we use the inverse dynamics quadratic program based walking controller, \(U=K(X,u)\), specified in [20]. This controller is able to track speed and yaw rate commands in the reduced order input \(u\in\mathbb{R}^{2}\) as long as they are below \(v_{\mathrm{max}}\) and between \(\pm\omega_{\mathrm{max}}\), respectively. We also prescribe the minimum speed \(v_{\mathrm{min}}\) so that the quadruped is not allowed to stop. We use the input-to-state safe backup set method, with details in Example 1, to find safe speed and yaw rate commands within these bounds. Fig. 3 shows high-fidelity simulations of the quadrupedal locomotion2. The speed and yaw rate are commanded using the proposed controller (46), the formulas in Example 1, the parameters in Table I, and the CVXOPT solver [21]. The radius \(R_{\mathrm{O}}\), that the robot's center should stay outside of, consists of the radius of the obstacle (\(0.45\,\mathrm{m}\)) and the size of the quadruped (\(0.3\,\mathrm{m}\)). With the proposed controller, the quadruped successfully navigates around the obstacle as shown by the motion tiles. Observe that safety is maintained with respect to the specification \(h\). Meanwhile, speed and yaw rate commands stay within desired bounds (while their actual value may exceed the bounds). The figure also indicates the tracking performance of the walking controller, by comparing the actual speed \(v_{\mathrm{a}}\) and yaw rate \(\omega_{\mathrm{a}}\) (extracted from \(X\)) to the commands \(v\) and \(\omega\). Indeed, the discrepancy \(d\) between the commanded velocities \(\dot{x}=\begin{bmatrix}v\cos\psi&v\sin\psi&\omega\end{bmatrix}^{\top}\) and the corresponding actual values decays and stays bounded, as in (40) in Assumption 3. Finally, the trajectory with the standard backup set method, (i.e., controller (19) and \(\sigma=0\), \(\sigma_{\mathrm{b}}=0\)) is shown by dashed lines. This case gets closer to safety violations due to lack of robustness to the discrepancy between the ROM and FOS. Footnote 2: See video at: [https://youtu.be/h8-x7-4eqWs](https://youtu.be/h8-x7-4eqWs). This example demonstrates the efficacy of the proposed safety-critical control approach, in which an input constrained ROM is combined with the backup set method and a reliable tracking controller. 
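As a side note on Assumption 3: the discrepancy \(d\) is the difference between the commanded ROM velocity \(\dot{x}=\begin{bmatrix}v\cos\psi&v\sin\psi&\omega\end{bmatrix}^{\top}\) and its measured counterpart, so (40) can be checked from tracking logs like those above by fitting an exponential-plus-constant envelope to \(\|d\|^{2}\). The sketch below is one simple way to do this, not necessarily the authors' procedure; the helper name and the use of SciPy's `curve_fit` are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_discrepancy_bound(t, xdot_cmd, xdot_meas):
    """Fit ||d||^2 <= A*exp(-lambda*t) + B, cf. (40), to logged tracking data.

    t: time stamps, shape (N,); xdot_cmd, xdot_meas: commanded and measured
    ROM velocities [xi_dot, eta_dot, psi_dot], shape (N, 3).
    """
    d_sq = np.sum((xdot_meas - xdot_cmd) ** 2, axis=1)
    model = lambda t, A, lam, B: A * np.exp(-lam * t) + B
    (A, lam, B), _ = curve_fit(model, t, d_sq,
                               p0=[max(d_sq[0], 1e-6), 1.0, max(d_sq[-1], 1e-6)],
                               bounds=(0.0, np.inf))
    # Inflate the fit so the returned (A, B) form a true upper envelope of the data.
    scale = np.max(d_sq / np.maximum(model(t, A, lam, B), 1e-12))
    return A * scale, lam, B * scale
```

Inflating the fitted curve by the worst-case ratio guarantees that the returned \((A,\lambda,B)\) satisfy \(\|d\|^{2}\leq A\mathrm{e}^{-\lambda t}+B\) on the logged data.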
The results show safe behavior on a complex quadrupedal robot during obstacle avoidance.

Fig. 3: Application of the proposed safety-critical control framework with input constrained reduced order model in quadrupedal locomotion. The quadruped safely navigates by tracking the speed and yaw rate synthesized with the unicycle model and the input-to-state safe backup set method.

## IV Conclusions

This paper addressed safety-critical control using reduced order models that have bounded inputs. To formally guarantee safety while respecting input bounds, the backup set method was used. Robustness with respect to the discrepancy between the reduced order model and the full order system was analyzed. Conditions were derived for the safety of the full system, and the input-to-state safe backup set method was proposed to robustify against the above mentioned discrepancy. The efficacy of the proposed control framework was demonstrated by controlling a quadruped for obstacle avoidance while relying on the unicycle model. Future work includes studying the feasibility of the robustified controller.

## Appendix

Proof of Lemma 1.: By definition (11) of \(\mathcal{S}_{\text{I}}\) and Assumption 1, we have:
\[x\in\mathcal{S}_{\text{I}}\implies\phi_{\text{b}}(\theta,x)\in\mathcal{S}_{\text{b}}\subseteq\mathcal{S},\ \forall\theta\geq T. \tag{49}\]
From this, and the fact that:
\[\phi_{\text{b}}(\theta+\vartheta,x_{0})=\phi_{\text{b}}(\theta,\phi_{\text{b}}(\vartheta,x_{0})), \tag{50}\]
holds for any arbitrary \(\theta,\vartheta\geq 0\) and \(x_{0}\in\mathbb{R}^{n}\), we obtain:
\[x\in\mathcal{S}_{\text{I}}\implies\phi_{\text{b}}(T,\phi_{\text{b}}(\vartheta,x))\in\mathcal{S}_{\text{b}},\ \forall\vartheta\geq 0. \tag{51}\]
Furthermore, the definition (11) of \(\mathcal{S}_{\text{I}}\) and (49) give:
\[x\in\mathcal{S}_{\text{I}}\implies\phi_{\text{b}}(\theta,x)\in\mathcal{S},\ \forall\theta\geq 0. \tag{52}\]
Using the property (50) again, we obtain:
\[x\in\mathcal{S}_{\text{I}}\implies\phi_{\text{b}}(\theta,\phi_{\text{b}}(\vartheta,x))\in\mathcal{S},\ \forall\theta\in[0,T],\ \forall\vartheta\geq 0. \tag{53}\]
Thus, (51), (53) and the definition (11) of \(\mathcal{S}_{\text{I}}\) yield (12).

Proof of Lemma 2.: The definition (11) of \(\mathcal{S}_{\text{I}}\) can be re-written as:
\[\mathcal{S}_{\text{I}}=\left\{x\in\mathbb{R}^{n}:\begin{array}{c}h(\phi_{\text{b}}(\theta,x))\geq 0,\ \forall\theta\in[0,T],\\ h_{\text{b}}(\phi_{\text{b}}(T,x))\geq 0\end{array}\right\}. \tag{54}\]
\(\mathcal{S}_{\text{I}}\) is rendered forward invariant by the backup controller \(k_{\text{b}}\) per Lemma 1. Therefore, Nagumo's theorem [22] states:
\[\begin{split}h(\phi_{\text{b}}(\theta,x))=0&\implies\dot{h}\big(\phi_{\text{b}}(\theta,x),k_{\text{b}}(x)\big)\geq 0,\\ h_{\text{b}}(\phi_{\text{b}}(T,x))=0&\implies\dot{h}_{\text{b}}\big(\phi_{\text{b}}(T,x),k_{\text{b}}(x)\big)\geq 0.\end{split} \tag{55}\]
Consider the second condition and let:
\[\tilde{\mathcal{S}}(x)=\{\tilde{x}\in\mathbb{R}^{n}:h_{\text{b}}(\phi_{\text{b}}(T,x))\geq h_{\text{b}}(\phi_{\text{b}}(T,\tilde{x}))\geq 0\}. \tag{56}\]
Note that \(\forall x\in\mathcal{S}_{\text{I}}\), \(\tilde{\mathcal{S}}(x)\) is nonempty and \(x\in\tilde{\mathcal{S}}(x)\), thus:
\[\dot{h}_{\text{b}}\big(\phi_{\text{b}}(T,x),k_{\text{b}}(x)\big)\geq\inf_{\tilde{x}\in\tilde{\mathcal{S}}(x)}\dot{h}_{\text{b}}\big(\phi_{\text{b}}(T,\tilde{x}),k_{\text{b}}(\tilde{x})\big).
\tag{57}\] Now let us define the set \(\tilde{\mathcal{S}}_{r}\) for \(r\geq 0\) and \(\tilde{\alpha}_{\text{b}}:\mathbb{R}_{\geq 0}\to\mathbb{R}\): \[\tilde{\mathcal{S}}_{r}=\{\tilde{x}\in\mathbb{R}^{n}:r\geq h_{ \text{b}}(\phi_{\text{b}}(T,\tilde{x}))\geq 0\}. \tag{58}\] \[\tilde{\alpha}_{\text{b}}(r)=-\inf_{\tilde{x}\in\tilde{\mathcal{S }}_{r}}\dot{h}_{\text{b}}\big{(}\phi_{\text{b}}(T,\tilde{x}),k_{\text{b}}( \tilde{x})\big{)}. \tag{59}\] Then, (57) is equivalent to: \[\dot{h}_{\text{b}}\big{(}\phi_{\text{b}}(T,x),k_{\text{b}}(x)\big{)}\geq-\tilde {\alpha}_{\text{b}}\big{(}h_{\text{b}}(\phi_{\text{b}}(T,x))\big{)}. \tag{60}\] Note that \(\tilde{\alpha}_{\text{b}}\) is monotonically increasing with respect to \(r\) since the \(\inf\) is taken over a larger set \(\tilde{\mathcal{S}}_{r}\) as \(r\) grows. Furthermore, \(\tilde{\alpha}_{\text{b}}\) satisfies \(\tilde{\alpha}_{\text{b}}(0)\leq 0\) based on (55). Therefore, there exists \(\alpha_{\text{b}}\!\in\!\mathcal{K}_{\infty}\) such that \(\alpha_{\text{b}}(r)\!\geq\!\tilde{\alpha}_{\text{b}}(r),\ \forall r\!\geq\!0\). This, together with (60), leads to the second statement in (14). The first statement can be proven the same way: showing the existence of \(\alpha_{\theta}\!\in\!\mathcal{K}_{\infty}\) for each \(\theta\!\in\![0,T]\) and defining \(\alpha\!\in\!\mathcal{K}_{\infty}\) such that \(\alpha(r)=\max_{\theta\in[0,T]}\alpha_{\theta}(r)\). **Acknowledgment.** We thank Albert Li and Andrew Taylor for discussions about safety with reduced order models, and Wyatt Ubellacker for his invaluable help in synthesizing low-level controllers for the quadruped.
2310.04410
Evidence for Spatial Separation of Galactic Dust Populations
We present an implementation of a Bayesian mixture model using Hamiltonian Monte Carlo (HMC) techniques to search for spatial separation of Galactic dust populations. Utilizing intensity measurements from Planck High Frequency Instrument (HFI), we apply this model to high-latitude Galactic dust emission. Our analysis reveals a strong preference for a spatially-varying two-population dust model over a one-population dust model, when the latter must capture the total variance in the sky. Each dust population is well characterized by a single-component spectral energy distribution (SED) and accommodates small variations. These populations could signify two distinct components, or may originate from a one-component model with different temperatures resulting in different SED scalings. While no spatial information is built into the likelihood, our investigation unveils large-scale spatially coherent structures with high significance, pointing to a physical origin for the observed spatial variation. These results are robust to our choice of likelihood and of input data. Furthermore, this spatially varying two-population model is the most favored from Bayesian evidence calculations. Incorporating IRAS 100 $\mu$m to constrain the Wein-side of the blackbody function, we find the dust populations differ at the 2.5$\sigma$ level in the spectral index ($\beta_d$) vs. temperature ($T_d$) plane. The presence of multiple dust populations has implications for component separation techniques frequently employed in the recovery of the cosmic microwave background.
Corwin Shiu, Steven J. Benton, Jeffrey P. Filippini, Aurélien A. Fraisse, William C. Jones, Johanna M. Nagy, Ivan L. Padilla, Juan D. Soler
2023-10-06T17:57:03Z
http://arxiv.org/abs/2310.04410v3
# Evidence for Spatial Separation of Galactic Dust Components ###### Abstract We present an implementation of a Bayesian mixture model using Hamiltonian Monte Carlo (HMC) techniques to search for spatial separation of Galactic dust components. Utilizing intensity measurements from _Planck_ High Frequency Instrument (HFI), we apply this model to high-latitude Galactic dust emission. Our analysis reveals a strong preference for a spatially-varying two-population dust model in intensity, with each population being well characterized by a single-component dust spectral-energy distribution (SED). While no spatial information is built into the likelihood, our investigation unveils spatially coherent structures with high significance, pointing to a physical origin for the observed spatial separation. These results are robust to our choice of likelihood and of input data. Furthermore, they are favored over a single-component dust model by Bayesian evidence calculations. Incorporating _IRAS_\(100\,\mu m\) to constrain the Wein-side of the blackbody function, we find the dust populations differ at the \(2.5\sigma\) level in the spectral index (\(\beta_{d}\)) vs. temperature (\(T_{d}\)) plane. The presence of multiple dust populations has implications for component separation techniques frequently employed in the recovery of the cosmic microwave background. ISM: dust, extinction -- ISM: structure -- Submillimeter: diffuse background ## 1 Introduction Understanding Galactic dust emission in the millimeter (mm) wavelength range plays an important role in cosmic microwave background (CMB) science. This diffuse emission, originating from \(\sim\)20 K thermal emission of interstellar dust grains within our Milky Way galaxy, peaks at \(\sim\)2 THz, but remains significant relative to the polarized CMB anisotropy at frequencies above about 100 GHz. Understanding and characterizing Galactic dust emission is essential for accurately modeling its emission to correct for its distortions of the underlying CMB signal. The properties of interstellar dust grains, including their size and shape, composition, distribution, and temperature, all influence the resulting radiation (Li & Draine (2001), Draine & Li (2007), Draine & Fraisse (2009)). In the context of component separation, the complexity of interstellar dust is typically flattened; its spectral energy distribution (SED) is modeled as a modified blackbody, with emissions resembling that of a thermal blackbody at temperature \(T_{d}\) and a fractional emissivity that scales with frequency according to its spectral index \(\beta_{d}\). (Brandt et al. (1994); Finkbeiner et al. (1999)). Much of our understanding of the mm-wave emission of dust comes from two satellite data sets: first from _IRAS_(Neugebauer et al. (1984)) and then _Planck_(Collaboration et al. (2020), Planck Collaboration et al. (2020)). _Planck_ uses three bands, 353, 545 and 857 GHz, in combination with DIRBE _IRAS_\(100\,\mu m\) to construct an all-sky model of dust-emission parameterized by a single-component emissivity spectral index and temperature (\(\beta_{d},T_{d}\)) that vary along the line-of-sight (Planck Collaboration et al. (2014), Planck Collaboration et al. (2014)). Meisner and Finkbeiner observed that there is a flattening of the SED model at lower frequencies (100-350 GHz) that departs from a single-component SED. 
Instead, they proposed a two-component model that incorporates the superposition of a hot and cold component to more accurately capture the shape of the dust SED at these lower frequencies (Finkbeiner et al. (1999); Meisner & Finkbeiner (2015)). Furthermore, Hensley and Draine have shown that dust grains with distinct sizes and compositions will attain different temperatures, even in the same radiative environment. They have developed a physically motivated two-component dust model positing that dust emission originates from a mixture of carbonaceous and silicate sources (Hensley & Bull (2018), Hensley & Draine (2021)). In their recent work, however, Hensley and Draine state that the observed lack of frequency dependence in the far-infrared polarization fraction is most naturally explained by a single-composition dust model (Hensley & Draine (2022)). Given the inherent complexity of Galactic dust emission, there exists considerable scope for refining modeling efforts. In this paper, we exclusively examine dust emission in intensity and present an approach that looks for SED variations in the spatial domain. We identify variations of the SED and assign them to different global populations of dust. These dust populations, we believe, have different optical properties that can arise due to differences in their physical characteristics or their respective radiative environment. In section 2.1, we first describe a generic method for linear regression with uncertainties. Then, in section 2.2, we utilize this method to build a likelihood model for dust emission with multiple populations. In section 3, we demonstrate that this likelihood is unbiased and can recover components even in highly mixed situations. In section 4, we describe the preprocessing steps for this analysis, and section 5 provides the primary result of this analysis. In section 6, we look for the same features in the region observed by the Spider balloon experiment (Filippini et al. (2010), Rahlin et al. (2014), Gualtieri et al. (2018)), a relatively large (\(\sim 5\%\)) and clean patch of the sky used for CMB B-mode studies. Then, in section 7 explore several analysis choices and how these choices affect the results. In section 8, we calculate the Bayesian evidence to determine which model is most favored. Lastly, in section 9, we fit a modified blackbody model to the multiple populations. ## 2 Method ### "Mahalanobis" regression in the generic case with X,Y uncertainties Traditional linear regression assumes that independent variables (the abscissa) are known with certainty. In many real-world scenarios, however, uncertainties and errors in data collection can affect both the abscissa and ordinate. Indeed, in our analysis of the _Planck_ data, the uncertainties are of a similar order to each other. Ignoring or underestimating these uncertainties when using standard methods, such as ordinary least squares, can lead to biased parameter estimates. Handling "Errors-in-variables" in linear regression is a challenging topic. Unlike the standard approach, incorporating uncertainties in both the abscissa and ordinate requires sophisticated methods to account for uncertainty propagation. Moreover, there is no consensus on the best approach (Hogg et al. (2010), Willick (1994)). Orthogonal regression is a popular approach that minimizes the orthogonal distance of all the points to a regression line. 
While the orthogonal distance is the smallest geometric distance, however, it may not correspond to the smallest distance in probability space in cases where errors in the abscissa and ordinate are unequal. We make a modification to the method to minimize the Mahalanobis distance between each point \(i\) and the regression line \(k\). Therefore, we find the distance along a unit-vector \(\mathbf{u_{i}}\) whose direction is described by the angle \(\phi\), \[\phi=\arctan\left(-\frac{1}{\tan\theta}\frac{\sigma_{y}^{2}}{\sigma_{x}^{2}}\right) \tag{1}\] and the uncertainties of each point \(i\) is also projected along \(\mathbf{u_{i}}\). The derivation of this expression can be found in Appendix A. We call this method "Mahalanobis regression". This regression method, similar to orthogonal regression, fails to capture the variance along the regression line. Handling the full uncertainty in the generic case is computationally challenging. Rather than reducing dimensionality by using a regression model that has two variables, namely a slope and an offset, a fully generic regression would require \(N+2\) variables to account for \(N\) data points. Each observed point represents a sample from a two-dimensional Gaussian distribution, originating from a pair of "true" values \((x^{*},y^{*})\) within a regression model. Figure 1: Diagram of fitting a point \(\mathbf{Z_{i}}\) with covariance \(\Sigma_{i}\) to a model line \((\theta,b)\). A vector \(\mathbf{v}\) perpendicular to the slope \(\theta\) is constructed so that we can project the point \(\mathbf{Z_{i}}\) onto the line such that the perpendicular distance is given by \(\Delta_{i}=\mathbf{v}^{\mathsf{T}}\mathbf{Z_{i}}+b\cos\theta\). While this may be the minimum geometric distance, the minimum Mahalanobis distance is along the vector \(\mathbf{u_{i}}\). Then the uncertainties \(\Sigma_{i}\) are projected along the vector \(\mathbf{u_{i}}\) to the line: \(S_{i}=\mathbf{u_{i}^{\mathsf{T}}}\Sigma_{i}\mathbf{u_{i}}\). However, this approach becomes computationally impractical when considering the construction of a mixture model in the following section. Therefore, we chose to reduce the dimensionality of the problem using Mahalanobis regression as described. ### Gaussian Mixture Model We introduce a generic Gaussian Mixture Model (GMM), in which a given map pixel \(i\) can belong to one of \(K\) populations. Depending on the specifics of the analysis, certain parameters can be constrained to have a fixed value of zero. We parameterize each population by its slope \(\theta_{k}\), calibration offset \(b_{k}\), and assign each point \(i\) a population using \(q_{i,k}\). \(q_{i,k}\) is a boolean value for point \(i\) to belong in population \(k\). We define \(q_{i,k}=0\) as a rejection of point \(i\) from population \(k\), forcing that point to be assigned to a different population. 
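To make the construction concrete, the sketch below combines the Mahalanobis projection of the previous subsection (eq. 1 above and eqs. 3-4 below) with a \(K\)-population mixture in NumPyro. It is a simplified illustration rather than the analysis code: the discrete assignments \(q_{i,k}\) of the likelihood introduced below (eq. 2) are marginalized out analytically (as in eq. 10 of section 8), the priors loosely follow those described in section 5, and all function and variable names are ours.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def mixture_model(x, y, sx, sy, K=2):
    """K 'Mahalanobis regression' lines; assignments marginalized analytically."""
    theta = numpyro.sample("theta", dist.Uniform(0.0, jnp.pi / 2).expand([K]))
    b = numpyro.sample("b", dist.TruncatedNormal(0.0, 1.0, low=-2.0, high=2.0).expand([K]))
    V = numpyro.sample("V", dist.Uniform(0.0, 2.0 * jnp.var(y)).expand([K]))
    p = numpyro.sample("p", dist.Dirichlet(jnp.ones(K)))

    # Orthogonal residual (eq. 3) and Mahalanobis direction (eq. 1), per point and population.
    delta = -jnp.sin(theta) * x[:, None] + jnp.cos(theta) * (y[:, None] - b)
    phi = jnp.arctan(-(1.0 / jnp.tan(theta)) * (sy[:, None] ** 2 / sx[:, None] ** 2))
    dist_m = delta / jnp.cos(jnp.abs(phi) + theta - jnp.pi / 2)
    # Point covariance projected along that direction (eq. 4), plus intrinsic scatter V_k.
    var = jnp.cos(phi) ** 2 * sx[:, None] ** 2 + jnp.sin(phi) ** 2 * sy[:, None] ** 2 + V
    # Population-summed likelihood per pixel (the marginalized form, cf. eq. 10).
    logL = jnp.log(p) - 0.5 * jnp.log(2 * jnp.pi * var) - 0.5 * dist_m ** 2 / var
    numpyro.factor("loglike", logsumexp(logL, axis=-1).sum())

# Example usage:
# mcmc = MCMC(NUTS(mixture_model), num_warmup=500, num_samples=1000)
# mcmc.run(jax.random.PRNGKey(0), x, y, sx, sy)
```

Sampling the assignments explicitly, as done in the analysis, additionally yields the per-pixel probabilities \(\pi_{i,k}\); the marginalized form above is sufficient to recover the global parameters \((\theta_{k},b_{k},V_{k},p_{k})\).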
We fit this model to a scatter plot of points using _Mahalanobis regression_, as defined in the previous section, performed by maximizing the following likelihood: \[\mathcal{L}(\{q_{i,k}\}, \theta_{k},b_{k},V_{k},p_{k})=\] \[\prod_{i}\prod_{k}\left[\frac{1}{\sqrt{2\pi(\Sigma_{i,k}+V_{k})}}\right.\] \[\times\left.\exp\left(-\frac{(\Delta_{i,k}/\cos(|\phi|+\theta- \pi/2))^{2}}{2(\Sigma_{i,k}+V_{k})}\right)\right]^{q_{i,k}} \tag{2}\] \[\Delta_{i,k}=\mathbf{v}(\theta_{k})^{\mathrm{T}}Z_{i}-b_{k}\cos( \theta_{k})=-\sin(\theta_{k})x_{i}+\cos(\theta_{k})(y_{i}-b_{k})\] (3) \[\Sigma_{i,k}=\mathbf{u_{i}}(\theta_{k})^{\mathrm{T}}\left(\begin{array} []{cc}\sigma_{x}^{2}&0\\ 0&\sigma_{y}^{2}\end{array}\right)\mathbf{u_{i}}(\theta_{k}) \tag{4}\] The orthogonal distance of point \(i\) to line \(k\) is represented by \(\Delta_{i,k}\). Then, the Mahalanobis distance can be computed by simple trigonometry: \(\Delta_{i,k}/\cos(|\phi|+\theta-\pi/2)\). To get the uncertainties, we use eq. 4 to project the covariance of each data point along the Mahalanobis distance, \(\mathbf{v}\), resulting in the scalar value \(\Sigma_{i,k}\). Furthermore, we assign an additional variance term \(V_{k}\) to each population, representing the intrinsic scatter fitted for that population. We further impose a prior on \(q_{i}\) of the form of a Dirichlet-Multinomial distribution: \[q_{i,k}\sim\mathrm{DirMult}(n=1,\{p_{A},p_{B},\cdots p_{K}\}) \tag{5}\] The hyperparameters \(\{p_{A}\cdots p_{K}\}\) have an additional constraint that \(\sum_{k}p_{k}=1\) so that the prior distribution therefore has an expected value of \(p_{k}\) for each population \(k\). This has a physically motivated interpretation that \(p_{k}\) represents the global fraction of points belonging to the \(k^{\mathrm{th}}\) population. In the common case where there are only two populations, this distribution reduces to the beta-binomial distribution. Drawing from this prior can be thought of as a two-step process: first sample from a Beta(\(\alpha=p_{A},\beta=1-p_{A}\)) to get a probability \(\pi_{i,A}\) for each point, then run a Bernoulli trial with probability \(\{\pi_{i,A},(1-\pi_{i,A})\}\) to determine the assignment \(q_{i}\) of this point. With multiple populations, this can be conceptually generalized to drawing a probability from a Dirichlet distribution and then drawing from a categorical distribution. We treat \(\{p_{A}\cdots p_{K}\}\) as global hyper-parameters, and fit for them simultaneously alongside the other parameters. This likelihood construction limits the propagation of noise into our final population assignments: statistically indistinct points will largely be determined by the prior with \(\pi_{i,A}=p_{A}\). Effectively, these points are assigned to a population based on the prevalence of that population. ### Implementation in Python Scikit-learn is perhaps the most well-established and widely used open-source implementation of GMMs (Pedregosa et al. (2011)). The code is based on the Expectation-Maximization (EM) algorithm (Dempster et al. (1977)). While this algorithm is computationally efficient and converges far faster than Monte Carlo methods, EM methods provide only a point estimate of the MLE. Monte Carlo methods, on the other hand, naturally estimate uncertainty by sampling the posterior distribution and are therefore applied in this analysis. Additionally, we need estimates of \(\pi_{i,k}\), the probability a given pixel belongs to a population \(k\). Therefore, we need to sample the full likelihood (eq. 
2), including the discrete variables, \(q_{i,k}\), which is a task not handled by all probabilistic programming packages (Team (2022)). This analysis is computationally challenging, because \(N\) is the number of pixels in our region of interest, of order \(10^{5}\), and the probability volume is \(\sim K^{N}\), representing an enormous space for MCMCs. The computational cost of any Monte-Carlo algorithm can be expressed as \(\mathcal{O}(M*N)\) where \(M\) is the number of Monte-Carlo samples and \(N\) is the dimensionality. This is because proposing new states scales linearly with the number of dimensions. However, each proposal step is not independent; for a random-walker algorithm, \(M\sim\mathcal{O}(N)\) generates an independent point (Neal (2011)). Therefore, a Metropolis-Hastings algorithm is expected to scale as \(\mathcal{O}(N^{2})\) making it computationally impossible to use rejection samplers in high-dimensional spaces. Hamiltonian Monte-Carlo (HMC) instead uses dynamical properties to more efficiently generate independent points (Duane et al. (1987)). The proposals are more effective for exploring high-dimensional spaces, making them particularly useful for complex Bayesian inference. Independent points scale according to \(M\sim N^{\frac{1}{4}}\)(Neal (2011), Beskos et al. (2010)), so the total algorithm scales as \(\mathcal{O}(N^{\frac{1}{4}})\). This analysis leverages _NumPyro_, which is a probabilistic programming language that relies on _JAX_ for automatic differentiation of functions (Phan et al. (2019), Bingham et al. (2019)) and efficiently generates HMC samples. ## 3 Validations on Simulations To validate this analysis method, we can generate data consisting of two populations with similar slopes and show that the pipeline is capable of distinguishing the populations. Figure 2 shows an example of such a test, where two populations with very different proportions (\(N_{\{0,1\}}\) = \(\{200,2000\}\)) and input slopes (\(\theta_{\{0,1\}}\) = \(\{0.10,0.12\}\)) are observed with highly overlapping populations. In this simulated analysis, we do not include an offset \(b_{k}\) = 0 or variance \(V_{k}\) = 0 in the model. The figures then differ on whether they include the fitted hyperparameter \(p_{k}\) in the analysis. On the left, we use the nominally described method, and we are able to accurately distinguish points that belong to each population in the extremes with high probability \(\pi_{i}\). Points clustered in the middle, well within the uncertainties for either model \(\theta_{\{0,1\}}\), are preferentially assigned to the population that is more numerous. The fitted hyperparameter \(p_{A}\) can be interpreted as the global population fraction. In the case where a point's uncertainty makes it indistinguishable from both populations, \(p_{A}\) acts as a prior on how to assign that point through the \(\pi_{i}\)'s. On the right, we disable the hyperparameter, \(p_{k}\), and instead draw \(\pi_{i}\) from a uniform prior ranging from 0 to 1. In this case, points clustered between the populations are assigned \(\pi_{i}\approx 0.5\) indicating no preference for either population. However, due to the difference in prevalence of the two populations, there is an over-assignment of points to the blue population, introducing a bias in the recovered slope. While a uniform prior may seem impartial at the individual data point level, its effects are less innocent on a global scale. 
A uniform distribution is equivalent to a Beta distribution with fixed \(\alpha\) = \(\beta\) = 1, resulting in a mean of 0.5. The presence of this uniform prior is globally informative, leading to incorrect assignments of populations and ultimately causing a failure in the recovery of the simulated inputs. For this reason, we include the fitted hyperparameter \(p_{k}\) in all configurations when running the data analysis. ## 4 Preprocessing Pipeline We begin with the complete set of full-mission _Planck_ HFI total intensity maps: 100, 143, 217, 353, 545, and 857 GHz1. The latter two HFI maps are converted from intensity units (MJy/sr) to thermodynamic units (\(\mu K_{cmb}\)). All maps are then smoothed to a common 10-arcminute resolution. Next, the cosmic microwave background anisotropies are subtracted to get a clean measurement of dust. For this, we use the _Planck_ 2018 component-separated CMB map from their SMICA pipeline 2. All maps are converted back to MJy/sr units following the _IRAS_ convention, assuming a SED \(I_{\nu}\propto 1/\nu\). Footnote 1: _Planck_ public data release 3.01. In this document, all.fits files are found on the _Planck_ Legacy Archive Noise estimates are needed per-pixel for this analysis. For the Planck dataset, we use an ensemble of FFP10 simulations, pixelized and converted to MJy/sr units. We found that the half-mission difference maps have more variance than expected from FFP10 simulations for the Planck sub-mm channels: 545 GHz and 857 GHz. In the interest of making a conservative set of assumptions, the noise estimates used for these frequencies are scaled up by 50%, which is a scaling more than necessary to account for the observed variance. Because the signal-to-noise ratio is so high in these channels, the conclusions of this analysis are not sensitive to this addition of noise. The subtraction of the SMICA CMB template introduces additional noise. We can estimate the inherent noise of the CMB template by looking at the variance of the half-mission difference map and scaling by a factor of four3. For the cleanest 40% of the sky and at \(N_{side}\) = 64, the half-mission difference variance is approximately \(\sim 4.4\,\mu K_{cmb}^{2}\). Therefore, the estimated pixel noise comes from the quadrature addition of FFP10 simulations with this estimated CMB template noise term. This factor only increases the uncertainty of _Planck_ 217 GHz by 5% and _Planck_ 353 GHz by 1%. At 545 GHz and 857 GHz, the CMB template noise addition term is negligible. Footnote 2: COM_CMB_IQU-smica_2048_R3_00_full.fits Footnote 3: a factor of two due to data splits, and another factor of two due to considering the distribution of the difference of two Normals. Zero-level offsets must be applied to all these dust maps to avoid monopole artifacts in scaling. A typical approach is to use HI as a tracer of dust and to require that when HI trends to zero, the dust emission is also expected to be zero (Planck Collaboration et al. (2014a)). In the interest of mak \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Planck Band & 100 & 143 & 217 & 353 & 545 & 857 \\ \hline Unit Conversion & 243.5 & 371.1 & 482.4 & 288.1 & 58.02 & 2.29 \\ (MJy/sr \(K_{cmb}^{-1}\)) & & & & & & \\ Avg. Noise level & 0.7 & 1.0 & 2.3 & 1.5 & 2.0 & 1.9 \\ (MJy/sr) \(\times 10^{3}\) & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Key numbers for the preprocessing pipeline in this analysis. 
The entries in the first row are re-derived unit conversions used to convert thermodynamic units (\(K_{cmb}\)) to flux density following the _IRAS_ convention of a \(I_{\nu}\propto\nu^{-1}\). RIMO bandpasses were used in this calculation. All values in this row are consistent with _Planck_’s published values to less than half a percentage Planck Collaboration et al. (2014a). The second row shows the average noise level, computed from an ensemble of FFP10 simulations, converted to \(MJy/sr\) over the LR40 analysis region. ing a minimal set of assumptions, we can circumvent this issue by including this monopole offset in our fits. We then mask the resulting maps to different regions and excise point sources 4. Lastly, we pixelate the maps from their native \(N_{side}\) to \(N_{side}=32,64,128\), which effectively applies a \(\sim\) 110, 55, and 27 arcmin beam to these maps, respectively. We have both computational and astrophysical motivations for this. From an astrophysical perspective, we encounter the most problematic foregrounds for CMB component separation at the largest angular scales. Furthermore, we can neglect the cosmic infrared background (CIB) at these angular scales, as confirmed by appropriately masking and pixelating derived CIB templates (Lenz et al. (2019)). Footnote 4: HFI_Mask_PointsSrc_2048_R2.00.fits We neglect other sources of foregrounds in intensity in this analysis. We do not model free-free and synchrotron emission, as they are expected to be substantially less significant at the mm and sub-mm wavelengths (Collaboration et al. (2020)). Furthermore, CO emission is primarily contained in the Galactic plane, and we anticipate that the contamination in our high-latitude region of the sky will be minimal Planck Collaboration et al. (2014c). ## 5 Global Dust Populations in LR40 We first conducted this analysis masking the Galactic plane 5 over the cleanest 40% of sky, LR40. We used the full likelihood as described in equation 2 with K=2 populations. Physically, this model corresponds to two dust populations, with a mean frequency scaling \(\theta_{k}\), and with some inherent variability inside each population parameterized by \(V_{k}\). Therefore, this model expects dust scaling to be a distribution of values rather than a single true population value obscured by statistical noise. The role of modeling \(V_{k}\) will be discussed in a future section 7.1. This model allows for miscalibration between frequency pairs and corrects for this by fitting a monopole calibration parameter. Footnote 5: HFI_Mask_GalPlane-apo0_2048_R2.00.fits While there is a preference for maximally uninformative priors, mixture models are highly multi-modal and susceptible to being stuck at local extrema (Chen (1995), Wu et al. (2016)). Therefore, in this analysis, we chose largely uninformative priors around realistic values, while not extending infinite support for all values. The slope was parameterized in angle from the \(x\)-axis and flat between \([0,\pi/2]\) to not prefer any slope aside from a positive relationship between dust emission. The monopole correction parameter has a truncated Normal distribution \(b\sim\) Figure 2: We conducted two validation analyses using Mahalanobis regression (eq 2) on the same input dataset. On the left, we assumed two distinct populations \((m_{0},m_{1})\) and estimated a global population fraction parameter (\(p_{A}\)) without including offset or additional variance parameters (\(b_{k}=0,V_{k}=0\)). 
On the right, the hyperparameter \(p_{A}\) is turned off and the prior probability of each point \(\pi_{i}\) is instead drawn from a uniform distribution. These fits are labeled “Mix. fit” for the two populations respectively. These are compared to “Single fit,” which is the Mahalanobis regression for each of the input populations. The colors represent the probability of a data point \(p_{i}\) belonging to population 1. On the left, even with two highly overlapping populations, we recover both the intrinsic relationship and can distinguish points to their respective population with high confidence. On the right, without this global hyperparameter, points with an equivalent likelihood of belonging to either population fit \(\pi_{i}\sim\) 0.5. Because the steeper blue population intrinsically has a far smaller prevalence, this equal assignment biases the slope downward, and we fail to recover the input slopes. Figure 4: Fitted dust populations over four different frequency pairings for LR40. The likelihood has no inherent spatial information; nevertheless, in all scenarios, we identify two distinct dust populations with spatial coherence. The top (green) population is the dominant Galactic dust emission, with an increasing amount of statistical confidence the closer we are to the Galactic plane. The bottom (blue) population is a dust population that contains a different frequency scaling and consists of isolated dust clouds. Figure 3: Fitted populations over four different frequency pairings for LR40. Each data point is shown in gray, with errors in the abscissa and ordinate. Additionally, the data points in the panel are color-coded by \(\pi_{i}\), representing the confidence that point belongs to each of the dust populations. The dashed line shows the population fitted mean value. \(\mathcal{N}(0,1,\mathrm{low}=-2,\mathrm{high}=2)\) that is practically flat across all relevant values. We initially used an InvGamma(\(\epsilon\), \(\epsilon\)) prior for the variance due to its favorable properties, including (1) being a conjugate prior for a Gaussian, (2) support for infinite positive variance models, and (3) approximate flatness for small \(\epsilon\) values. However, we discovered that this prior may lead to pathological fits. Sometimes, the fitted variance is driven by a few outliers and may be many times larger than the variance of the data itself. To address this, we instead selected the prior on the variance to a uniform distribution ranging from 0 to 2\(\times\) the variance of the underlying data. Figure 3 shows the results of the fits to the LR40 data; table 2 summarizes the fitted parameters. The fitted slopes are consistent with our general expectation from _Planck_ all-sky average values \((\beta_{d},T_{d})\sim(1.53,19.6\,K)\). We observe an alternate population over \(\sim\)15-20% of the analyzed sky at LR40. The fractional separation between the two population slopes, \((m_{1}-m_{0})/m_{0}\), is more pronounced when the frequencies have a larger difference between them. This is consistent with a dust frequency scaling behavior because if the observed dust is composed of several populations, it becomes advantageous to make comparisons with a lower frequency map. This is due to the longer lever arm it provides, facilitating the differentiation of various SEDs. However, there is a natural trade-off, because lower-frequency maps have lower signal-to-noise. 
At 857\(\times\)143 GHz, the identification of multiple populations is not observed because the uncertainty for 143 GHz is larger than the separation of the two populations. For this reason, the analysis with 143 GHz or 100 GHz is not included in table 2. In figure 3, each plot is derived from a pair of HFI maps, one a sub-mm channel (545 GHz,857 GHz), the other a lower frequency map (217 GHz,353 GHz), and every data point has a probability \(\pi_{i,k}\) associated with it that describes the probability the point belongs to the \(k^{\mathrm{th}}\) population. Because this is treated as a Bernoulli process, each point can be converted to a statistical significance, \[\sigma_{i,k}=\frac{\pi_{i,k}}{\sqrt{\pi_{i,k}(1-\pi_{i,k})}} \tag{6}\] Then, we can transform these pixels back into the spatial domain and plot them in Figure 4. Recall that no spatial information is built into the likelihood in equation 2. Nevertheless, each population that is fitted shows a coherent spatial structure. We observe a primary dust population in green, with high confidence, originating from dusty regions near the Galactic plane, and we identify isolated blue dust clouds that indicate the presence of a distinct population. ### Comparison to known sources We can compare our identified dust population to a dust extinction map. One such map is the Draine-Li 2007 dust extinction map6. A visual comparison between this map and our most sensitive cross, 857\(\times\)353 GHz, only shows a weak correlation. While some of the denser dust regions show up in the dust population mask, our analysis primarily identifies clouds on the eastward side of the Galactic plane. A simple Pearson correlation coefficient between a dust extinction map and our own analysis, over LR40, puts a correlation of \(\rho=0.32\). Clearly, dust density itself cannot be a good explanation for the observed population distribution. Footnote 6: COM_ComMap-Dust-DL07-AvMaps2048_R2.00.fits Polarized radio loops are the largest structures in the sky and cause extended filamentary structures extending from the Galactic Plane (Vidal et al. (2015)). They are believed to be caused by supernovae remnants locally, and while brightest at radio frequencies from synchrotron radiation, they are also visible in the mm and sub-mm (Liu et al. (2014)). Indeed, the largest loops: I, II, III\({}^{\prime}\) are clearly missing in the identified dust population. However, smaller more localized bubbles may be responsible for some of the identified structures Joubaud et al. (2019). A more systematic and careful comparison is warranted but beyond the scope of this paper. ## 6 Analysis over the Spider Region The investigation of dust properties in high-Galactic latitudes holds considerable significance in the context of CMB studies. The conventional approach has focused on observing smaller sky areas, selected for reduced contamination from foreground sources (SPIDER Collaboration et al. (2022), Ade et al. (2021), Adachi et al. (2022)). We repeat this analysis over the Spider region to identify if we can detect multi-population dust in this region: a high-Galactic latitude region \begin{table} \begin{tabular}{l c c c c c c} Freq. Pair & \multicolumn{2}{c}{Slope} & \multicolumn{2}{c}{Std. 
\((MJy/sr)\)} & \multicolumn{2}{c}{Global frac} \\ & \(m_{0}\) & \(m_{1}\) & \(10^{3}\cdot\sqrt{V_{0}}\) & \(10^{3}\cdot\sqrt{V_{1}}\) & \(p_{0}\) & \(p_{1}\) \\ \hline 857\(\times\)545 & 0.328 & 0.362 & 10.3 & 17.3 & 0.67 & 0.33 \\ 857\(\times\)353 & 0.093 & 0.110 & 5.4 & 10.2 & 0.78 & 0.22 \\ 857\(\times\)217 & 0.021 & 0.027 & 1.8 & 3.2 & 0.84 & 0.16 \\ 545\(\times\)353 & 0.284 & 0.310 & 1.6 & 3.2 & 0.85 & 0.15 \\ 545\(\times\)217 & 0.064 & 0.074 & 0.8 & 2.2 & 0.90 & 0.10 \\ \hline \end{tabular} \end{table} Table 2: Fitted parameters for different pairs of analysis in \(LR40\) and at \(N_{side}=64\). Slopes are presented as \(m=\tan(\theta)\), and the variance \(V_{k}\) is the variance of the residuals. Note that the variance and slopes are in different units and cannot be directly compared. Instead, these should be compared to the noise levels of table 1 to see the increase in variance relative to the inherent noise level. The expected slopes between frequency pairs obtained with _Planck_ all-sky average values (\(\beta_{d},T_{d}=1.53,19.6\) K Planck Collaboration et al. (2020)) are [0.356, 0.102, 0.022, 0.287, 0.062] in descending order in the table. (\(|b|>20^{\circ}\)) in the southern hemisphere comprised of \(\sim\)5% of the full sky. This analysis is limited to dust _intensity_ rather than _polarization_. Indeed, regions that are bright in polarization typically are the edges of dust clouds, where the magnetic field environments are more orderly than in the dense inner regions of clouds that are optically deep in intensity (Planck Collaboration et al. (2015)). However, the detection of a multi-population dust model in intensity would reveal that the foregrounds in this region are more complex than commonly modeled. The preprocessing pipeline is identical to that of section 4, except the results presented here are at higher resolution \(N_{side}\) = 128 and includes the ISSA reprocessing of the _IRAS_ 100 \(\mu\)m data 8(Wheelock et al. (1994) ). The CMB is not subtracted from this map, as it's expected to be negligible at this frequency. Additionally, this map is smoothed to a common 10 arcmin resolution, which necessitates careful handling to avoid artifacts in the Spider region. ISSA maps provide coverage for 98% of the sky, with the missing 2% region coming close to the Spider observation region. The missing region can be masked with a 3.75 deg apodization, chosen optimally to minimize ringing while avoiding masking the region of interest itself. Noise in the ISSA map is complex, although a uniform noise estimate has been tabulated as 0.07 MJy/sr\(\pm\) 0.03 MJy/sr (Wheelock et al. (1994), Miville-Deschenes & Lagache (2005)). Out of caution, we adopt a noise level estimate that is 50% larger than the quoted value, 0.105 MJy/sr, uniformly across all pixels in the region. Footnote 8: ISSA_B4H0_healpix_ns1024.fits For this analysis, we use the same dust modeling as in section 5, where we fit two dust populations with a mean and variance for each. A summary of the results is shown in figure 5 where four different frequency pairs are considered: _Planck_ 217, 353, and 545 GHz and _IRAS_ 100 \(\mu m\), all relative to _Planck_ 857 GHz. The fitted population fraction for the alternative model is \(p_{1}=0.12,0.15,0.12,0.34\), respectively, with \(\sigma(p_{1})=0.01\) across all frequency pairs. In every case examined, we consistently observe coherent and filamentary structures across all pairs included in this analysis. 
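Sigma-maps such as those shown in Figures 4 and 5 follow directly from the fitted membership probabilities via eq. (6). A minimal sketch, assuming the analyzed pixels are a subset `pix` of a HEALPix grid at the analysis \(N_{side}\) and that the \(\pi_{i}\) are kept strictly inside (0, 1) so the conversion stays finite; the function name is ours:

```python
import numpy as np
import healpy as hp

def significance_map(pi, pix, nside):
    """Place per-pixel significances (eq. 6) back on the sky.

    pi: membership probabilities of the analyzed pixels (strictly in (0, 1));
    pix: their HEALPix indices at resolution nside.
    """
    sigma = pi / np.sqrt(pi * (1.0 - pi))
    m = np.full(hp.nside2npix(nside), hp.UNSEEN)
    m[pix] = sigma
    return m
```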
Figure 5: Analysis of the dust population over the Spider observation region: a region in the Southern sky at Galactic latitudes greater than \(|b|>20^{\circ}\). The leftmost panel shows _Planck_'s 857 GHz map, which is a sensitive tracer of diffuse dust emission, over this region. The plots on the right show the identification of the dust populations, plotted as the difference \((\sigma_{1}-\sigma_{0})\) for brevity. The four panel plot displays the analysis conducted with _Planck_ 857 GHz and its corresponding frequency pair indicated above or below each subplot. With high confidence, we identify two populations of dust using any of the four input maps (_Planck_ 217, 353, and 545 GHz and _IRAS_ 100 \(\mu m\)). We observe coherent dust clouds (in blue) that are interspersed between filamentary structures (in green). A visual comparison of the cloud structures with the dust intensity on the left shows little correlation between the two maps.

The comparison of _Planck_ 857 GHz with _Planck_ 545 GHz is the most challenging due to the smallest frequency separation and, therefore, the most similar slope between populations. Indeed, in the analysis with 545 GHz, the fit prefers a very similar slope between the two populations \(m_{0}\approx m_{1}\) with a substantially wider variance \(V_{1}\gg V_{0}\) which captures many of the outlier dust points. Other analysis crosses instead distinguish themselves in \(m_{k}\). For that reason, only a few points distinguish themselves in the cross with 545 GHz. Conversely, _IRAS_ has the longest lever arm and also contains the most different spatial arrangement. Specifically, the shape of the dust cloud is different in the upper left region, and there is a disagreement in population assignment in the central part of the region. ISSA has its own residual artifacts in its map, which differ from those of _Planck_'s. Therefore, identifying common dust populations strengthens the case for spatially separated dust populations in this region. We thus demonstrate that even the relatively clean Spider region, known to be composed mainly of diffuse, non-turbulent dust (Planck Collaboration et al. (2016)), contains statistically detectable dust populations in intensity. While we expect the populations responsible for polarized mm-emission to differ from those that are bright in intensity, the existence of multiple dust populations in intensity suggests that dust is more complicated than commonly modeled.

## 7 Robustness to Analysis Choices

The likelihood function plays a central role in Bayesian parameter estimation. Therefore, it is crucial to check the appropriateness of the likelihood for the data at hand. In this section, we explore different likelihood choices to test the robustness of the conclusions.

### Modeling dust without inherent scatter, \(V_{k}=0\)

In section 5, our likelihood model incorporates a term denoted as \(V_{k}\), representing the variance specific to each population. This has a physical interpretation within the model: we are studying different dust populations, and these populations are generated through a process characterized by an average scaling value, \(\theta_{k}\), with inherent variations about that mean value captured by \(V_{k}\). In this section, our objective is to investigate whether the data can be accurately represented using only a scaling value, \(\theta_{k}\), with a monopole offset. In this model, any observed variance should be attributed solely to measurement noise.
To account for the variation in data points, we find that the data prefers a fit with three slopes. In figure 5(b), these are sorted from shallowest (green), middle (red), to steepest (blue). When we look at our most significant dust cross, 857\(\times\)353 GHz, we have three distinct dust populations, with fitted population fractions of {0.47, 0.46, 0.07} from the shallowest to the steepest slope. These proportions remain true for several different frequency crosses. In cases where there is insufficient statistical power to fit three populations and instead, two populations are fit, the likelihood analysis prefers to lump populations 0 and 1 together and still distinguish population 2 separately. This is evident in the cross 545\(\times\)217 GHz. In the likelihood analysis, each pixel is assigned a probability \(\pi_{i}\) of belonging to one of K populations. The probability is determined by its distance to each population based on the likelihood ratio: \(\pi_{i,k}=\mathcal{L}_{i}(k)/\sum_{k}\mathcal{L}_{i}\). The likelihood function, comprised of a sum of Gaussians, makes the likelihood ratio highly sensitive to the distance of each data point from its nearest population. Within this framework, the impact of statistical noise inadvertently magnifies. Points exhibiting fluctuations near the population boundaries draw exponentially closer to one population over another, leading to \(\pi_{i}\) assignments that are overly confident. \[cdf_{i}=\left(\int_{-\infty}^{-|\Delta|}+\int_{|\Delta|}^{\infty}\right) \mathcal{N}(0,\sigma) \tag{7}\] To accurately assess the likelihood of a data point belonging to a particular population, we employ a two-part measure. We combine the likelihood ratio, \(\pi_{i}\), with a confidence score (eq. 7) based on a two-sided cumulative distribution value. This distribution is Normal with mean 0 and a width equal to the statistical uncertainty of point \(i\). This confidence score reflects the point's proximity to the population and down-weights points if they are close to a likelihood boundary. We convert the probability to a \(\sigma\)-level by treating each pixel as a Bernoulli process, \[\sigma_{i}=\frac{\pi_{i}\cdot\text{cdf}_{i}}{\sqrt{\pi_{i}\cdot\text{cdf}_{i} \cdot(1-\pi_{i}\cdot\text{cdf}_{i})}} \tag{8}\] Figure 5(b) plots the \(\sigma_{i}\) back into spatial coordinates. However, our inclusion of the cdf factor eliminates all visible structure for the two extremum populations. Mathematically, this implies the variability within each population exceeds the quoted uncertainties. This has a physical interpretation in that dust must not be generated from a physical process with a "true" set of (\(\beta_{d},T_{d}\)) values. Even in the absence of measurement noise, the expectation is a natural variation in dust parameters (\(\beta_{d},T_{d}\)). As a result, the likelihood 2 with \(V_{k}=0\) cannot accurately capture the physics of this observed set of data. ### Using spatial information While the likelihood with \(V_{k}=0\) may not capture all the nuances of the data, it does offer a valuable basis for further investigation. Rather than introduce a confidence factor \(cdf_{i}\), we can instead make use of spatial arrangements of pixels to suppress the effects of noise. When multiple neighboring pixels agree on a population, then this consensus strengthens our confidence in the correctness of the assignment. Alternatively, if neighboring pixels conflict, we should suppress the assignment. 
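Both noise-suppression steps discussed here, the CDF-weighted significance of eqs. (7)-(8) and the a posteriori beam smoothing applied next, are simple per-pixel post-processing operations. A minimal sketch, assuming Gaussian statistical uncertainties and a full-sky (or apodized) HEALPix probability map; the function names are ours:

```python
import numpy as np
import healpy as hp
from scipy.stats import norm

def cdf_weighted_sigma(pi, delta, sigma_stat):
    """Two-sided CDF confidence score (eq. 7) folded into the significance (eq. 8)."""
    cdf = 2.0 * norm.sf(np.abs(delta), loc=0.0, scale=sigma_stat)
    q = pi * cdf
    return q / np.sqrt(q * (1.0 - q))

def smoothed_significance(pi_map, fwhm_deg=1.0):
    """Smooth a probability map with a symmetric Gaussian beam, then apply eq. (6)."""
    pi_s = np.clip(hp.smoothing(pi_map, fwhm=np.radians(fwhm_deg)), 1e-6, 1.0 - 1e-6)
    return pi_s / np.sqrt(pi_s * (1.0 - pi_s))
```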
A simple way to implement this _a posteriori_ is to apply a smoothing factor on the probability map \(\pi\). We smooth the map with a symmetric Gaussian beam, and then calculate the \(\sigma\)-significance in the same way (eq 6). Figure 6c shows the results of our most sensitive cross, 857\(\times\)353 GHz, when smoothed with a 1-degree beam. From the top to the bottom, we see dust emission centered around the Galactic plane, followed by more diffuse Galactic emission at the edges of the Galaxy closer to the poles. The last population is isolated cloud structures that deviate from the general dust population. This analysis reveals insights about the arrangement of dust data points in the scatterplot (Fig. 3). Notably, the stratification of pixels in the scatterplot space holds physical significance. Points with lesser slopes come from the brightest regions of the Galaxy, while points with greater slopes come from more diffuse regions of the galaxy. This analysis suggests that incorporating spatial information into the likelihood equation could potentially yield additional insights into dust populations. Such an extension would require us to encode the spatial relationship between pixels in the likelihood. Neighboring pixels may collectively indicate the presence of an alternate population, even if individual pixels lack sufficient statistical significance for identification. Such an analysis would be a substantial increase in modeling complexity. In intensity, we have high-frequency data with strong signal-to-noise ratio, allowing us to differentiate points without relying on spatial information. Instead, this extension could prove valuable in the study of polarized foregrounds. However, it falls outside the scope of this paper to introduce spatial modeling. ### Removing dipole residuals with harmonic space filters There are several challenges with analyzing native _Planck_ maps. _Planck_ maps contain an irreducible zero-level offset. The monopole offset has physical sources, including cosmic infrared background and zodiacal emission (Planck Collaboration et al. (2020)). Typically this offset is adjusted based on tracers from other datasets, such as HI (Planck Collaboration et al. (2020)). The monopole offset Additionally, the _Planck_ maps contain residual artifacts from the Solar dipole: the motion of the Solar System relative to the rest frame of the CMB. The major effect of residual dipole occurs at \(\ell=1\). However, relativistic corrections will leak into smaller scales \(\ell>1\)(Collaboration et al. (2016)). While the zero-level offset is naturally handled in the likelihood for equation 2, a residual dipole in the map has no natural way to be built into the likelihood and, therefore, no way to correct for its effect. Figure 6: Investigating alternate analysis choices and their effect on dust populations. (a) Reference figure as shown in figure 4 for the basis of comparison. (b) Likelihood model of dust composed of discrete populations with no intrinsic variations, \(V_{k}=0\). The plotted results include a two-sided CDF factor to suppress points far away from the model. All structures disappear with this additional confidence factor. Therefore, the data points are distributed further away than if noise was the sole factor for variations from the mean. (c) Instead of a CDF factor, a beam smoothing factor with a FWHM = 1deg is applied to the \(\pi_{i}\) to suppress noise. 
Coherent structures remain, suggesting that spatial information may allow the separation of even more components. (d) Filtered maps that remove large-scale artifacts in the map. The variance is fit \(V_{k}\), while the monopole offset is not needed (\(b_{k}=0\)). We see a similar spatial arrangement of our two populations, indicating that large-scale residual effects, primarily the residual solar dipole, do not inadvertently influence our identification of two populations. (e) No hyperparameter prior \(p_{A}\). In this configuration, \(\pi_{i}\) is drawn from a uniform distribution from 0 to 1. The approach in this section is to circumvent these issues by filtering the maps to eliminate large scale features and fit only dust anisotropies. We apply an aggressive window function that removes all power below \(\ell\leq 8\), effectively eliminating all possible large-scale dipole residuals. To reduce the impact of undesired ringing, we choose a cosine apodization around \(\ell=20\) to smoothly taper the filter's response function. \[W(\ell)=\begin{cases}0&\text{if }\ell\leq 8,\\ 0.5\left(1-\cos\left(\frac{2\pi(\ell-8)}{50}\right)\right)&\text{if }8<\ell\leq 33,\\ 1&\text{if }\ell>33\end{cases} \tag{9}\] Additionally, we take precautions to prevent bright Galactic plane emissions from causing unwanted leakage into the rest of the map, which would inadvertently bias our analysis. For this reason, we apply the filter to the LR60 masked region, ensuring minimal ringing in LR40, the specific region of interest for this analysis. With these preprocessing steps, no offsets are needed in equation 2; therefore, we set \(b_{k}=0\) for all \(K\) populations. We include \(V_{k}\) in this version in order to compare it to our nominal case. We fit the filtered maps with \(K=2\) populations, resulting in two distinct slopes. Figure 6d visualizes the results in the spatial domain. We see a spatial arrangement of pixels similar to the unfiltered analysis: A dominant diffuse population alongside an alternate population localized to specific clouds. This is consistent with the analysis in section 5; therefore, we believe that large-scale residual artifacts in _Planck_ cannot be the origin of these SED variations. ### Removing hyperparameter \(p_{k}\) In section 3, we demonstrated the necessity of the hyperparameter \(p_{A}\) for \(q_{i}\sim\text{BetaBinom}(n=1,\alpha=p_{A},\beta=1-p_{A})\) to ensure an unbiased recovery of our simulated inputs. However, in this section, we intentionally omit the hyperparameter to observe the impact on this analysis. In this configuration, \(q_{i}\sim\text{BetaBinom}(n=1,\alpha=1,\beta=1)\). Effectively, we first draw \(\pi_{iA}\) from a Uniform(0,1) distribution. Then, \(q_{i}\) is drawn from a Bernoulli distribution with \(p=\pi_{iA}\). The outcomes of this configuration, as depicted in Fig. 6e, reveal a similar spatial arrangement of pixels. The similarity arises because data points that show substantial separation between two populations possess sufficient statistical power to remain largely unaffected by the absence of the prior. Indeed, a simple calculation of the Pearson correlation coefficient between the nominal case and this case is 0.88. The scatterplot of \(p_{i}\) between both scenarios demonstrates a monotonic but non-linear relationship. Therefore, despite a large difference in analysis choice, we maintain a relatively high correlation coefficient. However, points that fall statistically in-between populations are instead heavily influenced by the choice of prior. 
In this case, points are assigned evenly between two populations, causing the global slopes to converge and leading to an over-identification of the less numerous blue population. This behavior is evident in the simulations (fig. 2) where there is a substantial bias in the globally fitted slope and a greater assignment of the alternate population. On the data, the alternate population \(\geq 3\sigma\) expands substantially from 6% to 15%. While a uniform distribution is often thought to be uninformative, it's important to note that its mean is fixed at 0.5. This fixed mean influences \(\pi_{iA}\) to tend towards larger values compared to a scenario where a hyperparameter \(p_{A}\) is allowed to adapt the mean based on what is preferred by the data. Consequently, using a uniform prior leads to a _less conservative_ identification of an alternate population. For this reason, and because it fails to recover inputs on simulations, we opt not to use a uniform prior on \(\pi_{iA}\) for this analysis. ## 8 Bayesian evidence for the models The previous sections have demonstrated there is a lot of choice in the models we can fit to our data. Indeed, more components will always fit the data better and return a higher likelihood value, but at the cost of model complexity. To select the correct model to fit, we ask which of the models has the highest marginal likelihood, or the highest Bayesian evidence. Calculation of Bayesian evidence is computationally expensive and requires a sum (of eq 2) over \(K^{N}\) states of \(q_{i}\). This is simply intractable for our pixelization and sky area. Instead, we can perform an analytical trick to simplify the calculation of the marginal likelihood. Following the method of Hogg et al. (2010), each \(q_{i}\) has K possible states with probability \(\{p_{1},\cdots p_{k}\}\) so the likelihood, marginalized over \(q_{i}\), can instead be written as a sum of states, \[\mathcal{L}(\theta_{k},b_{k},V_{k},p_{k})=\prod_{i}\sum_{k}\left[\frac{p_{k}} {\sqrt{2\pi(\Sigma_{k,i}+V_{k})}}\exp\left(-\frac{\Delta_{k,i}^{2}}{2(\Sigma _{k,i}+V_{k})}\right)\right] \tag{10}\] where now the likelihood only has \(4K-1\) parameters. We use the nested sampler _dynesty_(Speagle, 2020), Koposov et al. (2023) to perform this calculation with results in table 3. The statistical and sampling uncertainty for all the quoted values is subdominant to the significant figures in the table. The preferred model, for the majority of the crosses, is a two-population (\(K=2\)) model that include an additional variance term \(V_{k}\). For this reason, the quoted results in section 5 are those of this analysis. However, as we go to lower frequency crosses, the signal-to-noise is lower on dust and simpler models are adequate to explain the variations in the data. For \(545\times 217\), a multi-population dust model with no additional variance (\(V_{k}=0\)) has the highest evidence. One could combine all datasets _Planck_ 100-857 GHz and _IRAS_ 100 \(\mu\)m and construct a multi-variate likelihood to fit them simultaneously. Such an approach would not significantly increase the number of fitted parameters, because the parameter \(q_{i}\) is associated with each spatial pixel and would not scale with the number of inputs. However, since our pairwise analysis shows a clear distinction between the two populations, we have chosen not to pursue this extension at this time. ## 9 Inferred Galactic Dust Properties Having identified the two dust populations, we can further infer the properties of each. 
We construct a mask to optimally separate the two populations. We use a 857\(\times\)353 GHz \(\sigma\)-map, as shown in figure 4, and include only high-confidence pixels with \(\geq 3\sigma\) confidence in the assignment of population. We apply this mask to all frequencies, _Planck_ HFI and _IRAS_ 100 \(\mu m\). Then, we subtract a previously fitted monopole correction term and calculate the flux ratio \(\alpha\) relative to our reference frequency for each frequency and population. The \(\alpha\)'s are shown on the right side of figure 7, where the uncertainties signify a 1\(\sigma\) variation in the population. These distributions are inherently non-Gaussian. To analyze them, we perform a Gaussian kernel density estimate (KDE), denoted by \(\alpha_{kde}\), using Scott's rule for bandwidth estimation (Scott (1992), Virtanen et al. (2020)). The flux ratio can be related to a standard dust SED by the following definition: \[\alpha(\nu,\beta_{d},T_{D})=\frac{I(\nu)}{I(\nu_{0})}=\frac{b_{c}(\nu)}{b_{c}( \nu_{0})}\left(\frac{\nu}{\nu_{0}}\right)^{\beta_{d}}\frac{B(\nu,T_{d})}{B( \nu_{0},T_{d})}, \tag{11}\] where \(\beta_{d}\) is the spectral index of dust, \(B(\nu,T_{d})\) is the Planck blackbody emission at temperature \(T_{d}\), \(\nu_{0}\) is the reference frequency 857 GHz, and \(b_{c}(\nu)\) is a bandpass correction for map \(\nu\) as defined by the reciprocal of equation 35 in Planck Collaboration et al. (2014d). The full spectral bandpass for _Planck9_ and _IRAS_10 are both used for the color correction. Fitting flux ratios instead of fluxes directly allows us to avoid fitting for the dust optical depth, as it is not a fundamental property of dust physics. Footnote 9: _HFI_RIMO_R3.00.fits Footnote 10: **Table II.** C.5 of IRAS Explanatory Supplement We construct a likelihood function based on the Gaussian KDE of \(\alpha\), \[\log{\cal L}(\beta_{d},T_{d})=\sum_{\nu}^{\rm{HFI,\ {\it IRAS}}\ 100}\log \alpha_{kde}(\alpha(\nu,\beta_{d},T_{d})) \tag{12}\] To find the best-fit dust parameters, we maximize the likelihood with respect to \(\beta_{d},T_{d}\). We choose uniform flat priors for \(\beta_{d},T_{d}\). The fits are performed with emcee (Foreman-Mackey et al. (2013)) and are shown on the left of figure 7. We see distinctive dust properties, with the majority of the dust being a "hot" dust (plotted in green), with interspersed "colder" dust (plotted in blue). The marginal parameters show a substantial overlap between the two populations because of the high degeneracy between \(\beta_{d}-T_{d}\). Intuitively, this degeneracy naturally occurs because both \(\beta_{d}\) and \(T_{d}\) can influence the peak of the blackbody function. An underestimation of \(\beta_{d}\) leads to a more emissive Rayleigh-Jeans portion of the curve and pulls the peak towards lower frequencies. To compensate, a hotter dust temperature pushes the peak back towards higher frequencies (Chen et al. (2016)). This anti-correlation has been observed in other analyses (Planck Collaboration et al. (2014e)), and Shetty et al. (2009) has attributed this anti-correlation to measurement uncertainty. While a hierarchical Bayesian model has been demonstrated to mitigate this effect (Kelly et al. (2012)), this modeling extension was not pursued in this analysis. The 2D likelihood contours show separation of the two populations at the mutual \(\sim\)2.5\(\sigma\) level. These values of \(\beta_{d},T_{d}\) accurately reproduce the observed flux ratios at all frequencies, as evident on the right side of figure 7. 
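As a rough illustration of equations (11)-(12) (a sketch of ours, not the pipeline used in this paper), the snippet below evaluates the modified-blackbody flux ratio and a Gaussian-KDE likelihood with numpy/scipy. The bandpass color corrections \(b_{c}(\nu)\) are omitted, and the frequency list, the \((\beta_{d},T_{d})\) values, and the synthetic \(\alpha\) samples are placeholders rather than the measured data.

```python
import numpy as np
from scipy.stats import gaussian_kde

H_PLANCK = 6.62607015e-34   # J s
K_BOLTZ = 1.380649e-23      # J / K
NU0 = 857e9                 # reference frequency, 857 GHz, as in eq. (11)

def planck_bnu(nu, T):
    """Planck function B(nu, T) up to constant factors that cancel in ratios."""
    x = H_PLANCK * nu / (K_BOLTZ * T)
    return nu**3 / np.expm1(x)

def flux_ratio(nu, beta_d, T_d, nu0=NU0):
    """Modified-blackbody flux ratio of eq. (11), without the b_c(nu)/b_c(nu0) color correction."""
    return (nu / nu0) ** beta_d * planck_bnu(nu, T_d) / planck_bnu(nu0, T_d)

# Synthetic per-pixel alpha samples at each frequency (placeholders for the masked
# Planck/IRAS measurements); gaussian_kde uses Scott's rule by default.
rng = np.random.default_rng(0)
freqs = [100e9, 143e9, 217e9, 353e9, 545e9, 3e12]   # 3 THz corresponds to ~100 micron
beta_true, T_true = 1.55, 19.0                       # arbitrary illustrative values
alpha_samples = {nu: flux_ratio(nu, beta_true, T_true) * (1 + 0.05 * rng.standard_normal(2000))
                 for nu in freqs}
kdes = {nu: gaussian_kde(a) for nu, a in alpha_samples.items()}

def log_likelihood(beta_d, T_d):
    """Eq. (12): sum over frequencies of the log KDE density at the model flux ratio."""
    return sum(np.log(kdes[nu](flux_ratio(nu, beta_d, T_d)))[0] for nu in freqs)

print(log_likelihood(1.55, 19.0), log_likelihood(1.2, 25.0))
```

In an actual fit this log-likelihood would be maximized over \((\beta_{d},T_{d})\), e.g. with an MCMC sampler, which is the role emcee plays in the analysis above.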
That a single pair of \((\beta_{d},T_{d})\) values per population reproduces the measured flux ratios implies that we have distinct, spatially separated dust populations, with each population being effectively characterized by a single-component SED. It is important to remember that the uncertainties in these parameter estimations represent the variability of the population rather than statistical uncertainty on the population parameters.

\begin{table} \begin{tabular}{l c c c c c} \hline & \multicolumn{2}{c}{Inc. \(V_{k}\)} & \multicolumn{3}{c}{\(V_{k}=0\)} \\ Frequency pair & \(K=1\) & \(K=2\) & \(K=1\) & \(K=2\) & \(K=3\) \\ \hline 857\(\times\)545 GHz & 1.3E4 & 1.4E4 & -3.7E5 & -1.3E5 & -7.5E4 \\ 857\(\times\)353 GHz & 4.0E3 & 5.4E3 & -5.0E4 & -1.4E4 & -5.6E3 \\ 857\(\times\)217 GHz & 7.5E2 & 1.7E3 & -9.9E3 & -1.6E3 & -6.4E2 \\ 545\(\times\)353 GHz & 1.1E4 & 1.3E4 & 7.4E3 & 1.2E4 & 1.2E4 \\ 545\(\times\)217 GHz & 4.5E3 & 4.6E3 & 4.7E3 & 4.9E3 & 5.0E3 \\ \hline \end{tabular} \end{table} Table 3: Calculation of the marginal likelihood over _LR40_ with \(N_{side}=64\) using two different models: inc. \(V_{k}\) as described in section 5 and \(V_{k}=0\) as described in section 7.1. All values of the marginal likelihood are in log\({}_{10}\) units. The highest evidence values are bolded. The highlighted blue background provides a qualitative comparison of the log marginal likelihood values in relation to the highest value in the row.

## 10 Conclusion

We have developed a powerful technique for spatially separating dust populations based on Gaussian mixture models using HMC methods. We constructed a GMM likelihood function with no inherent spatial information. The analysis results provide, for each pixel \(i\), a probability \(\pi_{i,k}\) that it belongs to population \(k\). We have shown that it is highly accurate at recovering input parameters, even in the limit where populations are heavily mixed. We then applied this method to the _Planck_ HFI intensity maps to identify pixels that statistically deviate from a main population. When we show the arrangement of pixels, clear patterns emerge in the map, showing distinctive regions where an alternate population is favored. We tested this conclusion against many different analysis choices: (1) in section 5, we used entirely independent frequency crosses, i.e., 545\(\times\)217 GHz and 857\(\times\)353 GHz, and confirmed the presence of two dust populations with similar spatial arrangements, (2) in section 6, we changed the masked area and showed that multiple populations are detectable even in cleaner sky regions with more diffuse dust, (3) in section 7, we explored the sensitivity of the likelihood model (\(V_{k}=0\)) and confirmed that this effect is still observed when the likelihood is modified, (4) in section 7.3, we repeated the analysis on filtered maps that are designed to remove residual large-scale artifacts in the _Planck_ maps and confirmed that this effect is not attributable to such artifacts. Then, in section 8, we calculated the Bayesian evidence to see which of these analysis choices are favored. The most favored model for the majority of the crosses is a two-population dust model with inherent variability of each population (\(V_{k}>0\)). As the signal-to-noise on dust drops, so does the Bayesian evidence for this effect. Indeed, for the 545\(\times\)217 GHz cross, we compute the highest Bayesian evidence for a three-population dust model with no variability (\(V_{k}=0\)). We also provide a promising path for improving this analysis. All datasets could be combined and simultaneously fit with a multi-variate likelihood.
We expect such an extension to be computationally feasible since the dominant contributor to variable count does not scale with the number of input maps. Additionally, in this work, we excluded spatial information in the likelihood in due caution to be unbiased in population recovery. Combining statistical power of neighboring pixels can provide additional statistical weight that is otherwise unused in this analysis, potentially revealing further insights about the variability of dust populations. Figure 7: Inferred dust properties from the two populations as shown in figure 4. The mask chosen was LR40, constructed from 857\(\times\)353 GHz, applied to all frequencies _Planck_ HFI and _IRAS_ 100 \(\mu m\), and only high confidence pixels were included in the analysis with \(\geq 3\sigma\) confidence in assignment of population. On the left, we have the best-fit dust parameters (\(\beta_{d},T_{d}\)). Plotted in black is _Planck_’s 2015 thermal dust model from the Commander pipeline: \({}^{\mathrm{a}}\) distribution of (\(\beta_{d},T_{d}\)) values masked to LR40. The two populations here have overlapping contours only at the \(\sim 2.5\sigma\) with the alternate population (in blue) at fixed \(\beta_{d}\), preferring a colder \(T_{d}\), or at fixed \(T_{d}\), preferring a shallower \(\beta_{d}\). This indicates that there are physical differences between the two populations. On the right, the SED model is compared to the measured flux ratios with their full bandpass corrections. The alternate population, the blue points, are shifted in frequency purely for visual separation of the two populations. The SED model accurately captures the measurements at all frequencies. This analysis reveals that the nature of dust is more complex than can be adequately described by a single-component modified blackbody model. Instead, our findings strongly suggest that a spatially varying two-population dust model better aligns with the observed data. Identifying an alternate population and its distribution can provide further insights into the nature and structures of Galactic dust. With a two-population dust, CMB component separation becomes a substantially more challenging task. Failure to correctly account for the spectral variations can lead to residual contamination. To effectively distinguish these dust populations, we need high-fidelity, high-frequency measurements. Indeed, this analysis would not have been possible without using intensity measurements from _Planck_ 545 and 857 GHz. A similar understanding of dust polarization remains an open question. Polarized maps do not exist at comparable high frequencies. Further insights into the nature of polarized dust emission necessitate higher sensitivity data, particularly in the sub-mm and far-infrared ranges. These measurements are accessible only through next-generation sub-orbital and orbital experiments. In conclusion, this analysis is another step in understanding the complexity of Galactic foreground emission. The improvement of statistical tools, in conjunction with data from _Planck_'s highest frequency channels, has enabled us to differentiate between multiple un-polarized dust populations. Increased instrumental sensitivity of next-generation CMB experiments will lead to data that is more discerning. Simple foreground models may no longer suffice. A continued refinement of foreground models is essential to enhance our ability to extract ever-tighter cosmological constraints from the CMB. 
This research is supported by the National Aeronautics and Space Administration grant NNX17AC55G. We want to express our thanks to Elle Shaw. Her careful review and helpful editorial suggestions significantly improved our work. Many of the results in this paper have been derived using the HEALPix package (Gorski et al. (2005)). Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium. SciNet is funded by Innovation, Science and Economic Development Canada; the Digital Research Alliance of Canada; the Ontario Research Fund: Research Excellence; and the University of Toronto (Ponce et al. (2019)).

## Appendix A Derivation of the Mahalanobis Regression Distance and Angle

We want to derive the expression that minimizes the Mahalanobis distance between a data point with \(\mu_{0}=(x_{0},y_{0})^{\mathrm{T}}\) and covariance \(\Sigma\) and a line parametrized by \(X=(x,y)^{\mathrm{T}}=(x,mx+b)^{\mathrm{T}}\). The Mahalanobis distance is defined as
\[d^{2}=(X-\mu_{0})^{\mathrm{T}}\Sigma^{-1}(X-\mu_{0})\] (A1)
We assume no off-diagonal terms are in the covariance matrix as frequency maps are independent. Then this expression is simply
\[d^{2}=\frac{(x-x_{0})^{2}}{\sigma_{x}^{2}}+\frac{(y-y_{0})^{2}}{\sigma_{y}^{2}}\] (A2)
We want to find a point \((\hat{x},\hat{y})\) along the line \(y=mx+b\) such that the Mahalanobis distance is minimized.
\[\left.\frac{\partial d^{2}}{\partial x}\right|_{x=\hat{x}}=\frac{2(\hat{x}-x_{0})}{\sigma_{x}^{2}}+\frac{2m(\hat{y}-y_{0})}{\sigma_{y}^{2}}=0\]
\[(\hat{x}-x_{0})\sigma_{y}^{2}=-m\sigma_{x}^{2}(\hat{y}-y_{0})\]
\[\hat{x}\sigma_{y}^{2}+m\sigma_{x}^{2}\hat{y}=x_{0}\sigma_{y}^{2}+my_{0}\sigma_{x}^{2}\]
\[\hat{x}(\sigma_{y}^{2}+m^{2}\sigma_{x}^{2})+mb\sigma_{x}^{2}=x_{0}\sigma_{y}^{2}+my_{0}\sigma_{x}^{2}\]
\[\hat{x}=\frac{x_{0}\sigma_{y}^{2}+m(y_{0}-b)\sigma_{x}^{2}}{\sigma_{y}^{2}+m^{2}\sigma_{x}^{2}}\]
Then, using the relationship that \(y=mx+b\),
\[\hat{y}=\frac{mx_{0}\sigma_{y}^{2}+m^{2}(y_{0}-b)\sigma_{x}^{2}+b(\sigma_{y}^{2}+m^{2}\sigma_{x}^{2})}{\sigma_{y}^{2}+m^{2}\sigma_{x}^{2}}\]
\[=\frac{(mx_{0}+b)\sigma_{y}^{2}+m^{2}y_{0}\sigma_{x}^{2}}{\sigma_{y}^{2}+m^{2}\sigma_{x}^{2}}\]
Then, we can find the angle that minimizes the Mahalanobis distance,
\[\phi=\arctan\left(\frac{y_{0}-\hat{y}}{x_{0}-\hat{x}}\right)\]
\[=\arctan\left(\frac{y_{0}\sigma_{y}^{2}+y_{0}m^{2}\sigma_{x}^{2}-(mx_{0}+b)\sigma_{y}^{2}-m^{2}y_{0}\sigma_{x}^{2}}{x_{0}\sigma_{y}^{2}+x_{0}m^{2}\sigma_{x}^{2}-x_{0}\sigma_{y}^{2}-m(y_{0}-b)\sigma_{x}^{2}}\right)\]
\[=\arctan\left(\frac{y_{0}\sigma_{y}^{2}-(mx_{0}+b)\sigma_{y}^{2}}{x_{0}m^{2}\sigma_{x}^{2}-m(y_{0}-b)\sigma_{x}^{2}}\right)\]
\[=\arctan\left(-\frac{1}{m}\frac{\sigma_{y}^{2}}{\sigma_{x}^{2}}\right)\]
The corresponding distance from the point \((x_{0},y_{0})\) to the line is then found by trigonometry:
\[d=\frac{\Delta}{\cos(|\phi|+\theta-\pi)}\] (A3)
It is important to remember that the geometry for computing the distance \(d\), as shown in figures 1 and 8, already takes into account the sign of the angle. Therefore, it would be inappropriate to use arctan2 in the calculation of \(\phi\).

Figure 8: Diagram fitting a point \(\mathbf{Z_{4}}\) with covariance \(\Sigma_{4}\) to a model line \((\theta,b)\). A mirrored geometry of figure 1 showing equivalent expressions.
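As a quick numerical cross-check of the closed-form expressions above (a sketch of ours, not part of the original analysis), one can minimize the distance (A2) along the line directly and compare the minimizer with the analytic \(\hat{x}\), \(\hat{y}\), and \(\phi\); all numbers below are arbitrary test inputs.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Arbitrary test point, uncertainties, and line parameters.
x0, y0 = 1.3, 0.4
sx, sy = 0.7, 0.2          # sigma_x, sigma_y (diagonal covariance)
m, b = 0.8, -0.1           # line y = m x + b

def d2(x):
    """Mahalanobis distance squared from (x0, y0) to the point (x, m*x + b), eq. (A2)."""
    y = m * x + b
    return (x - x0) ** 2 / sx**2 + (y - y0) ** 2 / sy**2

# Direct numerical minimization along the line.
res = minimize_scalar(d2)

# Closed-form foot point and angle derived in Appendix A.
den = sy**2 + m**2 * sx**2
x_hat = (x0 * sy**2 + m * (y0 - b) * sx**2) / den
y_hat = ((m * x0 + b) * sy**2 + m**2 * y0 * sx**2) / den
phi = np.arctan(-(1.0 / m) * sy**2 / sx**2)

print("numerical x*:", res.x, " analytic x_hat:", x_hat)
print("analytic y_hat:", y_hat, " y at numerical minimum:", m * res.x + b)
print("angle to foot point:", np.arctan((y0 - y_hat) / (x0 - x_hat)), " phi:", phi)
```

The two minimizers and the two angles agree to numerical precision, confirming the algebra above.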
2302.04601
Magnetic square lattice with vertex coupling of a preferred orientation
We analyze a square lattice graph in a magnetic field assuming that the vertex coupling is of a particular type violating the time reversal invariance. Calculating the spectrum numerically for rational values of the flux per plaquette we show how the two effects compete; at the high energies it is the magnetic field which dominates restoring asymptotically the familiar Hofstadter's butterfly pattern.
Marzieh Baradaran, Pavel Exner, Jiří Lipovský
2023-02-09T12:31:53Z
http://arxiv.org/abs/2302.04601v1
# Magnetic square lattice with vertex coupling of a preferred orientation ###### Abstract We analyze a square lattice graph in a magnetic field assuming that the vertex coupling is of a particular type violating the time reversal invariance. Calculating the spectrum numerically for rational values of the flux per plaquette we show how the two effects compete; at the high energies it is the magnetic field which dominates restoring asymptotically the familiar Hofstadter's butterfly pattern. ## 1 Introduction Lattice quantum graphs exposed to a homogeneous magnetic field are among systems exhibiting interesting and highly nontrivial spectral properties the origin of which lies in the (in)commensurability of the two inherent lengths, the lattice spacing and the cyclotronic radius, as first noted by Azbel [1] and made widely popular by Hofstadter [2]. The setting of the problem may differ: quantum (metric) graphs are through the duality [3, 4, 5, 6] related to discrete lattices [7], and those in turn can be translated into the Harper (or critical almost Mathieu) equation. In particular, the Cantor nature of the spectrum for irrational flux values, the so-called Ten Martini Problem, was one of the big mathematical challenges for two decades. It was finally demonstrated by Avila and Jitomirskaya [8] and more subtle properties of the spectrum were subsequently revealed, see, e.g., [9, 10]. Similar effect was observed in one-dimensional arrays with a magnetic field changing linearly along them with an irrational slope [11]. Quantum graphs [12] are most often investigated under the assumption that the wave functions are continuous at the graph vertices, in particular, having the coupling usually called Kirchhoff. This is, however, by far not the only possibility [13], and neither the only interesting one. Following the attempt to model the anomalous Hall effect using lattice quantum graphs [14, 15] it was noted, and illustrated on a simple example, that vertex coupling itself may be a source of a time-reversal violation [16]. The vertex coupling in question was found to have interesting properties, among them the fact that the transport properties of such a vertex at high energies depend crucially on its parity which was shown to lead to various consequences [17, 18, 19, 20]. It also exhibited a nontrivial \(\mathcal{PT}\)-symmetry although the corresponding Hamiltonians were self-adjoint [22]. Since the magnetic field is a prime source of time-reversal invariance violation, in graphs with the indicated coupling we have two such effects that may either enhance mutually or act against each other. The aim of this paper is to analyze a model of a magnetic square lattice graph to see how the preferred-orientation vertex coupling can influence its spectral properties. We will compute the spectrum numerically for various rational values of the magnetic flux; the result will show that at high energies the dominating behavior comes from the field alone. The vertex coupling effects are suppressed asymptotically, however, they never completely disappear. The paper is structured as follows. First, we recall how the Hamiltonians of magnetic quantum graphs look like in general (Section 2). Then, in Section 3, we investigate in detail the cases when the 'unit-flux cell' contains two and three vertices (Subsections 3.1 and 3.2) and give the properties of the general model (Subsection 3.3). 
## 2 Magnetic graphs with preferred-orientation coupling In the usual setting [12] we associate with a metric graph a state Hilbert space consisting of (classes of equivalence of) \(L^{2}\) functions on the edges of the graphs. In presence of a magnetic field, the particle Hamiltonian acts as the magnetic Laplacian on it, namely as \(\left(-i\nabla-\mathbf{A}\right)^{2}\) on each graph edge. Here we naturally employ the rational system of units \(\hbar=2m=1=e=c\); should a purist object that the fine structure constant does not equal one, we include the corresponding multiplicative factor into the magnetic intensity units. In other words, since the motion on the graph edges is one-dimensional, the operator action on the \(j\)th edge is \(\psi_{j}\mapsto-\mathcal{D}^{2}\psi_{j}\), where \(\mathcal{D}\) is the quasi-derivative operator \[\mathcal{D}:=\frac{\mathrm{d}}{\mathrm{d}x}-i\,A_{j} \tag{1}\] with \(A_{j}\) being the tangential component of a vector potential referring to the given field on that edge; as usual we are free to choose the gauge which suits our purposes. To make such a magnetic Laplacian a self-adjoint operator, the functions at each vertex \(v\), connecting \(n\) edges, have to be matched through the coupling conditions [23] \[(U_{v}-I_{v})\Psi_{v}+i\ell(U_{v}+I_{v})(\mathcal{D}\Psi)_{v}=0, \tag{2}\] where \(\ell\in\mathbb{R}_{+}\) is a parameter fixing the length scale; we set \(\ell=1\) here for the sake of simplicity. Furthermore, \(U_{v}\) is an \(n\times n\) unitary matrix, \(\Psi_{v}\) and \((\mathcal{D}\Psi)_{v}\) are, respectively, vectors of boundary values of the functions \(\psi_{j}(x)\) and their quasi-derivatives, the latter being conventionally all taken in the outward direction. This is a large family; its elements can be obviously characterized by \(n^{2}\) real parameters. The couplings with the wave function continuity, the so-called \(\delta\)-couplings, form a one-parameter subfamily in it. Here we are going to consider a particular, very different case of matching condition introduced in [16], which corresponds to a matrix \(U_{v}\) of the circulant type, namely \[U_{v}=\begin{pmatrix}0&1&0&\ldots&0&0\\ 0&0&1&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&0&1\\ 1&0&0&\ldots&0&0\end{pmatrix}\,. \tag{3}\] Writing the condition (2) with this matrix \(U_{v}\) in components, we get \[(\psi_{j+1}-\psi_{j})+i\left(\mathcal{D}\psi_{j+1}+\mathcal{D}\psi_{j}\right) =0,\hskip 14.226378ptj=1,\ldots,n, \tag{4}\] where \(\psi_{j}\) is the boundary value of the function \(\psi_{j}\) at the vertex, and similarly for \(\mathcal{D}\psi_{j}\). The index \(j\) labels the edges meeting at the vertex and the numeration is cyclic; we identify \(\psi_{n+k}\) with \(\psi_{k}\) for \(k\in\mathbb{Z}\). This coupling violates the time-reversal invariance exhibiting a preferred orientation, most pronounced at the momentum \(k=\ell^{-1}\); to see it, it is enough to recall that \(U_{v}\) is nothing but the on-shell scattering matrix \(S(\ell^{-1})\)[12]. The violation is also related to the fact that the above matrix \(U_{v}\) is manifestly non-invariant with respect to transposition [22]. ## 3 The model After this preliminary, we can describe the system of our interest in more details. We consider a square lattice, which is placed into a homogeneous magnetic field \(\mathbf{B}=(0,0,B)\) perpendicular to the lattice plane; without loss of generality; we again opt for simplicity and assume that the edges of the lattice cells are of unit length. 
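Before specializing to the lattice, a brief numerical sanity check of the vertex coupling may be helpful (an illustration of ours, not taken from the paper): the sketch below constructs the circulant matrix \(U_{v}\) of (3) with numpy, confirms that it is unitary but not symmetric, reflecting the broken time-reversal invariance, and counts its eigenvalues in the open upper half-plane, a number that reappears later in connection with the negative spectrum.

```python
import numpy as np

def vertex_coupling_matrix(n):
    """Circulant matrix U_v of eq. (3): entry (j, j+1 mod n) equals 1, all others 0."""
    U = np.zeros((n, n))
    U[np.arange(n), (np.arange(n) + 1) % n] = 1.0
    return U

for n in (3, 4):
    U = vertex_coupling_matrix(n)
    unitary = np.allclose(U.T @ U, np.eye(n))      # U is real, so U^dagger = U^T
    symmetric = np.allclose(U, U.T)                # False: the coupling is not time-reversal invariant
    eigs = np.linalg.eigvals(U)                    # the n-th roots of unity
    upper = int(np.sum(eigs.imag > 1e-12))         # eigenvalues with positive imaginary part
    print(f"n={n}: unitary={unitary}, symmetric={symmetric}, "
          f"eigenvalues in upper half-plane={upper}")
```

For a degree-four vertex the count in the upper half-plane is one, which is the quantity used in Section 3.1.2 to bound the number of negative spectral bands.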
We focus on situations which can be treated by methods devised for periodic systems, thus we suppose that the magnetic flux \(\Phi\) per plaquette is a rational multiple of the flux quantum which in the chosen units is \(\Phi_{0}=2\pi\), or more specifically, that the dimensionless flux ratio \(\frac{\Phi}{\Phi_{0}}\) is equal to a rational number \(0<\frac{p}{q}<1\) with coprime positive integers \(q\geq 2\) and \(p=1,2,...,q-1\). Parts of the lattice corresponding to the unit flux consist thus of \(q\) plaquettes. To find the spectrum using the Floquet-Bloch decomposition theorem [12, Chap. 4] we have to choose therefore a plane tiling by domains the areas of which are \(q\). Naturally, this can be done in different ways; we choose the simplest one in which the 'tiles' are arrays of \(q\) elementary cells. In that case, it is suitable to adopt the Landau gauge \({\bf A}=B(0,x,0)\,\) so that \(A_{x}=0\) holds on the horizontal edges, while \(A_{y}\) on the vertical edges is linear with the slope being an integer multiple of \(B\). Another ambiguity concerns the choice of the elementary cell; since the main object of our interest is the lattice, we select for it the symmetric cross-shaped neighborhood of a vertex. Consequently, the 'unit-flux cell' will contain \(q\) vertices of degree four as indicated in Fig. 1. As shown in the figure, the coordinates are supposed to increase 'from left to right' and 'from bottom to top', in which case the constant components of \(A_{j}\) in (1) have positive signs on the vertical edges and the magnetic Laplacian operator at such an edge of the magnetic unit cell acts as \(-{\cal D}_{v}^{2}:=-\big{(}\frac{d}{dy}-ivB\big{)}^{2}\), where \(v=1,2,...,q\) is the vertex index. At the horizontal edges, on the other hand, the constant components of \(A_{j}\) in (1) are absent and the operator acts as the usual Laplacian i.e. \(-\frac{d^{2}}{dx^{2}}\). **Fig. 1.** An elementary cell of the square lattice in a homogeneous magnetic field with the flux value \(\Phi=2\pi\frac{p}{q}\) per plaquette, consisting of the neighborhood of \(q\) vertices of degree four indicated by thick black lines; the arrows on the vertical edges represent the vector potential. The fiber operators in the Floquet-Bloch decomposition have a purely discrete spectrum; each component of the eigenfunctions with energy \(E=k^{2}>0\) is a linear combination of the functions \({\rm e}^{\pm ikx}\) on the horizontal edges, and \({\rm e}^{ivBy}{\rm e}^{\pm iky}\) on the vertical edges with the vertex index \(v\). As for the negative spectrum, the solutions are combinations of real exponentials; one can simply replace \(k\) by \(i\kappa\) with \(\kappa>0\) as we will do in Secs. 3.1.2 and 3.2.2 below. The fiber operators are labeled by the quasimomentum components \(\theta_{1},\theta_{2}\in[-\pi,\pi)\) which indicate how is the phase of the wavefunctions related at the opposite ends of the unit-flux cell; it varies by \(\exp(i\theta_{1})\) in the \(x\)-direction, over \(q\) elementary cells, and by \(\exp(i\theta_{2})\) in the \(y\)-direction, or a single cell size. To find the spectra of the fiber operators for a given flux value of \(\Phi=2\pi\frac{p}{q}\) per plaquette as functions of the quasimomentum, and through them the spectral bands of the system, is a straightforward but tedious procedure which becomes increasingly time-consuming as \(q\) becomes a larger number. 
This follows from the fact that the vertices are of degree four, and consequently, to derive the spectral condition, one has to deal for a given \(q\geq 2\) with a system of \(4q\) linear equations, requiring the corresponding \(4q\times 4q\) determinant of such a system to vanish. In this paper, we investigate and report the results concerning all the coprime ratios \(\frac{\Phi}{\Phi_{0}}=\frac{p}{q}\) with \(q\in\{2,...,12\}\) and \(p=1,...,q-1\), having used Wolfram Mathematica 12 for all the calculations. In the next two sections, we describe in detail the derivation of spectral properties of the first two cases, \(q=\{2,3\}\), corresponding to the flux values of \(\Phi=\pi\) and \(\Phi=2\pi\frac{p}{3}\) per plaquette, where \(p=\{1,2\}\). For higher values of \(q\) the determinants are obtained in a similar way; they are explicit but increasingly complicated functions and we report here just the resulting spectral bands rather than the huge expressions that determine them. ### The case \(\Phi=\pi\) We begin with the simplest nontrivial case, \(q=2\), and specify the Ansatze for the wavefunction components indicated in Fig. 1 as follows: \[\begin{array}{ll}\psi_{j}(x)=a_{j}^{+}\mathrm{e}^{ikx}+a_{j}^{-}\mathrm{e}^{ -ikx},&x\in[-\frac{1}{2},0],\\ \psi_{2}(y)=\big{(}a_{2}^{+}\mathrm{e}^{iky}+a_{2}^{-}\mathrm{e}^{-iky}\big{)} \mathrm{e}^{iBy},&y\in[0,\frac{1}{2}],\\ \psi_{4}(y)=\big{(}a_{4}^{+}\mathrm{e}^{iky}+a_{4}^{-}\mathrm{e}^{-iky}\big{)} \mathrm{e}^{iBy},&y\in[-\frac{1}{2},0],\\ \varphi_{j}(x)=b_{j}^{+}\mathrm{e}^{ikx}+b_{j}^{-}\mathrm{e}^{-ikx},&x\in[0, \frac{1}{2}],\\ \varphi_{2}(y)=\big{(}b_{2}^{+}\mathrm{e}^{iky}+b_{2}^{-}\mathrm{e}^{-iky} \big{)}\mathrm{e}^{2iBy},&y\in[-\frac{1}{2},0],\\ \varphi_{4}(y)=\big{(}b_{4}^{+}\mathrm{e}^{iky}+b_{4}^{-}\mathrm{e}^{-iky} \big{)}\mathrm{e}^{2iBy},&y\in[0,\frac{1}{2}],\end{array} \tag{5}\] where \(j=1,3\); while the \(y\)-coordinate is the same in both the elementary cells, from computational reasons we consider the range of the \(x\)-coordinate for each edge segment separately. The functions \(\psi_{1}\) and \(\varphi_{1}\) have to be matched smoothly at the midpoint of the edge; together with the conditions imposed by the Floquet-Bloch decomposition at the endpoints of the unit-flux cell, we have \[\begin{array}{ll}\psi_{1}(0)=\varphi_{1}(0),&\psi_{1}^{\prime}(0)=\varphi_{ 1}^{\prime}(0),\\ \varphi_{3}\big{(}\frac{1}{2}\big{)}=\mathrm{e}^{i\theta_{1}}\,\psi_{3}\big{(} -\frac{1}{2}\big{)},&\varphi_{3}^{\prime}\big{(}\frac{1}{2}\big{)}=\mathrm{e} ^{i\theta_{1}}\,\psi_{3}^{\prime}\big{(}-\frac{1}{2}\big{)},\\ \psi_{2}\big{(}\frac{1}{2}\big{)}=\mathrm{e}^{i\theta_{2}}\,\psi_{4}\big{(}- \frac{1}{2}\big{)},&\mathcal{D}_{1}\psi_{2}\big{(}\frac{1}{2}\big{)}=\mathrm{e }^{i\theta_{2}}\,\mathcal{D}_{1}\psi_{4}\big{(}-\frac{1}{2}\big{)},\\ \varphi_{4}\big{(}\frac{1}{2}\big{)}=\mathrm{e}^{i\theta_{2}}\,\varphi_{2} \big{(}-\frac{1}{2}\big{)},&\mathcal{D}_{2}\varphi_{4}\big{(}\frac{1}{2}\big{)} =\mathrm{e}^{i\theta_{2}}\,\mathcal{D}_{2}\varphi_{2}\big{(}-\frac{1}{2} \big{)},\end{array} \tag{6}\] where the operators \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), as already introduced above, are \(\mathcal{D}_{v}:=\frac{d}{dy}-ivB\) with \(v\in\{1,2\}\). 
Next, imposing the matching conditions (4) at the two vertices of the'magnetic unit cell', and taking into account that the derivatives have to be taken in the outward direction, we arrive at the following set of equations \[\psi_{2}(0)-\psi_{1}\left(-\tfrac{1}{2}\right)+i\left(\mathcal{D}_{1 }\psi_{2}(0)+\psi_{1}^{\prime}\left(-\tfrac{1}{2}\right)\right)=0,\] \[\psi_{3}(0)-\psi_{2}(0)+i\left(-\psi_{3}^{\prime}(0)+\mathcal{D}_{ 1}\psi_{2}(0)\right)=0,\] \[\psi_{4}(0)-\psi_{3}(0)+i\left(-\mathcal{D}_{1}\psi_{4}(0)-\psi_{ 3}^{\prime}(0)\right)=0,\] \[\psi_{1}\left(-\tfrac{1}{2}\right)-\psi_{4}(0)+i\left(\psi_{1}^{ \prime}\left(-\tfrac{1}{2}\right)-\mathcal{D}_{1}\psi_{4}(0)\right)=0, \tag{7}\] \[\varphi_{2}(0)-\varphi_{1}\left(\tfrac{1}{2}\right)+i\left(- \mathcal{D}_{2}\varphi_{2}(0)-\varphi_{1}^{\prime}\left(\tfrac{1}{2}\right) \right)=0,\] \[\varphi_{3}(0)-\varphi_{2}(0)+i\left(\varphi_{3}^{\prime}(0)- \mathcal{D}_{2}\varphi_{2}(0)\right)=0,\] \[\varphi_{4}(0)-\varphi_{3}(0)+i\left(\mathcal{D}_{2}\varphi_{4} (0)+\varphi_{3}^{\prime}(0)\right)=0,\] \[\varphi_{1}\left(\tfrac{1}{2}\right)-\varphi_{4}(0)+i\left(- \varphi_{1}^{\prime}\left(\tfrac{1}{2}\right)+\mathcal{D}_{2}\varphi_{4}(0) \right)=0.\] Substituting from (5) into (6) makes it possible to express the coefficients \(b_{3}^{\pm},a_{2}^{\pm},b_{4}^{\pm}\) and \(a_{1}^{\pm}\) in terms of \(a_{3}^{\pm},a_{4}^{\pm},b_{2}^{\pm}\) and \(b_{1}^{\pm}\); substituting then from (5) into (2.1), we get a system of eight linear equations which is solvable provided its determinant vanishes. After simple manipulations, and neglecting the inessential multiplicative factor \(-2048\,k^{6}\,e^{i(2\theta_{2}+\theta_{1})}\), we arrive at the spectral condition \[4k^{2}-\left(k^{2}-1\right)^{2}\cos 2k+\left(k^{2}+1\right)^{2}\cos 4k+\left(k ^{2}-1\right)^{2}\Theta_{2}\;\sin^{2}k=0, \tag{8}\] where the quasimomentum-dependent quantity \(\Theta_{2}:=\cos 2\theta_{2}+\cos\theta_{1}\) ranges through the interval \([-2,2]\). Let us discuss the positive and negative part of the spectrum separately. #### 3.1.1 Positive spectrum As in other cases where periodic quantum graphs are investigated [16, 17, 18, 19, 20, 24], let us first ask whether the system can exhibit flat bands or not. This happens if the spectral condition (8) has a solution independent of the quasimomentum components \(\theta_{1}\) and \(\theta_{2}\), or equivalently, of the quantity \(\Theta_{2}\). One easily checks, however, that for \(k=1\) and \(k=n\pi\), \(n\in\mathbb{N}\), the left-hand side of the equation (8) reduces to \(8\cos^{2}2\neq 0\) and \(8\pi^{2}n^{2}\neq 0\), respectively. Accordingly, there are no flat bands, and the spectrum is absolutely continuous having a band-gap structure; from condition (8), taking into account that \(\Theta_{2}\in[-2,2]\), we find that the number \(k^{2}\) belongs to the spectral bands if and only if it satisfies the condition \[-2\leq\frac{-4k^{2}+\left(k^{2}-1\right)^{2}\cos 2k-\left(k^{2}+1\right)^{2} \cos 4k}{\left(k^{2}-1\right)^{2}\sin^{2}k}\leq 2. \tag{9}\] The band-gap pattern, containing also the negative spectrum as well, which will be discussed below, is illustrated in Fig. 2. Before turning to the negative part, let us look into the asymptotic behavior of the spectral bands in the high energy regime, \(k\to\infty\). 
To this aim, we rewrite the spectral condition (8) in the form
\[\alpha_{1}(k)+\frac{\alpha_{2}(k)}{k^{2}}=\mathcal{O}(k^{-4}), \tag{10}\]
where
\[\alpha_{1}(k)=(-4\cos 2k-2+\Theta_{2})\;\sin^{2}k\;,\]
\[\alpha_{2}(k)=2\left(\cos 2k+\cos 4k+2-\Theta_{2}\;\sin^{2}k\right).\]
Hence for large values of \(k\), the solutions are close to the points where the leading term, \(\alpha_{1}(k)\), vanishes. This gives rise to two types of spectral bands:

* Pairs of narrow bands in the vicinity of the roots of \(\sin^{2}k\) which together with the gap between them have asymptotically constant width at the energy scale. To see that, we set \(k=n\pi+\delta\) with \(n\in\mathbb{N}\) and consider the limit \(n\to\infty\). Substituting it into (10) and then expanding the expression in the spectral condition at \(\delta=0\), we get in the leading order a quadratic equation in \(\delta\) which yields
\[\delta_{n}=\pm\frac{2\sqrt{2}}{n\pi}\sqrt{\frac{1}{6-\Theta_{2}}}+\mathcal{O}(n^{-3}).\]
Since the band edges correspond to \(\Theta_{2}=-2\) and \(2\), the width of the bands and the gap between them are (at the energy scale) respectively determined as \(\triangle E_{n,b}=2(\sqrt{2}-1)+\mathcal{O}(n^{-2})\) and \(\triangle E_{n,g}=4+\mathcal{O}(n^{-2})\), where the subscripts \(b\) and \(g\) refer to the band and gap, respectively.
* Pairs of wide bands determined by the condition \(-1\leq\cos 2k\leq 0\) corresponding to the vanishing of the bracket expression in \(\alpha_{1}(k)\).

The different character of those bands is illustrated in Fig. 2. It is also seen in the probability that an energy value belongs to the spectrum for a randomly chosen value of the momentum \(k\),
\[P_{\sigma}(H):=\lim_{K\to\infty}\frac{1}{K}\left|\sigma(H)\cap[0,K]\right|. \tag{11}\]
This quantity was introduced by Band and Berkolaiko [25] who demonstrated its universality - meaning its independence of graph edge lengths as long as they are incommensurate - in periodic graphs with Kirchhoff coupling; the claim was recently extended to graphs with the coupling considered here [20, 21]. For the equilateral graphs we deal with here the universality naturally makes no sense, but the probability (11) can be useful to compare our results to those concerning the discrete magnetic Laplacians of [7]. The width of the narrow bands at the momentum scale is \(\triangle k_{n,b}=\frac{\sqrt{2}-1}{n\pi}+\mathcal{O}(n^{-3})\), and as a consequence, their contribution to the probability (11) is zero. For the wide bands, on the other hand, it obviously equals \(\frac{1}{2}\).

Figure 2: Spectral bands of the square lattice with the flux value of \(\Phi=\pi\) per plaquette. Here and in the subsequent figures 3 and 4, the positive and negative spectra are simply given by \(k^{2}\) and \(-\kappa^{2}\), respectively.

#### 3.1.2 Negative spectrum

As we have mentioned, to find the negative spectrum one only needs to replace \(k\) by \(i\kappa\) in (8) and (9), which leads, respectively, to the following spectral and band conditions
\[-4\kappa^{2}-(\kappa^{2}+1)^{2}\cosh 2\kappa+(\kappa^{2}-1)^{2}\cosh 4\kappa-(\kappa^{2}+1)^{2}\;\Theta_{2}\;\sinh^{2}\kappa=0, \tag{12}\]
\[-1\leq\frac{\kappa^{4}-6\kappa^{2}+2(\kappa^{2}-1)^{2}\cosh 2\kappa-4\kappa^{2}\operatorname{csch}^{2}\kappa+1}{(\kappa^{2}+1)^{2}}\leq 1. \tag{13}\]
More precisely, a negative eigenvalue \(-\kappa^{2}\) belongs to a spectral band if the positive number \(\kappa\) satisfies the band condition (13).
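The band conditions just stated are easy to evaluate numerically; the following sketch (ours, for illustration only) scans (9) and (13) on a grid and reproduces the qualitative picture of Fig. 2: positive bands covering roughly half of the momentum axis at high energies, and a single negative band.

```python
import numpy as np

def positive_band(k):
    """Band condition (9): True where k**2 lies in a positive spectral band."""
    num = -4 * k**2 + (k**2 - 1) ** 2 * np.cos(2 * k) - (k**2 + 1) ** 2 * np.cos(4 * k)
    den = (k**2 - 1) ** 2 * np.sin(k) ** 2
    return np.abs(num / den) <= 2

def negative_band(kappa):
    """Band condition (13): True where -kappa**2 lies in a negative spectral band."""
    f = (kappa**4 - 6 * kappa**2 + 2 * (kappa**2 - 1) ** 2 * np.cosh(2 * kappa)
         - 4 * kappa**2 / np.sinh(kappa) ** 2 + 1) / (kappa**2 + 1) ** 2
    return np.abs(f) <= 1

with np.errstate(divide="ignore", invalid="ignore"):
    k = np.linspace(0.01, 20.0, 200001)
    kappa = np.linspace(0.01, 3.0, 50001)
    in_pos = positive_band(k)
    in_neg = negative_band(kappa)

# Only the wide bands contribute at high energies, so the covered fraction of the
# momentum axis should be close to 1/2, in line with the value of P_sigma(H) above.
print("fraction of the k-axis in positive bands:", in_pos.mean())

edges = kappa[in_neg]
print("negative band: kappa from", edges.min(), "to", edges.max(),
      "i.e. energies from", -edges.max() ** 2, "to", -edges.min() ** 2)
```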
As Fig. 2 illustrates, there is no flat band, which is obvious from the spectral condition (12) in which the coefficient of \(\Theta_{2}\) in the last, the only quasimomentum-dependent, term is nonzero. Concerning the number of negative bands, let us first recall that the Hamiltonian of a star graph with \(N\geq 3\) semi-infinite edges and the coupling given by the matrix \(U_{v}\) (see (3)) at the central vertex has a nonempty discrete spectrum in the negative part, in which, by [16], the eigenvalues are equal to
\[E=-\tan^{2}\frac{m\pi}{N}, \tag{14}\]
with \(m\) running through \(1,\cdots,[\frac{N}{2}]\) for odd \(N\) and \(1,\cdots,[\frac{N-1}{2}]\) for even \(N\); their number coincides with the number of eigenvalues of the matrix \(U_{v}\) with positive imaginary part. Since the unit-flux cell contains for \(\Phi=\pi\) two vertices of degree four, the corresponding matrix \(U_{v}\) in each of them has only one eigenvalue in the upper complex halfplane, and consequently, in accordance with Theorem 2.6 of [19], the negative spectrum cannot have more than two bands. However, as can be seen in Fig. 2, in reality there is only one negative band, which can be checked by inspecting the band condition (13). Denoting the function inside the inequality by \(f(\kappa)\), we find that each condition \(f(\kappa)=\pm 1\) can have only one solution for \(\kappa>0\). Indeed, consider first the condition \(f(\kappa)=-1\) which, after simple manipulations, can be rewritten in the factorized form
\[(\kappa-\coth\kappa)(\kappa-\tanh\kappa)(\sinh\kappa+\kappa\cosh\kappa)(\kappa\sinh\kappa+\cosh\kappa)\,\operatorname{csch}^{2}\kappa=0\,, \tag{15}\]
where we have divided \(f(\kappa)+1\) by \(4\,\sinh\kappa\,\cosh\kappa>0\) and multiplied by \((\kappa^{2}+1)^{2}\). It is easy to check that only the expression in the first bracket in (15) can be zero, since it is monotonically increasing (with the first derivative, \(\coth^{2}\kappa\), positive) on the interval \(\kappa\in(0,\infty)\), ranging from \(-\infty\) to \(+\infty\), which implies that it has only one root on the domain. The second term, \(\kappa-\tanh\kappa\), cannot be zero since it is also monotonically increasing on the domain (with the first derivative \(\tanh^{2}\kappa>0\)) but ranging from \(0\) to \(+\infty\); the expressions in the last two brackets are obviously nonzero for \(\kappa>0\), and the last factor, \(\operatorname{csch}^{2}\kappa\), cannot give a solution since the left-hand side of (15) tends to infinity as \(\kappa\to\infty\). Now, let us pass to the condition \(f(\kappa)=1\) which, after manipulations, can be rewritten as
\[(\coth^{2}\kappa+1)\lambda(\kappa)=0;\qquad\lambda(\kappa):=(\kappa^{2}-1)^{2}\cosh 2\kappa-(\kappa^{2}+1)^{2},\]
in which \(\coth^{2}\kappa+1\) is obviously nonzero; moreover, we see that \(\lambda(\kappa)\) is negative for \(\kappa\in(0,1]\) in view of the fact that \(0\leq(\kappa^{2}-1)^{2}\cosh 2\kappa<1\) while \((\kappa^{2}+1)^{2}>1\); note that \(\cosh 2\kappa>1\) for \(\kappa>0\). Hence, it suffices to inspect the interval \(\kappa\in(1,\infty)\); to this aim, we rewrite the condition \(\lambda(\kappa)=0\) in the new form \(\xi(\kappa):=\cosh 2\kappa-(\frac{\kappa^{2}+1}{\kappa^{2}-1})^{2}=0\), from which we have \(\xi^{\prime}(\kappa)=2\sinh 2\kappa+\frac{8(\kappa^{3}+\kappa)}{(\kappa^{2}-1)^{3}}>0\) for \(\kappa>1\), implying that \(\xi(\kappa)\) is monotonically increasing on the domain.
On the other hand, we have \(\lim_{\kappa\to 1}\xi(\kappa)=-\infty\) and \(\lim_{\kappa\to\infty}\xi(\kappa)=+\infty\) ; this, together with the monotonicity of \(\xi(\kappa)\), confirms that it can have only one root on the mentioned domain which concludes the claim. ### The case \(\Phi=2\pi\frac{p}{3}\) Let us consider next the case when the flux value per plaquette is \(\Phi=2\pi\frac{p}{3}\), where \(p=\{1,2\}\); the elementary cell now contains three vertices of degree four. The spectral condition can be derived in a similar way as in Sec. 3.1 by employing the appropriate Ansatze and matching them at each vertex as we did when deriving (8); note that in this case one has to also consider the quasi-derivative \({\cal D}_{3}:=\frac{d}{dy}-3iB\) referring the vertical edge passing through the third vertex. Seeking non-trivial solutions of the corresponding system of twelve linear equations, we arrive at the spectral condition \[g(k)+(k^{2}-1)^{3}\;\Theta_{3}\;\sin^{3}k=0, \tag{16}\] where \(\Theta_{3}:=\cos 3\theta_{2}+\cos\theta_{1}\in[-2,2]\) and \[g(k):= \ 6(k^{2}-1)^{2}(k^{2}+1)\sin k\ \cos^{3}k-(k^{2}+1)^{3}\sin 6k\] \[-8k\Big{(}(k^{4}+6k^{2}+1)\cos 2k-(k^{2}-1)^{2}\Big{)}\sin^{3} \frac{\pi p}{3}\,\cos\frac{\pi p}{3}\] \[-3(k^{2}+1)\Big{(}(k^{2}-1)^{2}\cos 2k-(k^{2}+1)^{2}\Big{)}\sin 2k \ \cos\frac{2\pi p}{3}.\] #### 3.2.1 Positive spectrum As in Sec. 3.1.1, we ask first whether the spectral condition (16) can give rise to flat bands or not; the values to explore are again \(k=1\) and \(k=n\pi\) with \(n\in\mathbb{N}\), for which the quasimomentum-dependent part of the spectral condition vanishes. Evaluating the function \(g(k)\) at the corresponding energies, we arrive at the expressions \(-8\sin 6-64\cos 2\sin^{3}\frac{\pi p}{3}\cos\frac{\pi p}{3}+24\sin 2\cos\frac{2 \pi p}{3}\) and \(-64\pi^{3}n^{3}\sin^{3}\frac{\pi p}{3}\cos\frac{\pi p}{3}\), respectively; one easily checks that both the expressions are nonzero for \(p=\{1,2\}\), so that there is no flat band. The spectrum is thus absolutely continuous having a band-gap structure, as illustrated in Fig. 3. More explicitly, using (16) and taking into account the range of the quasimomentum-dependent quantity \(\Theta_{3}\), we find that an energy \(k^{2}\) belongs to a spectral band if and only if it satisfies the condition \[-2\leq\frac{g(k)}{-(k^{2}-1)^{3}\ \sin^{3}k}\leq 2. \tag{17}\] As before, we are interested in the asymptotic behavior of the bands in the high energy regime, \(k\rightarrow\infty\). To this aim, we rewrite the spectral condition (16) in the form \[\beta_{1}(k)+\frac{\beta_{2}(k)}{k}+\frac{\beta_{3}(k)}{k^{2}}+\frac{\beta_{4 }(k)}{k^{3}}=\mathcal{O}(k^{-4}), \tag{18}\] Figure 3: Spectral bands of the square lattice with the flux per plaquette \(\Phi=2\pi\frac{p}{3}\), \(p=\{1,2\}\). where \[\beta_{1}(k) =\Big{(}\Theta_{3}+6\cos\big{(}k+\frac{2\pi p}{3}\big{)}+6\cos \big{(}k-\frac{2\pi p}{3}\big{)}+18\cos k+8\cos 3k\Big{)}\sin^{3}k\;,\] \[\beta_{2}(k) =8\sin^{2}\frac{\pi p}{3}\;\sin\frac{2\pi p}{3}\;\sin^{2}k\;,\] \[\beta_{3}(k) =\frac{3}{2}(6\sin 2k+\sin 4k)\cos\frac{2\pi p}{3}-3\Theta_{3} \sin^{3}k-3\sin 6k-6\sin k\,\cos^{3}k\;,\] \[\beta_{4}(k) =-16(3\cos 2k+1)\sin^{3}\frac{\pi p}{3}\,\cos\frac{\pi p}{3}. 
\tag{19}\] Seeking the solution in the vicinity of the points where the leading term, \(\beta_{1}(k)\), vanishes, we find again two types of spectral bands: * Series of three narrow bands in the vicinity of the roots of \(\sin^{3}k\) which, again, have an asymptotically constant width on the energy scale as \(k\to\infty\). To estimate the width of these bands, as in the previous case, we set \(k=n\pi+\delta\) with \(n\in\mathbb{N}\) and consider the limit \(n\to\infty\); substituting it into (18) and expanding then the resulting equation at \(\delta=0\), we get in the leading order at a cubic equation in \(\delta\) from which we obtain three solutions for \(\delta\) in the form of Cardano's formula that are asymptotically of the form \[\delta_{n,j}=\frac{\mathcal{G}(p,\Theta_{3})}{n}+\mathcal{O}(n^{-3}),\qquad j =1,2,3.\] Then, taking into account that the band edges correspond to \(\Theta_{3}=-2\) and \(2\), the width of these three bands, \(\triangle E_{n,j}(p)\), \(j=1,2,3\), at the energy scale is obtained as \[\triangle E_{n,1}(1) =\triangle E_{n,3}(2)=\frac{1}{\sqrt{3}}+\mathcal{O}(n^{-2})\;,\] \[\triangle E_{n,2}(1) =\triangle E_{n,2}(2)=\frac{1}{22}\left(3\sqrt{3}-11\sqrt{19}+4 8\right)+\mathcal{O}(n^{-2})\;,\] \[\triangle E_{n,3}(1) =\triangle E_{n,1}(2)=\frac{1}{22}\left(3\sqrt{3}+11\sqrt{19}-4 8\right)+\mathcal{O}(n^{-2}).\] The results, as expected, coincide with the narrow bands pattern in Fig. 3 confirming that these bands are asymmetric with respect to a swap of \(p\) and \(q-p\); for more details, see the fourth bullet point in Sec. 3.3. * Series of three wide bands determined by the condition \[-1\leq\;-3\cos\big{(}k+\frac{2\pi p}{3}\big{)}-3\cos\big{(}k-\frac{2\pi p}{3} \big{)}-9\cos k-4\cos 3k\;\leq 1,\] (20) referring to the vanishing of the 'large' bracket in \(\beta_{1}(k)\), taking into account that \(\Theta_{3}\in[-2,2]\); note that for \(q=3\) one easily checks that the function in the inequality is the same for \(p=1,2\) and condition (20) simplifies to \[-1\leq\;-6\cos k-4\cos 3k\;\leq 1. \tag{21}\] The narrow bands again do not contribute to (11). As for the wide ones, it is easy to compute on a single period of the function inside the inequality (21) the ratio of the sum of intervals of \(k\) satisfying the condition to the period; this shows that the probability (11) of belonging to the spectrum for a randomly chosen value of \(k\) equals \(P_{\sigma}(H)=-\frac{1}{3}+\frac{4}{\pi}\,\arctan\sqrt{6-\sqrt{33}}\approx 0.262498\). The result is independent of \(p\); in the following we will see that this is no longer true for higher values of \(q\,\). #### 3.2.2 Negative spectrum The corresponding spectral condition is again obtained by replacing the momentum variable \(k\) in (16) and (17) by \(i\kappa\). The spectrum is absolutely continuous; there is no flat band as one can check in a way analogous to that of Sec. 3.1.2. The unit-flux cell now contains three vertices so in accordance with Theorem 2.6 of [19] the negative spectrum cannot have more than three spectral bands; as we see in Fig. 3, this happens for \(p=2\) while for \(p=1\) we have only two negative bands. ### The case \(\Phi=2\pi\frac{p}{q}\) with \(q=\{2,3,...,12\}\) After dealing with the two simplest cases, we pass to the situation with the flux values \(\Phi=2\pi\frac{p}{q}\) for all the coprime ratios \(\frac{p}{q}\) with \(q\in\{2,...,12\}\) and \(p=1,...,q-1\,\). 
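The probability obtained above for \(q=3\) can be cross-checked numerically; the short sketch below (ours, not from the paper) samples condition (21) over one period and compares the covered fraction with the closed-form value \(-\frac{1}{3}+\frac{4}{\pi}\arctan\sqrt{6-\sqrt{33}}\).

```python
import numpy as np

# Condition (21): -1 <= -6*cos(k) - 4*cos(3k) <= 1.  The function changes sign under
# k -> k + pi, so the condition |.| <= 1 can be sampled over a single pi-period.
k = np.linspace(0.0, np.pi, 2_000_001)
inside = np.abs(6 * np.cos(k) + 4 * np.cos(3 * k)) <= 1

numerical = inside.mean()
closed_form = -1.0 / 3.0 + (4.0 / np.pi) * np.arctan(np.sqrt(6.0 - np.sqrt(33.0)))

print(f"numerical fraction of k satisfying (21): {numerical:.6f}")
print(f"closed-form value quoted in the text:    {closed_form:.6f}")  # ~0.262498
```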
As we have already mentioned, the higher the \(q\) becomes, the more complicated the spectral condition is; we proceeded here to the limit of what was computationally manageable, reaching it at systems of 48 linear equations. The scheme remains the same; the spectral condition generally takes the form
\[h(k;p,q)+(k^{2}-1)^{q}\;\Theta_{q}\;\sin^{q}k=0, \tag{22}\]
where the quasimomentum-dependent quantity \(\Theta_{q}:=\cos q\theta_{2}+\cos\theta_{1}\) ranges again through the interval \([-2,2]\). While the latter is simple and depends, as in Secs. 3.2 and 3.1, on \(q\) only, we have not been able to find a general expression of the function \(h(k;p,q)\) for any \(p\) and \(q\). One can check that \(h(k;p,q)\) does not vanish at \(k=1\) and \(k=n\pi\), so that the spectrum is absolutely continuous having a band-and-gap character. The results of the computation for the considered flux values given in Fig. 4 show an intricate spectral pattern:

* Given the fact that \(\Theta_{q}\in[-2,2]\), away from \(k=1\) and \(k=n\pi\) a positive number \(k^{2}\) belongs to the spectrum if and only if
\[-2\leq\frac{h(k;p,q)}{(k^{2}-1)^{q}\;\sin^{q}k}\leq 2\,;\] (23)
given the increasingly complicated form of the function \(h(\cdot;p,q)\), the number of bands becomes larger with increasing \(q\), which motivates one to conjecture that the spectrum could be _fractal_, in fact a _Cantor set_, for \(\Phi\not\in 2\pi\mathbb{Q}\). As the momentum \(k\) grows, the dominating part of the spectrum - for the sake of brevity we label it for obvious reasons as _butterfly_ - can be found within the spectral bands of the non-magnetic lattice; recall that in that case the bands dominate the spectrum [16].
* The relative weight of the remaining part of the spectrum - let us call it _non-butterfly_ - diminishes as \(k\) increases. Its components remain nevertheless visible, being of asymptotically constant width at the energy scale, separated by linearly blowing-up butterfly patterns. The numerical results also show that the number of bands in these parts increases with growing \(q\), and one can conjecture that this part of the spectrum would again have a Cantor character for \(\Phi\not\in 2\pi\mathbb{Q}\).
* The negative spectrum also has a band-gap structure coming from condition (23) in which \(k\) is replaced by \(i\kappa\) with \(\kappa>0\); in accordance with Theorem 2.6 of [19] it cannot have more than \(q\) spectral bands for the flux value \(\Phi=2\pi\frac{p}{q}\) per plaquette.
* The non-butterfly bands - in contrast to the rest of the spectrum - are, even at high energies, visibly asymmetric with respect to a swap of \(p\) and \(q-p\), which is equivalent to flipping the field direction.
* In the high energy regime, \(k\to\infty\), one can rewrite the spectral condition (22) in the polynomial form
\[k^{2q}\left(\Theta_{q}+w(k;p,q)\right)\sin^{q}k+\mathcal{O}(k^{2q-1})=0,\] (24)
where \(w(\cdot;p,q)\) is a periodic function, generalizing the functions \(\frac{\alpha_{1}(k)}{\sin^{2}k}-\Theta_{2}\), \(\frac{\beta_{1}(k)}{\sin^{3}k}-\Theta_{3}\) above, which means that the butterfly part is asymptotically \(\pi\)-periodic in the momentum variable. We plot the pattern obtained from the requirement of the large bracket vanishing in (24) in Fig. 5 for one interval, \(n\pi\leq k\leq(n+1)\pi\). Despite the computational restriction on the value of \(q\) we see Hofstadter's pattern clearly emerging.
* In view of the asymptotic periodicity and the fact that non-butterfly part becomes negligible as \(n\to\infty\), the probability \(P_{\sigma}(H)\) given by (11) coincides with the Lebesgue measure of the band spectrum normalized to one on the interval \((n\pi,(n+1)\pi)\). We plot this quantity in dependence of the considered flux values in Fig. 6; it decreases with the growing \(q\) in accordance with the expectation that for \(\Phi\not\in 2\pi\mathbb{Q}\) the spectrum has measure zero. Recall further that while the 'total bandwidth' of the rational Harper operator is in general not known, for increasingly complicated coprime ratios we have the Thouless conjecture [26, 27] which states that \[\lim_{\begin{subarray}{c}q\,\rightarrow\,\infty\\ p\,\wedge\,q\,=\,1\end{subarray}}\;\;q\big{|}\sigma\big{(}\Phi=2\pi\frac{p}{q} \big{)}\big{|}=\frac{16\,C_{\rm Cat}}{\pi} \tag{25}\] with the Catalan constant \(C_{\rm Cat}=\sum_{n\in\mathbb{N}}(-1)^{n}(2n+1)^{-2}\approx 0.9159...\). We can compare this claim with (11). Having in mind that the standard interval on which the Hofstadter's butterfly is plotted is \([-2,2]\), the normalized measure values of \(\frac{4C_{\rm Cat}}{\pi q}\) are according to Fig. 7 not far from \(P_{\sigma}(H)\) even for the relatively small values of \(q\) we consider here. Figure 4: Spectrum of the square lattice of unit edge length for the flux ratio per plaquette \(\frac{\Phi}{\Phi_{0}}=\frac{p}{q}\) with \(q\in\{2,...,12\}\) and \(p=1,...,q-1\). At the top and bottom the spectral bands of \(\Phi\in 2\pi\mathbb{Z}\) corresponding to the non-magnetic case [16] are shown. **Fig. 5.** The asymptotic shape of the butterfly part of the spectrum. At the top and bottom the spectral bands of the non-magnetic case are again shown. **Fig. 6.** The probability (11) of belonging to the spectrum for a randomly chosen value of \(k\). To make the pattern more visible, we join the points referring to the adjacent values of \(\frac{\Phi}{\Phi_{0}}\). **Fig. 7. Comparison of (11) to the Thouless conjecture values indicated by the red diamonds.** ### Data availability statement Data are available in the article. ### Conflict of interest The authors have no conflict of interest. ### Acknowledgments M.B. and J.L. were supported by the Czech Science Foundation within the project 22-18739S. The work of P.E. was supported by the Czech Science Foundation within the project 21-07129S and by the EU project CZ.02.1.01/0.0/0.0/16_019/0000778.
2302.01989
Robust and Verifiable Proportionality Axioms for Multiwinner Voting
When selecting a subset of candidates (a so-called committee) based on the preferences of voters, proportional representation is often a major desideratum. When going beyond simplistic models such as party-list or district-based elections, it is surprisingly challenging to capture proportionality formally. As a consequence, the literature has produced numerous competing criteria of when a selected committee qualifies as proportional. Two of the most prominent notions are Dummett's proportionality for solid coalitions (PSC) and Aziz et al.'s extended justified representation (EJR). Both guarantee proportional representation to groups of voters who have very similar preferences; such groups are referred to as solid coalitions by Dummett and as cohesive groups by Aziz et al. However, these notions lose their bite when groups are only almost solid or almost cohesive. In this paper, we propose proportionality axioms that are more robust: they guarantee representation also to groups that do not qualify as solid or cohesive. Further, our novel axioms can be easily verified: Given a committee, we can check in polynomial time whether it satisfies the axiom or not. This is in contrast to many established notions like EJR, for which the corresponding verification problem is known to be intractable. In the setting with approval preferences, we propose a robust and verifiable variant of EJR and a simple greedy procedure to compute committees satisfying it. In the setting with ranked preferences, we propose a robust variant of PSC, which can be efficiently verified even for general weak preferences. In the special case of strict preferences, our notion is the first known satisfiable proportionality axiom that is violated by the Single Transferable Vote (STV). We also discuss implications of our results for participatory budgeting, querying procedures, and the notion of proportionality degree.
Markus Brill, Jannik Peters
2023-02-03T20:20:51Z
http://arxiv.org/abs/2302.01989v1
# Robust and Verifiable Proportionality Axioms for Multiwinner Voting ###### Abstract When selecting a subset of candidates (a so-called _committee_) based on the preferences of voters, proportional representation is often a major desideratum. When going beyond simplistic models such as party-list or district-based elections, it is surprisingly challenging to capture proportionality formally. As a consequence, the literature has produced numerous competing criteria of when a selected committee qualifies as proportional. Two of the most prominent notions are Dummett's _proportionality for solid coalitions_ (PSC) and Aziz et al.'s _extended justified representation_ (EJR). Both definitions guarantee proportional representation to groups of voters who have very similar preferences; such groups are referred to as _solid coalitions_ by Dummett and as _cohesive groups_ by Aziz et al. However, these notions lose their bite when groups are only almost solid or almost cohesive. In this paper, we propose proportionality axioms that are more robust than their existing counterparts, in the sense that they guarantee representation also to groups that do not qualify as solid or cohesive. Importantly, we show that these stronger proportionality requirements are always satisfiable. Another important advantage of our novel axioms is that their satisfaction can be easily verified: Given a committee, we can check in polynomial time whether it satisfies the axiom or not. This is in contrast to many established notions like EJR, for which the corresponding verification problem is known to be intractable. In the setting with approval preferences, we propose a robust and verifiable variant of EJR and a simply greedy procedure to compute committees satisfying it. We show that our axiom is considerably more discriminating in randomly generated instances compared to EJR and other existing axioms. In the setting with ranked preferences, we propose a robust variant of Dummett's PSC. In contrast to earlier strengthenings of PSC, our axiom can be efficiently verified even for general weak preferences. In the special case of strict preferences, our notion is the first known satisfiable proportionality axiom that is violated by the _Single Tranferable Vote_ (STV). In order to prove that our axiom can always be satisfied, we extend the notion of priceability to the ranked preferences setting. We also discuss implications of our results for participatory budgeting, querying procedures, and to the notion of proportionality degree. ## 1. Introduction The proportional representation of preferences is an important goal in many scenarios in which a subset of candidates needs to be selected based on the preferences of voters over those candidates. Such scenarios occur in a wide variety of applications, including parliamentary elections (Pukelsheim, 2014), participatory budgeting (Peters et al., 2021), digital democracy platforms (Behrens et al., 2014), and blockchain consensus protocols (Cevallos and Stewart, 2021). In the (computational) social choice literature, this type of problem is often referred to as _committee selection_ or _multiwinner voting_(Faliszewski et al., 2017; Lackner and Skowron, 2022). Some classic applications assume that candidates or voters (or both) come in predefined categories (political parties or voting districts), which greatly simplifies the task of finding representative outcomes. 
In the general case, when neither candidates nor voters come in predefined groups, it is surprisingly challenging to capture proportional representation formally. Perhaps as a consequence of this, the (computational) social choice literature has produced numerous competing criteria for when a selected committee qualifies as "proportional." What many of the existing definitions have in common is that they define proportionality over groups of voters whose preferences are similar to each other. This approach goes back to the seminal work of Dummett (1984), who defined _proportionality for solid coalitions (PSC)_ in the setting where voters cast ranked ballots. PSC guarantees an appropriate level of representation to any group of voters that is "solidly committed" to a set of candidates in the sense that all voters of the group rank those candidates (in some order) over all other candidates. The most prominent example of a voting rule ensuring PSC is the widely used _single transferable vote (STV)_.1 Similar notions were subsequently introduced in the setting of approval-based multiwinner voting (Lackner and Skowron, 2022). In particular, extended justified representation (EJR) (Aziz et al., 2017) and proportional justified representation (PJR) (Sanchez-Fernandez et al., 2017) formulate proportional representation guarantees for "cohesive" groups; a group of voters qualifies as cohesive if the intersection of their approval sets is sufficiently large. Footnote 1: In his article on STV, Tideman remarked that “it is the fact that STV satisfies PSC that justifies describing STV as a system of proportional representation” (Tideman, 1995, page 27). When voters with similar preferences fall short of the high standard of uniformity defined by "solid coalitions" or "cohesive groups," the axioms stay mostly mute.2 Indeed, this reliance on highly uniform voter groups has attracted criticism in the literature. For instance, Tideman remarked (in the context of discussing a rule satisfying PSC) that there may be "voters who would be members of a solid coalition except that they included an 'extraneous' candidate, which is quickly eliminated among their top choices. These voters' nearly solid support for the coalition counts for nothing, which seems to me inappropriate" (Tideman, 2006, page 279). Aziz and Lee gave a concrete example for this behavior and stated -- with regard to their own Expanding Approvals Rule (EAR) -- that "understanding formally whether EAR, or other rules, satisfy Tideman's notion of 'robust' PSC is an interesting avenue for future work" (Aziz and Lee, 2020, page 33). Relatedly, Hoffman et al. (2021) criticize that PSC is not compatible with ballot truncation. For instance, in an election where two candidates are to be elected and one quarter of the voters only rank \(a\) whereas another quarter of the voters only rank \(b\) before \(a\), PSC would not require either \(b\) or \(a\) to be elected. This is further corroborated by the work of Marsh and Plescia (2016), who find frequent cases of vote splitting in Irish STV elections, which has the potential to make large solid coalitions quite rare. Footnote 2: PSC does not impose any lower bounds on the representation of an almost solid group. EJR, on the other hand, does at least impose weakened representation guarantees for less cohesive groups (Sánchez-Fernández et al., 2017). Similar empirical criticism was also voiced for the justified-representation axioms in approval-based multiwinner voting. For instance, Bredereck et al. 
(2019) find that large cohesive groups do not seem to be very common in their experiments and that, for the preference models they studied, even a randomly chosen committee satisfies EJR and PJR with non-negligible probability. A similar effect was noticed by Szufa et al. (2022), who noted that the more "realistic" of their statistical models seem to have a low "cohesiveness level." An unrelated criticism of proportionality notions such as EJR and PJR is that they cannot be verified in polynomial time: It is coNP-complete to check whether a given committee satisfies EJR (Aziz et al., 2017) or PJR (Aziz et al., 2018). This is a crucial downside in applications in which the proportionality of the outcome needs to be verifiable (e.g., Cevallos and Stewart, 2021; Munagala et al., 2022). The same criticism applies to Aziz and Lee's generalization of PSC to weak preferences (i.e., rankings containing ties): it is coNP-complete to check whether a given committee satisfies the axiom (Aziz and Lee, 2020, Proposition 13). PSC itself, which is only defined for the special case of strict preferences (i.e., rankings without ties), is verifiable in polynomial time. ### Our Contribution In this paper, we propose novel proportionality axioms that address the criticisms described above. Our axioms are (1) _robust_ in the sense that they guarantee proportional representation also to voter groups that do not qualify as "solid" or "cohesive" and (2) _verifiable_ in the sense that it can be checked in polynomial time whether a given committee satisfies the axiom or not. Our axioms are more demanding than existing ones, as they impose strictly more constraints on committees. Importantly, however, we show that these stronger proportionality requirements can be satisfied in all instances. Indeed, we identify voting rules from the literature that always produce committees satisfying our strong requirements. Our results can, therefore, be interpreted as evidence that those rules satisfy proportionality to a high extent. For an overview of the proportionality axioms considered in this paper, we refer to Figure 7 on page 26. In the setting with approval preferences, we propose _EJR+_ as a robust and verifiable strengthening of EJR, together with a simple greedy procedure to compute committees satisfying EJR+. Using randomly generated preference profiles, we demonstrate that EJR+ is a considerably more demanding axiom compared to EJR and other existing axioms. We also observe that established rules such as _Proportional Approval Voting (PAV)_ and the _Method of Equal Shares (MES)_ satisfy EJR+ and that EJR+ can be -- in contrast to EJR and PJR -- efficiently verified. In the setting with ranked preferences, we propose _rank-PJR+_ as a robust strengthening of Dummett's PSC. In contrast to earlier strengthenings of PSC, rank-PJR+ can be efficiently verified even for general weak preferences. We observe that STV violates rank-PJR+. To the best of our knowledge, this establishes rank-PJR+ as the first satisfiable proportionality axiom that separates STV from more sophisticated methods such as the expanding approvals rule (EAR).3 In order to prove that rank-PJR+ can always be satisfied, we extend the notion of priceability (Peters and Skowron, 2020) to the ranked preferences setting and show that EAR satisfies it. Moreover, we use randomly generated preference profiles to show that rank-PJR+ is much more demanding than PSC. 
Footnote 3: Earlier strengthenings of PSC that are violated by STV are either sometimes unsatisfiable (Aziz et al., 2017) or equivalent to PSC in the case of strict preferences (for which STV is defined) (Aziz and Lee, 2020, 2021). Finally, we extend our robustness approach to the proportionality degree (Skowron, 2021) and to two applications that are closely related to multiwinner voting: participatory budgeting (Peters et al., 2021) and querying procedures for civic participation platforms (Halpern et al., 2023). Our paper treats approval preferences and ranked preferences within a unified framework and establishes novel relationships between approval-based and ranking-based axioms. Hence, our work helps to consolidate the literature on the approval-based and ranking-based models, a task explicitly encouraged by Lackner and Skowron (2022, pages 95-96). ### Related Work The study of proportional representation in multiwinner voting has a long tradition, and voting rules aiming to produce proportional committees were proposed long before the first proportionality notions were formalized (see, e.g., the historical notes in the surveys by Tideman (1995), McLean et al. (1996), and Janson (2016)). For ranked preferences, the first and most well-known formal proportionality axiom is the aforementioned _proportionality for solid coalitions (PSC)_, which was introduced by eminent philosopher Sir Michael Dummett (Dummett, 1984). Extensions of PSC to weak rankings were only recently introduced by Aziz and Lee (2020, 2021), who also provided a characterization of committees satisfying PSC (Aziz and Lee, 2022). Aziz et al. (2017b) discussed several extensions or variants of Condorcet consistency to multiwinner voting. One of their notions, local stability, was independently studied by Jiang et al. (2020). In _approval-based_ multiwinner voting (Lackner and Skowron, 2022), proportionality axioms have received a lot of attention in recent years. Starting with the work of Aziz et al. (2017a), who introduced not only EJR but also _core stability_, several papers either generalized these proportionality notions, found new rules satisfying them, or identified new settings to apply them. For instance, PJR was introduced by Sanchez-Fernandez et al. (2017) and subsequently studied by Brill et al. (2017) and Aziz et al. (2018a), the _proportionality degree_ was introduced by Skowron (2021) and further studied by Janeczko and Faliszewski (2022), _individual representation_ was introduced by Brill et al. (2022), and _fully justified representation_ was introduced by Peters and Skowron (2020), who also proposed the _Method of Equal Shares_ and the concept of priceability. Commonly studied formalisms that are closely related to approval-based multiwinner voting include _participatory budgeting_ (Aziz et al., 2018b; Brill et al., 2023; Los et al., 2022; Peters et al., 2021), _proportional rankings_ (Israel and Brill, 2021; Rosenfeld et al., 2022; Skowron et al., 2017), and _public decision-making_ (Freeman et al., 2021; Skowron and Gorecki, 2022). In many cases, axioms like EJR and PJR and voting rules like PAV and MES have been adapted to these related settings. Interestingly, Skowron and Gorecki (2022) motivate their axioms with the goal to "[...] guarantee fair treatment for all groups of voters, not only the cohesive ones." Since they work in a setting with multiple binary issues, their concepts and results do not translate to the multiwinner setting we study. 
Finally, a recent line of work studies approximations of core stability in approval-based multiwinner voting and beyond (Cheng et al., 2019; Jiang et al., 2020; Munagala et al., 2022; Peters and Skowron, 2020). Determining whether the core of an approval-based multiwinner election is always nonempty is considered an important open question (Lackner and Skowron, 2022). ## 2. Preliminaries In this section, we formally introduce the setting and review proportionality axioms and voting rules from the literature. For a natural number \(n\), let \([n]\) denote the set \(\{1,\ldots,n\}\). ### Setting We consider a social choice setting with a finite set \(C=\{c_{1},\ldots,c_{m}\}\) of \(m\) _candidates_ and a finite set \(N=[n]\) of _voters_ who have ordinal preferences over the candidates. Throughout this paper, we assume that preferences have the following form: For each voter \(i\in N\), there is a set \(A_{i}\subseteq C\) of _acceptable candidates_ and a complete and transitive preference relation \(\succeq_{i}\subseteq A_{i}\times A_{i}\) over the acceptable candidates. In other words, \(\succeq_{i}\) is a _weak order_ over \(A_{i}\). We let \(\succ_{i}\) denote the strict part of \(\succeq_{i}\). We assume that voters strictly prefer acceptable candidates to unacceptable ones, and that they are indifferent among unacceptable candidates. For \(A,B\subseteq C\), we write \(A\succeq_{i}B\) (respectively, \(A\succ_{i}B\)) if \(a\succeq_{i}b\) (respectively, \(a\succ_{i}b\)) holds for all \(a\in A\) and \(b\in B\). Further, for any \(c\in A_{i}\) we let \(\operatorname{rank}(i,c)=|\{c^{\prime}\in C\colon c^{\prime}\succ_{i}c\}|+1\) denote the _rank_ voter \(i\) assigns to candidate \(c\). We say that voter \(i\) ranks candidate \(c\) _higher_ than candidate \(c^{\prime}\) if \(c\succ_{i}c^{\prime}\), or, equivalently, \(\operatorname{rank}(i,c)<\operatorname{rank}(i,c^{\prime})\). All unacceptable candidates \(c\in C\setminus A_{i}\) are assigned a rank of \(\operatorname{rank}(i,c)=+\infty\). Besides the general case of weak-order preferences, we consider two important special cases. If \(\succeq_{i}\) is a linear order over \(A_{i}\), we say that voter \(i\) has _strict_ preferences. In this case, there are no ties between acceptable candidates. If, on the other hand, a voter is indifferent among all candidates in \(A_{i}\), we say that the voter has _dichotomous_ preferences. Dichotomous preferences naturally occur when using approval ballots, which is why we also refer to them as _approval preferences_. We use the term _weak preferences_ to refer to the general case, i.e., when preferences are not assumed to be strict or dichotomous. A _preference profile_ \(P=(\succeq_{1},\ldots,\succeq_{n})\) contains the preferences of all voters. Note that, for each voter \(i\), the set \(A_{i}\) can be deduced from \(\succeq_{i}\). We call a preference profile _strict_ if all voters have strict preferences. If all voters have dichotomous preferences, we refer to \(P\) as an _approval profile_ and denote it as \(P=(A_{1},\ldots,A_{n})\). For a given approval profile and a candidate \(c\in C\), we let \(N_{c}=\{i\in N\colon c\in A_{i}\}\) denote the set of approvers of \(c\). We often write the preferences of voters as a strict ranking over indifference classes, omitting unacceptable candidates. The following example illustrates this. 
Example 1.: _Consider the following preference profile with \(m=6\) candidates and \(n=3\) voters._ \[1 :c_{1}\succ\{c_{2},c_{3},c_{4}\}\] \[2 :\{c_{2},c_{3}\}\] \[3 :c_{5}\succ c_{4}\succ c_{3}\] _Here, we have \(A_{1}=\{c_{1},c_{2},c_{3},c_{4}\}\), \(A_{2}=\{c_{2},c_{3}\}\), and \(A_{3}=\{c_{3},c_{4},c_{5}\}\). The ranks that voter 1 assigns to the candidates are given by \(\operatorname{rank}(1,c_{1})=1\), \(\operatorname{rank}(1,c_{2})=\operatorname{rank}(1,c_{3})=\operatorname{rank }(1,c_{4})=2\), and \(\operatorname{rank}(1,c_{5})=+\infty\). Voter 2 has dichotomous preferences and voter 3 has strict preferences._ A _(multiwinner voting) instance_ consists of a set \(N\) of voters, a set \(C\) of candidates, a preference profile \(P\), and a target committee size \(k\leq m\). A _feasible committee_ is any subset \(W\subseteq C\) with \(|W|\leq k\). A _(multiwinner voting) rule_ maps every instance \((N,C,P,k)\) to a non-empty set of feasible committees. We allow a rule to output more than one committee to account for ties and in order to be able to speak about rules such as EAR and STV (see Section 2.5) that come in several different variants. We say that a rule "satisfies" a proportionality notion if and only if, for each instance, _every_ committee in the output of the rule satisfies the respective notion. ### Proportionality Notions for Strict Preferences We now turn to the proportionality notions defined in the literature, starting with the oldest and most prominent setting: multiwinner elections with strict preferences. In his classical work, Dummett (1984) introduced the notion of _Proportionality for Solid Coalitions (PSC)_. To define this property, we first need to define the eponymous solid coalitions. Definition 1 (Solid Coalition).: _Given a strict preference profile, a subset \(N^{\prime}\subseteq N\) of voters forms a solid coalition over a set of candidates \(C^{\prime}\subseteq C\) if \(C^{\prime}\succ_{i}C\setminus C^{\prime}\) for all \(i\in N^{\prime}\)._ Hence, voters in a solid coalition rank all candidates in \(C^{\prime}\) higher than candidates outside of \(C^{\prime}\), but the order among candidates in \(C^{\prime}\) may differ among voters in the coalition. Since such a coalition contains an \((|N^{\prime}|/n)\)-fraction of all voters, PSC requires that at least \(\min\left(|C^{\prime}|,\lfloor(|N^{\prime}|/n)k\rfloor\right)\) candidates from \(C^{\prime}\) are selected.4 Footnote 4: In accordance with the literature on approval-based committee voting, our definition of PSC is based on the so-called _Hare quota_ \(\frac{n}{k}\). Different choices of quota are often discussed; e.g., the _Droop quota_ is given by \(\frac{n}{k+1}\) (Aziz and Lee, 2020). Definition 2 (PSC).: _Given an instance with strict preferences, a feasible committee \(W\) satisfies proportionality for solid coalitions (PSC) if for any subset \(N^{\prime}\subseteq N\) of voters forming a solid coalition over \(C^{\prime}\subseteq C\) and any \(\ell\in\mathbb{N}\) such that \(|N^{\prime}|\geq\ell\frac{n}{k}\) it holds that \(|C^{\prime}\cap W|\geq\min\left(|C^{\prime}|,\ell\right).\)_
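For strict and complete rankings, PSC can be verified in polynomial time: it suffices to consider, for every prefix set \(C^{\prime}\) of some voter's ranking, the maximal solid coalition committed to \(C^{\prime}\), since smaller coalitions only yield weaker requirements. The following minimal Python sketch illustrates such a check under the simplifying assumptions of complete strict rankings and the Hare quota \(n/k\); the function and variable names are purely illustrative.

```python
def satisfies_psc(rankings, committee, k):
    """Check PSC (Definition 2) for complete strict rankings with the Hare quota n/k.

    rankings : list of lists; rankings[i] is voter i's strict ranking of all
               candidates, most preferred first.
    committee: iterable of at most k selected candidates.
    """
    n = len(rankings)
    W = set(committee)
    # Every nonempty solid coalition is committed to a prefix set of some voter.
    prefix_sets = {frozenset(r[:t]) for r in rankings for t in range(1, len(r) + 1)}
    for Cp in prefix_sets:
        # Maximal solid coalition over Cp: voters whose top-|Cp| candidates equal Cp.
        supporters = sum(1 for r in rankings if set(r[: len(Cp)]) == Cp)
        ell = (supporters * k) // n          # largest ell with |N'| >= ell * n / k
        if len(Cp & W) < min(len(Cp), ell):
            return False
    return True

# Toy usage: n = 4 voters, k = 2 seats; half the voters are solidly committed to {a, b}.
profile = [["a", "b", "c", "d"], ["b", "a", "d", "c"],
           ["c", "d", "a", "b"], ["d", "c", "b", "a"]]
print(satisfies_psc(profile, {"a", "c"}, k=2))  # True: every solid coalition gets its share
print(satisfies_psc(profile, {"c", "d"}, k=2))  # False: the {a, b} coalition deserves one seat
```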
2304.05861
Scaled boundary isogeometric analysis with C1 coupling for Kirchhoff plate theory
Although isogeometric analysis exploits smooth B-spline and NURBS basis functions for the definition of discrete function spaces as well as for the geometry representation, the global smoothness in so-called multipatch parametrizations is an issue. In particular, if strong C1 regularity is required, the introduction of function spaces with good convergence properties is not straightforward. However, in 2D there is the special class of analysis-suitable G1 (AS-G1) parametrizations that are suitable for patch coupling. In this contribution we show that the concept of scaled boundary isogeometric analysis fits the AS-G1 idea and is appropriate for defining C1-smooth basis functions. The proposed method is applied to Kirchhoff plates and its capability is demonstrated by means of several numerical examples. Its applicability to non-trivial and trimmed shapes is shown as well.
Jeremias Arf, Mathias Reichle, Sven Klinkel, Bernd Simeon
2023-04-12T13:46:23Z
http://arxiv.org/abs/2304.05861v1
# Scaled boundary isogeometric analysis with \(C^{1}\) coupling for Kirchhoff plate theory ###### Abstract Although isogeometric analysis exploits smooth B-spline and NURBS basis functions for the definition of discrete function spaces as well as for the geometry representation, the global smoothness in so-called multipatch parametrizations is an issue. In particular, if strong \(C^{1}\) regularity is required, the introduction of function spaces with good convergence properties is not straightforward. However, in \(2D\) there is the special class of analysis-suitable \(G^{1}\) (AS-\(G^{1}\)) parametrizations that are suitable for patch coupling. In this contribution we show that the concept of scaled boundary isogeometric analysis fits the AS-\(G^{1}\) idea and is appropriate for defining \(C^{1}\)-smooth basis functions. The proposed method is applied to Kirchhoff plates and its capability is demonstrated by means of several numerical examples. Its applicability to non-trivial and trimmed shapes is shown as well. _Keywords--_ Isogeometric analysis, Analysis-suitable \(G^{1}\) parametrization, Scaled boundary method, Kirchhoff plate theory ## 1 Introduction Isogeometric Analysis (IGA) is a concept which was introduced by Hughes et al. [1] and developed into a widely used and very successful approach within numerical analysis and computational mathematics. It connects the fields of geometry representation in Computer Aided Design (CAD) with the framework of Finite Elements (FE). For both geometry parametrization and discretization space definition, one utilizes B-splines and NURBS (Non-Uniform Rational B-Splines). In IGA we find different built-in features that allow for interesting applications. For example, at least in the so-called single patch case, smoothness of test and ansatz functions is no issue anymore since the underlying B-spline and NURBS basis functions can be chosen with high global regularity. This is an advantage compared to classical FE ansatz spaces, which mostly consist of continuous or discontinuous mappings. Clearly, \(C^{1}\)-smooth FE spaces can also be defined without the ideas of IGA. But, for this regularity requirement and in particular if we look for \(C^{k}\) basis functions where \(k>1\), the definitions become complicated, whereas with splines a regularity increase can be implemented efficiently. This comes along with the \(k\)-refinement ansatz. The latter means a simultaneous degree and regularity elevation step. Such a change of the discretization spaces has no counterpart in standard FE theory. The possibility to vary regularities in IGA has two important implications. On the one hand, IGA is a suitable discretization approach if one deals with high-order problems. On the other hand, increasing the regularity without changing the degree leads to a significant reduction of degrees of freedom although the approximation behaviour changes only slightly. In particular, as long as \(p>r\), where \(p\) stands for the B-spline degree and \(r\) for regularity, the observed convergence rates are basically the same. For more information on IGA we recommend [1, 2, 3]. Unfortunately, if one considers complex geometries, multipatch IGA meshes are necessary, i.e. the computational domain is decomposed into several subdomains that are associated to isogeometric spaces. These patch-wise spaces have to be coupled in order to get highly regular basis functions on the whole domain. And this coupling is in general not trivial. 
Basically, when coupling the patches one faces similar problems as for the coupling of classical FE mesh elements. This means, the smoother the basis functions across patch interfaces, the harder the coupling procedure gets. Even worse, for various geometries locking effects, i.e. bad approximation results, arise if we require strong global \(C^{r}\)-smoothness with \(r\geq 1\); see [4]. Consequently, in the literature one can find the concept of weak patch-coupling to define proper multi-patch IGA spaces; see [5, 6, 7]. But in \(2D\), if one restricts oneself to a special class of parametrizations, the so-called _analysis-suitable \(G^{1}\) parametrizations_, the strong \(C^{1}\) coupling of patches is possible; cf. [4]. Besides, for the latter geometries the problem of \(C^{1}\)-locking can be avoided. Further we note the framework of _polar splines_, see e.g. [8], which is useful to handle disk-like domains and to define globally smooth basis functions. In this article, we combine the approach and the results from the mentioned reference [4] to show that scaled boundary IGA (SB-IGA) is suitable to introduce \(C^{1}\)-smooth basis functions. In SB-IGA we assume the domain to be defined by a scaling of its boundary w.r.t. some given scaling center; see [9, 10, 11]. At first glance, this requires star-convex domains and introduces a singular parametrization mapping. Nevertheless, it fits the fact that in CAD computational domains are often represented via their boundaries, and furthermore we explain in the subsequent parts why it is still useful for non-star-convex geometries. To demonstrate the \(C^{1}\) regularity and capabilities we apply our approach to the problem class of Kirchhoff plates. The related strong PDE formulation is of fourth order and hence in the classical weak formulation second derivatives appear; see [12, 13]. Furthermore, we briefly explain why the proposed scaled boundary meshes (SB-meshes) are convenient when dealing with trimming. The latter domain modification is often applied in IGA and is central if topologically complicated domains appear. We refer to [14] for details regarding trimming. More precisely, the structure of the proposal is the following. We start with a very brief discussion of B-splines and SB-IGA in Sec. 2 to clarify the mathematical notions. We proceed with two sections, Sec. 3 and Sec. 4, dedicated to the coupling problem. We discuss the standard two-patch case as introduced in [4] and in a second section we show the application to SB-meshes. Afterwards, in Sec. 5, we briefly explain generalization possibilities, where we emphasize the application to trimmed geometries. In Sec. 6 we look at several numerical examples in the context of Kirchhoff plate theory. In Sec. 7 we discuss the problem of stability coming along with the singular parametrizations. We close with a short conclusion in Sec. 8. ## 2 B-splines and SB-IGA In this section we want to introduce briefly some basic notions and clarify the SB-IGA ansatz. First, we give a short overview of B-spline functions and B-spline spaces. Following [2, 3] for a short exposition, we call a non-decreasing sequence of real numbers \(\Xi\coloneqq\{\xi_{1}\leq\xi_{2}\leq\cdots\leq\xi_{n+p+1}\}\) for some \(p\in\mathbb{N}\) a _knot vector_, where we assume \(0=\xi_{1}=\cdots=\xi_{p+1},\ \xi_{n+1}=\cdots=\xi_{n+p+1}=1\), and call such knot vectors \(p\)-open. Further the multiplicity of the \(j\)-th knot is denoted by \(m(\xi_{j})\). 
Then the univariate B-spline functions \(\widehat{B}_{j,p}(\cdot)\) of degree \(p\) corresponding to a given knot vector \(\Xi\) are defined recursively by the _Cox-DeBoor formula_: \[\widehat{B}_{j,0}(\zeta)\coloneqq\begin{cases}1,&\text{if }\zeta\in[\xi_{j},\xi_{j+1}), \\ 0,&\text{else},\end{cases} \tag{1}\] and if \(\ p\in\mathbb{N}_{\geq 1}\) we set \[\widehat{B}_{j,p}(\zeta)\coloneqq\frac{\zeta-\xi_{j}}{\xi_{j+p}-\xi_{j}} \widehat{B}_{j,p-1}(\zeta)+\frac{\xi_{j+p+1}-\zeta}{\xi_{j+p+1}-\xi_{j+1}} \widehat{B}_{j+1,p-1}(\zeta), \tag{2}\] where one puts \(0/0=0\) to obtain well-definedness. The knot vector \(\Xi\) without knot repetitions is denoted by \(\{\psi_{1},\ldots,\psi_{N}\}\). The multivariate extension of the last spline definition is achieved by a tensor product construction. In other words, we set for a given tensor knot vector \(\mathbf{\Xi}\coloneqq\Xi_{1}\times\cdots\times\Xi_{d}\), where the \(\Xi_{l}=\{\xi^{l}_{1},\ldots,\xi^{l}_{n_{l}+p_{l}+1}\},\ l=1,\ldots,d\) are \(p_{l}\)-open, and a given _degree vector_ \(\mathbf{p}\coloneqq(p_{1},\ldots,p_{d})\) for the multivariate case \[\widehat{B}_{\mathbf{i},\mathbf{p}}(\mathbf{\zeta})\coloneqq\prod_{l=1}^{d}\widehat{B} _{i_{l},p_{l}}(\zeta_{l}),\ \ \ \ \forall\,\mathbf{i}\in\mathbf{I},\ \ \mathbf{\zeta}\coloneqq(\zeta_{1},\ldots,\zeta_{d}), \tag{3}\] with \(d\) as the underlying dimension of the parametric domain \(\widehat{\Omega}=(0,1)^{d}\) and \(\mathbf{I}\) the multi-index set \(\mathbf{I}\coloneqq\{(i_{1},\ldots,i_{d})\ |\ 1\leq i_{l}\leq n_{l},\ l=1,\ldots,d\}\). B-splines fulfill several properties and for our purposes the most important ones are: * If for all internal knots the multiplicity satisfies \(1\leq m(\xi_{j})\leq m\leq p\), then the B-spline basis functions \(\widehat{B}_{i,p}\) are globally \(C^{p-m}\)-continuous. Therefore we define in this case the regularity integer \(r\coloneqq p-m\). Obviously, by the product structure, we get splines \(\widehat{B}_{\mathbf{i},\mathbf{p}}\) which are \(C^{r_{l}}\)-smooth w.r.t. the \(l\)-th coordinate direction if the internal multiplicities fulfill \(1\leq m(\xi^{l}_{j})\leq m_{l}\leq p_{l},\ r_{l}\coloneqq p_{l}-m_{l},\ \forall l\in\{1,\ldots,d\}\) in the multivariate case. We write in the following \(\mathbf{r}\coloneqq(r_{1},\ldots,r_{d})\) for the regularity vector to indicate the smoothness. In case of \(r_{i}<0\) we have discontinuous splines w.r.t. the \(i\)-th coordinate direction. To emphasize the regularity of the splines later, we introduce an upper index \(r\) and write in the following \(\widehat{B}^{r}_{i,p},\ \widehat{B}^{r}_{\mathbf{i},\mathbf{p}}\), respectively. * For univariate splines \(\widehat{B}^{r}_{i,p}\), \(p\geq 1\), \(r\geq 0\) we have \[\partial_{\zeta}\widehat{B}^{r}_{i,p}(\zeta)=\frac{p}{\xi_{i+p}-\xi_{i}} \widehat{B}^{r-1}_{i,p-1}(\zeta)-\frac{p}{\xi_{i+p+1}-\xi_{i+1}}\widehat{B}^{ r-1}_{i+1,p-1}(\zeta),\] (4) with \(\widehat{B}^{r-1}_{1,p-1}(\zeta)\coloneqq\widehat{B}^{r-1}_{n+1,p-1}(\zeta)\coloneqq 0\). * The support of the spline \(\widehat{B}^{r}_{i,p}\) is contained in the interval \([\xi_{i},\xi_{i+p+1}]\). * We have the partition of unity property \(\sum_{i}\widehat{B}^{r}_{i,p}=1\). 
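For illustration, the recursion (1)-(2) can be implemented directly. The following Python sketch is a minimal, unoptimized rendering of the definition, using 0-based indexing and the convention \(0/0=0\) stated above; it is not the implementation behind the numerical examples reported later.

```python
def bspline(j, p, knots, zeta):
    """Univariate B-spline B_{j,p}(zeta) via the Cox-DeBoor recursion (0/0 := 0).
    `knots` is a p-open knot vector; j is 0-based, j = 0, ..., n-1 with
    n = len(knots) - p - 1."""
    if p == 0:
        return 1.0 if knots[j] <= zeta < knots[j + 1] else 0.0
    def ratio(num, den):
        return 0.0 if den == 0.0 else num / den
    left = ratio(zeta - knots[j], knots[j + p] - knots[j])
    right = ratio(knots[j + p + 1] - zeta, knots[j + p + 1] - knots[j + 1])
    return left * bspline(j, p - 1, knots, zeta) + right * bspline(j + 1, p - 1, knots, zeta)

# p-open knot vector with one interior knot (p = 2, hence n = 4 basis functions)
Xi = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
p = 2
n = len(Xi) - p - 1
vals = [bspline(j, p, Xi, 0.3) for j in range(n)]
print(sum(vals))  # partition of unity: 1.0 (up to rounding) for zeta in [0, 1)
```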
The space spanned by all univariate splines \(\widehat{B}^{r}_{i,p}\) corresponding to a given knot vector, degree \(p\), and global regularity \(r\) is denoted by \[S^{r}_{p}\coloneqq\text{span}\{\widehat{B}^{r}_{i,p}\ |\ i=1,\ldots,n\}.\] For the multivariate case one can define the spline space as the product space \[S^{r_{1},\ldots,r_{d}}_{p_{1},\ldots,p_{d}}\coloneqq S^{r_{1}}_{p_{1}}\otimes \cdots\otimes S^{r_{d}}_{p_{d}}=\text{span}\{\widehat{B}^{r}_{\mathbf{i},\mathbf{p }}\ |\ \mathbf{i}\in\mathbf{I}\}\] of proper univariate spline spaces. To obtain more flexibility it could be useful to introduce a strictly positive weight function \(W=\sum_{\mathbf{i}}w_{\mathbf{i}}\widehat{B}^{r}_{\mathbf{i},\mathbf{p}}\in S^{r_{1},\ldots,r_{d}}_{p_{1},\ldots,p_{d}}\) and use NURBS functions \(\widehat{N}^{r}_{\mathbf{i},\mathbf{p}}\coloneqq w_{\mathbf{i}}\,\widehat{B}^{r}_{\mathbf{i},\mathbf{p}}/W\); the corresponding NURBS space is denoted by \(N^{r_{1},\ldots,r_{d}}_{p_{1},\ldots,p_{d}}\coloneqq\text{span}\{\widehat{N}^{r}_{\mathbf{i},\mathbf{p}}\ |\ \mathbf{i}\in\mathbf{I}\}\), with \(N^{r}_{p}\) in the univariate case. The geometry is described by a parametrization \(\mathbf{F}\colon\widehat{\Omega}\to\Omega\) whose components are linear combinations of such NURBS basis functions with control point coefficients. The image of the parametric mesh \(\widehat{M}\), which is induced by the knot vectors, under the mapping \(\mathbf{F}\), i.e. \(\mathcal{M}\coloneqq\{\mathbf{F}(K)\mid K\in\widehat{M}\}\), gives us a mesh structure in the physical domain. By inserting knots without changing the parametrization we can refine the mesh, which is the concept of \(h\)-refinement; see [1, 2]. For a mesh \(\mathcal{M}\) we can introduce the global mesh size through \(h\coloneqq\max\{h_{K}\mid K\in\widehat{M}\}\), where for \(K\in\widehat{M}\) we denote with \(h_{K}\coloneqq\mathrm{diam}(K)\) the _element size_ and \(\widehat{M}\) is the underlying parametric mesh. The underlying idea of SB-IGA takes into account that in CAD applications the computational domain is often represented by means of its boundary. As long as the region of interest is star-shaped we can choose a scaling center \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) and the domain is then defined by a scaling of the boundary w.r.t. \(\mathbf{x}_{0}\). In the planar case, which is the one we focus on here, and in view of the isogeometric analysis we have some boundary NURBS curve \(\gamma(\zeta)=\sum_{i}\mathbf{C}_{i}\ \widehat{N}^{r}_{i,p}(\zeta),\ \mathbf{C}_{i}\in \mathbb{R}^{2}\) and define the SB-parametrization of \(\Omega\) through \[\mathbf{F}\colon\widehat{\Omega}\coloneqq(0,1)^{2}\to\Omega,\ (\zeta,\xi) \mapsto\xi\ \big{(}\gamma(\zeta)-\mathbf{x}_{0}\big{)}+\mathbf{x}_{0}\quad(\text{see Fig. 1}).\] Figure 1: Within SB-IGA a boundary description is used. However, a suitable scaling center is required. Depending on the situation it might be useful to allow for more flexibility and we replace in this article the prefactor \(\xi\) by a degree \(1\) polynomial of the form \(q(\xi)\coloneqq c_{1}\xi+c_{2},\ c_{1}>0,\ c_{2}\geq 0\), i.e. \[\mathbf{F}(\zeta,\xi)=q(\xi)\big{(}\gamma(\zeta)-\mathbf{x}_{0}\big{)}+\mathbf{x}_{0}. \tag{5}\] By the linearity w.r.t. the second parameter \(\xi\) we can assume for \(\Omega\subset\mathbb{R}^{2}\) that \(\mathbf{F}\in\big{[}N^{r}_{p}\otimes S^{r}_{p}\big{]}^{2}\). In particular, the weight function depends only on \(\zeta\). In other words, there are so-called control points \(\mathbf{C}_{i,j}\in\mathbb{R}^{2}\) associated to the NURBS \((\zeta,\xi)\mapsto\widehat{N}^{r}_{i,p}(\zeta)\widehat{B}^{r}_{j,p}(\xi)\in N^ {r}_{p}\otimes S^{r}_{p}\) which define \(\mathbf{F}\), namely \[\mathbf{F}(\zeta,\xi)=\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}\mathbf{C}_{i,j}\ \widehat{N}^{r}_{i,p}(\zeta)\widehat{B}^{r}_{j,p}(\xi).\] For reasons of simplification we suppose equal degree and regularity w.r.t. each coordinate direction. 
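To make the scaled boundary ansatz (5) concrete, the following small Python sketch evaluates \(\mathbf{F}(\zeta,\xi)=q(\xi)\,(\gamma(\zeta)-\mathbf{x}_{0})+\mathbf{x}_{0}\) for a boundary curve passed as a plain function; the quarter-circle curve used below is only an assumed toy example, not one of the geometries considered later in the paper.

```python
from math import cos, sin, pi

def sb_map(gamma, x0, zeta, xi, c1=1.0, c2=0.0):
    """Scaled boundary parametrization F(zeta, xi) = q(xi) * (gamma(zeta) - x0) + x0
    with the linear scaling factor q(xi) = c1 * xi + c2, cf. eq. (5)."""
    gx, gy = gamma(zeta)
    q = c1 * xi + c2
    return (q * (gx - x0[0]) + x0[0], q * (gy - x0[1]) + x0[1])

# Illustrative boundary curve: a quarter circle of radius 1 around the scaling center (0, 0).
gamma = lambda zeta: (cos(0.5 * pi * zeta), sin(0.5 * pi * zeta))
x0 = (0.0, 0.0)
print(sb_map(gamma, x0, zeta=0.5, xi=1.0))  # xi = 1 gives a point on the boundary curve
print(sb_map(gamma, x0, zeta=0.5, xi=0.0))  # xi = 0 collapses to the scaling center x0
```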
Due to the SB ansatz we obtain in the physical domain \(\Omega\) layers of control points and it is \(\mathbf{C}_{1,1}=\mathbf{C}_{2,1}=\cdots=\mathbf{C}_{n_{1},1}\); cf. Fig. 2. Figure 2: The mesh and corresponding control net for a simple SB parametrization. Here we have \(p=3,r=1\) for the underlying NURBS definition. One notes the usage of B-splines without weight function for the scaling direction \(\xi\). _Remark 1_.: The structure of the domain and the control points in Fig. 2 are reminiscent of the mentioned _polar splines_ framework; see [8]. In fact, the idea of degenerating an edge of the standard parametric domains is analogous to the SB-IGA ansatz followed in this article. In some sense, one can interpret the polar spline approach as a special case of a scaled boundary representation. But we emphasize that there are also several differences between the mentioned polar spline approach and the content of this publication. First of all, we do not work with periodic spline functions and the boundary of the computational domain can be a non-smooth curve. Secondly, our treatment of the scaling center basis functions differs, namely, we always use the original B-spline functions from the non-degenerate parametric domain, whereas in [8] triangular Bernstein polynomials are considered. In particular, we do not construct polar spline basis functions. Besides, in the subsequent parts we look at the coupling of different SB-parametrizations as well as at the coupling of different star-shaped blocks. This leads in general to geometries that cannot be treated directly with polar splines. The isogeometric test functions, the starting point for discretization methods, are then defined as the push-forwards of the NURBS, namely \[\mathcal{V}_{h}=\mathcal{V}_{h}(r,p)\coloneqq\{\phi\ |\ \phi\circ\mathbf{F}\in N _{p}^{r}\otimes S_{p}^{r}\}.\] If the domain boundary \(\partial\Omega\) is composed of different curves \(\gamma^{(m)}\), one defines parametrizations for each curve as written above and we are in the field of multipatch geometries; cf. Fig. 3. To be more precise, for a \(n\)-patch geometry we have \[\bigcup_{m=1,\ldots,n}\overline{\Omega_{m}}=\overline{\Omega},\ \ \Omega_{k}\cap\Omega_{l}=\emptyset\ \text{if}\ k\neq l,\quad\mathbf{F}_{m}\colon\widehat{\Omega}\to\Omega_{m},\ \mathbf{F}_{m}\in\big{[}N_{p}^{r}\otimes S_{p}^{r}\big{]}^{2} \tag{6}\] and \(\mathbf{F}_{m}\) is defined analogously to (5). Unfortunately, especially if the curves do not meet \(G^{1}\)-smoothly but high regularity of the corresponding IGA test functions is required, the coupling gets involved. IGA spaces in the multipatch framework are straightforwardly defined as \[\mathcal{V}_{h}^{M}\coloneqq\{\phi\colon\Omega\to\mathbb{R}\ |\phi_{|\Omega_{m}}\in \mathcal{V}_{h}^{(m)},\ \forall m\},\] where \(\mathcal{V}_{h}^{(m)}\) denotes the IGA space corresponding to the \(m\)-th patch, to \(\mathbf{F}_{m}\), respectively. For all the patch coupling considerations we make the following assumption. **Assumption 1** (Regular patch coupling).: * _In each patch we use NURBS and B-splines with the same degree_ \(p\) _and regularity_ \(r\)_._ * _The control points at interfaces match, meaning the control points of meeting patches coincide at the respective interface._ Thus it is justified to write for the set of parametric basis functions in the \(m\)-th patch \[\{\widehat{N}_{i,p}^{r}\cdot\widehat{B}_{j,p}^{r}\ |\ 1\leq i\leq n_{1}^{(m)}, \ 1\leq j\leq n_{2}\},\] for proper \(n_{1}^{(m)},\ n_{2}\in\mathbb{N}_{>1}\). 
The main part of the article is dedicated to the \(C^{1}\) coupling of such SB-IGA patches, i.e. we face spaces of the form \[\mathcal{V}_{h}^{M,1}\coloneqq\mathcal{V}_{h}^{M}\cap C^{1}(\overline{\Omega}), \tag{7}\] where the singularity of the \(\mathbf{F}_{i}\) at \(\mathbf{x}_{0}\) for \(c_{2}=0\) requires special attention. Figure 3: The boundary can be determined by the concatenation of several curves. In this situation the patch-wise defined discrete function spaces have to be coupled in order to obtain the wanted global smoothness. ## 3 Classical planar two-patch coupling For reasons of simplification, we restrict ourselves until Sect. 5 to the situation of two-patch geometries. The aspects of coupling can be straightforwardly generalized to three or more patches. First, before we come back to SB-IGA we turn towards the simple case of a classical two-patch parametrization, meaning there are no singular points for the \(\mathbf{F}_{m}\). For such a situation we explain the needed conditions for a \(C^{1}\) coupling and exploit the notion of analysis-suitable \(G^{1}\) planar parametrizations. Here we mainly focus on the results and the framework of [4]. The setting is now the following. Assume that we have two mappings \(\mathbf{F}^{(S)},\;\;S\in\{L,R\}\) corresponding to the left and right side of a two-patch situation as displayed in Fig. 4. In more detail, we have \[\mathbf{F} \in C^{0}(\overline{\Omega};\mathbb{R}^{2}),\;\overline{\Omega} \coloneqq\overline{\widehat{\Omega}^{(L)}\cup\widehat{\Omega}^{(R)}},\;\;\text{ as global parametrization, i.e.}\] \[\mathbf{F}_{|\widehat{\Omega}^{(S)}} =\mathbf{F}^{(S)}\colon\widehat{\Omega}^{(S)}\to\Omega^{(S)},\; \;\text{with}\;\;\mathbf{F}^{(S)}\in C^{1}(\overline{\widehat{\Omega}^{(S)}}; \mathbb{R}^{2})\;\;\text{and}\] \[\widehat{\Omega}^{(L)} =(-1,0)\times(0,1),\quad\widehat{\Omega}^{(R)}=(0,1)\times(0,1).\] One notes the colors in the mentioned Fig. 4, which indicate the parametrization orientation used here and in the following. Further we assume that the \(\mathbf{F}^{(S)}\) are diffeomorphisms and write \(\widehat{\Gamma}=\mathbf{F}^{-1}(\Gamma)\coloneqq\{(0,\xi)\;|\xi\in[0,1]\}\) for the interface. And, using the notation from the previous section, we can write \(\mathbf{F}_{1}\coloneqq\mathbf{F}^{(L)}(\cdot-1,\cdot)\), \(\mathbf{F}_{2}\coloneqq\mathbf{F}^{(R)}\), i.e. there is a shift in the parametric domain. In view of isogeometric analysis we suppose two functions \(g^{(L)}\in C^{1}(\Omega^{(L)})\), \(g^{(R)}\in C^{1}(\Omega^{(R)})\) that can be combined to a continuous mapping \(g\colon\Omega\to\mathbb{R}\) defined by \(g_{|\Omega^{(S)}}=g^{(S)}\) and we want to know which properties of the \(g^{(S)}\) lead to a \(C^{1}\)-regular \(g\). The latter is the case if and only if the graph \(\mathcal{G}\coloneqq\{(\mathbf{F}(\boldsymbol{\zeta}),g(\mathbf{F}(\boldsymbol {\zeta})))\;|\;\boldsymbol{\zeta}\in\widehat{\Omega}\}\) as a surface in \(3D\) has well-defined tangent planes along the patch interface. This condition can be formulated by means of auxiliary coefficient functions \(\alpha^{(S)},\;\beta\) as written down in the next lemma which uses Proposition 2 and Definition 2 of [4]. **Lemma 1** (\(C^{1}\) regularity).: _Let \(\hat{g}\colon\tilde{\Omega}\to\mathbb{R}\) be a continuous function with \(\hat{g}^{(S)}\coloneqq\hat{g}_{|\widehat{\Omega}^{(S)}}\in C^{1}(\overline{ \widehat{\Omega}^{(S)}})\) and let \(g\coloneqq\hat{g}\circ\mathbf{F}^{-1}\colon\Omega\to\mathbb{R}\). 
Then \(g\in C^{1}(\Omega)\) if and only if there exist mappings \(\;\alpha^{(L)},\;\alpha^{(R)},\;\beta\colon[0,1]\to\mathbb{R}\;\;\text{s.t.}\; \forall\xi\in[0,1]\):_ \[\alpha^{(L)}(\xi)\,\alpha^{(R)}(\xi)>0\quad\text{and}\] \[\alpha^{(R)}(\xi)\,\begin{bmatrix}\partial_{\zeta}\mathbf{F}^{(L)} (0,\xi)\\ \partial_{\zeta}\hat{g}^{(L)}(0,\xi)\end{bmatrix}-\alpha^{(L)}(\xi)\,\begin{bmatrix} \partial_{\zeta}\mathbf{F}^{(R)}(0,\xi)\\ \partial_{\zeta}\hat{g}^{(R)}(0,\xi)\end{bmatrix}+\beta(\xi)\,\begin{bmatrix} \partial_{\xi}\mathbf{F}^{(R)}(0,\xi)\\ \partial_{\xi}\hat{g}^{(R)}(0,\xi)\end{bmatrix}=\boldsymbol{0}. \tag{8}\] Proof.: This follows directly from Definition 2 and Proposition 2 in [4]. Note that in (8) the last term of the sum on the left side does not change if \(R\) is replaced by \(L\), due to the assumed continuity of \(\mathbf{F}\) and \(g\). Figure 4: A planar non-degenerate two-patch domain. In the IGA context, one considers cases like \(\hat{g}^{(L)}(\cdot-1,\cdot)\in N^{r,r}_{p,p}\) and \(\hat{g}^{(R)}\in N^{r,r}_{p,p}\). More precisely, the standard two-patch isogeometric spaces without coupling conditions are defined in the parametric domain through \[\tilde{\mathcal{V}}^{M}_{h}\coloneqq\{\hat{\phi}\colon\tilde{\Omega}\to \mathbb{R}\ |\ \hat{\phi}_{|\widehat{\Omega}^{(L)}}(\cdot-1,\cdot)\in N^{r,r}_{p,p},\ \ \hat{\phi}_{|\widehat{\Omega}^{(R)}}\in N^{r,r}_{p,p}\}\] and in the physical domain \(\mathcal{V}^{M}_{h}:=\tilde{\mathcal{V}}^{M}_{h}\circ\mathbf{F}^{-1}\). Although there is a clear criterion for \(C^{1}\) regularity, the actual calculation of test functions meeting the conditions within the scope of numerical methods is in general not trivial. Especially if test functions are defined separately for each patch, like for the isogeometric spaces, and a suitable global \(C^{1}\)-smooth linear combination of the test functions is sought, one can observe bad approximation properties and a loss of convergence in different situations. This is a reason why in [4] a class of special parametrizations is introduced, leading to optimal convergence of the isogeometric spline spaces under \(h\)-refinement, namely the so-called analysis-suitable \(G^{1}\) parametrizations. **Definition 1** (Analysis-suitable \(G^{1}\) parametrizations; see [4]).: _Assume there are polynomial functions \(\alpha^{(S)},\ \beta^{(S)}\colon[0,1]\to\mathbb{R}\) of degree at most \(1\) s.t._ \[\alpha^{(R)}(\xi)\,\partial_{\zeta}\boldsymbol{F}^{(L)}(0,\xi)-\alpha^{(L)}( \xi)\,\partial_{\zeta}\boldsymbol{F}^{(R)}(0,\xi)+\beta(\xi)\,\partial_{\xi} \boldsymbol{F}^{(R)}(0,\xi)=\boldsymbol{0},\ \forall\xi, \tag{9}\] _with \(\beta=\alpha^{(L)}\,\beta^{(R)}-\alpha^{(R)}\,\beta^{(L)}\). Then the parametrization \(\boldsymbol{F}\) is called analysis-suitable \(G^{1}\)._ As already mentioned, the function spaces of interest are \(C^{1}\)-regular spline spaces \[\mathcal{V}^{M,1}_{h}\coloneqq\mathcal{V}^{M}_{h}\cap C^{1}(\Omega). \tag{10}\] **Lemma 2**.: _Let the parametrization be AS-\(G^{1}\) and let \(p>r+1>1\) for the underlying B-splines. Further, assume the patch coupling to be regular in the sense of Assumption 1 and let us assume only B-spline basis functions in the parametric domain, i.e. constant weight functions \(W\). Then in numerical applications the asymptotic convergence behaviour of the coupled spaces \(\mathcal{V}^{M,1}_{h}\) conforms to the optimal approximation rates. 
In other words, AS-\(G^{1}\) parametrizations do not suffer from order reductions, \(C^{1}\) locking, respectively._ Proof.: Compare Theorem 1 in [4]. An important point is the appearance of \(C^{1}\) locking if we choose \(p=r+1\) even if \(\mathbf{F}\) is AS-\(G^{1}\), as shown by Theorem 2 in [4]. After the consideration of the regular two-patch case in \(2D\) we now face the coupling problem for SB-parametrizations. ## 4 Planar SB-IGA with \(C^{1}\) coupling Here we show that planar SB-IGA parametrizations are quasi analysis-suitable, except at the scaling center. This is important to obtain good convergence properties for the \(C^{1}\)-coupled test functions. The problem with the singular point is addressed in the subsequent second subsection. ### SB-parametrization as quasi AS-\(G^{1}\) Analogous to the classical planar two-patch case we first look at two scaled boundary patches as displayed in Fig. 5. This means we have constants \(c_{1}>0,\ c_{2}\geq 0\) with two boundary curves \(\gamma^{(S)}\) and a common scaling center \(\boldsymbol{x}_{0}\) and the patch parametrizations are of the form \[\mathbf{F}^{(S)}(\zeta,\xi)=(c_{1}\xi+c_{2})\ (\gamma^{(S)}(\zeta)-\boldsymbol{x}_{0} )+\boldsymbol{x}_{0}. \tag{11}\] **Assumption 2**.: _Here and in the rest of the article we assume that the boundary curves \(\gamma^{(S)}\) are parametrized in a strong \(C^{1}\) sense and it is_ \[\mathbf{0}\neq\partial_{\zeta}\gamma^{(L)}(0),\ \ \mathbf{0}\neq\partial_{\zeta} \gamma^{(R)}(0),\ \ \gamma^{(L)}(0)=\gamma^{(R)}(0).\] _Moreover, we assume that the boundary curves are chosen in such a way that for every \(\delta>0\), it is \(\textbf{F}^{(S)}_{|\widehat{\Omega}^{(S)}\cap\widehat{\Omega}_{\delta}}\in C^ {1}(\overline{\widehat{\Omega}^{(S)}\cap\widehat{\Omega}_{\delta}}),\ \widehat{\Omega}_{\delta}\coloneqq\widehat{\Omega}\setminus\{(\zeta,\xi)\ |\ \zeta\in[-1,1],\ \xi\in[0, \delta]\}\). Further, the restriction \(\textbf{F}^{(S)}_{|\widehat{\Omega}^{(S)}\cap\widehat{\Omega}_{\delta}}\) defines a diffeomorphism._ **Lemma 3** (SB-IGA patch coupling as quasi AS-\(G^{1}\)).: _For \(c_{2}>0\) the two-patch parametrization (11) is AS-\(G^{1}\). And for \(c_{2}=0\) the parametrization is AS-\(G^{1}\) except at the scaling center \(\boldsymbol{x}_{0}\). In other words, the condition in Definition 1 is fulfilled._ Proof.: We show the assertion for the interesting case \(c_{2}=0\). If \(c_{2}>0\) similar proof steps can be used. First let \(\delta>0\) and \(\partial_{\zeta}\gamma^{(R)}(0)\not\parallel\partial_{\zeta}\gamma^{(L)}(0)\). Obviously, \(\textbf{F}_{|\widehat{\Omega}_{\delta}}\) is globally continuous. In view of (11) and Assumption 2 there are \(d_{1},\ d_{2}\in\mathbb{R}\) with \(d_{1}\ \partial_{\zeta}\gamma^{(L)}(0)+d_{2}\ \partial_{\zeta}\gamma^{(R)}(0)=\gamma^{(L)}(0)- \boldsymbol{x}_{0}\). By the orientation of the parametrization as elucidated in Fig. 6 it has to be \(d_{1}\neq 0,\ d_{2}\neq 0\) and \(d_{1}\,d_{2}<0\). Namely, for the set \(\mathcal{S}=\{a_{1}\partial_{\zeta}\gamma^{(L)}(0)+a_{2}\partial_{\zeta}\gamma^ {(R)}(0)\ |\ a_{1}>0,\,a_{2}>0\}\) we see \(\gamma^{(L)}(0)-\boldsymbol{x}_{0}\notin\mathcal{S}\) and \(-\gamma^{(L)}(0)+\boldsymbol{x}_{0}\notin\mathcal{S}\). Set \(\alpha^{(R)}\coloneqq d_{1},\ \alpha^{(L)}\coloneqq-d_{2}\) and \(\beta\coloneqq\frac{c_{1}\xi+c_{2}}{-c_{1}}\). 
Then: \[\alpha^{(R)}\partial_{\zeta}\textbf{F}^{(L)}(0,\xi)-\alpha^{(L)}\partial_{ \zeta}\textbf{F}^{(R)}(0,\xi)+\beta(\xi)\,\partial_{\xi}\textbf{F}^{(L)}(0,\xi)\] \[=d_{1}\ (c_{1}\xi+c_{2})\ \partial_{\zeta}\gamma^{(L)}(0)+d_{2}\ (c_{1}\xi+c_{2})\ \partial_{\zeta}\gamma^{(R)}(0)+\frac{c_{1}\xi+c_{2}}{-c_{1}}\,c_{1}\,( \gamma^{(L)}(0)-\boldsymbol{x}_{0})\] \[=(c_{1}\xi+c_{2})\ \big{[}d_{1}\ \partial_{\zeta}\gamma^{(L)}(0)+d_{2}\ \partial_{\zeta} \gamma^{(R)}(0)-\ (\gamma^{(L)}(0)-\boldsymbol{x}_{0})\big{]}=0.\] Then setting \(\beta^{(L)}\coloneqq\frac{-\beta}{2\,\alpha^{(R)}},\ \beta^{(R)}\coloneqq\frac{\beta}{2\,\alpha^{(L)}}\) we get \(\beta=\alpha^{(L)}\beta^{(R)}-\alpha^{(R)}\beta^{(L)}\). This finishes the proof for \(\partial_{\zeta}\gamma^{(R)}(0)\not\parallel\partial_{\zeta}\gamma^{(L)}(0)\). Now let \(\partial_{\zeta}\gamma^{(R)}(0)=c\ \partial_{\zeta}\gamma^{(L)}(0)\) for \(c\neq 0\). It is easy to check that it has to be \(c>0\) otherwise we can not have \(\textbf{F}^{(S)}_{|\widehat{\Omega}^{(S)}\cap\widehat{\Omega}_{\delta}}\in C ^{1}(\overline{\widehat{\Omega}^{(S)}\cap\widehat{\Omega}_{\delta}})\). Then with \(\alpha^{(R)}\coloneqq c,\ \alpha^{(L)}\coloneqq 1,\ \beta=0\) one gets \[\alpha^{(R)}\partial_{\zeta}\textbf{F}^{(L)}(0,\xi)-\alpha^{(L)} \partial_{\zeta}\textbf{F}^{(R)}(0,\xi)+\beta\,\partial_{\xi}\textbf{F}^{(L)}(0,\xi)\] \[=c\ (c_{1}\xi+c_{2})\ \partial_{\zeta}\gamma^{(L)}(0)-(c_{1}\xi+c_{2}) \ \partial_{\zeta}\gamma^{(R)}(0)\] \[=c\,(c_{1}\xi+c_{2})\ \big{(}\partial_{\zeta}\gamma^{(L)}(0)-\ \partial_{\zeta}\gamma^{(L)}(0)\big{)}=0.\] Figure 5: Two-patch situation for SB-IGA. Lemma 2 and Lemma 3 suggest that \(C^{1}\) coupling for SB-IGA planar parametrizations leads to adequate approximation results of the underlying isogeometric spaces. In the paper [4] an important property of the AS-\(G^{1}\) geometries is the large enough interface spaces of traces and transversal directional derivatives, see (27) and (28) in [4]. More precisely, for the SB-IGA two-patch case we look at the space \[\widehat{\mathcal{V}}_{h,\Gamma}\coloneqq\{\xi\mapsto[\phi,\nabla\phi\cdot \mathbf{d}]\circ\mathbf{F}^{(R)}(0,\xi)\ |\ \phi\in\mathcal{V}_{h}^{M,1}\},\] where \(\mathbf{d}\) is the transversal vector at the interface defined through \[\mathbf{d}\circ\mathbf{F}^{(R)}(0,\xi)=\frac{1}{\alpha^{R}}\Big{[}\partial_{ \zeta}\mathbf{F}^{(R)}(0,\xi)-\beta^{(R)}(\xi)\partial_{\xi}\mathbf{F}^{(R)}( 0,\xi)\Big{]}\ ;\ \ \text{cf. (\ref{eq:23}) in \@@cite[cite]{[\@@bibref{}{HJ}{}{}]}}.\] Note that we can think of constant \(\alpha^{(S)}\) due to the proof of Lemma 3. Now we want to explain, why for SB-IGA with NURBS boundary curves the corresponding interface space still contains enough elements, at least away from the scaling center. The problem with the singularity is then faced in the subsequent section. **Lemma 4**.: _Let \(p>r+1>1\) and \(j>2,\ l>2\). Then \(\widehat{B}^{r+1}_{j,p}\times\widehat{B}^{r}_{l,p}\in\widehat{\mathcal{V}}_{h,\Gamma}\). Moreover, if \(c_{2}=0\) in (11) and if the \(\mathbf{F}^{(S)}\in C^{1}(\overline{\widehat{\Omega}^{(S)}})\) have a smooth inverse, it holds \(S_{p}^{r+1}\times S_{p}^{r}\subset\widehat{\mathcal{V}}_{h,\Gamma}\)._ Proof.: We follow the proof idea from [4]. First, let \(c_{2}=0\) and assume a singular parametrization. By the properties of the NURBS, it is straightforward to see that there are functions \(\hat{c}^{(S)}\in N_{p}^{r}\) with \(\hat{c}^{(S)}(\zeta)=\zeta+\mathcal{O}(\zeta^{2})\). 
Let \(j>2\), \(l>2\) and define \[\hat{g}^{(S)}(\zeta,\xi)\coloneqq\widehat{B}^{r+1}_{j,p}(\xi)+\big{[}\beta^{ (S)}(\xi)\partial_{\xi}\widehat{B}^{r+1}_{j,p}(\xi)+\alpha^{(S)}\widehat{B}^{r }_{l,p}(\xi)\big{]}\ \hat{c}^{(S)}(\zeta)\] Obviously, the composed mapping \(\hat{g}\), i.e. \(\hat{g}_{|\widehat{\Omega}^{(S)}}=\hat{g}^{(S)}\) is continuous and except of a shift in the parameter \(\zeta\) they are feasible NURBS basis functions in \(N_{p}^{r}\otimes S_{p}^{r}\). Moreover, one has \(\hat{g}^{(S)}(\zeta,\xi)\in O(\xi^{2})\) and in view of the considerations in Sec. 4.2 this implies \(\hat{g}_{|\widehat{\Omega}^{(S)}}\circ(\mathbf{F}^{(S)})^{-1}\in C^{1}( \overline{\Omega^{(S)}})\). The \(C^{1}\) regularity of \(g=\hat{g}\circ\mathbf{F}^{-1}\) across the interface follows by Lemma 1, meaning \(g\in\mathcal{V}_{h}^{M,1}\). And using (28)-(29) from [4] we can conclude \([g,\nabla g\cdot\mathbf{d}]\circ\mathbf{F}^{(R)}(0,\xi)=[\widehat{B}^{r+1}_{j,p}(\xi),\widehat{B}^{r}_{l,p}(\xi)]\). Basically, an analogous argumentation yields the second part of the assertion. In view of the Theorem 1 in [4], the space \(\mathcal{V}_{h}^{M,1}\) seems appropriate for approximations away from the scaling center. From now on, we concentrate on the more interesting case with a singular parametrization, i.e. we assume that we have the scaling factor \(q(\xi)=\xi\); cf. (5). ### Approximation in the scaling center Clearly, a special consideration of the behavior near the scaling center is needed. But a simple calculation shows that only such splines may cause problems at the scaling center which have non-vanishing values or derivatives in points \((\zeta,0)\). For this purpose assume w.l.o.g. \(\boldsymbol{x}_{0}=\mathbf{0},\ \mathbf{F}_{m}(\zeta,\xi)=\xi\ \gamma(\zeta)\) and let \[\hat{\phi}\in\text{span}\{\widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p}\ |\ j\geq 3,r\geq 1\},\] Figure 6: The orientation of the parametrization leads to \(d_{1}\,d_{2}<0\). i.e. we have a \(C^{1}\) function with \(\hat{\phi}=\partial_{\xi}\hat{\phi}=\partial_{\zeta}\hat{\phi}=0\) on \(\{(\zeta,0)\ |\ \zeta\in[0,1]\}\); cf. (4). Now we check that the push-forward \(\phi:=\hat{\phi}\circ\mathbf{F}_{m}^{-1}\) has a well-defined value and derivatives in \(\mathbf{x}_{0}\). The former is obvious and we concentrate on the derivatives. The chain rule yields \[\begin{bmatrix}\partial_{\zeta}\hat{\phi}(\zeta,\xi)\\ \partial_{\xi}\hat{\phi}(\zeta,\xi)\end{bmatrix}=\begin{bmatrix}\xi\partial_{ \zeta}\gamma_{1}(\zeta)&\xi\partial_{\zeta}\gamma_{2}(\zeta)\\ \gamma_{1}(\zeta)&\gamma_{2}(\zeta)\end{bmatrix}\begin{bmatrix}\partial_{x} \phi\circ\mathbf{F}_{m}(\zeta,\xi)\\ \partial_{y}\phi\circ\mathbf{F}_{m}(\zeta,\xi)\end{bmatrix}, \tag{12}\] where \(\gamma=(\gamma_{1},\gamma_{2})\). If we consider the derivatives away from the singular point, we get by assumption that the matrix on the right-hand side above is invertible, i.e. \[0\neq d(\zeta)\coloneqq\partial_{\zeta}\gamma_{1}(\zeta)\ \gamma_{2}(\zeta)- \partial_{\zeta}\gamma_{2}(\zeta)\ \gamma_{1}(\zeta).\] Hence, it is \[\begin{bmatrix}\partial_{x}\phi\circ\mathbf{F}_{m}(\zeta,\xi)\\ \partial_{y}\phi\circ\mathbf{F}_{m}(\zeta,\xi)\end{bmatrix}=\frac{1}{\xi\ d( \zeta)}\begin{bmatrix}\gamma_{2}(\zeta)&-\xi\partial_{\zeta}\gamma_{2}(\zeta )\\ -\gamma_{1}(\zeta)&\xi\partial_{\zeta}\gamma_{1}(\zeta)\end{bmatrix}\begin{bmatrix} \partial_{\zeta}\hat{\phi}(\zeta,\xi)\\ \partial_{\xi}\hat{\phi}(\zeta,\xi)\end{bmatrix}. 
\tag{13}\] By linearity and the definition of the B-spline basis functions we can suppose w.l.o.g. \(\hat{\phi}(\zeta,\xi)=\widehat{N}(\zeta)\ \xi^{2}\), for a suitable \(\widehat{N}\). Note that \(\widehat{B}^{r}_{j,p}(\xi)\in\mathcal{O}(\xi^{2})\) for \(\xi\to 0\) if \(j\geq 3\). For this case we study the derivatives when \(\xi\to 0\). With (13) one sees \[\partial_{x}\phi\circ\mathbf{F}_{m}(\zeta,\xi)=\frac{\partial_{\zeta}\widehat {N}(\zeta)\ \xi^{2}}{\xi\ d(\zeta)}\ \gamma_{2}(\zeta)-2\frac{\xi\ \widehat{N}(\zeta)\ \xi}{\xi\ d(\zeta)}\ \partial_{\zeta}\gamma_{2}(\zeta) \stackrel{{\xi\to 0}}{{\longrightarrow}}\ 0. \tag{14}\] But this implies directly that \(\phi(x,y)\) has a well-defined \(x\)-derivative in the scaling center, namely \(\partial_{x}\phi=0\) in \(\mathbf{x}_{0}\). Analogously one gets the well-defined derivative \(\partial_{y}\phi(\mathbf{x}_{0})=0\). Thus we can summarize that in the \(m\)-th patch the push-forwards of the \(C^{1}\)-smooth basis functions \(\widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p},\ j\geq 3\) define mappings in \(C^{1}(\overline{\Omega_{m}})\). This means, it is justified to remove in each patch initially before patch coupling all the parametric basis functions \(\widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p}\) with \(j\leq 2\). But clearly, to preserve the approximation ability of SB-IGA test functions, we have to introduce new basis functions in the physical domain that determine the function value and derivatives at the scaling center. In the planar case, three additional test functions are sufficient, where we exploit the isoparametric paradigm to define preliminary test functions \(\phi_{i,sc}\in C^{0}(\Omega)\) with \[\phi_{1,sc}(\mathbf{x}_{0})=1,\ \ \partial_{x}\phi_{1,sc}(\mathbf{x}_{0})= \partial_{y}\phi_{1,sc}(\mathbf{x}_{0})=0,\] \[\phi_{2,sc}(\mathbf{x}_{0})=0,\ \ \partial_{x}\phi_{2,sc}(\mathbf{x}_{0})=1,\ \ \partial_{y}\phi_{2,sc}(\mathbf{x}_{0})=0,\] \[\phi_{3,sc}(\mathbf{x}_{0})=0,\ \ \partial_{x}\phi_{3,sc}(\mathbf{x}_{0})=0,\ \ \partial_{y}\phi_{3,sc}(\mathbf{x}_{0})=1.\] Latter requirements can be easily satisfied if we use the entries of the geometry control points as coefficients for the parametric pendants \(\hat{\phi}_{i,sc}\). To be more precise, if we have \[\mathbf{F}_{m}=\sum_{j=1}^{n_{2}}\sum_{i=1}^{n_{1}^{(m)}}\mathbf{C}_{i,j}^{(m) }\widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p},\] then we set \[\hat{\phi}_{1,sc}^{(m)}\coloneqq\sum_{j=1}^{p+1}\sum_{i=1}^{n_{1}^ {(m)}}\widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p},\ \ \ \hat{\phi}_{2,sc}^{(m)}\coloneqq\sum_{j=1}^{p+1}\sum_{i=1}^{n_{1}^{(m)}}( \mathbf{C}_{i,j}^{(m)})_{1}\ \widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p}, \tag{15}\] \[\hat{\phi}_{3,sc}^{(m)}\coloneqq\sum_{j=1}^{p+1}\sum_{i=1}^{n_{1} ^{(m)}}(\mathbf{C}_{i,j}^{(m)})_{2}\ \widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p}. \tag{16}\] Note that in the two-patch case we have \(\mathbf{F}_{1}=\mathbf{F}^{(L)}(.-1,.),\ \mathbf{F}_{2}=\mathbf{F}^{(R)}\). Then the \(\phi_{i,sc}\) are defined through \[(\phi_{i,sc})_{|\Omega_{m}}\circ\mathbf{F}_{m}=\hat{\phi}_{i,sc}^{(m)}.\] For example, the geometry in Fig. 2 (a) with control points in Fig. 2 (b) leads to the three scaling center functions in Fig. 7 below. For a \(C^{1}\) spline \(\sum_{j}c_{j}\ \tilde{B}_{j,p}^{r}(\xi)\) the values in the first mesh interval are completely determined by the terms with \(j\leq p+1\). 
Thus using the partition of unity property we get directly that \(\hat{\phi}_{1,sc}^{(m)}=1,\ \hat{\phi}_{2,sc}^{(m)}=(\mathbf{F}_{m})_{1},\ \hat{\phi}_{2,sc}^{(m)}=(\mathbf{F}_{m})_{2}\) in a neighborhood of \(\{(\zeta,0)\ |\ \zeta\in[0,1]\}\) and hence \(\phi_{1,sc}=1,\phi_{2,sc}=x,\ \phi_{3,sc}=y\) in a neighborhood of \(\mathbf{x}_{0}\). In other words, we can choose the latter three functions for the determination of values and derivatives at \(\mathbf{x}_{0}\), i.e. we add them to the set of basis functions used for the coupling step. Note that the continuity of the global parametrization \(\mathbf{F}\) implies the continuity of the composed \(\phi_{i,sc}\). From the previous remarks, the principal idea how to determine the globally \(C^{1}\)-smooth basis functions is clear. Let \(\mathcal{B}\) be the set of all uncoupled basis functions. First, one removes in each patch all the basis functions \(\hat{N}_{i,p}^{r}\cdot\tilde{B}_{j,p}^{r},\ j<3\). Then one adds the three basis functions that determine the Hermite data in the scaling center. After these two steps, we can be sure that the remaining functions have well-defined derivatives and values in \(\mathbf{x}_{0}\). In particular, we deal now with a modified basis \(\mathcal{B}^{{}^{\prime}}\). Consequently, the actual \(C^{1}\) coupling is then done only with the functions in \(\mathcal{B}^{{}^{\prime}}\). As a result, one obtains some space \(\mathcal{W}_{h}^{M,1}\). It should now we verified that indeed \[\mathcal{W}_{h}^{M,1}=\mathcal{V}_{h}^{M,1}(:=\mathcal{V}_{h}^{M}\cap C^{1}( \overline{\Omega})).\] **Lemma 5**.: _It holds \(\mathcal{W}_{h}^{M,1}=\mathcal{V}_{h}^{M,1}\)._ Proof.: It is enough to consider \({}^{*}\supset^{*}\). Assume there is \(\phi\in\mathcal{V}_{h}^{M,1}\backslash\mathcal{W}_{h}^{M,1}\). By the definition of the scaling center basis functions and the iso-parametric paradigm the three polynomials \((x,y)\mapsto x,\ (x,y)\mapsto y,\ (x,y)\mapsto 1\) are elements of \(\mathcal{W}_{h}^{M,1}\). Thus, we can say w.l.o.g. \(\phi(\mathbf{x}_{0})=\partial_{x}\phi(\mathbf{x}_{0})=\partial_{y}\phi(\mathbf{x}_{0})=0\). Writing \(\hat{\phi}^{(m)}=\phi\circ\mathbf{F}_{m}\) this implies on the one hand obviously \(\hat{\phi}^{(m)}(\zeta,0)=0,\forall\zeta\). And, on the other hand by the continuity of the derivative (12) implies \(\partial_{\zeta}\hat{\phi}^{(m)}(\zeta,0)=\partial_{\xi}\hat{\phi}^{(m)}(\zeta, 0)=0,\forall\zeta\). But then it follows \(\phi\in\mathcal{W}_{h}^{M,1}\). After we have seen how the scaling center issue can be handled, we have to face now the coupling conditions. This is content of the next section. ### Enforcing the coupling conditions We shortly mention an adaption of the approach of [4] to compute the actual \(C^{1}\)-regular test functions. We want to emphasize that there are also other ways to enforce the coupling conditions, Figure 7: The auxiliary scaling center basis functions for the geometry and control net from Fig. 2 e.g. a least squares ansatz. Nevertheless, in view of clarity we restrict ourselves to one coupling procedure. Furthermore we explain the steps for a two-patch situation like Fig. 5. We apply similar steps as in [4] to our context which can be summarized as follows. 
Initially, we start with the set of all the non-coupled basis functions \[\tilde{\mathcal{B}}\coloneqq\{\widehat{N}^{r}_{i,p}(\cdot+1)\cdot\widehat{B}^{r }_{j,p}\ |\ 1\leq i\leq n^{(L)}_{1},\ 1\leq j\leq n_{2}\}\cup\{\widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p}\ |\ 1\leq i\leq n^{(R)}_{1},\ 1\leq j \leq n_{2}\},\] \(\mathcal{B}\coloneqq\tilde{\mathcal{B}}\circ\mathbf{F}^{-1}\) respectively. Here we extend an uncoupled basis function to \(\tilde{\Omega}\) by setting it to zero on the remaining patch. Further, below we write \(\tilde{\mathcal{B}}^{k}\) for the set of parametric basis functions after the \(k\)-th step of the coupling procedure. The hat \({}^{*}\stackrel{{\cdot}}{{\dots}}{}{}^{*}\) is used if we consider the functions in the parametric domain \(\tilde{\Omega}\) and we drop it if we mean the corresponding obvious push-forwards. 1. Remove all basis functions corresponding to \(\widehat{N}^{r}_{i,p}(\cdot+1)\cdot\widehat{B}^{r}_{j,p},\ \widehat{N}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p}\) with \(j\leq 2\). We have now a new basis \(\tilde{\mathcal{B}}^{1}\). Latter step is done in order to remove the problematic test functions near the scaling center. 1. The remaining test functions in \(\tilde{\mathcal{B}}^{1}\) are coupled continuously which leads to a modified basis \(\tilde{\mathcal{B}}^{2}\). The \(C^{0}\) coupling is easily achieved due to Assumption 1. This means we once more reduce again the number of basis functions. 1. Incorporate the three scaling center test functions \(\phi_{i,sc}\) determined by (15)-(16); \(\tilde{\mathcal{B}}^{2}\longrightarrow\tilde{\mathcal{B}}^{3}\). This is, we can handle values and derivatives at the scaling center. 1. Remove all basis functions that violate possible (problem-dependent) boundary conditions; \(\tilde{\mathcal{B}}^{3}\longrightarrow\tilde{\mathcal{B}}^{4}\). All four steps can be summarized by means of a transformation matrix \(M_{0}\in\mathbb{R}^{N\times N_{4}},\ N=\#\tilde{\mathcal{B}},\ N_{4}=\# \tilde{\mathcal{B}}^{4}\) that connects the new global continuous basis functions with the uncoupled ones. Now we are able to face the actual \(C^{1}\) coupling. 1. One computes the normal derivative jumps across the interface, more precisely the derivative jump matrix \((M_{J})_{i,j}=\langle[\nabla\phi_{i}^{4}\cdot\boldsymbol{n}_{L}],[\nabla \phi_{j}^{4}\cdot\boldsymbol{n}_{L}]\rangle_{L^{2}(\Gamma)},\ \ \mathcal{B}^{4}=\{\phi_{1}^{4},\phi_{2}^{4},\dots,\phi_{N_{4}}^{4}\}\). Here \(\boldsymbol{n}_{L}\) is the unit outer normal vector of the interface side w.r.t. to the left patch; see Fig. 5. And \([\![g]\!]\) stands for the jump value of \(g\) across the interface \(\Gamma\). 2. The global \(C^{1}\) test functions are then obtained by the the null space matrix \(M_{1}=\text{null}(M_{J})\). Thereby we get the wanted basis function set \(\mathcal{B}^{6}\). This means if one has a weak linear formulation of some problem, the assembly of matrices of the form \((A)_{i,j}=b(\phi_{i}^{6},\phi_{j}^{6})\), where \(b(\cdot,\cdot)\) is a proper bilinear form and \(\mathcal{B}^{6}=\{\phi_{1}^{6},\phi_{2}^{6},\dots,\phi_{N_{6}}^{6}\}\), can be computed from the uncoupled system matrix \((\tilde{A})_{i,j}=b(\phi_{i},\phi_{j}),\ \mathcal{B}=\{\phi_{1},\phi_{2},\dots,\phi_{N}\}\) just by matrix multiplications, namely \[A=M_{1}^{T}\ M_{0}^{T}\ \tilde{A}\ M_{0}\ M_{1}.\] _Remark 2_.: Applying the above steps we can calculate \(C^{1}\)-regular basis functions in the two-patch case. 
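To make the linear algebra of steps (1)-(6) concrete, here is a minimal Python/SciPy sketch of the final coupling stage. It is only an illustration of the pipeline above; the routine `jump_quad`, which returns the quadrature approximation of the \(L^{2}(\Gamma)\) products of the normal-derivative jumps, is a hypothetical placeholder rather than an existing library function.

```python
import numpy as np
from scipy.linalg import null_space

def couple_c1_system(A_uncoupled, M0, jump_quad):
    """Assemble the C1-coupled system matrix A = M1^T M0^T A~ M0 M1.

    A_uncoupled : (N, N) system matrix assembled with the uncoupled basis B.
    M0          : (N, N4) transformation from steps (1)-(4), mapping the
                  reduced continuous basis B^4 to the uncoupled basis B.
    jump_quad   : callable (i, j) -> quadrature value of the L2(Gamma) product
                  of the normal-derivative jumps of phi_i^4 and phi_j^4.
    """
    N4 = M0.shape[1]
    M_J = np.array([[jump_quad(i, j) for j in range(N4)] for i in range(N4)])
    M1 = null_space(M_J)                  # kernel of M_J: C1-regular combinations
    A = M1.T @ M0.T @ A_uncoupled @ M0 @ M1
    return A, M0 @ M1                     # coupled matrix and map from B^6 to B
```

In practice \(M_{J}\) is assembled interface-wise with a Gauss rule, cf. Remark 3 below, and the null space computation realizes step (6).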
In case of a SB-parametrization which consists of more than two patches the coupling is done for each interface according to the mentioned approach. However, the definition of the additional scaling center test functions only has to be done once. _Remark 3_.: Later in the numerics part, we use Gauss quadrature rules to compute the matrix \(M_{J}\) and other appearing integrals. In particular, we do not need to evaluate the basis functions at the singular point. Thus, the computation of the matrix \(M_{J}\) is well-defined. Remarks on generalizations In the first part, the \(C^{1}\)-coupling was explained for the two-patch case, but as already mentioned the approach can be generalized to situations with more patches. Hereto one enforces the \(C^{1}\) coupling at each interface, but in case of \(c_{2}=0\), i.e. classical SB-IGA, the scaling center test functions \(\phi_{i,sc}\) are defined once for the complete multipatch domain. ### Non-star-shaped domains Although we are now able to handle various geometries we are still limited to star-shaped domains. But frequently in applications the computational domain is not star-shaped. Nevertheless, in a special situation the \(C^{1}\)-coupling can be generalized without loosing the (quasi) AS-\(G^{1}\) structure. Namely, lets assume a decomposition of the domain \(\Omega\) into star-shaped subdomains \(\Omega_{m}\), where the interfaces between the different subdomains are straight lines; see for example Fig. 8. Then we know how to couple the patches within each subdomain and the new interfaces between the star-shaped subdomains again fit to the AS-\(G^{1}\) framework since the two elements corresponding to that interface have a w.l.o.g. bilinear parametrization (see Fig. 8). And due to Proposition 3 in [4] we have that bilinear multipatch parametrizations are AS-\(G^{1}\). This property becomes evident in the numerical examples shown in the last part of the article. ### Application to trimmed domains Trimming, i.e. the cut off of domain parts utilizing trimming curves or surfaces, is a fundamental operation within CAD and is applied frequently. For more information and a detailed study we refer to [14]. But as explained in latter reference the implementation of general trimming procedures is quite complicated and for different approaches there arise several difficulties. Especially the stable integration over trimmed geometries might be an issue. SB-IGA with its boundary representation is suitable for the consideration of trimming and we want to explain here briefly, how we can handle \(C^{1}\)-coupling if we incorporate trimming. The basic idea is to interpret the trimmed domain as a new computational domain which is defined by appropriate new boundary curves and to apply the coupling approach from above. First, we explain the procedure by means of a simple example and concerning more complicated situations we add some remarks later. Let the boundary curves \(\gamma^{(m)},\ m=1,\ldots,n\) of the untrimmed domain \(\Omega\) be given; see the blue square boundary in Fig. 9 (a). Further, let a trimming curve \(\gamma_{T}\) be given such that there are two intersections with the boundary \(\partial\Omega\), e.g. \(\gamma^{(1)}(\zeta^{(1)})=\gamma_{T}(s_{1})\) and \(\gamma^{(2)}(\zeta^{(2)})=\gamma_{T}(s_{2})\) for proper Figure 8: A non-star-shaped domain which is divided into star-shaped subdomains that have non-curved interfaces. Such multipatch structures are still suitable for a \(C^{1}\) coupling. 
If the interface between SB-parametrizations with two different scaling centers is a straight line, then w.l.o.g. the patches meet as two bilinear patches and the parametrization is quasi AS-\(G^{1}\), i.e. AS-\(G^{1}\) except at the singular points. \(\zeta^{(k)}\in[0,1]\). Then, we have to check which curve segments belong to the boundary of the wanted trimmed domain \(\Omega_{T}\). For the example in Fig. 9 (a) we assume that the curve segments \[\tilde{\gamma}_{T},\ \tilde{\gamma}^{(1)},\ \tilde{\gamma}^{(2)}\ \ \text{and}\ \ \gamma^{(3)},\ \gamma^{(4)}\] define the new boundary of \(\Omega_{T}\). If we parameterize now the modified boundary parts \(\tilde{\gamma}_{T},\ \tilde{\gamma}^{(1)},\ \tilde{\gamma}^{(2)}\) utilizing standard NURBS or B-splines, i.e. e.g. \(\tilde{\gamma}_{T}\in(N_{p}^{r})^{2}\) we are again in the situation of classical untrimmed SB-IGA provided that we have a suitable scaling center for the trimmed domain; see Remark 4 below. The exact boundary representation of the trimmed domain is always possible which can be seen as follows. Let \(\gamma\colon[0,1]\to\mathbb{R}^{2}\) be a NURBS curve in \((N_{p}^{r})^{2}\) and \(0\leq\zeta^{(1)}<\zeta^{(2)}\leq 1\). Inserting knots at \(\zeta^{(1)},\zeta^{(2)}\) s.t. the multiplicity of the knots \(\zeta^{(1)},\zeta^{(2)}\) is \(p+1\), we obtain NURBS with discontinuities at the mentioned two knots that represent the original curve \(\gamma\); cf. [2]. Consequently, each new NURBS basis function is non-zero only in one of the three intervals \[I_{1}=[0,\zeta^{(1)}],\ \ I_{2}=[\zeta^{(1)},\zeta^{(2)}],\ \ I_{3}=[\zeta^{(2)},0],\ \ \text{see Fig. \ref{fig:NURBS}}.\] Hence, choosing the appropriate NURBS and after a simple re-scaling it is easy to see that we can describe each of the three curve segments \(\gamma_{|I_{i}}\) with NURBS curves such that they start and end in points in \(\{\gamma(0),\ \gamma(\zeta^{(1)}),\ \gamma(\zeta^{(2)}),\ \gamma(1)\}\). In other words, if the intersection parameter values in the trimming example above are known, we just have to apply some knot insertion steps to find exact parametrizations \(\tilde{\gamma}^{(m)}\colon[0,1]\to\partial\Omega_{T}\) of the trimmed domain boundary curves. Clearly, the trimming example in Fig. 9 is very simple and in general situations some issues might occur. However, most problems can be handled as the next remarks indicate. _Remark 4_.: On the one hand, if we have the boundary curves of the trimmed domain, we need to compute a suitable scaling center, but the trimmed domain may loose its star-shape structure. Nevertheless, the trimmed domain can be partitioned into smaller star-shaped regions. We propose a naive ansatz, namely we divide the trimmed domain via straight cut lines into smaller star-shaped blocks in order to maintain the analysis-suitability. In more detail, we repeat trimming steps with straight lines but incorporate all the subdomains arising from the trimming with that straight lines. This simple partition approach is illustrated in Fig. 11. There we have the square domain with a trimming curve that leads to a non-star-shaped new domain. But the application of an division step with a proper straight line leads to a two star-domains for which scaling centers can be chosen, see Fig. 11 (b). _Remark 5_.: On the other hand, there could be more than 2 intersection points between trimming curve and original boundary. Then, the overall trimming still can be done in an iterative manner, i.e. we cut off successively simple regions. 
Also the situation of a trimming curve completely defined in the interior of the domain is manageable. One partitions the trimmed domain via straight lines into star-shaped blocks and handles each of this subdomains with SB-IGA. We refer to Fig. 12 for an illustration. _Remark 6_.: We assume always that the SB-mesh boundary is given by NURBS curves, i.e. curves of the form \(\gamma\colon[0,1]\to\mathbb{R}^{2}\). Then, experiments not shown here indicate that if one uses exact boundary representations of the trimmed domain based on knot insertion together with the iso-parametric paradigm one might obtain SB meshes which are not optimal. To be more precise, one might get meshes with very thin elements which are not optimal in the sense of condition. Maybe if we only approximate the boundary curves with NURBS, e.g. with a \(L^{2}\) projection ansatz, we might relax latter issue. Figure 10: For the trimming example from Fig. 9 we have the NURBS on the left to describe the trimming curve \(\gamma_{T}\). The intersections of \(\gamma_{T}\) with the boundary curves correspond to the parameter values indicated with the black dots on the right. If we insert knots at these values \(\zeta^{(1)},\ \zeta^{(2)}\) we introduce new NURBS which are suitable for a decoupling and the extraction of the relevant part of the trimming curve that is related to the highlighted NURBS in the right figure. Figure 9: The SB-IGA ansatz is suitable to study trimmed geometries. One computes the intersections and the new boundary curves utilizing a knot insertions to decouple the boundary curves in without changing the geometry. Above the original 4-patch domain is trimmed and as a result we get a 5-patch geometry. ## 6 Numerical examples In this section, the proposed method is investigated regarding its performance by means of several numerical experiments. There are different application examples and methods where \(C^{1}\) regularity of test and ansatz functions is required for the numerical calculations. Here we want to study the application in structural mechanics of Kirchhoff plate theory for which we can exploit the increased inter-patch regularity ansatz from above. The approach is based on the assumptions made by Kirchhoff who states that "planes perpendicular to the mid surface will remain plane and perpendicular to the deformed mid surface". The chosen examples demonstrate the power of the theory in the context of scaled boundary isogeometric analysis and contain untrimmed and trimmed examples, both. The approach has been implemented utilizing MATLAB [15] in combination with the open source package GeoPDEs [16]. It is especially designed for the solution of partial differential equations in the framework of isogeometric analysis. At first, the Kirchhoff plate formulation is stated briefly. Afterwards, the proposed method is checked on various examples to outline the features and characteristics of SB-IGA plates. The reference solutions are either obtained by the analytical solution, if available, or by results from literature. Figure 11: Here we see a trimming curve that would lead to a non-star domain. We can divide the trimmed domain into two star-shaped parts using a straight cut line. Figure 12: Here we see a trimming curve that would lead to a non-star domain. We can divide the trimmed domain into two star-shaped parts using a straight cut line. ### Kirchhoff plate theory Kirchhoff plates determine a two-dimensional fourth-order boundary value problem governed by the bi-Laplace operator. 
The description of the formulation below follows the notation and derivation presented in [17] and [18]. We consider domains \(\Omega\subset\mathbb{R}^{2}\) that have a sufficient smooth boundary \(\partial\Omega=\Gamma\) such that the unit normal vector \(\mathbf{n}\) is well-defined. Further, the boundary is partitioned into parts for the specification of the energetically conjugate deflections and shear forces \(\Gamma=\overline{\Gamma_{u}\cup\Gamma_{Q}}\) and rotations and bending moments \(\Gamma=\overline{\Gamma_{\phi}\cup\Gamma_{M}}\). Further, we suppose that \(\Gamma_{u}\cap\Gamma_{Q}=\varnothing\) and \(\Gamma_{\phi}\cap\Gamma_{M}=\varnothing\), respectively. The strong form of the Kirchhoff plate formulation is stated as \[\Delta^{2}u =\frac{g}{D} \text{in}\quad\Omega \tag{17}\] \[u =u_{\Gamma} \text{on}\quad\Gamma_{u}\] (18) \[-\nabla u\cdot\mathbf{n} =\phi_{\Gamma} \text{on}\quad\Gamma_{\phi}\] (19) \[D\left(\nabla\left(\Delta u\right)+(1-\nu)\mathbf{\Psi}(u)\right) \cdot\mathbf{n} =Q_{\Gamma} \text{on}\quad\Gamma_{Q}\] (20) \[\nu D\Delta u+(1-\nu)D\mathbf{n}\cdot(\nabla\nabla u)\mathbf{n} =M_{\Gamma} \text{on}\quad\Gamma_{M}. \tag{21}\] with \(\nabla(\cdot)\) as the gradient operator, \(\Delta(\cdot)\) the Laplace operator and \(\mathbf{\Psi}(\cdot)\) as the third order differential operator \[\mathbf{\Psi}(\cdot)=\left[\partial_{xyy}(\cdot),\,\partial_{xxy}(\cdot) \right]^{T}. \tag{22}\] Moreover, \(u\) is the deflection of the plate, \(g\) the load per unit area and \(u_{\Gamma}\), \(\phi_{\Gamma}\), \(M_{\Gamma}\) and \(Q_{\Gamma}\) the prescribed deflections, rotations, bending moments and shear forces acting on the boundary. \(D\) denotes the bending stiffness consisting of the thickness \(t\), the Poisson ratio \(\nu\) and the Young modulus \(E\). As \(t\), \(\nu\) and \(E\) are assumed to be constant over the domain, yielding an isotropic, homogeneous material, the bending stiffness is defined as \[D=\frac{Et^{3}}{12(1-\nu^{2})}. \tag{23}\] The classical primal weak form for the computation of approximate solutions has the form \[\text{Find}\quad a(u_{h},v_{h})=F(v_{h}),\qquad\forall v_{h}\in V_{h} \tag{24}\] for a suitable test function space \(V_{h}\) that depends on the boundary conditions. Moreover, \(a(u_{h},v_{h})\) is a bilinear form defined as \[a(u_{h},v_{h})=\int_{\Omega}D\left[(1-\nu)\,\nabla(\nabla v_{h}):\nabla( \nabla u_{h})+\nu\Delta v_{h}\Delta u_{h}\right]\mathrm{d}x\mathrm{d}y. \tag{25}\] Furthermore, the linear functional \(F\) is denoted as \[F(v_{h})=\int_{\Omega}gv_{h}\,\mathrm{d}x\mathrm{d}y+\int_{\Gamma_{M}}M_{ \Gamma}\frac{\partial v_{h}}{\partial\mathbf{n}}\mathrm{d}\gamma+\int_{ \Gamma_{Q}}Q_{\Gamma}v_{h}\mathrm{d}\gamma. \tag{26}\] We remark that in the numerical tests no external shear forces \(Q_{\Gamma}\) and bending moments \(M_{\Gamma}\) are applied except for the L-bracket 6.5. Moreover, the boundary conditions for \(u_{\Gamma}\) (18) and \(\phi_{\Gamma}\) (19) need to be enforced strongly, while the conditions of \(Q_{\Gamma}\) (20) and \(M_{\Gamma}\) (21) are natural boundary conditions. Provided proper boundary conditions and suitable \(V_{h}\subset H^{2}(\Omega)\), the conditions of the Lax-Milgram theorem are satisfied and the weak form has a unique solution \(u_{h}\in V_{h}\). 
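To illustrate how the plate model enters an implementation, the following NumPy sketch (our own, with hypothetical argument conventions) evaluates the bending stiffness (23) and a quadrature approximation of the bilinear form (25), given the Hessians of two discrete functions at the physical quadrature points.

```python
import numpy as np

def bending_stiffness(E, t, nu):
    """Bending stiffness D = E t^3 / (12 (1 - nu^2)), cf. (23)."""
    return E * t**3 / (12.0 * (1.0 - nu**2))

def kirchhoff_bilinear(hess_u, hess_v, weights, D, nu):
    """Quadrature approximation of a(u_h, v_h) from (25).

    hess_u, hess_v : arrays of shape (nq, 2, 2) with the Hessians of u_h and
                     v_h at the physical quadrature points.
    weights        : (nq,) quadrature weights including the Jacobian factors.
    """
    frob = np.einsum('qij,qij->q', hess_v, hess_u)   # grad(grad v) : grad(grad u)
    lap_u = hess_u[:, 0, 0] + hess_u[:, 1, 1]
    lap_v = hess_v[:, 0, 0] + hess_v[:, 1, 1]
    integrand = D * ((1.0 - nu) * frob + nu * lap_v * lap_u)
    return np.sum(weights * integrand)
```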
In the examples below, the discretization of the solution field is performed by the previously defined SB-IGA test functions including the coupling approach at the patch boundaries, which ensure \(C^{1}\)-continuity within the whole domain that is required for the plate formulation. In other words, the discrete spaces are of the form \(V_{h}\subset\mathcal{V}_{h}^{M,1}\); see (10), since boundary conditions are taken into account. In the following examples, we mean mesh size \(h\) if the underlying parametric meshes for each patch have \(1/h\) equidistant subdivisions with respect to both parametric coordinate directions. ### Smooth solution on a square plate At first, the plate formulation is checked on its general performance. A square plate of \(\Omega=[-0.5,0.5]^{2}\) is subjected to a smoothly distributed source function \(g\) that is chosen such that the exact solution is \(u=\cos(\pi x)^{2}\,\cos(\pi y)^{2}\). A plate of thickness \(t\approx 0.1063\) is considered of elastic material with Young's modulus \(E=10^{4}\) and Poisson's ratio \(\nu=0.0\), such that the flexural stiffness \(D=1\). We have clamped boundary conditions and the domain is discretized with four SB-IGA patches. The scaling center is placed with an offset of the plate center, namely \(x_{off}=-0.15\) and \(y_{off}=0.1\), to demonstrate its approximation behaviour for non-symmetric meshes. Fig. 13 exemplary shows the mesh for \(h=1/4\) and the corresponding deformation plot. The numerical solution is evaluated with respect to the \(H^{2}\) seminorm and \(L^{2}\) norm that are computed for third, fourth and fifth order basis functions, where the error norms are defined as \[|u-u_{h}|_{H^{2}(\Omega)}\quad\text{for the}\,H^{2}\text{-seminorm} \tag{27}\] \[||u-u_{h}||_{L^{2}(\Omega)}\quad\text{for the}\,L^{2}\text{-norm}. \tag{28}\] For the corresponding convergence rates, we refer to Fig. 14. Figure 13: Example of the smooth solution on a square plate. On the left side, the underlying mesh for \(h=1/4\) is shown. On the right, the deformation plot of the problem for the corresponding mesh is depicted, where \(p=3\) and \(r=1\). The convergence rate indicate for both error estimates optimal convergence rates \(\mathcal{O}(h^{p-1})\) for the \(H^{2}\) seminorm and \(\mathcal{O}(h^{p+1})\) for the \(L^{2}\) norm. Especially we do not see a \(C^{1}\) locking effect. ### Point load on a square plate The next example shows the capability of the proposed formulation to consider point loads even in the scaling center. Again, the square plate defined as \(\Omega=[-0.5,0.5]^{2}\) is considered and the boundaries \(\partial\Omega\) are simply supported. For the material parameters we choose \(E=10^{6}\) and \(\nu=0\) and the thickness is chosen such that \(D=1\). The point load is defined as \(F=1\). There is an analytical solution for the displacement of the plate center under a point load, namely using [19] one gets: \[u_{ex}=\frac{4FL^{2}}{D\pi^{4}}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{1}{ (m^{2}+n^{2})^{2}}\approx 0.01160, \tag{29}\] where \(L\) is the length of plate. The reference solution \(u_{ref}\approx u_{ex}\) for the deflection in the center under the load application is obtained from the series above by taking sufficient terms into account. Fig. 15 shows a mesh exemplary and the corresponding deformation plot. Figure 14: Convergence studies of the \(H^{2}\) seminorm and the \(L^{2}\) norm on the example of the smooth solution on a square plate of orders \(p=3\), \(p=4\) and \(p=5\). 
For all these cases the regularity \(r\) is \(1\). Besides we look at the relative errors calculated by \(|1-u/u_{ref}|\), where the convergence study is shown in Fig. 16. Here, the example points out that the formulation is capable to determine results properly even for loading applied in the singular point. We think that the deviation from the best approximation rates is caused by the lack of regularity for the point load, which is not a function anymore, strictly speaking. In the mentioned figure we display also the bending moment \(m_{11}=D\,\partial_{xx}u+\nu\,\partial_{yy}u\), a quantity for which the \(C^{1}\) regularity is crucial. Figure 16: On the left, the convergence study of the example of the point load on a square plate and orders of \(p=3\), \(p=4\), and \(p=5\) is shown. For all these cases the regularity is \(r=1\). On the right, the bending moment \(m_{11}\) is plotted for \(p=4\) and \(h=1/8\). Figure 15: Example of the point load on a square plate. On the left side, the underlying mesh for \(h=1/4\) is shown. On the right side the deformation plot of the corresponding mesh with \(p=3\) and \(r=1\) is displayed. ### Perforated circular disk Having tested the method for the plate formulation in a simply square domain, trimmed geometries are now incorporated. Therefore, a simply supported disk of radius \(R=1\) and \(t=0.02\), \(E=10^{7}\), \(\nu=0.3\) is trimmed by four holes of diameter \(d_{hole}=0.1\), see Fig. 17 (a). The center of the holes are placed on the \(x\) and \(y\) axes with a distance of \(0.4\) from the origin. The disk is loaded by a constant surface load \(g=1\). The reference solution is obtained from [12] as the maximal deflection in the center \(u_{ref}=0.008950\), which is a numerically converged solution determined using the commercial software Abaqus with the triangular, linear shell element S3. Since the SB-IGA formulation requires star shaped patches, it is not possible to determine the deflection by a single scaling center but several scaling centers are required; see Sec. 5.1. In the following, a mesh of 25 scaling centers is evaluated (see Fig. 17 (b)) with varying numbers of elements and orders on each patch. Fig. 18 (a) shows the deformation plot of the example corresponding to the mesh presented in Fig. 17. Furthermore, the convergence is evaluated in 18 (b) for the orders of \(p=3\), \(1/h=2,3,4,5\) and \(p=4\), \(1/h=2,3,4\). For a constant surface load nearly exact results are obtained by at least cubic basis functions. This is fulfilled for both orders. A comparison to an automatic meshing approach of triangular Bezier spline elements presented in [12] shows that the incorporation of trimming in the structural description of the model as in the context of SB-IGA has advantages over mesh finding algorithms as the meshing algorithm does not converge towards the reference solution even for very fine meshes. ### L-bracket In the second last example, an L-bracket is investigated analogously to [20] and [21] with the geometry shown in Fig. 19. The bracket has underlying parameters \(E=200\cdot 10^{9}\), \(\nu=0\) and \(t=0.01\). The boundary conditions are clamped conditions on the left holes (upper and lower hole) as well as a constant line load of \(f=100\) on the upper right edge marked in blue. Further, the partitioning of the mesh created with 20 scaling centers is given in Fig. 19 (b). 
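The reference value in (29) is easy to reproduce numerically. The short script below is our own sanity check rather than part of the paper's code; note that only odd summation indices contribute, since the \(\sin^{2}(m\pi/2)\) factors of the full Navier solution vanish for even \(m,n\).

```python
import numpy as np

def center_deflection_point_load(F=1.0, L=1.0, D=1.0, n_terms=400):
    """Navier series for the center deflection of a simply supported square
    plate of side L under a central point load F; only odd m, n contribute."""
    m = np.arange(1, 2 * n_terms, 2, dtype=float)     # odd indices 1, 3, 5, ...
    M, N = np.meshgrid(m, m, indexing='ij')
    return 4.0 * F * L**2 / (D * np.pi**4) * np.sum(1.0 / (M**2 + N**2) ** 2)

print(center_deflection_point_load())   # ~ 0.01160, matching u_ref from (29)
```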
We emphasize that meshes with less scaling centers could be applied, however, the model is not chosen to demonstrate the convergence, but the general application to complex models. Figure 17: Model of the perforated circular disk. On the left side the geometry with the trimming curves is drawn. On the right side, the underlying mesh of \(h=1/2\) is shown. Figure 19: The L-bracket. On the left side, the geometry is shown. On the right side the underlying mesh for \(h=1/4\) is given. Figure 18: On the left side, the corresponding deformation plot to the mesh of Fig. 17 with \(p=3\) and \(r=1\) is shown. On the right side, the convergence studies of the displacement in the middle of the perforated disk with orders of \(p=3\), \(p=4\) (\(r=1\)) is compared to [12]. We present the numerical results in terms of the deformation \(u\) and the bending moments \(m_{11}\), \(m_{22}\) and \(m_{12}\). These are computed utilizing the bending stress tensor \(\mathbf{m}=(m_{ij})\) defined component-wise as \[m_{11}=D\,\partial_{xx}u+\nu\,\partial_{yy}u\qquad m_{22}=D\,\partial_{yy}u+\nu \,\partial_{xx}u\qquad m_{12}=D(1-\nu)\,\partial_{xy}u. \tag{30}\] The plots of the deformation \(u\) for mesh \(h=1/4\) and orders \(p=3\) and \(p=4\) respectively are shown in Fig. 20. If we look at the latter one observes a good agreement with results given by [20, Fig. 14 (b)]. Moreover, the bending moments are presented in Fig. 21. Compared to [20, Fig. 15 (d)-(f)] we obtain very similar approximations and the extremal values at the upper left hole and the vertical part of the bracket fit to the mentioned reference. We note that the proposed method is capable to use the exact geometries. Further, boundary conditions can be applied at trimmed curves. Figure 20: Example of the L-bracket with line load. On the left and in the middle, the deformation plots of the proposed approach are shown for the mesh Fig. 19 (b), \(r=1\) and polynomial order of \(p=3\) and \(p=4\), respectively. ### A clamped violin In this example, we demonstrate the applicability of the coupling procedure in the context of more complicated geometries involving non-trivial trimmed parts. In more detail, we look at a KL-plate which has the shape given in Fig. 22 (a). This geometry is inspired by the violin example in [22], however, here the setting as well as the geometry differs. We note that the boundaries of Fig. 22 (a) are represented by degree 3 B-splines. A decomposition of the domain into star-shaped blocks with straight interfaces is possible and for the computations we use the mesh structure given in Fig. 22 (b). We consider the case of a uniform load function \(f=-0.01\) and as boundary conditions we require the clamping of the inner boundaries that define the f-holes. All the other boundaries can move freely. Further we choose the material parameters \(E=10^{5},\ \nu=0.1,\ t=0.2\). Using B-splines of degree 3 and regularity 1 for the determination of the coupled basis functions we obtain as deformation the result in Fig. 23. Although a reference solution is missing, we regard the numerical deformations as reasonable and the coupling ansatz seems to work for this advanced setting. Figure 21: Example of the L-bracket with line load. The results of the proposed approach with mesh of \(h=1/4\) and \(p=4\), \(r=1\) are shown. ## 7 Remarks on stability The examples from above confirm that we can apply the \(C^{1}\)-coupled basis functions in the context of the Kirchhoff plate. 
In particular, it is still possible to handle the high-order problem although we have a singular parametrization mapping. Certainly, this singularity comes along with some difficulties. The main bottleneck is the occurrence of large condition numbers for the underlying linear systems, especially if the mesh sizes gets small. The latter causes problems in the sense of Figure 23: The violin example. On the left side, we show the deformation. On the right side the deformation is illustrated utilizing a side view. Figure 22: The violin example. On the left side, the computational domain is illustrated. On the right side the underlying mesh is shown. stability of the approximate solution and below we discuss this briefly. First, we bring to mind that the definition of the \(C^{1}\) basis functions implies the boundedness of the \(H^{2}\)-norm, i.e. \(\mathcal{V}_{h}^{M,1}\subset H^{2}(\Omega)\) if we have B-spline ansatz spaces and B-spline parametrizations. By construction, we have the smoothness of each basis function in the interior of each mesh element and as already mentioned the global continuity of the first derivatives. Then, if we look at 'Case I' from in Sec. 4. of [23], we get with Theorem 4.4. from latter article that the coupled SB-IGA test functions are in \(H^{2}(\Omega_{m})\) for each patch. To see this, we observe that we use for the coupling method only such B-splines \(\widehat{B}^{r}_{i,p}\cdot\widehat{B}^{r}_{j,p}\) with \(j>2\) or the additional scaling center test functions, which are obviously smooth in a whole neighborhood of the scaling center. But, as piece-wise \(H^{2}\) mappings which are globally \(C^{1}\) we also get the \(H^{2}\) property in the whole domain. Since we use a quadrature formula with discrete evaluation points away from the scaling center, we obtain well-defined integral values and one does not see directly the singularity within the computations of system matrix entries. However, we can observe large second derivative entries caused by the mesh degeneration. This comes along with instability problems for the discrete problem in the case of fine meshes. In other words, the condition number increases strongly. This instability issue can be seen in figures Fig. 25 (a)-(b) and Fig. 26 (a). Here we have to note that we use for all examples the standard _mldivide()_ function from MATLAB [15] to solve appearing linear systems. A naive idea to relax the problem of evaluations near the singular point is to reduce the number of basis functions near the scaling center. An easy way without much additional effort, is to combine the basis functions of two meshes. One coarse mesh, for the basis functions with support near the scaling center and one refined mesh, for the areas away from the singular point. In more detail, one could proceed as follows. We have given two SB-meshes for some domain. One fine mesh with subdivisions w.r.t. both parametric coordinates (see Fig. 24 (a)) and secondly we have the analogous SB-mesh of the same domain, but which is only refined in the radial direction, compare Fig. 24. Clearly, the coarse mesh is not suitable to highly accurate approximations. However, the mesh elements adjacent to the scaling center have diameter \(\mathcal{O}(h)\) and the mesh from Fig. 24 (b) is enough for an approximation in the neighborhood of \(\boldsymbol{x}_{0}\). Hence, in view of Fig. 24(a) and Fig. 
24 (b) we consider only those basis functions in the fine mesh which have a support outside \(A\) which stands for the union of mesh elements adjacent to the singular point. This means we consider \[\mathcal{V}_{h}^{\text{fine}}\coloneqq\text{span}\{\phi\in\mathcal{V}_{h}^{(1 )}\ |\ \text{supp}(\phi)\subset\Omega\backslash\overline{A}\},\] where \(\mathcal{V}_{h}^{(1)}\) is the (un-coupled) SB-IGA space for the fine mesh. From the SB-IGA space \(\mathcal{V}_{h}^{(2)}\) corresponding to mesh Fig. 24 (b) we add the space \[\mathcal{V}_{h}^{\text{coarse}}\coloneqq\text{span}\{\phi\in\mathcal{V}_{h}^{( 2)}\ |\ \text{supp}(\phi)\cap\overset{\circ}{A}\neq\emptyset\}.\] Consequently, our modified (uncoupled) SB-IGA test function space is \[\mathcal{V}_{h}^{\text{combined}}\coloneqq\mathcal{V}_{h}^{\text{fine}}\oplus \mathcal{V}_{h}^{\text{coarse}}.\] After this we can go through similar steps like in the sections before to define \(C^{1}\)-regular global basis functions. This idea can straightforwardly generalized to more complex geometries. _Remark 7_.: By the definition of the two spaces utilizing the support of the basis functions it is clear that the basis from \(\mathcal{V}_{h}^{\text{fine}}\) together with the basis functions from \(\mathcal{V}_{h}^{\text{coarse}}\) directly determines a basis for \(\mathcal{V}_{h}^{\text{combined}}\). Although such a combination of two meshes is certainly not the only method to remove problematic basis functions near the singularity, it is easy to implement and it is helpful to demonstrate the stabilizing influence of a such mesh coarsening near the center. Therefore, we repeat below the convergence test from Sec. 6.2 as well as the example of the plate with point load from Sec. 6.3, but now with the above explained two-mesh ansatz. If we look at the results in Fig. 14 and 16, we see an obviously more stable decay behavior compared to the original unstabilized computations. These first experiments with modified spaces reveal the importance of a good mesh choice, at least when dealing with high order problems. For sure, different refinement strategies should be discussed, too, maybe also in combination with some adaptive scheme. And the application of the naive two mesh ansatz in the context of more complicated test cases is advisable. But, it is not our goal of this article to study the stability issue in detail. This seems a reasonable object of investigations with another publication. We want to conclude this subsection with the following summarizing words. The SB-IGA space with \(C^{1}\) coupled basis functions suffers from bad conditions numbers, especially if we look at high order problems combined with fine meshes. Nevertheless, there is hope to alleviate this problem. We think that removing basis functions near the scaling center leads to increased stability in the computations without reducing the overall convergence behaviour significantly. ## 8 Conclusion In this contribution, an approach of the incorporation of SB-IGA patches and \(C^{1}\) coupling in terms of analysis suitable \(G^{1}\) parametrization was presented. This implies a special consideration of the basis functions at the scaling center. The method was tested in the context of the Kirchhoff plate formulation. It is especially suitable for trimmed models since the boundary representation can easily replace the existing boundary by the trimming curve, no matter if the trimming curve is entirely inside the domain or partially outside. 
The proposed approach was tested in various ways: the \(L^{2}\) norm and the \(H^{2}\) seminorm were checked against the optimal convergence rates, and the bending moments as well as the deflection at specific points were evaluated to assess the accuracy for each polynomial degree. The stability issues arising at the scaling center for fine meshes are discussed and a stabilization remedy is presented that yields improved results. Moreover, the results were compared to other approaches considering meshing for unfavorably shaped problems and to results from other coupling approaches in isogeometric analysis. In conclusion, we presented the feasibility of SB-IGA with \(C^{1}\) coupling in the context of Kirchhoff plates, which is especially powerful for sophisticated structures. Acknowledgments: The financial support of the DFG (German Research Foundation) under Grant No. KL1345/10-2 (project number: 667493) and Grant No. SI756/5-2 (project number: 667494) is gratefully acknowledged. Figure 24: To demonstrate the influence of the basis functions near the scaling center, we combine the basis functions from two meshes. The right mesh with refinement only in radial direction is exploited for the approximation near the scaling center, whereas the fully refined mesh on the left is only used for the basis functions with support outside the mesh elements adjacent to the singular point. Figure 25: Convergence studies of the \(H^{2}\) seminorm and the \(L^{2}\) norm on the example of the smooth solution on a square plate with and without the stabilization ansatz and orders of \(p=3\), \(p=4\) and \(p=5\) (\(r=1\)).
2310.07131
Echocardiography video synthesis from end diastolic semantic map via diffusion model
Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated significant achievements in various image and video generation tasks, including the domain of medical imaging. However, generating echocardiography videos based on semantic anatomical information remains an unexplored area of research. This is mostly due to the constraints imposed by the currently available datasets, which lack sufficient scale and comprehensive frame-wise annotations for every cardiac cycle. This paper aims to tackle the aforementioned challenges by expanding upon existing video diffusion models for the purpose of cardiac video synthesis. More specifically, our focus lies in generating video using semantic maps of the initial frame during the cardiac cycle, commonly referred to as end diastole. To further improve the synthesis process, we integrate spatial adaptive normalization into multiscale feature maps. This enables the inclusion of semantic guidance during synthesis, resulting in enhanced realism and coherence of the resultant video sequences. Experiments are conducted on the CAMUS dataset, which is a highly used dataset in the field of echocardiography. Our model exhibits better performance compared to the standard diffusion technique in terms of multiple metrics, including FID, FVD, and SSMI.
Phi Nguyen Van, Duc Tran Minh, Hieu Pham Huy, Long Tran Quoc
2023-10-11T02:08:05Z
http://arxiv.org/abs/2310.07131v1
# Echocardiography Video Synthesis from End Diastolic Semantic Map via Diffusion Model ###### Abstract Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated significant achievements in various image and video generation tasks, including the domain of medical imaging. However, generating echocardiography videos based on semantic anatomical information remains an unexplored area of research. This is mostly due to the constraints imposed by the currently available datasets, which lack sufficient scale and comprehensive frame-wise annotations for every cardiac cycle. This paper aims to tackle the aforementioned challenges by expanding upon existing video diffusion models for the purpose of cardiac video synthesis. More specifically, our focus lies in generating video using semantic maps of the initial frame during the cardiac cycle, commonly referred to as end diastole. To further improve the synthesis process, we integrate spatial adaptive normalization into multiscale feature maps. This enables the inclusion of semantic guidance during synthesis, resulting in enhanced realism and coherence of the resultant video sequences. Experiments are conducted on the CAMUS dataset, which is a highly used dataset in the field of echocardiography. Our model exhibits better performance compared to the standard diffusion technique in terms of multiple metrics, including FID, FVD, and SSMI. Nguyen Van Phi\({}^{1}\), Tran Minh Duc\({}^{1}\), Pham Huy Hieu\({}^{2\dagger}\), Tran Quoc Long \({}^{1\dagger}\)+\({}^{1}\) University of Engineering and Technology, VNU, Hanoi, Vietnam \({}^{2}\)VinUni-Illinois Smart Health Center, VinUni, Hanoi, Vietnam Diffusion Model, Echocardiography, Semantic Generation, Video Synthesis Footnote †: Supported by VINIF under project code VINIF:2019.DA02. ## 1 Introduction Echocardiography is an ultrasound of the heart that supports cardiologists in assessing the structure and function of the heart. It is widely used in diagnosis thanks to its accessibility, affordability, and non-invasiveness. In recent years, many research efforts have been made to apply machine learning to echocardiography to improve image analysis, automate diagnostic tasks, and advance our understanding of cardiac conditions [1, 2]. However, one of the biggest challenges in echocardiography today is the degradation of image quality caused by the ultrasound image formation process. Ultrasound images frequently contain speckle noise and motion artifacts, which can result in inaccurate anatomical structure examination and expensive manual annotation of echocardiographygrams. [3]. The traditional method of developing machine learning involves collecting and manually labeling a large number of data samples. Manual annotations must be done by clinical specialists, which is costly and time-consuming. Synthesizing ultrasound images has recently come to light as a promising method for obtaining a wide range of reliable datasets for training machine learning models [4]. Two primary approaches are frequently utilized for synthesizing ultrasound images: physics-based simulation and machine learning-based image generation. The goal of physics-based simulation is to reproduce the processes of beamforming and ultrasound formation, seeking to mimic their behavior and characteristics [5, 6]. However, to accurately simulate the underlying physics, these methods require realistic scatter maps of the heart. Moreover, obtaining these realistic scatter maps at a large scale is extremely difficult. 
On the other hand, conditional ultrasound image generation has demonstrated promising outcomes [4, 7, 8]. Previously, Generative Adversarial Networks (GANs) have been the go-to solution to generate ultrasound sequences [9]. Multiple approaches were also put forth to control the model's Figure 1: **Conditional Diffusion Model for Semantic Echocardiography Video Synthesis.** Our framework transforms a tensor of standard Gaussian noise into a realistic echocardiography video via iterative denoising process, given the guidance of the semantic label map \(x\) generating behavior. Liang _et al._[10] proposed a sketch guided GANs to obtain an editable image synthesis model. Gilbert _et al._[4] suggested using GANs conditioned on semantic maps from 3D deformable cardiac models to generate ultrasound images. Nevertheless, GANs still suffered from poor mode coverage [11]. Recently, diffusion models [12] have emerged as powerful generative models and demonstrated their effectiveness in generating realistic data. They could produce ultrasound images and videos based on conditions such as clinical attributes or segmentation maps. For example, Reynaud _et al._[7] utilized cascaded diffusion model conditioned on End Diastolic (ED) frame to generate ultrasound image with various left ventricle ejection fractions (LVEFs) levels. However, the model requires a real ultrasound image as an initial condition, limiting the diversity of generated images. Stojanovski _et al._[8] also used diffusion models but use semantic label maps as conditions. But their model was designed to generate single image, and extending it to generate image sequences is non-trivial due to the unavailability of fully annotated ultrasound videos. A solution is to convert the 2D Convolutional Neural Networks (CNNs) into 3D CNNs [13]. However, this approach is computationally inefficient and requires a lot of memory. Pan _et al._[14] proposed a method that generates videos based on optical flow, but accurate optical flow estimation is necessary for its success. As a result, little has been done to synthesize echocardiography videos using single semantic label map. This presents a unique challenge that needs to be addressed to be able to generate coherent and accurate echocardiography sequences. **Contribution.** In this paper, we present an echocardiography video generation model based on DDPMs [15, 12]. By dynamically incorporating the semantic label map of diastolic in multi-scale decoder, our model could produce realistic echocardiography sequences with diverse anatomical structures. Our contributions could be summarized as follows: (1) To the best of our knowledge, this study is the first attempt at generating echocardiography video from a semantic label map using diffusion model. (2) We propose a new network structure to handle noisy input and semantic mask effectively in order to incorporate anatomical information and produce realistic ultrasound sequences. ## 2 Method Our model generates a new video by gradually removing Gaussian noise under the guidance from semantic segmentation map (see Figure 1). In the next section, we will describe the conditional DDPMs and explain how we incorporate semantic information into the denoising process. **Conditional DDPMs.** There are two Markov process involved in DDPMS, i.e. forward process and reverse process. The forward process progressively adds noise into data, whereas the reverse process tries to eliminate it. 
Given the condition \(x\), the goal of conditional DDPMs is to maximize the likelihood \(p_{\theta}(y_{0}|x)\), where the conditional data follows a distribution \(q(y_{0}|x)\). Starting from Gaussian noise \(p(y_{T})\sim\mathcal{N}(0,\mathbf{I})\), the reverse process \(p_{\theta}(y_{0:T}|x)\) is a Markov process with learned Gaussian transitions, which is formulated as follows: \[p_{\theta}(y_{0:T}|x)=p(y_{T})\prod_{t=1}^{T}p_{\theta}(y_{t-1}|y_{t},x) \tag{1}\] \[p_{\theta}(y_{t-1}|y_{t},x)=\mathcal{N}(y_{t-1};\mu_{\theta}(y_{t},x,t),\Sigma_{\theta}(y_{t},x,t)) \tag{2}\] The forward process takes data sampled from the real data distribution \(q(y_{0})\) and iteratively perturbs it by adding a small Gaussian noise according to a variance schedule \(\beta_{1},\dots,\beta_{T}\). The transition distribution is formulated as follows: \[q(y_{t}|y_{t-1})=\mathcal{N}(y_{t};\sqrt{1-\beta_{t}}y_{t-1},\beta_{t}\mathbf{I}) \tag{3}\] Letting \(\alpha_{t}=\prod_{s=1}^{t}(1-\beta_{s})\), we can compute the distribution of \(y_{t}\) given \(y_{0}\) directly via \[q(y_{t}|y_{0})=\mathcal{N}(y_{t};\sqrt{\alpha_{t}}y_{0},(1-\alpha_{t})\mathbf{I}) \tag{4}\] The conditional DDPM is trained by maximizing the Evidence Lower Bound. By applying the reparameterization trick, this is equivalent to minimizing the discrepancy between the noise added in the forward process and the noise removed during the reverse process. Therefore, the objective function at time step \(t\) is defined as follows: \[\mathcal{L}_{t}=\mathbb{E}_{y_{0}\sim q(y_{0}),\ \epsilon\sim\mathcal{N}(0,\mathbf{I})}\left\|\ \epsilon-\epsilon_{\theta}(y_{t},x,t)\ \right\|^{2} \tag{5}\] where \(t\) is sampled uniformly from the range \([1\dots T]\) and \(y_{0}\) is sampled from the real data distribution \(q(y_{0})\). In the context of our study, \(y_{0}\) is a sequence of frames captured within a cardiac cycle, \(y_{0}\in\mathbb{R}^{K\times C\times H\times W}\), where \(K\) represents the fixed number of selected frames in one video and \(C,H,W\) are the spatial dimensions of each frame. Moreover, each cycle is given an annotated semantic map of the first frame, \(x\in\mathbb{R}^{C\times H\times W}\), which will serve as the condition for our model. This semantic map has the same spatial dimensions as the original images. Thus, our goal is to learn a model that can generate realistic data from a given semantic structure. **Semantic Conditioned Diffusion Model.** Figure 2 shows an overview of our conditional denoising network architecture, which is based on the 3D U-Net proposed by Ho _et al._[15]. The denoising encoder receives the noisy image sequence and computes the feature representations. The decoder then uses these feature vectors and the injected semantic information to reconstruct the real images. Since our input is a sequence of frames, we use a stack of multiple 3D Residual Convolution Blocks as our encoder. In each block, 3D convolution layers are used to compute the feature representations. The time step information \(t\) is encoded by a cosine embedding and then added to the feature outputs of every block. Group normalization is then used to normalize those features. Furthermore, we use a spatial attention layer followed by a temporal attention layer in each block to allow the model to learn the spatial and temporal relationships between frames. In the decoder, each residual block is modified so that the condition information, which is the semantic map describing the structure of the heart, can be effectively injected.
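For concreteness, a minimal PyTorch sketch of one training step for the objective (5) is given below. This is our own illustration and not the authors' implementation: `eps_model` stands for any semantic-map-conditioned denoising network such as the 3D U-Net described above, and its exact interface is an assumption.

```python
import torch
import torch.nn.functional as F

def ddpm_training_loss(eps_model, y0, x, alphas_bar):
    """One conditional DDPM training step, cf. (4)-(5).

    eps_model  : network epsilon_theta(y_t, x, t) predicting the added noise,
                 conditioned on the semantic map x (placeholder interface).
    y0         : clean video batch of shape (B, K, C, H, W).
    x          : semantic maps of the ED frame, shape (B, C, H, W).
    alphas_bar : tensor of length T with the cumulative products prod_s (1 - beta_s).
    """
    B, T = y0.shape[0], alphas_bar.shape[0]
    t = torch.randint(0, T, (B,), device=y0.device)        # uniform time steps
    a_bar = alphas_bar[t].view(B, 1, 1, 1, 1)
    eps = torch.randn_like(y0)
    y_t = a_bar.sqrt() * y0 + (1.0 - a_bar).sqrt() * eps    # sample q(y_t | y_0), eq. (4)
    return F.mse_loss(eps_model(y_t, x, t), eps)            # eq. (5)
```

At sampling time, the same network is applied iteratively to transform Gaussian noise into a video, optionally with the classifier-free guidance described next.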
Saharia _et al._[16] showed that directly concatenate the condition information and noisy images as input does not fully leverage the semantic information. Whereas, Wang _et al._[17] demonstrated the effective of Spatial Adaptive Normalization (SPADE) for adding the semantic label map. The features were regulated by the SPADE in a learnable, spatially-adaptive manner. Therefore, we inject the semantic label map using SPADE layer over Group Normalization layer. Specifically, given a feature vector \(f_{i}\) of input images from a decoder block, we want to add the condition information \(x\), which is the semantic label map of the first frame. Since \(x\) does not initially match the size of the input images, it must be duplicated along the temporal axis. The normalization is formulated as follows: \[f^{i+1}=\gamma^{i}(x,k)\cdot\text{Norm}(f^{i})+\delta^{i}(x,k) \tag{6}\] where \(f^{i}\) and \(f^{i+1}\) are the input and output features of SPADE. \(\text{Norm}(\cdot)\) is parameter-free group normalization. The \(\gamma^{i}(x,k)\) and \(\delta^{i}(x,k)\) are the spatially-adaptive weight and bias learned from the semantic label map \(x\) and cosine embedding \(k\) of frame time. Since we only inject the label map of the first frame, \(k\) was added to provide the temporal information. Inspired by [18], we applied classifier free approach to train our model. Since it is showed that the gradient \(\nabla_{y}\text{log}p(x|y)\) of an extra classifier could improve samples from conditional diffusion models [19]. The key idea is to replace the semantic label map \(x\) with a null label \(\emptyset\) under certain probability. [18] showed that this technique implicitly infers the gradient of the log probability. The sampling procedure is obtained as following formula: \[\epsilon_{\theta}(y_{t}|x)=\epsilon_{\theta}(y_{t}|x)+s\cdot(\epsilon_{\theta }(y_{t}|x)-\epsilon_{\theta}(y_{t}|\emptyset)) \tag{7}\] In our implementation, \(\emptyset\) is a black image with all-zero elements. ## 3 Experiments **Dataset.** Our experiments were conducted on the CAMUS dataset [20]. There are 450 patients in this dataset, and each has three recorded chamber views. For simplicity, we only conducted experiments on 2 chamber view videos in our study. For each data sample, we have a video of a complete cardiac cycle, from the ED phase to ES phase. However, only the ED and ES frames have semantic map annotation, which were labeled by cardiologists. There are four classes on each segmentation map: background, epicardium, myocardium, and left atrium. To avoid data leakage, we split the dataset by patients using 80-10-10 ratio. As a result, the training set contains 360 patients, the validation and test set contains 45 patients, respectively. **Baselines.** Since our model was based on DDPMs, we used this model as the baseline. Besides, we also implemented the cascade diffusion architecture [21] to validate the efficacy of this technique for semantic conditional generation. Since this model have been shown to generate videos more efficiently, including unconditioned ultrasound generation [7]. In addition to these two primary architectures, we have implemented SPADE and concatenation as two condition features injection approaches. We validated our models using variety of number frames settings, including taking 16 or 24 frames. **Experimental Settings.** Every models were trained on a node with three NVIDIA A100 gpus. We set the batch size of 24 for three GPUs. 
We chose the total diffusion steps \(T=1000\), and the classifier-free guidance factor \(s=7.0\) was used. We used Adam optimizer with learning rate \(lr=1e-4\) in every training. Two UNet backbones were utilized for the cascade network architecture. One for low-resolution video synthesis, i.e. generate sequences with dimension of \(n_{frame}\times 56\times 56\). The second one is for super-resolution which converts output with spatial dimension of \(56\times 56\) into \(128\times 128\). We kept all of the settings the same to ensure a fair comparision. **Evaluation Metrics.** We assessed the models' performances using three metrics. Which are the Frechet Inception Distance (FID) [22], Frechet Video Distance (FVD) [23], and Structure Similarity Index (SSIM) score [24]. FID and FVD have been commonly used in many studies to measure generated images and videos quality. FID computes the distance between two distributions, one from generated images and the other from real images, whereas FVD does the same for videos. A lower score indicates higher quality in terms of visual fidelity, diversity, and temporal consistency of the generated videos. SSIM score is used for measuring the similarity between two images. In our study, we calculate this score by averaging SSIM between frames of generated videos with frames from ground truth videos, while both have the same segmentation map. Higher SSIM score indicates higher similarity between synthetic frames and the ground-truth frames. We generated totally 450 videos for the test set, with 10 videos for each segmentation map. **Results.** Table 1 presents the results of the methods on various metrics. Overall, using SPADE as the input to the denoising model instead of concatenating the segmentation map and ultrasound led to an improvement in the quality of ultrasound images from both a single image and a video perspective. For instance, in the case of DDPM, SPADE has FID of 16.05 as compared to concatenation's 21.46 and FVD of 115.79 as compared to concatenation's 144.79. When comparing cascade and DDPM, we noticed that DDPM produce better performance, with FVD of 89.78 for DDPM versus 214.45 for cascade. One explanation could be that the DDPM architecture uses convolution layers and attention layers for the entire input resolution, but the cascade model only performs temporal attention on downsampled versions of inputs. The cascade approach, however, is more effective in terms of processing and memory. Additionally, we found that the amount of frames had no noticeable effect on the model's performance. This could be as a result of the segmentation map being the same for all video frames. Finally, SSIM score are generally better when using SPADE than those using concatination. A visual comparison of different approaches is shown in Figure 3. In general, the images produced by our suggested method using SPADE have higher fidelity and closely resemble actual ultrasound images. More specifically, our model generates images with sharper edges and more realistic anatomical structures, especially in the region of endocardi and myocardium. While the original DDPMs with concatination produces images with blurry edges and artifacts, SPADE could produce images with perceivable speckle motion. Comparing DDPM and Cascade, we found that DDPM produces better images in terms of visual fidelity and speckle motion over time. 
While the images generated with SPADE have anatomical structures that are very similar to those of the ground truth, they still contain some artifacts and blurry speckles. We also recommend that a human review be conducted to better assess the quality of the generated videos, and that future work consider training the models on longer, higher-resolution videos showing different moments of the cardiac cycle. ## 4 Conclusion In this study, we demonstrate the first attempt to synthesize an echocardiography video using a diffusion model from a single semantic segmentation map. In order to effectively use the semantic information in the generation process, we proposed spatially-adaptive normalization to better incorporate the semantic maps into the denoising model. As a result, our model produces more realistic echocardiography videos that are consistent with the input segmentation maps in comparison with previous methods, as we show on the CAMUS dataset. We also examine the shortcomings of recent work and consider potential directions for future investigation. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Cond. & Model & \(K\) & FID\(\downarrow\) & FVD\(\downarrow\) & SSIM\(\uparrow\) \\ \hline \multirow{4}{*}{Concat} & Cascade & 16 & 42.93 & 278.35 & 0.47 \\ \cline{2-6} & & 24 & 50.93 & 310.05 & 0.51 \\ \cline{2-6} & DDPM & 16 & 28.56 & 137.73 & 0.55 \\ \cline{2-6} & & 24 & 21.46 & 144.79 & 0.53 \\ \hline \multirow{4}{*}{SPADE (ours)} & Cascade & 16 & 34.52 & 214.45 & 0.49 \\ \cline{2-6} & & 24 & 40.87 & 231.88 & 0.52 \\ \cline{2-6} & DDPM & 16 & 18.65 & **89.78** & **0.56** \\ \cline{1-1} \cline{2-6} & & 24 & **16.05** & 115.79 & 0.54 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison with existing methods on semantic echocardiography video synthesis. \(\uparrow\) indicates that higher is better, while \(\downarrow\) indicates that lower is better. Notably, our method achieves state-of-the-art performance on all metrics. Figure 3: Qualitative results on the CAMUS dataset. All models were conditioned on the same segmentation map of the ED frame. We selected frames every 4 time steps to show the temporal change within one video. More videos can be found at [https://tinyurl.com/5n8m6k92](https://tinyurl.com/5n8m6k92).
2302.07320
Policy gradient learning methods for stochastic control with exit time and applications to share repurchase pricing
We develop policy gradient methods for stochastic control with exit time in a model-free setting. We propose two types of algorithms, for learning either the optimal policy directly, or the value function (critic) and the optimal control (actor) alternately. The use of randomized policies is crucial for overcoming notably the issue related to the exit time in the gradient computation. We demonstrate the effectiveness of our approach by implementing our numerical schemes in the application to the problem of share repurchase pricing. Our results show that the proposed policy gradient methods outperform PDE or other neural network techniques in a model-based setting. Furthermore, our algorithms are flexible enough to incorporate realistic market conditions such as price impact or transaction costs.
Mohamed Hamdouche, Pierre Henry-Labordere, Huyen Pham
2023-02-14T20:04:19Z
http://arxiv.org/abs/2302.07320v1
# Policy gradient learning methods for stochastic control with exit time and applications to share repurchase pricing ###### Abstract We develop policy gradient methods for stochastic control with exit time in a model-free setting. We propose two types of algorithms, for learning either the optimal policy directly, or the value function (critic) and the optimal control (actor) alternately. The use of randomized policies is crucial for overcoming notably the issue related to the exit time in the gradient computation. We demonstrate the effectiveness of our approach by implementing our numerical schemes in the application to the problem of share repurchase pricing. Our results show that the proposed policy gradient methods outperform PDE or other neural network techniques in a model-based setting. Furthermore, our algorithms are flexible enough to incorporate realistic market conditions such as price impact or transaction costs. ## 1 Introduction Let us consider a controlled Markov state process \(X=(X^{\alpha}_{t})_{t}\) valued in \(\mathcal{X}\subset\mathbb{R}^{d}\) with a control process \(\alpha=(\alpha_{t})\) valued in \(A\subset\mathbb{R}^{m}\). Given an open set \(\mathcal{O}\) of \(\mathcal{X}\), we denote by \(\tau=\tau^{\alpha}\) the exit time of the domain \(\mathcal{O}\) before a terminal horizon \(T<\infty\), i.e., \[\tau=\ \inf\{t\geq 0:X_{t}\notin\mathcal{O}\}\wedge T,\] with the usual convention that \(\inf\emptyset=\infty\). The objective is then to maximize over the control process \(\alpha\) a criterion of the form \[J(\alpha)=\ \mathbb{E}\big{[}g(X^{\alpha}_{\tau})\big{]},\quad\to\quad V_{0} \ =\ \sup_{\alpha}J(\alpha), \tag{1.1}\] for some terminal reward function \(g\) on \(\mathbb{R}^{d}\). In typical examples, \(X\) is modelled by a controlled diffusion process as \[\mathrm{d}X_{t}=\ \mu(X_{t},\alpha_{t})\mathrm{d}t+\sigma(X_{t},\alpha_{t}) \mathrm{d}W_{t}, \tag{1.2}\] and we can also consider jump-diffusion processes, which is in particular relevant for insurance/reinsurance problems with minimization of the ruin probability in finite time. **Remark 1.1**.: _Notice that there is no loss of generality in focusing on the above Mayer form, as the case of a Bolza criterion with running reward:_ \[J(\alpha)=\ \mathbb{E}\big{[}\int_{0}^{\tau}f(X_{t}^{\alpha},\alpha_{t}) \mathrm{d}t+g(X_{\tau}^{\alpha})\big{]},\] _can be reduced to the Mayer form by considering as usual the additional component \((Y_{t})_{t}\) of the state process, driven by_ \[\mathrm{d}Y_{t}=\ f(X_{t},\alpha_{t})\mathrm{d}t,\] _and the corresponding terminal reward function \(\tilde{g}(x,y)=y+g(x)\)._ The control problem (1.1) with exit time can be solved in a model-based setting, e.g. when the coefficients \(\mu\), \(\sigma\) in (1.2) and the analytical form of \(g\) are known, by PDE methods with a splitting scheme as described in Appendix B, and possibly by backward SDE methods, see [4]. In recent years, a substantial literature has developed on the use of deep learning techniques for the numerical resolution of stochastic control problems and PDEs, which have notably shown success in overcoming the curse of dimensionality; we refer the reader to the recent surveys [3] and [6]. However, these methods do not work well for our class of control problems with exit time.
Indeed, for example, when trying to apply the global method of [9] by approximating the policy by a neural network with parameters \(\theta\), the differentiation of the associated gain function would lead to a Dirac function, due to the presence of an indicator function related to the exit time; hence the gradient is ill-posed, which prevents an efficient implementation of the stochastic gradient ascent algorithm. In this paper, we propose two types of algorithms based on reinforcement learning for estimating the solution to the control problem (1.1) in a model-free setting, i.e., without _a priori_ knowledge of the model coefficients. We develop policy gradient methods for learning an approximate optimal control and value function based on samples of states and rewards. A key feature is to consider parametrized randomized policies, notably for overcoming the issue related to the exit time in the policy gradient representation. Our first algorithm learns the optimal policy directly, while the second type of algorithm is of actor-critic nature and learns alternately the policy and the value function. This can be done either in an offline setting, with update rules based on the whole trajectories of the state, or in an online setting, with update rules applied incrementally in real time. Our algorithms can be viewed as extensions, to controlled processes with exit time, of policy gradient methods in reinforcement learning, which are usually designed for infinite or finite horizon problems, see [12]. The main application that we develop in this paper for stochastic control of the form (1.1) concerns the pricing of buyback options in Stock Repurchase Programs (in short SRPs). Those are defined as transactions initiated by companies to buy back their own stock for various reasons, including raising the debt-to-equity ratio or improving earnings per share by reducing the number of outstanding shares. SRPs are also an alternative way to distribute dividends to shareholders, see [11]. For more details about SRPs, their regulatory issues and associated tools, the reader can consult the report [1]. There exist several mechanisms for SRPs with complex contracts involving investment banks, where the company mandates a bank to repurchase its shares through a derivative product. A well-known example often used by practitioners is Accelerated Share Repurchases (ASRs), where at time \(t=0\) the bank borrows a quantity \(B\) of shares required by the company from shareholders, and then progressively purchases the quantity \(B\) from the open market to give it back to the shareholders. In addition, the bank takes a long position in an American option where, at some exercise time \(\tau\), the company pays the bank the average price between \(0\) and \(\tau\) for each share. The valuation of ASRs has recently attracted attention in the literature. Gueant et al. [8] consider the pricing of ASRs in a discrete time/space model, which leads to a tree-based algorithm. Jaimungal et al. [10] investigate the same problem in a continuous time/space setting, additionally taking into consideration temporary and long-term market impact, and characterize the execution frontier. Gueant et al. [7] use deep learning algorithms in the spirit of [5] and [2] for the pricing of ASR contracts and of a buyback contract called VWAP-minus profit-sharing. In such a contract, the exercise time \(\tau\) is chosen by the bank once the amount of shares requested by the company is redeemed.
In this paper, we consider a buyback contract where the exercise time \(\tau\) is entirely characterized by the execution strategy and cannot be chosen by any party. We shall call such a buyback product a Barrier VWAP-minus. Actually, one can show (see Appendix A) that in the absence of market impact, the price of the Barrier VWAP-minus is equal to the price of the VWAP-minus. The pricing of the Barrier VWAP-minus leads to a stochastic control formulation as in (1.1), where the exit time is defined as the first stopping time at which the controlled inventory exceeds the quantity of shares to be purchased by the bank within a finite time interval. We implement our algorithms for this pricing problem: since they are model-free, they are robust to model misspecifications, and are valid notably for general models of the stock price including market impact and transaction costs. We first compare our numerical results with those obtained by PDE methods with a splitting scheme as detailed in Appendix B. Our validation test consists in approximating the optimal policy and then computing the price using Monte Carlo: it then provides, by definition, a lower bound to the true price of the constrained VWAP-minus contract. We show that our model-free policy gradient algorithms yield accurate results, similar to PDE schemes designed in a specific model-based setting. They are also less costly and more stable than the methods implemented in [7] in a model-based setting, where the control and the stopping time are parametrized by two distinct neural networks. Moreover, they have the advantage of being easily implemented in general factor models including market impact. We notably illustrate the effect of market impact on the optimal trading policies. The rest of the paper is structured as follows. We develop in Section 2 the policy gradient approach with randomized policies, and present our two types of algorithms. Section 3 is devoted to the application to the valuation of SRPs, including the case with market impact and transaction costs, with numerical results illustrating the convergence and accuracy of our algorithms, and a comparison with other methods. ## 2 Policy gradient methods We consider a time discretization of the stochastic control problem (1.1). Let \(\mathbb{T}=\{t_{0}=0<\ldots<t_{i}<\ldots<t_{N}=T\}\) be a subdivision of \([0,T]\) of size \(N\) with time steps \(\Delta t_{i}=t_{i+1}-t_{i}\), \(i=0,\ldots,N-1\). By abuse of notation, we denote by \((X_{t_{i}})_{i\in\llbracket 0,N\rrbracket}\) the Markov decision process (MDP) arising from the time discretization of the controlled state process \((X_{t})_{t}\); it is characterized by an initial distribution \(p_{0}\) for \(X_{t_{0}}\), and by the transition kernel function \(p(.|t_{i},x_{i},a)\) representing the probability of the next state \(X_{t_{i+1}}\) given the current state \(X_{t_{i}}=x_{i}\in\mathcal{X}\) and an action \(a\in A\) at time \(t_{i}\). Notice that in a model-free setting, this transition kernel is unknown. A randomized policy in this discretized time setting is a measurable transition kernel function \(\pi:(t_{i},x_{i})\in\mathbb{T}\times\mathcal{X}\mapsto\pi(.|t_{i},x_{i})\in\mathcal{P}(A)\) (the set of probability measures on \(A\)), and we say that \(\alpha=(\alpha_{t_{i}})_{i\in\llbracket 0,N-1\rrbracket}\) is a randomized feedback control generated from the stochastic policy \(\pi\), written as \(\alpha\sim\pi\), when \(\alpha_{t_{i}}\) is drawn from \(\pi(.|t_{i},X_{t_{i}})\) at any time \(t_{i}\).
The exit time of the Markov decision process \((X_{t_{i}})_{i\in\llbracket 0,N\rrbracket}\) is given by \[\tau=\ \inf\{t_{i}\in\mathbb{T}:X_{t_{i}}\notin\mathcal{O}\}\wedge t_{N},\] and the gain functional associated to the Markov decision process with exit time and randomized feedback control \(\alpha\sim\pi\) is given by \[\mathrm{J}(\pi)=\mathbb{E}_{\alpha\sim\pi}\big{[}g(X_{\tau})\big{]}.\] Here the notation \(\mathbb{E}_{\alpha\sim\pi}[.]\) means that the expectation is taken when the Markov decision process \((X_{t_{i}})\) is controlled by the randomized feedback control \(\alpha\) generated from the stochastic policy \(\pi\). We now consider stochastic policies \(\pi=\pi_{\theta}\) with parameters \(\theta\in\mathbb{R}^{D}\), and which admit densities with respect to some measure \(\nu\) on \(A\): \(\pi_{\theta}(\mathrm{d}a|t_{i},x_{i})=\rho_{\theta}(t_{i},x_{i},a)\nu(\mathrm{ d}a)\), for some parametrized measurable functions \(\rho_{\theta}:\mathbb{T}\times\mathcal{X}\times A\to(0,\infty)\). * when \(A\) is a finite space, say \(A=\{a_{1},\ldots,a_{M}\}\), we take \(\nu\) as the counting measure, and choose softmax policies, i.e., \[\rho_{\theta}(t_{i},x_{i},a_{m})=\ \frac{\exp\big{(}\phi_{\theta_{m}}(t_{i},x_{i}) \big{)}}{\sum_{\ell=1}^{M}\exp\big{(}\phi_{\theta_{\ell}}(t_{i},x_{i})\big{)}},\quad m=1,\ldots,M,\] (2.1) where \(\phi_{\theta_{m}}\) are neural networks on \([0,T]\times\mathbb{R}^{d}\), and \(\theta=(\theta_{1},\ldots,\theta_{M})\) gathers all the parameters of the \(M\) neural networks. In this case, the score function is given by \[\nabla_{\theta_{\ell}}\log\rho_{\theta}(t_{i},x_{i},a_{m})=\ \big{(}\delta_{m\ell}-\rho_{ \theta}(t_{i},x_{i},a_{\ell})\big{)}\nabla_{\theta_{\ell}}\phi_{\theta_{\ell}} (t_{i},x_{i}).\] * when \(A\) is a continuous space of \(\mathbb{R}^{m}\), we can choose typically a Gaussian distribution on \(\mathbb{R}^{m}\) for the stochastic policy, with mean parametrized by neural network \(\mu_{\theta}(t,x)\) valued on \(A\), and variance a positive definite matrix \(\Sigma\) on \(\mathbb{R}^{m\times m}\) to encourage exploration, e.g. \(\Sigma=\varepsilon I_{m}\). In this case, \(\nu\) is the Lebesgue measure on \(\mathbb{R}^{m}\), and the density is \[\rho_{\theta}(t_{i},x_{i},a)=\ \frac{1}{(2\pi)^{m/2}\mathrm{det}(\Sigma)^{\frac{1}{2}}} \exp\big{(}-\frac{1}{2}\big{(}a-\mu_{\theta}(t_{i},x_{i})\big{)}^{\intercal} \Sigma^{-1}\big{(}a-\mu_{\theta}(t_{i},x_{i})\big{)}\big{)}.\] In this case, the score function is given by \[\nabla_{\theta}\log\rho_{\theta}(t_{i},x_{i},a)=\ \nabla_{\theta}\mu_{\theta}(t_{i},x_{i})^{ \intercal}\Sigma^{-1}(a-\mu_{\theta}(t_{i},x_{i})).\] We then denote, by abuse of notation, \(\mathrm{J}(\theta)=\mathrm{J}(\pi_{\theta})\), the performance function viewed as a function of the parameter \(\theta\) of the stochastic policy, and the principle of policy gradient method is to maximize over \(\theta\) this function by stochastic gradient ascent algorithm. In a model-free setting, the purpose is then to derive a suitable expectation representation of the gradient function \(\nabla_{\theta}\mathrm{J}(\theta)\) that does not involve unknown model coefficients and transition kernel \(p(.|t,x,a)\) of the state process, but only sample observations of the states \(X_{t_{i}}\), \(i=0,\ldots,N\), hence of the exit time \(\tau\), when taking decisions \(\alpha\sim\pi_{\theta}\), with known chosen family of densities \(\rho_{\theta}\). 
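As an illustration of the Gaussian parametrization above, the following PyTorch sketch defines a randomized policy with mean \(\mu_{\theta}(t,x)\) given by a small neural network and fixed covariance \(\Sigma=\varepsilon I_{m}\); differentiating the returned log-density with respect to \(\theta\) yields exactly the score \(\nabla_{\theta}\log\rho_{\theta}\) used in the gradient representations below. Network sizes and names are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class GaussianPolicy(nn.Module):
    """Randomized policy pi_theta(.|t, x) = N(mu_theta(t, x), eps * I_m)."""

    def __init__(self, state_dim, action_dim, eps=0.1, hidden=8):
        super().__init__()
        self.mu = nn.Sequential(          # mu_theta(t, x)
            nn.Linear(1 + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        self.eps = eps

    def distribution(self, t, x):
        tx = torch.cat([t, x], dim=-1)
        mean = self.mu(tx)
        scale = torch.full_like(mean, self.eps ** 0.5)
        return torch.distributions.Normal(mean, scale)

    def sample(self, t, x):
        # alpha ~ pi_theta(.|t, x); log_prob gives log rho_theta(t, x, alpha),
        # whose gradient in theta is the score used in the policy gradient.
        dist = self.distribution(t, x)
        action = dist.sample()
        return action, dist.log_prob(action).sum(dim=-1)
```

For a finite action set, the same interface can be obtained with a softmax head as in (2.1), using `torch.distributions.Categorical`.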
### Policy gradient representation Our first main result is to provide a stochastic policy gradient representation for the performance function \(\mathrm{J}\) by adapting arguments in the infinite or finite horizon case. **Theorem 2.1**.: _We have_ \[\nabla_{\theta}\mathrm{J}(\theta)=\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}g (X_{\tau})\sum_{i=0}^{N-1}\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}}, \alpha_{t_{i}})1_{t_{i}<\tau}\Big{]}. \tag{2.2}\] Proof.: For a path \((x_{0},\ldots,x_{N})\in\mathcal{X}^{N+1}\), we denote by \[\iota(x_{0},\ldots,x_{N})=\ \inf\{i\in\llbracket 0,N\rrbracket:x_{i}\notin \mathcal{O}\}\wedge N,\] so that the exit time of \((X_{t_{i}})_{i\in\llbracket 0,N\rrbracket}\) is written as \(\tau=t_{\iota(X_{t_{0}},\ldots,X_{t_{N}})}\). Let us then introduce the function \(G\) defined on \(\mathcal{X}^{N+1}\) by \(G(x_{0},\ldots,x_{N})=\ g(x_{\iota(x_{0},\ldots,x_{N})})\), so that \[\mathrm{J}(\theta)= \ \mathbb{E}_{\alpha\sim\pi_{\theta}}\big{[}G(X_{t_{0}},\ldots,X_{ t_{N}})\big{]}\] \[= \ \int_{\mathcal{X}^{N+1}}\int_{A^{N}}G(x_{0},\ldots,x_{N})p_{0}( \mathrm{d}x_{0})\prod_{i=0}^{N-1}\pi_{\theta}(\mathrm{d}a_{i}|t_{i},x_{i})p( \mathrm{d}x_{i+1}|t_{i},x_{i},a_{i})\] \[= \ \int_{\mathcal{X}^{N+1}}\int_{A^{N}}G(\mathbf{x})p_{0}(\mathrm{d}x _{0})\mathbf{\rho}_{\theta}^{N}(\mathbf{x},\mathbf{a})\prod_{i=0}^{N-1}p(\mathrm{d}x_{i+1 }|t_{i},x_{i},a_{i})\nu(\mathrm{d}a_{i}), \tag{2.3}\] where we set \(\mathbf{x}=(x_{0},\ldots,x_{N})\), \(\mathbf{a}=(a_{0},\ldots,a_{N-1})\), and \[\mathbf{\rho}_{\theta}^{N}(\mathbf{x},\mathbf{a})=\ \prod_{i=0}^{N-1}\rho_{\theta}(t_{i},x_{i},a _{i}).\] By using the classical log-likelihood trick: \(\nabla_{\theta}\boldsymbol{\rho}_{\theta}^{N}(\boldsymbol{x},\boldsymbol{a})= \big{(}\nabla_{\theta}\log\boldsymbol{\rho}_{\theta}^{N}(\boldsymbol{x}, \boldsymbol{a})\big{)}\boldsymbol{\rho}_{\theta}^{N}(\boldsymbol{x},\boldsymbol {a}),\) and noting that \[\nabla_{\theta}\log\boldsymbol{\rho}_{\theta}^{N}(\boldsymbol{x},\boldsymbol{a })=\ \sum_{i=0}^{N-1}\nabla_{\theta}\log\rho_{\theta}(t_{i},x_{i},a_{i}),\] we deduce by differentiating (2.3) that \[\nabla_{\theta}\mathrm{J}(\theta)= \ \int_{\mathcal{X}^{N+1}}\int_{A^{N}}G(\boldsymbol{x})\nabla_{ \theta}\log\boldsymbol{\rho}_{\theta}^{N}(\boldsymbol{x},\boldsymbol{a})p_{0} (\mathrm{d}x_{0})\prod_{i=0}^{N-1}\pi_{\theta}(\mathrm{d}a_{i}|t_{i},x_{i})p( \mathrm{d}x_{i+1}|t_{i},x_{i},a_{i})\] \[= \ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}G(X_{t_{0}},\ldots,X _{t_{N}})\sum_{i=0}^{N-1}\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}}, \alpha_{t_{i}})\Big{]}.\] Finally, observe that for any \(i\in\llbracket 0,N-1\rrbracket\), we have \[\mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}G(X_{t_{0}},\ldots,X_{t _{N}})1_{t_{i}\geq\tau}\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}},\alpha_ {t_{i}})\Big{]}\] \[= \ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}g(X_{\tau})1_{t_{i} \geq\tau}\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}},\alpha_{t_{i}}) \Big{]}\] \[= \ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}g(X_{\tau})1_{t_{i} \geq\tau}\underbrace{\nabla_{\theta}\Big{(}\int_{A}\rho_{\theta}(t_{i},X_{t_{i }},a)\nu(\mathrm{d}a)\Big{)}}_{=\ 0}\Big{]}\ =\ 0, \tag{2.4}\] which yields the required result. Alternately, we now provide a second representation formula for the gradient of the performance function by exploiting the dynamic programming. Let us introduce the dynamic version of \(\mathrm{J}\). 
For \(i\in\llbracket 0,N\rrbracket\), and \(x\in\mathcal{X}\), we define the value (performance) function associated to the policy \(\pi_{\theta}\) \[V_{i}^{\theta}(x):=\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\big{[}g(X_{\tau_{i}}) |X_{t_{i}}=x\big{]},\] where \(\tau_{i}=\inf\{t_{j}\in\mathbb{T},t_{j}\geq t_{i}:X_{t_{j}}\notin\mathcal{O} \}\wedge t_{N}\), so that \(\mathrm{J}(\theta)=\mathbb{E}[V_{0}^{\theta}(X_{0})]\). We notice that \(V_{N}^{\theta}(x)=g(x)\), for all \(x\in\mathcal{X}\), and \(V_{i}^{\theta}(x)=g(x)\), for all \(i\in\llbracket 0,N-1\rrbracket\), and \(x\notin\mathcal{O}\). Moreover, by the dynamic programming (which is here simply reduced to the law of conditional expectations), we have for \(i\in\llbracket 0,N-1\rrbracket\): \[V_{i}^{\theta}(x)=\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}V_{i+1}^{ \theta}(X_{t_{i+1}})|X_{t_{i}}=x\Big{]},\ \ \ \ \text{for}\ x\in\mathcal{O}. \tag{2.5}\] **Theorem 2.2**.: _We have_ \[\nabla_{\theta}\mathrm{J}(\theta)=\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[} \sum_{i=0}^{N-1}V_{i+1}^{\theta}(X_{t_{i+1}})\nabla_{\theta}\log\rho_{\theta}( t_{i},X_{t_{i}},\alpha_{t_{i}})1_{t_{i}<\tau}\Big{]}. \tag{2.6}\] Proof.: From (2.5), we have for \((i,x_{i})\in\llbracket 0,N-1\rrbracket\times\mathcal{O}\) \[V_{i}^{\theta}(x_{i})=\ \int_{\mathcal{X}}\int_{A}V_{i+1}^{\theta}(x_{i+1}) \rho_{\theta}(t_{i},x,a)\nu(\mathrm{d}a)p(\mathrm{d}x_{i+1}|t_{i},x_{i},a).\] By differentiating with respect to \(\theta\), and using again the log-likelihood trick, we get \[\nabla_{\theta}V_{i}^{\theta}(x_{i})= \ \int_{\mathcal{X}}\int_{A}\nabla_{\theta}\big{[}V_{i+1}^{\theta} (x_{i+1})\big{]}\rho_{\theta}(t_{i},x_{i},a)\nu(\mathrm{d}a)p(\mathrm{d}x_{i+ 1}|t_{i},x_{i},a)\] \[+\ \int_{\mathcal{X}}\int_{A}V_{i+1}^{\theta}(x_{i+1})\nabla_{ \theta}[\log\rho_{\theta}(t_{i},x_{i},a)]\rho_{\theta}(t_{i},x_{i},a)\nu( \mathrm{d}a)p(\mathrm{d}x_{i+1}|t_{i},x_{i},a)\] \[= \ \int_{\mathcal{O}}\int_{A}\nabla_{\theta}\big{[}V_{i+1}^{\theta} (x_{i+1})\big{]}\pi_{\theta}(\mathrm{d}a|t_{i},x_{i})p(\mathrm{d}x_{i+1}|t_{i},x_{i},a)\] \[+\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}V_{i+1}^{\theta} (X_{t_{i+1}})\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}},\alpha_{t_{i}}) |X_{t_{i}}=x_{i}\Big{]},\ i\in\llbracket 0,N-1\rrbracket,\] for all \(x_{i}\in\mathcal{O}\), by noting that \(\nabla_{\theta}V_{i+1}^{\theta}(x)=0\) for \(x\notin\mathcal{O}\), and \(V_{i+1}^{\theta}(x)=V_{i+1}^{\theta}(x)\) for \(x\in\mathcal{O}\). By iterating over \(i\), and noting that \(\nabla_{\theta}V_{N}^{\theta}(.)\equiv 0\), we deduce that for all \(x_{0}\in\mathcal{O}\) \[\nabla_{\theta}V_{0}^{\theta}(x_{0})=\ \mathbb{E}_{\alpha\sim\pi_{\theta}} \Big{[}\sum_{i=0}^{N-1}V_{i+1}^{\theta}(X_{t_{i+1}})\nabla_{\theta}\log\rho_{ \theta}(t_{i},X_{t_{i}},\alpha_{t_{i}})\prod_{j=1}^{i}1_{X_{t_{j}}\in\mathcal{ O}}\big{|}X_{t_{0}}=x_{0}\Big{]}\] Since \(\prod_{j=1}^{i}1_{X_{t_{j}}\in\mathcal{O}}=1_{t_{i}<\tau}\) and \(\nabla_{\theta}V_{0}^{\theta}(.)=0\) on \(\mathcal{X}\setminus\mathcal{O}\), we get the required representation formula. **Remark 2.3**.: _It is known that stochastic gradient policy algorithms suffer from high variance, and a good alternative is to use a baseline. 
For instance, in the representation (2.6), we can subtract from \(V_{i+1}^{\theta}(X_{t_{i+1}})\) the term \(V_{i}^{\theta}(X_{t_{i}})\) without biasing the gradient, i.e._ \[\nabla_{\theta}\mathrm{J}(\theta)=\mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}\sum_{i=0}^{N-1}\big{(}V_{i+1}^{\theta}(X_{t_{i+1}})-V_{i}^{\theta}(X_{t_{i}})\big{)}\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}},\alpha_{t_{i}})1_{t_{i}<\tau}\Big{]}, \tag{2.7}\] _by the same trick as in (2.4):_ \[\mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}V_{i}^{\theta}(X_{t_{i}})\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}},\alpha_{t_{i}})1_{t_{i}<\tau}\Big{]}\] \[=\mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}V_{i}^{\theta}(X_{t_{i}})1_{t_{i}<\tau}\underbrace{\nabla_{\theta}\Big{(}\int_{A}\rho_{\theta}(t_{i},X_{t_{i}},a)\nu(\mathrm{d}a)\Big{)}}_{=0}\Big{]}\ =\ 0.\] ### Algorithms We now propose policy gradient algorithms based on the representations of the previous section. They do not necessarily require knowledge of the model coefficients and of the transition kernel \(p(.|t,x,a)\) of the state process, but only sample observations of the states \(X_{t_{i}}\), \(i=0,\ldots,N\), obtained e.g. via an environment simulator (blackbox) when taking decisions \(\alpha\) according to the chosen family of randomized policies, hence also sample observations of the exit time \(\tau\). Nor do they require knowledge of the analytical form of the reward function \(g\): instead, we can consider that, given an input/observation of a state \(x\), the associated output/reward \(g(x)\) is evaluated via e.g. a blackbox simulator. Our first algorithm (see pseudo-code in Algorithm 1) is based on the gradient representation (2.2). ```
Input data: Number of episodes \(E\), mini-batch size \(K\), learning rate \(\eta\) for policy gradient estimation;
    Parametrized family of randomized policies \(\pi_{\theta}\) with densities \(\rho_{\theta}\);
Initialization: parameter \(\theta\);
for each episode \(e=1,\ldots,E\) do
    select a random path \(k=1,\ldots,K\);
    Initialize state \(X_{0}^{(k)}\in\mathcal{O}\);
    for \(i=0,\ldots,N-1\) do
        Generate action \(\alpha_{t_{i}}^{(k)}\sim\pi_{\theta}(.|t_{i},X_{t_{i}}^{(k)})\)
        Simulate by a model or observe (e.g. by blackbox) state \(X_{t_{i+1}}^{(k)}\)
        If \(X_{t_{i+1}}^{(k)}\notin\mathcal{O}\) or \(t_{i+1}=T\), store the exit time \(\tau^{(k)}=t_{i+1}\),
            compute or observe by blackbox \(G^{(k)}:=g(X_{\tau^{(k)}}^{(k)})\), and close the loop;
        Otherwise \(i\leftarrow i+1\);
    end
    Compute for path \(k\):
        \(\Gamma_{\theta}^{(k)}:=G^{(k)}\sum_{t_{i}<\tau^{(k)}}\nabla_{\theta}\log\rho_{\theta}(t_{i},X_{t_{i}}^{(k)},\alpha_{t_{i}}^{(k)})\)
    Update parameters of the policies: \(\theta\leftarrow\theta+\eta\frac{1}{K}\sum_{k=1}^{K}\Gamma_{\theta}^{(k)}\);
end
Return: \(\pi_{\theta}\)
``` **Algorithm 1** Stochastic gradient policy Our second type of algorithm is based on the gradient representation (2.7), and is of actor-critic type: it consists in estimating simultaneously, via fixed-point iterations, the randomized optimal policy (the actor) by policy gradient (PG), and the value function (critic) by performance evaluation relying on the martingale property relation (2.5). More precisely, in addition to the parametrized family \(\pi_{\theta}\) of randomized policies, we are given a family of functions \(\mathcal{V}_{\phi}\) on \([0,T]\times\mathcal{X}\), with parameter \(\phi\), e.g. neural networks, aiming to approximate the value function.
The parameters \((\theta,\phi)\) are then updated alternately as follows: given a current estimation \((\theta^{(n)},\phi^{(n)})\), the parameter \(\theta\) is updated according to the PG (2.7) by replacing \(V\) by \(\mathcal{V}_{\phi^{(n)}}\): \[\theta^{(n+1)}=\ \theta^{(n)}+\eta\mathbb{E}_{\alpha\sim\pi_{\theta^{(n)}}}\Big{[} \sum_{t_{i}<\tau}\big{(}\mathcal{V}_{\phi^{(n)}}(t_{i+1},X_{t_{i+1}})-\mathcal{ V}_{\phi^{(n)}}(t_{i},X_{t_{i}})\big{)}\nabla_{\theta}\log\rho_{\theta^{(n)}}(t_{i},X _{t_{i}},\alpha_{t_{i}})\Big{]}\] while \(\phi\) is updated by minimizing the square regression error: \[\mathbb{E}\Big{[}\Big{|}\mathcal{V}_{\phi^{(n)}}(t_{i+1},X_{t_{i+1}})-\mathcal{ V}_{\phi}(t_{i},X_{t_{i}})\Big{|}^{2}1_{X_{t_{i}}\in\mathcal{O}}\Big{]}.\] Notice that we only need to learn the value function on the domain \(\mathcal{O}\) by sampling the state process until the exit time \(\tau\), as it is extended on \(\mathcal{X}\setminus\mathcal{O}\) by the reward \(g\). The pseudo-code of our Actor-Critic algorithm is described in Algorithm 2. ``` Input data: Number of episodes \(E\), mini-batch size \(K\), learning rates \(\eta^{G}\), \(\eta^{V}\) for policy and value function estimation; Parametrized family \(\pi_{\theta}\) with densities \(\rho_{\theta}\) for randomized policies, and \(\mathcal{V}_{\phi}\) for value function; Initialization: parameter \(\theta\), \(\phi\); foreach episode \(e\) = \(1,\ldots,E\)do select a random path \(k\) = \(1,\ldots,K\); Initialize state \(X_{0}^{(k)}\in\mathcal{O}\); for\(i\) = \(0,\ldots,N-1\)do Generate action \(\alpha_{t_{i}}^{(k)}\)\(\sim\)\(\pi_{\theta}(.|t_{i},X_{t_{i}}^{(k)})\) Simulate by a model or observe (e.g. by blackbox) state \(X_{t_{i+1}}^{(k)}\) If \(X_{t_{i+1}}^{(k)}\notin\mathcal{O}\) or \(t_{i+1}\) = \(T\), set \(\tau^{(k)}\) = \(t_{i+1}\), \(\mathcal{V}_{\phi}(t_{i+1},X_{t_{i+1}}^{(k)})\) = \(g(X_{t_{i+1}}^{(k)})\) computed e.g. by blackbox, and close the loop; Otherwise \(i\)\(\leftarrow\)\(i+1\); end for Compute for path \(k\) \[\Gamma_{\theta}^{(k)} :=\sum_{t_{i}<\tau^{(k)}}\big{(}\mathcal{V}_{\phi}(t_{i+1},X_{t_{i+ 1}}^{(k)})-\mathcal{V}_{\phi}(t_{i},X_{t_{i}}^{(k)})\big{)}\nabla_{\theta}\log \rho_{\theta}(t_{i},X_{t_{i}}^{(k)},\alpha_{t_{i}}^{(k)})\] \[\Delta_{\phi}^{(k)} :=\sum_{t_{i}<\tau^{(k)}}\big{(}\mathcal{V}_{\phi}(t_{i+1},X_{t_{i+ 1}}^{(k)})-\mathcal{V}_{\phi}(t_{i},X_{t_{i}}^{(k)})\big{)}\nabla_{\phi} \mathcal{V}_{\phi}(t_{i},X_{t_{i}}^{(k)})\] Actor update: \(\theta\)\(\leftarrow\)\(\theta\)\(+\)\(\eta^{G}\)\(\frac{1}{K}\)\(\sum_{k=1}^{K}\Gamma_{\theta}^{(k)}\); Critic update: \(\phi\)\(\leftarrow\)\(\phi\)\(+\)\(\eta^{V}\)\(\frac{1}{K}\)\(\sum_{k=1}^{K}\Delta_{\phi}^{(k)}\); end for Return: \(\pi_{\theta}\), \(\mathcal{V}_{\phi}\). ``` **Algorithm 2**Actor-Critic (offline) In the above actor-critic algorithm, the parameters are updated once the whole state trajectories are sampled. We can design an online version where the parameters are updated in real-time incrementally, see pseudo-code in Algorithm 3. ## 3 Application to Share Repurchase Programs Pricing ### Problem formulation We consider a company/client with stock price \(S\). This client mandates a bank to buy a quantity \(B\) of shares of stock within a period \([0,T]\). 
At the early termination date \(\tau\), or at maturity \(T\) if no early termination has occurred, the client pays to the bank the Volume Weighted Average Price (in short VWAP) defined as \(V_{\tau}:=\frac{1}{\tau}\int_{0}^{\tau}S_{t}dt\), multiplied by the number of shares, i.e., the amount \(BV_{\tau}\). The bank gives the client the quantity \(B\) of shares, whose value at \(\tau\) is \(BS_{\tau}\). From the bank's perspective, it is equivalent to being long an option with payoff \(B(V_{\tau}-S_{\tau})\) at \(\tau\). If the bank fails to collect the quantity \(B\) before \(T\) for the company, it must pay a penalty to the client. For the sake of simplicity, we have not included interest rates, dividends and repo, although these can be easily incorporated. We denote by \((Q_{t})_{t\in[0,T]}\) the quantity of shares (inventory) held by the trader of the bank, governed by \[dQ_{t}=\alpha_{t}dt,\] where \(\alpha\) represents the trading speed, valued in \([0,\overline{a}]\), for some constant \(\overline{a}\in(0,\infty)\). The underlying stock price \(S\) is a continuous-time process, possibly controlled by \(\alpha\) in the presence of permanent market impact. The dynamics of the VWAP process \((V_{t})_{t}\) and of the cumulated cost process \((C_{t})_{t}\) are given by \[\mathrm{d}V_{t}=\ \Big{(}\frac{S_{t}-V_{t}}{t}\Big{)}\mathrm{d}t,\quad 0<t\leq T,\ V_{0}\ =\ S_{0},\quad\mathrm{d}C_{t}\ =\ \alpha_{t}S_{t}\mathrm{d}t,\ C_{0}=0.\] The profit and loss (PnL) of the bank at execution time \(\tau\leq T\) is then given by \[\mathrm{PnL}_{\tau}^{\alpha}=B(V_{\tau}-S_{\tau})-\lambda(B-Q_{\tau})_{+}-\beta BC_{\tau},\] where \(\lambda>0\) is a penalization parameter, effective when \(\tau=T\) and \(Q_{T}<B\), and \(\beta\geq 0\) is a transaction cost parameter. The price of the Barrier VWAP-minus contract is determined by the following stochastic control problem \[P_{BV}:=\sup_{\alpha\in\mathcal{A}}\ \mathbb{E}\big{[}\mathrm{PnL}_{\tau^{\alpha}}^{\alpha}\big{]},\] where \(\mathcal{A}\) is the set of admissible trading strategies, and \(\tau^{\alpha}:=\inf\{t>0\mid Q_{t}\geq B\}\wedge T\) is the early termination time of the contract, defined as the first time when the inventory exceeds the required quantity \(B\) of shares. This fits into the form (1.1) with state variables \(X=(S,V,Q,C)\). **Remark 3.1**.: _In this context, the price of the ASR is given by_ \[P_{ASR}:=\ \sup_{\alpha\in\mathcal{A}}\sup_{\bar{\tau}\in\mathcal{T}_{0,T}}\mathbb{E}\big{[}\mathrm{PnL}_{\bar{\tau}}^{\alpha}\big{]},\] _while the price of the VWAP-minus contract as considered in [7] is given by_ \[P_{V}:=\ \sup_{\alpha\in\mathcal{A}}\sup_{\bar{\tau}\in\mathcal{T}_{\tau^{\alpha},T}}\mathbb{E}\big{[}\mathrm{PnL}_{\bar{\tau}}^{\alpha}\big{]},\] _where \(\mathcal{T}_{t,T}\) is the set of stopping times valued in \([t,T]\). The prices of these contracts have been computed in [7] by using two distinct neural networks for approximating the policy \(\alpha\) and the stopping time \(\bar{\tau}\), and by definition, we should have \(P_{ASR}\geq P_{V}\geq P_{BV}\). Actually, one can show that \(P_{V}=P_{BV}\) in the absence of market impact and transaction costs, see Appendix A.
In other words, the pricing problem for the VWAP-minus can be reduced to a stochastic control with exit time, and there is no need to consider an additional optimization over stopping times \(\bar{\tau}\), which is quite advantageous from a numerical point of view._ The algorithm proposed in [7] considers two neural networks: \(p_{\theta}\) for the randomized stopping time and \(\mathrm{a}_{\xi}\) for trading rate to estimate the optimal strategy leading to \(P_{V}\). The optimisation is performed by a stochastic gradient ascent with the loss function \[\mathcal{L}(\theta,\xi)=\mathbb{E}\Big{[}\sum_{i=0}^{N-1}\prod_{j=0}^{i-1} \big{(}1-p_{\theta}(t_{j},X_{t_{j}})\big{)}\,p_{\theta}(t_{i},X_{t_{i}}) \mathrm{PnL}_{\mathrm{t_{i}}}+\prod_{j=0}^{N-1}\big{(}1-p_{\theta}(t_{j},X_{t _{j}})\big{)}\,\mathrm{PnL}_{\mathrm{t_{N}}}\Big{]}.\] Here \(\prod_{j=0}^{i-1}\left(1-p_{\theta}(t_{j},X_{t_{j}})\right)p_{\theta}(t_{i},X_{t_ {i}})\) represents the probability to exercise at \(t_{i}\), for a given path of the state variables. For the profit and loss PnL, \((B-Q_{t_{i}})^{+}\) is replaced by \(|B-Q_{t_{i}}|\) to prevent the agent from buying once the barrier is reached. Notice that the computation of the gradient of \(\mathcal{L}\) with respect to \(\theta\) and \(\xi\) is extremely costly. Furthermore, the numerical experiments show highly unstable results. Instead, our policy gradient algorithms is less costly and show stable results. ### Numerical results For the numerical results and comparison with other methods, we consider a price process with linear permanent price impact, governed by \[\mathrm{d}S_{t}=\ S_{t}\big{(}\gamma\alpha_{t}\mathrm{d}t+\sigma\mathrm{d}W_{ t}\big{)},\quad 0\leq t\leq T,\] where \(\gamma\geq 0\) is a constant market impact parameter. The value function \(P(t,x)\) with \(t\)\(\in[0,T]\), \(x=(s,v,q,c)\in\mathbb{R}_{+}^{*}\times\mathbb{R}_{+}^{*}\times\mathbb{R}_{+} \times\mathbb{R}_{+}\), is solution to the Bellman equation: \[\partial_{t}P+\overline{a}\big{(}\gamma s\partial_{s}P+s\partial _{c}P+\partial_{q}P\big{)}^{+} \tag{3.1}\] \[+\ \frac{s-v}{t}\partial_{v}P+\frac{1}{2}\sigma^{2}s^{2}\partial_{ s}^{2}P\ =0,\quad t\in(0,T),(s,v,q,c)\in\mathbb{R}_{+}^{*}\times\mathbb{R}_{+}^{*}\times[0, B)\times\mathbb{R}_{+},\] with the boundary conditions: \[\left\{\begin{array}{rcl}P(t,x)&=&B(v-s)-\beta Bc,\quad t\in[0,T],(s,v,q,c) \in\mathbb{R}_{+}^{*}\times\mathbb{R}_{+}^{*}\times[B,\infty)\times\mathbb{R} _{+},\\ P(T,x)&=&B(v-s)-\lambda(B-q)_{+}-\beta Bc,\quad(s,v,q,c)\in\mathbb{R}_{+}^{*} \times\mathbb{R}_{+}^{*}\times\mathbb{R}_{+}\times\mathbb{R}_{+}.\end{array}\right.\] Notice that the optimal feedback control is of bang-bang type, namely: \[\hat{a}(t,x)=\ \left\{\begin{array}{ll}0&\mbox{if}\ \ \gamma s\partial_{s}P+s \partial_{c}P+\partial_{q}P\leq 0,\\ \overline{a}&\mbox{otherwise},\end{array}\right.\] and therefore, we shall consider a softmax randomized policy as in (2.1) with two possible values in \(\{0,\overline{a}\}\). For numerical experiments of our algorithms to the pricing of Barrier VWAP-minus, we neglect transaction costs \(\beta=0\), and take the following parameters: \(T=60\) days, \(S_{0}=1\), \(B=1\), and \(\overline{a}\) ranging from \(5.04\) to \(25.2\), \(\lambda=5\), \(\Delta t=1/252\), number of Monte-Carlo simulations: \(N_{\mathrm{MC}}=10^{5}\). 
For the architecture of the neural networks for the randomized policies and for the value function (in the actor-critic AC algorithm), we used neural networks with 2 hidden layers of dimension 8 (linear output, and ReLU as intermediate activation function). The SGD is an Adam algorithm with standard hyper-parameters, with a mini-batch size of 64 for SGP and 32 for AC. We first compute the price \(P_{BV}\times 10^{4}\) in the absence of market impact (\(\gamma=0\)), and compare with the results obtained by the HJB solver (see Appendix B). We fix \(\sigma=0.2\), vary the maximal trading rate \(\overline{a}\), and display the associated prices in Figure 1. By construction, as we compute the expectation for a sub-optimal control, we obtain a lower bound. In particular, note that, as the underlying price process is a martingale, using a constant control we get \(\mathbf{0}\) bp. The convergence graph, in terms of the number of episodes of the algorithm, for two pairs of parameters \((\overline{a},\sigma)\) is reported in Figure 2. The two algorithms (SGP and AC) produce results that are similar, in terms of price, to those obtained using the splitting scheme. Furthermore, the execution time of these algorithms is also found to be comparable to that of the HJB solver, with both methods taking about two minutes to converge, indicating that they are computationally efficient and capable of solving the problem in a timely manner. However, when the number of state variables increases, the PDE method becomes computationally very costly in comparison to our proposed methods. This means that for problems involving a large number of state variables, our method becomes the only viable option. Overall, the results of this study demonstrate that our proposed algorithms are a reliable and cost-effective alternative to the PDE method for solving this class of problems. Figure 1: \(P_{BV}\times 10^{4}\) in absence of market impact and transaction costs for different values of \(\overline{a}\) computed with stochastic gradient policy and actor critic compared to splitting scheme (HJB solver). Figure 2: Convergence as a function of iterations for \(P_{BV}\times 10^{4}\) (without market impact and transaction costs) for \(\overline{a}=36.5,\sigma=0.2\) (left) and \(\overline{a}=9,\sigma=0.25\) (right). Next, we display the surface of the optimal randomized policy for a fixed spot price \(S\), for two different values of \(t\) (\(t=T/2\) and \(t\) near maturity \(T\)), as a function of the VWAP and the inventory. Figure 3 shows the results in the absence of market impact, while Figure 4 considers the case with market impact. We observe that when we are close to maturity, the probability of choosing the maximal trading rate is equal to one for almost all states of the VWAP and inventory, with or without market impact: this is due to the fact that the trader has to achieve the goal of repurchasing the requested quantity of shares, as he would be penalized otherwise. In the middle of the program, the optimal policy consists in choosing the maximal trading rate only when the VWAP is larger than some threshold, say \(V^{*}\), as the trader has enough time to complete his repurchasing goal. In the absence of market impact, this threshold \(V^{*}\) is approximately equal to the spot price, while in the presence of market impact, this threshold decreases with the market impact and also with the inventory.
In other words, the trader will buy more quickly some fraction of the total shares \(B\) as the market impact is more penalizing when approaching maturity. Finally, we represent the evolution of the optimal inventory for two price realizations, in the case without market impact (see Figure 5) and with market impact (see Figure 6) The trader starts by purchasing some fraction of the total shares \(B\) (and this is done more quickly and with a higher fraction in presence of market impact), then do not trade for a while until the time when the spot price falls below the VWAP, where he purchases the remaining shared to complete the buy-back programme. Figure 5: Optimal repurchase strategy evolution for two price realizations (\(\sigma=0.2\)) in absence of market impact and transaction costs. ## Appendix A Barrier VWAP-minus vs VWAP-minus Given a trading strategy \(\alpha\in\mathcal{A}\), valued in \(A=[0,\overline{a}]\), we denote by \(\tau^{\alpha}\) the first time when the inventory \(Q_{t}^{\alpha}=\int_{0}^{t}\alpha_{s}ds\) reaches \(B\), and we consider the price of the VWAP-minus and Barrier VWAP-minus given by \[P_{V}=\,\sup_{\alpha\in\mathcal{A}}\sup_{\bar{\tau}\in\mathcal{T} ^{\alpha},T}\mathbb{E}\big{[}\mathrm{PNL}_{\bar{\tau}}^{\alpha}\big{]}\quad P_ {BV}\;=\;\sup_{\alpha\in\mathcal{A}}\mathbb{E}\big{[}\mathrm{PNL}_{\tau^{ \alpha}}^{\alpha}\big{]},\] where the \(\mathrm{PNL}\), in absence of transaction costs, is given by \[\mathrm{PNL}_{t}^{\alpha}=\;B\Big{(}\frac{1}{t}\int_{0}^{t}S_{s} ds-S_{t}\Big{)}-\lambda(B-Q_{t}^{\alpha})_{+},\quad 0\leq t\leq T.\] The price process \(S\) is a general continuous semimartingale process without market impact, and satisfying \[\mathbb{E}\big{[}\max_{t\in[0,T]}|S_{t}|\big{]}<\;\infty.\] (A.1) Notice that by Doob's inequality, such condition (A.1) is satisfied whenever the drift and the volatility of the asset price \(S\) are bounded. **Proposition A.1**.: _Under (A.1), and in absence of market impact and transaction costs, we have \(P_{BV}=P_{V}\)._ Proof.: Fix some arbitrary \(\alpha\in\mathcal{A}\), and \(\bar{\tau}\in\mathcal{T}_{\tau^{\alpha},T}\). For \(\varepsilon>0\), denote by \(\tau^{\alpha}_{\varepsilon}=\inf\{t\geq 0:Q_{t}^{\alpha}=B-\varepsilon\}\wedge T\), which is smaller than \(\tau^{\alpha}\), and converges a.s. to \(\tau^{\alpha}\) when \(\varepsilon\) goes to Figure 6: Optimal repurchase strategy evolution for two price realizations (\(\sigma=0.2\)) with market impact (\(\gamma=0.1\)) zero. Let us then define trading strategy \(\alpha^{\varepsilon}\in\mathcal{A}\) by \[\alpha^{\varepsilon}_{t}=\ \left\{\begin{array}{ll}\alpha_{t}&\mbox{ for }\ 0\leq t\leq\tau^{\alpha}_{\varepsilon}\\ 0&\mbox{ for }\ \tau^{\alpha}_{\varepsilon}<t\leq\bar{\tau}\\ \overline{a}&\mbox{ for }\ \bar{\tau}<t\leq T,\end{array}\right.\] which leads to an associated inventory \(Q^{\alpha^{\varepsilon}}\) given by \[Q^{\alpha^{\varepsilon}}_{t}=\ \left\{\begin{array}{cl}Q^{\alpha}_{t}&\mbox{ for }\ 0\leq t\leq\tau^{\alpha}_{\varepsilon}\\ B-\varepsilon&\mbox{ for }\ \tau^{\alpha}_{\varepsilon}<t\leq\bar{\tau}\\ B-\varepsilon+\overline{a}(t-\bar{\tau})&\mbox{ for }\ \bar{\tau}<t\leq T.\end{array}\right.\] Notice that \(\tau^{\alpha^{\varepsilon}}\) (the first time when \(Q^{\alpha^{\varepsilon}}\) reaches \(B\)) is lower-bounded by \(\bar{\tau}\), decreases with \(\varepsilon\), and converges a.s. to \(\bar{\tau}\) when \(\varepsilon\) goes to zero. 
By definition, we have \(P_{BV}\geq\mathbb{E}\big{[}\mathrm{PNL}^{\alpha^{\varepsilon}_{\varepsilon}} \big{]}\). Let us check that \(\mathrm{PNL}^{\alpha^{\varepsilon}}_{\tau^{\alpha^{\varepsilon}}}\) converges a.s. to \(\mathrm{PNL}^{\alpha}_{\bar{\tau}}\) when \(\varepsilon\) goes to zero. We distinguish two cases: * If \(\tau^{\alpha}<T\). Then, \(Q^{\alpha}_{\tau^{\alpha}}=B\leq Q^{\alpha}_{\bar{\tau}}\), and \(Q^{\alpha^{\varepsilon}}_{\tau^{\alpha^{\varepsilon}}}=B-\varepsilon\,+\, \overline{a}(\tau^{\alpha^{\varepsilon}}-\bar{\tau})\) converges to \(B\) when \(\varepsilon\) goes to zero. It follows that \[\mathrm{PNL}^{\alpha^{\varepsilon}}_{\tau^{\alpha^{\varepsilon}}} =\ B\Big{(}\frac{1}{\tau^{\alpha^{\varepsilon}}}\int_{0}^{\tau^{ \alpha^{\varepsilon}}}S_{s}ds-S_{\tau^{\alpha^{\varepsilon}}}\Big{)}-\lambda( B-Q^{\alpha^{\varepsilon}}_{\tau^{\alpha^{\varepsilon}}})_{+}\] \[\to\ B\Big{(}\frac{1}{\bar{\tau}}\int_{0}^{\bar{\tau}}S_{s}ds-S_{ \bar{\tau}}\Big{)}\ =\ \mathrm{PNL}^{\alpha}_{\bar{\tau}},\] as \(\varepsilon\) goes to zero. * If \(\tau^{\alpha}=T\). Then \(\bar{\tau}=T=\tau^{\alpha^{\varepsilon}}\), and \(\alpha^{\varepsilon}_{t}\) converges to \(\alpha_{t}\), for \(0\leq t<T\), when \(\varepsilon\) goes to zero. It follows that \(Q^{\alpha^{\varepsilon}}_{T}\) converges to \(Q^{\alpha}_{T}\). Therefore, \[\mathrm{PNL}^{\alpha^{\varepsilon}}_{\tau^{\alpha^{\varepsilon}}} =\ B\Big{(}\frac{1}{T}\int_{0}^{T}S_{s}ds-S_{T}\Big{)}-\lambda(B-Q ^{\alpha^{\varepsilon}}_{T})_{+}\] \[\to\ B\Big{(}\frac{1}{T}\int_{0}^{T}S_{s}ds-S_{T}\Big{)}-\lambda( B-Q^{\alpha}_{T})_{+}\ =\ \mathrm{PNL}^{\alpha}_{\bar{\tau}},\] as \(\varepsilon\) goes to zero. Moreover, by noting that \(\big{|}\mathrm{PNL}^{\alpha^{\varepsilon}}_{\tau^{\alpha^{\varepsilon}}}\big{|} \leq B(2\max_{t\in[0,T]}|S_{t}|+\lambda)\), and under (A.1), we can apply dominated convergence theorem to deduce that \[\mathbb{E}\big{[}\mathrm{PNL}^{\alpha^{\varepsilon}}_{\tau^{\alpha^{ \varepsilon}}}\big{]}\to\ \mathbb{E}\big{[}\mathrm{PNL}^{\alpha}_{\bar{\tau}}\big{]},\quad\mbox{ when }\ \varepsilon\mbox{ goes to zero},\] and so \(P_{BV}\geq\mathbb{E}\big{[}\mathrm{PNL}^{\alpha}_{\bar{\tau}}\big{]}\). Since this holds true for any \(\alpha\in\mathcal{A}\), and \(\bar{\tau}\in\mathcal{T}_{\tau^{\alpha},T}\), we conclude that \(P_{BV}\geq P_{V}\), hence the equality since it is clear that \(P_{V}\geq P_{BV}\). PDE Implementation by splitting scheme We solve the Bellman (HJB) equation (3.1) by backward induction. We know \(P\) at \(T\) (_Terminal condition_). Now, we assume that we know \(P\) at \(t\) and we want to compute \(P\) at a previous date \(t-\Delta t\). We use the approximation: \[\overline{a}1_{\{(\gamma s\partial_{s}+s\partial_{c}+\partial_{q})P(t-\Delta t,x)\geq 0\}}\approx\overline{a}1_{\{(\gamma s\partial_{s}+s\partial_{c}+ \partial_{q})P(t,x)\geq 0\}}:=\tilde{a}^{*}(t,x)\] for all \(x=(s,v,q,c)\in\mathcal{O}=\mathbb{R}_{+}^{*}\times\mathbb{R}_{+}^{*}\times(0, B)\times\mathbb{R}_{+}\). The HJB equation becomes \[\partial_{t}P_{|_{\mathcal{O}}}+\mathcal{L}P_{|_{\mathcal{O}}}+\mathcal{D}P_{| _{\mathcal{O}}}=0\] (B.1) where \(P_{|_{\mathcal{O}}}\) is the restriction of \(P\) to \(\mathcal{O}\), \(\mathcal{L}\) is a diffusion operator and \(\mathcal{D}\) is a transport operator defined over \(\mathcal{O}\) as \[\mathcal{L}\cdot =\frac{1}{2}\sigma^{2}s^{2}\partial_{ss}^{2}\cdot\] \[\mathcal{D}\cdot =\frac{s-v}{t}\partial_{v}\cdot-\tilde{a}^{*}(t,x)\partial_{q}\cdot\] where \(x=(s,v,q,c)\in\mathcal{O}\). 
One can verify that \(\mathcal{L}\), \(\mathcal{D}\) and \(\mathcal{L}+\mathcal{D}\) generate a \(C^{0}\) semi-groups, thus, the solution of (B.1) at \(t-\Delta t\) can be represented as \[P(t-\Delta t,x)=e^{\Delta t(\mathcal{L}+\mathcal{D})}P(t,x)\] where \(e^{\Delta t(\mathcal{L}+\mathcal{D})}\) denotes the semi-group associated to the parabolic linear PDE (B.1). A first order approximation of the solution operator is obtained using Baker-Campbell-Hausdorff formula and Lie-Trotter splitting (see [13]) \[e^{\Delta t(\mathcal{L}+\mathcal{D})}P(t,x)=e^{\Delta t\mathcal{D}}e^{\Delta t \mathcal{L}}P(t,x)+O(\Delta t)\] (B.2) One can also use Strang splitting \(e^{\frac{\Delta t}{2}\mathcal{D}}e^{\Delta t\mathcal{L}}e^{\frac{\Delta t}{2} \mathcal{D}}\) to get a second order approximation. The splitting (B.2) corresponds to solving the parabolic PDE first with generator \(\mathcal{L}\) and then the first-order transport PDE corresponding to the operator \(\mathcal{D}\). By using the method of characteristics, the solution corresponding to \(\mathcal{D}\) is explicitly given by \[e^{\Delta t\mathcal{D}}Q(t,x)\ =\ Q(t,s,v+\frac{s-v}{t}\Delta t,q+\tilde{a}^{*}(t,x )\Delta t,c+\tilde{a}^{*}(t,x)s\Delta t)\] where \(x=(s,v,q,c)\in\mathcal{O}\) and \(Q(t,x)=e^{\Delta t\mathcal{L}}P(t,x)\). Finally, we extend \(P(t,\cdot)\) to \(\mathbb{R}_{+}^{*}\times\mathbb{R}_{+}^{*}\times\mathbb{R}\times\mathbb{R}\) using boundary conditions.
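To complement the pseudo-code of Algorithm 1, the following self-contained Python sketch runs the stochastic gradient policy estimator of Theorem 2.1 on a toy exit-time problem: a controlled Gaussian random walk stopped on leaving \((-1,1)\), with reward \(g(x)=-x^{2}\). The dynamics, the reward, the use of Adam in place of the plain gradient-ascent update, and all names are illustrative choices of ours, not the share-repurchase model of Section 3.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy exit-time problem (for illustration only):
# X_{t_{i+1}} = X_{t_i} + alpha_i * dt + sigma * sqrt(dt) * N(0, 1),
# domain O = (-1, 1), terminal reward g(x) = -x^2 collected at the exit time (or at t_N).
N, dt, sigma = 20, 0.05, 0.5
K, EPISODES, LR, EPS = 64, 2000, 1e-2, 0.05

policy_mean = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))  # mu_theta(t, x)
opt = torch.optim.Adam(policy_mean.parameters(), lr=LR)


def g(x):  # reward, observed as a blackbox in the model-free setting
    return -x ** 2


for episode in range(EPISODES):
    x = torch.zeros(K, 1)              # K paths started inside O
    alive = torch.ones(K, 1)           # equals 1 while t_i < tau
    log_probs = torch.zeros(K, 1)      # sum of log rho_theta over steps before exit
    reward = torch.zeros(K, 1)
    for i in range(N):
        t = torch.full((K, 1), i * dt)
        mean = policy_mean(torch.cat([t, x], dim=-1))
        dist = torch.distributions.Normal(mean, EPS ** 0.5)
        a = dist.sample()
        log_probs = log_probs + alive * dist.log_prob(a)
        x = x + alive * (a * dt + sigma * dt ** 0.5 * torch.randn(K, 1))
        hit = (x.abs() >= 1.0).float()
        if i == N - 1:                 # force exit at the terminal date t_N
            hit = torch.ones_like(hit)
        exited_now = hit * alive
        reward = reward + exited_now * g(x)
        alive = alive * (1.0 - hit)
    # Monte-Carlo estimate of the gradient (2.2): ascent on J = descent on this loss.
    loss = -(reward.detach() * log_probs).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```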
2307.12695
Propagation of a carbon price in a credit portfolio through macroeconomic factors
We study how the climate transition through a low-carbon economy, implemented by carbon pricing, propagates in a credit portfolio and precisely describe how carbon price dynamics affects credit risk measures such as probability of default, expected and unexpected losses. We adapt a stochastic multisectoral model to take into account the greenhouse gases (GHG) emissions costs of both sectoral firms' production and consumption, as well as sectoral household's consumption. GHG emissions costs are the product of carbon prices, provided by the NGFS transition scenarios, and of GHG emissions. For each sector, our model yields the sensitivity of firms' production and households' consumption to carbon price and the relationships between sectors. It allows us to analyze the short-term effects of the carbon price as opposed to standard IAM (such as REMIND), which are deterministic and only capture long-term trends. Finally, we use a DCF methodology to compute firms' values which we then combine with a structural credit risk model to describe how the carbon price impacts credit risk measures. We obtain that the carbon price distorts the distribution of the firm's value, increases banking fees charged to clients (materialized by the bank provisions), and reduces banks' profitability (translated by the economic capital). In addition, the randomness we introduce provides extra flexibility to take into account uncertainties on the productivity and on the different transition scenarios. We also compute the sensitivities of the credit risk measures with respect to changes in the carbon price, yielding further criteria for a more accurate assessment of climate transition risk in a credit portfolio. This work provides a preliminary methodology to calculate the evolution of credit risk measures of a credit portfolio, starting from a given climate transition scenario described by a carbon price.
Géraldine Bouveret, Jean-François Chassagneux, Smail Ibbou, Antoine Jacquier, Lionel Sopgoui
2023-07-24T11:21:06Z
http://arxiv.org/abs/2307.12695v3
# Propagation of carbon taxes in a credit portfolio through macroeconomic factors ###### Abstract We study how the introduction of carbon taxes in a closed economy propagates in a credit portfolio, and precisely describe how carbon tax dynamics affects the firm value and credit risk measures such as the probability of default and the expected and unexpected losses. We adapt a stochastic multisectoral model to take into account carbon taxes on both sectoral firms' production and sectoral households' consumption. Carbon taxes are calibrated on carbon prices, provided by the NGFS transition scenarios, as well as on sectoral households' consumption and firms' production, together with their related greenhouse gas emissions. For each sector, this yields the sensitivity of firms' production and households' consumption to carbon taxes and the relationships between sectors. Our model allows us to analyze the short-term effects of carbon taxes, as opposed to standard Integrated Assessment Models (such as REMIND), which are not only deterministic but also only capture long-term trends of climate transition policy. Finally, we use a Discounted Cash Flows methodology to compute firms' values, which we then use in the Merton model to describe how the introduction of carbon taxes impacts credit risk measures. We obtain that the introduction of carbon taxes distorts the distribution of the firm's value, increases banking fees charged to clients (materialized by the level of provisions computed from the expected loss), and reduces banks' profitability (translated by the value of the economic capital calculated from the unexpected loss). In addition, the randomness introduced in our model provides extra flexibility to take into account uncertainties on productivity and on the different transition scenarios by sector. We also compute the sensitivities of the credit risk measures with respect to changes in the carbon taxes, yielding further criteria for a more accurate assessment of climate transition risk in a credit portfolio. This work provides a preliminary methodology to calculate the evolution of the credit risk measures of a multisectoral credit portfolio, starting from a given climate transition scenario described by carbon taxes. keywords: Credit risk, Climate risk, Merton model, Macroeconomic modelling, Transition risk, Carbon tax, Firm valuation, Stochastic modeling Footnote †: This research is part of the PhD thesis in Mathematical Finance of Lionel Sopgoui, whose work is funded by a CIFRE grant from BPCE S.A. The opinions expressed in this research are those of the authors and are not meant to represent the opinions or official positions of BPCE S.A. ## Introduction Climate change has had, and will continue to have, a deep impact on human societies and their environments. In order to assess and mitigate the associated risks, many summits have been organized in recent decades, resulting in agreements signed by a large majority of countries around the globe. These include the Kyoto Protocol in 1997, the Copenhagen Accord in 2009 and the Paris Agreement in 2015, all of them setting rules for the transition to a low-carbon economy. Climate risk has two components. The first one is physical risk, which relates to the potential economic and financial losses arising from climate-related hazards, both acute (e.g., droughts, floods, and storms) and chronic (e.g., temperature increase, sea-level rise, changes in precipitation).
The second one is transition risk, which relates to the potential economic and financial losses associated with the process of adjusting towards a low-carbon economy. The financial sector usually considers three main types of transition risk: changes in consumer preferences, changes in technology, and changes in policy. Climate risk thus has a clear impact (negative or positive) on firms, on industrial sectors and, ultimately, on state finances and household savings. This is the reason why assessing transition risk is becoming increasingly important in all parts of the economy, and in particular in the financial industry, whose role will be to finance the low-carbon transition while ensuring the stability of the system. There is thus a fundamental need to study the link between transition risk and credit risk. In this work, we study how the introduction of carbon taxes could propagate in a bank credit portfolio. Since the Paris climate agreement in 2015, a few papers studying climate-related financial aspects of transition risk have emerged. Battiston and Monasterolo [6] deal with transition risk assessment in sovereign bond portfolios. In [1], the authors focus on corporate credit assessment and provide a general methodology going from transition scenarios to credit metrics. In particular, for a given transition scenario (e.g., less than 2\({}^{\circ}\)C in 2050), they obtain both carbon price and gross domestic product trajectories. These trajectories are then used in static general equilibrium models to generate a set of macroeconomic variables and of sectoral value added. All the macroeconomic trajectories obtained are then used to stress credit portfolios. This is broadly the methodology that all French banks used during the climate stress test organized between 2020 and 2021 by the ACPR (French Prudential Supervision and Resolution Authority). However, on the one hand, the methodology used for translating macroeconomic impacts into financial ones is not always specified, and on the other hand, the assumptions are independent of the stress-test horizon. Cartellier [10] discusses, in a non-mathematical framework, the methodologies and approaches used by banks and scholars in climate stress testing. Garnier [17] as well as Gaudemet, Deshamps, and Vinciguerra [18] propose two models. The first one, called CERM (Climate Extended Risk Model), is based on the Merton model with a multidimensional Gaussian systemic factor, where the transition risk is propagated to the credit risk through the factor loadings, defined as the correlations between the systematic risk factors and the assets. The second one introduces a climate-economic model to calibrate the former. Other works, such as those of Bourgey, Gobet, and Jiao [9] and of Bouchet and Le Guenedal [8], take the economic and capital structure of the firm into account when measuring carbon risk. In particular, the first one derives the firm value by using the Discounted Cash Flows methodology on cash flows that are affected by the firm's transition policy, while the second one directly applies a shock to the firm value, depending on the ratio between carbon cost and EBITDA. Moreover, Le Guenedal and Tankov [27] use a structural model for pricing bonds issued by a company subject to climate transition risk and, in particular, take into account the transition scenario uncertainty.
Finally, Livieri, Radi and Smaniotto [28] use a Jump-Diffusion Credit Risk model, in which downward jumps describe green policies taken by firms, to price defaultable coupon bonds and Credit Default Swaps.

The goal of the present work is to study how carbon taxes spread through a credit portfolio. In a first step, we build a stochastic and multisectoral model in which we introduce sectoral carbon taxes calibrated on sectoral GHG emissions from households and firms. This model helps us analyze the impact of carbon taxes on sectoral production by firms and on sectoral consumption by households. We show that, at the market equilibrium, the macroeconomic problem reduces to a non-linear system in output and consumption. Moreover, when the household's utility function is logarithmic in consumption, output and consumption are uniquely defined and precisely described by productivity, carbon taxes and the model parameters. Then, for each sector, we can determine labor and intermediary inputs using their relationship with output and consumption. The sectoral structure also allows us to quantify the interactions between sectors, in terms of both productivity and carbon taxes. The model we build in this first step is close to the one developed by Golosov and co-authors in [19]. However, there are two main differences. First, they obtain an optimal path for their endogenous carbon taxes while, in our case, carbon taxes are exogenous. Second, the sectors in their model are split between sectors related to energy and a single sector representing the rest of the economy, while our model allows for any type of sectoral organization, provided that a proper calibration of the involved parameters can be performed. In addition, our model is also close to the multisectoral model proposed by Devulder and Lisack in [12], with the difference that ours is dynamic and stochastic, and that we appeal to a Cobb-Douglas production function instead of a Constant Elasticity of Substitution (CES) one. Finally, the model developed in this first step also differs from the REMIND model described in [35] in that (1) it is a stochastic multisectoral model and (2) productivity is exogenous.

In a second step, we define the firm value by using the Discounted Cash Flows methodology [26]. We assume, as is commonly done in the literature, that cash flow growth is a linear function of the (sectoral) consumption growth. This allows us to describe the firm value as a function of productivity and of carbon taxes. Then, by assuming that the noise term in the productivity is small, we obtain a closed-form formula for the firm value. The results show that the distribution of the firm value is distorted and shifts to the left when carbon taxes increase.

In a third step, we use the firm value in Merton's structural model. We can then calculate, for different climate transition scenarios, the evolution of the annual default probability, the expected loss, and the unexpected loss of a credit portfolio. The works of Garnier [17] and Bourgey, Gobet, and Jiao [9] are the closest to ours. However, [17] relies on the Vasicek-Merton model with a centered Gaussian systemic factor, while we appeal to a microeconomic definition of the firm's value, as in [9]. 
In contrast to [9], (1) we emphasize how firms are affected by macroeconomic factors (e.g., the productivity and tax processes) but do not allow them to optimize their transition strategy, and (2) besides discussing the impact of carbon taxes on the probability of default, we also investigate their impact on losses. We finally introduce an indicator to describe the sensitivity of the (un)expected loss of a portfolio to the carbon price. This allows us to see how the above-mentioned risk measures would vary, should we deviate from the carbon price given by our scenarios, which are assumed to be deterministic.

The paper is organized as follows. In Section 1, we build a stochastic multisectoral model and analyze how the sectors, grouped by level of GHG emissions, change when carbon taxes are introduced. In Section 2, we define the firm value as a function of consumption growth. In Section 3, we compute and project risk measures such as the probability of default and the expected and unexpected losses, appealing to the Merton model. Finally, Section 4 is devoted to the calibration of the different parameters, while Section 5 focuses on presenting and analyzing the numerical results.

#### Notations.

* \(\mathbb{N}\) is the set of non-negative integers, \(\mathbb{N}^{*}:=\mathbb{N}\setminus\{0\}\), and \(\mathbb{Z}\) is the set of integers.
* \(\mathbb{R}^{d}\) denotes the \(d\)-dimensional Euclidean space, \(\mathbb{R}_{+}\) is the set of non-negative real numbers, \(\mathbb{R}_{+}^{*}:=\mathbb{R}_{+}\setminus\{0\}\).
* \(\mathbf{1}:=(1,\ldots,1)\in\mathbb{R}^{I}\).
* \(\mathbb{R}^{n\times d}\) is the set of real-valued \(n\times d\) matrices (\(\mathbb{R}^{n\times 1}=\mathbb{R}^{n}\)), and \(\mathbf{I}_{n}\) is the \(n\times n\) identity matrix.
* \(x^{i}\) denotes the \(i\)-th component of the vector \(x\in\mathbb{R}^{d}\). For all \(A:=(A^{ij})_{1\leq i,j\leq n}\in\mathbb{R}^{n\times n}\), we denote by \(A^{\top}:=(A^{ji})_{1\leq i,j\leq n}\in\mathbb{R}^{n\times n}\) the transpose matrix.
* \(\bigotimes\) is the Kronecker product.
* For a given finite set \(S\), we denote by \(\#S\) the cardinality of \(S\).
* For all \(x,y\in\mathbb{R}^{d}\), we denote by \(x^{\top}y\) the scalar product and by \(|x|:=\sqrt{x^{\top}x}\) the Euclidean norm, and for a matrix \(M\in\mathbb{R}^{d\times d}\), we denote \[|M|:=\sup_{a\in\mathbb{R}^{d},|a|\leq 1}|Ma|.\]
* \((\Omega,\mathcal{H},\mathbb{P})\) is a complete probability space.
* For \(p\in[1,\infty]\), a finite-dimensional Euclidean vector space \(E\), and a \(\sigma\)-field \(\mathcal{H}\), \(\mathcal{L}^{p}(\mathcal{H},E)\) denotes the set of \(\mathcal{H}\)-measurable random variables \(X\) with values in \(E\) such that \(\|X\|_{p}:=\left(\mathbb{E}\left[|X|^{p}\right]\right)^{\frac{1}{p}}<\infty\) for \(p<\infty\) and, for \(p=\infty\), \(\|X\|_{\infty}:=\operatorname{esssup}|X(\omega)|<\infty\).
* For a filtration \(\mathbb{G}\), \(p\in[1,+\infty]\) and \(I\in\mathbb{N}^{*}\), \(\mathcal{L}^{p}_{+}(\mathbb{G},(0,\infty)^{I})\) is the set of discrete-time processes \(X\) that are \(\mathbb{G}\)-adapted, valued in \((0,\infty)^{I}\), and satisfy \[\|X_{t}\|_{p}<\infty\text{ for all }t\in\mathbb{N}.\]
* If \(X\) and \(Y\) are two \(\mathbb{R}^{d}\)-valued random variables, for \(x\in\mathbb{R}^{d}\), we denote by \(Y|X=x\) the conditional distribution of \(Y\) given \(X=x\), and by \(Y|\mathcal{F}\) the conditional distribution of \(Y\) given the filtration \(\mathcal{F}\). 
## 1 A Multisectoral Model with Carbon tax

We consider a closed economy with various sectors of industry, which are subject to taxes. In this section, our main goal is to derive the dynamics of output and consumption per sector. The setting is strongly inspired by basic classical monetary models presented in the seminal textbook by Gali [16], and also by Devulder and Lisack [12] and by Miranda-Pinto and Young's sectoral model [30]. We thus consider a discrete-time model with infinite horizon. The main point here is that taxes are dynamic and shall be interpreted as carbon taxes. This will allow us in particular to describe the transition process to a decarbonized economy. We first consider two optimization problems: one for representative firms and one for a household. We obtain first-order conditions, namely the optimal behavior of the firm and the consumer as a response to the various variables at hand. Then, relying on market clearing conditions, we obtain the equations that the sectoral consumption and output processes must satisfy. Finally, in the last subsection, we solve those equations by making assumptions on the values taken by the set of involved parameters.

Let \(\mathcal{I}\) denote a set of sectors with cardinal \(I\in\mathbb{N}^{*}\). Each sector \(i\in\mathcal{I}\) has a representative firm which produces a single good, so that we can associate sector, firm and good. We now introduce the following standing assumption, which describes the productivity; its growth is assumed to have stationary dynamics.

**Standing Assumption 1.1**.: We define the \(\mathbb{R}^{I}\)-valued process \(\mathcal{A}\) which evolves according to \[\left\{\begin{array}{ll}\mathcal{A}_{t}&=\mathcal{A}_{t-1}+\Theta_{t},\\ \Theta_{t}&=\mu+\Gamma\Theta_{t-1}+\varepsilon\mathcal{E}_{t},\end{array} \right.\quad\text{for all }t\in\mathbb{N}^{*},\] with constants \(\mu,\mathcal{A}_{0}\in\mathbb{R}^{I}\), where the matrix \(\Gamma\in\mathbb{R}^{I\times I}\) has eigenvalues all strictly less than \(1\) in absolute value, and where \(0<\varepsilon\leq 1\) is a fixed noise intensity parameter; it will be used in Section 2 to obtain a tractable proxy of the firm value. Moreover, \((\mathcal{E}_{t})_{t\in\mathbb{Z}}\) is independent and identically distributed with, for \(t\in\mathbb{Z}\), \(\mathcal{E}_{t}\sim\mathcal{N}(\mathbf{0},\Sigma)\) for some \(\Sigma\in\mathbb{R}^{I\times I}\). We also have \(\Theta_{0}\sim\mathcal{N}(\overline{\mu},\varepsilon^{2}\overline{\Sigma})\) with \(\overline{\mu}:=(\mathbf{I}_{I}-\Gamma)^{-1}\mu\) and \(\operatorname{vec}(\overline{\Sigma}):=(\mathbf{I}_{I\times I}-\Gamma\bigotimes \Gamma)^{-1}\text{vec}(\Sigma)\), where, for \(M\in\mathbb{R}^{d\times d}\), \(\operatorname{vec}(M):=[M^{11},\dots,M^{d1},M^{12},\dots,M^{d2},\dots,M^{1d},\dots,M^{dd}]^{\top}\). The processes \((\mathcal{E}_{t})_{t\in\mathbb{N}}\) and the random variable \(\Theta_{0}\) are independent.

To summarize, the productivity is driven by a Vector Autoregressive (VAR) process. The literature on VAR models is rich, with detailed results and proofs in Hamilton [23] or Kilian and Lutkepohl [25]. We provide in Appendix A additional results that will be useful. For later use, we introduce, for \(i\in\mathcal{I}\), the process \[A^{i}_{t}:=\exp{(\mathcal{A}^{i}_{t})},\] which represents the level of technology of sector \(i\in\mathcal{I}\).

**Remark 1.2**.: 1. Obviously, for any \(t\in\mathbb{N}\), \(\mathcal{A}_{t}=\mathcal{A}_{0}+\sum_{u=1}^{t}\Theta_{u}\). 
For later use, we define \[\mathcal{A}^{\circ}_{t}:=\mathcal{A}_{t}-\mathcal{A}_{0},\] and observe that \((\mathcal{A}^{\circ}_{t},\Theta_{t})_{t\geq 0}\) is a Markov process. 2. Since the eigenvalues of \(\Gamma\) are all strictly less than \(1\) in absolute value, \((\Theta_{t})_{t\in\mathbb{N}}\) is wide-sense stationary i.e. for \(t,u\in\mathbb{N}\), the first and the second orders moments (\(\mathbb{E}[\Theta_{t}]\) and \(\mathbb{E}[\Theta_{t}\Theta_{t+u}]\)) do not depend on \(t\). Then, given the law of \(\Theta_{0}\), we have for any \(t\in\mathbb{N}\), \(\Theta_{t}\sim\mathcal{N}(\overline{\mu},\varepsilon^{2}\overline{\Sigma})\). 3. For later use, we also observe the following: let \(\mathcal{Z}_{0}\sim\mathcal{N}(0,\overline{\Sigma})\) s.t. \(\Theta_{0}=\overline{\mu}+\varepsilon\mathcal{Z}_{0}\) and for \(t\geq 1\), \(\mathcal{Z}_{t}=\Gamma\mathcal{Z}_{t-1}+\mathcal{E}_{t}.\) Then \[\Theta_{t}=\overline{\mu}+\varepsilon\mathcal{Z}_{t}\ \text{ and }\ \mathcal{Z}_{t}\sim \mathcal{N}(0,\overline{\Sigma}).\] (1.1) Let \(\mathbb{G}:=(\mathcal{G}_{t})_{t\in\mathbb{N}}\) with \(\mathcal{G}_{0}:=\sigma(\Theta_{0})\) and for \(t\geq 1\), \(\mathcal{G}_{t}:=\sigma\left(\{\Theta_{0},\mathcal{E}_{s}:s\in(0,t]\cap \mathbb{N}^{*}\}\right)\). For each sector/representative firm/good \(i\in\mathcal{I}\), we introduce deterministic taxes: a tax \((\tau^{i}_{t})_{t\geq 0}\) on firm's production, a tax \((\zeta^{ji}_{t})_{t\geq 0}\) on firm \(i\)' consumption in sector \(j\in\mathcal{I}\), and a tax \((\kappa^{i}_{t})_{t\geq 0}\) on household's consumption. These taxes are interpreted as exogeneous carbon taxes and they allow us to model the impact of the transition pathways on the whole economy. We will note \(\mathfrak{d}:=(\tau,\zeta,\kappa)\) the complete tax process. We shall then assume the following setting. **Standing Assumption 1.3**.: Let \(0\leq t_{\circ}<t_{\star}\) be given. The sequences \(\tau\), \(\zeta\), and \(\kappa\) satisfy 1. for \(t\in[0;t_{\circ}]\), \(\mathfrak{d}_{t}=\mathfrak{d}_{0}\in[0,1)^{I}\times[0,1)^{I\times I}\times[0,1 )^{I}\), namely the taxes are constant; 2. for \(t\in(t_{\circ},t_{\star})\), \(\mathfrak{d}_{t}\in[0,1)^{I}\times[0,1)^{I\times I}\times[0,1)^{I}\), the taxes may evolve; 3. for \(t\geq t_{\star}\), \(\mathfrak{d}_{t}=\mathfrak{d}_{t_{\star}}\in[0,1)^{I}\times[0,1)^{I\times I }\times[0,1)^{I}\), namely the taxes are constant. In the assumption above, we interpret \(t_{\circ}\) as the start of the transition and \(t_{\star}\) as its end. Before the transition, carbon taxes are constant (possibly zero). Then, at the beginning of the transition, which lasts over \((t_{\circ},t_{\star})\), the carbon taxes can be dynamic depending on the objectives we want to reach. After \(t_{\star}\), the taxes become constant again. We now describe the firm and household programs that will allow us to derive the necessary equations that must be satisfied by the output and consumption in each sector. The proposed framework assumes a representative firm in each sector which maximizes its profits by choosing, at each time and for a given productivity, the quantities of labor and intermediary inputs. This corresponds to a sequence of static problems. Then, a representative household solves a dynamic optimization problem to decide how to allocate its consumption expenditures among the different goods and hours worked and among the different sectors. ### The firm's point of view Aiming to work with a simple model, we follow Gali [16, Chapter 2]. 
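Before turning to the firm's problem, the following minimal sketch illustrates the two exogenous ingredients just introduced: the productivity dynamics of Standing Assumption 1.1 and a production-tax path satisfying Standing Assumption 1.3. All numerical values (number of sectors, \(\mu\), \(\Gamma\), \(\Sigma\), \(\varepsilon\), tax levels, transition dates) are illustrative assumptions, not calibrated quantities.

```python
# Minimal simulation sketch of Standing Assumptions 1.1 and 1.3.
# All numerical values below are illustrative assumptions, not calibrated quantities.
import numpy as np

rng = np.random.default_rng(0)

I = 2                                    # two sectors for illustration
mu = np.array([0.01, 0.02])              # drift mu of Theta
Gamma = np.array([[0.5, 0.1],            # eigenvalues strictly inside the unit disk
                  [0.0, 0.4]])
Sigma = 0.02**2 * np.eye(I)              # covariance of the innovations E_t
eps = 1.0                                # noise intensity epsilon
T_horizon = 60

# Stationary moments of Theta: mu_bar = (I - Gamma)^{-1} mu and
# vec(Sigma_bar) = (I - Gamma (x) Gamma)^{-1} vec(Sigma), as in Standing Assumption 1.1.
mu_bar = np.linalg.solve(np.eye(I) - Gamma, mu)
Sigma_bar = np.linalg.solve(np.eye(I * I) - np.kron(Gamma, Gamma),
                            Sigma.reshape(-1, order="F")).reshape(I, I, order="F")

Theta = np.zeros((T_horizon + 1, I))
A_log = np.zeros((T_horizon + 1, I))     # log-productivity (script A), with A_0 = 0 here
Theta[0] = rng.multivariate_normal(mu_bar, eps**2 * Sigma_bar)
for t in range(1, T_horizon + 1):
    Theta[t] = mu + Gamma @ Theta[t - 1] + eps * rng.multivariate_normal(np.zeros(I), Sigma)
    A_log[t] = A_log[t - 1] + Theta[t]

# Piecewise production-tax path (Standing Assumption 1.3): constant before t_o,
# linearly increasing on (t_o, t_star), constant after t_star.
t_o, t_star = 10, 40
tau0, tau_star = np.array([0.00, 0.00]), np.array([0.10, 0.25])
tau = np.array([tau0 if t <= t_o
                else tau_star if t >= t_star
                else tau0 + (t - t_o) / (t_star - t_o) * (tau_star - tau0)
                for t in range(T_horizon + 1)])

print("stationary mean of Theta:", mu_bar)
print("terminal log-productivity:", A_log[-1], "terminal production tax:", tau[-1])
```

By construction, the simulated \(\Theta_{t}\) has the stationary law \(\mathcal{N}(\overline{\mu},\varepsilon^{2}\overline{\Sigma})\) at every date, while the tax path is constant before \(t_{\circ}\), evolves on \((t_{\circ},t_{\star})\), and is constant again after \(t_{\star}\).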
It then appears that the firm's problem corresponds to an optimization performed at each period, depending on the state of the world. This problem will depend, in particular, on the productivity and the tax processes introduced above. Moreover, it will also depend on \(P^{i}\) and \(W^{i}\), two \(\mathbb{G}\)-adapted positive stochastic processes representing respectively the price of good \(i\) and the wage paid in sector \(i\in\mathcal{I}\). We start by considering the associated deterministic problem below, when time and randomness are fixed.

**Solution for the deterministic problem.** We denote by \(\overline{a}\in(0,+\infty)^{I}\) the level of technology in each sector, by \(\overline{p}\in(0,\infty)^{I}\) the price of the goods produced by each sector, by \(\overline{w}\in(0,\infty)^{I}\) the nominal wage in each sector, and by \(\overline{\tau}\in[0,1)^{I}\) and \(\overline{\zeta}\in[0,1)^{I\times I}\) the taxes on the production and consumption of goods. For \(i\in\mathcal{I}\), we consider a representative firm of sector \(i\), with technology described by the production function \[\mathbb{R}_{+}\times\mathbb{R}_{+}^{I}\ni(n,z)\mapsto F^{i}_{ \overline{a}}(n,z)=\overline{a}^{i}n^{\psi^{i}}\prod_{j\in\mathcal{I}}(z^{j}) ^{\boldsymbol{\lambda}^{ji}}\in\mathbb{R}_{+},\] where \(n\) represents the number of hours of work in the sector, and \(z^{j}\) the firm's consumption of the intermediary input produced by sector \(j\). The coefficients \(\psi\in(\mathbb{R}_{+}^{*})^{I}\) and \(\boldsymbol{\lambda}\in(\mathbb{R}_{+}^{*})^{I\times I}\) are elasticities with respect to the corresponding inputs. Overall, we assume constant returns to scale, namely \[\psi^{i}+\sum_{j\in\mathcal{I}}\boldsymbol{\lambda}^{ji}=1,\qquad \text{for each }i\in\mathcal{I}. \tag{1.2}\] The management of firm \(i\) then solves the classical problem of profit maximization \[\widehat{\Pi}^{i}_{(\overline{a},\overline{w},\overline{p}, \overline{\tau},\overline{\zeta})}:=\sup_{(n,z)\in\mathbb{R}_{+}\times\mathbb{ R}_{+}^{I}}\Pi^{i}(n,z), \tag{1.3}\] where, omitting the dependency in \((\overline{a},\overline{w},\overline{p},\overline{\tau},\overline{\zeta})\), \[\Pi^{i}(n,z):=F^{i}_{\overline{a}}(n,z)(1-\overline{\tau}^{i})\overline{p}^{i}- \overline{w}^{i}n-\sum_{j\in\mathcal{I}}z^{j}(1+\overline{\zeta}^{ji})\overline {p}^{j}.\] Note that \(F^{i}_{\overline{a}}(n,z)(1-\overline{\tau}^{i})\overline{p}^{i}\) represents the firm's revenues after carbon tax, that \(\overline{w}^{i}n\) stands for the firm's total compensation, and that \(\sum_{j\in\mathcal{I}}z^{j}(1+\overline{\zeta}^{ji})\overline{p}^{j}\) is the firm's total cost of intermediary inputs. Now, we would like to solve the optimization problem for the firms, namely determine the optimal demands \(\mathfrak{n}\) and \(\mathfrak{z}\) as functions of \((\overline{a},\overline{w},\overline{p},\overline{\tau},\overline{\zeta})\). Because we will lift these optimal quantities to a dynamic stochastic setting, we require that they be expressed as measurable functions. 
We thus introduce: **Definition 1.4**.: An admissible solution to problem (1.3) is a pair of measurable functions \[(\mathfrak{n},\mathfrak{z}):(0,+\infty)^{I}\times(0,+\infty)^{I}\times(0,+ \infty)^{I}\times(0,+\infty)^{I}\times[0,1)^{I}\times[0,1)^{I\times I}\to[0,+ \infty)^{I}\times[0,+\infty)^{I\times I},\] such that, for each sector \(i\), denoting \(\overline{n}:=\mathfrak{n}^{i}(\overline{a},\overline{w},\overline{p}, \overline{\tau},\overline{\zeta})\) and \(\overline{z}:=\mathfrak{z}^{\,\,\,\,\,i}(\overline{a},\overline{w}, \overline{p},\overline{\tau},\overline{\zeta})\), \[F^{i}_{\overline{a}}(\overline{n},\overline{z})(1-\overline{\tau}^{i}) \overline{p}^{i}-\overline{w}^{i}\overline{n}-\sum_{j\in\mathcal{I}}\overline {z}^{j}(1+\overline{\zeta}^{ji})\overline{p}^{j}=\widehat{\Pi}^{i}_{( \overline{a},\overline{w},\overline{p},\overline{\tau},\overline{\zeta})},\] and \(F^{i}_{\overline{a}}(\overline{n},\overline{z})>0\) (non-zero production), according to (1.3). **Remark 1.5**.: The solution obviously depends also on the coefficients \(\psi\) and \(\boldsymbol{\lambda}\). But these are fixed once and we will not study the dependence of the solution with respect to them. **Proposition 1.6**.: _There exists admissible solutions in the sense of Definition 1.4. Any admissible solution is given by for all \(i\in\mathcal{I}\), \(\mathfrak{n}^{i}>0\) and for all \((i,j)\in\mathcal{I}^{2}\),_ \[\mathfrak{z}^{ji}=\frac{\boldsymbol{\lambda}^{ji}}{\psi^{i}}\frac{\overline{ w}^{i}}{(1+\overline{\zeta}^{ji})\overline{p}^{j}}\mathfrak{n}^{i}>0. \tag{1.4}\] _Moreover, it holds that \(\widehat{\Pi}^{i}_{(\overline{a},\overline{w},\overline{p},\overline{\tau}, \overline{\zeta})}=0\) (according to (1.3)) and_ \[\mathfrak{n}^{i} =\psi^{i}F^{i}_{\overline{a}}(\mathfrak{n}^{i},\mathfrak{z}^{ \,\,\,i})\frac{(1-\overline{\tau}^{i})\overline{p}^{i}}{\overline{w}^{i}}, \tag{1.5a}\] \[\mathfrak{z}^{ji} =\boldsymbol{\lambda}^{ji}F^{i}_{\overline{a}}(\mathfrak{n}^{i}, \mathfrak{z}^{\,\,\,i})\frac{(1-\overline{\tau}^{i})\overline{p}^{i}}{(1+ \overline{\zeta}^{ji})\overline{p}^{j}}. \tag{1.5b}\] Proof.: We study the optimization problem for the representative firm \(i\in\mathcal{I}\). Since \(\psi^{i}>0\) and \(\boldsymbol{\lambda}^{ji}>0\), for all \(j\in\mathcal{I}\), as soon as \(n=0\) or \(z^{j}=0\), for some \(j\in\mathcal{I}\), the production is equal to \(0\). From problem (1.3), we obtain that necessarily \(n\neq 0\) and \(z^{j}\neq 0\) for all \(j\) in this case. So an admissible solution, which has non-zero production, has positive components. Setting \(\overline{n}=\mathfrak{n}^{i}(\overline{a},\overline{w},\overline{p},\overline {\tau},\overline{\zeta})>0\) and \(\overline{z}=\mathfrak{z}^{\,\,\,i}(\overline{a},\overline{w},\overline{p}, \overline{\tau},\overline{\zeta})>0\), the optimality of \((\overline{n},\overline{z})\) yields \[\partial_{n}\Pi^{i}(\overline{n},\overline{z})=0\text{ and for any }j\in\mathcal{I},\quad \partial_{z^{j}}\Pi^{i}(\overline{n},\overline{z})=0.\] We then compute \[\psi^{i}\frac{F^{i}_{\overline{a}}(\overline{n},\overline{z})}{\overline{n}}(1 -\overline{\tau}^{i})\overline{p}^{i}-\overline{w}^{i}=0\text{ and for any }j\in\mathcal{I},\quad\boldsymbol{\lambda}^{ji}\frac{F^{i}_{\overline{a}}( \overline{n},\overline{z})}{\overline{z}^{j}}(1-\overline{\tau}^{i})\overline{p} ^{i}-(1+\overline{\zeta}^{ji})\overline{p}^{j}=0,\] which leads to (1.4), (1.5a), and (1.5b). 
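As a quick numerical check of Proposition 1.6, the sketch below works out a two-sector example for one fixed sector: it computes the zero-profit wage obtained by substituting (1.4) into the production function and imposing (1.5a), evaluates the demands (1.4), and verifies that the profit and the first-order conditions vanish. All parameter values are assumed for illustration.

```python
# Numerical check of Proposition 1.6 for one sector (index i fixed; illustrative values only).
# Under constant returns to scale (1.2), positive-production solutions exist only at the
# zero-profit wage; that wage is computed by substituting (1.4) into F and imposing (1.5a).
import numpy as np

a_i, psi_i = 1.3, 0.4
lam_i = np.array([0.35, 0.25])            # lambda^{ji}, j = 1, 2; psi_i + lam_i.sum() == 1
p = np.array([1.0, 0.9])                  # goods prices p^j (sector i sells at price p[0])
tau_i, zeta_i = 0.15, np.array([0.05, 0.10])

# Zero-profit wage implied by (1.5a):
w_i = (psi_i * a_i * (1 - tau_i) * p[0]
       * np.prod((lam_i / (psi_i * (1 + zeta_i) * p)) ** lam_i)) ** (1 / psi_i)

n = 2.0                                   # any n > 0: the optimum is determined only up to scale
z = (lam_i / psi_i) * w_i / ((1 + zeta_i) * p) * n        # intermediary demands, equation (1.4)

F = a_i * n**psi_i * np.prod(z**lam_i)                    # production
profit = F * (1 - tau_i) * p[0] - w_i * n - np.sum(z * (1 + zeta_i) * p)

# First-order conditions from the proof of Proposition 1.6:
foc_n = psi_i * F / n * (1 - tau_i) * p[0] - w_i
foc_z = lam_i * F / z * (1 - tau_i) * p[0] - (1 + zeta_i) * p

print("profit (should be ~0):", profit)
print("FOC in n (should be ~0):", foc_n)
print("FOC in z (should be ~0):", foc_z)
```

The scale \(n\) is left free on purpose: with constant returns to scale, Proposition 1.6 pins down the intermediary demands \(\mathfrak{z}^{\,\,\,i}\) only relative to \(\mathfrak{n}^{i}\).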
**Dynamic setting.** In Section 1.3 below, we characterize the dynamics of the output and consumption processes using market equilibrium arguments. There, the optimal demand by the firm for intermediary inputs and labor is lifted to the stochastic setting, where the admissible solutions are then written as functions of the productivity, tax, price and wage processes, see Definition 1.8.

### The household's point of view

Let \(r_{t}\) be the (exogenous) deterministic interest rate, valued in \(\mathbb{R}_{+}\). At each time \(t\in\mathbb{N}\) and for each sector \(i\in\mathcal{I}\), we denote by

* \(C^{i}_{t}\) the quantity consumed of the single good in the sector \(i\), valued in \(\mathbb{R}_{+}^{*}\);
* \(H^{i}_{t}\) the number of hours of work in sector \(i\), valued in \(\mathbb{R}_{+}^{*}\).

We also introduce a time preference parameter \(\beta\in[0,1)\) and a utility function \(U:(0,\infty)^{2}\to\mathbb{R}\) given, for \(\varphi\geq 0\), by \(U(x,y):=\frac{x^{1-\sigma}}{1-\sigma}-\frac{y^{1+\varphi}}{1+\varphi}\) if \(\sigma\in[0,1)\cup(1,+\infty)\), and by \(U(x,y):=\log(x)-\frac{y^{1+\varphi}}{1+\varphi}\) if \(\sigma=1\). We also suppose that \[\mathfrak{P}:=\sup_{t\in\mathbb{N},i\in\mathcal{I}}\mathbb{E}\left[\left( \frac{P^{i}_{t}}{W^{i}_{t}}\right)^{1+\varphi}\right]<+\infty. \tag{1.6}\] For any \(C,H\in\mathscr{L}_{+}^{1}(\mathbb{G},(0,\infty)^{I})\), we introduce the wealth process \[Q_{t}:=(1+r_{t-1})Q_{t-1}+\sum_{i\in\mathcal{I}}W^{i}_{t}H^{i}_{t}-\sum_{i\in \mathcal{I}}P^{i}_{t}(1+\kappa^{i}_{t})C^{i}_{t},\qquad\text{for any }t\geq 0,\] with the convention \(Q_{-1}:=0\) and \(r_{-1}:=0\). Note that we do not indicate the dependence of \(Q\) upon \(C\) and \(H\) to alleviate the notation. For \(t\in\mathbb{N}\) and \(i\in\mathcal{I}\), \(P^{i}_{t}(1+\kappa^{i}_{t})C^{i}_{t}\) represents the household's consumption after tax in the sector \(i\). Moreover, \(W^{i}_{t}H^{i}_{t}\) is the household's labor income in the sector \(i\), \((1+r_{t-1})Q_{t-1}\) the household's capital income, and \((1+r_{t-1})Q_{t-1}+\sum_{i\in\mathcal{I}}W^{i}_{t}H^{i}_{t}\) the household's total revenue. We define \(\mathscr{A}\) as the set of all pairs \((C,H)\) with \(C,H\in\mathscr{L}_{+}^{1}(\mathbb{G},(0,\infty)^{I})\) such that \[\begin{cases}&\mathbb{E}\left[\sum_{i\in\mathcal{I}}\sum_{t=0}^{\infty}\beta^ {t}|U(C^{i}_{t},H^{i}_{t})|\right]<\infty,\\ &\lim_{T\uparrow\infty}\mathbb{E}[Q_{T}|\mathcal{G}_{t}]\geq 0,\qquad\text{ for all }t\geq 0.\end{cases}\] The representative household consumes the \(I\) goods of the economy and provides labor to all the sectors. For any \((C,H)\in\mathscr{A}\), let \[\mathcal{J}(C,H):=\sum_{i\in\mathcal{I}}\mathcal{J}_{i}(C^{i},H^{i}),\qquad \text{with}\qquad\mathcal{J}_{i}(C^{i},H^{i}):=\mathbb{E}\left[\sum_{t=0}^{ \infty}\beta^{t}U(C^{i}_{t},H^{i}_{t})\right],\quad\text{for all }i\in\mathcal{I}.\] The representative household seeks to maximize its objective function by solving \[\max_{(C,H)\in\mathscr{A}}\quad\mathcal{J}(C,H). \tag{1.7}\] We choose above a separable utility function, as Miranda-Pinto and Young [30] do, meaning that the representative household optimizes its consumption and hours of work for each sector independently but under a global budget constraint. The following proposition provides first-order optimality conditions for (1.7).

**Proposition 1.7**.: _Assume that (1.7) has a solution \((C,H)\in\mathscr{A}\). 
Then, for all \(i,j\in\mathcal{I}\), the household's optimality condition reads, for any \(t\in\mathbb{N}\),_ \[\frac{P_{t}^{i}}{W_{t}^{i}} =\frac{1}{1+\kappa_{t}^{i}}(H_{t}^{i})^{-\varphi}(C_{t}^{i})^{- \sigma}, \tag{1.8a}\] \[\frac{P_{t}^{i}}{P_{t}^{j}} =\frac{1+\kappa_{t}^{j}}{1+\kappa_{t}^{i}}\left(\frac{C_{t}^{i}}{ C_{t}^{j}}\right)^{-\sigma}. \tag{1.8b}\] Note that the discrete-time processes \(C\) and \(H\) cannot hit zero by definition of \(\mathscr{A}\), so that the quantities above are well defined. Proof.: Suppose that \(\sigma\neq 1\). We first check that \(\mathscr{A}\) is non empty. Assume that, for all \(t\in\mathbb{N}\) and \(i\in\mathcal{I}\), \(\tilde{C}_{t}^{i}=1\) and \(\tilde{H}_{t}^{i}=\frac{P_{t}^{i}(1+\kappa_{t}^{i})}{W_{t}^{i}}\), then \[\mathbb{E}\left[\sum_{i\in\mathcal{I}}\sum_{t=0}^{\infty}\beta^{t }|U(\tilde{C}_{t}^{i},\tilde{H}_{t}^{i})|\right] \leq\sum_{i\in\mathcal{I}}\sum_{t=0}^{\infty}\beta^{t}\left(\frac {1}{1-\sigma}+\frac{1}{1+\varphi}\mathbb{E}\left[\left(\frac{P_{t}^{i}(1+ \kappa_{t}^{i})}{W_{t}^{i}}\right)^{1+\varphi}\right]\right).\] \[\leq\sum_{i\in\mathcal{I}}\sum_{t=0}^{\infty}\beta^{t}\left(\frac {1}{1-\sigma}+\frac{\mathfrak{B}(1+\kappa_{t}^{i})^{1+\varphi}}{1+\varphi} \right)<+\infty,\] using (1.6). We also observe that \(Q\) built from \(\tilde{H},\tilde{C}\) satisfies \(Q_{t}=0\), for \(t\in\mathbb{N}\). Thus \((\tilde{H},\tilde{C})\in\mathscr{A}\). Let now \((\widehat{C},\widehat{H})\in\mathscr{A}\) be such that \(\mathcal{J}(\widehat{C},\widehat{H})=\max_{(C,H)\in\mathscr{A}}\mathcal{J}(C,H)\). We fix \(s\in\mathbb{N}\) and \(i\in\mathcal{I}\). Let \(\eta=\pm 1\), \(0<h<1\), \(A^{s}\in\mathcal{G}_{s}\), \(\Delta^{(i,s)}:=(\mathbf{1}_{\{i=k,s=t\}})_{k\in\mathcal{I},t\in\mathbb{N}}\) and \(\theta^{(i,s)}:=\frac{1}{2}(1\wedge\frac{W_{s}^{i}}{P_{s}^{i}(1+\kappa_{s}^{i}) })\widehat{C}_{s}^{i}\wedge\hat{H}_{s}^{i}\wedge 1>0\). Set \[\overline{C}:=\widehat{C}+\eta h\theta^{(i,s)}\mathbf{1}_{A^{s}}\Delta^{(i,s)} \text{ and }\overline{H}:=\widehat{H}+\eta h\theta^{(i,s)}\mathbf{1}_{A^{s}}\Delta^{(i,s) }\frac{P^{i}(1+\kappa^{i})}{W^{i}}.\] We observe that for \((j,t)\neq(i,s)\), \(\overline{C}_{t}^{j}=\widehat{C}_{t}^{j}\) and \(\overline{H}_{t}^{j}=\widehat{H}_{t}^{j}\) and we compute \[\overline{C}_{s}^{i}\geq\widehat{C}_{s}^{i}-\theta^{(i,s)}\geq\frac{1}{2} \widehat{C}_{s}^{i}>0.\] Similarly, we obtain \(\overline{H}_{s}^{i}>0\). We also observe that \(\overline{C}\leq\frac{3}{2}\widehat{C}\) and \(\overline{H}\leq\frac{3}{2}\widehat{H}\). Finally, we have that \[\sum_{j\in\mathcal{I}}W_{t}^{j}\overline{H}_{t}^{j}-\sum_{j\in\mathcal{I}}P_{t}^ {j}(1+\kappa_{t}^{j})\overline{C}_{t}^{j}=\sum_{j\in\mathcal{I}}W_{t}^{j} \widehat{H}_{t}^{j}-\sum_{j\in\mathcal{I}}P_{t}^{j}(1+\kappa_{t}^{j})\widehat{C }_{t}^{j}.\] This allows us to conclude that \((\overline{C},\overline{H})\in\mathscr{A}\). 
We have, by optimality of \((\widehat{C},\widehat{H})\), \[\mathcal{J}(\widehat{C},\widehat{H})-\mathcal{J}(\overline{C},\overline{H})= \sum_{j\in\mathcal{I}}\mathcal{J}_{j}(\widehat{C}^{j},\widehat{H}^{j})-\sum_ {j\in\mathcal{I}}\mathcal{J}_{j}(\overline{C}^{j},\overline{H}^{j})\geq 0.\] However, for all \((t,j)\neq(s,i)\), \(\overline{C}_{t}^{j}=\widehat{C}_{t}^{j}\) and \(\overline{H}_{t}^{j}=\widehat{H}_{t}^{j}\), then \[\mathbb{E}\left[\beta^{s}U(\widehat{C}_{s}^{i},\widehat{H}_{s}^{i})\right]- \mathbb{E}\left[\beta^{s}U\left(\widehat{C}_{s}^{i}+\eta h\theta^{(i,s)} \mathbf{1}_{A^{s}},\widehat{H}_{s}^{i}+\eta h\theta^{(i,s)}\mathbf{1}_{A^{s}} \frac{P_{s}^{i}(1+\kappa_{s}^{i})}{W_{s}^{i}}\right)\right]\geq 0,\] i.e. \[\frac{1}{h}\mathbb{E}\left[U(\widehat{C}_{s}^{i},\widehat{H}_{s}^{i})-U\left( \widehat{C}_{s}^{i}+\eta h\theta^{(i,s)}\mathbf{1}_{A^{s}},\widehat{H}_{s}^{i }+\eta h\theta^{(i,s)}\mathbf{1}_{A^{s}}\frac{P_{s}^{i}(1+\kappa_{s}^{i})}{W_ {s}^{i}}\right)\right]\geq 0.\] Letting \(h\) tend to \(0\), we obtain \[\mathbb{E}\left[\eta\theta^{(i,s)}\mathbf{1}_{A^{s}}\frac{\partial U}{ \partial x}(\widehat{C}_{s}^{i},\widehat{H}_{s}^{i})+\eta\theta^{(i,s)} \mathbf{1}_{A^{s}}\frac{P_{s}^{i}(1+\kappa_{s}^{i})}{W_{s}^{i}}\frac{ \partial U}{\partial y}(\widehat{C}_{s}^{i},\widehat{H}_{s}^{i})\right]\geq 0.\] Since the above holds for all \(A^{s}\in\mathcal{G}_{s}\), \(\eta=\pm 1\) and since \(\theta^{(i,s)}>0\), then \[\frac{\partial U}{\partial x}(\widehat{C}_{s}^{i},\widehat{H}_{s}^{i})+\frac{ P_{s}^{i}(1+\kappa_{s}^{i})}{W_{s}^{i}}\frac{\partial U}{\partial y}(\widehat{C}_{s} ^{i},\widehat{H}_{s}^{i})=0,\] leading to (1.8a). For \(j\in\mathcal{I}\setminus\{i\}\) and \(\theta^{(i,j,s)}:=\frac{1}{2}\left(1\wedge\frac{P_{s}^{j}(1+\kappa_{s}^{j})}{P _{s}^{i}(1+\kappa_{s}^{i})}\right)(1\wedge\widehat{C}_{s}^{i}\wedge\widehat{C }_{s}^{j})>0\), setting now \[\overline{C}:=\widehat{C}+\eta h\mathbf{1}_{A^{s}}\theta^{(i,j,s)}\left( \Delta^{(i,s)}-\Delta^{(j,s)}\frac{P^{i}(1+\kappa^{i})}{P^{j}(1+\kappa^{j})} \right)\quad\text{and}\quad\overline{H}:=\widehat{H},\] and using similar arguments as above, we obtain (1.8b). When \(\sigma=1\), we carry out an analogous proof. ### Markets equilibrium We now consider that firms and households interact on the labor and goods markets. **Definition 1.8**.: A market equilibrium is a \(\mathbb{G}\)-adapted positive random process \((\overline{W},\overline{P})\) such that 1. Condition (1.6) holds true for \((\overline{W},\overline{P})\). 2. The goods' and labor's market clearing conditions are met, namely, for each sector \(i\in\mathcal{I}\), and for all \(t\in\mathbb{N}\), \[Y_{t}^{i}=C_{t}^{i}+\sum_{j\in\mathcal{I}}Z_{t}^{ij}\qquad\text{and}\qquad H_{t }^{i}=N_{t}^{i},\] where \(N_{t}=\overline{n}(A_{t},\overline{W}_{t},\overline{P}_{t},\kappa_{t},\zeta_{t})\), \(Z_{t}=\overline{z}(A_{t},\overline{W}_{t},\overline{P}_{t},\kappa_{t},\zeta_{ t})\), \(Y=F_{A}(N,Z)\) with \((\overline{n},\overline{z})\) an admissible solution (1.5a)-(1.5b) to (1.3), from Proposition 1.6 while \(C\) and \(H\) satisfy (1.8a)-(1.8b) for \((\overline{W},\overline{P})\). In the case of the existence of a market equilibrium, we can derive equations that must be satisfied by the output production process \(Y\) and the consumption process \(C\). **Proposition 1.9**.: _Assume that there exists a market equilibrium as in Definition 1.8. 
Then, for \(t\in\mathbb{N}\), \(i\in\mathcal{I}\), it must hold that_ \[\left\{\begin{array}{ll}Y_{t}^{i}&=C_{t}^{i}+\sum_{j\in \mathcal{I}}\Lambda^{ij}(\mathfrak{d}_{t})\left(\frac{C_{t}^{j}}{C_{t}^{i}} \right)^{-\sigma}Y_{t}^{j},\\ Y_{t}^{i}&=A_{t}^{i}\left[\Psi^{i}(\mathfrak{d}_{t})(C_{t}^{i})^{-\sigma}Y_{t }^{i}\right]^{\frac{\psi^{i}}{1+\varphi}}\prod_{j\in\mathcal{I}}\left[\Lambda^ {ji}(\mathfrak{d}_{t})\left(\frac{C_{t}^{i}}{C_{t}^{j}}\right)^{-\sigma}Y_{t}^{ i}\right]^{\boldsymbol{\lambda}^{ji}},\end{array}\right. \tag{1.9}\] _where \(\Psi\) and \(\Lambda\) are given, for \(\overline{\mathfrak{d}}\in[0,1)^{I}\times[0,1)^{I\times I}\times[0,1)^{I}\), by_ \[\Psi(\overline{\mathfrak{d}}) :=\left(\psi^{i}\frac{1-\overline{\tau}^{i}}{1+\overline{\kappa }^{i}}\right)_{i\in\mathcal{I}}\,, \tag{1.10}\] \[\Lambda(\overline{\mathfrak{d}}) :=\left(\boldsymbol{\lambda}^{ji}\frac{1-\overline{\tau}^{i}}{1 +\overline{\zeta}_{t}^{ji}}\frac{1+\overline{\kappa}^{j}}{1+\overline{\kappa }^{i}}\right)_{j,i\in\mathcal{I}}. \tag{1.11}\] Proof.: Let \(i,j\in\mathcal{I}\) and \(t\in\mathbb{N}\). Combining Proposition 1.6 and Proposition 1.7, we obtain \[Z_{t}^{ji}=\boldsymbol{\lambda}^{ji}\frac{1-\tau_{t}^{i}}{1+\zeta_{t}^{ji}} \frac{1+\kappa_{t}^{j}}{1+\kappa_{t}^{i}}\left(\frac{C_{t}^{i}}{C_{t}^{j}} \right)^{-\sigma}Y_{t}^{i}. \tag{1.12}\] From Propositions 1.6 and 1.7 again, we also have \[N_{t}^{i}=\psi^{i}\frac{1-\tau_{t}^{i}}{1+\kappa_{t}^{i}}(H_{t}^{i})^{-\varphi} (C_{t}^{i})^{-\sigma}Y_{t}^{i}.\] The labor market clearing condition in Definition 1.8 yields \[N_{t}^{i}=\left[\psi^{i}\frac{1-\tau_{t}^{i}}{1+\kappa_{t}^{i}}(C_{t}^{i})^{- \sigma}Y_{t}^{i}\right]^{\frac{1}{1+\varphi}}. \tag{1.13}\] Then, by inserting the expression of \(N_{t}^{i}\) given in (1.13)and \(Z_{t}^{ji}\) given in (1.12) into the production function \(F\), we obtain the second equation in (1.9). The first equation in (1.9) is obtained by combining the market clearing condition with (1.12) (at index \((i,j)\) instead of \((j,i)\)). ### Output and consumption dynamics and associated growth For each time \(t\in\mathbb{N}\) and noise realization, the system (1.9) is nonlinear with \(2I\) equations and \(2I\) variables, and its well-posedness is hence relatively involved. Moreover, it is computationally heavy to solve this system for each tax trajectory and productivity scenario. We thus consider a special value for the parameter \(\sigma\) which allows to derive a unique solution in closed form. From now on, and following [19, page 63], we assume that \(\sigma=1\), namely \(U(x,y):=\log(x)-\frac{y^{1+\varphi}}{1+\varphi}\) on \((0,\infty)^{2}\). **Theorem 1.10**.: _Assume that_ 1. \(\sigma=1\)_,_ 2. \(\mathbf{I}_{I}-\boldsymbol{\lambda}\) _is not singular,_ 3. \(\mathbf{I}_{I}-\Lambda(\mathfrak{d}_{t})^{\top}\) _is not singular for all_ \(t\geq 0\)_._ _Then for all \(t\in\mathbb{N}\), there exists a unique \((C_{t},Y_{t})\) satisfying (1.9). 
Moreover, with \(\mathfrak{e}_{t}^{i}:=\frac{Y_{t}^{i}}{C_{t}^{i}}\) for \(i\in\mathcal{I}\), we have_ \[\mathfrak{e}_{t}=\mathfrak{e}(\mathfrak{d}_{t}):=(\mathbf{I}_{I}-\Lambda( \mathfrak{d}_{t})^{\top})^{-1}\mathbf{1}, \tag{1.14}\] _and using \(\mathcal{B}_{t}=(\mathcal{B}_{t}^{i})_{i\in\mathcal{I}}:=\left[\mathcal{A}_{t }^{i}+v^{i}(\mathfrak{d}_{t})\right]_{i\in\mathcal{I}}\) with_ \[v^{i}(\mathfrak{d}_{t}):=\log\left((\mathfrak{e}_{t}^{i})^{-\frac{\varphi \psi^{i}}{1+\varphi}}\left(\Psi^{i}(\mathfrak{d}_{t})\right)^{\frac{\psi^{i}}{ 1+\varphi}}\prod_{j\in\mathcal{I}}\left(\Lambda^{ji}(\mathfrak{d}_{t})\right) ^{\boldsymbol{\lambda}^{ji}}\right), \tag{1.15}\] _we obtain_ \[C_{t}=\exp\left((\mathbf{I}_{I}-\boldsymbol{\lambda})^{-1}\mathcal{B}_{t}\right). \tag{1.16}\] Proof.: Let \(t\in\mathbb{N}\). When \(\sigma=1\), the system (1.9) becomes for all \(i\in\mathcal{I}\), \[\left\{\begin{array}{ll}Y_{t}^{i}&=C_{t}^{i}+\sum_{j\in \mathcal{I}}\Lambda^{ij}(\mathfrak{d}_{t})\left(\frac{C_{t}^{i}}{C_{t}^{j}} \right)Y_{t}^{j},\\ Y_{t}^{i}&=A_{t}^{i}\left[\Psi^{i}(\mathfrak{d}_{t})\mathfrak{e}_{t}^{i} \right]^{\frac{\psi^{i}}{1+\varphi}}\prod_{j\in\mathcal{I}}\left[\Lambda^{ji}( \mathfrak{d}_{t})C_{t}^{j}\mathfrak{e}_{t}^{i}\right]^{\lambda^{ji}}.\end{array}\right. \tag{1.17}\] For any \(i\in\mathcal{I}\), dividing the first equation in (1.17) by \(C_{t}^{i}\), we get \[\mathfrak{e}_{t}^{i}=1+\sum_{j\in\mathcal{I}}\Lambda^{ij}(\mathfrak{d}_{t}) \mathfrak{e}_{t}^{j},\] which corresponds to (1.14), thanks to (1.2). Using \(\sum_{j\in\mathcal{I}}\boldsymbol{\lambda}^{ji}=1-\psi^{i}\) and \(Y_{t}^{i}=\mathfrak{e}_{t}^{i}C_{t}^{i}\) in the second equation in (1.17), we compute \[C_{t}^{i}=A_{t}^{i}(\mathfrak{e}_{t}^{i})^{-\frac{\varphi\psi^{i}}{1+\varphi}} \left[\Psi^{i}(\mathfrak{d}_{t})\right]^{\frac{\psi^{i}}{1+\varphi}}\prod_{j \in\mathcal{I}}\left[\Lambda^{ji}(\mathfrak{d}_{t})\right]^{\boldsymbol{\lambda }^{ji}}\prod_{j\in\mathcal{I}}(C_{t}^{j})^{\boldsymbol{\lambda}^{ji}}.\] Applying \(\log\) and writing in matrix form, we obtain \((\mathbf{I}_{I}-\boldsymbol{\lambda})\log(C_{t})=\mathcal{B}_{t}\), implying (1.16). **Remark 1.11**.: The matrix \(\mathbf{\lambda}\) is generally not diagonal, and therefore, from (1.16), the sectors (in output and in consumption) are linked to each other through their respective productivity process. Similarly, an introduction of tax in one sector affects the other ones. **Remark 1.12**.: For any \(t\in\mathbb{N}\), \(i\in\mathcal{I}\), we observe that \[\mathcal{B}_{t}^{i}=\mathcal{A}_{t}^{i}+v^{i}(\mathfrak{d}_{t}), \tag{1.18}\] where \(v^{i}(\cdot)\) is defined using (1.15). Namely, \(\mathcal{B}_{t}\) is the sum of the (random) productivity term and a term involving the taxes. The economy is therefore subject to fluctuations of two different natures: _the first one comes from the productivity process while the second one comes from the tax processes._ We now look at the dynamics of production and consumption growth. **Theorem 1.13**.: _For any \(t\in\mathbb{N}^{*}\), let \(\Delta_{t}^{\varpi}:=\log\left(\varpi_{t}\right)-\log\left(\varpi_{t-1}\right)\), for \(\varpi\in\{Y,C\}\). 
Then, with the same assumptions as in Theorem 1.10,_ \[\Delta_{t}^{\varpi}\sim\mathcal{N}\left(m_{t}^{\varpi},\widehat{\Sigma} \right),\qquad\text{for }\varpi\in\{Y,C\},\] _with_ \[\widehat{\Sigma} =\varepsilon^{2}(\mathbf{I}_{I}-\mathbf{\lambda})^{-1}\overline{ \Sigma}(\mathbf{I}_{I}-\mathbf{\lambda}^{\top})^{-1},\] \[m_{t}^{C} =(I-\mathbf{\lambda})^{-1}\left[\overline{\mu}+v(\mathfrak{d}_{t})-v (\mathfrak{d}_{t-1})\right],\] \[(m_{t}^{Y})^{i} =(m_{t}^{C})^{i}+\log(\mathfrak{e}^{i}(\mathfrak{d}_{t}))-\log( \mathfrak{e}^{i}(\mathfrak{d}_{t-1})),\quad\text{for all }i\in\mathcal{I},\] _where \(\overline{\mu}\) and \(\varepsilon^{2}\overline{\Sigma}\) are the mean and the variance of the stationary process \(\Theta\) (Remark 1.2), \(v\) is defined in (1.15) and \(\mathfrak{e}\) in (1.14)._ Proof.: Let \(t\in\mathbb{N}^{*}\), from (1.18), we have, for \(i\in\mathcal{I}\), \[\mathcal{B}_{t}^{i}-\mathcal{B}_{t-1}^{i}=\Theta_{t}^{i}+v^{i}(\mathfrak{d}_{ t})-v^{i}(\mathfrak{d}_{t-1}).\] Combining the previous equality with (1.16), we get \[\Delta_{t}^{C}=(\mathbf{I}_{I}-\mathbf{\lambda})^{-1}\left[\Theta_{t}+v( \mathfrak{d}_{t})-v(\mathfrak{d}_{t-1})\right]. \tag{1.19}\] Applying Remark 1.2 leads to \(\Delta_{t}^{C}\sim\mathcal{N}\left(m_{t}^{C},\widehat{\Sigma}\right)\). Using (1.14), we observe that, for \(i\in\mathcal{I}\), \[(\Delta_{t}^{Y})^{i}=(\Delta_{t}^{C})^{i}+\log(\mathfrak{e}^{i}(\mathfrak{d}_{ t}))-\log(\mathfrak{e}^{i}(\mathfrak{d}_{t-1})),\] which, using the previous characterization of the law of \(\Delta_{t}^{C}\), allows to conclude. From the previous result, we observe that output and consumption growth processes have a stationary variance but a time-dependent mean. In the context of our standing assumption 1.3, we can also make the following observation: **Corollary 1.14**.: _Let \(t\in\mathbb{N}^{*}\). If \(t\leq t_{\circ}\) (before the transition scenario) or \(t\geq t_{\star}\) (after the transition), the carbon taxes are constant and with the same assumptions as in Theorem 1.10, then_ \[\Delta_{t}^{C}=\Delta_{t}^{Y}=(\mathbf{I}_{I}-\boldsymbol{\lambda})^{-1}\Theta_ {t}. \tag{1.20}\] Theorem 1.13 and Corollary 1.14 show that our economy follows three regimes: * Before the climate transition where carbon taxes are constant, the economy is a stationary state led by productivity. * During the transition, the economy is in a transitory state led by productivity and carbon taxes. * After the transition, we reach a constant carbon price and the economy returns in a stationary state ruled by productivity. ## 2 A Firm Valuation Model When an economy is in good health, the probabilities of default are relatively low, but when it enters a recession, the number of failed firms increases significantly. The same phenomenon is observed on the loss given default. This relationship between default rate and business cycle has been extensively studied in the literature: Nickell [32] quantifies the dependency between business cycles and rating transition probabilities while Bangia [3] shows that the loss distribution of credit portfolios varies highly with the health of the economy, and Castro [11] uses an econometric model to show the link between macroeconomic conditions and the banking credit risk in European countries. Following these works, Pesaran [34] uses an econometric model to empirically characterize the time series behaviour of probabilities of default and of recovery rates. 
The goal of that work is _"to show how global macroeconometric models can be linked to firm-specific return processes which are an integral part of Merton-type credit risk models so that quantitative answers to such questions can be obtained"_. This simply implies that macroeconomic variables are used as systemic factors introduced in the Merton model. The endogenous variables typically include _real GDP, inflation, interest rate, real equity prices, exchange rate and real money balances_. One way to choose the macroeconomic variables would be to run a LASSO regression between the logit function (\(p\mapsto\log\left(\frac{p}{1-p}\right)\) on \((0,1)\)) of observed default rates of firms and a set of macroeconomic variables. We perform such an analysis on a segment of S&P's data in C. In addition to this statistical work, Baker, Delong and Krugman [2] show through three different models that, in a steady state economy, economic growth and asset returns are linearly related. On the one hand, economic growth is equivalent to productivity growth. On the other hand, the physical capital rate of gross profit, the net rate of return on a balanced financial portfolio and the net rate of return on equities are supposed to behave similarly. In particular, in the Solow model [38], the _physical capital rate of gross profit_ is proportional to the return-to-capital parameter, to the productivity growth, and inversely proportional to the gross saving. In the Diamond model [13], the _net rate of return on a balanced financial portfolio_ is proportional to the reduction in labor productivity growth. In the Ramsey model [36] with a log utility function, the _net rate of return on equities_ is proportional to the reduction in labor productivity growth. Consider a portfolio of \(N\in\mathbb{N}^{*}\) firms and fix \(n\in\{1,\ldots,N\}\). Inspired by the aforementioned works, we introduce the following assumption: **Assumption 2.1**.: The \(\mathbb{R}^{N}\)-valued return process on assets of the firms denoted by \((\omega_{t})_{t\in\mathbb{N}^{*}}\) is linear in the economic factors (consumption growth by sector introduced in (1.19)), specifically we set for all \(t\in\mathbb{N}\), \[\omega_{t}=\tilde{\mathfrak{a}}\Delta_{t}^{C}+\mathfrak{b}_{t},\] for \(\tilde{\mathfrak{a}}\in\mathbb{R}^{N\times I}\), where the idiosyncratic noise \((\mathfrak{b}_{t})_{t\in\mathbb{N}}:=(\mathfrak{b}_{t}^{n})_{t\in\mathbb{N}, 1\leq n\leq N}\) is i.i.d. with law \(\mathcal{N}(0,\mathrm{diag}(\sigma_{\mathfrak{b}^{n}}^{2}))\) with \(\sigma_{\mathfrak{b}^{n}}>0\) for \(n\in\{1,\ldots,N\}\). Moreover, \((\Delta_{t}^{C})_{t\in\mathbb{N}^{*}}\) and \((\mathfrak{b}_{t})_{t\in\mathbb{N}}\) are independent. **Remark 2.2**.: The above definition of assets returns can be rewritten, with \(\mathfrak{a}:=\tilde{\mathfrak{a}}(\mathbf{I}_{I}-\boldsymbol{\lambda})^{-1}\), as \[\omega_{t}=\mathfrak{a}\left(\Theta_{t}+v(\mathfrak{d}_{t})-v(\mathfrak{d}_{t -1})\right)+\mathfrak{b}_{t}, \tag{2.1}\] according to (1.19). We call \(\mathfrak{a}\) and \(\tilde{\mathfrak{a}}\)_factor loadings_, quantifying the extent to which \(\omega\) is related to \(\Delta^{C}\). We define the filtration \(\mathbb{F}=(\mathcal{F}_{t})_{t\in\mathbb{N}}\) by \(\mathcal{F}_{t}=\sigma\left(\mathcal{G}_{t}\cup\sigma\left\{\mathfrak{b}_{s}: s\in[0,t]\cap\mathbb{N}\right\}\right)\) for \(t\in\mathbb{N}\), denote \(\mathbb{E}_{t}[\cdot]:=\mathbb{E}[\cdot|\mathcal{F}_{t}]\) and, for all \(1\leq n\leq N\), \[\mathcal{W}_{t}^{n}:=\sum_{u=1}^{t}\mathfrak{b}_{u}^{n}. 
\tag{2.2}\] In addition to the empirical results on the dependency between default indicators and business cycles, firm valuation models provide additional explanatory arguments. On the one hand, the Merton model says that default metrics (such as default probability) depend on the firm's value; on the other hand, valuation models help express the firm's value as a function of economic cycles. Reis and Augusto [35] organize valuations models in five groups: _models based on the discount of cash flows, models of dividends, models related to the firm's value, models based on accounting elements creation, and sustaining models in real options._ **Definition 2.3**.: Considering the Discounted Cash Flows method, following Kruschwitz and Loffer [26], the firm value is the sum of the present value of all future cash flows. For any time \(t\geq 0\) and firm \(n\in\{1,\ldots,N\}\), we note \(F_{t}^{n}\) the free cash flows of \(n\) at \(t\), and \(r>0\) the discount rate1. Then, the value \(V_{t}^{n}\) of the firm \(n\), at time \(t\), is Footnote 1: Here, \(r\) is constant, deterministic and independent of the companies. However, in a more general setting, it could be a stochastic process depending on the firm. \[V_{t}^{n}:=\mathbb{E}_{t}\left[\sum_{s=0}^{+\infty}e^{-rs}F_{t+s}^{n}\right]. \tag{2.3}\] To calculate precisely the firm value, we introduce first the cash flows dynamics. **Assumption 2.4**.: For \(n\in\{1,\ldots,N\}\), set \[F_{t+1}^{n}=F_{t}^{n}e^{\omega_{t+1}^{n}},\qquad\text{for }t\in\mathbb{N},\] with \(F_{0}^{n}\) and \(\frac{1}{F_{0}^{n}}\) both belonging to \(\mathcal{L}^{\infty}(\mathcal{F}_{0})\). The following proposition, proved in B.1, studies the well-posedness of the firm value. **Proposition 2.5**.: _Assume that \(|\Gamma|<1\) and that_ \[\rho:=\max_{1\leq n\leq N}\left\{\mathfrak{a}^{n}\overline{\mu}+\frac{1}{2} \sigma_{\mathfrak{b}^{n}}^{2}+\frac{\varepsilon^{2}}{2}|\mathfrak{a}^{n\cdot }|^{2}|\sqrt{\Sigma}|^{2}(1-|\Gamma|)^{-2}\right\}<r. \tag{2.4}\] _Then, for any \(t\in\mathbb{N}\) and \(1\leq n\leq N\), \(V_{t}^{n}\) is well defined and for some \(p>1\), which does not depend on t nor on \(n\) but on \(\rho\) and \(r\), \(\|V_{t}^{n}\|_{p}\leq C_{p}\|F_{t}^{n}\|_{q}<+\infty\), for some \(q>1\) that depends on \(p,\rho\) and \(r\)._ **Remark 2.6**.: In the above proposition, (2.4) guarantees the non-explosion of the expected discounted future cash flows of the firm. Moreover, we could remove the condition \(|\Gamma|<1\). Indeed, we know that, by Assumption 1.1, \(\Gamma\) has eigenvalues with absolute value strictly lower than one. However, we would need to alter condition (2.4) by using a matrix norm \(|\cdot|_{s}\) (subordinated) s.t. \(|\Gamma|_{s}<1\). It should also involve equivalence of norm constants between \(|\cdot|\) and \(|\cdot|_{s}\). Now, the question is how to obtain a more explicit expression for \(V_{t}^{n}\). We can describe it as a function of the underlying processes driving the economy. However, this will not lead to an easily tractable formula for \(V\), but could be written as a fixed-point problem that can be solved by numerical methods such as Picard iteration [7] or by deep learning methods[24]. To facilitate the forthcoming credit risk analysis, we approximate \(\frac{V_{t}^{n}}{F_{t}^{n}}\) by the first term of an expansion in terms of the noise intensity \(\varepsilon\) appearing in \(\Theta\) (Assumption 1.1). 
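Before deriving this proxy, the following sketch estimates the firm value of Definition 2.3 by plain Monte Carlo under Assumptions 2.1 and 2.4, truncating the infinite sum at a finite horizon and keeping the taxes constant so that the terms \(v(\mathfrak{d}_{t})-v(\mathfrak{d}_{t-1})\) in (2.1) vanish. All numerical values are assumed for illustration and are chosen so that the non-explosion condition (2.4) holds.

```python
# Monte Carlo sketch of the firm value of Definition 2.3 under Assumptions 2.1 and 2.4,
# at t = 0 and with constant taxes, so that the v(d_t) - v(d_{t-1}) terms in (2.1) vanish.
# The infinite sum is truncated at S_max; all numerical values are assumed for illustration
# and chosen so that the non-explosion condition (2.4) holds.
import numpy as np

rng = np.random.default_rng(1)

I, S_max, n_paths = 2, 300, 20_000
mu = np.array([0.005, 0.004])
Gamma = np.array([[0.5, 0.1], [0.0, 0.4]])
Sigma_chol = 0.02 * np.eye(I)            # Cholesky factor of Sigma
eps = 0.5
a_n = np.array([0.6, 0.3])               # factor loadings a^{n.} of the firm
sigma_b, r, F0 = 0.05, 0.06, 1.0         # idiosyncratic vol, discount rate, initial cash flow

Theta = np.tile(np.linalg.solve(np.eye(I) - Gamma, mu), (n_paths, 1))   # start at the stationary mean
logF = np.full(n_paths, np.log(F0))
V_hat = np.full(n_paths, F0)             # s = 0 term of the discounted sum

for s in range(1, S_max + 1):
    shocks = rng.standard_normal((n_paths, I)) @ Sigma_chol.T
    Theta = mu + Theta @ Gamma.T + eps * shocks
    omega = Theta @ a_n + sigma_b * rng.standard_normal(n_paths)        # returns (2.1), constant taxes
    logF += omega                                                       # F_s = F_{s-1} * exp(omega_s)
    V_hat += np.exp(-r * s + logF)

print("Monte Carlo estimate of V_0 / F_0:", V_hat.mean() / F0)
```

The proxy \(\mathcal{V}^{n}\) constructed below replaces this simulation by a closed form whose conditional law is explicit.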
An expanded expression of the firm value is \[V_{t}^{n}=F_{t}^{n}\left(1+\sum_{s=1}^{+\infty}e^{-rs}\mathbb{E}_{t}\left[ \exp\left(\mathfrak{a}^{n\cdot}\left(v(\mathfrak{d}_{t+s})-v(\mathfrak{d}_{t} )+\sum_{u=1}^{s}\Theta_{t+u}\right)+\sum_{u=1}^{s}\mathfrak{b}_{t+u}^{n} \right)\right]\right).\] Let us introduce, for a firm \(n\) and \(t\in\mathbb{N}\), the quantity \[\mathcal{V}_{t}^{n}:=F_{t}^{n}\left(1+\sum_{s=1}^{+\infty}e^{-rs}\mathbb{E}_{t} \left[\exp\left(s\mathfrak{a}^{n}\overline{\mu}+\mathfrak{a}^{n\cdot}\left(v( \mathfrak{d}_{t+s})-v(\mathfrak{d}_{t})\right)+\sum_{u=1}^{s}\mathfrak{b}_{t+u }^{n}\right)\right]\right). \tag{2.5}\] We remind that \(\Theta\) depends on \(\varepsilon\) according to the Standing Assumption 1.1, therefore \(\omega\) according to Assumption 2.1 and \(F^{n}\) according to Assumption 2.4 also. This gives the dependence of \(V^{n}\) on \(\varepsilon\). From (2.5), \(\frac{\mathcal{V}_{t}^{n}}{F_{t}^{n}}\) almost corresponds to the definition of \(\frac{V_{t}^{n}}{F_{t}^{n}}\) but with the noise term coming from the economic factor in the definition of \(\Theta\) set to zero, for the dates after \(t\), according to (2.3), (B.1) and (2.1). We first make the following observation. **Lemma 2.7**.: _For any \(n\in\{1,\ldots,N\}\), assume that \(\varrho_{n}:=\frac{1}{2}\sigma_{\mathfrak{b}_{n}}^{2}+\mathfrak{a}^{n\cdot} \overline{\mu}-r<0\). Then \(\mathcal{V}_{t}^{n}\) is well defined for all \(t\in\mathbb{N}\) and_ \[\mathcal{V}_{t}^{n}=F_{0}^{n}\mathfrak{R}_{t}^{n}(\mathfrak{d})\exp\left( \mathfrak{a}^{n\cdot}(\mathcal{A}_{t}^{\circ}-v(\mathfrak{d}_{0}))\right) \exp\left(\mathcal{W}_{t}^{n}\right), \tag{2.6}\] _where \(\mathcal{W}\) is defined in (2.2) and_ \[\mathfrak{R}_{t}^{n}(\mathfrak{d}):=\sum_{s=0}^{\infty}e^{\varrho_{n}s}\exp \left(\mathfrak{a}^{n\cdot}v(\mathfrak{d}_{t+s})\right). \tag{2.7}\] _Moreover, with \(t_{\circ}\) and \(t_{\star}\) defined in Standing Assumption 1.3, we obtain the following explicit form,_ \[\mathfrak{R}_{t}^{n}(\mathfrak{d})=\left\{\begin{array}{ll} \frac{e^{\mathfrak{a}^{n}\cdot v(\mathfrak{d}_{t_{\star}})}}{1-e^{\varrho_{n}} },&\text{if }t\geq t_{\star},\\ \sum_{s=0}^{t_{\star}-t}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n \cdot}v(\mathfrak{d}_{t+s})\right)+\frac{e^{\mathfrak{a}^{n\cdot}v(\mathfrak{ d}_{t_{\star}})+\varrho_{n}(t_{\star}-t+1)}}{1-e^{\varrho_{n}}},&\text{if }t_{\circ} \leq t<t_{\star},\\ e^{\mathfrak{a}^{n}v(\mathfrak{d}_{t_{\circ}})}\frac{1-e^{\varrho_{n}(t_{ \circ}-t+1)}}{1-e^{\varrho_{n}}}+\sum_{s=t_{\circ}-t+1}^{t_{\star}-t}e^{\varrho _{n}s}e^{\mathfrak{a}^{n\cdot}v(\mathfrak{d}_{t+s})}+\frac{e^{\mathfrak{a}^{n \cdot}v(\mathfrak{d}_{t_{\star}})+\varrho_{n}(t_{\star}-t+1)}}{1-e^{\varrho_{ n}}},&\text{otherwise}.\end{array}\right.\] Proof.: Let \(t\in\mathbb{N}\) and introduce, for \(K>t_{\star}\), \[\mathcal{V}_{t}^{n,K}:=F_{t}^{n}\left(1+\sum_{s=1}^{K}e^{-rs}\mathbb{E}_{t} \left[\exp\left(s\mathfrak{a}^{n\cdot}\overline{\mu}+\mathfrak{a}^{n\cdot} \left(v(\mathfrak{d}_{t+s})-v(\mathfrak{d}_{t})\right)+\sum_{u=1}^{s} \mathfrak{b}_{t+u}^{n}\right)\right]\right). \tag{2.8}\] Similar computations as (in fact easier than) the ones performed in the proof of Proposition 2.5 show that \(\mathcal{V}_{t}^{n}=\lim_{K\to+\infty}\mathcal{V}_{t}^{n,K}\) is well defined in \(\mathcal{L}^{q}(\mathcal{H},\mathbb{E})\) for any \(q\geq 1\). 
Furthermore, \[\mathcal{V}_{t}^{n,K}=F_{t}^{n}\left(1+\sum_{s=1}^{K}e^{\varrho_{n}s}\exp\left( \mathfrak{a}^{n\cdot}\left(v(\mathfrak{d}_{t+s})-v(\mathfrak{d}_{t})\right) \right)\right)=F_{t}^{n}\left(1+e^{-\mathfrak{a}^{n\cdot}v(\mathfrak{d}_{t})} \sum_{s=1}^{K}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n\cdot}v(\mathfrak{d}_{t+s })\right)\right),\] where \(\varrho_{n}\) is defined in the lemma, and from Assumptions 2.1 and 2.4, \[F_{t}^{n}=F_{0}^{n}\exp\left(\sum_{u=1}^{t}w_{u}^{n}\right)=F_{0}^{n}e^{ \mathfrak{a}^{n\cdot}\left(v(\mathfrak{d}_{t})-v(\mathfrak{d}_{0})\right)}\exp \left(\mathfrak{a}^{n\cdot}\mathcal{A}_{t}^{\circ}+\mathcal{W}_{t}^{n}\right).\] We then have \[F_{t}^{n}\left(1+e^{-\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t})}\sum_{s=1}^{K}e^{ \varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t+s})\right) \right)=F_{0}^{n}e^{-\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t})}\exp\left( \mathfrak{a}^{n^{\cdot}}\mathcal{A}_{t}^{\circ}+\mathcal{W}_{t}^{n}\right)\sum _{s=0}^{K}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t+s })\right).\] (1) If \(t<t_{\circ}\), then \[\mathfrak{R}_{t}^{n,K}(\mathfrak{d}) :=\sum_{s=0}^{K}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}} v(\mathfrak{d}_{t+s})\right)\] \[=\sum_{s=0}^{t_{\circ}-t}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{ n^{\cdot}}v(\mathfrak{d}_{t+s})\right)+\sum_{s=t_{\circ}-t+1}^{t_{\star}-t}e^{ \varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t+s})\right)+ \sum_{s=t_{\star}-t+1}^{K}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}} v(\mathfrak{d}_{t+s})\right)\] \[=e^{\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t_{\circ}})}\frac{1- e^{\varrho_{n}(t_{\circ}-t+1)}}{1-e^{\varrho_{n}}}+\sum_{s=t_{\circ}-t+1}^{t_{ \star}-t}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t+s })\right)+e^{\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t+s})+\varrho_{n}(t_{ \star}-t+1)}\frac{1-e^{\varrho_{n}(K-t_{\star}+t)}}{1-e^{\varrho_{n}}}.\] (2) If \(t_{\circ}\leq t<t_{\star}\), then \[\sum_{s=0}^{K}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}} v(\mathfrak{d}_{t+s})\right) =\sum_{s=0}^{t_{\star}-t}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{ n^{\cdot}}v(\mathfrak{d}_{t+s})\right)+\sum_{s=t_{\star}-t+1}^{K}e^{\varrho_{n}s} \exp\left(\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t+s})\right)\] \[=\sum_{s=0}^{t_{\star}-t}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{ n^{\cdot}}v(\mathfrak{d}_{t+s})\right)+e^{\mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t _{\star}})+\varrho_{n}(t_{\star}-t+1)}\frac{1-e^{\varrho_{n}(K-t_{\star}+t)}}{ 1-e^{\varrho_{n}}}.\] (3) If \(t\geq t_{\star}\), then \[\sum_{s=0}^{K}e^{\varrho_{n}s}\exp\left(\mathfrak{a}^{n^{\cdot}} v(\mathfrak{d}_{t+s})\right)=\sum_{s=0}^{K}e^{\varrho_{n}s}\exp\left( \mathfrak{a}^{n^{\cdot}}v(\mathfrak{d}_{t_{\star}})\right)=e^{\mathfrak{a}^{n ^{\cdot}}v(\mathfrak{d}_{t_{\star}})}\frac{1-e^{\varrho_{n}(K+1)}}{1-e^{ \varrho_{n}}}.\] Finally, \(e^{\varrho_{n}(K+1)}\) and \(e^{\varrho_{n}(K-t_{\star}+t)}\) converge to \(0\) for \(\varrho_{n}<0\) as \(K\) tends to infinity, and the result follows. It follows from Lemma 2.7 that at time \(t\in\mathbb{N}\), the (proxy of the) firm value \(\mathcal{V}_{t}^{n}\) is a function of the productivity processes \(\mathcal{A}_{t}\), the carbon taxes processes \(\tau,\zeta,\kappa\), the parameters \(F_{0}^{n}\), \(\mathfrak{a}^{n^{\cdot}}\), \(\sigma_{\mathfrak{b}^{n}}^{2},\varepsilon\) and the different parameters introduced in Section 1. Moreover, we can make precise the law of \(\mathcal{V}_{t}^{n}|\mathcal{G}_{t}\). 
**Corollary 2.8**.: _For all \(t\in\mathbb{N}\),_ \[(\log\mathcal{V}_{t}^{n})_{1\leq n\leq N}|\mathcal{G}_{t}\sim\mathcal{N} \left(\log(F_{0}^{n})+\mathfrak{m}(\mathfrak{d},t,\mathcal{A}_{t}^{\circ}), \operatorname{diag}[t\sigma_{\mathfrak{b}_{n}}^{2}]\right),\] _with for \(n\in\{1,\ldots,N\}\),_ \[\mathfrak{m}^{n}(\mathfrak{d},t,\mathcal{A}_{t}^{\circ}):=\mathfrak{a}^{n^{ \cdot}}\left(\mathcal{A}_{t}^{\circ}-v(\mathfrak{d}_{0})\right)+\log( \mathfrak{R}_{t}^{n}(\mathfrak{d})).\] Proof.: Let \(t\geq 1\) and \(n\in\{1,\ldots,N\}\), we have from (2.6) \[\mathcal{V}_{t}^{n}=F_{0}^{n}\mathfrak{R}_{t}^{n}(\mathfrak{d})\exp\left(\mathfrak{ a}^{n\cdot}(\mathcal{A}_{t}^{\circ}-v(\mathfrak{d}_{0}))\right)\exp\left(\sum_{u=1}^ {t}\mathfrak{b}_{u}^{n}\right),\] then \[\log(\mathcal{V}_{t}^{n})=\log(F_{0}^{n})+\log(\mathfrak{R}_{t}^{n}(\mathfrak{d }))+\mathfrak{a}^{n\cdot}(\mathcal{A}_{t}^{\circ}-v(\mathfrak{d}_{0}))+\sum_{u =1}^{t}\mathfrak{b}_{u}^{n}.\] Therefore \(\log(\mathcal{V}_{t}^{n})|\mathcal{G}_{t}\sim\mathcal{N}\left(\log(F_{0}^{n} \mathfrak{R}_{t}^{n}(\mathfrak{d}))+\mathfrak{a}^{n\cdot}(\mathcal{A}_{t}^{ \circ}-v(\mathfrak{d}_{0})),t\sigma_{\mathfrak{b}^{n}}^{2}\right)\) and the conclusion follows. The following remark whose proof is in B.2 gives the law of the firm value at time \(t+T\) conditionally on \(\mathcal{G}_{t}\), with \(t,T\in\mathbb{N}\). **Remark 2.9**.: For \(t,T\in\mathbb{N}\) and \(1\leq n\leq N\), denote \[\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t}):= \mathfrak{m}^{n}(\mathfrak{d},t,\mathcal{A}_{t}^{\circ})+\log\left(\frac{ \mathfrak{R}_{t+T}^{n}(\mathfrak{d})}{\mathfrak{R}_{t}^{n}(\mathfrak{d})} \right)+\mathfrak{a}^{n\cdot}\Gamma\Upsilon_{T-1}\Theta_{t}+\mathfrak{a}^{n \cdot}\left(\sum_{u=1}^{T}\Upsilon_{u-1}\right)\mu, \tag{2.9}\] and \[\mathcal{L}^{n}(t,T):=\sigma_{\mathfrak{b}^{n}}^{2}(t+T)+\varepsilon^{2}\sum_ {u=1}^{T}(\mathfrak{a}^{n\cdot}\Upsilon_{T-u})\Sigma(\mathfrak{a}^{n\cdot} \Upsilon_{T-u})^{\top}. \tag{2.10}\] We have \[\log(\mathcal{V}_{t+T}^{n})|\mathcal{G}_{t}\sim\mathcal{N}\left(\log(F_{0}^{n} )+\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t}), \mathcal{L}^{n}(t,T)\right).\] In the following, we will work directly with \(\mathcal{V}_{t}^{n}\) instead of \(V_{t}^{n}\), as it appears to be a tractable proxy (its law can be easily identified). Indeed, this is justified when the noise term in the productivity process is small as shown in the following result [2]. The following proposition, whose proof is given in B.3, shows that \(\frac{\mathcal{V}_{t}^{n}}{F_{t}^{n}}\) and \(\frac{V_{t}^{n}}{F_{t}^{n}}\) gets closer as \(\varepsilon\) gets to \(0\). **Proposition 2.10**.: _Assume that \(|\Gamma|<1\) and that (2.4) is satisfied, then_ \[\mathbb{E}\left[\left|\frac{V_{t}^{n}}{F_{t}^{n}}-\frac{\mathcal{V}_{t}^{n}}{F _{t}^{n}}\right|\right]\leq C\varepsilon,\] _for some positive constant \(C\) (depending on \(t,\rho\))._ ## 3 Credit Risk Model ### General information on credit risk In credit risk assessment, Internal Rating Based (IRB) [33] introduces four parameters: the probability of default (PD) measures the default risk associated with each borrower, the exposure at default (EAD) measures the outstanding debt at the time of default, the loss given default (LGD) denotes the expected percentage of EAD that is lost if the debtor defaults, and the effective maturity \(T\) represents the duration of the credit. 
With these four parameters, we can compute the portfolio loss \(L\), with a few assumptions: **Assumption 3.1**.: Consider a portfolio of \(N\in\mathbb{N}^{*}\) credits. For \(1\leq n\leq N\), 1. \((\mathrm{EAD}^{n}_{t})_{t\in\mathbb{N}^{*}}\) is a \(\mathbb{R}^{+}_{*}\)-valued deterministic process; 2. \((\mathrm{LGD}^{n}_{t})_{t\in\mathbb{N}^{*}}\) is a \((0,1]\)-valued deterministic process; 3. \(\mathfrak{B}^{n}\) is a deterministic scalar. We will also denote \(B^{n}:=\dfrac{\mathfrak{B}^{n}}{F^{n}_{0}}\). Even if the LGD and the EAD are assumed here to be deterministic, we could take them to be stochastic. In particular, they could (or should) depend on the climate transition scenario: (1) the LGD could be impacted by the premature write down of assets - that is stranded assets - due to the climate transition, while (2) the EAD could depend on the bank's balance sheet, which can be modified according to the bank's policy (if related to climate transition). This will be the object of future research. **Remark 3.2**.: We recall that for all \(n\in\{1,\ldots,N\}\), we consider \(\mathcal{V}^{n}_{t}\), defined in (2.5), to be the proxy value of firm \(n\) at time \(t\) and its conditional law given in Corollary 2.8. In the Merton model that we follow, the default of entity \(n\) occurs when \(\mathcal{V}^{n}_{t}\) falls below a given barrier \(\mathfrak{B}^{n}\), related to the net debt, given in Assumption 3.1(3). **Definition 3.3**.: For \(t\geq 1\), the potential loss of the portfolio at time \(t\) is defined as \[L^{N}_{t}:=\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t}\cdot\mathrm{LGD}^{n}_{t}\cdot \mathbf{1}_{\{\mathcal{V}^{n}_{t}\leq\mathfrak{B}^{n}\}}. \tag{3.1}\] We take the point of view of the bank managing its credit portfolio and which has to compute various risk measures impacting its daily/monthly/quarterly/yearly routine, some of which may be required by regulators. We are also interested in understanding and visualizing how these risk measures evolve in time and particularly how they change due to carbon tax paths, i.e. due to transition scenarios. This explains why all these measures are defined below with respect to the information available at \(t\), namely the \(\mathbb{F}\)-filtration. We now study statistics of the process \((L^{N}_{t})_{t\geq 0}\), typically its mean, variance, and quantiles, under various transition scenarios. This could be achieved through (intensive) numerical simulations, however we shall assume that the portfolio is fine grained so that the idiosyncratic risks can be averaged out. The above quantities can then be approximated by only taking into account the common risk factors. We thus make the following assumption: **Assumption 3.4**.: For all \(t\in\mathbb{N}^{*}\), the family \((\mathrm{EAD}^{n}_{t})_{n=1,\ldots,N}\) is a sequence of positive constants such that 1. \(\sum_{n\geq 1}\mathrm{EAD}^{n}_{t}=+\infty\); 2. there exists \(\upsilon>0\) such that \(\frac{\mathrm{EAD}^{n}_{t}}{\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t}}=\mathcal{O}(N^ {-(\frac{1}{2}+\upsilon)})\), as \(N\) tends to infinity. The following theorem, similar to the one introduced in [20, Propositions 1, 2] and used when a portfolio is perfectly fine grained, shows that we can approximate the portfolio loss by the conditional expectation of losses given the systemic factor. 
**Theorem 3.5**.: _For all \(t\in\mathbb{N}\), define_ \[\mathrm{L}_{t}^{\mathbb{G},N}:=\mathbb{E}\left[L_{t}^{N}\big{|}\mathcal{G}_{t}\right]=\sum_{n=1}^{N}\mathrm{EAD}_{t}^{n}\cdot\mathrm{LGD}_{t}^{n}\cdot\Phi\left(\frac{\log(B^{n})-\mathfrak{m}^{n}(\mathfrak{d},t,\mathcal{A}_{t}^{\circ})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t}}\right),\] _where \(\mathfrak{m}^{n}(\mathfrak{d},t,\mathcal{A}_{t}^{\circ})\) is defined in Corollary 2.8. Under Assumptions 3.1 and 3.4, \(L_{t}^{N}-\mathrm{L}_{t}^{\mathbb{G},N}\) converges to zero almost surely as \(N\) tends to infinity, for all \(t\in\mathbb{N}\)._

This implies that, at each time \(t\in\mathbb{N}\), in the limit, we only require the knowledge of \(\mathrm{L}_{t}^{\mathbb{G},N}\) to approximate the distribution of \(L_{t}^{N}\). In the following, we will use \(\mathrm{L}_{t}^{\mathbb{G},N}\) as a proxy for \(L_{t}^{N}\).

Proof.: Let \(t\in\mathbb{N}\). We have \[\mathrm{L}_{t}^{\mathbb{G},N} =\mathbb{E}\left[L_{t}^{N}\big{|}\mathcal{G}_{t}\right]\] \[=\mathbb{E}\left[\sum_{n=1}^{N}\mathrm{EAD}_{t}^{n}\cdot\mathrm{LGD}_{t}^{n}\cdot\mathbf{1}_{\{\mathcal{V}_{t}^{n}\leq\mathfrak{B}^{n}\}}\middle|\mathcal{G}_{t}\right]\quad\text{from (3.1) and Assumption 3.1}\] \[=\sum_{n=1}^{N}\mathrm{EAD}_{t}^{n}\cdot\mathrm{LGD}_{t}^{n}\cdot\mathbb{P}\left[\mathcal{V}_{t}^{n}\leq\mathfrak{B}^{n}\middle|\mathcal{G}_{t}\right]\] \[=\sum_{n=1}^{N}\mathrm{EAD}_{t}^{n}\cdot\mathrm{LGD}_{t}^{n}\cdot\Phi\left(\frac{\log(\mathfrak{B}^{n})-\log(F_{0}^{n})-\mathfrak{m}^{n}(\mathfrak{d},t,\mathcal{A}_{t}^{\circ})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t}}\right)\quad\text{from Corollary 2.8.}\] The rest of the proof requires a version of the strong law of large numbers (Appendix of [20, Propositions 1, 2]), where the systematic risk factor is \(\mathcal{A}_{t}^{\circ}\).

For stress testing, it is fundamental to estimate the evolution of the bank's capital through statistics of the loss. In particular, some key measures for the bank to understand the (dynamics of the) risk in its portfolios of loans are the loss and the probability of default conditional on the information generated by the risk factors. We would like to understand how these key measures are distorted when we introduce carbon taxes and, to this aim, we rely on the results derived in Section 1 and Section 2. Precisely, given a portfolio of \(N\in\mathbb{N}^{*}\) counterparties, each belonging to one of the sectors, for a date \(t\in\mathbb{N}\) and a horizon \(T\in\mathbb{N}\), we would like to compute, at time \(t\), these risk measures for the portfolio at horizon \(T\).

**Definition 3.6**.: Let \(t\geq 0\) be the time at which the risk measures are computed over a period \(T\geq 1\). As classically done (and shown in Figure 1), the potential loss is divided into three components [39]:

* The conditional Expected Loss (EL) is the amount that an institution expects to lose on a credit exposure seen at \(t\) and over a given time horizon \(T\). It has to be quantified, included in the products, and charged to the clients, and reads \[\mathrm{EL}_{t}^{N,T}:=\mathbb{E}\left[\mathrm{L}_{t+T}^{\mathbb{G},N}\middle|\mathcal{G}_{t}\right].\] In the normal course of business, a financial institution should set aside an amount equal to EL as a provision or reserves, even if it should be covered by the portfolio's earnings.
* The conditional Unexpected Loss (UL) is the amount by which potential credit losses might exceed EL. UL should be covered by risk capital.
For \(\alpha\in(0,1)\), \[\mathrm{UL}_{t,\alpha}^{N,T}:=\mathrm{VaR}_{t}^{\alpha,N,T}-\mathrm{EL}_{t}^{N,T},\quad\text{where}\quad 1-\alpha=\mathbb{P}\left[\mathrm{L}_{t+T}^{\mathbb{G},N}\leq\mathrm{VaR}_{t}^{\alpha,N,T}\middle|\mathcal{G}_{t}\right].\] (3.2)
* The Stressed Loss (or Expected Shortfall, ES) is the amount by which potential credit losses might exceed the capital requirement \(\mathrm{VaR}_{t}^{\alpha,N,T}\): \[\mathrm{ES}_{t,\alpha}^{N,T}:=\mathbb{E}\left[L_{t+T}^{N}\middle|L_{t+T}^{N}\geq\mathrm{VaR}_{t}^{\alpha,N,T},\mathcal{G}_{t}\right],\qquad\text{for }\alpha\in(0,1).\]

In the following sections, we write the expression of the portfolio EL and UL as functions of the parameters and of the processes introduced above, and introduce each entity's probability of default.

### Expected loss

The following proposition computes the default probability of each firm and the portfolio EL.

Figure 1: Loss distribution. Source: Page 8 in [39].

**Proposition 3.7**.: _Let \((\Upsilon_{u})_{u\in\mathbb{N}}\) and \((\mathfrak{R}^{n}_{u}(\mathfrak{d}))_{u\in\mathbb{N}}\) (with \(n\in\{1,\ldots,N\}\)) be as in Appendix A and (2.7). For \((a,\theta)\in\mathbb{R}^{I}\times\mathbb{R}^{I}\), \(t\in\mathbb{N}\), \(T\in\mathbb{N}^{*}\), and \(n\in\{1,\ldots,N\}\), define_ \[\mathfrak{L}^{n}(\mathfrak{d},t,T,a,\theta):=\Phi\left(\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,a,\theta)}{\sqrt{\mathcal{L}^{n}(t,T)}}\right),\] _where \(\mathcal{K}^{n}(\mathfrak{d},t,T,a,\theta)\) and \(\mathcal{L}^{n}(t,T)\) are defined in Remark 2.9. Then, the (conditional) probability of default of the entity \(n\) at time \(t\) over the horizon \(T\) is_ \[\mathrm{PD}^{n}_{t,T,\mathfrak{d}}:=\mathbb{P}\left(\mathcal{V}^{n}_{t+T}\leq\mathfrak{B}^{n}|\mathcal{G}_{t}\right)=\mathfrak{L}^{n}(\mathfrak{d},t,T,\mathcal{A}^{\circ}_{t},\Theta_{t}), \tag{3.3}\] _and the (conditional) EL of the portfolio at time \(t\) over the horizon \(T\) reads_ \[\mathrm{EL}^{N,T}_{t,\mathfrak{d}}:=\mathrm{EL}^{N,T}_{t}=\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}\cdot\mathfrak{L}^{n}(\mathfrak{d},t,T,\mathcal{A}^{\circ}_{t},\Theta_{t}). \tag{3.4}\]

Proof.: Let \(t\in\mathbb{N}\) and \(T\in\mathbb{N}^{*}\). For \(1\leq n\leq N\), Remark 2.9 gives the law of \(\log(\mathcal{V}^{n}_{t+T})|\mathcal{G}_{t}\), from which we directly obtain (3.3). Moreover, \[\mathrm{EL}^{N,T}_{t,\mathfrak{d}} =\mathbb{E}\left[\mathrm{L}^{\mathbb{G},N}_{t+T}|\mathcal{G}_{t}\right]\] \[=\mathbb{E}\left[\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}\cdot\Phi\left(\frac{\log(\mathfrak{B}^{n})-\log(F^{n}_{0})-\mathfrak{m}^{n}(\mathfrak{d},t+T,\mathcal{A}^{\circ}_{t+T})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\middle|\mathcal{G}_{t}\right]\] \[=\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}\cdot\mathbb{E}\left[\Phi\left(\frac{\log(B^{n})-\mathfrak{m}^{n}(\mathfrak{d},t+T,\mathcal{A}^{\circ}_{t+T})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\middle|\mathcal{G}_{t}\right],\] where the last equality comes from Assumption 3.1(1)-(3).
Now, \[\mathfrak{m}^{n}(\mathfrak{d},t+T,\mathcal{A}^{\circ}_{t+T}) =\mathfrak{a}^{n\cdot}\left(\mathcal{A}^{\circ}_{t+T}-v(\mathfrak{d}_{0})\right)+\log(\mathfrak{R}^{n}_{t+T}(\mathfrak{d}))\] \[=\mathfrak{a}^{n\cdot}\left(\mathcal{A}^{\circ}_{t}+\sum_{u=t+1}^{t+T}\Theta_{u}-v(\mathfrak{d}_{0})\right)+\log(\mathfrak{R}^{n}_{t+T}(\mathfrak{d}))\] \[=\mathfrak{m}^{n}(\mathfrak{d},t,\mathcal{A}^{\circ}_{t})+\log(\mathfrak{R}^{n}_{t+T}(\mathfrak{d}))-\log(\mathfrak{R}^{n}_{t}(\mathfrak{d}))+\mathfrak{a}^{n\cdot}\sum_{u=1}^{T}\Theta_{t+u}.\] For all \(\theta\in\mathbb{R}^{I}\), according to (A.1), \[\left(\sum_{u=1}^{T}\Theta_{t+u}\middle|\Theta_{t}=\theta\right)\sim\mathcal{N}\left(\Gamma\Upsilon_{T-1}\theta+\left(\sum_{u=1}^{T}\Upsilon_{u-1}\right)\mu,\varepsilon^{2}\sum_{u=1}^{T}\Upsilon_{T-u}\Sigma(\Upsilon_{T-u})^{\top}\right).\] Therefore, for \(n\in\{1,\ldots,N\}\), \[\left(\mathfrak{a}^{n\cdot}\sum_{u=1}^{T}\Theta_{t+u}\middle|\mathcal{G}_{t}\right)\sim\mathcal{N}\left(\mathfrak{a}^{n\cdot}\Gamma\Upsilon_{T-1}\Theta_{t}+\mathfrak{a}^{n\cdot}\left(\sum_{u=1}^{T}\Upsilon_{u-1}\right)\mu,\varepsilon^{2}\sum_{u=1}^{T}(\mathfrak{a}^{n\cdot}\Upsilon_{T-u})\Sigma(\mathfrak{a}^{n\cdot}\Upsilon_{T-u})^{\top}\right).\] Then \[\left(\frac{\log(B^{n})-\mathfrak{m}^{n}(\mathfrak{d},t+T,\mathcal{A}_{t+T}^{\circ})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\bigg{|}\mathcal{G}_{t}\right)\sim\frac{\mathcal{S}^{n}(T)}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\mathcal{X}^{n}+\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}, \tag{3.5}\] where \((\mathcal{X}^{n})_{1\leq n\leq N}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{N})\), \(\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t})\) is defined in (2.9), and \[\mathcal{S}^{n}(T):=\varepsilon\sqrt{\sum_{u=1}^{T}(\mathfrak{a}^{n\cdot}\Upsilon_{T-u})\Sigma(\mathfrak{a}^{n\cdot}\Upsilon_{T-u})^{\top}}.\] We then have \[\mathbb{E}\left[\Phi\left(\frac{\mathcal{S}^{n}(T)}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\mathcal{X}^{n}+\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\middle|\mathcal{G}_{t}\right]\] \[=\mathbb{E}_{\mathcal{X}^{n}}\left[\Phi\left(\frac{\mathcal{S}^{n}(T)}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\mathcal{X}^{n}+\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\right]\] \[=\int_{-\infty}^{+\infty}\Phi\left(\frac{\mathcal{S}^{n}(T)}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}x+\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\phi(x)dx\] \[=\Phi\left(\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t})}{\sqrt{\mathcal{L}^{n}(t,T)}}\right),\] where \(\mathcal{L}^{n}(t,T)\) is defined in (2.10), and the conclusion follows. The last equality comes from the following result found in [37, Page 1063]: if \(\Phi\) and \(\phi\) are the Gaussian cumulative distribution and density functions, then for \(a,b\in\mathbb{R}\), \[\int_{-\infty}^{+\infty}\Phi(a+bx)\phi(x)dx=\Phi\left(\frac{a}{\sqrt{1+b^{2}}}\right).\]

### Unexpected loss

At time \(t\in\mathbb{N}\), it follows from the definition of UL in (3.2) that we need to compute the quantile of the (proxy of the) loss distribution \(\mathrm{L}_{t+T}^{\mathbb{G},N}\).
For \(\alpha\in(0,1)\), we obtain from Theorem 3.5, \[1-\alpha =\mathbb{P}\left[\mathrm{L}_{t+T}^{\mathbb{G},N}\leq\mathrm{VaR}_{t}^{\alpha,N,T}\Big{|}\mathcal{G}_{t}\right]\] \[=\mathbb{P}\left[\sum_{n=1}^{N}\mathrm{EAD}_{t+T}^{n}\cdot\mathrm{LGD}_{t+T}^{n}\cdot\Phi\left(\frac{\log(B^{n})-\mathfrak{m}^{n}(\mathfrak{d},t+T,\mathcal{A}_{t+T}^{\circ})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\leq\mathrm{VaR}_{t}^{\alpha,N,T}\Bigg{|}\mathcal{G}_{t}\right].\] Then, it follows from (3.5) that \[1-\alpha=\mathbb{P}_{\mathcal{X}^{1},\ldots,\mathcal{X}^{N}}\left[\sum_{n=1}^{N}\mathrm{EAD}_{t+T}^{n}\cdot\mathrm{LGD}_{t+T}^{n}\cdot\Phi\left(\frac{\mathcal{S}^{n}(T)}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\mathcal{X}^{n}+\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}_{t}^{\circ},\Theta_{t})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\leq\mathrm{VaR}_{t}^{\alpha,N,T}\right]. \tag{3.6}\]

Since the quantile function is not linear, one cannot find an analytical solution, and a numerical solution is therefore needed. Recall that we must simulate \((\mathcal{X}^{1},\ldots,\mathcal{X}^{N})\) to find \(\mathrm{VaR}^{\alpha,N,T}_{t}\), which will also be a function of the random variables \((\mathcal{A}^{\circ}_{t},\Theta_{t})\), of dimension \(2I\). This can be solved for example by Monte Carlo [22] or by deep learning techniques [5].

### Projection of one-year risk measures

At this stage, we use (3.3) to compute, for each \(n\in\{1,\ldots,N\}\), the probability of default of a given firm \(n\) at horizon \(T\), stressed by (deterministic) carbon taxes \(\mathfrak{d}\). We can also calculate EL according to (3.4) and UL from (3.6). We just need the parameters, especially \(\mathbf{a}^{n}\), \(\sigma^{2}_{\mathfrak{b}^{n}}\), \(F^{n}_{0}\), and \(\mathfrak{B}^{n}\). We can distinguish two ways to determine them:

1. **Firm's view:**\(\mathbf{a}^{n}\), \(\sigma^{2}_{\mathfrak{b}^{n}}\) and \(F^{n}_{0}\) are calibrated on the firm's historical free cash flows, while \(\mathfrak{B}^{n}\) relates to the principal of its loans.
2. **Portfolio's view:** if we assume that there is just one risk class in the portfolio so that all the firms have the same \(\mathbf{a}^{n}\), \(\sigma^{2}_{\mathfrak{b}^{n}}\), and \(B^{n}\) (and not \(\mathfrak{B}^{n}\)), then knowing the historical defaults of the portfolio, we can use a log-likelihood maximization as in Gordy and Heitfield [21] to determine them.

Let us introduce the following assumption related to the portfolio view.

**Assumption 3.8**.: There is only one risk class in the given portfolio, namely, for any \(1\leq n\leq N\), \(\mathbf{a}^{n}=\mathbf{a}^{1}\), \(\sigma^{2}_{\mathfrak{b}^{n}}=\sigma^{2}_{\mathfrak{b}^{1}}\), and \(B^{n}=B^{1}\).

In practice, banks need to compute the one-year probability of default. We thus simplify the risk measures introduced previously by taking \(T=1\) from now on.

**Corollary 3.9**.: _Under Assumption 3.8, for \(t\in\mathbb{N}\) and \(1\leq n\leq N\), the one-year (conditional) probability of default of firm \(n\) at time \(t\) is_ \[\mathrm{PD}^{n}_{t,1,\mathfrak{d}}=\Phi\left(\frac{\log(B^{1})-\mathcal{K}^{1}(\mathfrak{d},t,1,\mathcal{A}^{\circ}_{t},\Theta_{t})}{\sqrt{\mathcal{L}^{1}(t,1)}}\right).
\tag{3.7}\]

Proof.: Let \(t\in\mathbb{N}\) and \(1\leq n\leq N\). We have \[\mathcal{K}^{n}(\mathfrak{d},t,1,\mathcal{A}^{\circ}_{t},\Theta_{t}) =\mathfrak{m}^{n}(\mathfrak{d},t,\mathcal{A}^{\circ}_{t})+\log(\mathfrak{R}^{1}_{t+1}(\mathfrak{d}))-\log(\mathfrak{R}^{1}_{t}(\mathfrak{d}))+\mathfrak{a}^{1\cdot}\Gamma\Upsilon_{0}\Theta_{t}+\mathfrak{a}^{1\cdot}\Upsilon_{0}\mu\] \[=\mathfrak{m}^{1}(\mathfrak{d},t,\mathcal{A}^{\circ}_{t})+\log(\mathfrak{R}^{1}_{t+1}(\mathfrak{d}))-\log(\mathfrak{R}^{1}_{t}(\mathfrak{d}))+\mathfrak{a}^{1\cdot}\Gamma\Theta_{t}+\mathfrak{a}^{1\cdot}\mu\] \[=\mathcal{K}^{1}(\mathfrak{d},t,1,\mathcal{A}^{\circ}_{t},\Theta_{t}),\] since \(\Upsilon_{0}=\mathbf{I}_{I}\) and by Assumption 3.8. We also have \[\mathcal{L}^{n}(t,1)=\sigma^{2}_{\mathfrak{b}^{1}}(t+1)+\varepsilon^{2}(\mathfrak{a}^{1\cdot}\Upsilon_{0})\Sigma(\mathfrak{a}^{1\cdot}\Upsilon_{0})^{\top}=\sigma^{2}_{\mathfrak{b}^{1}}(t+1)+\varepsilon^{2}(\mathfrak{a}^{1\cdot})\Sigma(\mathfrak{a}^{1\cdot})^{\top}=\mathcal{L}^{1}(t,1).\]

#### 3.4.1 Expected loss

The following proposition, whose proof follows from Corollary 3.9, gives a simplified formula for EL.

**Proposition 3.10**.: _Under Assumption 3.8, the one-year (conditional) EL of the portfolio at time \(t\) is (with \(\mathrm{PD}^{1}_{t,1,\mathfrak{d}}\) defined in Corollary 3.9)_ \[\mathrm{EL}^{N,T}_{t,\mathfrak{d}}=\left(\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+1}\cdot\mathrm{LGD}^{n}_{t+1}\right)\cdot\mathrm{PD}^{1}_{t,1,\mathfrak{d}}. \tag{3.8}\]

#### 3.4.2 Unexpected loss

We saw in (3.6) that determining the UL is not possible analytically and is numerically intensive (since quantiles depend on rare events and because of the dimension of the macroeconomic factors). However, Assumption 3.8 allows us to further simplify the formula for UL.

**Corollary 3.11**.: _Under Assumption 3.8, the one-year (conditional) UL of the portfolio at time \(t\) is_ \[\mathrm{UL}^{N,T}_{t,\mathfrak{d},\alpha}=\left(\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}\right)\left[\Phi\left(\frac{\mathcal{S}^{1}(T)\Phi^{-1}(1-\alpha)+\log(B^{1})-\mathcal{K}^{1}(\mathfrak{d},t,T,\mathcal{A}^{\circ}_{t},\Theta_{t})}{\sigma_{\mathfrak{b}^{1}}\sqrt{t+T}}\right)-\mathrm{PD}^{1}_{t,1,\mathfrak{d}}\right].
\tag{3.9}\]

Proof.: From (3.6), we have \[1-\alpha=\mathbb{P}_{\mathcal{X}^{1},\ldots,\mathcal{X}^{N}}\left[\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}\cdot\Phi\left(\frac{\mathcal{S}^{n}(T)\mathcal{X}^{n}+\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,T,\mathcal{A}^{\circ}_{t},\Theta_{t})}{\sigma_{\mathfrak{b}^{n}}\sqrt{t+T}}\right)\leq\mathrm{VaR}^{\alpha,N,T}_{t}\right],\] but, with Corollary 3.9, \[1-\alpha =\mathbb{P}_{\mathcal{X}^{1}}\left[\Phi\left(\frac{\mathcal{S}^{1}(T)\mathcal{X}^{1}+\log(B^{1})-\mathcal{K}^{1}(\mathfrak{d},t,T,\mathcal{A}^{\circ}_{t},\Theta_{t})}{\sigma_{\mathfrak{b}^{1}}\sqrt{t+T}}\right)\leq\frac{\mathrm{VaR}^{\alpha,N,T}_{t}}{\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}}\right]\] \[=\mathbb{P}_{\mathcal{X}^{1}}\left[\mathcal{X}^{1}\leq\frac{1}{\mathcal{S}^{1}(T)}\left(\sigma_{\mathfrak{b}^{1}}\sqrt{t+T}\Phi^{-1}\left(\frac{\mathrm{VaR}^{\alpha,N,T}_{t}}{\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}}\right)-\log(B^{1})+\mathcal{K}^{1}(\mathfrak{d},t,T,\mathcal{A}^{\circ}_{t},\Theta_{t})\right)\right].\] Then the corollary follows from \[\mathrm{VaR}^{\alpha,N,T}_{t}=\left(\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+T}\cdot\mathrm{LGD}^{n}_{t+T}\right)\cdot\Phi\left(\frac{\mathcal{S}^{1}(T)\Phi^{-1}(1-\alpha)+\log(B^{1})-\mathcal{K}^{1}(\mathfrak{d},t,T,\mathcal{A}^{\circ}_{t},\Theta_{t})}{\sigma_{\mathfrak{b}^{1}}\sqrt{t+T}}\right).\]

### Sensitivity of losses to carbon price

We would like to quantify the variation of losses for a given variation in the carbon price.

**Definition 3.12**.: For our portfolio of \(N\) firms and for \(\alpha\in(0,1)\), we introduce the sensitivity of expected and unexpected losses to carbon taxes, at time \(t\in\mathbb{N}\) over the horizon \(T\in\mathbb{N}^{*}\), and for a given sequence of carbon taxes \(\mathfrak{d}\), respectively denoted \(\Gamma^{N,T,\mathrm{EL}}_{t,\mathfrak{d}}(\mathfrak{U})\) and \(\Gamma^{N,T,\mathrm{UL}}_{t,\mathfrak{d},\alpha}(\mathfrak{U})\), as \[\Gamma^{N,T,\mathrm{EL}}_{t,\mathfrak{d}}(\mathfrak{U}):=\lim_{\vartheta\to 0}\frac{\mathrm{EL}^{N,T}_{t,\mathfrak{d}+\vartheta\mathfrak{U}}-\mathrm{EL}^{N,T}_{t,\mathfrak{d}}}{\vartheta}\qquad\text{and}\qquad\Gamma^{N,T,\mathrm{UL}}_{t,\mathfrak{d},\alpha}(\mathfrak{U}):=\lim_{\vartheta\to 0}\frac{\mathrm{UL}^{N,T}_{t,\mathfrak{d}+\vartheta\mathfrak{U},\alpha}-\mathrm{UL}^{N,T}_{t,\mathfrak{d},\alpha}}{\vartheta},\] where \(\mathfrak{U}\in([0,1)^{I}\times[0,1)^{I\times I}\times[0,1)^{I})^{\mathbb{N}}\) is chosen such that there exists a neighbourhood \(v\) of the origin for which, for all \(\vartheta\in v\), \(\mathfrak{d}+\vartheta\mathfrak{U}\in([0,1)^{I}\times[0,1)^{I\times I}\times[0,1)^{I})^{\mathbb{N}}\).

These sensitivities can be computed and understood in different ways depending on the direction \(\mathfrak{U}\): either in relation to the entire tax trajectory, or in relation to all taxes at a given date, or in relation to one of the three taxes, or in relation to a sector, and so on. We could also (and will do so in a future note) give the results for a stochastic carbon price in the transition period. In this case, if the productivity \(\Theta\) and the carbon price \(\delta\) are independent, it is enough to add, in the previous results, the expectation conditional on \(\delta\).

## 4 Estimation and calibration

Assume that the time unit is one year. We will calibrate the model parameters on a set of data ranging from year \(\mathfrak{t}_{0}\) to \(\mathfrak{t}_{1}\).
In practice, \(\mathfrak{t}_{0}=1978\) and \(\mathfrak{t}_{1}=t_{\circ}=2021\). For each sector \(i\in\mathcal{I}\) and \(0\leq t<t_{\circ}\), we observe the output \(Y^{i}_{t}\), the labor \(N^{i}_{t}\), the intermediary input \((Z^{ji}_{t})_{j\in\mathcal{I}}\), and the consumption \(C^{i}_{t}\) (recall that the transition starts at year \(t_{\circ}\), so \([0,t_{\circ})\) is the past). For the sake of clarity, we will omit the dependence of each estimated parameter on \(t_{\circ}\).

### Calibration of carbon taxes

We assume here that the carbon price is deterministic. The regulator fixes the transition time horizon \(t_{\star}\in\mathbb{N}^{*}\), the carbon price at the beginning of the transition \(P_{carbon}>0\), the carbon price at the end of the transition \(\delta_{t_{\star}}>P_{carbon}\), and the annual evolution rate \(\eta_{\delta}>0\). Then, for all \(t\in\mathbb{N}\), \[\delta_{t}=\left\{\begin{array}{ll}P_{carbon},&\text{if }t<t_{\circ},\\ P_{carbon}(1+\eta_{\delta})^{t-t_{\circ}},&\text{if }t\in\{t_{\circ},\ldots,t_{\star}\},\\ \delta_{t_{\star}}=P_{carbon}(1+\eta_{\delta})^{t_{\star}-t_{\circ}},&\text{otherwise.}\end{array}\right.\]

We denote, for any sector \(i\in\mathcal{I}\),

* \(t=t_{\circ}\) the first year of the transition;
* \(Y^{i}_{t_{\circ}}\) the output at time \(t_{\circ}\);
* \(P^{i}_{t_{\circ}}\) the aggregate price at time \(t_{\circ}\);
* \(C^{i}_{t_{\circ}}\) the sectoral consumption (or value added) of households at time \(t_{\circ}\).

The taxes are calibrated on realized emissions [14], based on Devulder and Lisack [12], at the chosen year \(t_{\circ}\). Then, for all \(i\in\mathcal{I}\):

* the tax rate on firms' production is set such that \[\tau^{i}_{t_{\circ}}Y^{i}_{t_{\circ}}P^{i}_{t_{\circ}}=P_{carbon}E^{i,F}_{t_{\circ}},\] where \(E^{i,F}_{t_{\circ}}\) are the GHG emissions (in tonnes of CO2-equivalent) by all the firms of the sector \(i\) at \(t_{\circ}\). Then for all \(t\in\{t_{\circ},\ldots,t_{\star}\}\), we have \[\tau^{i}_{t}(\delta_{t}):=\delta_{t}\frac{E^{i,F}_{t_{\circ}}}{Y^{i}_{t_{\circ}}P^{i}_{t_{\circ}}},\]
* the tax rate on households' final consumption is set such that \[\kappa^{i}_{t_{\circ}}P^{i}_{t_{\circ}}C^{i}_{t_{\circ}}:=P_{carbon}E^{i,H}_{t_{\circ}},\] where \(E^{i,H}_{t_{\circ}}\) is the GHG emitted (in tonnes of CO2-equivalent) by households through their consumption in sector \(i\). Then for all \(t\in\{t_{\circ},\ldots,t_{\star}\}\), we have \[\kappa^{i}_{t}(\delta_{t}):=\delta_{t}\frac{E^{i,H}_{t_{\circ}}}{P^{i}_{t_{\circ}}C^{i}_{t_{\circ}}}.\]
* the tax rate on firms' intermediate consumption, for each sector \(i\) and \(j\), is set such that \[\sum_{j=1}^{I}\zeta^{ji}_{t_{\circ}}P^{j}_{t_{\circ}}Z^{ji}_{t_{\circ}}:=P_{carbon}E^{i,F}_{t_{\circ}}.\] Then for all \(t\in\{t_{\circ},\ldots,t_{\star}\}\), we have \[\zeta^{ji}_{t}(\delta_{t})=\delta_{t}\frac{E^{i,F}_{t_{\circ}}}{I\times P^{j}_{t_{\circ}}Z^{ji}_{t_{\circ}}}.\]

The values \(\frac{E^{i,F}_{t_{\circ}}}{Y^{i}_{t_{\circ}}P^{i}_{t_{\circ}}}\), \(\frac{E^{i,H}_{t_{\circ}}}{P^{i}_{t_{\circ}}C^{i}_{t_{\circ}}}\) and \(\frac{E^{i,F}_{t_{\circ}}}{I\times P^{j}_{t_{\circ}}Z^{ji}_{t_{\circ}}}\) represent the carbon intensities of sector \(i\)'s production, consumption, and intermediary input, respectively, which we assume fixed over the transition. This is a very strong assumption, since one may expect that the greening of the economy will lead to a decrease in carbon intensity. Moreover, we assume that taxes increase.
However, there are several scenarios that could be considered, including taxes that would increase until a certain year before leveling off or even decreasing. The tax on production could increase while the tax on households would stabilize or disappear (in order to avoid social unrest), and so on. The framework can be adapted to various sectors, scenarios, and tax evolutions.

### Calibration of economic parameters

As in [16], we assume a unitary Frisch elasticity of labor supply, so \(\varphi=1\), and a logarithmic utility of consumption, so \(\sigma=1\). Similarly, for any \(i,j\in\mathcal{I}\), the input shares \(\boldsymbol{\lambda}^{ij}\) are estimated as the payments from sector \(j\) to sector \(i\) expressed as a fraction of the value of production in sector \(j\). The parameter \(\psi^{i}\) is then obtained by \[\psi^{i}=1-\sum_{j\in\mathcal{I}}\boldsymbol{\lambda}^{ji},\] and we get \((\widehat{\boldsymbol{\lambda}}^{ij})_{i,j\in\mathcal{I}}\) and \((\widehat{\psi}^{i})_{i\in\mathcal{I}}\). We can then compute the functions \(\Psi\) in (1.10) and \(\Lambda\) in (1.11). We can also compute the sectoral consumption growth \(\big{(}\Delta_{t}^{C}=(\log(C_{t}^{i})-\log(C_{t-1}^{i}))_{i\in\mathcal{I}}\big{)}_{t\in\{\mathsf{t}_{0},\ldots,\mathsf{t}_{1}-1\}}\) directly from data. Without carbon tax in any sector, it follows from (1.20) in Corollary 1.14 that, for each \(t\in\{\mathsf{t}_{0},\ldots,\mathsf{t}_{1}-1\}\), the computed consumption growth \(\Delta_{t}^{C}\) is equal to \(\Delta_{t}^{C}=(\mathbf{I}_{I}-\widehat{\boldsymbol{\lambda}})^{-1}\widehat{\Theta}_{t}\) when \(\mathbf{I}_{I}-\widehat{\boldsymbol{\lambda}}\) is not singular; hence \(\widehat{\Theta}_{t}=(\mathbf{I}_{I}-\widehat{\boldsymbol{\lambda}})\Delta_{t}^{C}\) and we can easily compute the estimates \(\widehat{\mu}\), \(\widehat{\Gamma}\), and \(\widehat{\Sigma}\), and then \(\widehat{\overline{\mu}}\) and \(\widehat{\overline{\Sigma}}\), of the VAR(1) parameters \(\mu\), \(\Gamma\), \(\Sigma\), \(\overline{\mu}\), and \(\overline{\Sigma}\) (all defined in Standing Assumption 1.1).

### Calibration of firm and credit model parameters

Recall that we have a portfolio with \(N\in\mathbb{N}^{*}\) firms (or credits) at time \(t_{\circ}\). For each firm \(n\in\{1,\ldots,N\}\), we have its historical cash flows \((F_{t}^{n})_{t\in\{\mathsf{t}_{0},\ldots,\mathsf{t}_{1}-1\}}\), hence its log cash-flow growths. We assume that we can divide our portfolio into \(M\leq N\) disjoint groups \(g_{1},\ldots,g_{M}\) so that each group represents a single risk class. For any \(t\in\{\mathsf{t}_{0},\ldots,\mathsf{t}_{1}-1\}\) and \(1\leq m\leq M\), we denote by \(r_{t}^{m}\) (resp. \(d_{t}^{m}\)) the number of firms in \(g_{m}\) rated at the beginning of the year \(t\) (resp. defaulted during the year \(t\)). In particular, \(r_{\mathsf{t}_{0}}^{m}=\#g_{m}\). Within each group \(g_{m}\), all the firms behave in the same way as there is only one risk class. We fix \(m_{\star}:=\min\left\{n\in\{1,\ldots,N\}\text{ such that }n\in g_{m}\right\}\) and, for each \(n\in g_{m}\), \(\mathbf{a}^{n}=\mathbf{a}^{m_{\star}}\), \(\sigma_{\mathfrak{b}^{n}}=\sigma_{\mathfrak{b}^{m_{\star}}}\), and \(B^{n}=B^{m_{\star}}\). We then proceed as follows:
1. Knowing the consumption growth \(\big{(}\Delta_{t}^{C}\big{)}_{t\in\{\mathsf{t}_{0},\ldots,\mathsf{t}_{1}-1\}}\), we calibrate the factor loading \(\widehat{\mathbf{a}}^{m_{\star}}\) and the standard deviation \(\widehat{\sigma}_{\mathfrak{b}^{m_{\star}}}\), according to Assumptions 2.1 and 2.4, appealing to the regression \[\sum_{n\in g_{m}}\omega_{t}^{n}=(\#g_{m})\mathbf{a}^{m_{\star}}\Delta_{t}^{C}+\sqrt{\#g_{m}}\sigma_{\mathfrak{b}^{m_{\star}}}\mathsf{u}_{t}\quad\text{where}\quad\mathsf{u}_{t}\sim\mathcal{N}(0,1),\quad\text{for all}\quad t\in\{\mathsf{t}_{0},\ldots,\mathsf{t}_{1}-1\}.\]
2. We then estimate the barrier \(B^{m_{\star}}\) by MLE as detailed in Gordy and Heitfield [21, Section 3]: we compute \[\widehat{B}^{m_{\star}}:=\operatorname*{arg\,max}_{B^{m_{\star}}\in\mathbb{R}^{+}}\mathcal{L}(B^{m_{\star}}),\] where \(\mathcal{L}(B^{m_{\star}})\) is the log-likelihood function defined by \[\mathcal{L}(B^{m_{\star}}):=\sum_{t=\mathsf{t}_{0}}^{\mathsf{t}_{1}-1}\log\left(\int_{\mathbb{R}^{2I}}\mathbb{P}[D^{m_{\star}}=d_{t}^{m}|(a,\theta)]d\mathbb{P}[(\mathcal{A}_{t}^{\circ},\Theta_{t})\leq(a,\theta)]\right),\] and where \[\mathbb{P}[D^{m_{\star}}=d_{t}^{m}|(\mathcal{A}_{t}^{\circ},\Theta_{t})]=\binom{r_{t}^{m}}{d_{t}^{m}}(\mathrm{PD}^{m_{\star}}_{t,1,0})^{d_{t}^{m}}\Big{(}1-\mathrm{PD}^{m_{\star}}_{t,1,0}\Big{)}^{r_{t}^{m}-d_{t}^{m}},\] with \(D^{m_{\star}}\) the binomial random variable standing for the conditional number of defaults, and \(\mathrm{PD}^{m_{\star}}_{t,1,0}\) given in Corollary 3.9, depending on \(\sigma_{\mathfrak{b}^{m_{\star}}}=\widehat{\sigma}_{\mathfrak{b}^{m_{\star}}}\), \(\mathbf{a}^{m_{\star}}=\widehat{\mathbf{a}}^{m_{\star}}\), \(\mathfrak{d}=0\), and on \(B^{m_{\star}}\).

### Expected and Unexpected losses

Suppose that we have chosen or estimated all the economic parameters \((\varphi,\sigma,\psi,\boldsymbol{\lambda},\mu,\Gamma,\Sigma)\) and firm-specific parameters \(((B^{n},\mathbf{a}^{n},F^{n}_{0},\sigma_{\mathfrak{b}^{n}})_{1\leq n\leq N})\), thanks to the previous equations. Given a trajectory of the carbon price \(\delta\), for all \(t\in\{t_{\circ},\ldots,t_{\star}\}\), the PD, EL and UL are computed by Monte Carlo simulations following the formulae below. We simulate \(M\in\mathbb{N}^{*}\) paths of \((\Theta^{p}_{t_{\circ}},\ldots,\Theta^{p}_{t_{\star}})\) indexed by \(p\in\{1,\ldots,M\}\), as a VAR(1) process (with a slight abuse of notation, \(M\) now denotes the number of simulated paths), and we derive \(((\mathcal{A}_{t_{\circ}}^{\circ})^{p},\ldots,(\mathcal{A}_{t_{\star}}^{\circ})^{p})\).
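To illustrate this simulation step, here is a minimal Python sketch (not the authors' code). It assumes the VAR(1) recursion \(\Theta_{t+1}=\mu+\Gamma\Theta_{t}+\varepsilon\,\xi_{t+1}\) with \(\xi_{t+1}\sim\mathcal{N}(0,\Sigma)\), which is how we read Standing Assumption 1.1, and all numerical values (dimensions, parameters, initial conditions) are placeholders rather than calibrated estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

I_dim = 4       # number of sectors I (placeholder)
M_paths = 1000  # number of simulated paths M (placeholder)
horizon = 10    # number of projected years between t_circ and t_star (placeholder)

# Placeholder VAR(1) parameters, standing in for the calibrated (mu_hat, Gamma_hat, Sigma_hat)
mu = np.full(I_dim, 0.002)
Gamma = 0.1 * np.eye(I_dim)
Sigma = 1e-4 * np.eye(I_dim)
eps = 1.0                          # noise scale epsilon (placeholder)
chol = np.linalg.cholesky(Sigma)   # used to draw N(0, Sigma) innovations

theta_prev = np.zeros((M_paths, I_dim))  # Theta at t_circ on every path (placeholder)
A_acc = np.zeros((M_paths, I_dim))       # cumulated productivity A at t_circ (placeholder)

Theta = np.empty((M_paths, horizon, I_dim))
A = np.empty((M_paths, horizon, I_dim))
for s in range(horizon):
    xi = rng.standard_normal((M_paths, I_dim)) @ chol.T   # xi ~ N(0, Sigma) on each path
    theta_prev = mu + theta_prev @ Gamma.T + eps * xi     # VAR(1) step
    A_acc = A_acc + theta_prev                            # A accumulates the productivity growths
    Theta[:, s, :] = theta_prev
    A[:, s, :] = A_acc
```

In practice, the calibrated \(\widehat{\mu}\), \(\widehat{\Gamma}\), \(\widehat{\Sigma}\) of Subsection 4.2 and the observed \((\mathcal{A}^{\circ}_{t_{\circ}},\Theta_{t_{\circ}})\) would be plugged in, and the simulated paths are then fed into the estimators below.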
For any \(t\in\{t_{\circ},\ldots,t_{\star}\}\):

* for any \(n\in\{1,\ldots,N\}\), from (3.7), the estimated one-year probability of default of firm \(n\) is \[\widehat{\mathrm{PD}}^{n,M}_{t,1,\mathfrak{d}}:=\frac{1}{M}\sum_{p=1}^{M}\Phi\left(\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,1,(\mathcal{A}_{t}^{\circ})^{p},\Theta^{p}_{t})}{\sqrt{\mathcal{L}^{n}(t,1)}}\right),\] (4.1)
* the one-year expected loss is, from (3.8), \[\widehat{\mathrm{EL}}^{N,T}_{t,\mathfrak{d}}:=\frac{1}{M}\sum_{p=1}^{M}\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+1}\cdot\mathrm{LGD}^{n}_{t+1}\cdot\widehat{\mathrm{PD}}^{n,M}_{t,1,\mathfrak{d}},\] (4.2)
* the one-year unexpected loss is, from (3.9), \[\widehat{\mathrm{UL}}^{N,T}_{t,\mathfrak{d},\alpha}:=q_{\alpha,M}\left(\left\{\sum_{n=1}^{N}\mathrm{EAD}^{n}_{t+1}\cdot\mathrm{LGD}^{n}_{t+1}\cdot\Phi\left(\frac{\log(B^{n})-\mathcal{K}^{n}(\mathfrak{d},t,1,(\mathcal{A}_{t}^{\circ})^{p},\Theta^{p}_{t})}{\sqrt{\mathcal{L}^{n}(t,1)}}\right)\right\}_{1\leq p\leq M}\right)-\widehat{\mathrm{EL}}^{N,T}_{t,\mathfrak{d}},\] (4.3) where \(q_{\alpha,M}(\{Y^{1},\ldots,Y^{M}\})\) denotes the empirical \(\alpha\)-quantile of the sample \(\{Y^{1},\ldots,Y^{M}\}\).

### Summary of the process

More concretely, the goal is to project, for a given portfolio, the one-year (\(T=1\)) probability of default, as well as the expected and unexpected losses between year \(t_{\circ}\) and year \(t_{\star}\), given (1) the number of firms rated \(r_{t}\) and defaulted \(d_{t}\) between \(\mathfrak{t}_{0}\) and \(\mathfrak{t}_{1}-1\), (2) all the firms' cash flows \((F^{n}_{t})_{1\leq n\leq N}\) between \(\mathfrak{t}_{0}\) and \(\mathfrak{t}_{1}-1\), (3) the macroeconomic variables observed between \(\mathfrak{t}_{0}\) and \(\mathfrak{t}_{1}-1\), and (4) the carbon price dynamics \((\delta_{t})_{t\in\{t_{\circ},\ldots,t_{\star}\}}\) or carbon taxes dynamics \((\mathfrak{d}_{t})_{t\in\{t_{\circ},\ldots,t_{\star}\}}\) given by the regulator. We proceed as follows:

1. From the macroeconomic historical data, we estimate the productivity parameters \(\widehat{\Gamma}\), \(\widehat{\mu}\) and \(\widehat{\Sigma}\), as well as the elasticities \(\widehat{\psi}\) and \(\widehat{\mathbf{\lambda}}\), as described in Subsection 4.2.
2. For each \(n\in\{1,\ldots,N\}\), we estimate the parameters \(B^{n}\), \(\sigma_{\mathfrak{b}^{n}}\), \(\mathbf{a}^{n}\) using Subsection 4.3, yielding \(\widehat{B}^{n}\), \(\widehat{\sigma}_{\mathfrak{b}^{n}}\), \(\widehat{\mathbf{a}}^{n}\).
3. We compute the carbon taxes \((\mathfrak{d}_{t})_{t_{\circ}\leq t\leq t_{\star}}\) from the carbon price dynamics \((\delta_{t})_{t_{\circ}\leq t\leq t_{\star}}\) as defined in Subsection 4.1, then the tax function \(v\) defined in (1.15).
4. We fix a large enough integer \(M\), and simulate \(M\) paths of the productivity process \((\Theta^{p}_{t})_{t_{\circ}\leq t\leq t_{\star}}\), then we derive \(((\mathcal{A}^{\circ}_{t})^{p})_{t_{\circ}\leq t\leq t_{\star}}\) as defined in Assumption 1.1. For each \(n\in\{1,\ldots,N\}\), we compute the one-year probability of default \(\widehat{\mathrm{PD}}^{n,M}_{t,1,\mathfrak{d}}\), for each \(t_{\circ}\leq t\leq t_{\star}\), using (4.1).
5. We compute the expected (resp. unexpected) losses \(\widehat{\mathrm{EL}}^{N,T}_{t,\mathfrak{d}}\) (resp. \(\widehat{\mathrm{UL}}^{N,T}_{t,\mathfrak{d},\alpha}\)), for each \(t_{\circ}\leq t\leq t_{\star}\), using (4.2) (resp. (4.3)).
6. We fix the direction \(\mathfrak{U}\) and a small step \(\vartheta\), and repeat 3.-4.-5. replacing \(\mathfrak{d}\) by \(\mathfrak{d}+\vartheta\mathfrak{U}\).
Finally, we approximate the sensitivity of the losses with respect to the carbon taxes \(\mathfrak{d}\) by finite differences, i.e. for each \(t_{\circ}\leq t\leq t_{\star}\), \[\widehat{\Gamma}^{N,T,\mathrm{EL}}_{t,\mathfrak{d}}(\mathfrak{U}):=\frac{1}{\vartheta}\left(\widehat{\mathrm{EL}}^{N,T}_{t,\mathfrak{d}+\vartheta\mathfrak{U}}-\widehat{\mathrm{EL}}^{N,T}_{t,\mathfrak{d}}\right)\quad\text{and}\quad\widehat{\Gamma}^{N,T,\mathrm{UL}}_{t,\mathfrak{d},\alpha}(\mathfrak{U}):=\frac{1}{\vartheta}\left(\widehat{\mathrm{UL}}^{N,T}_{t,\mathfrak{d}+\vartheta\mathfrak{U},\alpha}-\widehat{\mathrm{UL}}^{N,T}_{t,\mathfrak{d},\alpha}\right).\] (4.4) In the sequel, for each time \(t\), we choose the direction \(\mathfrak{U}\in([0,1)^{I}\times[0,1)^{I\times I}\times[0,1)^{I})^{t_{\star}+1}\) which is equal to \(1\) at \(t\) and \(0\) everywhere else, and a step \(\vartheta=1\%\).

## 5 Results

### Data

We work on data related to the French economy:

1. Annual consumption, labor, output (displayed in Figure 10 and Figure 11), and intermediary inputs come from Eurostat from 1978 to 2019 (see [4] for details) and are expressed in billion Euros. We also assume that the 2020 data are the same as the 2019 data in order not to account for the impact of the COVID-19 crisis. We thus consider that \(t_{\circ}=2021\).
2. The 21 Eurostat sectors are grouped into four categories, _Very High Emitting_, _High Emitting_, _Low Emitting_, _Very Low Emitting_, based on their carbon intensities (Appendix D).
3. The taxes are calibrated on the realized emissions [14] (expressed in tonnes of CO2-equivalent) of the chosen starting year (2021).
4. To perform the LASSO regression (Appendix A) questioning the relationship between credit risk and economic conditions (as assumed in Section 3), we use S&P data on the ratings and defaults, on a yearly basis from 1995 to 2019, of 7046 large US companies belonging to 13 sectors. We analyze and use them to compute the historical probability of default (displayed in Figure 12) and the migration matrix by sector. The US macroeconomic time series can be found in the World Bank database and in the FRED Saint-Louis database [15].

### Calibration of economic parameters

For the parameters \(\sigma\) and \(\varphi\), we use the same values as in Gali [16]: a log-utility (\(\sigma=1\)) and a unitary Frisch elasticity of labor supply (\(\varphi=1\)). The parameters of the multisectoral model, \((\widehat{\psi}_{i})_{i\in\mathcal{I}}\) and \((\widehat{\mathbf{\lambda}}_{ji})_{i,j\in\mathcal{I}}\), are reported in Table 1 and Table 2, and the estimated productivity parameters in Tables 3, 4, and 5. In our simulation, we consider four deterministic transition scenarios giving four deterministic carbon price trajectories.
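As a purely illustrative sketch (not the authors' code) of how one such deterministic trajectory can be generated from the rule of Subsection 4.1, consider the snippet below; the starting price, transition window, and growth rates are placeholders, not the calibrated values of Table 6.

```python
def carbon_price(t, p0, growth, t_start, t_end):
    # Deterministic carbon price delta_t of Subsection 4.1: constant before t_start,
    # compounding at the annual rate `growth` during the transition, and frozen at
    # its t_end level afterwards.
    if t < t_start:
        return p0
    return p0 * (1.0 + growth) ** (min(t, t_end) - t_start)

# Hypothetical annual growth rates per scenario (placeholders, not the Table 6 values)
growth_rates = {
    "Current Policies": 0.03,
    "NDCs": 0.08,
    "Net Zero 2050": 0.15,
    "Divergent Net Zero": 0.30,
}
p0, t_start, t_end = 30.0, 2021, 2030  # starting price in EUR/tCO2e and transition window (placeholders)

price_paths = {name: [carbon_price(t, p0, g, t_start, t_end) for t in range(2020, 2031)]
               for name, g in growth_rates.items()}
```

The sector-level tax rates \(\tau^{i}_{t}\), \(\kappa^{i}_{t}\) and \(\zeta^{ji}_{t}\) are then obtained by multiplying such a path by the fixed carbon intensities of Subsection 4.1.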
The scenarios used come from the NGFS simulations, whose descriptions are given on the NGFS website [31] as follows:

* _Net Zero 2050_ _is an ambitious scenario that limits global warming to_ \(1.5^{\circ}C\) _through stringent climate policies and innovation, reaching net zero CO2 emissions around 2050. Some jurisdictions such as the US, EU and Japan reach net zero for all greenhouse gases by this point._
* _Divergent Net Zero_ _reaches net-zero by 2050 but with higher costs due to divergent policies introduced across sectors and a quicker phase out of fossil fuels._
* _Nationally Determined Contributions (NDCs)_ _includes all pledged policies even if not yet implemented._
* _Current Policies_ _assumes that only currently implemented policies are preserved, leading to high physical risks._

\begin{table} \begin{tabular}{|r|r|r|r|r|} \hline _Emissions Level_ & **Very High** & **High** & **Low** & **Very Low** \\ \hline **Very High** & 0.243 & 0.010 & 0.241 & 0.037 \\ \hline **High** & 0.001 & 0.302 & 0.212 & 0.098 \\ \hline **Low** & 0.053 & 0.042 & 0.412 & 0.107 \\ \hline **Very Low** & 0.004 & 0.015 & 0.134 & 0.220 \\ \hline \end{tabular} \end{table} Table 2: Elasticity of intermediary inputs \(\widehat{\mathbf{\lambda}}\)

\begin{table} \begin{tabular}{|r|r|r|r|r|} \hline _Emissions Level_ & **Very High** & **High** & **Low** & **Very Low** \\ \hline \(\times 10^{-3}\) & 5.655 & -0.71 & 0.509 & 2.901 \\ \hline \end{tabular} \end{table} Table 3: \(\widehat{\mu}\)

\begin{table} \begin{tabular}{|r|r|r|r|r|} \hline _Emissions Level_ & **Very High** & **High** & **Low** & **Very Low** \\ \hline **Very High** & -0.301 & 0.077 & 0.020 & 0.011 \\ \hline **High** & 0.0820 & 0.083 & -0.001 & 0.032 \\ \hline **Low** & -0.218 & 0.225 & 0.160 & 0.292 \\ \hline **Very Low** & 0.552 & 0.629 & 0.348 & 0.674 \\ \hline \end{tabular} \end{table} Table 4: \(\widehat{\Gamma}\)

We consider a time horizon of ten years, with \(t_{\circ}=2021\) as the starting point, a time step of one year, and \(t_{\star}=2030\) as the ending point. For each scenario, we compute the average annual growth of the tax as displayed in the fourth column of Table 6.

### Calibration of taxes

We compute the evolutions of the _carbon tax rate on production_, \(\tau\), the _carbon tax rate on final consumption_, \(\kappa\), and the _carbon tax rate on the firm's intermediate consumption_, \(\zeta\), for each sector based on the realized emissions, and report the averages in Table 7, Table 8, and Table 9. Moreover, the evolutions of the carbon price between 2020 and 2030 are shown in Figure 2. Given that carbon intensities are constant, carbon taxes will follow the same trends. The highest level of taxation for households' consumption comes from the _High Emitting_ group (involved in cooking and heating) and from the _Low Emitting_ one (involved in construction, commuting, and travelling). On the firms' production side, the _Very High Emitting_ group is the highest taxed (because agriculture and farming emit large amounts of GHG such as methane), and is naturally followed by the _High Emitting_ one, which emits significant amounts of CO2. On the taxation of firms' intermediary consumption, we observe expected patterns.
For example, the carbon tax applied on inputs produced by the _Very High Emitting_ sector and \begin{table} \begin{tabular}{|r|r|r|r|r|} \hline _Emissions level_ & **Very High** & **High** & **Low** & **Very Low** \\ \hline _Current Policies_ & 4.301 & 4.301 & 0.459 & 0.014 \\ \hline _NDCs_ & 6.151 & 6.151 & 0.656 & 0.02 \\ \hline _Net Zero 2050_ & 8.883 & 8.883 & 0.948 & 0.03 \\ \hline _Divergent Net Zero_ & 19.029 & 19.029 & 2.031 & 0.063 \\ \hline \end{tabular} \end{table} Table 7: Average annual carbon tax on households’ consumption from each sector between 2020 and 2030 (in %) \begin{table} \begin{tabular}{|r|r|r|r|r|} \hline _Emissions level_ & **Very High** & **High** & **Low** & **Very Low** \\ \hline _Current Policies_ & 4.006 & 1.605 & 0.413 & 0.069 \\ \hline _NDCs_ & 5.73 & 2.296 & 0.591 & 0.098 \\ \hline _Net Zero 2050_ & 8.275 & 3.315 & 0.853 & 0.142 \\ \hline _Divergent Net Zero_ & 17.728 & 7.102 & 1.827 & 0.304 \\ \hline \end{tabular} \end{table} Table 8: Average annual carbon tax on firms’ production in each sector between 2020 and 2030 (in %) Figure 2: Annual carbon price per scenario consumed by the _Low Emitting_ one is very high. This is explained by the fact that many inputs used by sectors belonging to the _Low Emitting_ group (such as _Manufacture of food products, beverages and tobacco products_) are produced by _Agriculture_ which belongs to _Very High Emitting_ group. Similar comments can be done for the other sectors. Those results thus show that sectors are not only affected by their own emissions, but also by the emissions from the sectors from which they consume products. We now calibrate our model on the historical data assuming no carbon tax as detailed in Section 4.2 and perform simulations. ### Output and consumption growth We compute the mean of the annual consumption growth and related 95% confidence interval for each sector and each scenario. Results are displayed on Figure 3. Additionally, we compute the average annual consumption growth over the ten-year period, as illustrated in Table 10. \begin{table} \end{table} Table 9: Average annual carbon tax on firms’ intermediary input from each sector between 2020 and 2030 (in %) It follows from the _Total_ column in Table 10 that the average annual growth between 2020 and 2030 is decreasing. The _Divergent Net Zero_ is the economic worst case (the best one for the climate) where the carbon ton would cost \(395.21\)EUR in 2030. The _Current Policies_ is the economic best case (the worst one for the climate) where the carbon ton would cost \(39.05\)EUR in 2030. The difference of annual consumption growth between the worst and the best scenarios is of about \(-0.786\%\). The four scenarios are clearly discriminating. In the _Divergent Net Zero_ scenario, our model shows, on the last subplot in Figure 3, a drop in consumption growth, with respect to the _Current Policies_ scenario, that starts at \(0.438\%\) in 2020 and increases every year until a \(1.258\%\) drop is reached in 2030. Cumulatively, from 2020 to 2030, a drop of \(7.860\%\) is witnessed. We can compare this value to \(2.270\%\) which is the GDP drop between the _Net Zero 2050_ and _Current Policies_ scenarios obtained with the REMIND model in [29]. The difference observed with REMIND can be explained by the fact that our model does not specify how the collected carbon taxes are reinvested or redistributed. We could, for example, head the investment towards low-carbon energies, which would have the effect of reducing the tax on these sectors. 
Moreover, in our model, carbon price is assumed to increase uniformly (which implies that emissions would increase indefinitely - which is not desirable) from 2020 to 2030, while in REMIND an adjustment of the carbon price growth rate is being made in 2025. Furthermore, productivity is totally exogenous in our model while there are exogenous labor productivity and endogenous technological change for green energies in REMIND, which is expected to have a downward effect on the evolution of carbon price. However, we recall that our model has the benefit to be stochastic and multisectoral. \begin{table} \begin{tabular}{|r|r|r|r|r|r|} \hline _Emissions level_ & **Very High** & **High** & **Low** & **Very Low** & **Total** \\ \hline _NDCs_ & -0.343 & -0.115 & -0.095 & -0.016 & -0.140 \\ \hline _Net Zero 2050_ & -1. & -0.323 & -0.268 & -0.046 & -0.399 \\ \hline _Divergent Net Zero_ & -2.119 & -0.599 & -0.519 & -0.098 & -0.786 \\ \hline \end{tabular} \end{table} Table 10: Average annual consumption growth evolution with respect to the _Current Policies_ scenario between 2020 and 2030 (in %) Figure 3: Mean and 95% confidence interval of the annual consumption growth from 2020 to 2030 Now, it follows from both Figure 3 and Table 10 that the introduction of carbon taxes is less adverse for the _Very Low Emitting_ and _Low Emitting_ groups than for the _High Emitting_ and _Very High Emitting_ ones. The slowdown is highest for the _Very High Emitting_ group, which was anticipated given that the tax on firms was the highest. However, we can see that, even in the best case scenario, the consumption growth in the _Low Emitting_ group stabilizes or begins to decline. It is probably because we are working on French data and the industrial production in the French economy structurally decreases. Moreover, the slowdown could be accelerated by the climate transition, not only because this sector emits GHG, but also because its intermediary inputs are from _High Emitting_ and _Very High Emitting_ sectors. On the other hand, the _Very Low Emitting_ sector continues its strong growth because it emits less and because France is driven by the service industry. Finally, the consumption in the two most polluting sectors suffers from a slowdown higher than the whole consumption slowdown and lower than in the two least polluting ones. ### Firm Valuation Here, we consider a representative firm characterized by its cashflow \(F_{t_{\circ}-1}\) at \(t_{\circ}-1\), with standard deviation \(\sigma_{\mathfrak{b}}\) and by the contribution \(\mathfrak{a}\) of sectoral consumption growth to its cash flows growth. We would like to know how the value of this company evolves during the transition period and with the carbon price introduced in the economy. Consider \(F_{t_{\circ}-1}=\) EUR1,000,000, \(\sigma_{\mathfrak{b}}=5.0\%\), \(\mathfrak{a}=[0.25,0.25,0.25,0.25]\) (each sector has the same contribution to the growth of the cash flows of the firm), the interest rate \(r=5\%\). For \(M=1000\) simulations of the productivity processes \((\Theta_{t},\mathcal{A}_{t})_{t_{\circ}\leq t\leq t_{*}}\), we compute the firm value using (2.6). We can analyze both the average evolution of the firm value per year and per scenario (Figure 4) and the empirical distribution of the firm value per scenario (Figure 5). We see that even if the value of the firm grows each year, this growth is affected by the severity of the transition scenario. 
The presence of the carbon tax in the economy clearly Figure 4: Average annual firm value per scenario in million euros per year reduces the firm value. The introduction of the transition scenario distorts the density function of the firm value, and in particular, moves it to the left. ### Credit Risk Consider a fictitious portfolio of \(N=12\) firms described in Table 11 below. This choice is made to ease the reproducibility of the result since the default data are proprietary data of BPCE. Note that the growths in the cash flows of Firm 2, 4, 6, and 8 are respectively driven by the _Very High Emitting_, _High Emitting_, _Low Emitting_, and _Very Low Emitting_ groups. #### 5.6.1 Probabilities of default (PD) We use the parameters of the portfolio and firms as detailed in Table 11 to compute annual PDs over ten years using the closed-form formulae (4.1). We then report, in Figure 6, the average annual PD and its annual evolution. The remarks raised for the consumption growth remain valid, only the monotony changes: we can clearly distinguish the fourth various climate transition scenario. The probability of default grows each year, which is consistent as uncertainty increases with time. Even in the Figure 5: Firm value distribution per scenario and per year _Current Policies_ scenario, the PD goes from 5.970% in 2021 to 7.024% in 2030. Moreover, the increase is emphasized when the transition scenario gets tougher from an economic point of view. Between the worst-case (_Divergent Net Zero_) scenario and the best-case (_Current Policies_) one, the difference in average default probability reaches 1.911% in 2030. Over the next 10 years, the annual average PD for the _Current Policies_ scenario is 6.579%, for the _NDCs_ scenario is 6.882%, for the _Net Zero 2050_ scenario is 7.478%, and for the _Divergent Net Zero_ scenario is 8.490%. It is no surprise that the introduction of a carbon tax increases the portfolio's average probability of default. In Figure 7 above, we can also observe that, for each company, the evolution of PD depends on the sector that is at the origin of the growth of its cash flows. As expected, the PD grows throughout the years, and the growth is even more abrupt when the sector to which the company belongs to is polluting. 
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|} \hline \(\mathbf{n^{*}}\) & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** & **9** & **10** & **11** & **12** \\ \hline \(\sigma_{\mathfrak{b}^{n}}\) & 0.05 & 0.05 & 0.06 & 0.06 & 0.07 & 0.07 & 0.08 & 0.08 & 0.09 & 0.09 & 0.10 & 0.10 \\ \hline \(F_{\alpha}^{n}\) & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \hline \(B^{n}\) & 3.41 & 3.14 & 3.47 & 3.83 & 3.49 & 3.19 & 3.36 & 3.54 & 4.21 & 3.01 & 2.46 & 2.45 \\ \hline \(\mathfrak{a}^{n}\)**(Very High)** & 0.25 & 1.0 & 0.5 & 0.0 & 0.0 & 0.0 & 0.0 & 0.25 & 0.25 & 0.25 & 0.75 \\ \hline \(\mathfrak{a}^{n}\)**(High)** & 0.25 & 0.0 & 0.5 & 1.0 & 0.5 & 0.0 & 0.0 & 0.0 & 0.50 & 0.50 & 0.25 & -0.16 \\ \hline \(\mathfrak{a}^{n}\)**(Low)** & 0.25 & 0.0 & 0.0 & 0.0 & 0.5 & 1.0 & 0.5 & 0.0 & 0.25 & 0.25 & 0.50 & 0.16 \\ \hline \(\mathfrak{a}^{n}\)**(Very Low)** & 0.25 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.5 & 1.0 & 0.25 & -0.25 & -0.50 & -0.16 \\ \hline \end{tabular} \end{table} Table 11: Characteristics of the portfolio Figure 6: Average annual probability of default of the portfolio per scenario and year in % #### 5.6.2 Expected and unexpected losses We compute the EL and UL using (4.2) and (4.3), assuming that LGD and EAD are constant over the years and \(\text{LGD}^{n}=45\%\) and \(\text{EAD}^{n}=\text{\textcircled{E}}1\) million for each firm \(n\) described in Table 11. The annual exposure of the notional portfolio of \(N=12\) firms thus remains fixed and is equal to EUR12 millions. We then express losses as a percentage of the firm's or portfolio's exposure. Table 12 and Table 13 show the average annual EL and UL. We observe in Table 12 and Figure 8 that, as expected (notably because the LGD is not stressed), the different scenarios remain clearly differentiated for the EL. EL as a percentage of \begin{table} \begin{tabular}{|r|r|r|r|r|r|} \hline _Emissions level_ & **Firm 2** & **Firm 4** & **Firm 6** & **Firm 8** & **Portfolio** \\ \hline _Current Policies_ & 0.19 & 0.387 & 0.593 & 0.613 & 2.898 \\ \hline _NDCs_ & 0.204 & 0.413 & 0.609 & 0.616 & 3.030 \\ \hline _Net Zero 2050_ & 0.234 & 0.464 & 0.639 & 0.62 & 3.291 \\ \hline _Divergent Net Zero_ & 0.327 & 0.539 & 0.684 & 0.627 & 3.733 \\ \hline \end{tabular} \end{table} Table 12: Average annual EL as a percentage of exposure Figure 8: EL of the portfolio in % of the exposure per scenario and per year. Figure 7: Average annual probability of default per scenario and per firm the portfolio's exposure increases with the year and the carbon price/taxes. For the portfolio as a whole, we see that the average annual EL increases by 62% between the two extreme scenarios. Moreover, still focusing on the two extreme scenarios, the average annual EL increases by 72% for Firm 2 belonging to the _Very High Emitting_ group while it increases by 2% for Firm 8 belonging to the _Very Low Emitting_ group. EL being covered by the provisions coming from the fees charged to the client, an increase in EL implies an increase in credit cost. Therefore, somehow, companies from the most polluting sectors will be charged more than those from the least polluting sectors. Similarly for the UL, we observe the difference between the scenarios from Table 13 and Figure 9. For the portfolio as a whole, we see that the average annual UL increases by 27% between the two extreme scenarios. 
Moreover, still focusing on the two extreme scenarios, the average annual UL increases by 32% for Firm 2 belonging to the _Very High Emitting_ group while it increases by 2% for Firm 8 belonging to the _Very Low Emitting_ group. UL being covered by the economic capital coming from the capital gathered by the shareholders, an increase in UL implies a decrease in the bank's profitability. Therefore, in some way, granting loans to companies from the most polluting sectors will affect banks more negatively than doing so to companies from the least polluting sectors. \begin{table} \begin{tabular}{|r|r|r|r|r|r|} \hline _Emissions level_ & **Firm 2** & **Firm 4** & **Firm 6** & **Firm 8** & **Portfolio** \\ \hline _Current Policies_ & 0.895 & 0.27 & 0.205 & 0.368 & 1.683 \\ \hline _NDCs_ & 0.941 & 0.282 & 0.209 & 0.369 & 1.755 \\ \hline _Net Zero 2050_ & 1.031 & 0.305 & 0.215 & 0.371 & 1.895 \\ \hline _Divergent Net Zero_ & 1.19 & 0.336 & 0.224 & 0.373 & 2.133 \\ \hline \end{tabular} \end{table} Table 13: Average annual UL as a percentage of exposure Figure 9: UL of the portfolio in % of the exposure per scenario and per year We therefore observe that the introduction of a carbon price will not only increase the banking fees charged to the client (materialized by the provisions via the expected loss) but will also reduce the bank's profitability (via the economic capital that is calculated from the unexpected loss). Finally, for more in-depth analysis, Figure 15 (respectively Figure 16) shows the distortions of the distribution of EL (respectively UL) per scenario and per year. #### 5.6.3 Losses' sensitivities to carbon taxes Finally, we compute the sensitivity of our portfolio losses to carbon taxes using (4.4). Since the scenarios are deterministic, this quantity allows us to measure some form of model uncertainty. Indeed, for a given scenario, it allows to capture the level by which the computed loss would vary should that assumed deterministic scenario deviate by a certain percentage. For each time \(t\), we choose the direction \(\mathfrak{U}\in([0,1)^{I}\times[0,1)^{I\times I}\times[0,1)^{I})^{t_{*}+1}\) which is equal to 1 at \(t\) and 0 everywhere else, and a step \(\vartheta=1\%\). A carbon price change of 1% will cause a change in EL of \(\widehat{\Gamma}_{t,\mathfrak{I}}^{N,T}(\text{EL})\) and a change in UL of \(\widehat{\Gamma}_{t,\mathfrak{I},\alpha}^{N,T}(\text{UL})\). We report the results in Table 14 and Table 15. For example, over the next ten years, if the price of carbon varies by 1% around the scenario _NDCs_, the EL will vary by 1.402% while the UL will change by 1.148% around this scenario. The greater the sensitivity, the more polluting the sector is. This is to be expected as carbon taxes are higher in these sectors. In addition, the sensitivity of the portfolio is smaller than that in the most polluting sectors, and greater than that in the least polluting ones. Finally, we notice that the variation of the EL is slightly more sensitive than the variation of the UL. This means that the bank's provisions will increase a bit more than the bank's capital, or that the growth of the carbon taxes will impact customers more than shareholders. 
\begin{table} \begin{tabular}{|r|r|r|r|r|r|} \hline _Emissions level_ & **Firm 2** & **Firm 4** & **Firm 6** & **Firm 8** & **Portfolio** \\ \hline _Current Policies_ & 1.561 & 1.581 & 1.191 & 0.827 & 1.280 \\ \hline _NDCs_ & 1.777 & 1.687 & 1.261 & 0.904 & 1.402 \\ \hline _Net Zero 2050_ & 2.142 & 1.864 & 1.386 & 1.035 & 1.631 \\ \hline _Divergent Net Zero_ & 2.668 & 2.096 & 1.562 & 1.215 & 1.973 \\ \hline \end{tabular} \end{table} Table 14: Average annual EL sensitivity to carbon price in % \begin{table} \begin{tabular}{|r|r|r|r|r|r|} \hline _Emissions level_ & **Firm 2** & **Firm 4** & **Firm 6** & **Firm 8** & **Portfolio** \\ \hline _Current Policies_ & 1.299 & 1.290 & 1.042 & 0.547 & 1.135 \\ \hline _NDCs_ & 1.463 & 1.365 & 1.102 & 0.583 & 1.148 \\ \hline _Net Zero 2050_ & 1.726 & 1.485 & 1.206 & 0.634 & 1.197 \\ \hline _Divergent Net Zero_ & 2.070 & 1.632 & 1.352 & 0.681 & 1.472 \\ \hline \end{tabular} \end{table} Table 15: Average annual UL sensitivity to carbon price in % ## Conclusion In this work, we study how the introduction of carbon taxes would propagate in a credit portfolio. To this aim, we first build a dynamic stochastic multisectoral model in which we introduce carbon taxes calibrated on sectoral greenhouse gases emissions. We later use the Discounted Cash Flows methodology to compute the firm value and introduce the latter in the Merton model to project PD, EL and UL. We finally introduce losses' sensitivities to carbon taxes to measure the uncertainty of the losses to the transition scenarios. This work opens the way to numerous extensions mobilizing diverse and varied mathematical tools. In the climate-economic model, exogenous and deterministic scenarios as well as heterogeneous agents are assumed while one could consider agent-based or mean-field games models where a central planner decides on the carbon taxes and agents (companies or households) optimize production, prices, and consumption according to the tax level. In the credit risk part, the LGD is assumed to be deterministic, constant, and independent of the carbon taxes. In our forthcoming research, we will analyze how the LGD is affected by the stranding of assets. We furthermore assume that EAD and thus bank balance sheets remain static over the years while the transition will require huge investments. One could thus introduce capital in the model. Finally, we have adopted a sectoral view, while one could alternatively assess the credit risk at the counterpart level and thus penalize or reward companies according to their individual and not sectoral emissions.
2307.03940
Uncovering the limits of uniqueness in sampled Gabor phase retrieval: A dense set of counterexamples in $L^2(\mathbb{R})$
Sampled Gabor phase retrieval - the problem of recovering a square-integrable signal from the magnitude of its Gabor transform sampled on a lattice - is a fundamental problem in signal processing, with important applications in areas such as imaging and audio processing. Recently, a classification of square-integrable signals which are not phase retrievable from Gabor measurements on parallel lines has been presented. This classification was used to exhibit a family of counterexamples to uniqueness in sampled Gabor phase retrieval. Here, we show that the set of counterexamples to uniqueness in sampled Gabor phase retrieval is dense in $L^2(\mathbb{R})$, but is not equal to the whole of $L^2(\mathbb{R})$ in general. Overall, our work contributes to a better understanding of the fundamental limits of sampled Gabor phase retrieval.
Rima Alaifari, Francesca Bartolucci, Matthias Wellershoff
2023-07-08T09:17:05Z
http://arxiv.org/abs/2307.03940v1
Uncovering the limits of uniqueness in sampled Gabor phase retrieval: A dense set of counterexamples in \(L^{2}(\mathbb{R})\) ###### Abstract Sampled Gabor phase retrieval -- the problem of recovering a square-integrable signal from the magnitude of its Gabor transform sampled on a lattice -- is a fundamental problem in signal processing, with important applications in areas such as imaging and audio processing. Recently, a classification of square-integrable signals which are not phase retrievable from Gabor measurements on parallel lines has been presented. This classification was used to exhibit a family of counterexamples to uniqueness in sampled Gabor phase retrieval. Here, we show that the set of counterexamples to uniqueness in sampled Gabor phase retrieval is dense in \(L^{2}(\mathbb{R})\), but is not equal to the whole of \(L^{2}(\mathbb{R})\) in general. Overall, our work contributes to a better understanding of the fundamental limits of sampled Gabor phase retrieval. Phase retrieval, Gabor transform, sampling result ## I Introduction Phase retrieval is a term broadly applied to problems in which information about complex phase needs to be inferred from data. The origins of phase retrieval can be traced back to the years 1915 - 1929 when W. H. Bragg and W. L. Bragg (among others) used X-ray diffraction images of crystals in order to illuminate their atomic structure [1, 2, 3, 4, 5]. Since then phase retrieval has found applications in various fields such as crystallography, electron microscopy and astronomy [6, 7, 8]. Gabor phase retrieval refers to problems in which signals \(f\in L^{2}(\mathbb{R})\) have to be reconstructed from magnitudes of their _Gabor transform_, \[\mathcal{G}f(x,\omega):=2^{1/4}\int_{\mathbb{R}}f(t)\mathrm{e}^{-\pi(t-x)^{2} }\mathrm{e}^{-2\pi\mathrm{i}t\omega}\,\mathrm{d}t,\quad(x,\omega)\in\mathbb{R }^{2}.\] It has been used in a range of audio processing tasks, including the phase vocoder for time-stretching and pitch-shifting of audio signals [9], as well as speech enhancement and source separation [10]. In this contribution, we will specifically focus on the _sampled Gabor phase retrieval_ problem which is the recovery of signals \(f\) from the magnitude measurements \((|\mathcal{G}f(x,\omega)|)_{(x,\omega)\in\Lambda}\) where \(\Lambda\subset\mathbb{R}^{2}\) is a lattice1. We focus on this sampling problem because magnitude information on the entire time-frequency plane \(\mathbb{R}^{2}\) is not available in practice. Instead, only a finite number of measurements are stored and inferences are made based on them. We consider the sampled setup proposed above as a natural and useful compromise between the fully continuous case, where no sampling occurs, and the fully discrete case, where the signals are finite-dimensional vectors. Footnote 1: A _lattice_\(\Lambda\subset\mathbb{R}^{2}\) is a discrete subset of the time-frequency plane that can be written as \(L\mathbb{Z}^{k}\) where \(L\in\mathbb{R}^{2\times k}\) is a matrix with linearly independent columns and \(k\in\{1,2\}\). ### _Prior arts: Counterexamples to uniqueness in sampled Gabor phase retrieval_ In the following, we will focus on counterexamples to uniqueness in sampled Gabor phase retrieval; i.e. signals whose Gabor transform magnitudes agree on a lattice but which are fundamentally different from each other. Before introducing the concept of a counterexample rigorously, we need to emphasise that there is one ever-present ambiguity in Gabor phase retrieval: the global phase ambiguity. 
Two signals \(f,g\in L^{2}(\mathbb{R})\) are said to _agree up to global phase_ if they are equivalent with respect to the relation \[f\sim g:\iff f=\mathrm{e}^{\mathrm{i}\alpha}g,\text{ for some }\alpha\in\mathbb{R}.\] Notably, \(f\sim g\) implies \(|\mathcal{G}f|=|\mathcal{G}g|\) such that signals which agree up to global phase cannot be recovered from Gabor transform magnitudes. With this in mind, we define the set of counterexamples. **Definition I.1** (Counterexamples).: _Let \(\Lambda\subset\mathbb{R}^{2}\). The set of counterexamples to uniqueness in Gabor phase retrieval on \(\Lambda\) is defined by_ \[\mathfrak{C}(\Lambda):=\left\{f\in L^{2}(\mathbb{R})\,\big{|} \,|\mathcal{G}f|=|\mathcal{G}g|\text{ on }\Lambda\text{ and }f\not\sim g,\right.\] \[\left.\text{ for some }g\in L^{2}(\mathbb{R})\right\}.\] _An element \(f\in\mathfrak{C}(\Lambda)\) is called a counterexample to uniqueness in sampled Gabor phase retrieval on \(\Lambda\)._ Counterexamples to uniqueness in sampled Gabor phase retrieval are interesting for two reasons. First, they allow us to better understand the fundamental limits of sampled Gabor phase retrieval, which in turn can guide future research towards achieving uniqueness. In addition, they offer the opportunity to explore the potential relationship between uniqueness and stability in phase retrieval [11]. Let us briefly summarise the recent research on counterexamples in Gabor phase retrieval. The relationship between the Gabor transform and the Bargmann transform (which is described in more detail in Section II) allows for the relation of the Gabor phase retrieval problem to a phase retrieval problem for entire functions. This was realised in [12, 13]. Then, following these ideas, a characterisation of all entire functions of exponential-type whose magnitudes agree on any set of infinitely many equidistant parallel lines was proven in [14]. Using this characterisation and noting that all lattices are a subset of some set of infinitely many equidistant parallel lines, it becomes possible to construct various types of counterexamples. This idea has been applied in [15] to construct explicit counterexamples to uniqueness in sampled Gabor phase retrieval on any lattice. (See [14] for a more in-depth explanation.) An extension of the results in [15] has appeared in [16]. ### _Our contributions_ In this contribution, we show that the set of counterexamples \(\mathfrak{C}(\Lambda)\) is dense in \(L^{2}(\mathbb{R})\) when \(\Lambda\subset\mathbb{R}^{2}\) is a lattice or a set of equidistant parallel lines. We also show that the Gaussian is _not_ a counterexample for quadratic lattices, \(\Lambda=a\mathbb{Z}^{2}\), with \(a\in(0,1)\). Therefore, _the set of counterexamples is dense but not equal to the whole of \(L^{2}(\mathbb{R})\) in general_. We prove these two results by using the connection between the Bargmann transform and the Gabor transform as well as some classical results from complex analysis. Note that this contribution is a condensed and modified version of the section on the fragility of uniqueness in sampled Gabor phase retrieval in the larger manuscript [11]. Apart from a comprehensive treatment of counterexamples, the larger manuscript also discusses the stability of Gabor phase retrieval as well as its potential connection with uniqueness in sampled Gabor phase retrieval. Here, we focus specifically on showing that the counterexamples are dense. 
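Since only sampled magnitudes are available in practice, it may help to see how such measurements can be produced numerically. The sketch below is our illustration (the function names are not from the paper): it evaluates \(|\mathcal{G}f(x,\omega)|\) on a few lattice points by direct quadrature of the integral defining the Gabor transform and compares the Gaussian against its closed-form magnitude \(\mathrm{e}^{-\pi(x^{2}+\omega^{2})/2}\).

```python
# Illustrative only: sampled Gabor magnitudes via direct quadrature of the
# integral defining the Gabor transform (Gaussian window, 2^(1/4) normalization).
import numpy as np

def gabor_magnitude(f, x, w, t):
    """|Gf(x, w)| approximated by a Riemann sum over the grid t."""
    dt = t[1] - t[0]
    integrand = f(t) * np.exp(-np.pi * (t - x) ** 2 - 2j * np.pi * t * w)
    return abs(2 ** 0.25 * integrand.sum() * dt)

phi = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)   # the normalized Gaussian
t = np.linspace(-8.0, 8.0, 4001)

a = 0.5                                               # spacing of the lattice a*Z^2
for m, n in [(0, 0), (1, 1), (2, -1)]:
    numeric = gabor_magnitude(phi, m * a, n * a, t)
    exact = np.exp(-np.pi * ((m * a) ** 2 + (n * a) ** 2) / 2)
    print(f"|G phi({m * a}, {n * a})| = {numeric:.6f}  (closed form: {exact:.6f})")
```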
### _Notation_ Rotation by \(\theta\in\mathbb{R}\) on \(\mathbb{R}^{2}\) is denoted by \(\mathrm{R}_{\theta}:\mathbb{R}^{2}\to\mathbb{R}^{2}\); in matrix notation, we have \[\mathrm{R}_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}.\] Translation by \(x\in\mathbb{R}\) on \(L^{p}(\mathbb{R})\), where \(p\in[1,\infty]\), is denoted by \(\mathrm{T}_{x}:L^{p}(\mathbb{R})\to L^{p}(\mathbb{R})\); i.e. \[\mathrm{T}_{x}\,f(t)=f(t-x),\qquad t\in\mathbb{R},\] for \(f\in L^{p}(\mathbb{R})\). Finally, the normalised Gaussian is denoted by \[\varphi(t)=2^{1/4}\mathrm{e}^{-\pi t^{2}},\qquad t\in\mathbb{R}.\] ## II The relation between the Bargmann and Gabor transform As mentioned before, we will make use of the well-known connection between the Bargmann transform and the Gabor transform [17]. The _Fock space_ \(\mathcal{F}^{2}(\mathbb{C})\) is the Hilbert space of all entire functions for which the norm induced by the inner product \[(F,G)_{\mathcal{F}}:=\int_{\mathbb{C}}F(z)\overline{G(z)}\mathrm{e}^{-\pi|z|^{2}}\,\mathrm{d}z\] is finite. The _Bargmann transform_ \(\mathcal{B}:L^{2}(\mathbb{R})\to\mathcal{F}^{2}(\mathbb{C})\), \[\mathcal{B}f(z):=2^{1/4}\int_{\mathbb{R}}f(t)\mathrm{e}^{2\pi tz-\pi t^{2}-\frac{\pi}{2}z^{2}}\,\mathrm{d}t,\quad z\in\mathbb{C},\] acts as an isomorphism between \(L^{2}(\mathbb{R})\) and \(\mathcal{F}^{2}(\mathbb{C})\). It is related to the Gabor transform by the formula \[\mathcal{G}f(x,-\omega)=\mathrm{e}^{\pi\mathrm{i}x\omega}\mathcal{B}f(x+\mathrm{i}\omega)\mathrm{e}^{-\frac{\pi}{2}(x^{2}+\omega^{2})}, \tag{1}\] for \((x,\omega)\in\mathbb{R}^{2}\). It is the formula above that allows us to relate the Gabor phase retrieval problem to a phase retrieval problem for entire functions. We are going to use this relation in the sequel. ## III The set of counterexamples is dense In this section, we show that the set of counterexamples \(\mathfrak{C}(\Lambda)\) is dense in \(L^{2}(\mathbb{R})\) when \(\Lambda\subset\mathbb{R}^{2}\) is a lattice or a set of equidistant parallel lines. Our strategy will be to design entire functions \(H_{\delta}^{\pm}\in\mathcal{F}^{2}(\mathbb{C})\) which converge to a constant as \(\delta\to 0\), do not agree up to global phase and still satisfy \(|H_{\delta}^{+}|=|H_{\delta}^{-}|\) on \(\mathbb{R}+\mathrm{i}a\mathbb{Z}\). Once these entire functions have been designed, we can multiply them with \(\mathcal{B}f\) and take the inverse Bargmann transform in order to get counterexamples on \(\mathbb{R}\times a\mathbb{Z}\) that are close to \(f\in L^{2}(\mathbb{R})\). We design the functions \(H_{\delta}^{\pm}\in\mathcal{F}^{2}(\mathbb{C})\) by modifying the counterexamples on \(\mathbb{R}\times a\mathbb{Z}\), \[h^{\pm}(t):=\varphi(t)\left(\cosh\left(\frac{\pi t}{a}\right)\pm\mathrm{i}\sinh\left(\frac{\pi t}{a}\right)\right),\quad t\in\mathbb{R},\] presented in [15]. To accomplish this, we compute the Bargmann transforms of \(h^{\pm}\) which are given by \[z\mapsto\left(1\mp\mathrm{i}+(1\pm\mathrm{i})\mathrm{e}^{\frac{\pi z}{a}}\right)\mathrm{e}^{-\frac{\pi z}{2a}}\] up to a constant depending on \(a\). Next, we note that time-shifting \(h^{\pm}\) by \(u\in\mathbb{R}\) will produce additional counterexamples on \(\mathbb{R}\times a\mathbb{Z}\) according to the covariance property of the Gabor transform [17]. 
For \(f\in L^{2}(\mathbb{R})\), it holds that \[\mathcal{B}\,\mathrm{T}_{u}\,f(z)=\mathcal{B}f(z-u)\mathrm{e}^{\pi uz}\mathrm{e}^{-\frac{\pi}{2}u^{2}},\quad z\in\mathbb{C},\] such that the Bargmann transforms of the counterexamples \(\mathrm{T}_{u}\,h^{\pm}\), with \(u=-\frac{a}{\pi}\log\delta\), are given by \[z\mapsto\left(1\mp\mathrm{i}+(1\pm\mathrm{i})\delta\cdot\mathrm{e}^{\frac{\pi z}{a}}\right)\delta^{-az}\mathrm{e}^{-\frac{\pi z}{2a}}\] up to a constant depending on \(a\) and \(\delta\). After multiplying by \(\delta^{az}\mathrm{e}^{\frac{\pi z}{2a}}\), it follows that \[z\mapsto 1\mp\mathrm{i}+(1\pm\mathrm{i})\delta\cdot\mathrm{e}^{\frac{\pi z}{a}}\] are entire functions whose magnitudes agree on \(\mathbb{R}+\mathrm{i}a\mathbb{Z}\) (which do not agree up to global phase) and we define \[H_{\delta}^{\pm}(z):=1\pm\mathrm{i}\delta\cdot\mathrm{e}^{\frac{\pi z}{a}},\quad z\in\mathbb{C}, \tag{2}\] after multiplying by \((1\pm\mathrm{i})/2\). **Theorem III.1**.: _Let \(a>0\). Then, \(\mathfrak{C}(\mathbb{R}\times a\mathbb{Z})\) is dense in \(L^{2}(\mathbb{R})\)._ Proof.: Let \(\epsilon>0\) and \(f\in L^{2}(\mathbb{R})\). We want to show that there exist \(g^{\pm}\in L^{2}(\mathbb{R})\) which do not agree up to global phase, are \(\epsilon\)-close to \(f\) in \(L^{2}(\mathbb{R})\) and satisfy \[|\mathcal{G}g^{+}|=|\mathcal{G}g^{-}|\text{ on }\mathbb{R}\times a\mathbb{Z}.\] To do so, we note that the monomials \[e_{n}(z):=\left(\frac{\pi^{n}}{n!}\right)^{1/2}z^{n},\qquad n\in\mathbb{N}_{0},\ z\in\mathbb{C},\] form an orthonormal basis for the Fock space \(\mathcal{F}^{2}(\mathbb{C})\) [17]. Therefore, the space of complex polynomials is dense in \(\mathcal{F}^{2}(\mathbb{C})\) and we can find \(P\in\mathbb{C}[z]\) such that \[\|\mathcal{B}f-P\|_{\mathcal{F}}<\frac{\epsilon}{2}.\] Let us now consider the entire functions \(H_{\delta}^{\pm}\) defined in equation (2) and note that \(G_{\delta}^{\pm}:=H_{\delta}^{\pm}\cdot P\in\mathcal{F}^{2}(\mathbb{C})\) since \(G_{\delta}^{\pm}\) are entire functions of exponential-type. Hence, we can define the signals \(g_{\delta}^{\pm}:=\mathcal{B}^{-1}G_{\delta}^{\pm}\in L^{2}(\mathbb{R})\). To establish the desired properties of \(g_{\delta}^{\pm}\), we will work with their Bargmann transforms \(G_{\delta}^{\pm}\). First, we note that \(|H_{\delta}^{+}|=|H_{\delta}^{-}|\) on \(\mathbb{R}+\mathrm{i}a\mathbb{Z}\) implies \(|G_{\delta}^{+}|=|G_{\delta}^{-}|\) on \(\mathbb{R}+\mathrm{i}a\mathbb{Z}\) and thus \(|\mathcal{G}g_{\delta}^{+}|=|\mathcal{G}g_{\delta}^{-}|\) on \(\mathbb{R}\times a\mathbb{Z}\) by equation (1). Secondly, we note that the entire functions \(G_{\delta}^{\pm}\) do not agree up to global phase: indeed, both entire functions \(H_{\delta}^{\pm}\) have infinitely many roots but no root of \(H_{\delta}^{+}\) is a root of \(H_{\delta}^{-}\) and vice versa. At the same time, \(P\) is a polynomial and has only finitely many roots. It follows that \(G_{\delta}^{+}\) does have roots which are not roots of \(G_{\delta}^{-}\) (and vice versa) and thus \(G_{\delta}^{+}\not\sim G_{\delta}^{-}\). By the linearity of the Bargmann transform, we can conclude that \(g_{\delta}^{+}\not\sim g_{\delta}^{-}\). 
Finally, we note that the definition of \(H_{\delta}^{\pm}\) in equation (2) directly implies that \[\|P-P\cdot H_{\delta}^{\pm}\|_{\mathcal{F}}=\delta\|z\mapsto P(z)\cdot\mathrm{ e}^{\pi z/a}\|_{\mathcal{F}},\] and so there exists a \(\delta>0\) depending on \(a\), \(\epsilon\) and \(P\) (which in turn depends on \(f\) and \(\epsilon\)) such that \[\|P-P\cdot H_{\delta}^{\pm}\|_{\mathcal{F}}<\frac{\epsilon}{2}.\] We conclude \[\|f-g_{\delta}^{\pm}\|_{2} =\|\mathcal{B}f-H_{\pm}^{\delta}\cdot P\|_{\mathcal{F}}\] \[\leq\|\mathcal{B}f-P\|_{\mathcal{F}}+\|P-H_{\pm}^{\delta}\cdot P \|_{\mathcal{F}}<\epsilon.\] **Remark III.2** (Some explanations on the proof).: _As \(\mathcal{B}f\in\mathcal{F}^{2}(\mathbb{C})\), for \(f\in L^{2}(\mathbb{R})\), we know that \(\mathcal{B}f\) is either an entire function of exponential-type or an entire function of second order. If \(\mathcal{B}f\) is of second order, then its type is either strictly smaller than \(\pi/2\) or exactly \(\pi/2\). In most of these cases, it holds that \(\mathcal{B}f\cdot H_{\delta}^{\pm}\in\mathcal{F}^{2}(\mathbb{C})\) and thus we can define_ \[g_{\delta}^{\pm}:=\mathcal{B}^{-1}\left(\mathcal{B}f\cdot H_{\delta}^{\pm} \right)\in L^{2}(\mathbb{R}),\] _with_ \[\delta<\frac{\epsilon}{\|z\mapsto\mathcal{B}f(z)\mathrm{e}^{\pi z/a}\|_{ \mathcal{F}}},\] _to obtain counterexamples which are \(\epsilon\)-close to \(f\) in \(L^{2}(\mathbb{R})\). We note that \(g_{\delta}^{\pm}\) are small additive perturbations of our original signals \(f\)._ _Unfortunately, there is one case in which this simple strategy does not work: the one in which \(\mathcal{B}f\) is a second-order entire function of type \(\pi/2\). Indeed, in this case, it is not guaranteed that \(\mathcal{B}f\cdot H_{\delta}^{\pm}\) is in the Fock space. -- Two striking examples for why this can fail can be found in [18]. -- As the only situation in which \(\mathcal{B}f\cdot H_{\delta}^{\pm}\) is not in the Fock space occurs when \(\mathcal{B}f\) is exactly of order two and of type \(\pi/2\), it seems obvious that the functions \(f\) for which \(\mathcal{B}f\cdot H_{\delta}^{\pm}\in\mathcal{F}^{2}(\mathbb{C})\) holds must be dense in \(L^{2}(\mathbb{R})\). We can prove this by realising that the complex polynomials are dense in \(\mathcal{F}^{2}(\mathbb{C})\)._ Theorem III.1 continues to hold for any set of infinitely many equidistant parallel lines. We can show this by considering the entire functions \[H_{\delta}^{\pm}(z):=1\pm\mathrm{i}\delta\exp\left(\frac{\pi\mathrm{e}^{ \mathrm{i}\theta}}{a}\left(z-\overline{\lambda}_{0}\right)\right)\] and realising that the corresponding signals \(g_{\delta}^{\pm}\in L^{2}(\mathbb{R})\) satisfy \[|\mathcal{G}g_{\delta}^{+}|=|\mathcal{G}g_{\delta}^{-}|\text{ on }\ \mathrm{R}_{ \theta}\left(\mathbb{R}\times a\mathbb{Z}\right)+\lambda_{0},\] where \(a>0\), \(\lambda_{0}\in\mathbb{R}^{2}\simeq\mathbb{C}\). The statement for general lattices follows from the same consideration because all lattices are subsets of some set of infinitely many equidistant parallel lines. We therefore arrive at the following result. **Theorem III.3**.: _Let \(\Lambda\subset\mathbb{R}^{2}\) be a set of equidistant parallel lines or a lattice. Then, \(\mathfrak{C}(\Lambda)\) is dense in \(L^{2}(\mathbb{R})\)._ To illustrate our main results, we construct counterexamples that are close to the Hermite functions and plot their spectrograms. 
**Example III.4**.: _Consider the \(n\)-th Hermite function \(H_{n}\in L^{2}(\mathbb{R})\) given by_ \[\mathcal{B}H_{n}(z)=e_{n}(z)=\left(\frac{\pi^{n}}{n!}\right)^{1/2}z^{n},\qquad z\in\mathbb{C}.\] _By equation (1), the Gabor transform of the Hermite function is_ \[\mathcal{G}H_{n}(x,\omega)=\mathrm{e}^{-\pi\mathrm{i}x\omega}\mathcal{B}H_{n}(x-\mathrm{i}\omega)\mathrm{e}^{-\frac{\pi}{2}\left(x^{2}+\omega^{2}\right)}=\left(\frac{\pi^{n}}{n!}\right)^{1/2}\mathrm{e}^{-\pi\mathrm{i}x\omega}\left(x-\mathrm{i}\omega\right)^{n}\mathrm{e}^{-\frac{\pi}{2}\left(x^{2}+\omega^{2}\right)},\] _for \((x,\omega)\in\mathbb{R}^{2}\). If we plot the magnitude of the above (for \(n=5\)), we obtain Figure 1(a). Next, we want to find counterexamples which are close to \(H_{n}\). According to Remark III.2, we can define \(g_{\delta}^{+}:=\mathcal{B}^{-1}(\mathcal{B}H_{n}\cdot H_{\delta}^{+})\). Let us visualise the spectrogram of \(g_{\delta}^{+}\), i.e._ \[\mathcal{G}g_{\delta}^{+}(x,\omega)=\mathcal{G}H_{n}(x,\omega)\cdot H_{\delta}^{+}(x-\mathrm{i}\omega),\quad(x,\omega)\in\mathbb{R}^{2},\] _in Figure 1(b) (for \(n=5\), \(a=\frac{1}{4}\) and \(\delta=\frac{1}{50}\exp(-10\pi)\))._ ## IV The Gaussian is not a counterexample Finally, we can show that the Gaussian is not a counterexample on \(a\mathbb{Z}^{2}\) if \(a\in(0,1)\). Specifically, we prove the following result. **Theorem IV.1**.: _Let \(0<a<1\) and \(f\in L^{2}(\mathbb{R})\) be such that_ \[|\mathcal{G}f(x,\omega)|^{2}=|\mathcal{G}\varphi(x,\omega)|^{2},\qquad(x,\omega)\in a\mathbb{Z}^{2}.\] _Then, there exists an \(\alpha\in\mathbb{R}\) such that \(f=\mathrm{e}^{\mathrm{i}\alpha}\varphi\)._ Since the Bargmann transform of the Gaussian is one, equation (1) implies that the theorem above is equivalent to the following lemma. **Lemma IV.2**.: _Let \(0<a<1\) and let \(F\in\mathcal{F}^{2}(\mathbb{C})\) be such that_ \[|F(z)|=1=|\mathcal{B}\varphi(z)|,\qquad z\in a\mathbb{Z}+\mathrm{i}a\mathbb{Z}.\] _Then, there exists an \(\alpha\in\mathbb{R}\) such that \(F=\mathrm{e}^{\mathrm{i}\alpha}\)._ The intuition for the proof of this lemma comes from the maximum modulus principle: we note that we are considering a second order entire function \(F\) which is bounded on all lattice points; this suggests that \(F\) should be constant in the entire complex plane as long as the lattice is dense enough. This intuition is indeed correct, as evidenced by the following result independently discovered by V. G. Iyer [19] and A. Pfluger [20]. **Theorem IV.3**.: _Let \(h\) be an entire function such that_ \[\limsup_{r\to\infty}\frac{\log M_{h}(r)}{r^{2}}<\frac{\pi}{2},\] _where \(M_{h}(r):=\max_{|z|=r}|h(z)|\). If there exists a constant \(\kappa>0\) such that_ \[|h(m+\mathrm{i}n)|\leq\kappa,\qquad m,n\in\mathbb{Z},\] _then \(h\) is constant._ Proof of Lemma IV.2.: Consider the function \(h(z):=F(az)\), for \(z\in\mathbb{C}\). It holds that \[|h(z)|=|F(az)|\leq\|F\|_{\mathcal{F}}\cdot\mathrm{e}^{\frac{\pi}{2}|az|^{2}}=\|F\|_{\mathcal{F}}\cdot\mathrm{e}^{\frac{\pi a^{2}}{2}|z|^{2}},\] for \(z\in\mathbb{C}\), such that \[\limsup_{r\to\infty}\frac{\log M_{h}(r)}{r^{2}}\leq\limsup_{r\to\infty}\left(\frac{\log\|F\|_{\mathcal{F}}}{r^{2}}+\frac{\pi a^{2}}{2}\right)=\frac{\pi a^{2}}{2}<\frac{\pi}{2}.\] Additionally, \[|h(m+\mathrm{i}n)|=|F(am+\mathrm{i}an)|=1,\qquad m,n\in\mathbb{Z},\] holds such that the assumptions of Theorem IV.3 are met and we can conclude that \(h\) is constant. 
As \(|h(0)|=1\), it follows that there must exist an \(\alpha\in\mathbb{R}\) such that \(h=\mathrm{e}^{\mathrm{i}\alpha}\), which implies \(F=\mathrm{e}^{\mathrm{i}\alpha}\). We have therefore shown that the set of counterexamples is not equal to the whole of \(L^{2}(\mathbb{R})\) when \(\Lambda\) is a sufficiently dense quadratic lattice. **Remark IV.4**.: _A natural point of confusion in connection with Theorem IV.1 is how it differs from the result in [21] on shift-invariant spaces with Gaussian generator \(V_{\beta}^{1}(\varphi)\). While Theorem IV.1 implies that the Gaussian can be distinguished from all other functions in \(L^{2}(\mathbb{R})\) based on its sampled Gabor magnitude measurements, the result in [21] only implies that it can be distinguished from the functions in \(V_{\beta}^{1}(\varphi)\subset L^{2}(\mathbb{R})\)._ ## Acknowledgements The authors would like to extend their heartfelt thanks to Stefan Steinerberger for his insightful discussions and acknowledge funding through the SNSF grant 200021_184698.
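As a quick numerical illustration of the counterexamples recalled in Section III (our sketch, not part of the paper), one can check by direct quadrature that \(h^{+}\) and \(h^{-}\) have matching Gabor magnitudes on \(\mathbb{R}\times a\mathbb{Z}\) while differing off the lattice lines.

```python
# Sanity check (illustrative): |G h^+| and |G h^-| coincide on R x aZ but not off it,
# where h_pm(t) = phi(t) * (cosh(pi t / a) +/- i sinh(pi t / a)).
import numpy as np

def gabor(f, x, w, t):
    dt = t[1] - t[0]
    return 2 ** 0.25 * np.sum(f(t) * np.exp(-np.pi * (t - x) ** 2 - 2j * np.pi * t * w)) * dt

a = 1.0
phi = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)
h_plus = lambda t: phi(t) * (np.cosh(np.pi * t / a) + 1j * np.sinh(np.pi * t / a))
h_minus = lambda t: phi(t) * (np.cosh(np.pi * t / a) - 1j * np.sinh(np.pi * t / a))

t = np.linspace(-12.0, 12.0, 8001)
x = 0.3
gap_on = max(abs(abs(gabor(h_plus, x, n * a, t)) - abs(gabor(h_minus, x, n * a, t)))
             for n in range(-2, 3))
gap_off = abs(abs(gabor(h_plus, x, 0.5 * a, t)) - abs(gabor(h_minus, x, 0.5 * a, t)))
print(f"max gap on the lattice: {gap_on:.2e}, gap at w = a/2: {gap_off:.2e}")
```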
2306.02977
Improving the accuracy of bubble date estimators under time-varying volatility
In this study, we consider a four-regime bubble model under the assumption of time-varying volatility and propose the algorithm of estimating the break dates with volatility correction: First, we estimate the emerging date of the explosive bubble, its collapsing date, and the recovering date to the normal market under assumption of homoskedasticity; second, we collect the residuals and then employ the WLS-based estimation of the bubble dates. We demonstrate by Monte Carlo simulations that the accuracy of the break dates estimators improve significantly by this two-step procedure in some cases compared to those based on the OLS method.
Eiji Kurozumi, Anton Skrobotov
2023-06-05T15:49:32Z
http://arxiv.org/abs/2306.02977v1
# Improving the accuracy of bubble date estimators under time-varying volatility+ ###### Abstract In this study, we consider a four-regime bubble model under the assumption of time-varying volatility and propose the algorithm of estimating the break dates with volatility correction: First, we estimate the emerging date of the explosive bubble, its collapsing date, and the recovering date to the normal market under assumption of homoskedasticity; second, we collect the residuals and then employ the WLS-based estimation of the bubble dates. We demonstrate by Monte Carlo simulations that the accuracy of the break dates estimators improve significantly by this two-step procedure in some cases compared to those based on the OLS method. _Keywords_: rational bubble; change points; explosive autoregression; time-varying volatility; right-tailed unit root testing; mildly explosive; mildly integrated. _JEL Codes_: C12, C22 Introduction Non-stationary volatility is sometimes observed in time series (in particular, financial data) but discussion of the break dates estimators under non-stationary volatility has limited attention in the literature. One of the exceptions is Harris et al. (2020), in which the estimation of level shift was improved by correcting the original time series by non-parametrically estimated time varying variance. While the explosive bubble model was proposed by Phillips et al. (2011) and extended by Phillips et al. (2015a,b) and Harvey et al. (2017), in which the time series is generated by a unit root process followed by an explosive regime that is again followed by a unit root regime (or with a possible stationary correction market in a recovery regime), the importance of non-stationary volatility accommodation in bubble detection methods was discussed by Harvey et al. (2016) and Phillips and Shi (2020), the latter of which proposed a modification of the wild bootstrap recursive algorithm (based on the expanding sample) of Harvey et al. (2016) for obtaining the dates of the bubble(s) and also addressed the multiplicity testing problem. Harvey et al. (2020) considered the minimization of the sign based statistic for obtaining the dates of the bubble but did not provide any finite sample performance. On the other hand, as discussed in Harvey et al. (2017) and Pang et al. (2021) (PDC hereafter), the break dates estimators based on the minimization of the sum of the squared residuals are more accurate than the recursive method of Phillips et al. (2015a,b) under the assumption of homoskedasticity. Nevertheless, as far as we know, there are no studies which accommodate the non-stationary volatility behaviour into the estimation of the bubble dates based on the minimization of the sum of the squared residuals. Recently, PDC and Kurozumi and Skrobotov (2022) investigated the asymptotic behaviour of the bubble date estimators. In particular, they obtained the consistency of the collapsing date estimator by minimizing the sum of the squared residuals using the two-regime model (even though the true model has four regimes), allowing non-stationary volatility. Due to the consistency, one could split the whole sample at the estimated break date and consider the estimation of the date of the origination of the bubble using the sample before the estimated collapsing date and the date of the market recovery using the sample after the estimated collapsing date. This sample splitting approach closely resembles that of Harvey et al. 
(2017) by minimizing the full SSR based on the four-regime model, but is computationally less involved and, as PDC demonstrated, performs better in terms of estimation accuracy of the break dates. In contrast to the collapsing date of the bubble, the consistency of the estimated dates of the origination of the bubble and of the market recovery depends on the extent of the explosive regime and the collapsing regime. In other words, if the explosive speed is not sufficiently fast, then PDC and Kurozumi and Skrobotov (2022) obtained only the consistency of the estimators of the break fractions, not of the break dates. In this paper, we propose a two-step algorithm for estimating the emerging date, the collapsing date, and the recovering date of a bubble under non-stationary volatility. First, due to the consistency of the break dates (fractions) estimators regardless of heteroskedasticity, we estimate these break dates as proposed by PDC and Kurozumi and Skrobotov (2022) and collect the residuals of the fitted four-regime model. Second, we estimate non-parametrically the time-varying error variance from these residuals and perform the GLS-based sample splitting approach, which minimizes the weighted SSRs. Monte-Carlo simulations demonstrate the performance of our correction method for a model with a one-time break in volatility, especially when this break occurs at the beginning or the end of the sample. The empirical application consists of different time series of cryptocurrencies for which the two methods of identifying the bubble dates are performed: one without volatility correction and another with volatility correction. The remainder of this paper is organized as follows. Section 2 formulates the model and assumptions. In Section 3, we define the main GLS-based procedure under a general type of weights. The choice of the specific weights is discussed in Section 4 and the new two-step algorithm is proposed. The finite sample performance of the estimated break dates is demonstrated in Section 5, and the empirical example is given in Section 6. Section 7 concludes the paper. ## 2 Model Let us consider the following bubble emergence and collapse model for \(t=1,2,\ldots,T\): \[y_{t}=\left\{\begin{array}{lcl}c_{0}T^{-\eta_{0}}+y_{t-1}+\varepsilon_{t}&:&1\leq t\leq k_{e},\\ \phi_{a}y_{t-1}+\varepsilon_{t}&:&k_{e}+1\leq t\leq k_{c},\\ \phi_{b}y_{t-1}+\varepsilon_{t}&:&k_{c}+1\leq t\leq k_{r},\\ c_{1}T^{-\eta_{1}}+y_{t-1}+\varepsilon_{t}&:&k_{r}+1\leq t\leq T,\end{array}\right. \tag{1}\] where \(y_{0}=o_{p}(T^{1/2})\), \(c_{0}\geq 0\), \(\eta_{0}>1/2\), \(\phi_{a}>1\), \(\phi_{b}<1\), \(c_{1}\geq 0\), and \(\eta_{1}>1/2\). We assume that the market is normal in the first and last regimes in the sense that the time series \(y_{t}\) is a unit root process (a random walk) with a possibly positive drift shrinking to \(0\). The process starts exploding at \(t=k_{e}+1\) at a rate of \(\phi_{a}\), which is typically only slightly greater than one and thus sometimes characterized as a mildly explosive specification. The explosive behavior stops at \(t=k_{c}\) and \(y_{t}\) is collapsing at a rate of \(\phi_{b}<1\) in the next regime, followed by the normal market regime. This model can be seen as a structural change model with the break points being given by \(k_{e}\), \(k_{c}\), and \(k_{r}\). The corresponding break fractions are defined as \(\tau_{e}\coloneqq k_{e}/T\), \(\tau_{c}\coloneqq k_{c}/T\), and \(\tau_{r}\coloneqq k_{r}/T\), respectively.
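For readers who want to experiment with the dating procedures discussed below, the following is an illustrative simulation of the four-regime DGP in (1); the parameter values are placeholders chosen for readability rather than the exact Monte Carlo design of Section 5.

```python
# Illustrative simulation of the four-regime DGP (1); parameters are placeholders.
import numpy as np

def simulate_bubble(T=400, tau=(0.4, 0.6, 0.7), c_a=6.0, c_b=6.0,
                    drift=1 / 800, y0=100.0, sigma=None, rng=None):
    rng = np.random.default_rng(rng)
    sigma = np.ones(T) if sigma is None else np.asarray(sigma, dtype=float)
    k_e, k_c, k_r = (int(x * T) for x in tau)
    phi_a, phi_b = 1.0 + c_a / T, 1.0 - c_b / T       # mildly explosive / collapsing
    y = np.empty(T + 1)
    y[0] = y0
    for t in range(1, T + 1):
        eps = sigma[t - 1] * rng.standard_normal()
        if t <= k_e:                  # normal market: unit root with small drift
            y[t] = drift + y[t - 1] + eps
        elif t <= k_c:                # explosive regime
            y[t] = phi_a * y[t - 1] + eps
        elif t <= k_r:                # collapsing regime
            y[t] = phi_b * y[t - 1] + eps
        else:                         # recovered normal market
            y[t] = drift + y[t - 1] + eps
    return y[1:]

y = simulate_bubble(rng=1)
print(y[:5].round(2), y.argmax())     # the peak should occur around k_c = 0.6 * T
```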
We would like to estimate these break dates as accurately as possible. For model (1), we make the following assumption. **Assumption 1**: _\(0<\tau_{e}<\tau_{c}<\tau_{r}<1\)._ **Assumption 2**: _\(\varepsilon_{t}\coloneqq\sigma_{t}e_{t}\), where \(\{e_{t}\}\sim i.i.d.(0,1)\) with \(E[e_{t}^{4}]<\infty\) and \(\sigma_{t}\coloneqq\omega(t/T)\), where \(\omega(\cdot)\) is a nonstochastic and strictly positive function on \([0,1]\) satisfying \(\underline{\omega}<\omega(\cdot)<\overline{\omega}<\infty\)._ By Assumption 1, the break fractions are distinct and not too close to each other. Assumption 2 allows for various kinds of nonstationary unconditional volatility in the shocks, such as a volatility shift (possibly multiple times) and linear and non-linear transitions. Under Assumption 2, it is well known that the functional central limit theorem (FCLT) holds for the partial sum process of \(\{\varepsilon_{t}\}\) normalized by \(\sqrt{T}\), which weakly converges to a variance-transformed Brownian motion as shown by Cavaliere and Taylor (2007a,b). ## 3 Individual Estimation of Break Dates Following PDC and Kurozumi and Skrobotov (2022), we estimate the break dates one at a time. As model (1) can be expressed as \[y_{t}=\left\{\begin{array}{l}\phi_{1}y_{t-1}+u_{t}\\ \phi_{a}y_{t-1}+u_{t}\\ \phi_{b}y_{t-1}+u_{t}\\ \phi_{1}y_{t-1}+u_{t}\end{array}\right.\quad\text{where }\phi_{1}=1\quad\text{and}\quad u_{t}\coloneqq\left\{\begin{array}{l}c_{0}/T^{\eta_{0}}+\varepsilon_{t}\\ \varepsilon_{t}\\ \varepsilon_{t}\\ c_{1}/T^{\eta_{1}}+\varepsilon_{t},\end{array}\right. \tag{2}\] PDC and Kurozumi and Skrobotov (2022) proposed to fit a one-time structural change model without a constant and to estimate the break point by minimizing the sum of the squared residuals. It is shown that the estimated break date, \(\hat{k}_{c}\), is consistent for \(k_{c}\). We then split the whole sample into two subsamples, and from the first subsample before \(\hat{k}_{c}\), the emerging date of the explosive behavior is estimated by fitting a one-time structural change model again, while \(k_{r}\) is estimated from the second subsample after \(\hat{k}_{c}\). These estimated break fractions, \(\hat{\tau}_{e}\coloneqq\hat{k}_{e}/T\) and \(\hat{\tau}_{r}\coloneqq\hat{k}_{r}/T\), are shown to be consistent and, further, \(\hat{k}_{e}\) (\(\hat{k}_{r}\)) is consistent for \(k_{e}\) (\(k_{r}\)) if, roughly speaking, \(\phi_{a}\) deviates from 1 sufficiently (\(\phi_{a}-1>1-\phi_{b}\)). See PDC and Kurozumi and Skrobotov (2022) for details. Although the above estimated break dates (fractions) are consistent under nonstationary volatility in Assumption 2, an efficiency gain would be expected from estimating the break dates based on the weighted sum of the squared residuals (SSR). To be more precise, let \(\delta_{t}\) be a generic series of weights; then the weighted SSR based on a one-time structural change model is given by \[SSR(k,\delta_{t},\phi_{a},\phi_{b})\coloneqq\sum_{1}^{k}\delta_{t}^{-2}\left(y_{t}-\phi_{a}y_{t-1}\right)^{2}+\sum_{k+1}^{T}\delta_{t}^{-2}\left(y_{t}-\phi_{b}y_{t-1}\right)^{2}, \tag{3}\] where \(\sum_{t=\ell}^{m}\) is abbreviated just as \(\sum_{\ell}^{m}\). 
As \(SSR(k,\delta_{t},\phi_{a},\phi_{b})\) is minimized at \[\hat{\phi}_{a}(k,\delta_{t})\coloneqq\frac{\sum_{1}^{k}y_{t-1}y_{t}\delta_{t}^{-2}}{\sum_{1}^{k}y_{t-1}^{2}\delta_{t}^{-2}}\quad\text{and}\quad\hat{\phi}_{b}(k,\delta_{t})\coloneqq\frac{\sum_{k+1}^{T}y_{t-1}y_{t}\delta_{t}^{-2}}{\sum_{k+1}^{T}y_{t-1}^{2}\delta_{t}^{-2}}\] for given \(k\) and \(\delta_{t}\), the estimator of \(k_{c}\) is given by \[\hat{k}_{c}(\delta_{t})\coloneqq\arg\min_{\underline{\tau}_{c}\leq k/T\leq\overline{\tau}_{c}}SSR(k,\delta_{t}),\] where \(0<\underline{\tau}_{c}<\tau_{c}<\overline{\tau}_{c}<1\) and \(SSR(k,\delta_{t})\coloneqq SSR(k,\delta_{t},\hat{\phi}_{a}(k,\delta_{t}),\hat{\phi}_{b}(k,\delta_{t}))\). The corresponding break fraction estimator is defined as \(\hat{\tau}_{c}(\delta_{t})\coloneqq\hat{k}_{c}(\delta_{t})/T\). Once we have obtained the estimator of \(k_{c}\), we can move on to the estimation of \(k_{e}\) and \(k_{r}\). For \(k_{e}\), the estimation is based on the minimization of the weighted sum of the squared residuals using the first sub-sample, and the estimator is defined as \[\hat{k}_{e}(\delta_{t})\coloneqq\arg\min_{\underline{\tau}_{e}\leq k/T\leq\overline{\tau}_{e}}SSR_{1}(k,\delta_{t})\] where \(0<\underline{\tau}_{e}<\tau_{e}<\overline{\tau}_{e}<\hat{\tau}_{c}\) and \[SSR_{1}(k,\delta_{t})\coloneqq\sum_{1}^{k}\delta_{t}^{-2}\left(y_{t}-\hat{\phi}_{c}(k,\delta_{t})y_{t-1}\right)^{2}+\sum_{k+1}^{\hat{k}_{c}(\delta_{t})}\delta_{t}^{-2}\left(y_{t}-\hat{\phi}_{d}(k,\delta_{t})y_{t-1}\right)^{2}\] \[\text{with}\quad\hat{\phi}_{c}(k,\delta_{t})\coloneqq\frac{\sum_{1}^{k}y_{t-1}y_{t}\delta_{t}^{-2}}{\sum_{1}^{k}y_{t-1}^{2}\delta_{t}^{-2}}\quad\text{and}\quad\hat{\phi}_{d}(k,\delta_{t})\coloneqq\frac{\sum_{k+1}^{\hat{k}_{c}(\delta_{t})}y_{t-1}y_{t}\delta_{t}^{-2}}{\sum_{k+1}^{\hat{k}_{c}(\delta_{t})}y_{t-1}^{2}\delta_{t}^{-2}}.\] The corresponding break fraction estimator is defined as \(\hat{\tau}_{e}(\delta_{t})\coloneqq\hat{k}_{e}(\delta_{t})/T\). For notational convenience, we suppressed the dependence of \(\hat{k}_{e}(\delta_{t})\), \(\hat{\tau}_{e}(\delta_{t})\), and \(SSR_{1}(k,\delta_{t})\) on \(\hat{k}_{c}(\delta_{t})\). On the other hand, for the estimation of \(k_{r}\), we minimize the weighted sum of the squared residuals using the second sub-sample, and the estimator is defined as \[\hat{k}_{r}(\delta_{t})\coloneqq\arg\min_{\underline{\tau}_{r}\leq k/T\leq\overline{\tau}_{r}}SSR_{2}(k,\delta_{t})\] where \(\hat{\tau}_{c}<\underline{\tau}_{r}<\tau_{r}<\overline{\tau}_{r}<1\) and \[SSR_{2}(k,\delta_{t})\coloneqq\sum_{\hat{k}_{c}(\delta_{t})+1}^{k}\delta_{t}^{-2}\left(y_{t}-\hat{\phi}_{e}(k,\delta_{t})y_{t-1}\right)^{2}+\sum_{k+1}^{T}\delta_{t}^{-2}\left(y_{t}-\hat{\phi}_{f}(k,\delta_{t})y_{t-1}\right)^{2}\] \[\text{with}\quad\hat{\phi}_{e}(k,\delta_{t})\coloneqq\frac{\sum_{\hat{k}_{c}(\delta_{t})+1}^{k}y_{t-1}y_{t}\delta_{t}^{-2}}{\sum_{\hat{k}_{c}(\delta_{t})+1}^{k}y_{t-1}^{2}\delta_{t}^{-2}}\quad\text{and}\quad\hat{\phi}_{f}(k,\delta_{t})\coloneqq\frac{\sum_{k+1}^{T}y_{t-1}y_{t}\delta_{t}^{-2}}{\sum_{k+1}^{T}y_{t-1}^{2}\delta_{t}^{-2}}.\] The corresponding break fraction estimator is defined by \(\hat{\tau}_{r}(\delta_{t})\coloneqq\hat{k}_{r}(\delta_{t})/T\). We call the above method the sample splitting approach based on the weighted least squares (WLS) method. 
Note that the special case where \(\delta_{t}=1\) for all \(t\), called the OLS method in this paper, corresponds to the estimation method by PDC with the sample ranging from 1 to \(k_{r}\) and to that by Kurozumi and Skrobotov (2022). ## 4 Adaptive Estimation To implement the sample splitting approach based on the WLS method in practice, we need to choose a weight function \(\delta_{t}\) appropriately. In our model, it is natural to choose the volatility function \(\sigma_{t}\) as the weight \(\delta_{t}\) to obtain an efficiency gain, but such a WLS estimation is infeasible because the volatility function is unknown. In this article, we follow Xu and Phillips (2008) and estimate \(\sigma_{t}\) by a kernel-based method. More precisely, we first estimate \(\tau_{e}\), \(\tau_{c}\), and \(\tau_{r}\) by the sample splitting approach based on the OLS method (\(\delta_{t}=1\) for all \(t\)) as proposed by PDC and Kurozumi and Skrobotov (2022). Then, using the estimated break dates denoted as \(\hat{\tau}_{e}(1)\), \(\hat{\tau}_{c}(1)\), and \(\hat{\tau}_{r}(1)\), we estimate \[\Delta y_{t}=\mu_{1}D_{t}(\hat{\tau}_{e}(1),\hat{\tau}_{c}(1))+\mu_{2}D_{t}(\hat{\tau}_{c}(1),\hat{\tau}_{r}(1))+\delta_{1}D_{t}(\hat{\tau}_{e}(1),\hat{\tau}_{c}(1))y_{t-1}+\delta_{2}D_{t}(\hat{\tau}_{c}(1),\hat{\tau}_{r}(1))y_{t-1}+e_{t}, \tag{4}\] by the least squares method and obtain the residuals \(\hat{e}_{t}\), where \(D_{t}(a,b)=\mathbb{I}(\lfloor aT\rfloor<t\leq\lfloor bT\rfloor)\) with \(\mathbb{I}(\cdot)\) being the indicator function. Next, \(\hat{\sigma}_{t}^{2}\) is calculated as \[\hat{\sigma}_{t}^{2}=\left(\sum_{i=1}^{T}K_{it}\right)^{-1}\sum_{i=1}^{T}K_{it}\hat{e}_{i}^{2},\quad\text{where}\quad K_{it}=\left\{\begin{array}{ll}K\left(\frac{t-i}{Tb}\right)&\text{if }t\neq i\\ 0&\text{if }t=i\end{array}\right., \tag{5}\] \(K(\cdot)\) is a bounded nonnegative continuous kernel function defined on the real line with \(\int_{-\infty}^{\infty}K(s)ds=1\), and \(b\) is a bandwidth parameter. Finally, by plugging \(\hat{\sigma}_{t}^{2}\) into \(\delta_{t}^{2}\) in the sample splitting approach, we obtain the estimators \(\hat{\tau}_{e}(\hat{\sigma}_{t})\), \(\hat{\tau}_{c}(\hat{\sigma}_{t})\), and \(\hat{\tau}_{r}(\hat{\sigma}_{t})\). Xu and Phillips (2008) showed that the estimation accuracy of the coefficient in a stable autoregressive model improves with the adaptive (WLS) estimation, and we investigate in the next section whether it also works for the estimation of the bubble dates. ## 5 Monte-Carlo Simulations In this section, we examine the finite sample performance of the estimated bubble regime dates when the error variance is subject to changes in volatility. The Monte-Carlo simulations reported in this section are based on the series generated by (1) with \(y_{0}=1500\) and \(\{\varepsilon_{t}\}\sim IIDN(0,1)\). Data are generated from this DGP for sample sizes \(T\in\{400,800\}\) with \(50,000\) replications.1 We set the drift terms in the first and fourth regimes to \(c_{0}T^{-\eta_{0}}=1/800\) and \(c_{1}T^{-\eta_{1}}=1/800\), respectively, following PDC. In this experiment, we focus on local to unit root behaviour characterized by \(\phi_{a}=1+c_{a}/T\) and \(\phi_{b}=1-c_{b}/T\), where \(c_{a}\) takes values in \(\{4,5,6\}\) whereas \(c_{b}\) is fixed at 6. Footnote 1: All simulations were programmed in R with the rnorm random number generator. For the dates of the bubble regimes, we set \((\tau_{e},\tau_{c},\tau_{r})\) equal to (0.4,0.6,0.7). 
This setting seems to be empirically relevant considering the Japanese stock price and its logarithm, the US house price index, and cryptocurrencies. We consider the case with a one-time break in volatility at date \(\tau\), so that the volatility function \(\sigma_{t}\) has the following form: \[\sigma_{t}^{2}=\sigma_{0}^{2}+(\sigma_{1}^{2}-\sigma_{0}^{2})\mathbb{I}(t>\lfloor\tau T\rfloor),\] where \(\sigma_{1}/\sigma_{0}\) takes values in \(\{1/5,5\}\) and \(\tau\) takes values in \(\{0.2,0.8\}\). As in Kurozumi and Skrobotov (2022), in the minimization of \(SSR(k/T)\), \(SSR_{1}(k/T)\), and \(SSR_{2}(k/T)\), we excluded the first and last 5% of observations from the permissible break date \(k\). For example, when estimating \(k_{r}\) based on \(SSR_{2}(k/T)\), the permissible break date \(k\) ranges from \(\hat{k}_{c}+0.05T+1\) to \(0.95T\). If the break date estimate \(\hat{k}_{c}\) exceeds \(0.95T\), then we cannot estimate \(k_{r}\); we do not include such a case in any bins of the histogram and thus the sum of the heights of the bins does not necessarily equal one for \(\hat{k}_{r}\) in some cases. Similarly, we cannot estimate \(\hat{k}_{e}\) when \(\hat{k}_{c}<0.05T\). To save space, we present several selected cases in the following; the other cases are provided in the online appendix. Figure 1 presents the histograms of \(\hat{k}_{c}\) when \(\tau=0.8\), \(\sigma_{0}/\sigma_{1}=1/5\), and \(T=400\). The left column shows the results based on the OLS method, while the right column corresponds to the WLS method. In this case, the process becomes more volatile at the end of the sample and thus it would be difficult to distinguish between the explosive and collapsing behavior (from \(\tau=0.4\) to \(0.7\)) and a random walk with high volatility (from \(\tau=0.8\) to 1). As expected, the OLS method tends to incorrectly choose the end of the sample as the collapsing date when \(c_{a}=4\), as shown in Figure 1(a), although a local peak is observed at around the true break fraction (\(\tau_{c}=0.6\)). As the size of the bubble (\(c_{a}\)) gets larger, the local peak becomes higher, as observed in Figures 1(c) and (e) (note that the vertical axis differs depending on the value of \(c_{a}\)). By contrast, we can observe from Figures 1(b), (d), and (f) that the WLS method can estimate the collapsing date more accurately than the OLS method; the finite sample distribution has a mode at the true break fraction and the frequency of correctly estimating the true date by WLS is about twice that by OLS. Figure 2 shows the histograms of \(\hat{k}_{c}\) when \(\tau=0.2\), \(\sigma_{0}/\sigma_{1}=5\), and \(T=400\). In this case, there exists a unit root regime with high volatility at the beginning of the sample and thus it is expected that the histograms would have positive frequencies before \(\tau=0.2\). In fact, this is the case as observed in Figure 2, although the accuracy is much better than in the case of Figure 1. Overall, the WLS-based method can detect the true collapsing date more often than the OLS-based method. For example, when \(c_{a}=4\) and \(c_{b}=6\), the relative frequency of correct detection of the true collapsing date rises from 0.25 to 0.35 by introducing the adaptive procedure. We can also observe that the WLS method incorrectly detects the collapsing date at the beginning of the sample less frequently than the OLS method. 
Figure 3 presents the histograms of \(\hat{k}_{e}\) when \(\tau=0.2\), \(\sigma_{0}/\sigma_{1}=5\), and \(T=400\), which is the same case as in Figure 2. Overall, when the size of the bubble is small with \(c_{a}=4\), it is difficult to estimate the emerging date (\(\tau_{e}=0.4\)) accurately, but for large values of \(c_{a}\), the accuracy of \(\hat{k}_{e}\) improves and the histograms have a peak at 0.4, as in Figures 3(c)-(f). Again, in this case, the performance of the estimator based on the WLS method is better than that based on the OLS method. The other results are briefly summarized in the online appendix. Overall, the Monte-Carlo simulations demonstrate that the accuracy of the estimators of the break dates improves significantly in some cases, while in other cases we cannot find any difference between the distribution of the estimator based on the OLS method and that based on the WLS method. Because our volatility correction does not deteriorate the finite sample performance of the break dates estimators, we recommend using the sample splitting approach with the WLS-based method in all cases. ## 6 Empirical Application In this section, we demonstrate the application of the two sample splitting approaches to the largest cryptocurrencies by capitalization (btc, eth, xrp, xlm, bch, ltc, eos, bnb, ada, xtz, etc, xmr) using daily observations. In all cases, the closing price in US dollars at 00:00 GMT on the corresponding day is used. Recently, Kurozumi et al. (2022) investigated the explosive behavior of these time series and detected both explosiveness and non-stationary volatility. We implemented the two estimation methods for each calendar year (365 observations from January 1 to December 31) from 2014 to 2019, provided that data for the corresponding currency are available in that year. We report only the cases where the two methods return different estimates of the break dates, because the purpose of this section is to demonstrate how effective the WLS method is for identifying the dates of the explosive behavior. Therefore, we omit the cases where the break dates are the same in both methods. We found eight cases where at least one of the estimated break dates is different. The results are presented in Figures 4-11. In each figure, the black line shows the sample path of the corresponding cryptocurrency, the three red dotted lines the estimated dates of the emergence, collapse, and recovery based on the OLS method, and the three blue dashed lines those estimated by the WLS method. For xrp in 2014 in Figure 4, the series is collapsing from the beginning of the sample and it seems to be explosive, at least by visual inspection, at the end of the sample. Clearly, our model (1) with one explosive regime is not valid in the corresponding year. In such a case, neither method can identify the correct break dates. This example demonstrates that we should carefully choose sample periods that include only one set of the four regimes, in the same order as in (1). Figure 5 shows xmr in 2015. We can observe that the same collapsing and recovering dates of the explosive behavior are obtained by the two methods, whereas the emerging date estimated by the WLS method, which takes nonstationary volatility into account, is about one month earlier than that estimated by the OLS method. Figure 6 shows eth in 2016, which, by visual inspection, may become explosive twice. 
It seems that the WLS method successfully detects the explosive behavior of eth in early 2016, whereas the OLS method erroneously identifies the second peak of the process as the recovering date. The currency xlm in 2017 is given in Figure 7, in which there exists a small explosive episode in the middle of the sample while a much larger one is observed at the end of the sample, which is not compatible with our model (1). Nevertheless, the WLS method seems to detect the first explosive episode well, whereas \(\hat{k}_{c}\) estimated by the OLS method is no longer the collapsing date. Figure 8 for etc in 2017 and Figure 9 for xmr in 2017 are similar to Figure 7 in that the time series has two explosive episodes in the sample. Again, for etc in 2017, the first exuberance is well identified by the WLS method, whereas the OLS method seems to fail to accurately estimate the recovering date. On the other hand, it seems to be difficult to identify the break dates by both methods for xmr in 2017. Figure 10 shows the sample path of xlm in 2018, which has several small humps in this sample period. Although the three break dates estimated by the WLS method may be interpreted as the emerging, collapsing, and recovering dates, they may not correspond to one specific explosive episode but to some of the several humps. On the other hand, the three dates estimated by the OLS method cannot be interpreted as the theory designates. Figure 11 shows bnb in 2018, in which large explosiveness is observed at the beginning of the sample and there seems to exist a mild explosive and collapsing behavior over most of the sample. It seems that the WLS method captures this second behavior, although the collapsing regime is relatively short once the volatility shift is taken into account. The collapsing date estimated by the OLS method seems to be incorrect; it might instead be either the recovering date or the emerging date. As a whole, the volatility correction by the WLS method seems to work well, except for several cases where the explosive behavior is observed more than once. We also observed that the WLS method can be robust to a short explosive episode either at the beginning or the end of the sample if there exists another exuberance in the middle of the sample, although it is desirable to set up sample periods in which only one episode of exuberance is included. For that purpose, the procedure proposed by Phillips et al. (2015a,b) may be useful. ## 7 Conclusion We proposed an algorithm for volatility correction in the estimation of the dates of the bubble in a four-regime model. The method consists of the following steps: estimation of the break dates without volatility correction; non-parametric estimation of the volatility function by replacing the true break dates with the estimated ones; and WLS-based estimation of the dates of the bubble. The Monte-Carlo results show that the break dates are estimated at least as accurately as under the homoskedasticity assumption, and more accurately in some cases. The empirical illustration using cryptocurrencies demonstrates the difference in performance between the two methods, with and without volatility correction, and shows that the WLS method returns adequate break dates more often than the OLS method.
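To make the two-step procedure summarized in the conclusion concrete, here is a compact sketch (ours): a leave-one-out kernel estimate of the error variance in the spirit of (5), followed by the weighted two-regime SSR split of Section 3. The Gaussian kernel, the bandwidth, and the trimming rule below are illustrative assumptions, not prescriptions from the paper.

```python
# Illustrative sketch of the volatility-corrected dating step.
import numpy as np

def kernel_variance(residuals, b=0.1):
    """Leave-one-out kernel average of squared residuals, in the spirit of (5)."""
    residuals = np.asarray(residuals, dtype=float)
    T = len(residuals)
    s = (np.arange(T)[:, None] - np.arange(T)[None, :]) / (T * b)
    K = np.exp(-0.5 * s ** 2)          # Gaussian kernel evaluated at (t - i) / (T b)
    np.fill_diagonal(K, 0.0)           # K_it = 0 when t = i
    return K @ residuals ** 2 / K.sum(axis=1)

def weighted_break_date(y, sigma2, trim=0.05):
    """k minimizing the weighted two-regime SSR of the AR(1) fit without intercept."""
    T = len(y) - 1                     # y[0] is the initial value y_0
    w = 1.0 / np.asarray(sigma2)       # weights delta_t^{-2} with delta_t = sigma_hat_t
    best = None
    for k in range(int(trim * T), int((1 - trim) * T)):
        ssr = 0.0
        for lo, hi in ((1, k), (k + 1, T)):
            y_lag, y_cur, wt = y[lo - 1:hi], y[lo:hi + 1], w[lo - 1:hi]
            phi = np.sum(wt * y_lag * y_cur) / np.sum(wt * y_lag ** 2)
            ssr += np.sum(wt * (y_cur - phi * y_lag) ** 2)
        if best is None or ssr < best[0]:
            best = (ssr, k)
    return best[1]

# Usage outline: residuals from the OLS-dated regression (4) -> sigma2 -> k_c estimate.
# sigma2 = kernel_variance(ols_residuals); k_c_hat = weighted_break_date(y, sigma2)
```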
2302.01751
Motion ID: Human Authentication Approach
We introduce a novel approach to user authentication called Motion ID. The method employs motion sensing provided by inertial measurement units (IMUs), using it to verify the persons identity via short time series of IMU data captured by the mobile device. The paper presents two labeled datasets with unlock events: the first features IMU measurements, provided by six users who continuously collected data on six different smartphones for a period of 12 weeks. The second one contains 50 hours of IMU data for one specific motion pattern, provided by 101 users. Moreover, we present a two-stage user authentication process that employs motion pattern identification and user verification and is based on data preprocessing and machine learning. The Results section details the assessment of the method proposed, comparing it with existing biometric authentication methods and the Android biometric standard. The method has demonstrated high accuracy, indicating that it could be successfully used in combination with existing methods. Furthermore, the method exhibits significant promise as a standalone solution. We provide the datasets to the scholarly community and share our project code.
Aleksei Gavron, Konstantin Belev, Konstantin Kudelkin, Vladislav Shikhov, Andrey Akushevich, Alexey Fartukov, Vladimir Paramonov, Dmitry Syromolotov, Artem Makoyan
2023-01-25T09:08:33Z
http://arxiv.org/abs/2302.01751v1
# Motion ID: Human Authentication Approach ###### Abstract We introduce a novel approach to user authentication called Motion ID. The method employs motion sensing provided by inertial measurement units (IMUs), using it to verify the person's identity via short time series of IMU data captured by the mobile device. The paper presents two labeled datasets with unlock events: the first features IMU measurements, provided by six users who continuously collected data on six different smartphones for a period of 12 weeks. The second one contains 50 hours of IMU data for one specific motion pattern, provided by 101 users. Moreover, we present a two-stage user authentication process that employs motion pattern identification and user verification and is based on data preprocessing and machine learning. The Results section details the assessment of the method proposed, comparing it with existing biometric authentication methods and the Android biometric standard. The method has demonstrated high accuracy, indicating that it could be successfully used in combination with existing methods. Furthermore, the method exhibits significant promise as a standalone solution. We provide the datasets to the scholarly community and share our project code. ## 1 Introduction Traditional authentication methods in mobile biometrics (PIN or lock pattern) are being phased out in favor of modern approaches such as fingerprint(9), iris(14), or facial recognition(17). The most important benefits of such a switch are user convenience and greater recognition accuracy. However, while the latest technologies have their strengths, both types of biometric methods entail additional interaction between the user and the device, thus being called explicit authentication. This interaction brings along a whole array of issues. The first issue lies in the necessity of additional hardware in mobile devices. Fingerprint recognition systems require optical, capacitive, or ultrasonic(16) sensors. For iris recognition, smartphones must be equipped with special infrared cameras. Due to the emergence of new security issues and corner cases (which we will discuss below), facial recognition needs special equipment, such as a time-of-flight camera (e.g. TrueDepth) or a stereo camera, rather than just a regular selfie camera. Mobile hardware is constantly changing to keep up with the trends - for example, fingerprint sensors and cameras are being integrated underneath the displays. This process is extremely costly. Furthermore, hardware changes require changes in the software, which only worsens the issue at hand. The second issue is insufficient security. Fingerprint, iris, and face recognition systems can be hijacked via artificial fingerprints/irises/faces (spoofs), manually manufactured through different methods: silicone or gelatin fakes (for fingerprints), 2D photos, or printed 3D masks (for iris and face recognition). Currently, specific anti-spoofing algorithms[13; 12] are used to tackle this issue. But anti-spoofing algorithms have to be constantly updated to successfully counteract the use of different kinds of lighting, materials, and the general advances in hacking techniques, making it a never-ending endeavor. This is a widespread weakness that has led to the emergence of international anti-spoofing competitions such as LivDet[2; 20]. The third issue concerns the emergence of new corner cases. 
During the pandemic, masks[18] and gloves have become a significant obstacle to facial recognition and fingerprint recognition systems, respectively. This lowers the accuracy of facial recognition, requiring additional tweaks and updates to software or hardware, while fingerprint recognition systems become unusable. The fourth issue concerns the limited application of such methods. As far as mobile business is concerned, all the above-mentioned biometric technologies are used only in smartphones. However, more and more different devices require biometrics. This includes AR/VR glasses, gaming controllers, smartwatches and other wearable electronics. Not all of these devices can support built-in cameras or fingerprint sensors, to say nothing of ease of use. In addition, the emergence and popularity of metaverses[8] mean that additional hardware for user authentication is becoming a must. To sum up, the above-mentioned issues prevent manufacturers from fully complying with the requirements for biometric technology, presented in the Android Compatibility Definition Document.1 Passive (or implicit) authentication[5], based on IMU sensors, can circumvent these issues without sacrificing security and ease of use[3]. Widespread usage of IMU sensors in smartphones and other wearable electronics allows us to create a unique system that tracks and recognizes motion patterns for each specific user. Footnote 1: CDD: ([https://source.android.com/security/biometric/measure](https://source.android.com/security/biometric/measure)) ## 2 Related Work There are very few datasets with recorded IMU data and they are all different in a variety of ways. This indicates that there are no standards for collection of such data, which further complicates the search for suitable data for our experiment. Data for UCI-HAR(1) were collected by 30 users, with each user performing six actions: walking, walking upstairs and downstairs, sitting, standing, and lying down. Users also wore their smartphones at the waist level. This dataset was unable to solve the previously mentioned biometrics problems for several reasons: users did not use the smartphone in the way that people would ordinarily use, the dataset did not capture unlocking events, and the number of users was too low to enable optimal performance that is required for biometrics. The HMOG dataset[21] contains a much larger amount of IMU data - 100 volunteers collected data over randomly selected actions, such as reading, writing, or map navigation. However, this dataset does not fully cover all aspects of the problem either, most importantly, it lacks flag labels for unlock events. The WISDM-HARB dataset[19] was collected from 51 participants during 18 different activities. IMU data were recorded at a sampling rate of 20 Hz via a smartphone and a smartwatch. This sampling rate is insufficient for the purposes of mobile user verification. Similar to the previous dataset, this one lacked labels for unlock events. There is also a large-scale study[11] on biometric authentication that relied on a huge (but private) dataset collected by 1,500 volunteers. Besides IMU measurements, this dataset also contains such smartphone sensor readings as images from the front camera, touchscreen data, GPS, Bluetooth, etc. However, tracking this amount of data is impractical because the resources available to biometric authentication systems are extremely limited and it is currently impossible to store all of the necessary data. 
There is a study[3] that takes such hardware limitations into account. It uses only the most relevant data that are most frequently mentioned in scholarly literature. However, the dataset for this study was collected by 30 volunteers with a sampling rate of 1 Hz. This is insufficient for identifying possible unlocking events and staying within the current performance requirements for biometrics. ## 3 Motion ID: Concept of Operations Motion ID starts with a built-in pre-trained base algorithm based on two sets of data. For a certain period (a couple of weeks), the smartphone collects the owner's IMU data. The collected data is flagged at points when the user unlocks their phone. The system then adapts to the unique user data and the fine-tuned Motion ID system is ready for daily use. Once Motion ID is configured for a specific user, authentication occurs in two consecutive steps: (1) _the pre-unlocking step_; (2) _user verification_ (to make sure that it was the owner who performed the action that fits the motion pattern). In the first step, the system predicts that the user is going to unlock the device by detecting a _usage pattern_ that corresponds to the device being unlocked. Next, the system verifies that it was the owner of the device who executed the pattern. This is done to rule out hijacking attempts (which would replicate the usage pattern). In other words, _user verification_ is performed. Figure 1 shows a real-life user scenario for Motion ID (compared to facial recognition as an example). In the first step, when the owner is about to use their device, the proposed authentication method _predicts that the unlock event will happen based on the usage patterns_. The system _verifies the user_ just before the owner is ready to use the device. The proposed method not only eliminates all existing obstacles for biometrics, but also reduces the time it takes to authenticate and verify the user (excluding the inference time of the biometric solution itself). ## 4 IMU datasets ### Data Collection We have developed two smartphone data collection protocols and an Android app to collect IMU data. These protocols are based on two different types of interaction between the user and the smartphone, both of which are an integral part of Motion ID: (1) The first dataset focuses on recognizing individual usage patterns. This is the first part of Motion ID, called _Motion Patterns Identification_; (2) The second dataset, called _User Verification_, follows directly from the previous one. #### 4.1.1 Motion Patterns Identification Six users each with a Samsung Galaxy S10e smartphone collected IMU data every day for 2 weeks. At the end of the 2 weeks, the users switched smartphones with each other and restarted the process. Each user spent 2 weeks per smartphone during the whole data collection process, which took 12 Figure 1: User scenario for Motion ID. The closed padlock icon (at the top and bottom) signifies the stages where the device has not yet been unlocked by the user, and, conversely, the open padlock icon denotes stages when the device is unlocked. weeks in total. Throughout the experiment, the Galaxy S10e was the main and only device of each user. The smartphones were used habitually and ordinarily, with the only difference from real-life scenarios being that the data collection app was always on. Data were collected from the following sensors: accelerometer (gravity and linear acceleration), magnetometer, gyroscope, and rotation sensor. The sampling rate averaged at 50 Hz. 
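As a rough illustration of the kind of stream this produces (this is not the actual logging format of the collection app; the record fields and the helper below are assumptions made only for the sketch), a logged reading and a fixed-length window over the stream can be represented as follows:

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List

# Hypothetical record for one logged IMU reading; the field names are
# illustrative and do not reflect the exact logging format of the app.
@dataclass
class ImuReading:
    timestamp_ms: int   # device uptime in milliseconds
    sensor: str         # e.g. "linear_acceleration", "gyroscope", "magnetometer"
    x: float
    y: float
    z: float

def window_ending_at(readings: List[ImuReading], t_ms: int,
                     length_ms: int = 3000) -> Dict[str, List[ImuReading]]:
    """Group the readings of each sensor that fall in the `length_ms` window
    ending at timestamp `t_ms`. At ~50 Hz, a 3-second window holds on the
    order of 150 readings per sensor; windows with far fewer readings
    (sensor drop-outs) can simply be discarded."""
    window: Dict[str, List[ImuReading]] = defaultdict(list)
    for r in readings:
        if t_ms - length_ms <= r.timestamp_ms <= t_ms:
            window[r.sensor].append(r)
    return window
```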
For the entire duration of the data collection, each user unlocked their phone using biometrics, namely fingerprint recognition. For each unlock event, the data was labeled with a special flag. The flags were used to prepare the data so that the machine learning model could be trained on it. To ensure consistency in the data, each of the smartphones was unlocked only by fingerprint (via a capacitive sensor on the side panel of the device).
#### 4.1.2 User Verification
For the user verification part, we used the same Android app, but only on one smartphone: a Galaxy S20. The data was collected by 101 users, each of whom lifted the smartphone from the table 300 times, 50 times for each of the 6 locations of the device. The data collection procedure for user verification was as follows: (1) The user lifts the locked smartphone from the table surface to a comfortable level; (2) The user unlocks the smartphone via an in-display ultrasonic fingerprint sensor; (3) The user locks the smartphone via the Home button and puts the device back down; (4) The cycle is repeated 50 times at each location, for a total of 300 times per person. Data collection usually took about 30 minutes per user. The test subjects were allowed to rest for 0.5-2 minutes between each location. This was necessary to keep their motion patterns natural and prevent mechanical motions, as well as to subdivide the data into six clusters. Each unlock event was labeled with a special flag, same as for the _Motion Patterns Identification_.
### Sample Size (User Count) Justification
There were no special requirements for the number of users for the _Motion Patterns Identification_ stage. Each user was processed independently of the others; only the duration of data collection matters. More IMU data means more samples that the machine learning model can be trained on. The following explanation applies only to the _User Verification_ part. The main goal of the Motion ID feasibility study was to achieve performance comparable to the Android biometrics standard - the Android Compatibility Definition Document (CDD) - and to existing biometric solutions, at least for a limited number of use cases. According to the CDD, biometric systems can be divided into Class 3 (formerly _Strong_), Class 2 (formerly _Weak_), or Class 1 (formerly _Convenience_). The Strong class requires _TAR2(@FAR3=1/50000)=90%_.
Footnote 2: TAR – True Acceptance Rate
Footnote 3: FAR – False Acceptance Rate
To determine the number of people required for data collection, we had to take into account the following factors: (1) Sufficient number of people/attempts to correctly evaluate the test dataset, in other words, the ability to produce the metrics described above (for the Strong class); (2) Sufficient data for training and validation sets; (3) Reasonable number of subjects, considering limited conditions; (4) Reasonable amount of time for data collection per person. Obtaining a reliable FAR estimate at this level requires a huge sample size and a lot of time if performance is to be compared directly. To estimate the sufficient number of people/tries, we decided to use the rule of 30, which states that "to be 90% confident that the true error rate is within \(\pm\)30% of the observed error rate, there must be at least 30 errors"(10; 4). In the case of TAR(@FAR=1/50000)=90% and 90% confidence, this rule required us to make 300 genuine comparisons and 1.5M impostor comparisons. However, this rule assumes that the tests are independent (with different subjects randomly selected from the population).
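For reference, the arithmetic behind these two comparison counts is simply the rule-of-30 threshold divided by the target error rates:
\[
\frac{30}{1/50000}=1.5\times 10^{6}\ \text{impostor comparisons},\qquad\frac{30}{1-0.9}=300\ \text{genuine comparisons},
\]
where \(1/50000\) is the target FAR and \(0.9\) the target TAR of the Strong class.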
This means that, to fully comply with the rule, we needed at least 300 subjects for genuine comparisons and 1.5 million subjects for impostor comparisons, which is impossible. For performance evaluation, we decided to use the bootstrap method. Cross-comparison approach reduces the expected confidence level compared to the same number of independent comparisons. ### Datasets Pre-processing Before training the neural network and evaluating the results further, we undertook several pre-processing steps, independently for each of the two datasets. #### 4.3.1 Motion Patterns Identification At this stage, we needed to get the time series that leads or does not lead to a device being unlocked. To determine the series that leads to an unlock, we looked at all the timestamps marked with the USER_PRESENT flag.4 For each of these timestamps, we gathered data from each sensor over a period of 3 seconds. Combining measurements from all sensors, we obtained the required time series. At times, the smartphone failed to take measurements for several seconds due to technical imperfections of the sensors. To obtain a fixed signal length, we only took those time series, where there were at least 100 readings for each sensor. Footnote 4: [https://developer.android.com/reference/android/content/Intent](https://developer.android.com/reference/android/content/Intent) To determine the time series that did not lead to the device being unlocked, we first had to eliminate the times when the phone was motionless - i.e., when the linear accelerometer readings were zero for all 3 axes. After that, we took the time intervals between SCREEN_OFF and the next SCREEN_ON or USER_PRESENT flag, discarding the last 3 seconds (since these 3 seconds lead to the device being unlocked). #### 4.3.2 User Verification In this step, we collected data about the current state of the smartphone, subdivided into three states: (1) The smartphone is not in use; (2) The smartphone is currently in use; (3) The smartphone has just been unlocked. The first pre-processing step is trimming the raw data. The data was collected constantly in all three states, with an average frequency of 50 Hz. However, we only need the readings that are directly before unlock event. As an example, Figure 2 (left) shows the magnetometer data collected 4 seconds before unlocking, but we are only interested in the data from the last second for two reasons: 1) the most significant movements occur in the last second before unlocking; 2) inference time has a huge impact on biometrics in mobile devices. Therefore, we used only the data located between the red and magenta lines on the figure. The second step was to subdivide the data into six clusters (Figure 2 (right)) (corresponding to the six data collection locations) for each user. Here, ergonomics and the human factor played a more significant role than expected. The data collected at the first and sixth locations differed significantly. In the first location, subjects were only getting used to the motion pattern, the time between unlocking events was longer, the motion itself was unsteady, and subjects were adjusting to the size of the phone and its weight. Some of the users have never used in-display fingerprint biometry before, which took Figure 2: _Left_: data collected by geomagnetic field sensor (_x-axis_ – time, _y-axis_ – raw field strength data in \(\mu\)T for X, Y, Z). _Right_: subdividing the data into six clusters. them some additional time to adjust to. 
By the end of data collection, most of these problematic aspects were gone. ## 5 Motion Patterns Identification ### CNN architecture At this stage, the network has a traditional architecture consisting of several layers with pointwise convolutions, cross-entropy loss, and two output classes: true and false unlock events. The best epoch was selected according to the best ROC-AUC metric obtained on the validation set. ## 6 User Verification ### Feature Generation At this stage, the pre-processed data comprises accelerometer, rotation sensor, magnetometer, and gyroscope readings. The newly generated features are described below. In smartphones, the accelerometer detects orientation changes and rotates the screen accordingly. In other words, the accelerometer readings use the screen as its frame. We converted accelerometer data to the Earth-fixed frame, which would determine the position of the smartphone in space. We rotate the readings from the gyroscope and magnetometer in a similar fashion. The linear acceleration sensor outputs a three-dimensional vector representing acceleration along each device axis, excluding gravity. It is possible to directly obtain data from the linear accelerometer. But during the data collection, we noticed unstable sensor behavior: spikes in the data collection frequency, possibly caused by repetitive movements. We decided to manually generate measurements from the linear accelerometer and use it as another function. Theoretically, this sensor outputs readings according to the following formula (acceleration data, excluding acceleration due to gravity): \[linear\;acc=acc-acc\;due\;to\;gravity\] The remaining generated features, which applies to almost all of the initial sensors (excluding initial raw data), consisted of: (1) Data rotated to the Earth-fixed frame; (2) Differences in measurements between the previous and the next reading (rotated and unrotated); (3) An integral of the sensor's measurements (rotated and unrotated). The total number of feature vectors was 22. ### Data augmentation Time series of 1.5 seconds were randomly cut into segments of 1 second. Then, randomly distributed noise was added to each segment. Random noise serves as an additional regularization technique to prevent overfitting. ### Dataset splitting Implicit smartphone authentication systems utilize two main model training strategies: _on-line_ and _off-line_(3). Off-line approaches serve to train generic models that can be used to authenticate the user immediately after the application is installed and the user signs up. Therefore, the procedure of data set splitting is extremely important. The training, validation, and test sets must not have any overlaps in terms of users. In real-life conditions, the model is extremely likely to be overfitting, if it was not fine-tuned to a particular user. This means that the model would demonstrate adequate performance for users from the initial training dataset and far worse performance for the others. Within the confines of the proposed authentication method, the overfitting risk is exacerbated by the fact that distinguishing traits of the users are not entirely physiological, unlike irises or faces. On-line approaches use a pre-trained baseline model that is further fine-tuned for each user individually. We employed this method in contrast with Android standard which sets metrics for off-line methods only. The on-line approach does not require splitting training, validation, and test datasets by users, so we subdivided datasets by attempts. 
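To make the attempt-level split concrete, a minimal sketch is given below; the split ratios and the helper name are placeholders rather than the exact ones used in our experiments. Each user contributes attempts to every subset, in contrast with a user-disjoint off-line split.

```python
import random
from typing import Dict, List, Tuple

def split_by_attempts(attempts: Dict[str, List[object]],
                      ratios: Tuple[float, float, float] = (0.7, 0.15, 0.15),
                      seed: int = 0):
    """Split every user's attempts into train/val/test subsets (on-line setting).
    Each user appears in all three subsets; the ratios are placeholders."""
    rng = random.Random(seed)
    train, val, test = {}, {}, {}
    for user, user_attempts in attempts.items():
        shuffled = user_attempts[:]
        rng.shuffle(shuffled)
        n = len(shuffled)
        n_train = int(ratios[0] * n)
        n_val = int(ratios[1] * n)
        train[user] = shuffled[:n_train]
        val[user] = shuffled[n_train:n_train + n_val]
        test[user] = shuffled[n_train + n_val:]
    return train, val, test
```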
According to the bootstrap method, we needed at least m = 188 attempts in the test dataset to theoretically estimate the metric defined in the Android biometric standard. This figure can be calculated by the formula: n*(n - 1)*m = 1.5M, where n - number of users, m - number of tries, 1.5M - required number of impostor comparisons. Below we will describe that, given the limitations of the trained model and the achieved performance, we can afford to use fewer users in the sample without a significant impact on the accuracy. ### CNN architecture The entire architecture of the verification stage is depicted in Figure 3. The input data (after feature generation) was augmented and concatenated twice. The first step is the feature extraction part, where separate CNN branches were created for each of the 22 generated data features (Figure 3a). In other words, at this point, we wanted to get an information-rich conversion of the augmented raw data into an embedding space that could maintain the necessary distance between samples and, at the same time, be resistant to noise in the data. The input tensor was split into 22 equal parts, one part per feature. All branches have an identical architecture, consisting of 1D convolutional layers. Further, the architecture splits into two. On one branch, the concatenated embeddings are processed via a classifier. The goal is to train the feature extractor in a way that allows its embeddings to contain information about the traits that are unique to each user. The target labels of this classifier are user IDs. Here, we employed cross-entropy loss as a loss function (Figure 3, _loss 1_) and defined it as: \[L_{\text{CE}}=-\sum_{i=1}^{n}t_{i}log(p_{i}),\] for n classes, where \(t_{i}\) - the genuine user ID and \(p_{i}\) - the Softmax function for the \(i^{\text{th}}\) class. The second branch begins with a Siamese(7) network head, required for user verification. The goal of the Siamese head is to obtain highly discriminative features that can distinguish impostor comparisons from genuine ones. To train the Siamese head, we used the Triplet Margin Loss(15) function (Figure 3, _loss 2_). Let us assume that we are given three inputs: the anchor a, the positive p, and the negative n. The Triplet Margin Loss is defined as: \[L_{\text{TM}}(a,p,n)=max\{d(a_{i},p_{i})-d(a_{i},n_{i})+\ margin,0\}\] where \[d(x_{i},y_{i})=\left\|\mathbf{x}_{i}-\mathbf{y}_{i}\right\|_{p}\] In the final step, embeddings from the Siamese head pass through a multi-layer perceptron. Usually, the cross-entropy function (the one we used in a previous classifier) is sensitive to adversarial examples and noise, outputting false negatives if the inputs differ from the initial data even just a little bit. IMUs demonstrate a significant hardware bias even in the case of repeated identical movements. To make the model more robust when fed augmented data, especially with added random noise, we applied a supervised contrastive pre-training method(6) to the classification task. The MLP head learns to map normalized embeddings of samples and their augmentations that belong to the same user closer to each other, and those belonging to any other user - farther. 
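To summarize the structure described above before the losses are combined, a minimal PyTorch-style sketch of the verification network is given below; the channel widths, kernel sizes, and embedding size are illustrative assumptions rather than the exact configuration, and treating every generated feature as a 3-axis time series is a simplification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VerificationNet(nn.Module):
    """Rough sketch of the verification network described above. Channel widths,
    kernel sizes, and the embedding size are illustrative assumptions."""

    def __init__(self, n_features: int = 22, n_classes: int = 85, emb_dim: int = 64):
        super().__init__()
        # One small, identical 1D-CNN branch per generated feature.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.PReLU(),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.PReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            for _ in range(n_features)
        ])
        branch_out = 32 * n_features
        # Branch 1: user-ID classifier trained with cross-entropy (loss 1);
        # n_classes corresponds to the n users of the baseline training subset.
        self.classifier = nn.Linear(branch_out, n_classes)
        # Branch 2: Siamese projection head trained with the triplet margin loss (loss 2).
        self.siamese = nn.Sequential(nn.Linear(branch_out, emb_dim), nn.PReLU(),
                                     nn.Linear(emb_dim, emb_dim))
        # MLP head trained with the supervised contrastive objective
        # on normalized embeddings.
        self.mlp = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.PReLU(),
                                 nn.Linear(emb_dim, emb_dim))

    def forward(self, x: torch.Tensor):
        # x: (batch, n_features * 3, time); one 3-axis chunk per generated feature.
        chunks = torch.chunk(x, len(self.branches), dim=1)
        feats = [branch(c).flatten(1) for branch, c in zip(self.branches, chunks)]
        h = torch.cat(feats, dim=1)               # concatenated embeddings
        logits = self.classifier(h)               # for cross-entropy (loss 1)
        z = self.siamese(h)                       # for the triplet margin loss (loss 2)
        proj = F.normalize(self.mlp(z), dim=1)    # for the supervised contrastive loss
        return logits, z, proj
```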
Total loss is calculated in the following way: \[L_{total}=L_{\text{CE}}+\alpha_{\text{TM}}*L_{\text{TM}}+L_{\text{SC}}\] Figure 3: Block scheme of verification stage, where a) twice augmented input; b) 22 branches for each of the generated features; \(L_{\text{CE}}\) - Cross-Entropy loss; \(L_{\text{TM}}\) - Triplet Margin Loss, \(L_{\text{SC}}\) - Supervised Contrastive loss. where \(\alpha_{\mathrm{T}M}\) - weighting coefficient for Triplet Margin loss. ### Training Procedure We propose the following training procedure and results evaluation method, consisting of four steps: 1) baseline model training; 2) user-specific fine-tuning for the trained model; 3) choosing the best epoch based on an additional validation subset; and 4) individual testing for each user from the final test subset. In steps 1-3, we took only 90 out of 101 users, the rest (11) were used in the final testing. To train the baseline model, we used part of 90 users - i.e., \(n\) users. These \(n\) users were split into \(\mathrm{train}_{\mathrm{base}}/\mathrm{val}_{\mathrm{base}}/\mathrm{test}_{ \mathrm{base}}\) - \(\mathrm{subset}_{\mathrm{base}}\) by attempts (following the on-line approach). _(90 - n)_ users were used in _additional validation_ subset \(\mathrm{val}_{\mathrm{add}}\). We tried different split ratios for \(\mathrm{subset}_{\mathrm{base}}\) and \(\mathrm{val}_{\mathrm{add}}\): from n = 60 to n = 85 with a step of 5. At this stage, the classifier in CNN architecture has n classes. In the next stage, we fine-tune the trained baseline model for each of the 11 remaining users (\(\mathrm{test}_{\mathrm{final}}\)). To do this, we a) use \(\mathrm{subset}_{\mathrm{base}}\) (n users) and 11 users from \(\mathrm{test}_{\mathrm{final}}\); b) freeze feature extractor; c) reduce the learning rate and number of epochs; d) change number of classes in the classifier from n to 2, where the first class is user from \(\mathrm{subset}_{\mathrm{base}}\) and the second class - certain user from \(\mathrm{test}_{\mathrm{final}}\). After the model is fine-tuned but before the final metrics are evaluated, we need to choose the suitable epoch correctly. For this, we used \(\mathrm{val}_{\mathrm{add}}\), mentioned above. Until this step, the _90 - n_ users from \(\mathrm{val}_{\mathrm{add}}\) have not been used yet. As in the previous step, the classifier also has 2 classes, but in this case the first class is any user from \(\mathrm{val}_{\mathrm{add}}\), and the second class - certain user from \(\mathrm{test}_{\mathrm{final}}\). By testing 11 users, we selected an epoch with the best \(\mathrm{FAR}_{\mathrm{val}}(@\mathrm{TAR}\)=90%) for each user. The final test represents a real-world simulation of this approach: a pre-trained baseline model is fine-tuned to the device owner's data. Only the subset \(\mathrm{test}_{\mathrm{final}}\) was used in the final testing. The first classifier also had 2 classes, where the first class is the current user and the second class - any other user from \(\mathrm{test}_{\mathrm{final}}\). Here we used a bootstrap-like method, same as when training the baseline model. As usual, each user had 300 attempts. For testing, we used 90 of them. In addition to these 90 attempts by the current user, we also randomly chose 90 attempts by the remaining 10 users and repeated the point estimation of FAR 5000 times. ## 7 Results Table 1 demonstrates Motion Patterns Identification accuracy for each user-device pair. Here, N/A denotes a lack of false unlock events or an insufficient amount thereof. 
The model predicted unlock events with fairly high accuracy. For a real-world application, these events must be continuously recorded and stored in memory. However, memory remains a limited resource for biometric algorithms, meaning that further studies in this area could focus on increasing the accuracy, reducing the size of the neural network, and optimizing data storage. Table 2 shows the training results of the baseline models for different splits of \(\mathrm{subset}_{\mathrm{base}}\) and \(\mathrm{val}_{\mathrm{add}}\). In all experiments, we randomly sampled the attempts and trained each model 5 times. All metrics are calculated for \(\mathrm{val}_{\mathrm{base}}\) and \(\mathrm{test}_{\mathrm{base}}\). All FAR values were obtained at fixed TAR = 90%. \(\mathrm{FAR}_{\mathrm{theor}}\) denotes the theoretically achievable FAR at each split. The second lines in FAR columns are FAR metrics presented in \(1\) / \(k\) form, for easier comparison with the Android biometrics standard. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{6}{c}{Accuracy,\%} \\ \cline{2-7} Device/User & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 85.5 \(\pm\) 1.3 & 83.0 \(\pm\) 1.2 & 79.4 \(\pm\) 1.9 & N/A & 84.4 \(\pm\) 0.7 & 84.3 \(\pm\) 1.2 \\ 2 & 88.7 \(\pm\) 0.4 & 81.4 \(\pm\) 1.5 & 82.1 \(\pm\) 0.6 & 79.0 \(\pm\) 2.0 & 83.7 \(\pm\) 2.0 & 79.6 \(\pm\) 1.2 \\ 3 & 91.0 \(\pm\) 0.5 & 79.1 \(\pm\) 1.1 & 73.6 \(\pm\) 0.9 & 81.1 \(\pm\) 4.0 & 81.7 \(\pm\) 1.1 & 80.6 \(\pm\) 1.2 \\ 4 & 82.2 \(\pm\) 0.8 & 83.6 \(\pm\) 0.2 & N/A & 80.0 \(\pm\) 2.0 & 87.6 \(\pm\) 1.2 & N/A \\ 5 & 87.5 \(\pm\) 1.0 & 79.1 \(\pm\) 0.9 & 80.0 \(\pm\) 0.7 & 80.9 \(\pm\) 1.5 & 84.7 \(\pm\) 1.5 & 82.2 \(\pm\) 2.0 \\ 6 & 88.4 \(\pm\) 0.5 & 77.6 \(\pm\) 1.5 & 82.3 \(\pm\) 0.9 & 82.0 \(\pm\) 1.0 & 78.7 \(\pm\) 1.3 & 79.1 \(\pm\) 2.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Motion Patterns Identification performance for each user-device pair The more users go through training, the better the validation results, thanks to the increased number of genuine-impostor pairs. However, test results are less stable. The current architecture needs more generalization. This could be achieved with a smaller model capacity and more data. In any case, the results for the baseline model are already comparable to the current standard. All trained baseline models were used for testing each user from test\({}_{\text{final}}\). Results are shown in Table 3. The results obtained are highly dependent on the users themselves. This is most likely caused by user-specific traits in hand motions. Two users (0 and 10) had motions that differed significantly (but visually imperceptible) from those of other people within the same scenario. There were also the opposite cases (3 and 7), where the model may have lacked features for better discrimination. These results are also highly dependent on the quality of the pre-trained model. For example, results in splits 75 and 80 for User 1, as well as in splits 70 and 80 for User 2, had significant differences in metrics. The adaptive selection of hyperparameters and splits for each unique user is worth paying attention to. Overall, the proposed user authentication approach shows promising results. As a starting point, this methodology can be used in combination with existing biometric systems to maintain optimal performance in various corner cases. The system can be extremely appealing because it does not need additional hardware. 
However, the proposed architectures and training methods have a number of drawbacks as far as mobile devices are concerned. As mentioned in the Section 1, biometric systems \begin{table} \begin{tabular}{l l l l l l} \hline \hline & \multicolumn{5}{c}{Metrics of baseline model,\%} \\ \cline{2-6} split & Acc\({}_{\text{val}}\) & Acc\({}_{\text{test}}\) & FAR\({}_{\text{val}}\)(@TAR=90\%) & FAR\({}_{\text{test}}\)(@TAR=90\%) & FAR\({}_{\text{theor}}\) \\ \hline 60 & \(97.9\pm 0.2\) & \(98.1\pm 0.3\) & \begin{tabular}{l} (\(1.0\pm 0.4\))*\(10^{-2}\) \\ \(1\) / \(10000\) \\ \end{tabular} & \begin{tabular}{l} (\(2.0\pm 1.1\))*\(10^{-2}\) \\ \(1\) / \(5000\) \\ \end{tabular} & \begin{tabular}{l} 0.94*\(10^{-2}\) \\ \(1\) / \(10620\) \\ \end{tabular} \\ \hline 65 & \(98.2\pm 0.3\) & \(97.86\pm 0.16\) & \begin{tabular}{l} (\(0.5\pm 0.4\))*\(10^{-2}\) \\ \(1\) / \(20000\) \\ \end{tabular} & \begin{tabular}{l} (\(2.4\pm 0.5\))*\(10^{-2}\) \\ \(1\) / \(4167\) \\ \end{tabular} & \begin{tabular}{l} 0.8*\(10^{-2}\) \\ \(1\) / \(12480\) \\ \end{tabular} \\ \hline 70 & \(97.9\pm 0.3\) & \(97.8\pm 0.2\) & \begin{tabular}{l} (\(1.0\pm 0.9\))*\(10^{-2}\) \\ \(1\) / \(10000\) \\ \end{tabular} & \begin{tabular}{l} (\(2.3\pm 0.2\))*\(10^{-2}\) \\ \(1\) / \(4348\) \\ \end{tabular} & \begin{tabular}{l} 0.69*\(10^{-2}\) \\ \(1\) / \(14490\) \\ \end{tabular} \\ \hline 75 & \(98.1\pm 0.3\) & \(98.15\pm 0.15\) & \begin{tabular}{l} (\(0.8\pm 0.4\))*\(10^{-2}\) \\ \(1\) / \(12500\) \\ \end{tabular} & \begin{tabular}{l} (\(1.4\pm 0.6\))*\(10^{-2}\) \\ \(1\) / \(7143\) \\ \end{tabular} & \begin{tabular}{l} 0.6*\(10^{-2}\) \\ \(1\) / \(16650\) \\ \end{tabular} \\ \hline 80 & \(98.12\pm 0.16\) & \(98.07\pm 0.18\) & \begin{tabular}{l} (\(0.5\pm 0.3\))*\(10^{-2}\) \\ \(1\) / \(20000\) \\ \end{tabular} & \begin{tabular}{l} (\(1.4\pm 0.6\))*\(10^{-2}\) \\ \(1\) / \(7143\) \\ \end{tabular} & \begin{tabular}{l} 0.53*\(10^{-2}\) \\ \(1\) / \(18960\) \\ \end{tabular} \\ \hline 85 & \(97.9\pm 0.4\) & \(98.10\pm 0.11\) & \begin{tabular}{l} (\(0.9\pm 0.6\))*\(10^{-2}\) \\ \(1\) / \(11111\) \\ \end{tabular} & \begin{tabular}{l} (\(1.0\pm 0.5\))*\(10^{-2}\) \\ \(1\) / \(10000\) \\ \end{tabular} & \begin{tabular}{l} 0.47*\(10^{-2}\) \\ \(1\) / \(21420\) \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 2: Validation and test performance for different splits \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{5}{c}{For each user id: FAR(@TAR=90),\%} \\ \cline{2-6} user id & 60 & 65 & 70 & 75 & 80 & 85 \\ \hline 0 & 0 & 0 & 0 & \(0.6\pm 0.4\) & 0 & \(1.0\pm 0.4\) \\ 1 & \(4.2\pm 2.0\) & \(6\pm 3\) & \(4\pm 3\) & \(2.0\pm 1.4\) & \(6\pm 5\) & \(6\pm 5\) \\ 2 & \(9\pm 3\) & \(12\pm 11\) & \(17\pm 9\) & \(10\pm 6\) & \(5\pm 4\) & \(8\pm 6\) \\ 3 & \(12\pm 2\) & \(17\pm 4\) & \(10\pm 2\) & \(12\pm 4\) & \(13\pm 7\) & \(14\pm 5\) \\ 4 & \(2.0\pm 0.9\) & \(1.0\pm 0.4\) & \(1.9\pm 1.6\) & \(5\pm 3\) & \(1.0\pm 0.4\) & \(8\pm 4\) \\ 5 & \(12\pm 3\) & \(14\pm 5\) & \(11\pm 5\) & \(11\pm 6\) & \(10\pm 3\) & \(10\pm 4\) \\ 6 & \(5\pm 3\) & \(3\pm 2\) & \(4\pm 3\) & \(2.8\pm 1.8\) & \(4\pm 3\) & \(4.7\pm 1.8\) \\ 7 & \(24\pm 4\) & \(23\pm 5\) & \(21\pm 5\) & \(22\pm 6\) & \(22\pm 4\) & \(22\pm 2\) \\ 8 & \(6\pm 2\) & \(8\pm 3\) & \(6\pm 3\) & \(5\pm 3\) & \(5\pm 2\) & \(4.9\pm 1.5\) \\ 9 & \(4\pm 3\) & \(2.0\pm 1.0\) & \(2.4\pm 1.4\) & \(4\pm 3\) & \(1.6\pm 0.6\) & \(2.2\pm 0.8\) \\ 10 & \(1.9\pm 1.1\) & \(0.5\pm 0.2\) & \(0.6\pm 0.4\) & \(0.6\pm 0.4\) & \(0\) & \(0.5\pm 0.2\) \\ \hline \hline \end{tabular} 
\end{table} Table 3: Test performance for user-split pairs
need to be small and run quickly because of hardware limitations and usability requirements. The proposed architectures are quite large, and the methodology requires customization for each user, which presupposes a built-in backpropagation algorithm. Further research is needed to enhance performance. Although we compared against the industry-standard off-line evaluation protocol, our results with the on-line approach are promising. We strongly believe that new biometric systems will emerge for new use cases and hardware, which is why we share our datasets and code to enable further research.
2308.14461
Spatio-Temporal Analysis of Patient-Derived Organoid Videos Using Deep Learning for the Prediction of Drug Efficacy
Over the last ten years, Patient-Derived Organoids (PDOs) emerged as the most reliable technology to generate ex-vivo tumor avatars. PDOs retain the main characteristics of their original tumor, making them a system of choice for pre-clinical and clinical studies. In particular, PDOs are attracting interest in the field of Functional Precision Medicine (FPM), which is based upon an ex-vivo drug test in which living tumor cells (such as PDOs) from a specific patient are exposed to a panel of anti-cancer drugs. Currently, the Adenosine Triphosphate (ATP) based cell viability assay is the gold standard test to assess the sensitivity of PDOs to drugs. The readout is measured at the end of the assay from a global PDO population and therefore does not capture single PDO responses and does not provide time resolution of drug effect. To this end, in this study, we explore for the first time the use of powerful large foundation models for the automatic processing of PDO data. In particular, we propose a novel imaging-based high-throughput screening method to assess real-time drug efficacy from a time-lapse microscopy video of PDOs. The recently proposed SAM algorithm for segmentation and DINOv2 model are adapted in a comprehensive pipeline for processing PDO microscopy frames. Moreover, an attention mechanism is proposed for fusing temporal and spatial features in a multiple instance learning setting to predict ATP. We report better results than other non-time-resolved methods, indicating that the temporality of data is an important factor for the prediction of ATP. Extensive ablations shed light on optimizing the experimental setting and automating the prediction both in real-time and for forecasting.
Leo Fillioux, Emilie Gontran, Jérôme Cartry, Jacques RR Mathieu, Sabrina Bedja, Alice Boilève, Paul-Henry Cournède, Fanny Jaulin, Stergios Christodoulidis, Maria Vakalopoulou
2023-08-28T09:58:34Z
http://arxiv.org/abs/2308.14461v1
Spatio-Temporal Analysis of Patient-Derived Organoid Videos Using Deep Learning for the Prediction of Drug Efficacy ###### Abstract Over the last ten years, Patient-Derived Organoids (PDOs) emerged as the most reliable technology to generate ex-vivo tumor avatars. PDOs retain the main characteristics of their original tumor, making them a system of choice for pre-clinical and clinical studies. In particular, PDOs are attracting interest in the field of Functional Precision Medicine (FPM), which is based upon an ex-vivo drug test in which living tumor cells (such as PDOs) from a specific patient are exposed to a panel of anti-cancer drugs. Currently, the Adenosine Triphosphate (ATP) based cell viability assay is the gold standard test to assess the sensitivity of PDOs to drugs. The readout is measured at the end of the assay from a global PDO population and therefore does not capture single PDO responses and does not provide time resolution of drug effect. To this end, in this study, we explore for the first time the use of powerful large foundation models for the automatic processing of PDO data. In particular, we propose a novel imaging-based high-throughput screening method to assess real-time drug efficacy from a time-lapse microscopy video of PDOs. The recently proposed SAM algorithm for segmentation and DINov2 model are adapted in a comprehensive pipeline for processing PDO microscopy frames. Moreover, an attention mechanism is proposed for fusing temporal and spatial features in a multiple instance learning setting to predict ATP. We report better results than other non-time-resolved methods, indicating that the temporality of data is an important factor for the prediction of ATP. Extensive ablations shed light on optimizing the experimental setting and automating the prediction both in real-time and for forecasting. ## 1 Introduction Precision medicine aims to optimize the choice of drug given the characteristics of the patient, so as to optimize certain aspects such as the efficacy of the treatment or quality of life of the patient. Although doing this on a case-by-case basis by a clinician seems impractical, artificial intelligence-driven tools help guide this approach. In this objective, FPM [23] bases this optimization on tests performed on live patient cells. Patient-derived organoids (PDOs) have gained great interest over the last few years as they represent minimalistic models to mimic essential features from the tissue they originate from. In the context of cancer therapy, drug efficiency can be limited due to the development of resistance in patients as well as other evolutionary changes in the tumors over time. PDOs represent a good testbed for physicians, researchers, and patients to assess personalized drug efficacy, based on tumor-specific patient characteristics. The gold standard method to assess drug efficacy on cells relies on Adenosine Triphosphate (ATP), an energy molecule released in active cells through metabolic reactions. ATP is a biomarker for evaluating the number of viable cells. The quantity of released ATP measured by luminescence is proportional to the number of living cells. Thus, ATP quantity assessed by luminescence counts serves as a readout to evaluate drug efficacy with the estimation of remaining living cells in the experimental sample. This test is easily implementable in bench labs and reliable for assessing the global cell population response to drug exposure [37]. 
However, since it causes cell lysis, the ATP test is a destructive assay, that does not allow to assess real-time organoid drug response, and post-ATP observations to evaluate long-term viability changes and drug resistance. The emergence of large foundation models in various fields of machine learning has allowed for novel solutions in a variety of downstream tasks. Notably, SAM (Segment Anything Model) [22] or SEEM (Segment Everything Everywhere All at Once) [50] have facilitated segmentation by providing a general model for segmentation from various types of prompts, which is extremely valuable for tasks for which little to no annotations can be found. Similarly, self-supervised [11, 4, 14, 9] or unsupervised models such as DINov2 [34] provide high-quality descriptors for data and facilitate the training for downstream tasks with relatively small data and task-specific samples. This training setting allows the extracted features to be task-agnostic and, therefore, to adapt more easily to new tasks. Building on these recent advances, in this paper, we propose a new high-throughput screening method for the analysis of PDOs testing multiple drugs and high-quality videos. Indeed, processing of this data is usually performed in a manual and time-consuming setting. In this paper, our contributions are the following. To our knowledge, we present the first fully automatic method for processing high-quality videos of PDOs, conducting a spatio-temporal analysis for the prediction of ATP. We propose an efficient, automatic, and accurate prompt engineering paradigm taking into account the temporal characteristics of PDOs for using SAM without the need for additional training. Finally, we explore powerful recent foundation models for the spatio-temporal representation of PDOs for the first time coupling them with time sequence modeling and multiple instance learning. An extensive experimental analysis was performed to identify the best representations for PDOs as well as the most informative time frames, which we used for the accurate prediction of ATP. This opens the possibility to integrate any other clinical endpoint, to match PDO drug sensitivity and patient clinical response. ## 2 Related work **Foundation Models in Computer Vision.** Foundation models are precious resources and stimulate research by both providing a strong solution for a specific task and by proposing a novel innovative approach. The authors of the "Florence" model [46] introduced a method for learning joint visual-textual features that can be adapted to a multitude of joint tasks. The CLIP model [38] uses a contrastive learning approach in order to learn image embeddings, guided by an associated textual description of the image, which can transfer zero-shot to a wide variety of tasks. This approach has led to many extensions [25, 32, 24]. DINOv2 [34] is a deep learning model trained in a self-supervised manner that leverages more stable training at a bigger scale for better taking advantage of large datasets. The model is trained on a dataset of 142M images. It is based on the ViT architecture [13] and is made available in different sizes. Very recently, foundation models for image segmentation such as SegGPT [43], SEEM [50], and SAM [22] have surfaced. SAM provides a strong zero-shot transfer approach by combining a powerful image encoder with a prompt encoder which can adapt to multimodal prompts. 
Its convincing adaptability led to many adaptations for videos [45], lightweight versions [49], or medical applications [44, 26]. While these models show impressive results and generalizability, their adaptations to medical applications often show limitations. For example, recent generative models such as GLIDE [33] or DALL-E [39] show impressive results on natural images but struggle to generate realistic medical images without prior fine-tuning [21, 2]. **Time Sequence Modeling.** Modeling the temporal aspect of data can be achieved either on extracted features of the frames or directly in an end-to-end fashion on the videos. Once features have been extracted from frames of the videos, standard sequence models such as LSTMs [18] or Transformers [42, 47] can be applied for a variety of tasks. They aim to capture dependencies between elements of a sequence, either in the form of recurrent neural networks or self-attention mechanisms. State space modeling is gaining attraction in sequence modeling, especially for long sequences [15], and more specifically for modeling time series [48]. For end-to-end video modeling, Transformer adaptations are very popular such as the Video Swin Transformer [27], ViViT [3], or the TimeSformer [5]. We are also beginning to see adaptations of large language models (LLMs) for video understanding tasks [10]. However, even if these models are available, their application to PDOs has not yet been explored and investigated, especially in a low-availability data regime. **Analysis of PDO Microscopy Images.** With the emergence of PDOs as a key technology for FPM in the last few years, machine learning methods to analyze this data have followed. Earlier studies [6, 30] looked for methods for tracking and analyzing the dynamics of PDOs through time. Authors of "D-CryptO" [1] proposed to classify types of organoids based on their morphology. Later research sought to use organoid analysis pipelines for other downstream tasks, such as the prediction of kidney differentiation [35] or the prediction of a biomarker of Huntington's disease [31]. Recently [7] have developed a pipeline for predicting ATP from microscopy images at single time points in a multiple instance learning setting. However, all these methods focus on analyzing single representations of organoids, losing a lot of information that arises from their temporal dynamics for different drug treatments. Moreover, they are mainly based on classical processing techniques, such as the extraction of predefined visual features, the extraction of ResNet features, or segmentation through biological staining. ## 3 Methods In this section, we will start by describing the specificities of the dataset and the preprocessing steps, and then proceed to an in-depth presentation of the proposed method. Specifically, we will present the methodology for the segmentation of the cavities and organoids by employing SAM [22], then we will present the details of the feature extraction from the organoid regions using DINOv2 [34]. The model for the final prediction of the ATP given the features from the cavities under the multiple instance learning framework will then be presented. Figure 1 illustrates the proposed pipeline. ### Data To capture real-time information about drug efficacy on patient organoids, bright-field imaging techniques represent a good non-invasive alternative to the ATP bioluminescence test. 
However, having single-organoid resolution may be a challenging task when working with the standard experimental setup where organoids are embedded in hydrogel droplets, which results in heterogenous organoid distribution at different depths and subsequently heterogeneous organoid sizes, making the drug efficacy assessment less accurate. To overcome these issues, we developed a high-throughput experimental and imaging pipeline. It consists in using high-throughput cell culture systems with 96-well plates containing 500 \(\mu\)m diameter cavities. Each cavity contains one organoid, while the setting allows for the parallel follow-up of individual organoids with a homogeneous depth distribution. Testing multiple drugs in parallel thus becomes easier with one image containing the information of multiple organoids under the same treatment. One type of drug at a given concentration is applied to each well, and the ATP can only be measured on the well level. Figure 2 shows one well containing multiple cavities together with their automatic segmentation. A total of 116 wells are imaged, from which we detect 8241 cavities, and therefore 8241 organoids. A cavity is imaged in a set of frames \(T=\{T_{1},...,T_{f}\},T_{i}\in\mathbb{R}^{w\times h}\), where \(w\) and \(h\) are the frame dimensions, and \(f\) the number of frames in each timelapse. In this study, we developed a novel pipeline to segment each cavity \(S=\{S_{1},...,S_{f}\},S_{i}\in\mathbb{R}^{w\times h}\) and using \(S\) and \(T\), we extract interesting features from each organoid. Finally, we model a well by a set of cavity features \(\mathcal{C}=\{c_{1},\ldots,c_{n}\}\in\mathbb{R}^{n\times k\times f}\), where \(n\) is the number of cavities in the well, and \(k\) the dimensionality of the feature space. The final ATP value is defined as \(y\in\mathbb{R}\). The patient-derived organoids (from now on called organoids, for simplicity) are extracted from colorectal cancer patients. In this analysis, we do not consider the concentration of the drugs and only observe their effects indirectly with the ATP. Drugs are introduced one day after the organoids have been formed in the cavities. The wells are imaged every 30 minutes for 100 hours, resulting in \(f=200\) frames for each well. Wells are digitized using an Agilent Lionheart FX digital microscope. Note that some experiments could take slightly longer than 200 frames, for those cases, only the last 200 frames of the experiments were retained, in order to keep the measurement of the ATP at the last frame. ### Preprocessing. Each scan consists of acquiring quarters of wells at three different depth levels, resulting in 12 images for each time point. A z-projection based on Sobel filters [20] is used to project the images into the same plane, before stitching all four corners into a single image. Artifacts appear Figure 1: Illustration of our proposed pipeline. (a) The overall pipeline taking as input the whole well timelapse and outputting the predicted ATP \(\hat{y}\). (b) Organoid segmentation based on automatic generation of prompts for SAM [22]. (c) Feature extraction given input frames and associated segmentation maps focusing in the region of interest and using DINov2 model [34]. through time (e.g. evaporation of the liquid impacting the contrast), which may impact the segmentation pipeline. To account for this, we normalize the contrast of the timelapse through time. Finally, all frames of a timelapse are then co-registered using SIFT-based features [29]. 
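The actual pipeline runs in ImageJ, as noted just below; purely as an illustration of the two less standard steps, a rough NumPy/OpenCV analogue of the Sobel-based z-projection and the SIFT-based co-registration could look like the following sketch (parameter choices and the uint8 conversion are simplifying assumptions).

```python
import cv2
import numpy as np

def focus_projection(stack):
    """Sobel-based all-in-focus projection: for every pixel, keep the value from
    the z-slice with the strongest local gradient (a rough stand-in for the
    ImageJ projection used in the actual pipeline)."""
    stack = np.asarray(stack, dtype=np.float32)           # (z, H, W)
    sharpness = np.stack([np.abs(cv2.Sobel(s, cv2.CV_32F, 1, 0)) +
                          np.abs(cv2.Sobel(s, cv2.CV_32F, 0, 1)) for s in stack])
    best = np.argmax(sharpness, axis=0)                   # (H, W) index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]

def register_to_reference(frame, reference):
    """Align `frame` to `reference` using SIFT keypoints and a RANSAC homography."""
    sift = cv2.SIFT_create()
    k_ref, d_ref = sift.detectAndCompute(reference.astype(np.uint8), None)
    k_frm, d_frm = sift.detectAndCompute(frame.astype(np.uint8), None)
    matches = sorted(cv2.BFMatcher().match(d_frm, d_ref), key=lambda m: m.distance)[:200]
    src = np.float32([k_frm[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return cv2.warpPerspective(frame, H, reference.shape[::-1])
```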
All preprocessing is performed using ImageJ [40]. Figures 1(a)-1(d) show all steps of the preprocessing from the 12 input images per frame to the normalized frame.
### Automatic Segmentation of Regions of Interest
In this study, we propose and develop an automatic pipeline based on SAM [22] for the segmentation of regions of interest in the videos of the organoids. For accurate segmentation, various characteristics of these regions are used, and different prompt engineering strategies have been developed to handle the nature of the data and their temporal information. **Cavity Segmentation.** The objective is to segment the individual cavities contained in the well, in order to represent the well as a set of cavities. We assume that the detection of the cavities only needs to be run on a single frame, as the frames of the video have been co-registered. The well is identified as the mask with the biggest surface area that contains the center pixel of the frame, as it is the biggest object in the frame. Given the candidate masks \(\mathcal{M}=\{m_{1},\dots,m_{n}\}\), we set \(m_{\text{well}}=m_{\text{argmax}_{i}\{\text{area}(m_{i})\}}\). The cavities are defined as the other regions detected by SAM which have a circularity above a certain threshold and which are included in the detected well. For each point \(b_{i}\) on the boundary \(B\) of a detected region \(m\), we define \(\mathcal{P}=\{\max_{b_{j}\in B}\text{dist}(b_{i},b_{j})\mid b_{i}\in B\}\), which contains, for each point on the contour, the distance to the point on the contour that is furthest away from it. The ratio \(\frac{\min\mathcal{P}}{\max\mathcal{P}}\) defines the circularity. On a perfect circle, all values in \(\mathcal{P}\) are equal to the circle's diameter, and the circularity is therefore equal to \(1\). Once the cavities have been detected on a single frame, a crop around the cavity is extracted for all frames, giving a few dozen cavity timelapses per well. Figure 1(e) shows an example of the cavity segmentation masks on a normalized frame. **Organoid Segmentation.** Segmentation of organoids is performed on the extracted videos of the cavities by designing the proper prompts for SAM [22]. The first prompt is generated for the last frame of the timelapse \(T_{f}\) (as it is the frame where the organoid is the largest, and it is, therefore, more likely to find a point inside the region of interest). Canny edge detection [8] is used to extract rough contours from the last frame, after having subtracted the average frame over time in order to remove the background. These rough contours are then filtered using a series of binary morphological operators, before extracting a centroid, which is used as a positive point prompt for SAM. Figure 3 shows an example of these steps for the last frame. Once the mask for the last frame has been generated, the prompts are generated backward from the last frame to the first frame. The center of the organoid mask is extracted from the prediction, and an exponentially weighted average of the prompts from frames \(T_{t+1}\) to \(T_{t+10}\) is used to generate the prompt for frame \(T_{t}\). Multimask output is used to generate the top 3 masks, and the mask which has the highest Dice score with the mask generated for the previous frame (\(T_{t+1}\)) is chosen. Postprocessing is performed on the mask by selecting only the detected region with the highest surface area. Algorithm 1 represents the whole organoid segmentation pipeline.
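A compact sketch of this backward prompt-propagation loop is given below. The SAM call is abstracted behind an assumed `predict_masks(frame, point)` wrapper that returns candidate boolean masks; the choice of the largest candidate for the last frame, the averaging window, and the decay constant are illustrative assumptions.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)

def mask_center(mask: np.ndarray) -> np.ndarray:
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def propagate_backward(frames, last_prompt, predict_masks, window=10, decay=0.7):
    """Segment the organoid in every frame, moving backward in time.
    `predict_masks(frame, point)` is an assumed wrapper around SAM's
    point-prompt interface returning a list of candidate boolean masks."""
    f = len(frames)
    masks, prompts = [None] * f, [None] * f
    prompts[f - 1] = np.asarray(last_prompt, dtype=float)
    masks[f - 1] = max(predict_masks(frames[f - 1], prompts[f - 1]), key=np.sum)
    for t in range(f - 2, -1, -1):
        # Prompt for frame t: exponentially weighted average of the mask
        # centers obtained for frames t+1 ... t+window.
        recent = [mask_center(masks[s]) for s in range(t + 1, min(t + 1 + window, f))]
        weights = np.array([decay ** i for i in range(len(recent))])
        prompts[t] = (weights[:, None] * np.array(recent)).sum(0) / weights.sum()
        # Among the candidate masks, keep the one most consistent with frame t+1.
        candidates = predict_masks(frames[t], prompts[t])
        masks[t] = max(candidates, key=lambda m: dice(m, masks[t + 1]))
        # (Keeping only the largest connected component of the chosen mask is omitted here.)
    return masks
```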
Figure 3: Example of the steps used in the generation of the prompt for the segmentation of the last frame of a cavity timelapse. (a) Last frame of the timelapse. (b) Mean frame across time. (c) Difference between the last frame and the mean frame. (d) Result of Canny edge detection on (c). (e) Generated prompt for the frame. (f) Predicted mask. Figure 2: Example of the steps needed for preprocessing and cavity segmentation. (a) All images needed to construct one frame (4 corners at 3 different z-levels each). (b) Projected corners. (c) Stitched frame. (d) Local contrast normalization applied to the stitched frame. (e) Cavity segmentation masks. ### Feature Extraction Features are extracted from each frame of the cavity using the DINOv2 [34] model, which produces task-agnostic visual features from images. A lighter version (86M parameters) of the model is used, which produces \(k\)-dimensional feature vectors. Features are extracted per frame, using a crop of the cavity around the organoid, and by masking everything outside of the regions of interest, giving \(c_{i}=\text{DINOv2}(T_{i}\odot S_{i})\). The model generates feature vectors of dimension \(k=768\) for each time frame. ### ATP Prediction Having access to features for each individual cavity but only having the ATP measure on the well level, the multiple instance learning setting seems to be the best approach to the problem. Each set of cavities \(\mathcal{C}\) is associated with an ATP measure \(y\). The ATP prediction model \(\mathcal{M}\) should map an input \(\mathcal{C}\in\mathbb{R}^{n\times k\times f}\) to an output \(\hat{y}\in\mathbb{R}\). The aggregation of the cavity representations on the well level is done by mean pooling in order to obtain a well-level representation \(\text{{mean}}(\mathcal{C})\in\mathbb{R}^{k\times f}\). Each time frame may have a different impact on the prediction of the ATP. In fact, we expect later time frames to have more impact than earlier time frames. We give the model the freedom to learn how much weight to put to each time frame using a normalized weight vector \(\mathbf{w}_{t}\in\mathbb{R}^{f}\) that is used in a weighted average across time frames. This ensures that all cavities across all wells share the same importance for a given time frame. A feature-wise weight vector \(\mathbf{w}_{k}\in\mathbb{R}^{k}\) is used in order to learn the relative importance of each feature that is shared across all wells. The last element of the model is a multilayer perceptron (MLP) which is simply composed of four linear layers, with PReLU [17] activation function. \[\begin{split}\mathcal{M}\colon\mathbb{R}^{n\times k\times f}& \rightarrow\mathbb{R}\\ \mathcal{C}&\mapsto\text{{MLP}}(\text{{mean}}( \mathcal{C})\cdot\frac{\mathbf{w}_{t}}{\|\mathbf{w}_{t}\|}\odot\mathbf{w}_{k})\end{split} \tag{1}\] The loss function used for training is composed of two elements, a relative \(L_{1}\) loss, to directly optimize for the mean absolute percentage error (MAPE), and a \(L_{1}\) loss, which has been normalized with the maximum ATP value in the training set to have a comparable range to the relative \(L_{1}\) loss. 
\[\mathcal{L}_{\textit{rel}}(y,\hat{y})=\frac{|y-\hat{y}|}{y} \tag{2}\]
\[\mathcal{L}_{\textit{norm}}(y,\hat{y},y_{\textit{max}})=\frac{|y-\hat{y}|}{y_{\textit{max}}} \tag{3}\]
\[\mathcal{L}(y,\hat{y},y_{\textit{max}})=\alpha\mathcal{L}_{\textit{rel}}+(1-\alpha)\mathcal{L}_{\textit{norm}} \tag{4}\]
where \(y\) is the ATP ground truth, \(\hat{y}\) the output of our model, \(y_{\textit{max}}\) the maximum value of ATP in the training set, and \(\alpha\) the weight given to \(\mathcal{L}_{\textit{rel}}\). A relative \(L_{1}\) loss is used because of the varying range that the ATP can take within experiments. The normalized \(L_{1}\) loss is used to ensure that the model still has an acceptable performance on higher values of ATP.
### Implementation Details
Across all experiments, training was performed under the same scheme. Cross-validation is performed over 4 splits, on the well level, resulting in training sets of size 87 and validation sets of size 29. The reported results are the average over all folds of the validation sets. All training runs were performed using the PyTorch deep learning library [36] in Python. The AdamW [28] optimizer was used with a learning rate of \(1\cdot 10^{-3}\), weight decay of \(1\cdot 10^{-1}\), and a batch size of 8. The model was trained over 2000 epochs, and early stopping with a patience of 200 epochs was used. The \(\alpha\) parameter which balances both terms of the loss was set at \(\alpha=0.5\) after a grid search. All training runs were accelerated using Nvidia Tesla V100 GPUs.
## 4 Experiments
To evaluate the performance of the method, we used the MAPE and Pearson correlation coefficient metrics. All performances are measured on each validation set, utilizing all four models trained on the corresponding training sets. The MAPE measures the mean relative error between the prediction \(\hat{y}\) and the label \(y\). Because of the very wide range of values of the ATP (both within an experiment and between different experiments) as presented in Table 1, the MAPE appears suitable for ensuring comparability, in particular for future studies which may use datasets with different distributions of ATP. Lower values of MAPE indicate better performance. \[\textit{MAPE}(y,\hat{y})=\frac{|y-\hat{y}|}{y} \tag{5}\] The Pearson correlation coefficient quantifies the linear correlation between the set of predictions \(\hat{Y}\) and the set of labels \(Y\). Higher values of the Pearson correlation coefficient indicate better performance. \[\textit{Pearson}(Y,\hat{Y})=\frac{\sum_{i}(Y_{i}-\bar{Y})(\hat{Y}_{i}-\bar{\hat{Y}})}{\sqrt{\sum_{i}(Y_{i}-\bar{Y})^{2}}\sqrt{\sum_{i}(\hat{Y}_{i}-\bar{\hat{Y}})^{2}}} \tag{6}\] Figure 4 qualitatively compares three organoid segmentation methods, illustrating the superiority of our proposed method over simple intensity-based methods (Otsu thresholding) and justifying the use of our prompt generation for SAM [22].
### Results
With our proposed high-throughput screening method, we obtain an average MAPE and Pearson correlation coefficient on the four validation sets of \(0.1755\) and \(0.9214\), respectively. The relative error of \(17.55\%\) is to be put in context with the wide range of ATP values in our dataset, with a ratio between the highest and lowest values of ATP of 14. Table 1 shows the distribution of ATP values along with the corresponding performance of the model in terms of MAPE in each bin. The model performs worst on the bin with the smallest values of ATP, which may be explained by the precision of the measurement of ATP.
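Before turning to the individual ablations, a compact PyTorch-style sketch of the prediction head of Eq. (1) and the loss of Eqs. (2)-(4) is given below; the hidden width is an illustrative assumption, while the four linear layers with PReLU activations follow the description above.

```python
import torch
import torch.nn as nn

class ATPHead(nn.Module):
    """Sketch of Eq. (1): MLP(mean(C) . w_t/||w_t|| (.) w_k). The hidden width is
    illustrative; the MLP uses four linear layers with PReLU activations."""

    def __init__(self, k: int = 768, f: int = 200, hidden: int = 256):
        super().__init__()
        self.w_t = nn.Parameter(torch.ones(f))   # temporal attention weights
        self.w_k = nn.Parameter(torch.ones(k))   # feature-wise attention weights
        self.mlp = nn.Sequential(
            nn.Linear(k, hidden), nn.PReLU(),
            nn.Linear(hidden, hidden), nn.PReLU(),
            nn.Linear(hidden, hidden), nn.PReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, cavities: torch.Tensor) -> torch.Tensor:
        # cavities: (n_cavities, k, f) for one well.
        well = cavities.mean(dim=0)                      # (k, f): mean pooling over cavities
        pooled = well @ (self.w_t / self.w_t.norm())     # (k,):  weighted average over time
        return self.mlp(pooled * self.w_k).squeeze(-1)   # scalar ATP prediction

def atp_loss(y_hat, y, y_max, alpha: float = 0.5):
    """Combined loss of Eqs. (2)-(4): alpha * relative L1 + (1 - alpha) * normalized L1."""
    l_rel = (y - y_hat).abs() / y
    l_norm = (y - y_hat).abs() / y_max
    return alpha * l_rel + (1 - alpha) * l_norm
```

Training then simply iterates over wells, calling `atp_loss` with \(y_{\textit{max}}\) computed on the corresponding training split.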
**Feature Extraction.** In Table 2, we explore the impact of four different feature extraction methods on the final results. Different classical and deep learning features have been explored to highlight the best representations for organoids. The pyradiomics Python package [41] allows the extraction of imaging features from regions of interest. For this study, we extracted features related to first-order statistics, gray-level matrices, and shape features. A total of \(93\) features have been extracted per frame. ResNet50 [16] (pre-trained on ImageNet [12]) is a popular classification architecture, which can be used for feature extraction when removing the classification head. It produces 2048-dimensional features. VICReg [4] is a feature extraction model trained in a self-supervised manner. For our study, we used a model based on the ResNet50 architecture, which produces 4096-dimensional features. Features extracted using models trained in a self-supervised manner (VICReg and DINOv2) seem to adapt better to different downstream tasks, compared to models trained for other specific tasks (ResNet50 on ImageNet), or classical predefined features. The superiority of our proposed model is highlighted both in terms of MAPE and correlation coefficient. **Attention Mechanism.** We used two types of attention mechanisms, one attending to the temporal aspect of the data, and the other attending to the individual features. In Table 3 we perform an ablation study for three variations for each type of attention mechanism. For the temporal attention, we explored the impact of only using the last frame, multihead attention [42] (MHA), or the learnable weight vector \(\mathbf{w}_{t}\). Similarly, for the feature-wise attention, we explored the impact of having no such attention, using multihead attention (MHA) or the learnable weight vector \(\mathbf{w}_{k}\). In terms of temporal attention, using \(\mathbf{w}_{t}\) clearly outperforms other methods, especially the multihead attention, while using the last frame still gives reasonable results. \(\mathbf{w}_{t}\) seems to grasp the temporal importance of each frame. Concerning the feature-wise attention, the use of multihead attention clearly does not seem adequate for this application, while the effect of incorporating \(\mathbf{w}_{k}\) compared to its absence is not readily apparent. The analysis of its weights could however be used for finer explainability of the model. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline ATP bin (\(10^{5}\)) & \([1.1,2.8)\) & \([2.8,4.5)\) & \([4.5,6.2)\) & \([6.2,8.0)\) & \([8.0,9.7)\) & \([9.7,11.4)\) & \([11.4,13.1)\) & \([13.1,14.9]\) \\ \hline Count & 23 & 19 & 11 & 15 & 0 & 7 & 17 & 24 \\ MAPE \(\downarrow\) & 0.34 & 0.15 & 0.08 & 0.07 & NA & 0.10 & 0.23 & 0.12 \\ \hline \hline \end{tabular} \end{table} Table 1: Distribution of ATP values in our dataset, with associated MAPE. \begin{table} \begin{tabular}{c c c c c} \hline \hline Features & Classical & ResNet50 [16] & VICReg [4] & DINOv2 [34] \\ \hline MAPE \(\downarrow\) & 0.3173 & 0.2174 & 0.1847 & **0.1755** \\ Pearson \(\uparrow\) & 0.7883 & 0.8960 & 0.8939 & **0.9214** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of MAPE and Pearson correlation coefficient for four methods of feature extraction. Best results are indicated in **bold**, and second best results are underlined. Figure 4: Comparison of three methods of segmentation. (a) Our proposed method. (b) Otsu thresholding based segmentation. 
(c) Using SAM [22] without any prompts and choosing the mask with the highest predicted IOU.

**Cavity Aggregation.** In multiple instance learning, the choice of the bag-level aggregation plays a crucial role. In Table 4 we test multiple aggregation methods: min-pooling, max-pooling, mean, sum (which differs from the mean since the number of cavities per bag varies), and an SE [19] block followed by a sum aggregator. The sum and mean operators outperform the other types of aggregation, which can be explained by the fact that the ATP is a direct function of the number of organoid cells in the well. This information is not captured by the min-pooling and max-pooling operators. The SE block learns a weight for each cavity based on its features, which is meant to model the fact that different cavities contribute differently to the ATP. While this may be true, we hypothesize that this information is already present in the features, and the use of the SE block therefore only adds noise.

**Takeaway.** Our model has a mean MAPE of \(0.1755\) and a Pearson correlation of \(0.9214\) on the validation set. Trying out different feature extraction methods shows that features from DINOv2 [34] provide the best results. Comparing the impact of a variety of feature and temporal attention schemes justifies the use of \(\mathbf{w}_{t}\) and \(\mathbf{w}_{k}\), while we also show the superiority of using the mean as the bag-level aggregation function.

### Learned Attention Weights

Once the model is trained, analyzing the weights learned by the \(\mathbf{w}_{t}\) and \(\mathbf{w}_{k}\) vectors gives insights into our experiments. Figure 5(a) shows the weight associated with each frame in the \(\mathbf{w}_{t}\) vector after training. Intuitively, we expect later frames to have the most importance. Without any constraints during training, the temporal attention has learned a continuous and smooth distribution, meaning that nearby frames have relatively similar importance. It is noteworthy that the initial frames seem to have more importance for the final prediction than frames after 24 hours of the experiment. This could be attributed to the fact that the organoid's initial state is a significant determinant of the final ATP, likely due to its size, as larger organoids tend to exhibit higher values of ATP. Examining the feature importance in Figure 5(b) shows that the feature attention does not highlight any specific feature, as indicated by a ratio of only 2 between the highest and lowest attention weights. Note that in Figure 5(b), the features are sorted by their attention weight.

**Takeaway.** The temporal attention vector \(\mathbf{w}_{t}\) learns a meaningful relative importance of the time frames, while the feature-wise attention vector \(\mathbf{w}_{k}\) does not make any particular feature stand out.

### Comparison to SOTA

To the best of our knowledge, only Ins-ATP [7] has explored the use of machine learning for the prediction of ATP from organoid microscopy images. They imaged multiple organoids placed in a single matrigel drop (compared to our method, which has one organoid per cavity) at a single time point, which yields very different visual results from our images. We implemented the Ins-ATP method and adapted it to our data by defining the bags of instances as sets of cavities from the wells, using only the last frame.
This can be thought of as a compromise between the presented MeshIns-ATP and DeepIns-ATP [7], as the instances composing the bags are not learned by the model but are chosen as meaningful regions of interest (the cavities). With this method, we obtained an average MAPE and Pearson correlation coefficient over the 4 validation sets of 0.4375 and 0.6794, respectively (compared to 0.1755 and 0.9214, respectively, for our pipeline). This highlights the contribution of our study towards a high-throughput screening method that assesses drug efficacy in real time from a time-lapse microscopy video of PDOs.

**Takeaway.** The Ins-ATP [7] method applied to our dataset gives lower performance than our proposed method.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Cavity aggregation & Min & SE [19] & Max & Sum & Mean \\ \hline MAPE \(\downarrow\) & 0.2425 & 0.2138 & 0.2092 & 0.1797 & **0.1755** \\ Pearson \(\uparrow\) & 0.8912 & 0.8922 & 0.9185 & 0.9204 & **0.9214** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of MAPE and Pearson correlation coefficient for five cavity aggregation methods. Best results are indicated in **bold**, and second best results are underlined.

Figure 5: Visualization of the temporal attention (a) and (sorted) feature attention (b) weights for a model after training (visualization for one fold only). Polynomial fit in red.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Temporal attention & Last frame & Last frame & Last frame & MHA & MHA & MHA & \(\mathbf{w}_{t}\) & \(\mathbf{w}_{t}\) & \(\mathbf{w}_{t}\) \\ Feature attention & None & MHA & \(\mathbf{w}_{k}\) & None & MHA & \(\mathbf{w}_{k}\) & None & MHA & \(\mathbf{w}_{k}\) \\ \hline MAPE \(\downarrow\) & 0.2027 & 0.2417 & 0.2070 & 0.2403 & 0.3719 & 0.2382 & **0.1674** & 0.2320 & 0.1755 \\ Pearson \(\uparrow\) & 0.9169 & 0.8358 & 0.9123 & 0.8755 & 0.6834 & 0.8586 & 0.9209 & 0.8327 & **0.9214** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of MAPE and Pearson correlation coefficient for three types of temporal attention and feature-wise attention. Best results are indicated in **bold**, and second best results are underlined.

### Beyond Predicting the Current ATP

We have shown that, given a sequence of frames, our proposed pipeline can predict the ATP measured at the last frame. Two main questions arise: how well does our model perform in forecasting the ATP, and how many frames of history does it need to predict the current ATP? Note that while we retrain the model with features from a subset of frames, the segmentation maps are still the ones computed on all \(f\) frames.

**Forecasting the ATP.** Given our current data (i.e. a sequence of \(f=200\) frames and a measure of ATP at the last frame), we can evaluate the performance of our pipeline for forecasting the ATP by training the model while omitting the last frames. In Figure 6(a), we show the performance of the model when trained only on frames from 0 to \(i\), which is equivalent to predicting the ATP \((f-i)\) frames in advance. As expected, the performance of the model drops as it is trained to predict the ATP earlier in the video.
However, a MAPE of \(0.28\) when predicting 2 days in advance seems reasonable compared to the best performance of \(0.1755\), especially when considering the wide range of ATP values.

**Required history for ATP prediction.** Similarly, if we omit the first frames from the training, we can evaluate how many frames of history our model needs to predict the ATP. In Figure 6(b) we show the performance of the model when trained only on frames from \(i\) to \(f\) (i.e. having access to \(f-i\) frames). The drop in performance is not as clear as the one shown in Figure 6(a). This indicates that our model performs well without taking many frames into account: the peak performance appears to occur when about 15 frames are given to the model. This is encouraging because it suggests that our model could be used for live ATP estimation while the images are still being acquired, as it does not need the full history of the organoid cells. However, it is important to note that although the ATP prediction part of the model performs well with only 15 frames, the segmentation pipeline (and more precisely, the generation of the first prompt for SAM) relies on movement within the timelapse. We tested how well organoid segmentation works with only the last 15 frames by computing the Dice score against the last 15 frames of the segmentation maps computed using all frames, which serve as ground truth. The mean Dice score is 0.80, with a very uneven distribution among cavities: 75% of the cavities obtain a Dice over 0.90, while 15% obtain a Dice under 0.20. Most segmentations are thus not affected by the use of a smaller number of frames, but those that are affected are prone to complete failure.

**Takeaway.** Our model can predict the ATP in advance with a reasonable drop in performance and is able to predict with only about 15 frames of history, making it usable for online predictions.

## 5 Conclusion

The estimation of ATP is a standard method for assessing drug efficacy on organoids. However, it only provides a measurement at a single timepoint for the entire experiment. In this paper, we propose a method for the spatio-temporal analysis of organoid microscopy timelapse videos for the prediction of ATP. We assess the performance of our approach for predicting the current ATP with different ablations, and we report better performance than the SOTA. Future work includes the further exploration of foundation models for the analysis of organoid videos. In this study, the foundation models are used as frozen blocks. However, the authors of DINOv2 [34] showed that finetuning the model encoders on a specific dataset improves results on that dataset. One direction could be to finetune the feature extraction model so that the organoid representations are better adapted to the prediction of ATP. Similarly, for the SAM [22] model, we could finetune the mask decoder or learn more effective prompts.

**Acknowledgments.** This work has benefited from state financial aid, managed by the Agence Nationale de Recherche under the investment program integrated into France 2030, project reference ANR-21-RHUS-0003. Experiments have been conducted using HPC resources from the "Mesocentre" computing center.

Figure 6: MAPE (left) and Pearson correlation coefficient (right) as a function of the number of frames given to the model. (a) Performance of models trained only on frames from 0 to \(i\). (b) Performance of models trained only on frames from \(i\) to 200.
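For completeness, the following is a minimal sketch of how the per-cavity feature sequences can be restricted to a frame window for the two ablations reported in Figure 6; the tensor layout, feature dimensionality, and step size are illustrative assumptions rather than the settings of our actual implementation.

```python
import torch

def frame_window(features: torch.Tensor, start: int, end: int) -> torch.Tensor:
    """Restrict a (num_cavities, num_frames, feature_dim) tensor to frames [start, end)."""
    return features[:, start:end, :]

f = 200
features = torch.randn(12, f, 384)  # dummy bag: 12 cavities, 384-dim features per frame

# (a) Forecasting ablation: train on frames 0..i, i.e. predict the ATP (f - i) frames in advance.
forecasting_inputs = {i: frame_window(features, 0, i) for i in range(25, f + 1, 25)}

# (b) History ablation: train on frames i..f, i.e. give the model only the last (f - i) frames.
history_inputs = {i: frame_window(features, i, f) for i in range(0, f, 25)}
```

One model would then be retrained per window, each yielding one point of the curves in Figure 6.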
2302.10884
Decoupling for complex curves and improved decoupling for the cubic moment curve
We prove sharp $\ell^2$-decoupling inequalities for non-degenerate complex curves via the bilinear argument due to Guo--Li--Yung--Zorin-Kranich. Secondly, quantifying the iteration in the cubic case, we obtain a logarithmic refinement of the decoupling inequality for the cubic moment curve.
Robert Schippa
2023-02-21T18:58:33Z
http://arxiv.org/abs/2302.10884v2
# Improved decoupling for the moment curve in three dimensions

###### Abstract.

By quantifying a bilinear decoupling iteration for the moment curve in three dimensions due to Guo-Li-Yung-Zorin-Kranich, we show a logarithmic improvement of the decoupling constant at the critical exponent. Correspondingly, we obtain a logarithmic improvement of Vinogradov's mean-value theorem in the cubic case.

Key words and phrases: moment curve in three dimensions, decoupling, Vinogradov's mean-value theorem

## 1. Introduction

Let \(\delta\in\mathbb{N}^{-1}\) and \(\mathcal{P}(\delta)\) be a partition of \([0,1]\) into intervals of length \(\delta\). Let \(g(s)=(s,s^{2})\), and let \((f_{J})_{J\in\mathcal{P}(\delta)}\) be a family of functions \(f_{J}:\mathbb{R}^{2}\to\mathbb{C}\) with \(\operatorname{supp}(\hat{f}_{J})\subseteq\mathcal{V}_{J}\), where \(\mathcal{V}_{J}\) is the parallelepiped of size \(\delta\times\delta^{2}\) in the directions \(g^{\prime}(c_{J})\) and \(g^{\prime\prime}(c_{J})\), with \(c_{J}\) denoting the center of the interval \(J\). Let \(\mathcal{D}_{2}(\delta)\) denote the smallest constant such that the decoupling inequality holds: \[\|\sum_{J\in\mathcal{P}(\delta)}f_{J}\|_{L^{6}(\mathbb{R}^{2})}\leq\mathcal{D}_{2}(\delta)\big{(}\sum_{J\in\mathcal{P}(\delta)}\|f_{J}\|_{L^{6}(\mathbb{R}^{2})}^{2}\big{)}^{\frac{1}{2}}.\] Bourgain-Demeter [2] proved that for any \(\varepsilon>0\) there is \(C_{\varepsilon}>0\) such that \[\mathcal{D}_{2}(\delta)\leq C_{\varepsilon}\delta^{-\varepsilon}.\] Li [9, Theorem 1.1] observed how the double exponential bound \[\mathcal{D}_{2}(\delta)\leq A^{A^{\frac{1}{\varepsilon}}}\delta^{-\varepsilon} \tag{1}\] allows one to sharpen the decoupling constant to \[\mathcal{D}_{2}(\delta)\leq\exp(C\frac{\log\delta^{-1}}{\log(\log\delta^{-1})}).\] This recovered the bound proved for discrete restriction by Bourgain [1, Proposition 2.36] via a divisor counting argument. By a Gauss sum argument, Bourgain [1, Remark 2, p. 118] showed moreover that \[\mathcal{D}_{2}(\delta)\gtrsim\log(\delta^{-1})^{1/6}.\] Li [9] proved (1) via the bilinear approach. More recently, Guth-Maldague-Wang [7] improved (1) to \[\mathcal{D}_{2}(\delta)\leq\log(\delta^{-1})^{c} \tag{2}\] for some (possibly large) constant \(c\). Subsequently, Guo-Li-Yung [5] improved the discrete restriction constant to \(C_{\varepsilon}\log(\delta^{-1})^{2+\varepsilon}\) for \(\varepsilon>0\). The approach in [7] differs from [9] as it uses a high-low decomposition. However, the high-low method becomes more involved in higher dimensions, whereas the bilinear approach, as carried out in higher dimensions by Guo-Li-Yung-Zorin-Kranich [6] (see also [4]), seems more tractable. Li [9] first quantified decoupling via the bilinear approach for the parabola (see also [8]). In this note we turn to the moment curve in three dimensions. Let \(\Gamma_{3}(t)=(t,t^{2},t^{3})\) denote the moment curve mapping in three dimensions. For an interval \(J\subseteq[0,1]\) with center \(c_{J}\), let \(\mathcal{U}_{J}\) be the parallelepiped of dimensions \(4|J|\times 4|J|^{2}\times 4|J|^{3}\), whose center is \(\Gamma_{3}(c_{J})\) and whose sides are parallel to \(\Gamma_{3}^{\prime}(c_{J})\), \(\Gamma_{3}^{\prime\prime}(c_{J})\), \(\Gamma_{3}^{(3)}(c_{J})\).
We define the linear decoupling constant for the three-dimensional moment curve for \(\delta\in\mathbb{N}^{-1}\) as smallest constant, which is monotone decreasing in \(\delta\), such that: \[\|\sum_{J\in\mathcal{P}(\delta)}f_{J}\|_{L^{12}(\mathbb{R}^{3})}\leq\mathcal{ D}_{3}(\delta)\big{(}\sum_{J\in\mathcal{P}(\delta)}\|f_{J}\|_{L^{12}(\mathbb{R}^{3})} ^{2}\big{)}^{1/2}.\] Above \(\mathcal{P}(\delta)\) denotes a partition up to points of \(I\) into intervals of length \(\delta\), and \(f_{J}\in\mathcal{S}(\mathbb{R}^{3})\) with Fourier support in \(\mathcal{U}_{J}\). Bourgain-Demeter-Guth [3] proved that \(\mathcal{D}_{k}(\delta)\leq C_{\varepsilon}\delta^{-\varepsilon}\) for any \(\varepsilon>0\) in any dimension, which yields as corollary the Vinogradov mean value theorem. The argument in [3] relies on multilinear Kakeya estimates. More recently, Guo _et al._[6] found a shorter proof of decoupling for moment curves, which relies on bilinear arguments and induction on dimension. We observe that in one dimension, the decoupling inequality for the moment curve reduces to Plancherel's theorem with \(\mathcal{D}_{1}(\delta)\leq 10\) and in two dimensions the logarithmic loss due to Guth-Maldague-Wang [7] is at disposal. In the present note we quantify the bilinear iteration from [6] for the moment curve in three dimensions and use improved decoupling inequalities without \(\delta^{-\varepsilon}\)-loss in lower dimensions to show the following: **Theorem 1.1** (Improved decoupling for the moment curve in three dimensions).: _There is \(0<\delta_{0}<1\) and \(C>0\) such that for \(0<\delta<\delta_{0}\), we have the following bound for the decoupling constant of the moment curve in three dimensions:_ \[\mathcal{D}_{3}(\delta)\leq\exp(C\frac{\log\delta^{-1}}{\log(\log(\delta^{-1 }))}). \tag{3}\] In the proof we see how losing additional logarithmic factors in the bilinear approach still allows us to show an estimate like in (1), which implies (3) after optimizing in \(\varepsilon\). The optimization argument was carried out in [9] without additional logarithms in the context of the parabola. As mentioned above, the decoupling result for the moment curve yielded Vinogradov's mean-value theorem. In the present instance the decoupling result in Theorem 1.1 yields a logarithmic improvement on the number of simultaneous solutions to the diophantine equations: \[\left\{\begin{array}{ll}\sum_{i=1}^{6}x_{i}&=\sum_{i=1}^{6}y_{i},\\ \sum_{i=1}^{6}x_{i}^{2}&=\sum_{i=1}^{6}y_{i}^{2},\\ \sum_{i=1}^{6}x_{i}^{3}&=\sum_{i=1}^{6}y_{i}^{3}.\end{array}\right. \tag{4}\] For \(1\leq x_{i},y_{i}\leq N\) we denote the number of integer solutions to (4) by \(J(N)\). We have the following corollary to Theorem 1.1: **Corollary 1.2**.: _For \(N\) sufficiently large, there is \(C>0\) such that the following estimate holds:_ \[J(N)\leq\exp\big{(}\frac{C\log(N)}{\log(\log(N))}\big{)}N^{6}.\] For an overview of recent work on Vinogradov's mean-value theorem, we refer to [10]. Via efficient congruencing Wooley [11, 12] firstly proved \[J(N)\leq C_{\varepsilon}N^{6+\varepsilon}\] for any \(\varepsilon>0\). We record the standard argument passing from (3) to the discrete decoupling and yielding the logarithmic improvement for the sake of self-containedness. 
Proof.: As the well-known argument goes, we write with \(e(x)=\exp(2\pi ix)\): \[J(N)=\int_{[0,1]^{3}}\big{|}\sum_{j=1}^{N}e(jx_{1}+j^{2}x_{2}+j^{3}x_{3})\big{|} ^{12}dx_{1}dx_{2}dx_{3}.\] By change of variables, we find \[J(N)=N^{-6}\int_{[0,N]\times[0,N^{2}]\times[0,N^{3}]}\big{|}\sum_{j=1}^{N}e(x. \Gamma_{3}(j/N))\big{|}^{12}dx_{1}dx_{2}dx_{3}.\] Now we use periodicity in \(x_{1}\) with period \(N\) and in \(x_{2}\) with period \(N^{2}\) to write \[J(N)\lesssim N^{-9}\int_{[0,N^{3}]^{3}}\big{|}\sum_{j=1}^{N}e(x.\Gamma_{3}(j/N ))\big{|}^{12}dx_{1}dx_{2}dx_{3}.\] Let \(f_{j}=\exp(x.\Gamma_{3}(j/N))\psi(x)\) with \(\psi\in\mathcal{S}(\mathbb{R}^{3})\) be a Schwartz function such that \(\operatorname{supp}(\hat{\psi})\subseteq N^{-3}\) and \(|\psi(x)|\sim 1\) on \(B(0,N^{3})\). By the above we have \[J(N)\lesssim N^{-9}\int_{[0,N^{3}]^{3}}\big{|}\sum_{j=1}^{N}f_{j}(x)\big{|}^{ 12}dx_{1}dx_{2}dx_{3}.\] Applying Theorem 1.1 for \(N\) large enough gives \[J(N)\lesssim N^{-9}\exp\big{(}\frac{C\log(N)}{\log(\log(N))}\big{)}\big{(} \sum_{j=1}^{N}\|f_{j}\|_{L^{12}(w_{B})}^{2}\big{)}^{6}.\] Since \(|f_{j}(x)|=1\) and \(\|f_{j}\|_{L^{12}(w_{B})}\lesssim(N^{9})^{\frac{1}{12}}\), we find \[J(N)\lesssim\exp\big{(}\frac{C\log(N)}{\log(\log(N))}\big{)}N^{6}.\] The claim follows from choosing \(C\) slightly larger. _Outline of the paper._ In Section 2 we introduce notations, explain the bilinear reduction, and give an overview of the constants coming up in the iteration. In Section 3 we recall the improved decoupling results in two dimensions and show stability. In Section 4 we carry out the decoupling iteration using asymmetric decoupling constants. Then we can bootstrap the decoupling constant to prove the claim. ## 2. Preliminaries ### Notations For \(k\in\mathbb{N}\) let \(\Gamma_{k}:[0,1]\to\mathbb{R}^{k}\) denote the moment curve in \(\mathbb{R}^{k}\). Let \(\delta\in\mathbb{N}^{-1}=\{\frac{1}{n}:n\in\mathbb{N}\}\). For a closed interval \([a,b]=I\subseteq[0,1]\) with \(|I|\delta^{-1}\in\mathbb{N}\) we denote by \(\mathcal{P}(I,\delta)\) the decomposition into closed intervals of length \(\delta\): \(I=\bigcup_{j=0}^{N-1}[a+j\delta,a+(j+1)\delta]\) with \(N\delta=|I|\). If \(I=[0,1]\), we let \(\mathcal{P}(I,\delta)=\mathcal{P}(\delta)\). Above we defined for an interval \(J\subseteq[0,1]\) with center \(c_{J}\), the parallelepiped \(\mathcal{U}_{J}\) of dimensions \(4|J|\times 4|J|^{2}\times 4|J|^{3}\), whose center is \(\Gamma(c_{J})\) and sides are parallel to \(\Gamma^{\prime}_{3}(c_{J})\), \(\Gamma^{\prime\prime}_{3}(c_{J})\), \(\Gamma^{(3)}_{3}(c_{J})\). More generally, we define for a curve \(\gamma:[0,1]\to\mathbb{R}^{k}\) the parallelepiped \(\mathcal{U}_{J,\gamma}\) with center \(c_{J}\) of dimensions \(4|J|\times 4|J|^{2}\times\ldots\times 4|J|^{k}\) into directions \(\partial\gamma(c_{J}),\ldots,\partial^{k}\gamma(c_{J})\). In the following, for an interval \(J\) and curve \(\gamma\), let \(\mathcal{U}^{o}_{J,\gamma}\) denote the parallelepiped centered at the origin, which is dual to \(\mathcal{U}_{J,\gamma}\), that is \[\mathcal{U}^{o}_{I,\gamma}=\{x\in\mathbb{R}^{k}\,:\,\big{|}\langle x,\partial^{ i}\gamma(c_{J})\rangle\big{|}\leq\frac{1}{4}|J|^{-i},\quad 1\leq i\leq k\}.\] This is a parallelepiped of size \(\sim|J|^{-1}\times|J|^{-2}\times|J|^{-3}\). 
We define a bump function adapted to \(\mathcal{U}^{o}_{I}\) by \[\phi_{I}(x)=|\mathcal{U}^{o}_{I}|^{-1}\inf\{t\geq 1\,:x/t\in\mathcal{U}^{o}_{I} \}^{-10k}.\] This is \(L^{1}\)-normalized as can be seen from anisotropic dilation: \(\int_{\mathbb{R}^{k}}\phi_{I}(x)dx\leq C_{4,k}\). For \(k\in\mathbb{N}\), we define the critical decoupling exponent for the moment curve \(\Gamma_{k}\) as \(p_{k}=k(k+1)\). We define \(\mathcal{D}_{k}(\delta)\) as monotone decreasing in \(\delta\) (this means if \(\delta^{-1}\) becomes larger, the decoupling constant is also supposed to become larger) and smallest constant, which satisfies \[\big{\|}\sum_{J\in\mathcal{P}(\delta)}f_{J}\big{\|}_{L^{p_{k}}(\mathbb{R}^{k}) }\leq\mathcal{D}_{k}(\delta)\big{(}\sum_{J\in\mathcal{P}(\delta)}\|f_{J}\|_{ L^{p_{k}}(\mathbb{R}^{k})}^{2}\big{)}^{\frac{1}{2}}.\] ### Bilinear reduction, and uncertainty principle We define the bilinear decoupling constant \(\mathcal{B}_{k}(\delta)\) as smallest constant decreasing in \(\delta\) such that \[\big{(}\int_{\mathbb{R}^{k}}|\sum_{J_{1}\in\mathcal{P}(I_{1}, \delta)}f_{J_{1}}|^{p_{k}/2}|\sum_{J_{2}\in\mathcal{P}(I_{2},\delta)}f_{J_{2}} |^{p_{k}/2}\big{)}^{1/p_{k}} \leq\mathcal{B}_{k}(\delta)\big{(}\sum_{J_{1}\in\mathcal{P}(I_{1},\delta)}\|f_{J_{1}}\|_{L^{p_{k}}}^{2}\big{)}^{1/4}\] \[\times\big{(}\sum_{J_{2}\in\mathcal{P}(I_{2},\delta)}\|f_{J_{2}} \|_{L^{p_{k}}}^{2}\big{)}^{1/4}.\] In the above display, we consider intervals \(I_{i}\subseteq[0,1]\), \(i=1,2\) with \(\operatorname{dist}(I_{1},I_{2})\geq\frac{1}{4}\) and \(|I_{i}|\delta\in\mathbb{N}\). We have the following linear-to-bilinear reduction: **Lemma 2.1** (Bilinear reduction, [6, Lemma 2.2]).: _If \(\delta=2^{-M}\), then there is \(C_{1}>0\) such that_ \[\mathcal{D}_{k}(\delta)\leq C_{1}\big{(}1+\sum_{n=2}^{M}\mathcal{B}_{k}(2^{-M +n-2})^{2}\big{)}^{1/2}.\] The proof of the above lemma is based on a Whitney decomposition and affine rescaling, which is already very important for the linear decoupling: **Lemma 2.2** (Affine rescaling, [6, Lemma 2.3]).: _Let \(I\in\mathcal{P}(2^{-n})\) for some integer \(n\geq 0\). For any \(\delta\in(0,2^{-n})\) and any tuple of functions \((f_{J})_{J\in\mathcal{P}(I,\delta)}\) with \(\text{supp}(\hat{f}_{J})\subseteq\mathcal{U}_{J}\) for all \(J\), the following holds:_ \[\|f_{I}\|_{L^{p_{k}}(\mathbb{R}^{k})}\leq\mathcal{D}_{k}(2^{n}\delta)\big{(} \sum_{J\in\mathcal{P}(I,\delta)}\|f_{J}\|_{L^{p_{k}}}^{2}\big{)}^{1/2}. \tag{5}\] The affine rescaling yields submultiplicativity: **Lemma 2.3**.: _We have for \(\delta\), \(\sigma\), \(\delta/32\sigma\in\mathbb{N}^{-1}\):_ \[\mathcal{D}_{k}(\delta)\leq\mathcal{D}_{k}(\sigma)\mathcal{D}_{k}(\delta/32 \sigma). \tag{6}\] Proof.: We can suppose that \(\sigma\leq\frac{1}{32}\) because for \(\sigma\in[\frac{1}{32},1]\) we have by monotonicity trivially \[\mathcal{D}_{k}(\delta)\leq\mathcal{D}_{k}(\delta/32\sigma).\] We partition \(\mathcal{P}(\delta)\) into collections indexed by \(\tilde{J}\in\mathcal{P}(8\sigma)\) such that \(\mathcal{U}_{J}\subseteq\mathcal{U}_{\tilde{J}}\) and write \(J\sim\tilde{J}\) (it suffices to take \(J\) as a child of \(\tilde{J}\) or as a child of a neighbour), and we write \(f_{\tilde{J}}=\sum_{J\sim\tilde{J}}f_{J}\). 
Then we can apply decoupling at \(8\sigma\) to find \[\big{\|}\sum_{J}f_{J}\big{\|}_{L^{p_{k}}(\mathbb{R}^{k})} \leq\mathcal{D}_{k}(8\sigma)\big{(}\sum_{\tilde{J}\in\mathcal{P}(8\sigma)}\|f_{\tilde{J}}\|_{L^{p_{k}}(\mathbb{R}^{k})}^{2}\big{)}^{\frac{1}{2}}\] \[\leq\mathcal{D}_{k}(8\sigma)\mathcal{D}_{k}(\delta/32\sigma)\big{(}\sum_{J\in\mathcal{P}(\delta)}\|f_{J}\|_{L^{p_{k}}(\mathbb{R}^{k})}^{2}\big{)}^{\frac{1}{2}},\] where we used affine rescaling in the second step and that \(J\) is either a child of \(\tilde{J}\) or a child of a neighbouring interval. Finally, we use monotonicity \(\mathcal{D}_{k}(8\sigma)\leq\mathcal{D}_{k}(\sigma)\) to conclude the proof. In the iteration to estimate \(\mathcal{D}_{k}(\delta)\), we use monotonicity of \(\mathcal{B}_{k}(\delta)\) to write \[\mathcal{D}_{k}(\delta)\leq C_{1}\log(\delta^{-1})\mathcal{B}_{k}(\delta). \tag{7}\] The reason we do not resort to the slightly sharper argument of broad-narrow reduction, which is used by Li [9], is that the unit distance separation of the intervals simplifies the forthcoming arguments, and we are losing logarithmic factors in the iteration anyway. We also use the following instance of the uncertainty principle: **Lemma 2.4** ([6, Lemma 3.3]).: _For \(p\in[1,\infty)\) and \(J\subseteq[0,1]\) we have_ \[|g_{J}|^{p}\leq C_{p}(|g_{J}|^{p}*\phi_{J}),\] _for every \(g_{J}\) with \(\text{supp}(\hat{g}_{J})\subseteq C^{\prime}\mathcal{U}_{J}\)._ We also record the following trivial bound due to Cauchy-Schwarz: \[\mathcal{D}_{k}(\delta)\leq\delta^{-\frac{1}{2}}. \tag{8}\]

### Overview of constants

In the following we denote by \(C_{i}\), \(i=1,\dots,8\), and \(c\) fixed (possibly very large) constants, which will be defined in the course of the argument.

* \(C_{1}\) is used in the linear-to-bilinear reduction (7),
* \(C_{2}\), \(C_{3}\) are used to record constants in the arguments involving lower dimensional decoupling, and \(c\geq 1\) denotes the exponent in the logarithmic loss for the \(\ell^{2}L^{6}\)-decoupling,
* \(C_{4}\) depends on the \(L^{1}\)-norm of an essentially \(L^{1}\)-normalized function (see Lemma 4.2),
* \(C_{5}\) is a constant which comes up in the key iteration to lower the scale,
* \(C_{6}\) comes from an application of the triangle inequality to lower the scale once to \(\nu\) (see (26)),
* \(N=N(\varepsilon,c)\) will later in the proof denote the number of iterations to lower the scale,
* \(C_{7}\), \(C_{8}\) are absolute constants used to record intermediate estimates for \(\mathcal{D}_{3}(\delta)\) after carrying out the decoupling iteration (see Lemmas 4.6, 4.7).

## 3. Decoupling in one and two dimensions

In this section we argue how the improved decoupling result by Guth-Maldague-Wang extends to the family of curves presently considered. Some of the arguments are already contained in [7, Appendix], but we opt to give the details. Then we shall see how we can use lower dimensional decoupling in bilinear expressions. Their improved decoupling result is formulated for normalized curves as follows: **Theorem 3.1** ([7, Appendix]).: _Let \(\gamma:[0,1]\to\mathbb{R}^{2}\), \(\gamma(t)=(t,h(t))\) be a curve such that \(h\in C^{2}([-1,2])\) with \(h(0)=h^{\prime}(0)=0\) and \(\frac{1}{2}\leq h^{\prime\prime}(t)\leq 2\).
Then, there are \(C,c>0\) such that for \((f_{J})_{J\in\mathcal{P}_{\delta}}\) with \(\text{supp}(\hat{f}_{J})\subseteq\mathcal{U}_{J,\gamma}\) we have:_ \[\big{\|}\sum_{J\in\mathcal{P}_{\delta}}f_{J}\big{\|}_{L^{6}(\mathbb{R}^{2})} \leq C(\log(\delta^{-1}))^{c}\big{(}\sum_{J}\|f_{J}\|_{L^{6}(\mathbb{R}^{2})}^ {2}\big{)}^{\frac{1}{2}}.\] In the following we want to argue that the result extends to more general curves \(\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t))\in C^{5}\) with \[\|\gamma\|_{C^{5}}\leq D_{3}<\infty\text{ and }0<D_{1}\leq|\gamma^{\prime}(t) \wedge\gamma^{\prime\prime}(t)|\leq D_{2}<\infty. \tag{9}\] **Proposition 3.2** (Stability of improved decoupling).: _Suppose \(\gamma\in C^{5}\) satisfies (9), and let \((f_{J})_{J\in\mathcal{P}_{\delta}(I)}\) with \(\text{supp}(\hat{f}_{J})\subseteq C^{\prime}\mathcal{U}_{J,\gamma}\). Then, there is \(C(\underline{D},C^{\prime})\) such that_ \[\big{\|}\sum_{J}f_{J}\big{\|}_{L^{6}(\mathbb{R}^{2})}\leq C(\log(\delta^{-1}) )^{c}\big{(}\sum_{J}\|f_{J}\|_{L^{6}}^{2}\big{)}^{\frac{1}{2}}. \tag{10}\] Proof.: In the first step we reduce the curves \(\gamma\) to \((t,h(t))\) by finite decomposition, rotation, and translation, which only depends on \(\underline{D}\): For any point \(\gamma(t_{*})\) we can obtain by rotation and translation that \(\gamma(t_{*})=0\), \(\dot{\gamma}(t_{*})=(c,0)\) for some \(c>0\), and \(\ddot{\gamma}(t_{*})>0\). By the implicit function theorem we obtain a reparametrization \(t=g(s)\) such that \(\gamma_{1}(g(s))=s\). The interval on which the reparametrization exists depends on \(c\) and \(\|\gamma\|_{C^{2}}\). \(c\) is bounded from above by \(\mathcal{D}_{3}\) and from below by using the torsion: \[\begin{vmatrix}c&\ddot{\gamma}_{1}(t_{*})\\ 0&\ddot{\gamma}_{2}(t_{*})\end{vmatrix}=c|\ddot{\gamma}_{2}(t_{*})|\geq D_{1} \Rightarrow c\geq\frac{D_{1}}{D_{3}}.\] This means we find finitely many curves \(\tilde{\gamma}(s)=(s,h(s))\) with \(h(0)=h^{\prime}(0)=0\) and \(0<D^{\prime}_{1}\leq h^{\prime\prime}(s)\leq D^{\prime}_{2}<\infty\) with \(D^{\prime}_{i}=D^{\prime}_{i}(\underline{D})\). We can compare the rectangles \(\tilde{\gamma}\) and \(\gamma\) by noting that from \(\tilde{\gamma}(s)=\gamma(g(s))\) follows: \[\dot{\tilde{\gamma}}(s)=\dot{\gamma}(g(s))g^{\prime}(s),\quad\ddot{\tilde{ \gamma}}(s)=\tilde{\gamma}(g(s))(g^{\prime}(s))^{2}+\dot{\gamma}(g(s))g^{ \prime\prime}(s).\] The bilipschitz comparability of rectangles follows then from \(g^{\prime}(s)\sim_{\underline{D}}1\) and \(|g^{\prime\prime}(s)|\leq\kappa(\underline{D})\). In \(s\) parametrization, the rectangles \(C^{\prime}\mathcal{U}_{J,\gamma}\) become centered at \(\gamma(t_{J})=\tilde{\gamma}(s_{J})\) and can be contained in rectangles of length \(C^{\prime\prime}\delta\times C^{\prime\prime}\delta^{2}\) in the directions \(\dot{\tilde{\gamma}}(s_{J})\), \(\ddot{\tilde{\gamma}}(s_{J})\). For this reason we observe \(\text{supp}(\hat{f}_{J})\subseteq\mathcal{C}^{\prime\prime}\mathcal{U}_{J, \tilde{\gamma}}\). In the following \(\gamma(t)=(t,h(t))\), and we turn to normalization of \(h\). We subdivide \([0,1]\) into intervals \(I_{s}\) of length \(s\). 
A Taylor expansion of \(\gamma\) at the center \(t_{c}\) gives \[\gamma(t)=\gamma(t_{c})+\gamma^{\prime}(t_{c})(t-t_{c})+\gamma^{\prime\prime}(t _{c})\frac{(t-t_{c})^{2}}{2}+O((t-t_{c})^{3}).\] Since \(|\dot{\gamma}(t_{c})\wedge\tilde{\gamma}(t_{c})|=|h^{\prime\prime}(t_{c})|\neq 0\), there is an anisotropic dilation \(\underline{d}=\text{diag}(d_{1},d_{2})\) such that after translation \[\tilde{\gamma}(t)=te_{1}+\frac{t^{2}}{2}e_{2}+G(t)t^{3}e_{2}.\] In the above display \(e_{i}\) denote the unit vectors such that \(\gamma(t)=\gamma_{1}(t)e_{1}+\gamma_{2}(t)e_{2}=(\gamma_{1}(t),\gamma_{2}(t))\). The representation \(G(t)t^{3}e_{2}\) with \(G\in C^{2}\) for the third order remainder term in the Taylor expansion (after dilation) follows from the integral representation of the remainder: \[R_{3}(t)=\int_{0}^{t}\frac{\gamma^{(3)}(s)}{6}(t-s)^{3}ds.\] We obtain \[G(t)=\int_{0}^{t}\frac{\gamma^{(3)}(s)}{6}(1-\frac{s}{t})^{3}ds=t\int_{0}^{1} \frac{\gamma^{(3)}(ts^{\prime})}{6}(1-s^{\prime})^{3}ds^{\prime}\] and for \(\gamma\in C^{5}\) we find \(G\in C^{2}\) and \(\|G\|_{C^{2}}\leq\kappa(\underline{D})\). Moreover, \[\tilde{\gamma}^{\prime\prime}(t)=e_{2}+(G^{\prime}(t)t^{3}+3G(t)t^{2})^{\prime}e _{2}=e_{2}+(G^{\prime\prime}(t)t^{3}+6t^{2}G^{\prime}(t)+6G(t)t)e_{2}.\] Clearly, \(|G^{\prime\prime}(t)t^{3}+6t^{2}G^{\prime}(t)+6G(t)t|=O_{\underline{D}}(t)\) and choosing \(s\) small enough only depending on \(\underline{D}\), we finish the decomposition into curves of the kind \(\gamma(t)=(t,h(t))\) with \(h(0)=h^{\prime}(0)=0\) and \(\frac{1}{2}\leq h^{\prime\prime}(t)\leq 2\) up to rigid motions and anisotropic dilations controlled by \(\underline{D}\). Now we consider the decoupling of \((t,h(t))\) with \(\text{supp}\hat{f}_{J}\subseteq C\mathcal{U}_{J,\gamma}\) and shall prove that \[\big{\|}\sum_{J\in\mathcal{P}(\delta)}f_{J}\big{\|}_{L^{6}(\mathbb{R}^{2})} \leq\tilde{C}(C,C^{\prime})(\log(\delta^{-1}))^{c}\big{(}\sum_{J}\|f_{J}\|_{L^ {6}(\mathbb{R}^{2})}^{2}\big{)}^{\frac{1}{2}}\] with \(C\) like in Theorem 3.1. First, we observe that Theorem 3.1 applies with \(\tilde{C}=C\) for \(C^{\prime}\leq 1\). We turn to \(C^{\prime}\geq 1\): The minor technical issue is that the blocks \(\mathcal{U}_{J,\gamma}\) are overlapping more often than in the original collection. We observe that these blocks are in the \(\tilde{\delta}\)-neighbourhood for \(\tilde{\delta}=10^{10}(C^{\prime})^{2}\delta\). So we can apply decoupling for \(\tilde{\delta}\), but the decomposition into \(\mathcal{U}_{J,\gamma}\) for \(\tilde{J}\in\mathcal{P}_{\tilde{\delta}}\) is too coarse. 
For \(\tilde{J}\) we choose a collection \(\mathcal{J}\) of intervals \(J\subseteq\tilde{J}\) such that \(\sum_{\tilde{J}}f_{\tilde{J}}=\sum_{J}f_{J}\) and find: \[\big{\|}\sum_{\tilde{J}\in\mathcal{P}(\tilde{\delta})}f_{\tilde{J}}\big{\|}_{ L^{6}(\mathbb{R}^{2})}\leq C\log((10^{10}(C^{\prime})^{2}\delta)^{-1})^{c} \big{(}\sum_{\tilde{J}\in\mathcal{P}(\tilde{\delta})}\|f_{\tilde{J}}\|_{L^{6} (\mathbb{R}^{2})}^{2})^{\frac{1}{2}}.\] Since \(\#\{J\subseteq\tilde{J}\}=O((C^{\prime})^{2})\), an application of Cauchy-Schwarz finishes the proof: \[\big{\|}\sum_{J\in\mathcal{U}_{J}}f_{J}\big{\|}_{L^{6}(\mathbb{R}^{2})}\leq \tilde{C}(C,C^{\prime})(\log(\delta^{-1}))^{c}\big{(}\sum_{J}\|f_{J}\|_{L^{6} (\mathbb{R}^{2})}^{2}\big{)}^{\frac{1}{2}}.\] We summarize uniform decoupling inequalities for families of curves: Suppose \(\ell\in\{1,2\}\) and \(\gamma:[0,1]\to\mathbb{R}^{\ell}\) is a curve such that \[\|\gamma\|_{C^{5}}\leq D_{3}\text{ and for any }t\in[0,1]:\,D_{1}\leq \big{|}\bigwedge_{i=1}^{\ell}\partial^{i}\gamma(t)\big{|}\leq D_{2}. \tag{11}\] **Proposition 3.3** (Decoupling for curves with torsion for \(d=1,2\)).: _Suppose that \(\ell\in\{1,2\}\), and \(\gamma:[0,1]\to\mathbb{R}^{\ell}\) is a curve satisfying (11). Then, for any \(C>0\), any \(\delta\in(0,1)\), and any tuple of functions \((f_{J})_{J\in\mathcal{P}(\delta)}\) with \(\text{supp}(\hat{f}_{J})\subseteq C\mathcal{U}_{J,\gamma}\) for any \(J\), the following inequality holds:_ \[\big{\|}\sum_{J\in\mathcal{P}(\delta)}f_{J}\big{\|}_{L^{p_{\ell}}(\mathbb{R}^{ \ell})}\leq C^{\prime}_{\ell}(C,\underline{D},\delta)\big{(}\sum_{J\in \mathcal{P}(\delta)}\|f_{J}\|_{L^{p_{\ell}}(\mathbb{R}^{\ell})}^{2}\big{)}^{1/2} \tag{12}\] _with_ \[C^{\prime}_{\ell}(C,\underline{D},\delta)=\begin{cases}C^{\prime}(C,\underline {D}),&\ell=1,\\ C^{\prime}(C,\underline{D})(\log(\delta^{-1}))^{c},&\ell=2.\end{cases}\] Proof.: For \(\ell=1\) this is obvious, for \(\ell=2\) this is Proposition 3.2. **Corollary 3.4**.: _Under the assumptions of Proposition 3.3, for every ball \(B\subseteq\mathbb{R}^{\ell}\) of radius \(\delta^{-\ell}\), we have_ \[\fint_{B}\big{|}\sum_{J\in\mathcal{P}(\delta)}f_{J}\big{|}^{p_{\ell}}\leq C^{ \prime}_{\ell}(C,\underline{D},\delta)\big{(}\sum_{J\in\mathcal{P}(\delta)}\| f_{J}\|_{L^{p_{\ell}}(\phi_{B})}^{2}\big{)}^{p_{\ell}/2}. \tag{13}\] _In the above display \(\phi_{B}(x)=|B|^{-1}(1+\delta^{\ell}\text{dist}(x,B))^{-30}\) denotes an \(L^{1}\)-normalized bump function adapted to \(B\), and \(\fint_{B}\) denotes the average integral._ Proof.: We apply Proposition 3.3 to functions \(f_{J}\psi_{B}\), where \(\psi_{B}\) is a Schwartz function such that \(|\psi_{B}|\gtrsim 1\) on \(B\) and \(\operatorname{supp}(\hat{\psi}_{B})\subseteq B(0,\delta^{\ell})\). For \(B\) centered at the origin, it suffices to consider \(\psi_{B}(x)=\delta^{-\ell^{\prime}}\int_{\mathbb{R}^{3}}e^{ix\cdot\xi}a( \delta^{-\ell}\xi)d\xi\) with \(a\in C_{c}^{\infty}(\mathbb{R}^{3})\) a radially decreasing function satisfying \(a(0)=1\), \(a\geq 0\) and having support in \(B(0,c)\) for \(c\) small enough. The general case follows from translation. 
Then \[\fint_{B}\big{|}\sum_{J}f_{J}\big{|}^{p_{\ell}} \lesssim\int_{\mathbb{R}^{3}}\big{|}\sum_{J}f_{J}\frac{\psi_{B}}{ |B|^{1/p_{\ell}}}\big{|}^{p_{\ell}}\lesssim\int_{\mathbb{R}^{3}}\big{|}\sum_{J }f_{J}\frac{\psi_{B}}{|B|^{1/p_{\ell}}}\big{|}^{p_{\ell}}\] \[\leq C^{\prime}(C,\underline{D})C_{\ell}(\delta)\big{(}\sum_{J} \|\frac{f_{J}\psi_{B}}{|B|^{1/p_{\ell}}}\|_{L^{p_{\ell}}}^{2}\big{)}^{p_{\ell} /2}\] with \[C_{\ell}(\delta)=\begin{cases}1,&\ell=1,\\ (\log(\delta^{-1}))^{c},&\ell=2.\end{cases}\] To conclude the proof, we need to argue that \[\sup_{x}\frac{|\psi_{B}(x)|^{p_{\ell}}}{|B|\phi_{B}(x)}\lesssim 1.\] This follows from the rapid decay of \(\psi_{B}\) away from \(B\) (which is by our definition of \(\psi_{B}\) faster than any polynomial). Let \(R=\delta^{-\ell}\) and suppose again \(B\) is centered at the origin. For \(\operatorname{dist}(x,B)\leq R\) we have \(|\psi_{B}(x)|^{p_{\ell}}\leq C\), \(|B|\phi_{B}(x)\geq 1\). For \(\operatorname{dist}(x,B)\geq R\), which means \(x\geq 2R\), we can estimate \[\psi_{B}(x)=\psi_{1}(R^{-1}x)\leq C_{N}(1+R^{-1}x)^{-N}.\] Therefore, \((\psi_{1}(R^{-1}x))^{p_{\ell}}\leq C_{N}(R^{-1}x)^{-N\cdot p_{\ell}}\) and \(|B|\phi_{B}(x)\sim(R^{-1}x)^{-30}\). Choosing \(N\) large enough yields an acceptable contribution. **Lemma 3.5** (Lower degree decoupling (see [6, Lemma 3.5.])).: _Let \(\ell\in\{1,2\}\). Let \(\delta\in(0,1)\) and \((f_{K})_{K\in\mathcal{P}(\delta)}\) be a tuple of functions so that \(\operatorname{supp}\hat{f}_{K}\subseteq\mathcal{U}_{K}\) for every \(K\). If \(0\leq a\leq(3-\ell+1)b/\ell\), then for any pair of intervals \(I\in\mathcal{P}(\delta^{a})\), \(I^{\prime}\in\mathcal{P}(\delta^{b})\) with \(\operatorname{dist}(I,I^{\prime})\geq 1/4\), we obtain for \(\ell=1\):_ \[\int_{\mathbb{R}^{3}}(|f_{I}|^{2}*\phi_{I})(|f_{I^{\prime}}|^{10}*\phi_{I^{ \prime}})\leq C_{1}\sum_{J\in\mathcal{P}(I,\delta^{3b})}\int_{\mathbb{R}^{3}}( |f_{J}|^{2}*\phi_{J})(|f_{I^{\prime}}|^{10}*\phi_{I^{\prime}}), \tag{14}\] _and for \(\ell=2\):_ \[\int_{\mathbb{R}^{3}}(|f_{I}|^{6}*\phi_{I})(|f_{I^{\prime}}|^{6}*\phi_{I^{ \prime}})\leq C_{2}(\log(\delta^{-b}))^{c}\big{(}\sum_{J\in\mathcal{P}(I, \delta^{b})}\big{(}\int_{\mathbb{R}^{3}}\big{(}|f_{J}|^{6}*\phi_{J}\big{)}\big{(} |f_{I^{\prime}}|^{6}*\phi_{I^{\prime}}\big{)}\big{)}^{\frac{1}{3}}\big{)}^{3}. \tag{15}\] We need the following transversality observation: **Lemma 3.6** ([6, Lemma 3.5]).: _Let \(\Gamma_{k}(t)=(t,t^{2},\ldots,t^{k}):[0,1]\to\mathbb{R}^{k}\). For any integers \(0\leq\ell\leq k\) and any \(\xi_{1},\xi_{2}\in\mathbb{R}\), we have_ \[|\partial^{1}\Gamma_{k}(\xi_{1})\wedge\ldots\wedge\partial^{\ell}\Gamma_{k}(\xi _{1})\wedge\partial^{1}\Gamma_{k}(\xi_{2})\wedge\ldots\wedge\partial^{k-\ell} \Gamma_{k}(\xi_{2})|\gtrsim_{k,\ell}|\xi_{1}-\xi_{2}|^{\ell(k-\ell)}.\] Now we are ready to prove Lemma 3.5. We repeat the argument from [6] for convenience to make the involved implicit constants transparent. By quantifying the decoupling constants, we can improve the \(\delta^{-\varepsilon}\) bound from [6] as claimed. Proof of Lemma 3.5.: Denote \(b^{\prime}=(3-\ell+1)b/\ell\) and \(k=3\). Fix \(\xi^{\prime}\in I^{\prime}\), let \(V^{m}(\xi^{\prime})=\operatorname{span}(\partial^{1}\Gamma_{k}(\xi^{\prime}), \ldots,\partial^{m}\Gamma_{k}(\xi^{\prime}))\) be the tangent space for \(m\in\{1,2,3\}\), and let \(\hat{H}=\mathbb{R}^{k}/V^{k-\ell}(\xi^{\prime})\) be the quotient space. Let \(P:\mathbb{R}^{k}\to\hat{H}\) be the projection onto \(\hat{H}\). 
For every \(\xi\in I\), we have by Lemma 3.6 that \[|\partial^{1}(P\circ\Gamma_{k})(\xi)\wedge\ldots\wedge\partial^{\ell}(P\circ \Gamma_{k})(\xi)|\gtrsim 1.\] Moreover, \(P(\mathcal{U}_{J})\subseteq C^{\prime}\mathcal{U}_{J,P\circ\Gamma_{k}}\). Let \(H=V^{k-\ell}(\xi^{\prime})^{\perp}\) be the orthogonal complement in \(\mathbb{R}^{k}\) so that \(\hat{H}\) is its Pontryagin dual. Since the Fourier support of the restriction \(f_{J}\big{|}_{H+z}\) to almost every translated copy is contained in \(P(\text{supp}(\hat{f}_{J}))\) and \(P(\mathcal{U}_{J})\subseteq C^{\prime}\mathcal{U}_{J,P\circ\Gamma}\), we can apply lower dimensional decoupling inequalities. We write by Fubini's theorem \[\int_{\mathbb{R}^{k}}\big{(}|f_{I}|^{p_{\ell}}*\phi_{I}\big{)}\big{(}|f_{I^{ \prime}}|^{p_{k}-p_{\ell}}*\phi_{I^{\prime}}\big{)}=\int_{z\in\mathbb{R}^{k}} \mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B_{H}(z,\delta-b^{\prime}\ell)}\big{(}|f_{I}|^{p_ {k}-p_{\ell}}*\phi_{I^{\prime}}\big{)}=\int_{z\in\mathbb{R}^{k}}\mathchoice{{ \vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B_{H}(z,\delta-b^{\prime}\ell)}\big{(}|f_{I}|^{p_ {\ell}}*\phi_{I}\big{)}\big{(}|f_{I^{\prime}}|^{p_{k}-p_{\ell}}*\phi_{I^{ \prime}}\big{)}, \tag{16}\] where \(B_{H}(z,\delta-b^{\prime}\ell)\) is the \(\ell\)-dimensional ball with radius \(\delta^{-b^{\prime}\ell}\) centered at \(z\) inside the affine subspace \(H+z\). Since \(B_{H}(0,\delta-b^{\prime}\ell)=B_{H}(0,\delta^{-(k-\ell+1)b})\subseteq C^{ \prime}\mathcal{U}_{I^{\prime}}^{o}\), we have \[\sup_{x\in B_{H}(z,\delta-b^{\prime}\ell)}\big{(}|f_{I^{\prime}}|^{p_{k}-p_{ \ell}}*\phi_{I^{\prime}}\big{)}(x)\lesssim\big{(}|f_{I^{\prime}}|^{p_{k}-p_{ \ell}}*\phi_{I^{\prime}}\big{)}(z). \tag{17}\] This allows us to continue to write \[(\ref{eq:16})\lesssim\int_{z\in\mathbb{R}^{k}}\big{(}\mathchoice{{\vbox{ \hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B_{H}(z,\delta-b^{\prime}\ell)}|f_{I}|^{p_{ \ell}}*\phi_{I}\big{)}\big{(}|f_{I^{\prime}}|^{p_{k}-p_{\ell}}*\phi_{I^{ \prime}}\big{)}(z). \tag{18}\] Above \(*_{H}\) denotes convolution along \(H\). 
Now we can use lower dimensional decoupling with \(\delta^{b^{\prime}}\) in place of \(\delta\) in Corollary 3.4: \[\leq C^{\prime}_{\ell}(C,\underline{D},\delta)\int_{z^{\prime}}\phi_{I}(z-z^{ \prime})\big{(}\sum_{J\in\mathcal{P}(I,\delta^{b^{\prime}})}\|f_{J}\|_{L^{p_{ \ell}}(\phi_{B_{H}}(z^{\prime},\delta-b^{\prime}\ell))}^{p_{\ell}/2}.\] Taking the \(p_{\ell}\)th root we find for (16): \[\begin{split}(\ref{eq:16})^{1/p_{\ell}}&\leq C^{ \prime}_{\ell}(C,\underline{D},\delta^{b^{\prime}})\big{(}\int_{z,z^{\prime}\in \mathbb{R}^{3}}\big{(}|f_{I^{\prime}}|^{p_{k}-p_{\ell}}*\phi_{I^{\prime}}\big{)} (z)\\ &\qquad\times\phi_{I}(z-z^{\prime})\big{(}\sum_{J\in\mathcal{P}(I,\delta^{b^{\prime}})}\|f_{J}\|_{L^{p_{\ell}}(z^{\prime}+H,\phi_{B_{H}}(z^{ \prime},\delta-b^{\prime}\ell))}^{2}\big{)}^{p_{\ell}/2}\big{)}^{1/p_{\ell}}\\ &\leq C^{\prime}_{\ell}(C,\underline{D},\delta^{b^{\prime}}) \big{(}\sum_{J\in\mathcal{P}(I,\delta^{b^{\prime}})}\big{(}\int_{z,z^{\prime} \in\mathbb{R}^{3}}\big{(}|f_{I^{\prime}}|^{p_{k}-p_{\ell}}*\phi_{I^{\prime}} \big{)}(z)\\ &\qquad\times\phi_{I}(z-z^{\prime})\|f_{J}\|_{L^{p_{\ell}}(\phi_{ B_{H}(z^{\prime},\delta-b^{\prime}\ell_{\ell})})}^{p_{\ell}}\big{)}^{2/p_{\ell}} \big{)}^{\frac{1}{2}}.\end{split} \tag{19}\] The ultimate estimate follows from Minkowski's inequality since \(2\leq p_{\ell}\). The double integral inside the brackets can be written as \[\begin{split}&\int_{\mathbb{R}^{k}}\big{(}|f_{I^{\prime}}|^{p_{k} -p_{\ell}}*\phi_{I^{\prime}}\big{)}\big{(}\phi_{I}*|f_{J}|^{p_{\ell}}*_{H}\phi_{ B_{H}(0,\delta-b^{\prime}\ell_{\ell})}\big{)}\\ &=\int_{\mathbb{R}^{k}}(|f_{I^{\prime}}|^{p_{k}-p_{\ell}}*\phi_{I}*_ {H}\phi_{B_{H}(0,\delta-b^{\prime}\ell_{\ell})})(|f_{J}|^{p_{\ell}}*\phi_{I})\\ &\lesssim\int_{\mathbb{R}^{k}}\big{(}|f_{I^{\prime}}|^{p_{k}-p_{ \ell}}*\phi_{I^{\prime}}\big{)}\big{(}|f_{J}|^{p_{\ell}}*\phi_{I}\big{)},\end{split} \tag{20}\] which follows again by \(B_{H}(0,\delta-b^{\prime}\ell)\subseteq C\mathcal{U}_{I^{\prime},\gamma}^{o}\). Using the uncertainty principle and \(\mathcal{U}_{I}^{o}\subseteq C\mathcal{U}_{J}^{o}\), we find \[|f_{J}|^{p_{\ell}}*\phi_{I}\lesssim|f_{J}|^{p_{\ell}}*\phi_{J}*\phi_{I}\lesssim|f_{ J}|^{p_{\ell}}*\phi_{J}. \tag{21}\] We merge the implicit constants in (17), (18), (20), and (21) with \(C^{\prime}_{\ell}(C,\underline{D})\) in (19) to complete the proof. ## 4. Proof of Theorem 1.1 ### Asymmetric decoupling constant In the following we define asymmetric decoupling constants, which effectively allow us to lower the scale by using lower-dimensional decoupling stated in the previous section. We consider two intervals \(I\), \(I^{\prime}\) of size \(|I|=\delta^{a}\) and \(|I^{\prime}|=\delta^{b}\), \(a,b\in[0,1]\), which are separated at unit distance. 
Following [6], we define bilinear decoupling constants as smallest constants, which satisfy the following: \[\int_{\mathbb{R}^{3}}(|f_{I}|^{6}*\phi_{I})(|f_{I^{\prime}}|^{6}*\phi_{I^{ \prime}})\leq M_{6,a,b}^{12}(\delta)\big{(}\sum_{J\in\mathcal{P}(I,\delta)} \|f_{J}\|_{L^{12}}^{2}\big{)}^{3}\big{(}\sum_{J^{\prime}\in\mathcal{P}(I^{ \prime},\delta)}\|f_{J^{\prime}}\|_{L^{12}}^{2}\big{)}^{3}.\] Secondly, we define \[\int_{\mathbb{R}^{3}}(|f_{I}|^{2}*\phi_{I})(|f_{I^{\prime}}|^{10}*\phi_{I^{ \prime}})\leq M_{2,a,b}^{12}(\delta)\big{(}\sum_{J\in\mathcal{P}(I,\delta)} \|f_{J}\|_{L^{12}}^{2}\big{)}\big{(}\sum_{J^{\prime}\in\mathcal{P}(I^{\prime},\delta)}\|f_{J^{\prime}}\|_{L^{12}}^{2}\big{)}^{5}.\] We have the following as consequence of (14) and (15): **Lemma 4.1** (Lower dimensional decoupling).: _Let \(a,b\in[0,1]\) such that \(0\leq a\leq 3b\). Then_ \[M_{2,a,b}(\delta)\leq C_{2}M_{2,3b,b}(\delta). \tag{22}\] _If \(0\leq a\leq b\), then the following estimate holds for some \(c\in\mathbb{N}\):_ \[M_{6,a,b}(\delta)\leq C_{3}(\log(\delta^{-b}))^{c}M_{6,b,b}(\delta). \tag{23}\] The following is straight-forward from Holder's inequality and parabolic rescaling (cf. [6, Lemma 4.1]): **Lemma 4.2** (Holder's inequality I).: _Let \(a,b\in[0,1]\). Then_ \[M_{2,a,b}(\delta)\leq C_{4}M_{6,a,b}(\delta)^{1/3}D(\delta/\delta^{b})^{2/3}. \tag{24}\] Proof.: We apply Holder's inequality to find: \[|f_{I}|^{2}*\phi_{I} \leq C_{4}(|f_{I}|^{6}*\phi_{I})^{\frac{1}{3}},\] \[|f_{I^{\prime}}|^{10}*\phi_{I^{\prime}} \leq(|f_{I^{\prime}}|^{6}*\phi_{I^{\prime}})^{1/3}(|f_{I^{\prime }}|^{12}*\phi_{I^{\prime}})^{2/3}.\] The constant in the first estimate does not depend on the scale due to \(L^{1}\)-normalization of \(\phi_{I}\): \[\int|f_{I}|^{2}(y)\phi_{I}(x-y)dy =\int|f_{I}|^{2}(y)\phi_{I}(x-y)^{\frac{1}{3}}\phi_{I}(x-y)^{ \frac{2}{3}}dy\] \[\leq\big{(}\int|f_{I}|^{6}\phi_{I}(x-y)dy\big{)}^{\frac{1}{3}} \big{(}\int\phi_{I}(x-y)dy\big{)}^{\frac{2}{3}}\] \[=C_{4}(|f_{I}|^{6}*\phi_{I})^{\frac{1}{3}}.\] By the above and another application of Holder's inequality we find: \[\int_{\mathbb{R}^{3}}(|f_{I}|^{2}*\phi_{I})(|f_{I^{\prime}}|^{10}* \phi_{I^{\prime}}) \leq C_{4}\int_{\mathbb{R}^{3}}(|f_{I}|^{6}*\phi_{I})^{1/3}(|f_{ I^{\prime}}|^{6}*\phi_{I^{\prime}})^{1/3}(|f_{I^{\prime}}|^{12}*\phi_{I^{\prime}})^{2/3}\] \[\leq C_{4}\big{(}\int_{\mathbb{R}^{3}}(|f_{I}|^{6}*\phi_{I})(|f_{ I^{\prime}}|^{6}*\phi_{I^{\prime}})\big{)}^{1/3}\big{(}\int_{\mathbb{R}^{3}}(|f_{ I^{\prime}}|^{12}*\phi_{I^{\prime}})\big{)}^{2/3}\] From this estimate and parabolic rescaling, (24) is immediate. Another application of Holder's inequality gives the following (again [6, Lemma 4.2]): **Lemma 4.3** (Holder's inequality II).: _Let \(a,b\in[0,1]\). Then_ \[M_{6,a,b}(\delta)\leq M_{2,a,b}(\delta)^{1/2}M_{2,b,a}(\delta)^{1/2}. \tag{25}\] Proof.: By two applications of the Cauchy-Schwarz inequality we find \[\int_{\mathbb{R}^{3}}(|f_{I}|^{6}*\phi_{I})(|f_{I^{\prime}}|^{6}* \phi_{I^{\prime}})\] \[\leq\int_{\mathbb{R}^{3}}(|f_{I}|^{2}*\phi_{I})^{1/2}(|f_{I^{ \prime}}|^{10}*\phi_{I^{\prime}})^{1/2}\cdot(|f_{I}|^{10}*\phi_{I})^{1/2}(|f_{ I^{\prime}}|^{2}*\phi_{I^{\prime}})^{1/2}\] \[\leq\big{(}\int_{\mathbb{R}^{3}}(|f_{I}|^{2}*\phi_{I})(|f_{I^{ \prime}}|^{10}*\phi_{I^{\prime}})\big{)}^{1/2}\big{(}\int_{\mathbb{R}^{3}}(|f _{I}|^{10}*\phi_{I})(|f_{I^{\prime}}|^{2}*\phi_{I^{\prime}})\big{)}^{1/2}.\] From this estimate (25) is immediate. 
### The decoupling iteration By Lemma 4.1, 4.2, and 4.3, we have the following key iteration step: **Lemma 4.4** (Iteration step for the moment curve).: _Let \(a,b\in[0,1]\) and \(0<a\leq 3b\). We find_ \[M_{2,a,b}(\delta)\leq C_{5}M_{2,3b,3b}^{1/3}(\delta)\log(\delta^{-3b})^{c}D( \delta/\delta^{b})^{2/3}.\] Proof.: By successive applications of the aforementioned lemmas, we find \[M_{2,a,b}(\delta) \leq C_{2}M_{2,3b,b}(\delta)\leq C_{2}C_{4}M_{6,3b,b}(\delta)^{1/3 }D(\delta/\delta^{b})^{2/3}\] \[\leq C_{2}C_{3}C_{4}M_{6,3b,3b}^{1/3}(\delta)\log(\delta^{-3b})^{c}D( \delta/\delta^{b})^{2/3}\] \[\leq\underbrace{C_{2}C_{3}C_{4}}_{C_{5}}M_{2,3b,3b}^{1/3}(\delta )\log(\delta^{-3b})^{c}D(\delta/\delta^{b})^{2/3}.\] To make the iteration effective, we initially divide the unit size intervals \(I\), \(I^{\prime}\) considered in \(B(\delta)\) into \(\nu^{-1}\) smaller intervals, and then use the previously established iteration. Let \(\nu=\delta^{b}\). We choose \(\nu=\delta^{1/3^{N}}\) such that in \(N\) iterations of Lemma 4.4 we reach the scale \(\delta\), where decoupling becomes trivial. We use the estimate \[M_{6,0,0}(\delta)\leq C_{6}\nu^{-\frac{1}{2}}M_{6,b,b}(\delta) \tag{26}\] due to the Cauchy-Schwarz inequality: \[\big{(}\int\big{(}|\sum_{J\in\mathcal{P}(I,\delta^{b})}f_{J}\big{|} ^{6}*\phi_{I}\big{)}\big{(}|\sum_{J^{\prime}\in\mathcal{P}(I^{\prime},\delta^ {b})}f_{J^{\prime}}\big{|}^{6}*\phi_{I^{\prime}}\big{)}\big{)}^{\frac{1}{b}}\] \[\leq\sum_{\begin{subarray}{c}J\in\mathcal{P}(I,\delta^{b})\\ J^{\prime}\in\mathcal{P}(I^{\prime},\delta^{b})\end{subarray}}\big{(}\int \big{(}|f_{J}|^{6}*\phi_{I}\big{)}\big{(}|f_{J^{\prime}}|^{6}*\phi_{I^{\prime} }\big{)}\big{)}^{\frac{1}{b}}\] \[\leq M_{6,b,b}^{2}(\delta)\sum_{\begin{subarray}{c}J\in\mathcal{ P}(I,\delta^{b}),\\ J^{\prime}\in\mathcal{P}(I^{\prime},\delta^{b})\end{subarray}}\big{(}\sum_{K \in\mathcal{P}(J,\delta)}\|f_{K}\|_{L^{12}(\mathbb{R}^{3})}^{2}\big{)}^{\frac{1 }{2}}\big{(}\sum_{K^{\prime}\in\mathcal{P}(J^{\prime},\delta)}\|f_{K^{\prime }}\|_{L^{12}(\mathbb{R}^{3})}^{2}\big{)}^{\frac{1}{2}}\] \[\leq M_{6,b,b}^{2}(\delta)C_{6}^{2}\nu^{-1}\big{(}\sum_{K\in \mathcal{P}(I,\delta)}\|f_{K}\|_{L^{12}(\mathbb{R}^{3})}^{2}\big{)}^{\frac{1 }{2}}\big{(}\sum_{K^{\prime}\in\mathcal{P}(I^{\prime},\delta)}\|f_{K^{\prime }}\|_{L^{12}(\mathbb{R}^{3})}^{2}\big{)}^{\frac{1}{2}}.\] By the decoupling result of Bourgain-Demeter-Guth [3] we have \[\mathcal{D}_{3}(\delta)\leq C_{e}\delta^{-\varepsilon}, \tag{27}\] which gives: **Lemma 4.5**.: _Let \(N\in\mathbb{N}\). Suppose that \(\delta\in 2^{\mathbb{Z}}\) and \(\delta^{-\frac{1}{3^{N}}}\in\mathbb{N}\). Then the following estimate holds:_ \[D(\delta)\leq C_{7}\delta^{\frac{\varepsilon}{3N}\left(1+\frac{2N}{3}-\frac{1} {2\varepsilon}\right)}\log(\delta^{-1})^{3c}C_{\varepsilon}^{1-\frac{1}{3^{N }}}\delta^{-\varepsilon}. \tag{28}\] Proof.: We find iterating Lemma 4.4\(N\) times: \[M_{2,b,b}(\delta)\leq C_{5}^{2}M_{6,3^{Nb},3^{Nb}b}^{1/3^{N}}\log(\delta^{-1}) ^{2c}\prod_{j=0}^{N-1}D(\delta/\delta^{3^{j}b})^{(2/3)\cdot 1/3^{j}}. 
\tag{29}\] From the bilinear reduction, we have (here we use \(\delta\in 2^{\mathbb{Z}}\)) \[\mathcal{D}_{3}(\delta)\leq C_{1}\log(\delta^{-1})M_{6,0,0}(\delta).\] We reduce the scale in \(M_{6,0,0}(\delta)\) to \(\nu\) by (26) such that \[\mathcal{D}_{3}(\delta)\leq C_{1}C_{6}\nu^{-\frac{1}{2}}\log(\delta^{-1})M_{6,b,b}(\delta).\] Now we plug in (29) to find the following recursive estimate for the linear decoupling constant: \[\mathcal{D}_{3}(\delta)\leq\underbrace{C_{1}C_{5}^{2}C_{6}}_{C_{7}}\delta^{- \frac{b}{2}}\log(\delta^{-1})^{3c}\prod_{j=0}^{N-1}D(\delta/\nu^{3^{j}})^{\frac {2}{3}\cdot\frac{1}{3^{j}}}.\] By (27), we find \[D(\delta) \leq C_{7}\delta^{-\frac{b}{2}}\log(\delta^{-1})^{3c}\prod_{j=0} ^{N-1}(C_{\varepsilon}\delta^{-\varepsilon(1-3^{j-N})})^{\frac{2}{3}\cdot \frac{1}{3^{j}}}\] \[=C_{7}\delta^{-\frac{b}{2}}\log(\delta^{-1})^{3c}C_{\varepsilon}^ {1-\frac{1}{3^{N}}}\delta^{-\varepsilon(1-\frac{1}{3^{N}})}\delta^{\varepsilon \frac{2N}{3\cdot 3^{N}}}\] \[=C\gamma\delta^{\frac{\varepsilon}{3^{N}}(1+\frac{2N}{3}-\frac{1} {2\varepsilon})}\log(\delta^{-1})^{3c}C_{\varepsilon}^{1-\frac{1}{3^{N}}} \delta^{-\varepsilon}.\] In the next step, we choose \(N=N(\varepsilon)\), which simplifies the above expression for \(\delta\in 2^{\mathbb{Z}}\) and \(\delta^{-\frac{1}{3^{N}}}\in\mathbb{N}\). **Lemma 4.6**.: _Let \(0<\varepsilon<\varepsilon_{0}=\varepsilon_{0}(c)\), and \(N\in\mathbb{N}\) such that_ \[1+\frac{2N}{3}-\frac{1}{2\varepsilon}\in[\frac{2}{3},2]. \tag{30}\] _For \(\delta\in(\delta_{n})_{n=n_{0}}^{\infty}\) with \(\delta_{n}=2^{-n3^{10N}}\), \(n_{0}=n_{0}(c)\), we have the following:_ \[\mathcal{D}_{3}(\delta)\leq C_{7}C_{\varepsilon}^{1-\frac{1}{3^{N}}}\delta^{- \varepsilon}.\] Proof.: With the assumptions of Lemma 4.5 satisfied, we find by (29) \[\mathcal{D}_{3}(\delta)\leq C_{7}\delta^{\frac{\varepsilon}{3^{N}}\left(1+ \frac{2N}{3}-\frac{1}{2\varepsilon}\right)}\log(\delta^{-1})^{3c}C_{ \varepsilon}^{1-\frac{1}{3^{N}}}\delta^{-\varepsilon}.\] By (30) this simplifies to \[\mathcal{D}_{3}(\delta)\leq C_{7}\log(\delta^{-1})^{3c}\delta^{\frac{2 \varepsilon}{3^{N}\cdot 3}}C_{\varepsilon}^{1-\frac{1}{3^{N}}}\delta^{-\varepsilon}.\] Since \(\delta=2^{-n\cdot 3^{10N}}\), we show that for \(n\geq n_{0}(c)\) and \(0<\varepsilon<\varepsilon_{0}\) \[\log(\delta^{-1})^{3c}\delta^{\frac{2\varepsilon}{3^{N}\cdot 3}}\leq 1.\] First we note that \[\delta^{\frac{2\varepsilon}{3^{N}\cdot 3^{N}}}\leq\delta^{\frac{1}{3^{2N}}} \leq 2^{-n\cdot 3^{8N}}.\] Here we use \(\varepsilon\sim\frac{1}{N}\), and for \(0<\varepsilon<\varepsilon_{0}\), \(N\) becomes large enough to argue like in the above estimate. Moreover, \[\log(\delta^{-1})^{3c}\leq n^{3c}3^{30Nc}\log(2)^{3c}\leq n^{3c}3^{30Nc}.\] First, we see that \[3^{30Nc}\leq 2^{\log(3)30Nc}\leq 2^{\frac{n}{2}3^{8N}}\] by choosing \(0<\varepsilon<\varepsilon_{0}(c)\) small enough such that \(30Nc\log(3)\leq 3^{8N}/2\) (since \(N\) becomes large enough for \(N\sim 1/\varepsilon\) such that the inequality holds). Secondly, we can choose \(n\geq n_{0}(c)\) large enough such that \[3\log_{2}(n)c\leq\frac{n}{2}\Rightarrow 2^{\log_{2}(n)3c}\leq 2^{\frac{n3^{8N}}{2}}.\] Then we arrive at the claim \[\mathcal{D}_{3}(\delta)\leq C_{7}C_{\varepsilon}^{1-\frac{1}{3^{N}}}\delta^{- \varepsilon}.\] This absorbs the additional \(\log(\delta^{-1})^{3c}\)-factor, which is absent in Li's proof of improved decoupling for the parabola [9]. We give the concluding arguments from [9] for self-containedness. 
In the following lemma we use submultiplicativity to extend this estimate to all \(\delta\in\mathbb{N}^{-1}\): **Lemma 4.7**.: _Let \(0<\varepsilon<\varepsilon_{0}=\varepsilon_{0}(c)\) and \(n_{0}=n_{0}(c)\) such that Lemma 4.6 is valid. Then there is some \(a>0\) such that we find for all \(\delta\in\mathbb{N}^{-1}\)_ \[\mathcal{D}_{3}(\delta)\leq C_{8}2^{n_{0}\cdot 3^{\frac{n}{2}}}C_{\varepsilon} ^{1-a/\varepsilon}\delta^{-\varepsilon}. \tag{31}\] Proof.: Let \(N\) be like in (30) and \(\delta\in(\delta_{n})_{n=n_{0}}^{\infty}=(2^{-n\cdot 3^{10N}})_{n=n_{0}}^{\infty}\). If \(\delta\in(\delta_{n_{0}},1]\in\mathbb{N}^{-1}\), we use the trivial estimate \[\mathcal{D}_{3}(\delta)\leq\delta^{-1/2}\leq 2^{\frac{n_{0}}{2}\cdot 3^{10N}}.\] If \(\delta\in(\delta_{n+1},\delta_{n}]\) for \(n\geq n_{0}\), then submultiplicativity and Lemma 4.6 imply \[\mathcal{D}_{3}(\delta) \leq\mathcal{D}_{3}(\delta_{n+1})\leq\mathcal{D}_{3}(\delta_{n}) \mathcal{D}_{3}(\delta_{n+1}/32\delta_{n})\leq(C_{7}C_{\varepsilon}^{1-1/3^{ N}}\delta_{n}^{-\varepsilon})(32(\delta_{n}/\delta_{n+1}))^{1/2}\] \[=32^{1/2}C_{7}C_{0}^{1/2}2^{\frac{1}{2}\cdot 3^{10N}}C_{ \varepsilon}^{1-1/3^{N}}\delta^{-\varepsilon}.\] Taking the two estimates together gives \[D(\delta)\leq C_{8}2^{n_{0}\cdot 3^{10N}}C_{\varepsilon}^{1-1/3^{N}}\delta^{- \varepsilon}.\] This estimate holds for all \(\delta\in\mathbb{N}^{-1}\) with \(N\) given by (30). Now we simplify by monotonicity in \(N\). By the choice of \(N\), we have \(3^{N}\leq 3^{a/\varepsilon}\) for some \(a\) and \(\varepsilon<\varepsilon_{0}(c)\). We obtain \[D(\delta)\leq C_{8}2^{n_{0}\cdot 3^{10a/\varepsilon}}C_{\varepsilon}^{1-\frac{1}{ 3^{a/\varepsilon}}}\delta^{-\varepsilon}.\] We bootstrap this bound to find the following: **Lemma 4.8**.: _There is \(\varepsilon_{0}=\varepsilon_{0}(C_{8},c)\) such that for all \(0<\varepsilon<\varepsilon_{0}(c)\) and \(\delta\in\mathbb{N}^{-1}\), we have_ \[\mathcal{D}_{3}(\delta)\leq 2^{3^{100a/\varepsilon}}\delta^{-\varepsilon}.\] Proof.: Let \(P(C,\lambda)\) be the statement that \(D(\delta)\leq C\delta^{-\lambda}\) for all \(\delta\in\mathbb{N}^{-1}\). Lemma 4.7 implies that for \(\varepsilon\in(0,\varepsilon_{0}(c))\) and \(n_{0}=n_{0}(c)\): \[P(C_{\varepsilon},\varepsilon)\Rightarrow P(C_{8}\cdot 2^{n_{0}\cdot 3^{10a/ \varepsilon}}C_{\varepsilon}^{1-1/3^{a/\varepsilon}},\varepsilon).\] After \(M\) iterations of the above implication, we obtain \[P(C_{\varepsilon},\varepsilon)\Rightarrow P((C_{8}\cdot 2^{n_{0}\cdot 3^{10a/ \varepsilon}})^{\sum_{j=0}^{M-1}(1-1/3^{a/\varepsilon})^{j}}C_{\varepsilon}^{(1 -1/3^{a/\varepsilon})^{M}},\varepsilon).\] We can take limits \[C_{\varepsilon}^{(1-1/3^{a/\varepsilon})^{M}}\to_{M\to\infty}1,\quad\sum_{j=0}^{M- 1}(1-1/3^{a/\varepsilon})^{j}\to_{M\to\infty}3^{a/\varepsilon}.\] Hence, letting \(M\to\infty\), we obtain \[P(C_{8}^{3^{a/\varepsilon}}\cdot 2^{n_{0}\cdot 3^{11a/\varepsilon}},\varepsilon).\] By choosing \(0<\varepsilon<\varepsilon_{0}(C_{8},n_{0}(c))\) we find for all \(\delta\in\mathbb{N}^{-1}\) \[D(\delta)\leq C_{8}^{3^{a/\varepsilon}}2^{n_{0}\cdot 3^{11a/\varepsilon}} \delta^{-\varepsilon}\leq 2^{3^{100a/\varepsilon}}\delta^{-\varepsilon}.\] This finishes the proof. In the following we fix \(\varepsilon_{0}=\varepsilon_{0}(C_{8},c)\) and \(a\) such that Lemma 4.8 is valid. ### Proof of Theorem 1.1 We can write for \(0<\varepsilon<\varepsilon_{0}\) \[D(\delta)\leq A^{A^{1/\varepsilon}}\delta^{-\varepsilon} \tag{32}\] for some \(A=A(a)\geq e\). 
It suffices to prove (3) with exponentials and logarithms based on \(A\). Proof of Theorem 1.1.: We optimize (32) by choosing \(\varepsilon=\varepsilon(\delta)\). Let \[B=\log_{A}(1/\delta)>1,\quad\eta=\log_{A}(B)-\log_{A}\log_{A}(B),\quad\varepsilon=1/\eta. \tag{33}\] This leads to the first constraint \[\delta<A^{-1}. \tag{34}\] The constraint on \(\varepsilon_{0}\) translates to \[\varepsilon=\frac{1}{\eta}\leq\varepsilon_{0}\Rightarrow\frac{1}{\varepsilon_{0}}\leq\log_{A}(B/\log_{A}(B))\leq\log_{A}(B)=\log_{A}(\log_{A}(1/\delta)).\] This gives the condition on \(\delta\): \[\delta<(A^{A^{1/\varepsilon_{0}}})^{-1}=\delta_{0}. \tag{35}\] It is straightforward by (33) that \[A^{1/\varepsilon}\leq\varepsilon\log_{A}(1/\delta).\] For this reason we obtain \[A^{A^{1/\varepsilon}}\delta^{-\varepsilon}\leq\exp_{A}(2\varepsilon\log_{A}(1/\delta))\leq\exp_{A}\Big(\frac{4\log_{A}(1/\delta)}{\log_{A}\log_{A}(1/\delta)}\Big).\] In the above display we used that \[\varepsilon=\frac{1}{\log_{A}B-\log_{A}\log_{A}B}\leq\frac{2}{\log_{A}B},\] which is true for \(\log_{A}B\leq B^{1/2}\). This is true because \(A\geq e\) and \(B\geq 1\). Finally, we find with \(a=1/\log(A)\leq 1\) \[\exp_{A}\Big(\frac{4\log_{A}(1/\delta)}{\log_{A}\log_{A}(1/\delta)}\Big)=\exp_{A}\Big(\frac{4\log(1/\delta)}{\log(a\log(1/\delta))}\Big)=\exp\Big(\frac{4\log(A)\log(1/\delta)}{\log(a\log(1/\delta))}\Big)\] \[\leq\exp_{A}\Big(\frac{8\log(1/\delta)}{\log(\log(1/\delta))}\Big).\] In the estimate we used \(\log(a\log(1/\delta))\geq\log(\log(1/\delta))/2\), which amounts to \(\delta\leq\exp(-\log(A)^{2})\). This is true by (35), and the proof is complete. ## Acknowledgement Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 258734477 - SFB 1173.
2303.16104
Hallucinations in Large Multilingual Translation Models
Large-scale multilingual machine translation systems have demonstrated remarkable ability to translate directly between numerous languages, making them increasingly appealing for real-world applications. However, when deployed in the wild, these models may generate hallucinated translations which have the potential to severely undermine user trust and raise safety concerns. Existing research on hallucinations has primarily focused on small bilingual models trained on high-resource languages, leaving a gap in our understanding of hallucinations in massively multilingual models across diverse translation scenarios. In this work, we fill this gap by conducting a comprehensive analysis on both the M2M family of conventional neural machine translation models and ChatGPT, a general-purpose large language model~(LLM) that can be prompted for translation. Our investigation covers a broad spectrum of conditions, spanning over 100 translation directions across various resource levels and going beyond English-centric language pairs. We provide key insights regarding the prevalence, properties, and mitigation of hallucinations, paving the way towards more responsible and reliable machine translation systems.
Nuno M. Guerreiro, Duarte Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, André F. T. Martins
2023-03-28T16:17:59Z
http://arxiv.org/abs/2303.16104v1
# Hallucinations in Large Multilingual Translation Models ###### Abstract Large-scale multilingual machine translation systems have demonstrated remarkable ability to translate directly between numerous languages, making them increasingly appealing for real-world applications. However, when deployed _in the wild_, these models may generate hallucinated translations which have the potential to severely undermine user trust and raise safety concerns. Existing research on hallucinations has primarily focused on small bilingual models trained on high-resource languages, leaving a gap in our understanding of hallucinations in massively multilingual models across diverse translation scenarios. In this work, we fill this gap by conducting a comprehensive analysis on both the M2M family of conventional neural machine translation models and ChatGPT, a general-purpose large language model (LLM) that can be prompted for translation. Our investigation covers a broad spectrum of conditions, spanning over 100 translation directions across various resource levels and going beyond English-centric language pairs. We provide key insights regarding the prevalence, properties, and mitigation of hallucinations, paving the way towards more responsible and reliable machine translation systems. ## 1 Introduction Recent advancements in large-scale multilingual machine translation have brought us closer to realizing a universal translation system: a single model capable of handling numerous languages and translation directions [1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. Concurrently, general-purpose large language models (LLMs) have exhibited a remarkable ability to generalize to new tasks, including translation, where they are becoming increasingly stronger [1, 1, 12]. Compared to traditional bilingual models, these systems can offer significant performance improvements and greatly simplify engineering efforts, as a single model can be used for all language pairs [1]. As a result, they are an increasingly attractive choice for real-world applications. However, when deployed _in the wild_, these models may still generate _hallucinations_: highly pathological translations that can severely damage user trust and pose serious safety concerns [1, 12]. The problem of hallucinations has long been recognized by researchers [11, 13], and recent studies have contributed towards better understanding, detection and mitigation of these pathological translations. However, these studies have been conducted on _small bilingual_ models (<100M parameters) trained on a _single English-centric high-resource_ language pair [1, 1, 13, 14, 15, 16, 17]. This leaves a knowledge gap regarding the prevalence and properties of hallucinations in large-scale translation models across different translation directions, domains and data conditions. In this work, we aim to fill this gap by investigating hallucinations on two different classes of models. The first and main class in our analysis is the _de facto_ standard approach of massively multilingual supervised models: we use the M2M-100 family of multilingual NMT models [14], which includes the largest open-source multilingual NMT model with 12B parameters. The second class is the novel and promising approach of leveraging generative LLMs for translation. Contrary to conventional NMT models, these models are trained on massive amounts of monolingual data in many languages, with a strong bias towards English, and do not require parallel data. 
In our analysis, we use ChatGPT1, a LLM that has been shown to achieve surprisingly high translation quality over a wide range of language pairs Hendy et al. (2023); Peng et al. (2023). Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt); this system has not been documented, so details of training data and training regime are unknown. We organize our study by analyzing the two prevalent types of hallucinations in NMT considered in the literature: hallucinations under perturbation and natural hallucinations Lee et al. (2018); Raunak et al. (2021); Guerreiro et al. (2022). Firstly, we study hallucinations under perturbation and evaluate whether these translation systems are robust to source-side artificial perturbations. While previous studies have found that these perturbations (e.g., spelling errors and capitalization mistakes) can reliably induce hallucinations Lee et al. (2018); Raunak et al. (2021), it is not clear whether those conclusions hold for large multilingual models. Secondly, we comprehensively investigate natural hallucinations, and evaluate their prevalence and properties in the outputs of the massively multilingual M2M models on a vast range of conditions, spanning from English-centric to non-English-centric language pairs, translation directions with little supervision, and specialized sensitive domains where hallucinations have devastating impact on user trust (e.g., medical data). Finally, we study a hybrid setup where other translation systems can be requested as fallback systems when an original system hallucinates, with the aim of mitigating hallucinations and improving overall translation quality. Our analysis reveals several key insights on the prevalence and properties of hallucinations, including: * multilingual models predominantly struggle with hallucinations in low-resource language pairs and translating out of English, with hallucination rates well above 10% for some translation directions; * hallucinations in low-resource language pairs can manifest toxic patterns that can be traced back to the training data, posing serious safety issues; * smaller distilled models can mitigate hallucinations by incorporating modeling choices that discourage them, such as leveraging less potent shallow decoders that rely more on the encoder representations, and reducing bias towards higher-resource language pairs through uniform sampling of translation directions during distillation; * ChatGPT produces hallucinations that are qualitatively different from those of conventional translation models, mostly consisting of off-target translations, overgeneration, and even failed attempts to translate; * hallucinations are _sticky_ and hard to reverse with models that share the same training data and architecture, whereas employing more diverse models as fallback systems can substantially improve overall translation quality and eliminate pathologies like oscillatory hallucinations. We release all our code and make available over a million translations in more than 100 translation directions to spur future research.2 Footnote 2: All resources will be made available in [https://github.com/deep-spin/lmt_hallucinations](https://github.com/deep-spin/lmt_hallucinations). ## 2 Background ### Large Multilingual Language Models Massively multilingual neural machine translation has recently emerged as a powerful paradigm for building machine translation systems that can handle numerous languages Akhbardeh et al. (2021); Wenzek et al. (2021); NLLB Team et al. 
(2022); Siddhant et al. (2022); Bapna et al. (2022); Chowdhery et al. (2022). These systems aim to translate directly with a single model for multiple language pairs without relying on any pivot language. The dominant strategy for achieving these systems is to train large multilingual models on vast amounts of parallel data often obtained through a combination of data mining and data augmentation strategies, such as backtranslation Sennrich et al. (2016); Edunov et al. (2018). Compared to classic bilingual models, the multilinguality of these systems results in significant improvements, particularly for low-resource and non-English-centric language pairs, as these benefit the most from multilingual transfer Arivazhagan et al. (2019); Fan et al. (2020). As an alternative, a novel and promising strategy is to leverage the emergent capabilities of large language models (LLMs). These systems are pretrained on massive nonparallel corpora and can be prompted to solve arbitrary tasks (Radford et al., 2019; Brown et al., 2020). In fact, this approach has led to impressive results across a wide variety of NLP tasks (Chowdhery et al., 2022; Zhang et al., 2022). Translation is no exception: LLMs can produce fluent and adequate translations, especially for high-resource English-centric language pairs, that are competitive with those of dedicated supervised translation models (Vilar et al., 2022; Peng et al., 2023; Garcia et al., 2023; Hendy et al., 2023; Bawden and Yvon, 2023). ### Hallucinations in Machine Translation Hallucinations lie at the extreme end of translation pathologies and present a critical challenge in machine translation, as they can severely compromise the safety and reliability of real-world applications. Importantly, hallucinations in machine translation are unlike hallucinations in other natural language generation tasks (e.g., abstractive summarization and generative question answering) (Ji et al., 2022). While, for these other tasks, models often produce hallucinated outputs (Falke et al., 2019; Cao et al., 2022; Manakul et al., 2023), hallucinations in machine translation, possibly attributed to the more closed-ended nature of the task, are substantially rarer and hard to observe in clean, unperturbed data. This has led several previous studies to examine their properties by creating artificial scenarios where hallucinations are more likely to occur (e.g., introducing perturbations in the source text (Lee et al., 2018) or noise in the training data (Raunak et al., 2021)). To distinguish these two scenarios, hallucinations in machine translation are categorized into two types (Raunak et al., 2021): _hallucinations under perturbation_ and _natural hallucinations_. Hallucinations under perturbation.A model generates a hallucination under perturbation when it produces a significantly lower quality translation for a slightly perturbed input compared to the original input (Lee et al., 2018). Hallucinations under perturbation explicitly reveal the lack of robustness of translation systems to perturbations in the source text (e.g., misspellings or capitalization errors) by finding translations that undergo significant negative shifts in quality due to these changes. Natural hallucinations.Contrary to hallucinations under perturbations, these translations occur naturally without any perturbation. As a result, natural hallucinations are rare and challenging to study. In this work, we follow the taxonomy introduced in Raunak et al. (2021) and later extended in Guerreiro et al. (2022). 
Under this taxonomy, hallucinations are translations that contain content that is detached from the source text. To distinguish between different types of hallucinations, they can be categorized as _largely fluent detached hallucinations_ or _oscillatory hallucinations_. The former refers to translations that bear minimal or no relation at all to the source, while the latter refers to inadequate translations that contain erroneous repetitions of words and phrases. ## 3 Experimental Suite In this section, we provide an overview of the models, datasets and evaluation metrics used throughout our study. ### Models We focus on two classes of models: (i) conventional supervised multilingual NMT models, and (ii) LLMs that can be prompted for translation. For the supervised multilingual NMT models, we use the transformer-based (Vaswani et al., 2017) M2M-100 family of models (Fan et al., 2020), which consists of three variants with different sizes: M2M (S) with 418M parameters, M2M (M) with 1.2B parameters, and M2M (L) -- the largest available open-source multilingual NMT model -- with 12B parameters. These models were trained on a many-to-many parallel dataset comprising 7.5B sentences crawled from the web, and support 100 languages and thousands of translation directions. We also experiment with SMaLL100(Mohammadshahi et al., 2022), a shallow multilingual NMT model with 330M parameters obtained via distillation of M2M (L). Unlike the M2M models, SMaLL100 was trained on a much smaller training set with uniform sampling across all language pairs to reduce the bias towards high-resource languages: only 100k parallel sentences from the original M2M training data were used for each translation direction, for a total of 456M parallel sentences. For decoding, we run beam search with a beam size of 4. All experiments were run on fairseq (Ott et al., 2019). As for the alternative strategy using LLMs, we use ChatGPT (gpt-3.5-turbo)3, a vari ant of GPT3.5 -- a GPT-family (Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020) large-scale model with 175B parameters -- that has been fine-tuned with human feedback in the style of InstructGPT (Ouyang et al., 2022). ChatGPT has been shown to achieve impressive results for multiple multilingual NLP tasks, including translation (Kocmi and Federmann, 2023; Lu et al., 2023; Fu et al., 2023; Hendy et al., 2023; Peng et al., 2023). To generate translations, we use the zero-shot prompt template used in Hendy et al. (2023) and keep the generation parameters as the default API parameters.4 Footnote 4: We encountered several API/server errors when prompting ChatGPT for translation with temperature 0, particularly for low-resource language pairs and languages with lower coverage scripts. Those errors are alleviated, although not entirely eliminated, when the default parameters are used. ### Datasets We carefully selected datasets based on two main criteria: their familiarity to researchers and practitioners, and the avoidance of train/test overlap for the M2M models.5 To this end, we chose to use premier translation benchmarks: Flores-101 (Goyal et al., 2022), WMT and TICO (Anastasopoulos et al., 2020). Flores-101 is a high-quality multi-parallel dataset that consists of Wikipedia text in 101 languages and allows for the assessment of hallucinations across a vast range of translation directions; we join the dev and devtest subsets for evaluation. 
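For reference, generating translations with the M2M checkpoints and beam size 4 as described in Section 3.1 can be sketched as follows. This sketch uses the Hugging Face ports of M2M-100 rather than the fairseq checkpoints actually used in the paper, so the checkpoint name and generation call are assumptions for illustration only.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Hugging Face port of M2M (S); the paper itself runs the fairseq checkpoints.
MODEL_NAME = "facebook/m2m100_418M"
tokenizer = M2M100Tokenizer.from_pretrained(MODEL_NAME)
model = M2M100ForConditionalGeneration.from_pretrained(MODEL_NAME)

def translate(text: str, src_lang: str, tgt_lang: str) -> str:
    """Translate `text` from `src_lang` to `tgt_lang` with beam search (beam size 4)."""
    tokenizer.src_lang = src_lang
    inputs = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.get_lang_id(tgt_lang),
        num_beams=4,  # beam size used throughout the paper
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate("Hallucinations can severely undermine user trust.", "en", "de"))
```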
For WMT, we used the same benchmarks as those used in the original M2M paper evaluation suite, as these were explicitly removed from the training data. Additionally, we selected recent WMT test sets from the WMT21 and WMT22 campaigns as they were released after the models were trained. In contrast to these general-purpose datasets, TICO is a specialized medical-domain multilingual benchmark that includes COVID-19 related data, such as medical papers and news articles; we join the dev and test sets. Full details about the datasets can be found in Appendix B. Footnote 5: ChatGPT’s training data is not publicly available. As such, we cannot guarantee that it has not been exposed to the data we use in our analysis. ### Evaluation Metrics Throughout our work, we focus mainly on sentence-level evaluation. Our main lexical metric is spBLEU (Goyal et al., 2022),6 as it has been widely employed in works on massively multilingual translation (Fan et al., 2020; Wenzek et al., 2021; Mohammadshahi et al., 2022; NLLB Team et al., 2022) and offers fairer evaluation for low-resource languages compared to BLEU (Papineni et al., 2002). Moreover, we follow the most recent MT metrics shared-task recommendations (Freitag et al., 2022) and also adopt neural metrics. We use the latest reference-based and reference-free COMET variants: COMET-22 (Rei et al., 2022) and CometKiwi (Rei et al., 2022). Lastly, we use the cross-lingual encoder LaBSE (Feng et al., 2022) to obtain sentence similarity scores, as these have been successfully employed in prior research on detection of natural hallucinations (Guerreiro et al., 2022; Dale et al., 2022). Footnote 6: We use spBLEU as implemented in Sackebleu (Post, 2018): nrefs:1|case:mixed|eff:yes|tok:Flores101|smooth:exp|version:2.3.1. ## 4 Hallucinations under Perturbation We start our analysis by focusing on artificially created hallucinations. We first provide an overview of our experimental setting, focusing on the construction of the perturbed data and detection approach. Then, we present our results and analyze the properties of these hallucinations across different resource levels and models. ### Evaluation Setting Perturbations.To construct the perturbed source sequences, we apply the same minimal perturbations used in Xu et al. (2023): misspeling of words, insertion of frequent tokens in the beginning of the source sequence, and capitalization errors. For full details on the construction of the perturbed data, refer to Appendix C.1. Translation directions.We use the Flores dataset for these experiments, and focus specifically on translation out of English. We selected all bridge languages7, as well as additional low-resource languages that were underrepresented among bridge languages. Overall, we generate translations for 31 different language pairs (LPs). We present the language pairs and more details on our choice of languages in Appendix C.1. Detection.Our detection approach is inspired by that of previous works on hallucinations under perturbation Lee et al. (2018); Raunak et al. (2021); Ferrando et al. (2022); Xu et al. (2023). The algorithm is a simple 2-rule process: we fix (i) a minimum threshold quality score for the original translations, and (ii) an extremely low maximum quality score for the perturbed translations. A model generates a hallucination under perturbation when both translations meet the thresholds. 
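A minimal sketch of this two-rule detector is given below. It operates on precomputed sentence-level quality scores; the function name is ours, and the default thresholds are the spBLEU values reported in the next paragraph.

```python
def is_hallucination_under_perturbation(
    score_original: float,
    score_perturbed: float,
    min_original_quality: float = 9.0,   # rule (i): the unperturbed translation must be reasonable
    max_perturbed_quality: float = 3.0,  # rule (ii): the perturbed translation must be very poor
) -> bool:
    """Flag a hallucination under perturbation when both rules are met.

    `score_original` and `score_perturbed` are sentence-level quality scores
    (e.g. spBLEU) for the translations of the unperturbed and perturbed source.
    """
    return score_original > min_original_quality and score_perturbed < max_perturbed_quality
```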
Crucially, rule (i) ensures that low-quality translations for unperturbed sources are not considered as candidates for hallucinations under perturbation.8 Footnote 8: Note that low-quality translations for unperturbed sources fall under the scope of the study on natural hallucinations, that follows in subsequent sections of the paper. We extend this algorithm to handle multiple models and language pairs by adapting rule (i). We first obtain source sentences for which all models produce translations that meet a minimum quality threshold (spBLEU > 9). Then, we sort them according to average quality across the different models, and select the top 20% as candidates. Finally, we apply rule (ii) and set the threshold to spBLEU < 3. We selected both thresholds based on the choices made in previous works Raunak et al. (2021); Ferrando et al. (2022); Xu et al. (2023). This approach ensures a fixed sample size across different language pairs, and that the sentences analyzed for each language pair are consistent across all models. Moreover, it allows us to effectively detect hallucinations under perturbation across multiple models in a multilingual scenario in a scalable manner, while accounting for the unique quality trends observed across different models and languages.9 Footnote 9: Note that detection of hallucinations under perturbation does not explicitly target detachment from the source text. We provide a broader discussion on the difference between this detection approach and that of natural hallucinations (introduced later in Section 5.1) in Appendix C.2. ### Results We show aggregated results in Table 1 and language-specific results in Figure 1. Overall, they reveal that perturbations have the potential to trigger hallucinations under perturbation, even in larger models. In what follows, we highlight several noteworthy trends found in our results. Average hallucination rates generally decrease with increasing resource levels.Table 1 shows that all models, with the exception of ChatGPT that we analyze separately below, exhibit lower hallucination rates as resource levels increase. This \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{**Low Resource**} & \multicolumn{2}{c}{**Mid Resource**} & \multicolumn{2}{c}{**High Resource**} \\ \cline{2-7} & LP Fraction & Rate (\%) & \multicolumn{1}{c}{LP Fraction} & Rate (\%) & \multicolumn{1}{c}{LP Fraction} & Rate (\%) \\ \hline SMLL100 & 2/7 & 0.213\({}_{0.00}\) & 2/19 & 0.009\({}_{0.00}\) & 1/5 & 0.017\({}_{0.00}\) \\ M2M (S) & 5/7 & 0.261\({}_{0.08}\) & 11/19 & 0.140\({}_{0.08}\) & 0/5 & 0.000\({}_{0.00}\) \\ M2M (M) & 3/7 & 0.083\({}_{0.00}\) & 6/19 & 0.035\({}_{0.00}\) & 0/5 & 0.000\({}_{0.00}\) \\ M2M (L) & 4/7 & 0.296\({}_{0.08}\) & 3/19 & 0.017\({}_{0.00}\) & 0/5 & 0.000\({}_{0.00}\) \\ ChatGPT & 4/7 & 0.059\({}_{0.08}\) & 10/19 & 0.183\({}_{0.08}\) & 0/5 & 0.000\({}_{0.00}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Fraction of languages for which models produces at least one hallucination under perturbation, and average hallucination rate (and median, in subscript) across all languages at each resource level. Figure 1: Heatmap of hallucination rates for each model in the languages considered. Pattern-filled cells indicate at least one hallucination under perturbation for a given model-language pair. is expected and suggests that models are better equipped to handle source-side perturbations for language pairs with more parallel data during training. 
In fact, hallucinations under perturbation for high-resource languages are almost non-existent. However, Figure 1 reveals variability across languages, and even within the models in the same family that have been trained on the same data. For instance, when translating to Asturian (ast), M2M (L) and its distilled version SMaLL100 have significantly higher hallucination rates than the smaller M2M (S). Thus, hallucinations under perturbation may emerge in other non-trivial ways unrelated to the training data. **SMaLL100 exhibits lower hallucination rates than its teacher model M2M (L).** Recall that SMaLL100 was trained using uniform sampling across all language pairs to prevent bias towards higher resourced language pairs. The results in Table 1 may reflect one positive outcome from such approach: despite being much smaller than M2M (L), SMaLL100 hallucinates less and for fewer languages than its teacher model for low- and mid-resource language pairs. **Hallucinations under perturbation are not correlated with the quality of original translations.** The common approach for detection of hallucinations under perturbation (see Section 4.1) raises an interesting question: _are the original source sentences for which models produce higher quality translations less likely to lead to hallucinations when perturbed?_ Our analysis found a very weak correlation (according to Pearson correlation; see Appendix C.2) between hallucinations under perturbation and spBLEU scores for the original unperturbed sources across all models. This indicates that even minimal perturbations in the source text can cause models to undergo significant shifts in translation quality. **ChatGPT exhibits different hallucination patterns from conventional translation models.** Table 1 shows that, contrary to traditional models, ChatGPT generates more hallucinations for mid-resource languages than for low-resource languages. In fact, it surprisingly produces fewer hallucinations for low-resource languages than any other model. Moreover, ChatGPT's hallucinations are qualitatively different from those of other models: they often consist of off-target translations,10 overgeneration, or even failed attempts to translate (e.g., _"This is an English sentence, so there is no way to translate it to Vietnamese"_; we provide further examples in Appendix C.2). Furthermore, unlike traditional NMT models that frequently produce oscillatory character hallucinations, ChatGPT does not generate any such hallucinations under perturbation. This is further evidence that translation errors, even severely critical ones, obtained via prompting a LLM are different from those produced by traditional machine translation models (Vilar et al., 2022; Garcia et al., 2023; Hendy et al., 2023; Bawden and Yvon, 2023). Footnote 10: We perform automatic language identification using the fast text (Joulin et al., 2016) LID model lid.176.bin. Interestingly, we also found that the vast majority of the hallucinations can be reversed with further sampling from the model.11 This connects to findings in Guerreiro et al. (2022); Manakul et al. (2023): as with traditional NMT models, hallucinations with a LLM may not necessarily indicate model defect or incapacity to generate adequate translations, and may just result from "bad luck" during generation. Footnote 11: We also found this to be the case with a one-shot prompt. 
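Before moving on to natural hallucinations, the three minimal source-side perturbation types from Section 4.1 can be illustrated as below. The exact protocol follows Xu et al. (2023) and Appendix C.1 of the paper; the specific randomization choices here (which word is corrupted, which token is prepended) are illustrative assumptions, and non-empty sentences are assumed.

```python
import random

def misspell(sentence: str) -> str:
    """Introduce a spelling error by swapping two adjacent characters of a random word."""
    words = sentence.split()
    idx = random.randrange(len(words))
    w = words[idx]
    if len(w) > 3:
        j = random.randrange(len(w) - 1)
        words[idx] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def insert_frequent_token(sentence: str, frequent_tokens=("the", "and", "so")) -> str:
    """Insert a frequent token at the beginning of the source sequence."""
    return random.choice(frequent_tokens) + " " + sentence

def capitalization_error(sentence: str) -> str:
    """Flip the capitalization of a random word."""
    words = sentence.split()
    idx = random.randrange(len(words))
    w = words[idx]
    words[idx] = w.lower() if w[0].isupper() else w.upper()
    return " ".join(words)
```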
## 5 Natural Hallucinations Let us now turn to investigating natural hallucinations.12 We first provide an in-depth overview of our evaluation setting, focusing on the scenarios and detection methodology. Subsequently, we present a thorough analysis, exploring diverse properties of natural hallucinations such as their different types, the influence of translation direction, and prevalence of toxicity. Footnote 12: From now on, we will use the terms natural hallucinations — both detached and oscillatory hallucinations — and hallucinations interchangeably. ### Evaluation Setting **Evaluation scenarios.** Analyzing massively multilingual translation models opens up several research scenarios that have not been studied in previous works that focused solely on bilingual models. We will take advantage of this opportunity and investigate natural hallucinations in three different evaluation scenarios, studying more than 100 translation directions in the main text alone. We start with an English-centric scenario where we pair 32 different languages with English for a total of 64 translation directions. Then, we study a non-English-centric scenario inspired by Fan et al. (2020), where we explore 25 language pairs corresponding to real-world use cases of translation not involving English (e.g., translating Greek directly to Turkish). Finally, we assess the prevalence of hallucinations on sensitive medical data where they can have a devastating impact on user trust. We pair 9 different languages with English for a total of 18 directions. We present all the translation directions investigated in these setups in Appendix D.1. We report results for the first two setups using the Flores dataset in the main text and WMT in Appendix D.2. For the final setup, we use the medical-domain TICO dataset. Detection.We integrate key findings from recent research on detection of hallucinations and focus on two main detectors: ALTI+ (Ferrando et al., 2022) for detached hallucinations, and top \(n\)-gram (TNG) (Raunak et al., 2021, 2022; Guerreiro et al., 2022) for oscillatory hallucinations. ALTI+ evaluates the relative contributions of both source and target prefixes to model predictions. As hallucinations are translations detached from the source sequence, ALTI+ can effectively detect them by identifying sentences with minimal source contribution. Notably, it faithfully reflects model behavior and explicitly signals model detachment from the source text in any translation direction (Ferrando et al., 2022). In previous works, this method has been successfully employed to detect hallucinated toxicity in a multilingual context in NLLB Team et al. (2022), and it has been validated on human-annotated hallucinations in Dale et al. (2022), where it was demonstrated that ALTI+ scores easily separate detached hallucinations from other translations.13 Footnote 13: We followed the recommendations in Guerreiro et al. (2022) and set model-based ALTI+ thresholds based on validation data where the models are expected to perform well. Specifically, we obtained the lowest 0.02% — in line with natural hallucination rates reported in the literature (Raunak et al., 2022) — of the ALTI+ score distributions for high-resource WMT benchmarks. 
Additionally, to ensure further trustworthy, high-precision measurements, we excluded detected candidates with LaBSE or CometKwi scores — as these have been also been validated for detection of human-annotated detached hallucinations (Dale et al., 2022; Guerreiro et al., 2022) — exceeding the top 10% of scores on translations from the same WMT benchmarks. TNG, on the other hand, is a straightforward, lightweight black-box heuristic targeting oscillatory hallucinations. It works by comparing the count of the top repeated translation \(n\)-gram to the count of the top repeated source \(n\)-gram, ensuring the difference is at least \(t\). This approach has been validated on human-annotated hallucinations and found to identify oscillatory hallucinations with perfect precision (Guerreiro et al., 2022). We follow previous work by using \(n=4\) and \(t=2\)(Raunak et al., 2021; Guerreiro et al., 2022) and excluding translations that meet the reasonable quality threshold outlined in Section 4.1.14 Footnote 14: Note that oscillatory hallucinations can be simultaneously detected with ALTI+ and TNG. Remark on Model Selection.We rely on ALTI+, a model-based detector, for reliable detection of detached hallucinations. Since we lack access to glass-box internal features from ChatGPT, we exclude it from our model selection to ensure consistency in our analysis. It is important to note that using alternative detectors could lead to misleading results and create discrepancies between the evaluation scenarios for ChatGPT and other models. Nonetheless, we will further examine ChatGPT in Section 6, exploring various aspects such as the generation of oscillatory hallucinations and translation quality in scenarios where other models produce hallucinations. ### English-Centric Translation We start by investigating natural hallucinations on English-centric language pairs. We reveal key insights on how properties of hallucinations change across resource levels, models and translation directions. We present language-pair specific results in Appendix D.2. Hallucinations in low-resource language pairs are not only more frequent, but also distinct.Table 2 shows that hallucinations occur frequently for low-resource directions, with all M2M models exhibiting average hallucination rates exceeding 10%. Furthermore, all models generate hallucinations for the vast majority of low-resource language pairs. On what comes to the type of hallucinations, Figure 2 demonstrates that, in contrast to mid- and high-resource language pairs, oscillatory hallucinations are less prevalent, while detached hallucinations occur more frequently in low-resource languages. This reveals that models tend to rely less on the source context when translating to or from low-resource languages. Importantly, although massive multilingual models have significantly improved translation quality for low resource languages, these findings not only suggest that there is considerable room for improvement, but also highlight potential safety concerns arising from translations in these directions. **SMALL100 consistently relies more on the source text than other models.** Despite having the smallest number of parameters, SMALL100 shows remarkable hallucination rates across low- and mid-resource language pairs, hallucinating significantly less than its larger counterparts in low-resource settings. These improved rates may be attributed not only to the uniform sampling of language pairs during training, but also to architectural decisions. 
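Returning briefly to the detection setup of Section 5.1: the TNG heuristic is simple enough to sketch in full. Tokenization and the additional quality-based exclusion from Section 4.1 are omitted, and the function names are ours; the defaults n=4 and t=2 are the values used in the paper.

```python
from collections import Counter

def top_ngram_count(tokens, n=4):
    """Occurrence count of the most repeated n-gram in a token sequence (0 if none)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return max(Counter(ngrams).values(), default=0)

def tng_flags_oscillation(src_tokens, hyp_tokens, n=4, t=2):
    """Flag an oscillatory hallucination when the translation's top repeated n-gram
    occurs at least t times more often than the source's top repeated n-gram."""
    return top_ngram_count(hyp_tokens, n) - top_ngram_count(src_tokens, n) >= t
```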
While SMALL100 shares a 12-layer encoder with the other models to process source representations, it diverges by employing a shallow 3-layer decoder--instead of a 12-layer decoder--and placing the target language code on the encoder side. We hypothesize that this design encourages greater reliance on the more complex encoder representations, reducing the likelihood of detachment from the source. In fact, distinct patterns in ALTI+ scores (shown in Appendix D.2) support this hypothesis: SMALL100 consistently demonstrates higher source contributions and similar patterns across all resource levels. In contrast, M2M models show a greater tendency to rely less on the source, especially in low-resource language pairs. Importantly, however, SMALL100's reduced hallucination rates do not necessarily imply superior translation quality compared to the other M2M models: we observed a strong correlation between M2M models' corpus-level COMET-22 scores and their respective hallucination rates for low-resource languages, whereas, contrastingly, for SMALL100 the correlation is weak. This indicates that despite detaching less from the source content, SMALL100's translations are not necessarily of higher quality to those of other M2M models. This and other statistics can be found in the Appendix D.2. **Scaling up models within the same model family leads to reduced hallucination rates.** As shown in Table 2, increasing the size of the M2M family models results in consistent reductions in hallucination rates. Relative improvements are more pronounced for mid- and high-resource language pairs, with M2M (L) exhibiting fewer hal \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{**Low Resource**} & \multicolumn{2}{c}{**Mid Resource**} & \multicolumn{2}{c}{**High Resource**} \\ \cline{2-7} & LP Fraction & Rate (\%) & LP Fraction & Rate (\%) & LP Fraction & Rate (\%) \\ \hline SMALL100 & 14/16 & \(2.352\,_{\, lucinations and hallucinating for fewer languages than all other models. Hallucinations are more frequent when translating out of English.Table 3 demonstrates that models are significantly more prone to hallucinate when translating out of English. In fact, in line with the observations of Ferrando et al. (2022), we found that models tend to detach more from the source text when translating out of English. This is evidenced by ALTI+ source contributions being lower across all language pairs in this direction compared to translating into English. Interestingly, we discovered that the translation direction can also influence the properties of hallucinations: (i) over 90% of off-target hallucinations occur when translating out of English, and (ii) nearly all hallucinations into English for mid- and high-resource language pairs are oscillatory. Toxic hallucinations pose substantial safety risks.Toxic text in translations can emerge in the form of hallucinations (NLLB Team et al., 2022). To assess the prevalence of toxic text in detected hallucinations, we utilized the toxicity wordlists provided by NLLB Team et al. (2022). We found that toxic text primarily appears in translations out of English and almost exclusively affects low-resource language pairs. For instance, over 1 in 8 hallucinations in Tamil contain toxic text. Interestingly, these toxic hallucinations not only exhibit high lexical overlap among them, but are repeated across models for multiple unique source sentences. Moreover, they are not necessarily reduced by scaling up the model size. 
These observations suggest that these hallucinations are likely to be traced back to toxic patterns in the training data,15 aligning with observations in Raunak et al. (2021); Guerreiro et al. (2022). Moreover, we also found that these hallucinations can be propagated through model distillation, as evidenced by SMALL100 generating toxic hallucinations that are copies of those of its teacher model. This underlines the necessity of rigorously filtering training data to ensure safe and responsible use of these models in real-world applications. Footnote 15: Upon inspecting the Common Crawl corpora that were used to create the training data, we found reference translations that exactly match the toxic hallucinations. ### Beyond English-Centric Translation We shift our focus to translation directions that do not involve English, typically corresponding to directions with less supervision during training. We present language-pair specific results in Appendix D.3. Trends are largely similar to English-centric directions.Table 4 reveals trends that largely mirror those observed in the English-centric setup:16 (i) hallucinations are more frequent in low-resource settings; (ii) SMALL100 significantly outperforms the M2M models in low-resource language pairs; and (iii) scaling up to M2M (L) consistently yields substantial improvements over the smaller M2M models in low- and mid-resource directions. Additionally, the trends related to hallucination types also hold across the two setups: detached hallucinations are more prevalent in low-resource settings, while oscillatory hallucinations overwhelmingly dominate in mid- and high-resource directions (see Appendix D.3). Less supervised language pairs exhibit extremely high hallucination rates.As expected, models struggle more with hallucinations for directions with less or even no supervision during training, such as ro-hy and af-zu. For instance, \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{**Low Resource**} & \multicolumn{2}{c}{**Mid Resource**} & \multicolumn{2}{c}{**High Resource**} \\ \cline{2-7} & LP Fraction & Rate (\%) & LP Fraction & Rate (\%) & LP Fraction & Rate (\%) \\ \hline SMALL100 & 5/10 & 2.160\({}_{.02}\) & 6/13 & 0.054\({}_{.00}\) & 1/2 & 0.025\({}_{.02}\) \\ M2M (S) & 10/10 & 12.61\({}_{.179}\) & 12/13 & 0.467\({}_{.05}\) & 1/2 & 0.075\({}_{.07}\) \\ M2M (M) & 7/10 & 12.22\({}_{.41}\) & 7/13 & 0.172\({}_{.05}\) & 0/2 & 0.000\({}_{.00}\) \\ M2M (L) & 6/10 & 6.580\({}_{.20}\) & 4/13 & 0.077\({}_{.00}\) & 0/2 & 0.000\({}_{.00}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Fraction of LPs on the non-English-centric setup for which models produce at least one hallucination, and average hallucination rate (and median, in subscript) across all LPs at each resource level. M2M (M) hallucinates for nearly half of the translations in these directions. ### Translation on Specialized Domains We now turn to investigating hallucinations in data from the medical domain, where they can have devastating consequences. Using the TICO dataset, we compare hallucination rates with the Flores dataset for 18 translation directions. We present language-pair specific results in Appendix D.4. Hallucinations are not exacerbated under medical domain data.Table 5 reveals that hallucination rates for the TICO medical data do not consistently exceed those observed for the Flores Wikipedia data. 
This finding diverges from previous works that investigated hallucinations for specialized domain data Wang and Sennrich (2020); Muller et al. (2020). We hypothesize that, in contrast with the smaller models typically trained on limited datasets from a single domain used in those works, the concept of "domain shift" may not be as pronounced for M2M models. These models are not only much larger but, crucially, they are trained on a dataset containing over 7 billion parallel sentences gathered from the web, which encompasses a broad array of domains. This massive training set potentially mitigates the impact of domain shift and, consequently, reduces its influence on hallucinations. ## 6 Mitigation of hallucinations through Fallback Systems Building upon our analysis on natural hallucinations in the previous section, we now explore the potential of reducing hallucinations and enhancing overall translation quality by employing a simple hybrid setup that can take advantage of multiple systems with possible complementary strengths. Put simply, we leverage an alternative system as a fallback when the primary original model produces hallucinations. Our analysis in the main text is focused on the more extensive English-centric setup. We provide results on the non-English-centric setup in Appendix E. ### Employing models of the same family as fallback systems We begin by analyzing the performance of same-family models when employed as fallback systems for one another (e.g., using SMaLL100, M2M (M), and M2M (L) as fallbacks for M2M (S)).17 Footnote 17: For simplicity, we consider the distilled SMaLL100 as a model from the M2M family. Detached hallucinations are particularly _sticky_ across M2M models.Figure 3 reveals that when employing M2M models as fallback systems, reversal rates--percentage of hallucinations from the original system that are corrected by the fallback system--are consistently higher for oscillatory hallucinations than for detached hallucinations. These findings not only corroborate those in Guerreiro et al. (2022), where oscillatory hallucinations were found to be less related to model Figure 3: Reversal rates for oscillatory (Osc.) and detached (Det.) hallucinations when using models of the same family as fallback systems. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{**Low Resource**} & \multicolumn{2}{c}{**Mid Resource**} & \multicolumn{2}{c}{**High Resource**} \\ \cline{2-7} & Flores & TICO & Flores & TICO & Flores & TICO \\ \hline SMaLL100 & \(0.448_{0.17}\) & \(0.429_{0.29}\) & \(0.062_{0.02}\) & \(0.083_{0.07}\) & \(0.008_{0.00}\) & \(0.000_{0.00}\) \\ M2M (S) & \(7.778_{3.48}\) & \(6.262_{3.67}\) & \(0.087_{0.05}\) & \(0.167_{0.17}\) & \(0.017_{0.00}\) & \(0.024_{0.00}\) \\ M2M (M) & \(3.484_{0.67}\) & \(2.167_{0.33}\) & \(0.268_{0.05}\) & \(0.363_{0.02}\) & \(0.008_{0.00}\) & \(0.000_{0.00}\) \\ M2M (L) & \(1.008_{0.37}\) & \(0.596_{0.21}\) & \(0.031_{0.02}\) & \(0.018_{0.00}\) & \(0.000_{0.00}\) & \(0.000_{0.00}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison between average hallucination rate (and median, in subscript) for the same LPs at each resource level for Flores and TICO medical data. defects, but also further emphasize the close connection between detached hallucinations and training data. This connection can help explain their _stickiness_: since the M2M models share the same training data, reversing these hallucinations using other same-family models as fallbacks is more challenging. 
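The hybrid setup studied in this section can be summarized in a few lines. The sketch below treats the translation systems and the hallucination detector as black-box callables; whether and how detection is re-applied to the fallback outputs is an illustrative design choice, not something prescribed by the paper, which instead evaluates fallback translations via reversal rates and quality metrics.

```python
def translate_with_fallback(source, primary, fallbacks, flags_hallucination):
    """Return the primary system's translation unless it is flagged as a hallucination,
    in which case fall back to alternative systems (e.g. SMaLL100, NLLB, ChatGPT)."""
    hypothesis = primary(source)
    if not flags_hallucination(source, hypothesis):
        return hypothesis, "primary"
    for name, system in fallbacks:
        candidate = system(source)
        if not flags_hallucination(source, candidate):
            return candidate, name
    return hypothesis, "primary (no non-hallucinated fallback found)"
```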
Interestingly, we also observe that M2M (L) particularly struggles to reverse the detached hallucinations generated by its distilled counterpart SMaLL100, suggesting that model defects can persist and be shared during distillation. Scaling up within the model family is not an effective strategy for mitigating hallucinations. In line with our analysis in Section 5.2, Figure 3 shows that reversal rates using SMaLL100 as a fallback system are higher for detached hallucinations than for oscillatory hallucinations. In fact, although SMaLL100 is a distilled M2M model, its training data, training procedure, and architecture differ from those of the M2M models. This distinction may make it more complementary as a fallback system to other M2M models than simply scaling up within the same model family. This suggests that merely increasing the scale of models within the same family is not an effective strategy for mitigating hallucinations, and exploring alternative models with different architectures and trained on different data could yield more substantial improvements. In the next section, we will analyze this alternative strategy. ### Employing external models as fallback systems Motivated by the findings from the previous section, we will now study how models that are not from the M2M family can be employed to further mitigate hallucinations and improve translation quality. We will test this approach with two different models: (i) we will prompt ChatGPT as detailed in Section 3, and (ii) we will use a high-quality 3.3B parameter model from the NLLB family of multilingual NMT models (NLLB) proposed in NLLB Team et al. (2022). Translation quality can be significantly improved with external fallback systems.Figure 4(a) demonstrates that external fallback systems, particularly NLLB, can significantly enhance translation quality of originally hallucinated translations compared to same-family models.18 This improvement is especially notable for low-resource languages, where both ChatGPT and NLLB consistently boost translation quality. Remarkably, NLLB generally outperforms ChatGPT as a fallback system for low- and mid-resource languages, aligning Figure 4: Fallback system analysis recurring to models of different families, such as ChatGPT and NLLB. We analyse overall translation quality improvements on the original model hallucinated translations (represented with dashed lines) across different resource levels via COMET-22 scores in (a), and overall prevalence of oscillatory hallucinations among the fallback translations in (b). with the findings in Hendy et al. (2023), which revealed that GPT models have limited capabilities for lower-resourced languages and lag behind dedicated translation models in those settings. Nonetheless, ChatGPT still surpasses dedicated M2M translation systems in these resource levels when used as a fallback system, underscoring the limitations of relying on same-family models as fallback systems. **Oscillatory hallucinations are practically non-existent when ChatGPT is the fallback system.** From Figure 3(b), we see another benefit of employing external fallback systems: oscillatory hallucinations are almost entirely eliminated. Interestingly, consistent with our findings in Section 4, we observe that ChatGPT produces very few, if any, oscillations, slightly improving the rates obtained with NLLB. This provides further evidence that, although hallucinations obtained via prompting LLMs may still occur, they exhibit different properties and surface forms. 
Investigating and understanding these differences in hallucination properties presents an interesting research path for future work. ## 7 Conclusion We have comprehensively investigated the phenomenon of hallucinations in massively multilingual translation models. By departing from the settings studied in previous work that focused on bilingual models trained on high-resource language pairs, we were able to explore a wide range of research scenarios that remained overlooked. Our analysis revealed several key insights on the prevalence and properties of hallucinations across various models of different scale, translation directions, and data conditions, including: the prevalence of hallucinations across multiple translation directions across different resource levels and beyond English-centric translation; the emergence of toxicity in hallucinations; and the effect of scaling up within the same model family on the prevalence of hallucinations. Additionally, we explored how fallback systems can mitigate hallucinations and improve overall translation quality. We found that hallucinations can be _sticky_ and difficult to reverse when using models that share the same training data and architecture. However, by leveraging other external models, we can significantly improve translation performance and virtually eliminate pathologies such as oscillatory hallucinations. To support future research on this topic, we are open-sourcing our code and releasing over a million translations and detection results across several models and language pairs. ## Limitations Our study mainly focuses on the M2M family of multilingual models. We chose this family of models as it includes several models at different sizes and the largest open-source multilingual NMT model. It is unclear how our findings generalize to other families of multilingual models (e.g., the NLLB family of models). Our detection approaches inherit the limitations that carry over with the metrics that are leveraged in them. For instance, following all of previous work, we adopt a BLEU metric to detect hallucinations under perturbation. However, this and other lexical metrics ranked worst than reference-based neural metrics in last year's WMT22 Metrics Shared Task Freitag et al. (2022). We analyzed ChatGPT as it has demonstrated impressive capabilities for translation and other multilingual tasks, such as MT evaluation. Unfortunately, the model remains behind API walls and documentation is scarce. As such, we could not ensure that ChatGPT was not trained on our evaluation sets, nor could we evaluate the contribution of the source text to ChatGPT's translations, which would have enabled detection of detached hallucinations. Despite these limitations, we believe our findings provide relevant insights into the properties of translations generated by the model. ## Acknowledgments We would like to thank Meta AI for open-sourcing the M2M models and maintaining libraries such as stopes Andrews et al. (2022) and nllb NLLB Team et al. (2022). The work is partially supported by the European Research Council (ERC StG DeepSPIN 758969), by EU's Horizon Europe Research and Innovation Actions (UTTER, contract 101070631), by the FCT through contract UIDB/50008/2020, and by the projects MAIA and NextGenAI (LISBOA-01-0247-FEDER-045909 and 2022-C05i0102-02). Part of this work was performed using HPC resources from GENCI-IDRIS Grant 2022-AD01101838).
2309.03234
Natural Example-Based Explainability: a Survey
Explainable Artificial Intelligence (XAI) has become increasingly significant for improving the interpretability and trustworthiness of machine learning models. While saliency maps have stolen the show for the last few years in the XAI field, their ability to reflect models' internal processes has been questioned. Although less in the spotlight, example-based XAI methods have continued to improve. It encompasses methods that use examples as explanations for a machine learning model's predictions. This aligns with the psychological mechanisms of human reasoning and makes example-based explanations natural and intuitive for users to understand. Indeed, humans learn and reason by forming mental representations of concepts based on examples. This paper provides an overview of the state-of-the-art in natural example-based XAI, describing the pros and cons of each approach. A "natural" example simply means that it is directly drawn from the training data without involving any generative process. The exclusion of methods that require generating examples is justified by the need for plausibility which is in some regards required to gain a user's trust. Consequently, this paper will explore the following family of methods: similar examples, counterfactual and semi-factual, influential instances, prototypes, and concepts. In particular, it will compare their semantic definition, their cognitive impact, and added values. We hope it will encourage and facilitate future work on natural example-based XAI.
Antonin Poché, Lucas Hervier, Mohamed-Chafik Bakkay
2023-09-05T09:46:20Z
http://arxiv.org/abs/2309.03234v1
# Natural Example-Based Explainability: a Survey ###### Abstract Explainable Artificial Intelligence (XAI) has become increasingly significant for improving the interpretability and trustworthiness of machine learning models. While saliency maps have stolen the show for the last few years in the XAI field, their ability to reflect models' internal processes has been questioned. Although less in the spotlight, example-based XAI methods have continued to improve. It encompasses methods that use examples as explanations for a machine learning model's predictions. This aligns with the psychological mechanisms of human reasoning and makes example-based explanations natural and intuitive for users to understand. Indeed, humans learn and reason by forming mental representations of concepts based on examples. This paper provides an overview of the state-of-the-art in natural example-based XAI, describing the pros and cons of each approach. A "natural" example simply means that it is directly drawn from the training data without involving any generative process. The exclusion of methods that require generating examples is justified by the need for plausibility which is in some regards required to gain a user's trust. Consequently, this paper will explore the following family of methods: similar examples, counterfactual and semi-factual, influential instances, prototypes, and concepts. In particular, it will compare their semantic definition, their cognitive impact, and added values. We hope it will encourage and facilitate future work on natural example-based XAI. Keywords:Explainability XAI Survey Example-based Case-based Counterfactuals Semi-factuals Influence Functions Prototypes Concepts ## 1 Introduction With the ever-growing complexity of machine learning models and their large diffusion, understanding models' decisions and behavior became a necessity. Therefore, explainable artificial intelligence (XAI), the field that aims to understand and clarify models, flourished with a huge diversity of methods. To differentiate between methods several taxonomies were proposed, and common components emerged [2, 4, 50]: i) Local vs global: Local methods explain a specific decision of the model (in this case, the model's input is called the studied sample or query). Global methods give insight into the general model behavior. Methods can also explain the dataset but it will not be covered in this survey. ii) _Post-hoc_ vs intrinsic vs explainable by-design: _Post-hoc_ methods are applied on an already trained model. By-design methods produce models explainable by construction. Intrinsic methods need to be taken into account during the training of the model but do not affect the final state of the model and explain either the training process or the trained model. iii) Black-box vs white-box: White-box methods need access to the model's weights and/or gradients. iv) The format of the explanation: The multiplicity of methods translates through the large range of forms of explanations such as attribution methods [33, 102], concepts [35, 66], surrogate models [67, 94], rule-based explanations [112], natural language explanations [20], dependencies [40, 49], and finally example-based explanations [57, 111]. Nonetheless, no matter the taxonomy of a method, its explanations are aimed at humans, hence, they should exploit the vast literature in philosophy, psychology, and cognitive science on how humans generate, understand, and react to explanations [78]. 
The psychology literature has argued that, in everyday life, humans use examples as references to understand or explain something, or to demonstrate their arguments [18, 38, 78, 98]. Afterward, through user studies in the XAI field [35, 51, 61], researchers validated that example-based explainability provides better explanations than several other formats. Example-based explainability corresponds to a family of methods where explanations are represented by or communicated through samples, or parts of samples such as crops. This means the explanation's format is a data point (an example).

Figure 1: Natural example-based explanation formats w.r.t. the studied sample (or query) and the decision boundary. We can see that similar examples are the closest elements to the query, while counterfactuals and semi-factuals are on either side of the point of the decision boundary closest to the query. Prototypes are representative of each class in a dense zone of the dataset, and the influential instance bends the decision boundary.

Examples can either be training samples (natural examples) or generated elements. To generate high-dimensional data points, methods are essentially based on deep neural networks [6, 62]. However, for most high-dimensional data, such methods fail to ensure that generated examples are plausible and belong to the manifold (the subspace of the input space where samples follow the data distribution), and examples need to be realistic for humans to interpret them [19]. Therefore, natural examples have two advantages: they do not use a model to explain another model, which eases their acceptance, and they are plausible by definition. Hence, this survey covers natural (non-generative) example-based explainability methods that explain AI models. Explanations in example-based explainability are all data points, but a given example can carry different semantic meanings. Depending on the relation between the example, the query, and the model, the information provided by the example will differ. The semantic definition of an example and the kind of insight it provides divide the example-based format into sub-groups, which are presented in Fig. 1. This overview is organized around those sub-groups (also called formats), and this work unfolds as follows: The first format is **similar examples** (or factuals) (Section 2): for the model, they are the closest elements to the query. Factuals give confidence in the prediction or explain a misclassification, but they are limited to the close range of the considered sample. To provide insight into the model behavior in a larger zone around the query, **counterfactuals** and **semi-factuals** (Sections 3.1 and 3.2) are better suited. They are, respectively, the closest sample on which the model makes a different prediction and the farthest sample on which it makes the same prediction. They are mainly used in classification, give insight into the decision boundary, and are complementary if paired. While they give an idea of this limit, they do not provide insights on how one could bend the decision boundaries of the model by altering the training data. This is addressed through **influential instances** (Section 4), the training samples with the highest impact on the model's state. In addition, contrary to the previously listed example-based formats, influential instances are not limited to local explanations. Indeed, one can extract the most influential instances for the model in general.
Another global explanation format is **prototypes** (Section 5), which are a set of samples representative of either the dataset or a class. Most of the time they are selected without relying on the model and give an overview of the dataset, but some models are designed through prototypes and are thus explainable by design. **Concepts** (Section 6), a closely related format, are also investigated. A concept is the abstraction of the common elements between samples - e.g. for trees, the concepts could be trunk, branch, and leaf. To communicate such concepts, if they are not labeled, the easiest way is through examples of such concepts (often parts of samples such as patches). Finally, **feature visualizations** [89] are generated images that maximize the model's prediction for a given class. They show what the model associates with a given class; however, they are generative and will not be further discussed in this review. Thus we could summarize the contributions of this paper as follows: i) To the best of our knowledge, we are the first to compile the natural example-based explainability literature in a survey. Previous works either covered the whole XAI literature with a superficial analysis of example-based XAI or focused on a given sub-format of example-based XAI. ii) For each format we provide simple definitions, semantic meanings, and examples. When possible, we additionally ground formats in the social sciences and depict their cognitive added values. iii) We explore, classify, and describe available methods in each natural example-based XAI format. We highlight common points and divergences for the reader to understand each method easily, with a focus on key methods (see Tab. 1).

### Notations

Throughout the paper, methods explain a machine learning model \(h:\mathcal{X}\to\mathcal{Y}\), with \(\mathcal{X}\) and \(\mathcal{Y}\) being respectively the input and output domains. This model is parameterized by the weights \(\theta\in\Theta\subseteq\mathbb{R}^{d}\). If not specified otherwise, \(h\) is trained on a training dataset \(\mathcal{D}_{train}\subset(\mathcal{X}\times\mathcal{Y})\) of size \(n\) with the help of a loss function \(l:(\mathcal{X},\mathcal{Y},\Theta)\to\mathbb{R}\). We denote a sample by the tuple \(z=(x,y)\) with \(x\in\mathcal{X}\), \(y\in\mathcal{Y}\). When an index subscript such as \(i\) or \(j\) is added, _e.g._\(z_{i}\), it is assumed that \(z_{i}\) belongs to the training dataset. If the subscript "test" is added, \(z_{test}\), the sample does not belong to the training data. When there is no subscript, the sample may or may not be in the training data. Finally, the empirical risk function is denoted as \(\mathcal{L}(\theta):=\frac{1}{n}\sum_{(x,y)\in\mathcal{D}_{train}}l(x,y,\theta)=\frac{1}{n}\sum_{z_{j}\in\mathcal{D}_{train}}l(z_{j},\theta)\), the parameters that minimize this empirical risk as \(\theta^{*}:=\arg\min_{\theta}\mathcal{L}(\theta)\), and an estimator of \(\theta^{*}\) is denoted \(\hat{\theta}\).

## 2 Similar examples

In the XAI literature, similar examples, also referred to as factual examples (see Fig. 2), are often used as a way to provide intuitive and interpretable explanations. The core idea is to retrieve the most similar, or the closest, elements in the training set to a sample under investigation \(z_{test}\) and to use them as a way to explain a model's output. Specifically, Case-Based Reasoning (CBR) is of particular interest as it mimics the way humans draw upon past experiences to navigate novel situations [38, 98].
For example, when learning to play a new video game, individuals do not typically begin from a complete novice level. Instead, they rely on their pre-existing knowledge and skills in manipulating game controllers and draw upon past experiences with similar video games to adapt and apply strategies that have been successful in the past. As described by Aamodt and Plaza [1], a typical CBR cycle can be delineated by four fundamental procedures: i) RETRIEVE: Searching for the most analogous case or cases, ii) REUSE: Employing the information and expertise extracted from that case to address the problem, iii) REVISE: Modifying the proposed solution as necessary, iv) RETAIN: Preserving the pertinent aspects of this encounter that could be beneficial for future problem-solving endeavors. The CBR approach has gained popularity in fields that require transparent systems to justify their outcomes, such as medicine [15], due to its psychological plausibility. In addition to being intuitive, the cases retrieved by a CBR system for a given prediction are natural explanations for this output. While CBR systems are a must-know in the XAI literature, we will not review them as they have already been well analyzed, reviewed, motivated, and described many times [27, 29, 100]. Instead, the focus here is on case-based explanations (CBE) [100]. CBE are methods that use CBR to explain other systems, also referred to as twin systems [57, 60]. Indeed, the CBR system must be coupled with the system you want to explain. In particular, explanations of the system under inspection are generally the outcomes of the RETRIEVE functionality of the twinned CBR system, which oftentimes rely on \(k\)-nearest neighbor (\(k\)-NN) retrieval [25]. The idea behind \(k\)-NN is to retrieve the \(k\) most similar training samples (cases) to a test sample \(z_{test}\). In fact, presenting similar examples to an end-user as an explanation for a model's outcomes has been shown through user studies to be generally more convincing than other approaches [53, 112]. ### Defining similarity Defining similarity is not trivial. Indeed, there are many ways of defining similarity measures, and different approaches are appropriate for different representations of a training sample [29]. Generally, CBR systems assume that similar input features are likely to produce similar outcomes. Thus, using a distance metric defined on those input features engenders a similarity measure: the closer the more similar they are. One of the simplest is the unweighted Euclidean distance: \[dist(z,z^{\prime})=||x-x^{\prime}||_{2}\quad|\quad z=(x,y)\in(\mathcal{X} \times\mathcal{Y}) \tag{1}\] However, **where** - _i.e._ in which space - the distance is computed does have major implications. As pointed out by Hanawa _et al._[46], the input space does not seem to bring pieces of information on the internal working of the model under inspection but provides more of a data-centric analysis. Thus, recent methods rely instead on either computing the distance in a latent space or weighting features for the \(k\)-NN algorithm [32]. #### 2.1.1 Computing distance in a latent space is one possibility to include the model in the similarity measure which is of utmost importance if we want to explain it, as pointed out by Caruana _et al._[21]. 
Consequently, Caruana _et al._[21] suggested applying the Euclidean distance on the last hidden units \(h_{-1}\) of a trained Deep Neural Network (DNN), as a similarity measure that takes the model's predictions into account: \[dist_{DNN}(z,z^{\prime})=||h_{-1}(x)-h_{-1}(x^{\prime})||_{2}\quad|\quad z=(x, y)\in(\mathcal{X}\times\mathcal{Y}) \tag{2}\] Similarly, for Deep Convolutional Neural Networks, Papernot and McDaniel [90] and Sani _et al._[96] suggested conducting the \(k\)-NN search in the latent representation of the network and using the cosine distance. #### 2.1.2 Weighting features is another popular paradigm in CBE. For instance, Shin _et al._[104] proposed various **global weighting** schemes - _i.e._ methods in which the weights assigned to each input feature remain constant across all samples, as in Eq. (3) - where the weights are computed using the trained network to reveal the input features that were the most relevant for the network's prediction. \[dist_{features\_weights}(z,z^{\prime})=||w(\hat{\theta})^{T}(x-x^{\prime})|| _{2}\quad|\quad z=(x,y)\in(\mathcal{X}\times\mathcal{Y}) \tag{3}\] Alternatively, Park _et al._[91] examined **local weighting** by considering varying feature weights across the instance space. However, their approach is not _post-hoc_ for DNNs. Besides, Nugent _et al._[87] also focused on local weighting and proposed a method that can be applied to any black-box model. However, their method involves generating multiple synthetic datasets around a specific sample, which may not be suitable for explaining a large number of samples or high-dimensional inputs. In the same line of work, Kenny and Keane [60, 61] proposed COLE, suggesting a direct \(k\)-NN search in the attribution space - _i.e._ computing saliency maps [7, 105, 108] for all instances and performing a \(k\)-NN search in the resulting dataset of attributions. Denoting by \(c(\hat{\theta},z)\) the attribution map of the sample \(z\) for the model parameterized by \(\hat{\theta}\), this gives: \[dist_{COLE}(z,z^{\prime})=||c(\hat{\theta},z)-c(\hat{\theta},z^{\prime})||_{2} \tag{4}\] They used three saliency map techniques ([7, 105, 108]), but nothing prevents one from leveraging any other saliency map technique. However, we should also point out that Fel _et al._[34] questioned attribution methods' ability to truly capture the internal process of DNNs. Additionally, in [61], Kenny and Keane proposed to use the Hadamard product of the gradient and the input features as a contribution score in the case of DNNs with non-linear outputs.
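To make the retrieval step of these methods concrete, the following minimal sketch illustrates Eq. (2)-style factual retrieval with a \(k\)-NN search in a latent space. The linear `latent_fn` and the toy data are purely illustrative assumptions standing in for the penultimate layer of a trained network; this is not the reference implementation of any of the cited methods.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def retrieve_factuals(latent_fn, X_train, x_test, k=3):
    """Return indices of the k training samples closest to x_test
    in the latent space defined by latent_fn (cf. Eq. (2))."""
    Z_train = latent_fn(X_train)          # latent codes of the training set
    z_test = latent_fn(x_test[None, :])   # latent code of the query
    nn = NearestNeighbors(n_neighbors=k, metric="euclidean").fit(Z_train)
    _, idx = nn.kneighbors(z_test)
    return idx[0]                         # indices of the factual examples

# Toy usage with a hypothetical linear "penultimate layer".
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))              # assumed projection to a latent space
latent_fn = lambda X: X @ W
X_train = rng.normal(size=(100, 10))
x_test = rng.normal(size=10)
print(retrieve_factuals(latent_fn, X_train, x_test))
```

Replacing `latent_fn` with an attribution function would give a COLE-style search in the attribution space (Eq. (4)), and swapping the metric for the cosine distance recovers the variant used for deep convolutional networks.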
### Limitations

The current limitations of similarity-based XAI are still significant. Indeed, even when one defines a relevant distance or similarity measure between samples, one still has to perform the search in the training dataset to retrieve the closest samples for a given \(z_{test}\). Naively, this would at least require computing the distance between \(z_{test}\) and every training data point, which prohibits its computation for large datasets. Fortunately, there are efficient techniques available for searching, as briefly discussed in the paper by Bhatia _et al._[14]. However, if the training data is sparse in the space in which the distance is computed, the retrieved cases might be far from \(z_{test}\), thus questioning their relevance. Furthermore, **where** the distance is computed does have major implications, as mentioned by Hanawa _et al._[46]. Consequently, authors have suggested different feature spaces or weighting schemes to investigate, but their relevance in reflecting the inner workings of a model is as questionable as it is for attribution methods [34]. In addition, it is still unclear in the literature whether one approach prevails over the others. Moreover, when humans are faced with examples, they may not be able to understand why these were considered similar. As an example, if two elements are red and round, a human may think the important thing is the red color while the model focuses on the round shape [84]. Finally, the position of the retrieved similar examples w.r.t. the decision boundaries of the model - in terms of whether their prediction matches that of \(z_{test}\) - is not always accounted for. This is a major issue, as providing similar examples to an end-user should reinforce their confidence in the model's decision, but it becomes confusing if one showcases a factual example for which the model's prediction is different. Thus, taking into account the decision boundaries of a model seems crucial for the explanations' relevance. Such considerations are motivating the field of contrastive explanations, as discussed in Section 3.

## 3 Contrastive explanations

Contrastive explanations are a class of explanations that provide the consequences of another plausible reality, i.e., the repercussions of changes in the model's input [18, 111]. More simply, they are explanations where we modify the input and observe the reaction of the model's prediction; the modified input is returned as the explanation, and its meaning depends on the model's prediction for it. Those methods are mainly _post-hoc_ methods applied to classification models. This includes i) counterfactuals (CF): _an imagined alternative to reality about the past, sometimes expressed as "if only... " or "what if..."_[18], ii) semi-factuals (SF): _an imagined alternative that results in the same outcome as reality, sometimes expressed as "even if... "_[18], and iii) adversarial examples (perturbations or attacks) (AP): _inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence_[41]. Examples of those three formats are provided in Fig. 2 from Kenny and Keane [62]. AP and CF are both perturbations with an expected change in the prediction; they only differ in their goal, as CF attempt to provide an explanation of the model's decision while AP are mainly used to evaluate robustness. In fact, AP can be considered CF [113], and for robust models, AP methods can generate interpretable CF [103]. Nonetheless, AP are hardly perceptible perturbations designed to fool the model [109]; therefore, they are generative, and those methods will not be further detailed in this work. Then, we can generalize SF and CF with a given distance \(dist\) and a conditioned example space \(\mathcal{X}_{cond(f,x)}\subset\mathcal{X}\): \[CF(x_{test}) :=\operatorname*{arg\,min}_{x\in\mathcal{X}_{cond(f,x_{test})}|h( x)\neq h(x_{test})}dist(x_{test},x) \tag{5}\] \[SF(x_{test}) :=\operatorname*{arg\,max}_{x\in\mathcal{X}_{cond(f,x_{test})}|h( x)=h(x_{test})}dist(x_{test},x) \tag{6}\] For natural CF and SF, the input space is conditioned on the training set, \(\mathcal{X}_{cond(f,x_{test})}=X_{train}\), while for AP there is no condition on the input space and, in Eq. (5), \(\mathcal{X}_{cond(f,x_{test})}=\mathcal{X}\).
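As a concrete reading of Eqs. (5) and (6) in the natural (non-generative) case, where the candidate set is restricted to the training data, the sketch below retrieves a natural CF and a natural SF for a query. The Euclidean distance and the toy threshold classifier are illustrative assumptions, not part of any specific method discussed in this section.

```python
import numpy as np

def natural_cf_sf(model, X_train, x_test, dist=None):
    """Natural counterfactual (Eq. 5) and semi-factual (Eq. 6) of x_test,
    searched among training samples only (X_cond = X_train)."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    y_test = model(x_test)
    d = np.array([dist(x_test, x) for x in X_train])
    same = np.array([model(x) == y_test for x in X_train])
    cf_idx = np.argmin(np.where(~same, d, np.inf))   # closest sample with a different prediction
    sf_idx = np.argmax(np.where(same, d, -np.inf))   # farthest sample with the same prediction
    return X_train[cf_idx], X_train[sf_idx]

# Toy usage with a hypothetical linear threshold classifier.
rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(200, 2))
model = lambda x: int(x[0] + x[1] > 0)
cf, sf = natural_cf_sf(model, X_train, np.array([0.3, 0.2]))
print("counterfactual:", cf, "semi-factual:", sf)
```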
The distance and the condition on the input space are the key differences between CF and SF methods.

### Counterfactuals

#### 3.1.1 The social science grounding of counterfactuals is deep, both in philosophy and in psychology. Indeed, the search for CF's semantic definition goes back a long time [13, 44, 72] and historically revolves around the notion of cause and effect, sometimes called facts and foils [75, 78]. Then, Halpern and Pearl [44] proved that the cause of an event is an answer to the question "Why?" and thus provides a powerful explanation. Moreover, the philosophical literature argued that CF allow us to communicate and understand the causal relation between facts and foils [72, 78]. Psychology also possesses a rich literature regarding CF [18, 95], which has continued to evolve in recent years [19, 59, 79] thanks to the arrival of CF in XAI through Wachter _et al._[113]. Humans' natural use of counterfactuals in many situations was highlighted by Byrne [18]: _From amusing fantasy to logical support, they explain the past, prepare the future, modulate emotional experience, and support moral judgments_. Furthermore, when people encounter CF they have both the counterfactual and the factual in mind [19, 110]. The insights from philosophy and psychology [19, 79] have shown the pertinence and potential of CF as well as SF for XAI. To match such promises, CF in XAI need to satisfy the definitions and properties of CF typically employed by humans. #### 3.1.2 Expected properties for natural CF can be extrapolated from conclusions and properties discovered in XAI for generated CF, even though the literature on natural CF is slim. Such desirable properties for CF, derived from the social sciences, can be summarized as follows: i) **plausibility**[58, 59, 111]: CF should be as realistic as possible; ii) **validity**[83]: the model's prediction on the CF should differ from the prediction on the query (see definition (5)); iii) **sparsity**[58, 83, 111]: the number of features changed between the CF and the query should be as small as possible; iv) **diversity**[54, 83]: if several CF are proposed, they should be different from each other; v) **actionability**[58, 111]: the method should allow the user to select features to modify and to specify immutable ones; vi) **proximity**[54, 58, 59, 83]: CF should be as close as possible to the query.

Figure 2: Illustration of factuals, SF, and CF from Kenny and Keane [62]. The factual makes us understand the misclassification, while SF and CF show us how far or close the decision boundary is. Min-edit represents the AP, as differences are not visible.

**Counterfactuals methods:** Keane _et al._[59] argued that nearest unlike neighbors (NUN) [28] are the ancestor of counterfactuals in XAI. NUN are a derivative of nearest neighbors [25] that looks for the nearest element belonging to a different class, which matches perfectly the definition of natural counterfactuals. NUN were first used in XAI by Doyle _et al._[30, 88], but not as an explanation, only to find SF. The only method, to the best of our knowledge, that uses NUN as explanations is KLEOR from Cummins and Bridge [26], which was also called "the nearest miss" and was provided as a complement to an SF explanation. Indeed, following the definition, pairs of CF and SF should give a good intuition of the decision boundary. Nonetheless, they highlighted that the decision boundary might be much more complex than what the SF and CF pairs can reveal.
Indeed, a line between SF and CF may intersect the decision boundary several times, which can lead to explanations that are not always faithful. Furthermore, Keane _et al._[59] argued that "good natural counterfactuals are hard to find", as the dataset's low density prevents sparse and proximal natural CF. Counterfactuals as known in XAI appeared with Wachter _et al._[113]. While there are numerous methods, as shown by the number of surveys in this field [54, 83, 111], those are all generative methods. We can distinguish two periods among those papers: a first one with a focus on small and interpretable tabular datasets, as described in the survey by Verma _et al._[111], and a second one on more complex data types such as images [6, 62]. While in the first CF period generating plausible instances was not an issue, it appeared to be a huge drawback for the generalization of CF to more complex data types [6, 62]. Even the most recent methods based on diffusion models [6] fail to consistently generate plausible images. We are surprised that there is so little work exploring natural CF as explanations, given their inherent plausibility. Furthermore, in the literature, natural examples were used to ensure plausibility in generated CF [59, 111]. Moreover, adversarial perturbations proved that for non-robust DNNs, a generated example close to a natural instance is not necessarily plausible. That is to say, we cannot prove that generated instances belong to the manifold without a proper definition of the manifold. To conclude, for high-dimensional data, the reader is faced with the choice between simple and plausible natural CF, or proximal and sparse generated CF obtained through a model explaining another model.

### Semi-factuals

The SF literature is most of the time included in the CF literature, be it in philosophy [42], psychology [18], or XAI [26, 62]. In fact, SF ("even if...") are semantically close to CF ("what if...") [5, 13, 42] (see Eqs. (5) and (6)). However, psychology has demonstrated that human reactions differ between CF and SF: while CF strengthen the causal link between two elements, SF reduce it [19]; CF increase fault and blame in a moral judgment, while SF diminish it. #### 3.2.1 Expected properties for CF and SF were inspired by social science; hence, because of their close semantic definitions, many properties are common to both: SF should also respect their definition in Eq. (6) (**validity**), and, to make the comparison possible and relevant, they should aim towards **plausibility**[5], **sparsity**[5], **diversity**, and **actionability**. Nonetheless, the psychological impact of CF and SF differs, hence there are also SF properties that contrast with CF properties. The difference between equations (5) and (6) - _i.e._\(\arg\min\) vs \(\arg\max\) - suggests that, to replace CF's proximity, SF should be the farthest from the studied sample while not crossing the decision boundary [26]. As such, we propose **decision boundary closeness** as a necessary property, and a metric to evaluate it could be the distance between the SF and the SF's NUN. Finally, SF should not go in just any direction from the studied sample but aim toward the closest decision boundary. Therefore, SF should be aligned with the NUN [26, 30, 88]; this property has not been named, and we suggest calling it **counterfactual alignment**. #### 3.2.2 Semi-factual methods were first reviewed in XAI by a recent survey from Aryal and Keane [5]. They divided SF methods and history into four parts.
The first three categories consist of one known method that will illustrate them: * **SF based on feature-utility**, Doyle _et al._[30] discovered that similar examples may not be the best explanations and suggested giving examples farther from the studied sample. To find the best explanation case, \(dist\) in Eq. (6) is a utility evaluation based on features difference. * **NUN-related SF**, Cummins and Bridge [26] proposed KLEOR where Eq. (6)'s \(dist\) is based on NUN similarity. Then, they penalize this distance to make sure the SF are between the query and nearest unlike neighbors. * **SF near local-region boundaries**, Nugent _et al._[88] approximate the decision boundary of the model in the neighborhood of the studied sample through input perturbations (like LIME [94]). Then SF are given by the points that are the closest to the decision boundary. * **The modern era: _post-_2020 methods**, inspired by CF methods, many generative methods emerged in recent years [55, 62]. In conclusion, semi-factuals are a natural evolution of similar examples. Furthermore, their complementarity with counterfactuals was exposed through the literature, first to find and evaluate SF, and then to provide a range to the decision boundary. Even though contrastive explanations bring insights into a model's behavior by answering a "_what if..._ or a "_even if..._" statement, it has no impact on the current model situation and what led to this state or how to change it. Contrastively, influential instances (see Section 4) extract the samples with the most influence on the model's training, hence its current state. Thus, removing such samples from the training set will have a huge impact on the resulting model. ## 4 Influential Examples Influential instances could be defined as instances more likely to change a model's outcome if they were not in the training dataset. Furthermore, such measures of influence provide one with information on "in which direction" the model decision would have been affected if that point was removed. Being able to trace back to the most influential training samples for a given test sample \(z_{test}\) has been a topic of interest mainly for example-based XAI. ### Influence functions **Influence functions** originated from robust statistics in the early 70s. In essence, they evaluate the change of a model's parameters as we up-weight a training sample by an infinitesimal amount: [45]\(\hat{\theta}_{\epsilon,z_{j}}:=\arg\min_{\theta}\mathcal{L}(\theta)+\epsilon l (z_{j},\theta)\). One way to estimate the change in a model's parameters of a single training sample would be to perform _Leave-One-Out_ (LOO) retraining, that is, to train the model again with the sample of interest being held out of the training dataset. However, repeatedly re-training the model to exactly retrieve the parameters' changes could be computationally prohibitive, especially when the dataset size and/or the number of parameters grows. As removing a sample \(z_{j}\) can be linearly approximated by up-weighting it by \(\epsilon=-\frac{1}{n}\), computing influence helps to estimate the change of a model's parameters if a specific training point was removed. Thus, by making the assumption that the empirical risk \(\mathcal{L}\) is twice-differentiable and strictly convex w.r.t. 
the model's parameters \(\theta\), making the Hessian \(H_{\hat{\theta}}:=\frac{1}{n}\sum_{z_{i}\in\mathcal{D}_{train}}\nabla_{\theta }^{2}l(z_{i},\hat{\theta})\) positive definite, Cook and Weisberg [24] proposed to compute the influence of \(z_{j}\) on the parameters \(\hat{\theta}\) as: \[\mathcal{I}(z_{j}):=-H_{\hat{\theta}}^{-1}\nabla_{\theta}l(z_{j},\hat{\theta}) \tag{7}\] Later, Koh and Liang [68] popularized influence functions in the machine learning community, as they took advantage of auto-differentiation frameworks to efficiently compute the Hessian for DNNs, and derived Eq. (7) to formulate the influence of up-weighting a training sample \(z_{j}\) on the loss at a test point \(z_{test}\): \[\text{IF}(z_{j},z_{test}):=-\nabla_{\theta}l(z_{test},\hat{\theta})^{T}H_{ \hat{\theta}}^{-1}\nabla_{\theta}l(z_{j},\hat{\theta}) \tag{8}\] This formulation opened the way into example-based XAI, as it is comparable to finding the nearest neighbors of \(z_{test}\) in the training dataset - _i.e._ the most similar examples (Section 2) - with two major differences though: i) points with high training loss are given more influence, _revealing that outliers can dominate the model parameters_[68], and ii) \(H_{\hat{\theta}}^{-1}\) measures what Koh and Liang called _the resistance of the other training points to the removal of \(z_{j}\)_[68]. However, it should be noted that Hessian computation remains a significant challenge, which can be alleviated with common techniques [3, 77, 99]. By normalizing Eq. (8), Barshan _et al._[10] further added stability to the formulation. Oftentimes, we are not only interested in the influence of individual instances but also in the influence of a group of training samples (_e.g._ mini-batch effect, multi-source data, etc.). Koh _et al._[69] suggested that using the sum of individual influences as the influence of the group constitutes a good proxy to rank those groups in terms of influence. Basu _et al._[12], for their part, suggested using a second-order approximation to capture possible cross-correlations, but they specified it is most likely impracticable for DNNs. In a later work, Basu _et al._[11] concluded that influence function estimates for DNNs are fragile, as the assumptions on which they rely - near-optimality and convexity - do not hold in general for DNNs. #### 4.1.3 LOO approximation is one of the previously mentioned motivations behind influence estimates, as it avoids the prohibitive LOO retraining required for every sample in the training data. Thus, some authors proposed approaches that optimize the number of LOO retrainings necessary to get a grasp on a sample's influence, such as Feldman and Zhang [36]. Although this significantly reduces the number of retrainings compared to naive LOO retraining, it still requires a significant number of them. Recently, a new approach related to influence functions, which involves training many models, was introduced with datamodels [52, 97]; we do not review it here. As Basu _et al._[11] pointed out, there is a discrepancy between the LOO approximation and influence function estimates, especially for DNNs. However, Bae _et al._[9] claimed that this discrepancy is due to influence functions approaching what they call the proximal Bregman response function (PBRF), rather than approximating LOO retraining, which does not interfere with their ability to perform the tasks they were designed for, especially XAI.
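As a concrete illustration of Eq. (8), the sketch below computes influence scores for a small ridge-regularized logistic regression, where the Hessian can be formed and inverted explicitly. It is a toy sketch under the strict-convexity assumption discussed above, not the implicit Hessian-vector-product machinery [3, 77, 99] needed for DNNs; the regularizer being folded into the per-sample loss is also a simplifying assumption.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def grad_loss(theta, x, y, lam=1e-2):
    """Gradient of the per-sample logistic loss (regularizer folded in for simplicity)."""
    return (sigmoid(x @ theta) - y) * x + lam * theta

def hessian_risk(theta, X, lam=1e-2):
    """Hessian of the empirical risk for logistic regression (convex, so H is PSD)."""
    p = sigmoid(X @ theta)
    return (X * (p * (1 - p))[:, None]).T @ X / len(X) + lam * np.eye(len(theta))

def influence(theta, X, Y, x_test, y_test, lam=1e-2):
    """IF(z_j, z_test) for every training point j, following Eq. (8)."""
    H_inv = np.linalg.inv(hessian_risk(theta, X, lam))
    g_test = grad_loss(theta, x_test, y_test, lam)
    return np.array([-g_test @ H_inv @ grad_loss(theta, X[j], Y[j], lam)
                     for j in range(len(X))])

# Toy usage: crude gradient descent to approximate the risk minimizer, then score points.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3)); Y = (X[:, 0] > 0).astype(float)
theta = np.zeros(3)
for _ in range(500):
    theta -= 0.1 * np.mean([grad_loss(theta, X[j], Y[j]) for j in range(len(X))], axis=0)
print(influence(theta, X, Y, X[0], Y[0]).round(3)[:5])
```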
Bae _et al._[9] thus suggested evaluating the quality of influence estimates by comparing them to the PBRF rather than to LOO retraining, as had been done until then.

### Other techniques

#### 4.2.1 Influence computation that relies on kernels is another paradigm to find the training examples that are the most responsible for a given set of predictions. For instance, Khanna _et al._[63] proposed an approach that relies on Fisher kernels, and they related it to the one from Koh and Liang [68], of which it is a generalization under certain assumptions. Yeh _et al._[115] also suggested an approach that leverages kernels, but this time relying on the representer theorem [101]. This allows them to focus on explaining only the _pre-activation prediction layer_ of a DNN for classification tasks. In addition, their influence scores, called representer values, provide supplementary information, with positive representer values being excitatory and negative values being inhibitory. However, this approach requires introducing an \(L2\) regularizer during optimization, which can prevent _post-hoc_ analysis if one was not in charge of training. Additionally, Sui _et al._[107] argued that this approach provides more of a _class-level_ explanation than an _instance-level_ explanation. To address this issue and the \(L2\) regularizer problem, they proposed a method that involves Hessian computation on the classification layer only, with just the associated computational cost. However, the ability to retrieve relevant samples when investigating only the final prediction layer was questioned by Feldman and Zhang [36], who found that memorization does not occur in the last layer. #### 4.2.2 Tracing the training process has been another research direction to compute influence scores. It relies on the possibility of replaying the training process by saving some checkpoints of the model's parameters, or states, and reloading them in a post-hoc fashion [23, 47, 93]. In contrast to the previous approaches, these methods rely neither on near-optimality nor on strong convexity, which is more realistic for DNNs. However, they require control over the training procedure to save the (potentially numerous) checkpoints; hence they are intrinsic methods, which in practice is not always feasible.

### In a nutshell

Influential techniques can provide both global and local explanations to enhance model performance. Global explanations allow for the identification of training samples that significantly shape decision boundaries, or of outliers (see Fig. 1), aiding in data curation. On the other hand, local explanations offer guidance for altering the model in a desired way (see Fig. 3). Although they have been compared to similar examples and have been shown to be more relevant to the model [46], they are more challenging to interpret and their effectiveness for trustworthiness is unclear. Further research, particularly user studies, is necessary to determine their ability to take advantage of human cognitive processes.

Figure 3: Figure taken from F. Liu [93]: a tracing process for estimating influence, TracIn, applied on ImageNet. The first column is composed of the test sample, the next three columns display the training examples that have the most positive influence scores, while the last three columns point out the training examples with the most negative influence scores. (fr-bulldog: french-bulldog)

## 5 Prototypes

Prototypes are a set of representative data instances from the dataset, while criticisms are data instances that are not well represented by those prototypes [64].
Fig. 4 shows examples of prototypes and criticisms from the ImageNet dataset. Prototypes and criticisms can be used to add data-centric interpretability, to add _post-hoc_ interpretability, and to build an interpretable model [82]. The data-centric approaches will be very briefly introduced.

Figure 4: Figure taken from [64]: learned prototypes and criticisms from the ImageNet dataset (two types of dog breeds).

### Prototypes for data-centric interpretability

Clustering algorithms that return actual data points as cluster centers, such as k-medoids methods [56, 86], can be used to better understand the data distribution. In fact, the cluster centers can be considered as prototypes. The abundance of large datasets has renewed interest in data summarization methods [8, 73, 74, 81, 106], also known as set cover methods, which consist of finding a small subset of data points that covers a large dataset. The subset elements can be considered prototypes. Additionally, there are data summarization methods based on the Maximum Mean Discrepancy (MMD), such as MMD-critic [64] and Protodash [43], that learn both prototypes and criticisms.

### Prototypes for _post-hoc_ interpretability

Prototypes and criticisms can be used to add _post-hoc_ interpretability [82]. This can be achieved by predicting the outputs for the selected prototypes and criticisms with the black-box model, and then using these predictions to find the weaknesses of the model. We can also explain the model by applying clustering and data summarization methods to select prototypes in its latent space. Filho _et al._[37] proposed the M-PEER (Multi-objective Prototype-based Explanation for Regression) method, which finds the prototypes using both the training data and the model output. It optimizes both the error of the explainable model and the fidelity and interpretability metrics. The selected prototypes are then used to provide global and local post-hoc explanations for regression problems.

### Prototype-based models interpretable by design

After data-centric and _post-hoc_ methods, there are methods that construct prototype models. Those models are interpretable by design because they provide a set of prototypes that make sense for the model; those methods are mainly designed for classification. Given a training set of points \(X_{c}:=\{(x,y)\in\mathcal{D}_{train}|y=c\}\) for each class \(c\), an interpretable classifier learns a set of prototypes \(P_{c}\subseteq X_{c}\) for each class \(c\). Each \(P_{c}\) is designed to capture the full variability of the class \(c\) while avoiding confusion with other classes. The learned prototypes are then used by the model to classify the input. We identified three types of prototype-based classifiers: those that solve set cover problems, those that use Bayesian models for explanation, and those that are based on neural networks. #### 5.3.1 Prototype-based classifiers solving set cover problems select convex sets that cover each class with prototypes to represent it. Various types of convex sets such as boxes, balls, convex hulls, and ellipsoids can be used. Class Cover Catch Digraphs (CCCD) [76] and ProtoSelect [16] used balls whose centers were considered prototypes.
Then, the nearest-prototype rule is used to classify the data points. CCCD finds, for each class \(c\), one ball that covers all points of the class \(c\) and no points of other classes, with a radius chosen as large as possible. However, even within large classes, there can still be a lot of interesting within-class variability that should be taken into account when selecting the prototypes. To overcome this limitation, ProtoSelect uses a fixed radius across all points to allow the selection of multiple prototypes for large classes, and it also allows wrongly covered and non-covered points. It simultaneously minimizes three quantities: i) the number of prototypes; ii) the number of uncovered points; iii) the number of wrongly covered points. #### 5.3.2 Prototype-based classifiers using Bayesian models for explanation: Kim _et al._[65] proposed the Bayesian Case Model (BCM), which extends Latent Dirichlet Allocation [17]. In BCM, the idea is to divide the data into \(s\) clusters. For each cluster, a prototype is defined as the sample that maximizes the subspace indicators that characterize the cluster. When a sample is given to BCM, it yields a vector of probabilities of belonging to each of the \(s\) clusters, which can be used for classification. Thus, the classifier uses as input a vector of dimension \(s\), which allows the use of simpler models thanks to this dimensionality reduction. In addition, the prototype of the most likely cluster can then be used as an explanation. #### 5.3.3 Prototype-based neural network classifiers learn to select prototypes defined in the latent space, which are used for the classification. This leads to a model that is more interpretable than a standard neural network, since the reasoning process behind each prediction is "transparent". Learning Vector Quantization (LVQ) [70] is widely used for generating prototypes as weights in a neural network. However, the use of generated prototypes reduces their interpretability. ProtoPNet [22] also stores prototypes as weights and trains them, but projects them onto representations of training-sample patches during training. Given an input image, its patches are compared to each prototype, and the resulting similarity scores are then multiplied by the learned class connections of each prototype. ProtoPNet has been extended to time series data with ProSeNet [80], and to more interpretable structures with ProtoTree [85] and HPNet [48]. Instead of using a linear bag of prototypes, ProtoTree and HPNet use hierarchically organized prototypes to classify images. ProtoTree improves upon ProtoPNet by using a decision tree, which provides an easy-to-interpret global explanation and can be used to locally explain a single prediction. Each node in this tree contains a prototype (as defined by ProtoPNet). The similarity scores between image patches and the prototypes are used to determine the routing through the tree. Decision-making is therefore similar to human reasoning [85]. Nauta _et al._[84] proposed a method called "This Looks Like That, Because" to complete the "This Looks Like That" reasoning used in ProtoPNet. This method allows checking why the model considered two examples as similar. For instance, it is possible that a human thinks that the common point between two examples is their color, while the model uses their shape. The method modifies some characteristics of the input image, such as hue or shape, to observe how the similarity score changes. This allows us to measure the importance of each of these characteristics.
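The classifiers above differ in how prototypes are learned, but they share the nearest-prototype prediction rule. The sketch below illustrates that shared mechanism with a simple greedy, medoid-style selection of per-class prototypes in input space; this selection is only an illustrative stand-in, not the actual training procedure of CCCD, ProtoSelect, BCM, or ProtoPNet.

```python
import numpy as np

def select_prototypes(X_c, k=2):
    """Greedy medoid-style selection of k prototypes for one class: repeatedly
    add the sample that most reduces the summed distance from class members
    to their nearest prototype."""
    D = np.linalg.norm(X_c[:, None, :] - X_c[None, :, :], axis=-1)
    chosen, best = [], np.full(len(X_c), np.inf)
    for _ in range(k):
        gains = [np.minimum(best, D[:, j]).sum() for j in range(len(X_c))]
        j = int(np.argmin(gains))
        chosen.append(j)
        best = np.minimum(best, D[:, j])
    return X_c[chosen]

def nearest_prototype_predict(prototypes_by_class, x):
    """Classify x with the nearest-prototype rule."""
    dists = {c: np.linalg.norm(P - x, axis=1).min() for c, P in prototypes_by_class.items()}
    return min(dists, key=dists.get)

# Toy usage on two Gaussian blobs.
rng = np.random.default_rng(3)
X0 = rng.normal(-2, 1, size=(50, 2)); X1 = rng.normal(+2, 1, size=(50, 2))
protos = {0: select_prototypes(X0), 1: select_prototypes(X1)}
print(nearest_prototype_predict(protos, np.array([1.5, 2.0])))
```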
### In conclusion

Prototypes can either be: i) selected from the training data to explain the data distribution. These prototypes can also be used to find weaknesses of a black-box model by analyzing the model's output predictions on these prototypes. ii) selected using both the training data and the model output, or in the latent space of the model. This allows for _post-hoc_ explanations of the model. iii) integrated into and selected by the model itself during training and then used for prediction. This allows the model to be interpretable by design.

## 6 Concept-based XAI

Prototype-based models compare prototypical parts, _e.g._ patches, with the studied sample to make the classification. The idea of parts is not new to the literature: the part-based explanation field, developed for fine-grained classification, is able to detect semantically significant parts of images. The first part-based models required labeled parts for training and can be considered object detection with a semantic link between the detected objects. Afterward, unsupervised methods such as OPAM [92] or Particul [114] emerged; those methods still learn classification in a supervised fashion, but no labels are necessary for part identification. In fact, the explanations provided by this kind of method can be assimilated to concept-based explanations. A concept is the abstraction of the common elements between samples; as an example, Fig. 5 shows the visualization of six different concepts that the CRAFT method [35] associated with the given image. To convey parts or concepts, these methods use examples and suppose that, with a few examples, humans are able to identify the concept. As in part-based XAI, the first concept-based methods used labeled concepts. Kim et al. [66] introduced concept activation vectors (CAV) to represent concepts using a model's latent-space representation of images. Then, they designed a post-hoc method, TCAV [66], based on CAVs, to evaluate the correspondence of an image to a given concept. Even though it seems promising, this method requires prior knowledge of the relevant concepts, along with a labeled dataset of the associated concepts, which is costly and prone to human biases. Fortunately, recent works have been conducted to automate concept discovery in the training dataset without humans in the loop. For instance, ACE, proposed by Ghorbani et al. [39], employs a semantic segmentation technique on images belonging to a specific class of interest and uses an Inception-V3 neural network to compute activations of an intermediate model layer for these segments. The resulting activations are then clustered to form a set of prototypes, which they refer to as "concepts". However, the presence of background segments in these concepts requires a post-processing clean-up step to remove irrelevant and outlier concepts. Zhang et al. [116] propose an alternative approach to solve the unsupervised concept discovery problem through matrix factorizations [71] in the networks' latent spaces. However, such methods operate at the convolutional kernel level, which may lead to concepts based on shape and/or ignore more abstract concepts. As an answer, Fel et al. [35] propose CRAFT, which uses Non-Negative Matrix Factorization [71] for concept discovery. In addition to addressing the shortcomings of previous approaches, their method provides an explicit link between the concepts' global and local explanations (Fig. 5).
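To illustrate the factorization step shared by these unsupervised approaches, the following minimal sketch applies an off-the-shelf NMF to (assumed non-negative) activations of image crops and returns, for each concept, the crops that express it most; those crops are the natural examples used to communicate the concept. It is only an illustration of the idea, not a re-implementation of ACE or CRAFT (no segmentation, no concept importance estimation).

```python
import numpy as np
from sklearn.decomposition import NMF

def discover_concepts(activations, n_concepts=5, top_k=3):
    """Factorize non-negative activations A ~= U @ W, where rows of W act as
    concept directions and U gives each crop's concept coefficients.
    Returns, per concept, the indices of the crops expressing it most."""
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
    U = nmf.fit_transform(activations)     # shape: (n_crops, n_concepts)
    return [np.argsort(-U[:, c])[:top_k] for c in range(n_concepts)]

# Toy usage with hypothetical ReLU activations of image crops.
rng = np.random.default_rng(4)
A = np.maximum(rng.normal(size=(200, 64)), 0)   # stand-in for intermediate-layer activations
for c, idx in enumerate(discover_concepts(A)):
    print(f"concept {c}: most representative crops -> {idx}")
```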
While CRAFT successfully alleviates the previously mentioned issues, the retrieved concepts are unfortunately not always interpretable. Nonetheless, the authors' user study proved the pertinence of the method.

Figure 5: Illustration from Fel _et al._[35]. Natural examples in the colored boxes define a concept. **Purple box**: could define the concept of "chainsaw". **Blue box**: could define the concept of "saw's motor". **Red box**: could define the concept of "jeans".

To conclude, concept-based explanations allow _post-hoc_ global and local explanations, by conveying both the general concepts associated with a given class and the concepts used for a decision. We draw attention to methods that do not require expert knowledge to find the relevant concepts, as relying on such knowledge is prone to confirmation bias. Even though automated concept discovery is making tremendous progress, the interpretation of such concepts and their ability to gain users' trust remain questionable, as very few user studies have been conducted on the subject.

## 7 Conclusions

This paper explored the literature on natural example-based explainability and provided a general social science justification for example-based XAI. We described each kind of explanation possible through samples. For each of them, we reviewed what explanations they bring, and we classified and presented the major methods. We summarize all described methods in Table 1. We saw that all those methods are based on a notion of similarity. As such, for them to explain the model, the similarity between instances should take the model into account. There are two ways of doing so: projecting the instances into a space that is meaningful for the model and/or weighting the instances. Among the formats, similar examples and influential instances are natural examples by definition. However, contrastive explanations, prototypes, and concept examples can be generated, which brings competition to non-generative methods. We argue that, while a "good" natural example may not exist for a given case, natural examples are at least realistic in the sense that they belong to the data distribution. While generative methods may be able to create such "good" examples, they cannot prove that the generated samples belong to the data manifold. Furthermore, such methods require a model to explain another model, which in turn should be investigated and might involve extensive tuning. We have illustrated that the different example-based formats bring different kinds of explanations, and each one has its own advantages; Fig. 1 shows their diversity and complementarity. To summarize those advantages non-exhaustively: i) Factuals give confidence in the decisions of the model and are pertinent in AI-assisted decisions. ii) For classification, contrastive explanations give insight into the decision boundary in the locality of the studied sample. iii) Influential instances explain how samples influenced the model's training. iv) Prototypes and concepts give information at a global scale, on the whole model behavior, but may also be used to explain decisions. Nonetheless, like all explanations, we cannot be sure that humans will have a correct understanding of the model or the decision. Furthermore, there is no consensus on how to ensure that a given method indeed explains the decisions or inner workings of the model.
Moreover, in example-based explainability the data itself is used as the explanation; hence, without profound knowledge of the dataset, humans will not be able to draw conclusions from such explanations. Therefore, the evaluation of example-based methods should always include a user study, which is lacking in this field and in XAI in general. Finally, we hope our work will motivate, facilitate, and help researchers to keep on developing the field of XAI - in particular, natural example-based XAI - and to address the identified challenges.

## 8 Acknowledgments

This work has been supported by the French government under the "France 2030" program as part of the SystemX Technological Research Institute. This work was conducted as part of the Confiance.AI program, which aims to develop innovative solutions for enhancing the reliability and trustworthiness of AI-based systems. Additional funding was provided by the ANR-3IA Artificial and Natural Intelligence Toulouse Institute (ANR-19-PI3A-0004). We are also thankful to the DEEL3 core team for their expertise and feedback, in particular A.M. Picard, D. Vigouroux, C. Friedrich, V. Mussot, and Y. Prudent. Footnote 3: [https://www.deel.ai/](https://www.deel.ai/)

Finally, we are thankful to the authors who accepted our use of their figures: E.M. Kenny and M.T. Keane [61, 62], F. Liu [93], B. Kim [64], and T. Fel [35].

\begin{table}
\begin{tabular}{|l|c|c|c|l|l|l|}
\hline **SIMILAR** & Year & Global / Local & Post-hoc / Intrinsic & Model or data-type specificity & Distance & Weighting \\ \hline
Caruana et al. [21] & 1999 & Local & Post-hoc & DNN & Euclidean & None \\ \hline
Shin et al. [104] & 2000 & Local & Post-hoc & DNN & Euclidean & Global \\ \hline
Park et al. [91] & 2000 & Local & Intrinsic & DNN & Euclidean & Local \\ \hline
Nugent et al. [87] & 2005 & Local & Post-hoc & None & Euclidean & Local \\ \hline
Sani et al. [96] & 2017 & Local & Post-hoc & Deep CNN & Cosine similarity & Local \\ \hline
Papernot and McDaniel [90] & 2018 & Local & Post-hoc & Deep CNN & Cosine similarity & Local \\ \hline
COLE [60] & 2019 & Local & Post-hoc & None & Euclidean & Local with attribution \\ \hline
\end{tabular}

\begin{tabular}{|l|c|c|c|l|l|}
\hline **CONTRASTIVE** & Year & Global / Local & Post-hoc / Intrinsic & Model or data-type specificity & Semi-factual group of methods \\ \hline
Doyle et al. [30, 31] & 2004 & Local & Post-hoc & None & SF based on feature-utility \\ \hline
NUN [26, 28, 30] & 2006 & Local & Post-hoc & None & Natural CF \\ \hline
KLEOR [26] & 2006 & Local & Post-hoc & None & NUN-related SF \\ \hline
Nugent et al. [88] & 2009 & Local & Post-hoc & None & Local-region boundaries \\ \hline
\end{tabular}

\begin{tabular}{|l|c|c|c|l|c|}
\hline **INFLUENTIAL** & Year & Global / Local & Post-hoc / Intrinsic & Model or data-type specificity & Requires model's gradients \\ \hline
Koh and Liang [68] & 2017 & Both & Post-hoc & \(\mathcal{L}\) twice-differentiable and strictly convex w.r.t. \(\theta\) & Yes \\ \hline
Khanna et al. [63] & 2018 & Local & Post-hoc & Requires access to the function and gradient oracles & Yes \\ \hline
Yeh et al. [115] & 2018 & Local & Intrinsic & Works for classification neural networks with regularization & Yes \\ \hline
Hara et al. [47] & 2019 & Local & Intrinsic & Models trained with SGD, saving intermediate checkpoints & Yes \\ \hline
Koh et al. [69] & 2019 & Both & Post-hoc & \(\mathcal{L}\) twice-differentiable and strictly convex w.r.t. \(\theta\) & Yes \\ \hline
Basu et al. [12] & 2019 & Both & Post-hoc & \(\mathcal{L}\) twice-differentiable and strictly convex w.r.t. \(\theta\) & Yes \\ \hline
Barshan et al. [10] & 2020 & Both & Post-hoc & \(\mathcal{L}\) twice-differentiable and strictly convex w.r.t. \(\theta\) & Yes \\ \hline
Feldman and Zhang [36] & 2020 & Global & Intrinsic & Requires training numerous models on subsampled datasets & No \\ \hline
Pruthi et al. [93] & 2020 & Local & Intrinsic & Requires saving intermediate checkpoints & Yes \\ \hline
Sui et al. [107] & 2021 & Local & Post-hoc & Works for classification neural networks & Yes \\ \hline
Chan et al. [23] & 2022 & Both & Intrinsic & Requires saving intermediate checkpoints & Yes \\ \hline
\end{tabular}

\begin{tabular}{|l|c|c|c|l|l|l|}
\hline **PROTOTYPES** & Year & Global / Local & Post-hoc / Intrinsic & Model or data-type specificity & Task & Other \\ \hline
CCCD [76] & 2003 & Both & NA & By-design & Classification & Set cover \\ \hline
ProtoSelect [16] & 2011 & Both & NA & By-design & Classification & Set cover \\ \hline
Kim et al. [65] & 2019 & Both & NA & By-design, tabular & Classification & Bayesian-based \\ \hline
ProtoPNet [22] & 2019 & Both & NA & By-design, FGCV & Classification & Neural network \\ \hline
ProSeNet [80] & 2019 & Both & NA & By-design, sequences & Classification & Neural network \\ \hline
ProtoTree [85] & 2021 & Both & NA & By-design, FGCV & Classification & Neural network \\ \hline
M-PEER [37] & 2023 & Both & Post-hoc & No & Regression & NA \\ \hline
\end{tabular}

\begin{tabular}{|l|c|c|c|l|c|l|}
\hline **CONCEPTS** & Year & Global / Local & Post-hoc / Intrinsic & Model or data-type specificity & Needs labeled concepts & Concepts format \\ \hline
OPAM [92] & 2017 & Global & NA & By-design, FGCV & Yes & Part-based \\ \hline
TCAV [66] & 2019 & Global & Post-hoc & Neural network & Yes & Same as input \\ \hline
ACE [39] & 2019 & Global & Post-hoc & Neural network & No & Segmented parts \\ \hline
Zhang et al. [116] & 2021 & Global & Post-hoc & Neural network & No & Segmented parts \\ \hline
CRAFT [35] & 2022 & Global & Post-hoc & Neural network & No & Crops \\ \hline
Particul [114] & 2017 & Global & NA & By-design, FGCV & Yes & Part-based \\ \hline
\end{tabular}
\end{table} Table 1: Comparison table between the different natural example-based formats and methods. NA: Not applicable, FGCV: Fine-grained computer vision.
2305.05590
Description Complexity of Regular Distributions
Myerson's regularity condition of a distribution is a standard assumption in economics. In this paper, we study the complexity of describing a regular distribution within a small statistical distance. Our main result is that $\tilde{\Theta}{(\epsilon^{-0.5})}$ bits are necessary and sufficient to describe a regular distribution with support $[0,1]$ within $\epsilon$ Levy distance. We prove this by showing that we can learn the regular distribution approximately with $\tilde{O}(\epsilon^{-0.5})$ queries to the cumulative density function. As a corollary, we show that the pricing query complexity to learn the class of regular distribution with support $[0,1]$ within $\epsilon$ Levy distance is $\tilde{\Theta}{(\epsilon^{-2.5})}$. To learn the mixture of two regular distributions, $\tilde{\Theta}(\epsilon^{-3})$ pricing queries are required.
Renato Paes Leme, Balasubramanian Sivan, Yifeng Teng, Pratik Worah
2023-05-09T16:25:59Z
http://arxiv.org/abs/2305.05590v1
# Description Complexity of Regular Distributions

###### Abstract

Myerson's regularity condition of a distribution is a standard assumption in economics. In this paper, we study the complexity of describing a regular distribution within a small statistical distance. Our main result is that \(\tilde{\Theta}(\epsilon^{-0.5})\) bits are necessary and sufficient to describe a regular distribution with support \([0,1]\) within \(\epsilon\) Levy-distance. We prove this by showing that we can learn the regular distribution approximately with \(\tilde{O}(\epsilon^{-0.5})\) queries to the cumulative density function. As a corollary, we show that the pricing query complexity to learn the class of regular distributions with support \([0,1]\) within \(\epsilon\) Levy-distance is \(\tilde{\Theta}(\epsilon^{-2.5})\). To learn a mixture of two regular distributions, \(\tilde{\Theta}(\epsilon^{-3})\) pricing queries are required.

## 1 Introduction

A Myerson-regular distribution is a distribution with CDF \(F\) such that the revenue curve in quantile space \(R(q)=q\cdot F^{-1}(1-q)\) is concave. For distributions that can be represented by a PDF \(f\) (i.e., have no point masses) this is equivalent to the (non-decreasing) monotonicity of the virtual value function \[\phi(v)=v-\frac{1-F(v)}{f(v)}. \tag{1}\] Myerson-regularity (or simply regularity) is a standard condition in Economics that was originally introduced by Myerson in his seminal paper on optimal auctions [11]. Since then it has played a fundamental role in the design and analysis of various economic setups such as bilateral trade [11], prior-independent mechanism design [1, 1, 10, 14, 15, 16], auctions from samples [1, 1, 13, 12, 14], and approximation in mechanism design and revenue management [1, 1, 15, 16, 17], among others. Besides being widely used in Economics, regular distributions also encompass various important classes of distributions:

* distributions with log-concave PDF, i.e. distributions where the PDF is of the form \(f(x)=\exp(-g(x))\) for a convex function \(g\), such as uniform, exponential and normal.
* distributions satisfying the monotone hazard rate (MHR) condition, which is a notion from reliability theory. If a random variable measures the time until a certain failure event happens (e.g. a light bulb goes out), then MHR means that the probability that the failure happens at any given moment, conditioned on it not having happened yet, weakly increases over time.
* equal-revenue distributions where pricing at each point of the support leads to the same revenue. As an example, consider the distribution supported on \([1,\infty)\) with CDF \(F(x)=1-1/x\). Those are known to be extremal regular distributions and are often used to derive lower bounds in revenue management.
* certain distributions arising from machine learning, for example: let \(x\) be a random feature vector in \(\mathbb{R}^{d}\) which is sampled from a uniform or Gaussian distribution restricted to a \(d\)-dimensional convex set \(K\), and let \(w\in\mathbb{R}^{d}\) be a fixed vector of weights. Then the distribution of the dot product \(\langle w,x\rangle\) is regular by the Prekopa-Leindler Theorem [14].

**Describing Regular Distributions.** In this paper we ask how many bits of information we need to describe a regular distribution. We obtain a sharp bound and apply it to derive new tight guarantees for learning regular distributions. Since a regular distribution is a continuous object with possibly unbounded support, we need to make a few assumptions to make the problem well-defined.
* **Bounded support**: We will assume that the distribution has support in \([0,1]\). Without this assumption, even representing the subclass of uniform distributions supported on \([n,n+1]\) requires infinitely many bits.
* **Levy-distance approximation**: We will allow for \(\epsilon\)-error in the representation measured in Levy-distance, which allows an \(\epsilon\) error both in values and probabilities. Formally:

\[\text{Levy}(F,G):=\inf\{\epsilon\text{ s.t. }F(v-\epsilon)-\epsilon\leq G(v)\leq F(v+\epsilon)+\epsilon,\forall v\} \tag{2}\]

Now we can state our main question formally:

**Definition 1**.: _We say that it is possible to describe a class of distributions \(\mathcal{C}\) with \(b\) bits and \(\epsilon\) error, if there is a class of distributions \(\mathcal{C}^{\prime}\) with \(|\mathcal{C}^{\prime}|\leq 2^{b}\) such that for every distribution \(F\in\mathcal{C}\) there exists \(F^{\prime}\in\mathcal{C}^{\prime}\) such that \(\text{Levy}(F,F^{\prime})\leq\epsilon\)._

Our main theorem is:

**Theorem 1.1**.: _It is possible to describe the class of regular distributions bounded in \([0,1]\) within \(\epsilon\) Levy-distance using \(\tilde{O}(1/\epsilon^{0.5})\) bits. Moreover, this is tight up to polylog factors._

It is useful to contrast this with the class of _general_ distributions supported in \([0,1]\), for which \(\Omega(1/\epsilon)\) bits are necessary and \(\tilde{O}(1/\epsilon)\) bits sufficient. For the sufficient part, we can represent a CDF \(F\) by the numbers \(\lfloor F(k\epsilon)/\epsilon\rfloor\) for \(k=0,1,\ldots,\lceil 1/\epsilon\rceil\). Given those, we can construct a distribution with CDF:

\[\hat{F}(x)=\epsilon\lfloor F(k\epsilon)/\epsilon\rfloor\text{ for }x\in[k\epsilon,(1+k)\epsilon)\]

which is \(\epsilon\)-close to \(F\) in Levy-distance. To see that \(\Omega(1/\epsilon)\) bits are necessary, it is enough to construct a set of \(2^{\Omega(1/\epsilon)}\) distributions such that each pair differ by at least \(\epsilon\) in Levy-distance. Given bits \(b_{0},\ldots,b_{k}\in\{0,1\}\) for \(k=1/(2\epsilon)\), construct a distribution such that:

\[F_{b}(x)=2(k+b_{k})\epsilon\text{ for }x\in[2k\epsilon,2(1+k)\epsilon),\quad k=0,1,\ldots,1/(2\epsilon)\]

For regular distributions, we will be able to construct a more succinct representation, using only the square root of the number of bits. However, we will need a more sophisticated sampling procedure.

**Beyond Regular Distributions.** While regular distributions are common in auction theory, many distributions encountered in practice are irregular. To generalize our results beyond regularity we define a notion of an irregularity coefficient, which measures how close to regular a distribution is:

\[\textsc{IrregCoeff}(F)=\inf\left\{\beta\geq 0\text{ s.t. }qF^{-1}(1-q)+\beta\int_{1-q}^{1}F^{-1}(x)dx\text{ is concave}\right\} \tag{3}\]

A \(0\)-irregular distribution is a regular distribution in the usual sense and an \(\infty\)-irregular distribution is a general distribution. The \(\beta\)-irregular class for \(\beta>0\) contains irregular distributions that are close enough to regularity to afford a low description complexity:

**Theorem 1.2**.: _It is possible to describe the class of \(O(1)\)-irregular distributions bounded in \([0,1]\) within \(\epsilon\) Levy-distance using \(\tilde{O}(1/\epsilon^{0.5})\) bits._

The notion of \(\beta\)-irregularity coincides with the notion of \(\alpha\)-strong regularity of Cole and Roughgarden [10] when the sign is negative.
The definition was originally intended to interpolate between regular and MHR distributions for positive values of \(\alpha\). When one considers negative values for \(\alpha\), we obtain a measure of how close a certain distribution is to regularity. A distribution is \(\alpha\)-strongly-regular in the sense of [10] whenever:

\[f^{\prime}(v)\cdot(1-F(v))\geq-(2-\alpha)f^{2}(v)\]

and hence it is \(\beta\)-irregular when it is \((-\beta)\)-strongly-regular, i.e.:

\[f^{\prime}(v)\cdot(1-F(v))\geq-(2+\beta)f^{2}(v) \tag{4}\]

**Application: Pricing Query Complexity.** Our first application is to settle the pricing query complexity of learning a regular distribution with \(\epsilon\)-error in Levy-distance. The pricing query complexity model was introduced in [11] with the goal of learning in economic settings where the only viable mechanism is posted prices. In such settings, we only observe if prices posted for different agents led to a sale or no sale. This notion is also useful when the auction of choice is a first-price auction, since we don't have access to truthful bids, but we still know if the bidder chose to bid above the reserve or not. In such settings, we use the binary sale/no-sale outcomes observed from previous periods to optimize the price in future auctions.

In this model a learning algorithm is able to interact with a distribution \(F\) via pricing queries: in each query, the algorithm chooses a price \(p_{t}\), a sample \(v_{t}\) is drawn from \(F\) and the algorithm only learns the sign \(\{+1,-1\}\) of \(v_{t}-p_{t}\). The goal of the algorithm is to estimate some parameters of the distribution such as the mean, median or monopoly price for a given class of distributions. Paes Leme, Sivan, Teng and Worah [11] give matching upper and lower bounds for several parameters of interest, but they leave a gap in estimating the pricing query complexity of learning the CDF of a regular distribution. They provide a \(\tilde{O}(1/\epsilon^{3})\) upper bound and a \(\Omega(1/\epsilon^{2.5})\) lower bound. We settle this question by providing a \(\tilde{O}(1/\epsilon^{2.5})\) upper bound, matching the lower bound up to polylogarithmic factors. This is obtained by computing the \(O(1/\epsilon^{0.5})\)-point description, using \(O(1/\epsilon^{2})\) pricing queries (via the Chernoff bound) to evaluate the CDF at each point:

**Theorem 1.3**.: _There is a \(\tilde{O}(1/\epsilon^{2.5})\) upper bound on the pricing query complexity of learning the CDF of a regular distribution within \(\epsilon\) Levy-distance error. This result is tight up to polylogarithmic factors._

**Application: Mixture Distributions.** We say that a distribution with CDF \(F\) is a mixture of \(k\) distributions \(F_{1},\ldots,F_{k}\) if there are weights \(w_{1},\ldots,w_{k}\geq 0\) with \(\sum_{i}w_{i}=1\) such that \(F(v)=\sum_{i}w_{i}F_{i}(v),\forall v\). In various applications, it is useful to write distributions as mixtures of other distributions. For example, Sivan and Syrgkanis [11] design auctions for mixtures of regular distributions whose performance depends on the number of distributions in the mixture. One may ask how many regular distributions are needed to represent a general distribution.
The description complexity bounds automatically imply the following corollary:

**Corollary 1.4**.: _There exists an irregular distribution that cannot be \(\epsilon\)-approximated in Levy-distance by a mixture of \(o(1/\epsilon^{0.5})\) regular distributions._

This follows directly from the fact that a mixture of \(k\) regular distributions can be described by \(O(k/\epsilon^{0.5})\) bits, while a general distribution requires \(\Omega(1/\epsilon)\) bits to represent. It is also useful in ML applications to represent distributions as mixtures of Gaussians. Our result also implies that one may require many Gaussians to represent a regular distribution, since a Gaussian distribution can be represented by \(\tilde{O}(1)\) bits.

**Corollary 1.5**.: _There exists a regular distribution that cannot be \(\epsilon\)-approximated in Levy-distance by a mixture of \(o(1/\epsilon^{0.5})\) Gaussian distributions._

**Learning Mixtures of Regular Distributions.** A mixture of two regular distributions can be described using \(\tilde{O}(1/\epsilon^{0.5})\) bits since we need \(\tilde{O}(1/\epsilon^{0.5})\) bits to describe each distribution and \(\tilde{O}(1)\) more bits to describe the weights. Given that, one would guess that the pricing query complexity of learning a mixture is also \(\tilde{O}(1/\epsilon^{2.5})\). We conclude the paper with the following rather surprising result:

**Theorem 1.6**.: _Let \(\mathcal{C}\) be the class of distributions supported in \([0,1]\) that can be written as a mixture of two regular distributions. To estimate the CDF of a distribution in that class within \(\epsilon\) in Levy-distance we require at least \(\Omega(1/\epsilon^{3})\) pricing queries._

**Why Levy Distance?** There are different ways to measure the distance between two distributions such as Total Variation (TV), Wasserstein, Kolmogorov, and Levy. We note that bounds on the Levy-distance automatically imply bounds on the Wasserstein distance. The Kolmogorov and TV distances are stronger notions, but it is impossible to obtain any approximation in either one using pricing queries. This is discussed in detail in [10]. The Kolmogorov distance between two distributions \(F\) and \(G\) is given by:

\[\text{Kol}(F,G):=\inf\{\epsilon\text{ s.t. }F(v)-\epsilon\leq G(v)\leq F(v)+\epsilon,\forall v\} \tag{5}\]

If a distribution is a point mass at some value in \([0,1]\), then to get any meaningful approximation in Kolmogorov distance one needs to estimate this value exactly, which is impossible using pricing queries. Another reason to study Levy-distance is that this metric has been considered in the literature on the sample complexity of learning revenue-optimal auctions; see Brustle et al [1] and Cherapanamjeri et al [1] for examples. A better understanding of the complexity of estimating a distribution within Levy-distance can lead to improved results on the complexity of auction learning.

**Related Work.** Our work is broadly situated in the theme of sample complexity in algorithmic economics, where the goal is to understand the minimal amount of information needed to describe or optimize a certain economic setup. There are several ways one can explore this question. For example, Dhangwatnotai et al [2] and Fu et al [13] ask to what extent one can optimize an auction using a single sample from a distribution. At the other extreme, we can ask how many samples from a certain distribution are required to optimize an auction (Cole and Roughgarden [14] and Morgenstern and Roughgarden [15]) or compute the optimal reserve price (Huang et al [12]).
Closest to our paper is the paper by Paes Leme et al [11] which considers a restricted query model. Instead of having access to samples of a distribution, we are only allowed to post a price and observe, for a fresh draw from that distribution, whether the draw was above or below the posted price. This is motivated by learning in two important scenarios: (i) settings where posted-price mechanisms are used and we only observe purchase/no-purchase decisions; (ii) learning in non-truthful auctions (like first-price auctions) where the bid is not an unbiased sample of the value but we can still observe whether a bidder decided to bid above the reserve price or not.

In this setting, Paes Leme et al [11] provide tight bounds on how to learn the monopoly price of MHR, regular and general distributions and show that it requires strictly fewer queries to learn the reserve price than to learn the entire CDF of a regular distribution. However, [11] leaves a gap on the number of pricing queries required to learn the CDF of a regular distribution: they show a lower bound of \(\Omega(1/\epsilon^{2.5})\) and an upper bound of \(\tilde{O}(1/\epsilon^{3})\). We settle this question by providing an algorithm with pricing query complexity \(\tilde{O}(1/\epsilon^{2.5})\).

Our work is also related to the line of work on query complexity in Computer Science, which asks for the minimum number of queries to a black box required to perform a certain task. This approach has been applied to learning theory, parallel computing, quantum computing, analysis of Boolean functions, and optimization, among others. Since this literature is too broad to be surveyed here, we refer the reader to the book by Kothari, Lee, Newman, and Szegedy [14].

## 2 Description Complexity of Regular Distributions

An \(\Omega(1/\epsilon^{0.5})\) lower bound on the description complexity of regular distributions is implicit in the \(\Omega(1/\epsilon^{2.5})\) lower bound on the pricing query complexity given in [11]. We can even modify their example to strengthen the result to work for the class of distributions with non-decreasing PDF functions (which is a subclass of both MHR distributions and regular distributions).

**Theorem 2.1**.: _There is a set of \(2^{\Omega(1/\epsilon^{0.5})}\) distributions with non-decreasing PDF supported on \([0,1]\) such that for every given pair the Levy-distance is at least \(\Omega(\epsilon)\)._

Proof.: Let \(n=1/(4\epsilon^{0.5})\) and let \(b=(b_{0},\ldots,b_{n-1})\in\{0,1\}^{n}\) be a sequence of bits. Now define a PDF \(f_{b}(x)\) such that for \(v\in[4\sqrt{\epsilon}k,4\sqrt{\epsilon}(k+1)]\) we have \(f_{b}(v)=2v\) if \(b_{k}=0\) and

\[f_{b}(v)=\left\{\begin{array}{ll}2v,&\text{ if }v\in[v_{0}-2\sqrt{\epsilon},v_{0}-\sqrt{\epsilon}]\cup[v_{0}+\sqrt{\epsilon},v_{0}+2\sqrt{\epsilon}];\\ 2v_{0},&\text{ if }v\in(v_{0}-\sqrt{\epsilon},v_{0}+\sqrt{\epsilon}).\end{array}\right.\qquad\text{ for }v_{0}=(4k+2)\sqrt{\epsilon}\]

if \(b_{k}=1\) (see Figure 1). First, we observe that all such distributions have non-decreasing PDFs. Now, if two such PDFs differ in a certain bit \(b_{k}\), then the Levy-distance between the corresponding distributions must be at least \(\Omega(\epsilon)\), as can be seen by comparing their CDFs around \(v_{0}=(4k+2)\sqrt{\epsilon}\). In particular, let \(b,b^{\prime}\in\{0,1\}^{n}\) be two sequences that only differ at index \(k\), with \(b_{k}=0\) and \(b^{\prime}_{k}=1\). Let \(F,G\) be the CDFs of the distributions constructed from the sequences \(b\) and \(b^{\prime}\), respectively.
Then \(F\) and \(G\) have \(\Omega(\epsilon)\) Levy-distance since \(F(v_{0})<G(v_{0}-0.1\epsilon)-0.1\epsilon\).

The above theorem shows that to describe a regular distribution within \(\epsilon\) Levy-distance, \(\Omega(\epsilon^{-0.5})\) bits are necessary. The main focus of the remainder of this section is to provide a matching upper bound of \(\tilde{O}(\epsilon^{-0.5})\) on the description complexity.

### Learning a smooth distribution

We will start with the assumption that the distribution is smooth, by which we mean that it has no point masses and is described by a PDF of class \(C^{1}\), i.e., the derivative \(f^{\prime}(v)\) exists and is continuous. Under this assumption, regularity can be written as the following condition:

\[f^{\prime}(v)\cdot(1-F(v))\geq-2f^{2}(v) \tag{6}\]

which corresponds to the derivative of the virtual value function being non-negative.

We start by proving Lemma 2.2, which gives sufficient conditions for learning a smooth distribution. In Section 2.2 we describe a simplified version of our algorithm to learn a distribution with a convex CDF (\(f^{\prime}(v)\geq 0\)). After we describe our main ideas in this special case, we then turn to learning a smooth regular distribution in Section 2.3. Finally, we drop the smoothness assumption in Section 2.4 by using a regularity-preserving mollification argument.

**Lemma 2.2**.: _For any unknown distribution \(F\) and \(m\) points \(x_{1}<x_{2}<\dots<x_{m}\), such that for each \(i\),_

* _we can identify_ \(F_{i}\in[0,1]\) _such that_ \(F(x_{i})\in[F_{i}-\epsilon,F_{i}+\epsilon]\)_, and_
* _either_ \(x_{i+1}-x_{i}\leq\epsilon\)_, or there exist_ \(\underline{f}_{i}\leq\overline{f}_{i}\) _such that_ \(f(x)\in[\underline{f}_{i},\overline{f}_{i}]\) _for every_ \(x\in[x_{i},x_{i+1}]\)_, satisfying_ \((x_{i+1}-x_{i})(\overline{f}_{i}-\underline{f}_{i})\leq\epsilon\)_,_

_we can construct a distribution \(\hat{F}\) within \(O(\epsilon)\) Levy-distance from \(F\) on \([x_{1},x_{m}]\)._

Proof.: We can assume that \(F_{1}\leq F_{2}\leq\dots\leq F_{m}\). If not, we can sort the values of \(F_{i}\) in increasing order and the properties in the lemma will continue to hold. To see that, first notice that if \(F_{i}>F_{i+1}\) then \(F(x_{i})\) and \(F(x_{i+1})\) are both in the range \([F_{i}-\epsilon,F_{i+1}+\epsilon]\). This happens because \(F(x_{i})\leq F(x_{i+1})\), and \(F(x_{i})\in[F_{i}-\epsilon,F_{i}+\epsilon]\), \(F(x_{i+1})\in[F_{i+1}-\epsilon,F_{i+1}+\epsilon]\). Then \(F(x_{i})\in[F_{i}-\epsilon,F_{i+1}+\epsilon]\subseteq[F_{i+1}-\epsilon,F_{i+1}+\epsilon]\), and \(F(x_{i+1})\in[F_{i}-\epsilon,F_{i+1}+\epsilon]\subseteq[F_{i}-\epsilon,F_{i}+\epsilon]\). Therefore, if we switch \(F_{i}\) and \(F_{i+1}\), the properties in the lemma still hold.

Consider the following distribution \(\hat{F}\): for each \(x_{i}\), \(\hat{F}(x_{i})=F_{i}\) and

\[\hat{F}(x)=F_{i}+\frac{F_{i+1}-F_{i}}{x_{i+1}-x_{i}}(x-x_{i})\text{ for }x\in(x_{i},x_{i+1}).\]

In other words, \(\hat{F}\) is defined by the estimates of \(F\) at the query points, with the curve in between filled in by linear interpolation. Now we show that \(\hat{F}\) is within \(O(\epsilon)\) Levy-distance from \(F\) on \([x_{1},x_{m}]\). We prove this by showing that this is true on every interval \([x_{i},x_{i+1}]\).
If \(x_{i+1}-x_{i}\leq\epsilon\), then \(F\) and \(\hat{F}\) are within \(\epsilon\) Levy-distance on \([x_{i},x_{i+1}]\) since for any \(x\in[x_{i},x_{i+1}]\) \[F(x-\epsilon)-\epsilon\leq F(x_{i})-\epsilon\leq F_{i}\leq\hat{F}(x)\leq F_{i +1}\leq F(x_{i+1})+\epsilon\leq F(x+\epsilon)+\epsilon.\] If \(x_{i+1}-x_{i}>\epsilon\), we show a stronger statement that \(\hat{F}\) is within \(O(\epsilon)\) Kolmogorov distance from \(F\) on \([x_{i},x_{i+1}]\), i.e., for any \(x\in[x_{i},x_{i+1}]\), \(|\hat{F}(x)-F(x)|\leq O(\epsilon)\). By the lemma statement, this is true for every \(x_{i}\). Now we show that this is also true for every \(x\in(x_{i},x_{i+1})\). Let \(\hat{f}_{i}=\frac{F_{i+1}-F_{i}}{x_{i+1}-x_{i}}\) be the PDF of \(\hat{F}\) on \((x_{i},x_{i+1})\). Then \[|\hat{F}(x)-F(x)| = \left|(F_{i}+(x-x_{i})\hat{f}_{i})-(F(x_{i})+\int_{x_{i}}^{x}f(t )dt)\right| \tag{7}\] \[\leq |F_{i}-F(x_{i})|+\left|(x-x_{i})\hat{f}_{i}-\int_{x_{i}}^{x}f(t)dt\right|\] \[\leq \epsilon+\int_{x_{i}}^{x}|\hat{f}_{i}-f(t)|dt\leq\epsilon+\int_{x _{i}}^{x_{i+1}}|\hat{f}_{i}-f(t)|dt.\] Here the first line is by the definition of the CDF and PDF; the second line is by \(|a+b|\leq|a|+|b|\) for any \(a,b\); the third line is by the definition of \(F_{i}\). Now we bound \(\hat{f}_{i}\). As \(f(x)\leq\overline{f}_{i}\) for every \(x\in(x_{i},x_{i+1})\), we have \[(x_{i+1}-x_{i})\hat{f}_{i}-2\epsilon=(F_{i+1}-\epsilon)-(F_{i}+\epsilon)\leq F (x_{i+1})-F(x_{i})=\int_{x_{i}}^{x_{i+1}}f(t)dt\leq(x_{i+1}-x_{i})\overline{f} _{i}.\] The same way, as \(f(x)\geq\underline{f}_{i}\) for every \(x\in(x_{i},x_{i+1})\), we have \[(x_{i+1}-x_{i})\hat{f}_{i}+2\epsilon=(F_{i+1}+\epsilon)-(F_{i}-\epsilon)\geq F (x_{i+1})-F(x_{i})=\int_{x_{i}}^{x_{i+1}}f(t)dt\geq(x_{i+1}-x_{i})\underline{f} _{i}.\] Combine the two inequalities above, we can bound \(\hat{f}_{i}\) by \[\underline{f}_{i}-\frac{2\epsilon}{x_{i+1}-x_{i}}\leq\hat{f}_{i}\leq\overline{ f}_{i}+\frac{2\epsilon}{x_{i+1}-x_{i}}.\] Thus for any \(t\in(x_{i},x_{i+1})\), as \(f(t)\in[\underline{f}_{i},\overline{f}_{i}]\), we have \(|\hat{f}_{i}-f(t)|\leq\overline{f}_{i}-\underline{f}_{i}+\frac{2\epsilon}{x_{i+ 1}-x_{i}}\). Apply this bound to (7), we get \[|\hat{F}(x)-F(x)| \leq \epsilon+\int_{x_{i}}^{x_{i+1}}|\hat{f}_{i}-f(t)|dt\] \[\leq \epsilon+(x_{i+1}-x_{i})\left(\overline{f}_{i}-\underline{f}_{i}+ \frac{2\epsilon}{x_{i+1}-x_{i}}\right)\] \[= 3\epsilon+(x_{i+1}-x_{i})(\overline{f}_{i}-\underline{f}_{i}) \leq 4\epsilon.\] ### Learning a smooth distribution with convex CDF In this section, we assume we have a smooth distribution \(F\) and an oracle that allows us to sample the value of its CDF \(F(\cdot)\) and its PDF \(f(\cdot)\). We start with a special case of convex CDF (\(f^{\prime}(v)>0\)) to describe a simplified version of our algorithm. For a high-level intuition, it is useful to perform the following 'heuristic' calculation based on Lemma 2.2: if \(\Delta x\) is the distance between query points and \(\Delta f\) is the variation of the PDF between those points, our goal is to obtain \(\Delta x\cdot\Delta f\leq\epsilon\). Approximating \(\Delta f\approx f^{\prime}(x)\cdot\Delta x\) we obtain \(\Delta x\leq\sqrt{\epsilon/f^{\prime}(x)}\). Hence, we will try to sample points at a rate \(\sqrt{f^{\prime}(x)/\epsilon}\). We make this formal in the following proof and bound the number of queries we need to guarantee this sample rate. Note that our proof is algorithmic and will be easily converted later to a pricing query complexity bound. 
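The following minimal Python sketch (ours, not part of the original argument; the function name and the cap of the step size at \(\sqrt{\epsilon}\) are illustrative choices) implements this heuristic sampling rate of \(\sqrt{f^{\prime}(x)/\epsilon}\):

```python
import math

def adaptive_grid(f_prime, eps, lo=0.0, hi=1.0):
    """Heuristic grid from the discussion above: choose the spacing dx so that
    dx * (variation of f over the step) ~ f'(x) * dx^2 stays below eps,
    i.e. dx ~ sqrt(eps / f'(x)); the step is capped at sqrt(eps)."""
    xs = [lo]
    x = lo
    while x < hi:
        slope = max(f_prime(x), 1.0)          # cap: never step by more than sqrt(eps)
        x = min(x + math.sqrt(eps / slope), hi)
        xs.append(x)
    return xs

# Toy usage with F(x) = x^2 on [0, 1], so f(x) = 2x and f'(x) = 2.
grid = adaptive_grid(lambda x: 2.0, eps=1e-4)
print(len(grid))   # on the order of 1/sqrt(eps) query points
```

Querying \(F\) at such a grid and interpolating linearly, as in Lemma 2.2, is the idea that the formal argument below makes rigorous, with binary search used to control the number of points placed where \(f\) is large.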
**Theorem 2.3**.: _Let \(F\) be a smooth distribution with a convex CDF. Then \(\tilde{O}(\epsilon^{-0.5})\) oracle queries to \(F\) and \(f\) are sufficient to learn the distribution within \(\epsilon\) Levy-distance._

Proof.: We will show it is possible to find a set of \(O(1/\epsilon^{0.5})\) points satisfying the conditions of Lemma 2.2. We start by noticing that for a convex CDF, the PDF \(f\) is monotone non-decreasing.

**Step 1.** We set \(K=\log(1/\epsilon)\) and use binary search to identify points \(x_{0},x_{1},\ldots,x_{2K+3}\) such that for each \(k=0,\ldots,K+1\) we have \(|x_{2k+1}-x_{2k+2}|\leq\epsilon\) and for the interval \(I_{k}=[x_{2k},x_{2k+1}]\) it holds that:

* \(f(v)\leq 1\) for \(v\in I_{0}\)
* \(2^{k-1}\leq f(v)\leq 2^{k}\) for \(v\in I_{k}\), \(k=1,...,K\)
* \(f(v)\geq 2^{K}\) for \(v\in I_{K+1}\)

Identifying each point takes \(O(\log(1/\epsilon))=\tilde{O}(1)\) queries to \(f(\cdot)\). Given that we have \(\tilde{O}(1)\) such points, we used a total of \(\tilde{O}(1)\) queries so far. We observe that the last interval \(I_{K+1}\) already satisfies the conditions of Lemma 2.2 since its length \(|I_{K+1}|\) is at most \(\epsilon\):

\[1\geq\int_{v\in I_{K+1}}f(v)dv\geq 2^{K}|I_{K+1}|=\frac{1}{\epsilon}|I_{K+1}|.\]

**Step 2.** For each interval \(I_{k}\) with \(k=0,...,K\), partition \(I_{k}\) into intervals \(I_{k,1},I_{k,2},\cdots\) of length \(\frac{1}{2^{k}}\sqrt{\epsilon}\). The number of endpoints added is at most \(\frac{|I_{k}|}{\sqrt{\epsilon}/2^{k}}=2^{k}\epsilon^{-1/2}|I_{k}|\) for each interval \(I_{k}\), and sums up to at most

\[\sum_{k=0}^{K}2^{k}\epsilon^{-1/2}|I_{k}|=\epsilon^{-1/2}|I_{0}|+\sum_{k=1}^{K}2^{k}\epsilon^{-1/2}|I_{k}|\leq\epsilon^{-1/2}+\sum_{k=1}^{K}\epsilon^{-1/2}\int_{v\in I_{k}}2f(v)dv\leq\epsilon^{-1/2}+\epsilon^{-1/2}\int_{v\in[0,1]}2f(v)dv=\epsilon^{-1/2}+2\epsilon^{-1/2}=3\epsilon^{-1/2}.\]

Thus if we query \(f\) and \(F\) at all the endpoints of the intervals from the first two steps, the total number of queries needed is at most \(O(\epsilon^{-1/2})\).

**Step 3.** For each interval \(I_{k,j}=[v_{k,j},v_{k,j}+2^{-k}\sqrt{\epsilon}]\subseteq I_{k}\) partitioned in Step 2, define \(\underline{f}_{k,j}=f(v_{k,j})\) and \(\overline{f}_{k,j}=f(v_{k,j}+2^{-k}\sqrt{\epsilon})\). If \(\overline{f}_{k,j}-\underline{f}_{k,j}=t_{k,j}2^{k}\sqrt{\epsilon}\), then partition the interval into \(\tilde{t}_{k,j}:=\max(1,\lceil t_{k,j}\rceil)\) intervals of length \(\frac{1}{\tilde{t}_{k,j}2^{k}}\sqrt{\epsilon}\), and query \(F\) at the newly added endpoints. Any neighboring points \(\overline{v},\underline{v}\) then have distance \(\overline{v}-\underline{v}\leq\frac{1}{\tilde{t}_{k,j}2^{k}}\sqrt{\epsilon}\), hence each such interval satisfies the second condition in Lemma 2.2:

\[(\overline{v}-\underline{v})(\overline{f}-\underline{f})\leq\frac{1}{\tilde{t}_{k,j}2^{k}}\sqrt{\epsilon}\cdot t_{k,j}2^{k}\sqrt{\epsilon}\leq\epsilon\]

which allows us to learn the distribution up to \(\epsilon\) Levy-distance.

Finally, we only need to bound the number of queries needed in this step. At most \(\lfloor t_{k,j}\rfloor\) additional queries are needed in \(I_{k,j}\), which has length \(2^{-k}\sqrt{\epsilon}\). Now if \(I_{k}=[x_{2k},x_{2k+1}]\) we have:

\[2^{k-1}\geq f(x_{2k+1})-f(x_{2k})=\sum_{j}(f(v_{k,j+1})-f(v_{k,j}))=\sum_{j}t_{k,j}2^{k}\sqrt{\epsilon},\]

which implies \(\sum_{j}t_{k,j}\leq 1/(2\sqrt{\epsilon})\). Thus in this step, for all \(k\), the total number of queries on \(F\) is \(\tilde{O}(\epsilon^{-1/2})\).
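As an illustration of Steps 2 and 3 (a sketch under our own naming, assuming direct oracle access to \(f\) as in Theorem 2.3), the refinement of a single bucket \(I_{k}\) on which \(2^{k-1}\leq f\leq 2^{k}\) can be written as follows; querying \(F\) at the returned points and applying Lemma 2.2 yields the piecewise-linear estimate.

```python
import math

def refine_bucket(f, a, b, k, eps):
    """Steps 2-3 above for one bucket I_k = [a, b] with 2^(k-1) <= f <= 2^k:
    cut I_k into pieces of length sqrt(eps) / 2^k (Step 2), then split piece j
    into max(1, ceil(t_kj)) equal parts, where t_kj * 2^k * sqrt(eps) is the
    variation of f over the piece (Step 3)."""
    step = math.sqrt(eps) / 2 ** k
    xs, x = [a], a
    while x < b:
        nxt = min(x + step, b)
        t = (f(nxt) - f(x)) / (2 ** k * math.sqrt(eps))   # t_{k,j}
        parts = max(1, math.ceil(t))
        xs.extend(x + (nxt - x) * i / parts for i in range(1, parts + 1))
        x = nxt
    return xs

# Toy usage: F(x) = x^2, so f(x) = 2x; on [0.5, 1] we have 1 <= f <= 2, i.e. k = 1.
points = refine_bucket(lambda x: 2 * x, 0.5, 1.0, k=1, eps=1e-4)
print(len(points))   # O(1/sqrt(eps)) points for this bucket
```

The same refinement, with \(2^{k}\) replaced by \(2^{k+\ell}\), reappears in the analysis for general regular distributions in the next subsection.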
### Learning a smooth regular distribution

Unlike convex functions, the PDF of a regular distribution can increase or decrease, so we can't easily partition the interval as in the previous lemma. However, regularity imposes a rate at which the PDF can decrease (equation (6)). Our first lemma shows that the PDF of a regular distribution can't decrease by a factor of 2 too many times. We make this intuition formal in the next lemma:

**Lemma 2.4**.: _Let \(F\) be a smooth regular distribution with PDF \(f\). Fix an integer \(\ell\). Now assume \(S\) is a collection of disjoint intervals of the form \([v_{i},v^{\prime}_{i})\) such that:_

* _for each interval_ \([v_{i},v^{\prime}_{i})\) _we have_ \(2^{-\ell}\leq 1-F(v^{\prime}_{i})\leq 1-F(v_{i})<2^{-\ell+1}\)
* _for each_ \(v\in[v_{i},v^{\prime}_{i})\) _we have_ \(f(v^{\prime}_{i})\leq f(v)\leq f(v_{i})\) _and_ \(f(v^{\prime}_{i})\leq\frac{1}{2}f(v_{i})\)

_Then the number of intervals is bounded by a constant \(|S|\leq 32\)._

Proof.: We will count the maximum number of intervals in \(S\). We can assume w.l.o.g. that \(f(v^{\prime}_{i})=\frac{1}{2}f(v_{i})\), as we can always take a subinterval where the last condition holds with equality. Consider an interval \([v_{i},v^{\prime}_{i})\) in the collection \(S\) with \(f(v_{i})\in[2^{k-1},2^{k})\). For any \(v\in[v_{i},v^{\prime}_{i})\),

\[f^{\prime}(v)\geq-\frac{2f^{2}(v)}{1-F(v)}\geq-\frac{2f^{2}(v_{i})}{1-F(v^{\prime}_{i})}\geq-\frac{2\cdot(2^{k})^{2}}{2^{-\ell}}=-2^{2k+\ell+1}. \tag{8}\]

Here the first inequality is by the definition of regular distributions; the second and the third inequalities hold because \(f(v^{\prime}_{i})\leq f(v)\leq f(v_{i})\leq 2^{k}\) and \(1-F(v)\geq 1-F(v^{\prime}_{i})\geq 2^{-\ell}\). Therefore

\[-\frac{1}{2}\cdot 2^{k-1}\geq f(v^{\prime}_{i})-f(v_{i})=\int_{v_{i}}^{v^{\prime}_{i}}f^{\prime}(v)dv\geq-2^{2k+\ell+1}(v^{\prime}_{i}-v_{i}). \tag{9}\]

Here the first inequality is by \(f(v^{\prime}_{i})=\frac{1}{2}f(v_{i})\) and \(f(v_{i})\geq 2^{k-1}\); the second inequality is by inequality (8). By inequality (9) we can lower bound the length of the interval by \(v^{\prime}_{i}-v_{i}\geq\frac{1}{2}2^{k-1-(2k+\ell+1)}=2^{-k-\ell-3}\). Thus the CDF change in the interval can be lower bounded as follows:

\[F(v^{\prime}_{i})-F(v_{i})=\int_{v_{i}}^{v^{\prime}_{i}}f(v)dv\geq f(v^{\prime}_{i})(v^{\prime}_{i}-v_{i})\geq 2^{k-2}\cdot 2^{-k-\ell-3}=2^{-\ell-5}. \tag{10}\]

Here the first inequality is by \(f(v)\geq f(v^{\prime}_{i})\) for every \(v\) in the interval, since the PDF is non-increasing there; the second inequality is by \(f(v^{\prime}_{i})=\frac{1}{2}f(v_{i})\geq 2^{k-2}\) and \(v^{\prime}_{i}-v_{i}\geq 2^{-k-\ell-3}\). Since the intervals \([v_{i},v^{\prime}_{i})\) are disjoint, the sum of \(F(v^{\prime}_{i})-F(v_{i})\) is at most \(2^{-\ell+1}-2^{-\ell}=2^{-\ell}\). Therefore, there are at most \(2^{5}=O(1)\) such intervals.

Lemma 2.4 shows that in an interval where the quantile is bounded within a factor of \(2\), the PDF can decrease by a factor of \(\frac{1}{2}\) at most \(32\) times. A useful corollary is that in this case the PDF can decrease by at most a constant factor, namely \(2^{33}\), between any two points.

**Corollary 2.5**.: _Let \(F\) be a smooth regular distribution and \(\ell\) an integer.
For any two points \(x<x^{\prime}\) with \(2^{-\ell}\leq 1-F(x^{\prime})<1-F(x)\leq 2^{-\ell+1}\) it holds that \(f(x^{\prime})>2^{-33}f(x)\)._ Proof.: If \(f(x)\geq 2^{33}f(x^{\prime})\), as \(f\) is continuous, \([x,x^{\prime})\) can be partitioned to \(33\) intervals \([x,v_{1})=[v_{0},v_{1})\), \([v_{1},v_{2})\), \([v_{2},v_{3})\), \(\cdots\), \([v_{31},v_{32})\), \([v_{32},x^{\prime})=[v_{32},v_{33})\) such that for any interval \([v_{i},v_{i+1})\), \(f(v_{i+1})\leq\frac{1}{2}f(v_{i})\) which contradicts Lemma 2.4. With Lemma 2.4 we can also obtain the following key component of the learning algorithm. **Lemma 2.6**.: _Let \(F\) be a smooth regular distribution and \(\ell\) an integer and \(\epsilon>0\) a parameter. For any interval \([\underline{x},\overline{x}]\) with \(2^{-\ell}\leq 1-F(\overline{x})<1-F(\underline{x})\leq 2^{-\ell+1}\), using \(\tilde{O}(1)\) queries to \(f\) we can find a sequence of points \(\underline{x}=x_{1}\leq x_{2}\leq\cdots\leq x_{m}=\overline{x}\) in the interval, such that: for any sub-interval \([x_{i},x_{i+1})\), either_ * \(x_{i+1}-x_{i}\leq\epsilon\)_;_ * \(f(x_{i+1})<2^{-33}\)_, which implies for any point_ \(x\in[x_{i},x_{i+1})\)_,_ \(f(x)\leq 1\)_;_ * _for any two points_ \(x,x^{\prime}\in[x_{i},x_{i+1})\)_,_ \(2^{-36}f(x)<f(x^{\prime})<2^{36}f(x)\)_._ Proof.: Consider the following binary search algorithm: For any interval \([a,b]\) (begin with \([a,b]=[\underline{x},\overline{x}-\epsilon]\) and only \(\underline{x},\overline{x}-\epsilon,\overline{x}\) in the output sequence), 1. If \(b-a\leq\epsilon\), stop searching \([a,b]\); 2. If \(f(b)<4f(a)\), stop searching \([a,b]\); 3. If \(f(b)<2^{-33}\), stop searching \([a,b]\); 4. Otherwise, add \(\frac{a+b}{2}\) to the output sequence, then search both \([a,\frac{a+b}{2}]\) and \([\frac{a+b}{2},b]\). We show that all the first three cases in the algorithm are valid stopping conditions. In Case 1, we don't need to search the interval \([a,b]\) as the endpoints are close enough to satisfy the required property of the lemma. In Case 2, consider interval \([a,b]\) with \(f(b)\leq 4f(a)\), and any \(x<x^{\prime}\) in \([a,b]\). By Corollary 2.5\(f(x^{\prime})<2^{36}f(x)\). If \(f(x^{\prime})\geq 2^{36}f(x)\), as \(f(b)<4f(a)\), there exist at least \(33\) intervals \([a,v_{1})\), \([v_{1},v_{2})\), \(\cdots\), \([v_{j-2},v_{j-1})\), \([v_{j-1},x)\), \([x^{\prime},v_{j+1})\), \([v_{j+1},v_{j+2})\), \(\cdots\), \([v_{31},v_{32})\), \([v_{32},b)\) such that for any interval listed above, the PDF between the two endpoints decreases by at least a factor of \(\frac{1}{2}\). This also contradicts Lemma 2.4 that in \([\underline{x},\overline{x}]\) the PDF can only decrease by half for at most \(32\) times. Similarly, \(f(x^{\prime})>2^{-36}f(x)\) otherwise there are at least \(33\) intervals between \(x\) and \(x^{\prime}\) where the PDF decreases by half. In Case 3, if \(f(b)<2^{-33}\), by Corollary 2.5 there does not exist any \(x\in[a,b)\) such that \(f(x)>1>2^{33}f(b)\). Thus for any \(x\in[a,b)\), \(f(x)\leq 1\). Thus, all first 3 cases of the algorithm are valid stopping conditions. Now we analyze how many points are added to the sequence. First notice that for any \(v\leq\overline{x}-\epsilon\), \(f(v)<\frac{2^{33}}{\epsilon}\): otherwise if \(f(v)\geq\frac{2^{33}}{\epsilon}\), as \(f(x)>2^{-33}f(v)\geq\frac{1}{\epsilon}\) for every \(x\in[v,v+\epsilon]\) by Corollary 2.5, we have \(F(v+\epsilon)-F(v)=\int_{v}^{v+\epsilon}f(t)dt>\frac{1}{\epsilon}((v+\epsilon) -v)=1\), which is impossible. 
Consider the following set \(S\) containing all searched intervals \([a_{i},b_{i}]\) that are one level from the leaves: in other words, the algorithm searches \([a_{i},\frac{a_{i}+b_{i}}{2}]\) and \([\frac{a_{i}+b_{i}}{2},b_{i}]\), but does not search deeper into either interval. If we list all intervals in \(S\) in increasing order (of either endpoint) as \([a_{1},b_{1}],[a_{2},b_{2}],\cdots,[a_{s},b_{s}]\), then \(f\) increases by at least a factor of 4 from \(a_{i}\) to \(b_{i}\), while it can decrease by a factor of 2 from \(b_{i}\) to \(a_{i+1}\) at most 32 times in total due to Lemma 2.4. Therefore, \(f(b_{i+1})>2f(b_{i})\) for all but at most 32 indices \(i\). Partition \(S\) into at most 32 subsequences of intervals \(S_{d}=([a_{d+1},b_{d+1}]\), \([a_{d+2},b_{d+2}]\), \(\cdots\), \([a_{d+j},b_{d+j}])\) with \(f(b_{d+i+1})>2f(b_{d+i})\) for every \(i\in[j-1]\). Since for every \([a_{i},b_{i}]\in S\), \(2^{-33}\leq f(b_{i})<\frac{2^{33}}{\epsilon}\), each \(S_{d}\) contains at most \(O(\log\frac{1}{\epsilon})=\tilde{O}(1)\) intervals, thus \(S\) also contains \(\tilde{O}(1)\) intervals.

Notice that each interval in the search tree satisfies one of the following four conditions:

1. The interval is in \(S\);
2. The interval is a child of an interval in \(S\);
3. In the search tree, the interval is an ancestor of an interval in \(S\);
4. In the search tree, the interval is a child of an ancestor of an interval in \(S\).

Since the length of each searched interval is at least \(\epsilon\), the depth of the search tree is \(O(\log\frac{1}{\epsilon})=\tilde{O}(1)\). Therefore, each interval \([a_{i},b_{i}]\in S\) defined in the previous paragraph has \(\tilde{O}(1)\) ancestors in the search tree. Thus each interval in \(S\) can account for at most \(\tilde{O}(1)\) intervals under the above four conditions, which means there are \(\tilde{O}(|S|)=\tilde{O}(1)\) intervals in the search tree. We conclude that the proposed algorithm uses \(\tilde{O}(1)\) queries to \(f\) to output a sequence of points satisfying the conditions in the lemma statement.

Now we are ready to generalize the algorithm for distributions with monotone PDF to arbitrary regular distributions. The algorithm has almost the same structure, but we need to be more careful in the analysis of query complexity.

**Theorem 2.7**.: _Let \(F\) be a smooth regular distribution. Then \(\tilde{O}(\epsilon^{-0.5})\) oracle queries to \(F\) and \(f\) are sufficient to learn the distribution within \(\epsilon\) Levy-distance._

Proof.: Let \(L=\log\frac{1}{\epsilon}\). We describe the steps of the algorithm as follows. We start with "Step 0" just to keep the correspondence with Steps 1, 2, and 3.

**Step 0.** Use binary search to identify \(2(L+1)\) points \(z_{i}\) such that for the intervals \(I^{(\ell)}=[z_{2\ell-1},z_{2\ell}]\) the CDF satisfies \(1-2^{-\ell+1}\leq F(v)<1-2^{-\ell}\), the interval \(I^{(L+1)}\) contains all values with \(F(v)\geq 1-2^{-L}=1-\epsilon\), and \(z_{2\ell+1}-z_{2\ell}\leq\epsilon\). Binary search requires \(O(\log\frac{1}{\epsilon})=\tilde{O}(1)\) queries for each interval, thus \(\tilde{O}(1)\) queries in total for all intervals as there are only \(\tilde{O}(1)\) intervals.

We are going to run an algorithm similar to the case with monotone PDF for each interval and show that \(\tilde{O}(\epsilon^{-0.5})\) queries to \(F\) and \(f\) are enough to learn the CDF on each \(I^{(\ell)}\), thus \(\tilde{O}(\epsilon^{-0.5})\) queries are enough in total.

**Step 1.** All operations below are applied to the interval \(I^{(\ell)}\) with a fixed \(\ell\).
We only focus on \(I^{(\ell)}\) with \(\ell\leq L\), as there is no need to learn \(I^{(L+1)}\) within \(\epsilon\) Levy-distance. When there is no ambiguity, we omit the superscript \(\ell\). For the interval \(I^{(\ell)}=[\underline{x},\overline{x}]\), by Lemma 2.6 we can find a sequence of points \(x_{1}=\underline{x}\leq x_{2}\leq\cdots\leq x_{m}=\overline{x}\) in the interval, such that for any subinterval \([x_{i},x_{i+1})\), either (a) \(x_{i+1}-x_{i}\leq\epsilon\); or (b) \(f(x_{i+1})<2^{-33}\), which implies that for any point \(x\in[x_{i},x_{i+1})\), \(f(x)\leq 1\); or (c) for any two points \(x,x^{\prime}\in[x_{i},x_{i+1})\), \(2^{-36}f(x)<f(x^{\prime})<2^{36}f(x)\).

Construct interval collections \(I_{0},I_{1},\cdots,I_{K},I_{K+1}\) for some \(K=\tilde{O}(1)\) as follows. For all subintervals \([x_{i},x_{i+1})\) satisfying (a), group them into an interval collection \(I_{K+1}\); for all subintervals satisfying (b), group them into an interval collection \(I_{0}\); for all subintervals satisfying (c), add \([x_{i},x_{i+1})\) to the interval collection \(I_{k}\) if \(2^{k-1}\leq 2^{33}f(x_{i+1})<2^{k}\). Different from the analysis for monotone non-decreasing \(f\), for any interval \([x_{i},x_{i+1})\) in \(I_{k}\), every point \(x\in[x_{i},x_{i+1})\) satisfies \(f(x)<2^{36}f(x_{i+1})<2^{k+3}\), and \(f(x)>2^{-36}f(x_{i+1})\geq 2^{k-70}\). As the intervals in the collections are constructed by Lemma 2.6, which only uses \(\tilde{O}(1)\) queries, the total number of queries to \(f\) used in this step is \(\tilde{O}(1)\).

**Step 2.** For each interval collection \(I_{k}\) with \(0\leq k\leq K\), partition all points in \(I_{k}\) into intervals \(I_{k,1},I_{k,2},\cdots\) of length \(\frac{1}{2^{k+\ell}}\sqrt{\epsilon}\). Denote by \(|I_{k}|\) the total length of all intervals in \(I_{k}\). The number of endpoints added is at most \(\frac{|I_{k}|}{\sqrt{\epsilon}/2^{k+\ell}}=2^{k+\ell}\epsilon^{-1/2}|I_{k}|\) for each interval collection \(I_{k}\), and sums up to at most

\[\sum_{k=0}^{K}2^{k}\epsilon^{-1/2}|I_{k}|=\epsilon^{-1/2}|I_{0}|+\sum_{k=1}^{K}2^{k+\ell}\epsilon^{-1/2}|I_{k}|\leq\epsilon^{-1/2}+\sum_{k=1}^{K}\epsilon^{-1/2}\int_{v\in I_{k}}2^{70+\ell}f(v)dv\leq\epsilon^{-1/2}+2^{70+\ell}\epsilon^{-1/2}\int_{v\in I^{(\ell)}}f(v)dv=\epsilon^{-1/2}+2^{70+\ell}\epsilon^{-1/2}\cdot 2^{-\ell}=O(\epsilon^{-1/2}).\]

Thus if we query \(f\) and \(F\) at all the endpoints of the intervals from this step, the total number of queries needed is at most \(O(\epsilon^{-1/2})\).

**Step 3.** For each interval \(I_{k,j}=[v_{k,j},v_{k,j}+2^{-k-\ell}\sqrt{\epsilon}]\subseteq I_{k}\) partitioned in Step 2, if \(f(v_{k,j}+2^{-k-\ell}\sqrt{\epsilon})-f(v_{k,j})=t_{k,j}\cdot 2^{k+\ell}\sqrt{\epsilon}\), we further partition the interval into \(\tilde{t}_{k,j}:=\max(1,\lceil t_{k,j}\rceil)\) intervals \([v_{k,j,0},v_{k,j,1}]\), \([v_{k,j,1},v_{k,j,2}]\), \(\cdots\), \([v_{k,j,\tilde{t}_{k,j}-1},v_{k,j,\tilde{t}_{k,j}}]\) of length \(\frac{1}{\tilde{t}_{k,j}2^{k+\ell}}\sqrt{\epsilon}\) (with \(v_{k,j,0}=v_{k,j}\) and \(v_{k,j,\tilde{t}_{k,j}}=v_{k,j+1}\)), and query \(F\) at the newly added endpoints. By the definition of regular distributions, for any \(x\in[v_{k,j},v_{k,j+1}]\),

\[f^{\prime}(x)\geq-\frac{2f^{2}(x)}{1-F(x)}>-\frac{2\cdot(2^{k+3})^{2}}{2^{-\ell}}=-2^{\ell+2k+7}\]

since \(f(x)<2^{k+3}\) and \(F(x)\leq 1-2^{-\ell}\).
Thus we can bound \(f(x)\) for \(x\in[v_{k,j},v_{k,j+1}]\) as follows:

\[f(x)\leq f(v_{k,j+1})-(-2^{\ell+2k+7})\cdot(v_{k,j+1}-v_{k,j})<f(v_{k,j})+t_{k,j}\cdot 2^{k+\ell}\sqrt{\epsilon}+2^{\ell+2k+7}2^{-k-\ell}\sqrt{\epsilon}=f(v_{k,j})+t_{k,j}\cdot 2^{k+\ell}\sqrt{\epsilon}+2^{k+7}\sqrt{\epsilon}\stackrel{{ def}}{{=}}\overline{f},\]
\[f(x)\geq f(v_{k,j})+(-2^{\ell+2k+7})\cdot(v_{k,j+1}-v_{k,j})>f(v_{k,j})-2^{\ell+2k+7}2^{-k-\ell}\sqrt{\epsilon}=f(v_{k,j})-2^{k+7}\sqrt{\epsilon}\stackrel{{ def}}{{=}}\underline{f}.\]

Thus for any neighboring points \(v_{k,j,a},v_{k,j,a+1}\) in the partition, with distance \(v_{k,j,a+1}-v_{k,j,a}=\frac{1}{\tilde{t}_{k,j}2^{k+\ell}}\sqrt{\epsilon}\), if \(x\in[v_{k,j,a},v_{k,j,a+1}]\), we have \(f(x)\in(\underline{f},\overline{f})\) with \(\overline{f}-\underline{f}<t_{k,j}\cdot 2^{k+\ell}\sqrt{\epsilon}+2^{k+8}\sqrt{\epsilon}\), and

\[(v_{k,j,a+1}-v_{k,j,a})(\overline{f}-\underline{f})<\frac{1}{\tilde{t}_{k,j}2^{k+\ell}}\sqrt{\epsilon}\left(t_{k,j}\cdot 2^{k+\ell}\sqrt{\epsilon}+2^{k+8}\sqrt{\epsilon}\right)\leq\frac{t_{k,j}}{\tilde{t}_{k,j}}\epsilon+2^{8-\ell}\epsilon=O(\epsilon).\]

Figure 2 provides an illustrative example of the analysis. By Lemma 2.2 we can learn a distribution within \(\epsilon\) Levy-distance from \(F\) on \(\bigcup I_{k,j}\) for \(k\in[0,K]\). As all intervals in \(I_{K+1}\) have length at most \(\epsilon\), we do not need to learn those intervals to satisfy \(\epsilon\) Levy-distance.

Now we analyze how many additional points (i.e. queries to \(F\)) are needed in this step. For each interval \(I_{k,j}\), at most \(\tilde{t}_{k,j}-1=\max(1,\lceil t_{k,j}\rceil)-1\leq\max(0,t_{k,j})\) additional queries are needed. This means that if \(f\) increases on \(I_{k,j}\), i.e. \(f(v_{k,j+1})>f(v_{k,j})\), then at most \(t_{k,j}=\frac{f(v_{k,j+1})-f(v_{k,j})}{2^{k+\ell}\sqrt{\epsilon}}\) additional queries are needed; otherwise, when \(f\) does not increase on \(I_{k,j}\), no additional query is required.

Figure 2: Illustration of the analysis on interval \([v_{k,j},v_{k,j+1}]\) with \(t_{k,j}\leq 1\). The two black solid lines have slope \(-2^{\ell+2k+7}\), which is a lower bound on \(f^{\prime}(v)\) in this range by the definition of regular distributions. Given \(f(v_{k,j})\) and \(f(v_{k,j+1})\), the PDF \(f\) of a regular distribution cannot go beyond the two black lines. As the CDF of any distribution is the area below the PDF curve, the CDF difference of any two regular distributions with the given CDF and PDF values at \(v_{k,j}\) and \(v_{k,j+1}\) (for example, the red curve and the blue curve) is smaller than the area of the dashed rectangle, which is \((v_{k,j+1}-v_{k,j})(\overline{f}-\underline{f})\).

Let \(I_{k,\uparrow}=\{x:f^{\prime}(x)\geq 0,2^{k-70}<f(x)<2^{k+3}\}\) and \(I_{k,\downarrow}=\{x:f^{\prime}(x)<0,2^{k-70}<f(x)<2^{k+3}\}\) be the sets of values such that \(f(x)\) is bounded in the range \((2^{k-70},2^{k+3})\) and \(f\) is non-decreasing or decreasing, respectively. Notice that \(I_{k}\) is a subset of \(I_{k,\uparrow}\cup I_{k,\downarrow}\). Therefore, we can upper-bound the total number of additional queries in \(I_{k}\) by

\[\sum_{j}\max(0,t_{k,j})\leq\sum_{j}\frac{f(v_{k,j+1})-f(v_{k,j})}{2^{k+\ell}\sqrt{\epsilon}}\leq\frac{1}{2^{k+\ell}\sqrt{\epsilon}}\int_{I_{k,\uparrow}}f^{\prime}(x)dx. \tag{11}\]

As \(f(x)\) can increase to at most \(2^{k+3}\) on \(I_{k,\uparrow}\cup I_{k,\downarrow}\), we have \(\int_{I_{k,\uparrow}}f^{\prime}(x)dx\leq 2^{k+3}-\int_{I_{k,\downarrow}}f^{\prime}(x)dx\).
Notice that for any \(x\) such that \(2^{k-70}<f(x)<2^{k+3}\), when \(f^{\prime}(x)<0\), we have \(f^{\prime}(x)>-\frac{2f^{2}(x)}{1-F(x)}>-2^{2k+\ell+7}\). Also, by

\[2^{k-70}|I_{k,\downarrow}|<\int_{I_{k,\downarrow}}f(x)dx\leq 2^{-\ell}\]

we have \(|I_{k,\downarrow}|<2^{70-k-\ell}\). Therefore,

\[\int_{I_{k,\uparrow}}f^{\prime}(x)dx\leq 2^{k+3}-\int_{I_{k,\downarrow}}f^{\prime}(x)dx<2^{k+3}+\int_{I_{k,\downarrow}}2^{2k+\ell+7}dx<2^{k+3}+2^{2k+\ell+7}|I_{k,\downarrow}|<2^{k+3}+2^{2k+\ell+7}\cdot 2^{70-k-\ell}<2^{k+3}+2^{k+77}<2^{k+78}. \tag{12}\]

Combining (11) and (12), the total number of additional queries in \(I_{k}\) is at most

\[\frac{1}{2^{k+\ell}\sqrt{\epsilon}}\int_{I_{k,\uparrow}}f^{\prime}(x)dx<\frac{1}{2^{k+\ell}\sqrt{\epsilon}}\cdot 2^{k+78}=2^{78-\ell}\epsilon^{-1/2}.\]

Thus for every \(k\), the number of queries in Step 3 is \(O(\epsilon^{-1/2})\), which means that the total number of queries in Step 3 is \(\tilde{O}(\epsilon^{-1/2})\) since there are \(\tilde{O}(1)\) different values of \(k\).

**Discussion on \(\beta\)-irregular distributions.** Observe that there is nothing special about the factor of \(2\) in equation (6). By repeating the proof with a different irregularity coefficient we obtain a proof of Theorem 1.2.

### Description complexity of a general (non-smooth) regular distribution

When the regular distribution does not have a smooth CDF, the PDF may not exist, and our algorithm for learning the CDF using oracle queries to the PDF may not apply to general regular distributions. However, we show in the following lemma that any regular distribution \(F\) can be uniformly approximated by a smooth regular distribution \(\tilde{F}\) within arbitrarily small Levy-distance \(\delta\). Thus, to describe \(F\) within \(\epsilon\) Levy-distance, it suffices to describe \(\tilde{F}\) within \(\epsilon\) Levy-distance, which is doable via the algorithm in the previous subsections.

What we describe below is a variation of the well-known mollification procedure in real analysis, which replaces a function with its convolution with a very concentrated \(C^{\infty}\) function. The only specific detail below is that we argue we can do the mollification in a way that is regularity-preserving.

**Lemma 2.8**.: _Given any regular distribution \(F\), for any \(\delta>0\) there is another regular distribution \(\tilde{F}\) such that \(\text{Levy}(F,\tilde{F})\leq\delta\) and \(\tilde{F}\) has no point-masses and has a \(C^{\infty}\) CDF._

Proof.: Let \(R(q)=qF^{-1}(1-q)\) be the revenue curve associated with \(F\). Since \(F\) is regular, the curve \(R\) is concave, but it may not be \(C^{\infty}\). There are different techniques to obtain a uniform approximation of concave functions by \(C^{\infty}\) concave functions, one of which is taking a convolution with a non-negative mollifier (see [1] for example). For our purpose, it is enough that there exists a concave \(C^{\infty}\) function \(\hat{R}(x)\) such that \(\hat{R}(0)=0\) and \(|R(x)-\hat{R}(x)|\leq\delta^{2}/2,\forall x\in[0,1]\). Now, if we take \(\tilde{R}(x)=\hat{R}(x)+\delta^{2}x(1-x)\) we obtain a \(C^{\infty}\) strongly concave function with \(|R(x)-\tilde{R}(x)|\leq\delta^{2}\). Now, define \(\tilde{F}\) such that \(\tilde{R}(q)=q\tilde{F}^{-1}(1-q)\). Because \(\tilde{R}\) is strongly concave, \(\tilde{F}\) has no point masses. The function \(\tilde{F}^{-1}\) is strictly increasing and \(C^{\infty}\), so \(\tilde{f}(v)>0\) in its domain.
As a consequence, \(\tilde{F}(v)\) is also \(C^{\infty}\). Finally observe that for \(q\in[\delta,1-\delta]\) we have \(|\tilde{F}^{-1}(q)-F^{-1}(q)|\leq\delta^{2}/\delta=\delta\). Now, for a given \(q\), let \(v=F^{-1}(q)\). Then we know that \(v-\delta\leq\tilde{F}^{-1}(q)\leq v+\delta\). Hence \(\tilde{F}(v-\delta)\leq q=F(v)\leq\tilde{F}(v+\delta)\), which implies that \(\text{Levy}(F,\tilde{F})\leq\delta\).

**Putting it all together.** We can apply Theorem 2.7 to the smooth approximation from Lemma 2.8 and obtain a \(\tilde{O}(1/\epsilon^{0.5})\) bound on the number of queries to \(F\) and \(f\) needed to learn any regular distribution. To complete the result, we still need to argue about the number of bits necessary to represent the result, since both the points \(x\) queried and the results \(F(x)\) and \(f(x)\) are in principle real numbers. However, we can see each query to \(F\) (or \(f\)) as follows: query \(F(x)\) (or \(f(x)\)) where \(x\) is an integer multiple of \(\epsilon\), and receive an approximate answer within \(F(x)\pm\epsilon\) (or \(f(x)\pm\epsilon\)). All of the lemmas and algorithms already accommodate \(\pm\epsilon\) errors in \(x\), \(F\), and \(f\). This automatically leads to a \(\tilde{O}(1/\epsilon^{0.5})\) bit complexity bound. Combined with the lower bound in Theorem 2.1, this proves our main result (Theorem 1.1).

## 3 Pricing Query Complexity of Learning a Regular Distribution

In this section, we study the pricing query complexity of learning a regular distribution within \(\epsilon\) Levy-distance. In particular, for each pricing query, we submit a price \(x\), and a value \(v\sim F\) is realized. We cannot directly observe \(v\), but can observe \(\text{sign}(v-x)\in\{-1,+1\}\). By a Chernoff bound, \(\tilde{O}(1/\epsilon^{2})\) pricing queries are sufficient to learn \(F(x)\) with \(\epsilon\) additive error.

However, the \(\tilde{O}(\epsilon^{-0.5})\) description complexity upper bound is not by itself sufficient to show that the pricing query complexity of learning a regular distribution on \([0,1]\) within \(\epsilon\) Levy-distance is \(\tilde{O}(\epsilon^{-2.5})\). There are two problems remaining. Firstly, the algorithms behind the description complexity upper bound rely on queries not only to the CDF, but also to the PDF. Secondly, for non-smooth regular distributions, we prove the description complexity result by learning a smooth regular distribution \(\tilde{F}\) that is uniformly close to \(F\), but not exactly \(F\). Thus when we query \(F(x)\), we are only able to get \(\tilde{F}(x+\epsilon_{1})+\epsilon_{2}\), where \(|\epsilon_{1}|\) can be made arbitrarily small (and we use the bound \(|\epsilon_{1}|\leq\frac{\epsilon^{2}}{2}\)) and \(|\epsilon_{2}|<\epsilon\). Thus, the problem we want to solve reduces to the following theorem:

**Theorem 3.1**.: _For any regular distribution \(F\) with a smooth PDF, suppose that we have perturbed oracle access to \(F\) such that every time we query \(x\), the oracle \(F^{*}\) returns \(F^{*}(x)=F(x+\epsilon_{1})+\epsilon_{2}\) for some unknown \(\epsilon_{1}\in[-\frac{\epsilon^{2}}{2},\frac{\epsilon^{2}}{2}]\) and \(\epsilon_{2}\in[-\epsilon,\epsilon]\). Then \(\tilde{O}(\epsilon^{-0.5})\) queries to the perturbed oracle \(F^{*}\) are sufficient to learn \(F\) within \(\epsilon\) Levy-distance._

### Query complexity of distributions with convex \(F\)

Before proving Theorem 3.1 for general regular distributions, we first prove the theorem for a distribution with convex \(F\) to give more intuition.
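As a concrete illustration of the query model (a minimal sketch of ours, not from the paper; `sample_from_F`, the helper names, and the constant in the sample size are illustrative assumptions), the oracle \(F^{*}\) can be simulated from pricing queries, and a PDF value can be read off from two CDF estimates; this finite-difference proxy \(f^{*}_{\gamma}\) is exactly the quantity used in the proof below.

```python
import math
import random

def pricing_query(sample_from_F, p):
    """One pricing query: draw a fresh v ~ F and observe only sign(v - p)."""
    return 1 if sample_from_F() >= p else -1

def estimate_cdf(sample_from_F, p, eps, delta=0.01):
    """Estimate F(p) within +-eps (with probability 1-delta) by averaging
    O(log(1/delta)/eps^2) sale/no-sale bits, via a Chernoff/Hoeffding bound."""
    n = math.ceil(2 * math.log(2 / delta) / eps ** 2)
    below = sum(1 for _ in range(n) if pricing_query(sample_from_F, p) == -1)
    return below / n                      # fraction of draws with v < p estimates F(p)

def pdf_proxy(sample_from_F, x, gamma, eps):
    """The finite-difference proxy f*_gamma(x) = (F*(x+gamma) - F*(x-gamma)) / (2*gamma)."""
    return (estimate_cdf(sample_from_F, x + gamma, eps)
            - estimate_cdf(sample_from_F, x - gamma, eps)) / (2 * gamma)

# Toy usage with F uniform on [0, 1]: F(0.3) = 0.3 and f = 1 everywhere.
uniform = random.random
print(estimate_cdf(uniform, 0.3, eps=0.05))
print(pdf_proxy(uniform, 0.5, gamma=0.1, eps=0.05))
```

In the analysis, the estimated CDF is additionally only a proxy for a smooth approximation \(\tilde{F}\) of \(F\), which is why the perturbed oracle in Theorems 3.1 and 3.2 allows both the \(\epsilon_{1}\) shift in the query point and the \(\epsilon_{2}\) additive error.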
The spirit of the proof for the general case is almost identical.

**Theorem 3.2**.: _For any distribution \(F\) with a smooth PDF and a convex CDF, suppose that we have perturbed oracle access to \(F\) such that every time we query \(x\), the oracle \(F^{*}\) returns \(F^{*}(x)=F(x+\epsilon_{1})+\epsilon_{2}\) for some unknown \(\epsilon_{1}\in[-\frac{\epsilon^{2}}{2},\frac{\epsilon^{2}}{2}]\) and \(\epsilon_{2}\in[-\epsilon,\epsilon]\). Then \(\tilde{O}(\epsilon^{-0.5})\) queries to the perturbed oracle \(F^{*}\) are sufficient to learn \(F\) within \(\epsilon\) Levy-distance._

Proof.: We first show how to define a (perturbed) query to \(f\) via the perturbed oracle. Suppose that we want to query \(f\) at a point \(x\). When we query \(x+\gamma\) and \(x-\gamma\) to the perturbed oracle \(F^{*}\), we will get \(F^{*}(x+\gamma)=F(x+\gamma+\epsilon_{1})+\epsilon_{2}\) and \(F^{*}(x-\gamma)=F(x-\gamma+\epsilon_{3})+\epsilon_{4}\) for some unknown \(\epsilon_{1},\epsilon_{3}\in[-\frac{\epsilon^{2}}{2},\frac{\epsilon^{2}}{2}]\) and \(\epsilon_{2},\epsilon_{4}\in[-\epsilon,\epsilon]\). Let

\[f^{*}_{\gamma}(x)=\frac{F^{*}(x+\gamma)-F^{*}(x-\gamma)}{2\gamma}.\]

By the mean value theorem, since \(F\) is smooth, there exists \(x^{*}\in[x-\gamma+\epsilon_{3},x+\gamma+\epsilon_{1}]\) such that

\[f(x^{*})=\frac{F(x+\gamma+\epsilon_{1})-F(x-\gamma+\epsilon_{3})}{(x+\gamma+\epsilon_{1})-(x-\gamma+\epsilon_{3})}=\frac{(F^{*}(x+\gamma)-\epsilon_{2})-(F^{*}(x-\gamma)-\epsilon_{4})}{2\gamma+\epsilon_{1}-\epsilon_{3}}=\frac{2\gamma f^{*}_{\gamma}(x)-\epsilon_{2}+\epsilon_{4}}{2\gamma+\epsilon_{1}-\epsilon_{3}}.\]

Then we have

\[f^{*}_{\gamma}(x)=\frac{2\gamma+\epsilon_{1}-\epsilon_{3}}{2\gamma}f(x^{*})+\frac{\epsilon_{2}-\epsilon_{4}}{2\gamma}=(1+\delta_{1})f(x+\gamma_{1})+\delta_{2}, \tag{13}\]

for some \(\delta_{1},\delta_{2},\gamma_{1}\) where \(|\delta_{1}|\leq\frac{\epsilon^{2}}{2\gamma}\), \(|\delta_{2}|\leq\frac{\epsilon}{\gamma}\), \(|\gamma_{1}|<2\gamma\). Later, we will make perturbed queries to \(f\) via \(f^{*}_{\gamma}(x)\), and specify the accuracy parameter \(\gamma\) needed in each step of the learning algorithm.

**Changes to Lemma 2.2.** Suppose that we are only able to identify \(F_{i}\in[0,1]\) such that \(F(x_{i}+\epsilon_{i})\in[F_{i}-\epsilon,F_{i}+\epsilon]\) for some unknown small \(\epsilon_{i}\in[-\epsilon^{2}/2,\epsilon^{2}/2]\). We show that Lemma 2.2 still holds. Observe that the distribution constructed from input \((x_{1},\cdots,x_{m})\) and \((F_{1},\cdots,F_{m})\) in the proof of Lemma 2.2 is within \(O(\epsilon)\) Levy-distance of the distribution constructed from input \((x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{m})=(x_{1}+\epsilon_{1},x_{2}+\epsilon_{2},\cdots,x_{m}+\epsilon_{m})\) and \((F_{1},\cdots,F_{m})\). Furthermore, as either \(x^{\prime}_{i+1}-x^{\prime}_{i}\) is \(O(\epsilon)\), or \((x^{\prime}_{i+1}-x^{\prime}_{i})(\overline{f}_{i}-\underline{f}_{i})<2(x_{i+1}-x_{i})(\overline{f}_{i}-\underline{f}_{i})=O(\epsilon)\) (since \(x^{\prime}_{i+1}-x^{\prime}_{i}\leq x_{i+1}-x_{i}+\epsilon^{2}<2(x_{i+1}-x_{i})\)), the distribution constructed from input \((x^{\prime}_{1},\cdots,x^{\prime}_{m})\) and \((F_{1},\cdots,F_{m})\) is within \(O(\epsilon)\) Levy-distance from \(F\). Thus the distribution constructed from input \((x_{1},\cdots,x_{m})\) and \((F_{1},\cdots,F_{m})\) is within \(O(\epsilon)\) Levy-distance from \(F\) on \([x_{1},x_{m}]\).
To summarize, to apply Lemma 2.2, we do not need \(F_{i}\) to be an estimate of \(F(x)\) with \(O(\epsilon)\) error; letting \(F_{i}=F^{*}(x)\) is good enough. Changes to Theorem 2.3.We now describe in detail when \(F\) is convex (i.e. with monotone \(f\)), how does the algorithm change. Step 1We set \(K=\log(1/\epsilon)\) and use binary search to identify points \(x_{0},x_{1},\ldots,x_{2K+3}\) such that for each \(k=0,\ldots,K+1\) we have \(|x_{2k+1}-x_{2k+2}|=O(\epsilon)\) and for the interval \(I_{k}=[x_{2k},x_{2k+1}]\) it holds that: * \(f(v)\leq 2\) for \(v\in I_{0}\) * \(2^{k-2}\leq f(v)\leq 2^{k+1}\) for \(v\in I_{k}\), \(k=1,...,K\) * \(f(v)\geq 2^{K-1}\) for \(v\in I_{K+1}\) Notice that now we do not have access to \(f\). Thus in order to perform the binary search, we query \(f_{\gamma}^{*}\) with \(\gamma=4\epsilon\). By (13), for any query \(x\), \(f_{\gamma}^{*}(x)=(1+\delta_{1})f(x+\gamma_{1})+\delta_{2}\) for some \(\delta_{1},\delta_{2},\gamma_{1}\) with \(|\delta_{1}|\leq\frac{\epsilon^{2}}{2\gamma}\leq\frac{1}{8}\), \(|\delta_{2}|\leq\frac{\epsilon}{\gamma}=\frac{1}{4}\), \(|\gamma_{1}|\leq 2\gamma=8\epsilon\). Thus \[\frac{7}{8}f(x-8\epsilon)-\frac{1}{4}\leq f_{\gamma}^{*}(x)\leq\frac{9}{8}f(x +8\epsilon)+\frac{1}{4}.\] Therefore, when \(f_{\gamma}^{*}(x)\geq 2^{k-1}\), \(f(x+8\epsilon)>2^{k-2}\); when \(f_{\gamma}^{*}(x)\leq 2^{k}\), \(f(x-8\epsilon)<2^{k+1}\). Thus, suppose we do binary search on \(f_{\gamma}^{*}\) to find a sequence of points \(x_{0}^{\prime},x_{1}^{\prime},\cdots,x_{2K+3}^{\prime}\) such that for every \(k=0,\cdots,K\), \(f_{\gamma}^{*}(x_{2k+1}^{\prime})\leq 2^{k}\), \(f_{\gamma}^{*}(x_{2k+2}^{\prime})\geq 2^{k}\). Then if we set \(x_{2k+1}=x_{2k+1}^{\prime}-8\epsilon\), \(x_{2k+2}=x_{2k+2}^{\prime}+8\epsilon\), the sequence \(x_{0},\cdots,x_{2K+3}\) satisfies all the properties required for the step. Identifying each point still takes \(O(\log(1/\epsilon))=\tilde{O}(1)\) queries to \(f_{\gamma}^{*}(\cdot)\) (thus \(F(\cdot)\)). Given that we have \(\tilde{O}(1)\) such points, we used a total of \(\tilde{O}(1)\) queries. The last interval \(I_{K+1}\) again satisfies the conditions of Lemma 2.2. Step 2For each interval \(I_{k}\) with \(k=0,...,K\), partition \(I_{k}\) to intervals \(I_{k,1},I_{k,2},\cdots\) of length \(\frac{1}{2^{k}}\sqrt{\epsilon}\). The number of endpoints added is at most \(\frac{|I_{k}|}{\sqrt{\epsilon}/2^{k}}=2^{k}\epsilon^{-1/2}|I_{k}|\) for each interval \(I_{k}\), and sums up to at most \[\sum_{k=0}^{K}2^{k}\epsilon^{-1/2}|I_{k}| = \epsilon^{-1/2}|I_{0}|+\sum_{k=1}^{K}2^{k}\epsilon^{-1/2}|I_{k}|\] \[\leq \epsilon^{-1/2}+\sum_{k=1}^{K}\epsilon^{-1/2}\int_{v\in I_{k}}4f( v)dv\] \[\leq \epsilon^{-1/2}+\epsilon^{-1/2}\int_{v\in[0,1]}4f(v)dv\] \[= \epsilon^{-1/2}+4\epsilon^{-1/2}=5\epsilon^{-1/2}.\] The only change compared to the previous section is that \(2^{k}\leq 4f(v)\) instead of \(2f(v)\) in the second line. Thus if we query \(F^{*}\) for all the endpoints of the intervals from the first two steps, the total number of queries needed is at most \(O(\epsilon^{-1/2})\). Step 3For each interval \(I_{k,j}=[v_{k,j},v_{k,j+1}]=[v_{k,j},v_{k,j}+2^{-k}\sqrt{\epsilon}]\subseteq I _{k}\) partitioned in Step 2, assume that \(2^{-k}\sqrt{\epsilon}>10\epsilon\), otherwise as \(|I_{k,j}|=O(\epsilon)\), it already satisfies the condition of Lemma 2.2. Define \(\underline{f}_{k,j}\) and \(\overline{f}_{k,j}\) as follows. Let \(\gamma^{*}=2^{-k-2}\sqrt{\epsilon}\). 
If \(I_{k,j}\) is the leftmost interval in \(I_{k}\), then \(\underline{f}_{k,j}=2^{k-2}\), otherwise \(\underline{f}_{k,j}=f_{\gamma^{*}}^{*}(v_{k,j}-2^{-k-1}\sqrt{\epsilon})-2^{k +3}\sqrt{\epsilon}\); if \(I_{k,j}\) is the rightmost interval in \(I_{k}\), then \(\overline{f}_{k,j}=2^{k+1}\), otherwise \(\overline{f}_{k,j}=f_{\gamma^{*}}^{*}(v_{k,j}+2^{-k}\sqrt{\epsilon}+2^{-k-1} \sqrt{\epsilon})+2^{k+3}\sqrt{\epsilon}\). Notice that by (13), if \(\underline{f}_{k,j}\neq 2^{k-2}\), then \[f_{\gamma^{*}}^{*}(v_{k,j}-2^{-k-1}\sqrt{\epsilon})=(1+\delta_{1})f(v_{k,j}-2^{ -k-1}\sqrt{\epsilon}+\gamma_{1})+\delta_{2}\] for some \(\delta_{1},\delta_{2},\gamma_{1}\) where \(|\delta_{1}|\leq\frac{\epsilon}{2\gamma^{*}}=2^{k+3}\epsilon^{1.5}\), \(\delta_{2}\leq\frac{\epsilon}{\gamma^{*}}=2^{k+2}\sqrt{\epsilon}\), \(|\gamma_{1}|<2\gamma=2^{-k-1}\sqrt{\epsilon}\). This means that \(v_{k,j}-2^{-k-1}\sqrt{\epsilon}+\gamma_{1}\in[v_{k,j}-2^{-k}\sqrt{\epsilon},v_ {k,j}]\). Since \(f(v_{k,j})\leq 2^{K+1}=\frac{2}{\epsilon}\), we have \(f(v_{k,j})|\delta_{1}|\leq\frac{2}{\epsilon}\cdot 2^{k+1}\epsilon^{1.5}=2^{k+2}\sqrt{\epsilon}\). Then \[\underline{f}_{k,j} = f_{\gamma^{*}}^{*}(v_{k,j}-2^{-k-1}\sqrt{\epsilon})-2^{k+3}\sqrt{\epsilon}\] \[= (1+\delta_{1})f(v_{k,j}-2^{-k-1}\sqrt{\epsilon}+\gamma_{1})+\delta_{2 }-2^{k+3}\sqrt{\epsilon}\] \[\leq (1+\delta_{1})f(v_{k,j})+2^{k+2}\sqrt{\epsilon}-2^{k+3}<f(v_{k,j}).\] This means that \(\underline{f}_{k,j}\) is indeed a lower bound of \(f(v_{k,j})\). Symmetrically, \(\overline{f}_{k,j}\) is indeed an upper bound of \(f(v_{k,j+1})\). If \(\overline{f}_{k,j}-\underline{f}_{k,j}=t_{k,j}2^{k}\sqrt{\epsilon}\), then partition the interval to \(\tilde{t}_{k,j}:=\max(1,\lceil t_{k,j}\rceil-2^{5})\) intervals of length \(\frac{1}{t_{k,j}2^{k}}\sqrt{\epsilon}\), and query \(F\) for the newly added endpoints. As any neighboring points \(\overline{v},\underline{v}\) have distance \(\overline{v}-\underline{v}\leq\frac{1}{t_{k,j}2^{k}}\sqrt{\epsilon}\). Hence this interval satisfies the second condition in Lemma 2.2 : \[(\overline{v}-\underline{v})(\overline{f}_{k,j}-\underline{f}_{k,j})\leq \frac{1}{\tilde{t}_{k,j}2^{k}}\sqrt{\epsilon}\cdot(\tilde{t}_{k,j}+2^{5})2^{k }\sqrt{\epsilon}=O(\epsilon)\] which allows us to learn the distribution up to \(\epsilon\) Levy-distance. Finally, we only need to bound the number of queries needed in this step. Observe that if \(I_{k,j}\) is not the leftmost interval in \(I_{k}\), \[\underline{f}_{k,j} = (1+\delta_{1})f(v_{k,j}-2^{-k-1}\sqrt{\epsilon}+\gamma_{1})+ \delta_{2}-2^{k+3}\sqrt{\epsilon}\] \[\geq (1-|\delta_{1}|)f(v_{k,j}-2^{-k}\sqrt{\epsilon})-2^{k+2}\sqrt{ \epsilon}-2^{k+3}\sqrt{\epsilon}\] \[\geq f(v_{k,j-1})-2^{k+4}\sqrt{\epsilon}.\] Symmetrically, if \(I_{k,j}\) is not the rightmost interval in \(I_{k}\), \(\overline{f}_{k,j}\leq f(v_{k,j+2})+2^{k+4}\sqrt{\epsilon}\). Thus, \[\overline{f}_{k,j}-\underline{f}_{k,j}\leq f(v_{k,j+2})-f(v_{k,j-1})+2^{k+5} \sqrt{\epsilon}.\] Since the interval \(I_{k,j}\) is partitioned to \(\max(1,\lceil t_{k,j}\rceil-2^{5})\) intervals, the number of queries to \(f_{\gamma^{*}}^{*}\) is at most \(\frac{f(v_{k,j+2})-f(v_{k,j-1})}{2^{k}\sqrt{\epsilon}}\). 
If we sum up \(\frac{f(v_{k,j+2})-f(v_{k,j-1})}{2^{k}\sqrt{\epsilon}}\) for every \(j\), we find that the total number of queries to \(F^{*}\) generated in this step is at most1 \[\sum_{j}\frac{f(v_{k,j+2})-f(v_{k,j-1})}{2^{k}\sqrt{\epsilon}}\leq\frac{3}{2^{k}\sqrt{\epsilon}}(2^{k+1}-2^{k-2})=O(\epsilon^{-0.5})\] by simplifying the telescoping sum. Footnote 1: To be more precise, for the rightmost interval in \(I_{k}\), \(f(v_{k,j+2})\) is replaced by \(2^{k+1}\); for the leftmost interval in \(I_{k}\), \(f(v_{k,j-1})\) is replaced by \(2^{k-2}\). Thus in this step, for each \(k\), the number of queries on \(F^{*}\) for added points is \(\tilde{O}(\epsilon^{-1/2})\). Also, the number of queries on \(f_{\gamma^{*}}^{*}\) for \(I_{k}\) is twice the total number of endpoints in \(I_{k,j}\), which is \(O(\epsilon^{-1/2})\) by Step 2. As there are only \(\tilde{O}(1)\) different values for \(k\), the total number of queries on \(F^{*}\) is \(\tilde{O}(\epsilon^{-1/2})\). To summarize, we have described how to modify the algorithm for learning a distribution with a smooth convex CDF using oracle queries to \(F\) and \(f\) into an algorithm that uses only oracle queries to \(F^{*}\). The query complexity is asymptotically the same, namely \(\tilde{O}(\epsilon^{-1/2})\). ### Query complexity of general regular distributions Essentially the same modifications described for the convex CDF can be applied to general regular distributions, yielding a proof of Theorem 3.1. We omit the details, since the argument is essentially a rewriting of the proof of Theorem 2.7, and instead describe the main modifications. In Step 0, the partition of \([0,1]\) into intervals with quantile within a factor of \(2\) can also be done via queries to \(F^{*}\). In Step 1 of the proof of Theorem 2.7, the partition is doable with queries to \(f\) replaced by queries to \(f^{*}_{O(\epsilon)}\), as Lemma 2.6 only requires estimating \(f\) within a constant factor, which is the same as Step 1 for Theorem 3.2. The only difference is that the value range of \(f\) in each interval may get expanded by a constant factor. This leads to the number of queries in Step 2 increasing by a constant factor. In the analysis of Step 3, similarly to Step 3 of the modified algorithm in Theorem 3.2, the accuracy needed for estimating \(f\) is \(O(2^{k+\ell}\sqrt{\epsilon})\), which is achievable via queries to \(f^{*}_{O(2^{-k-\ell}\sqrt{\epsilon})}\). A simple Chernoff bound shows that a query to \(F^{*}\) (defined in Theorem 3.1) can be simulated by \(O(1/\epsilon^{2})\) pricing queries. Combining Theorem 3.1 with a Chernoff bound, we obtain a proof of Theorem 1.3, matching the lower bound established in [13]. ## 4 Mixture Distributions Our second application is the analysis of mixtures of regular distributions. Corollaries 1.4 and 1.5 are straightforward from Theorem 1.1. We end the paper with the proof of the more surprising result (Theorem 1.6) that even though mixtures of \(O(1)\) regular distributions can be described with \(\tilde{O}(1/\epsilon^{0.5})\) bits, they require \(\Omega(1/\epsilon^{3})\) pricing queries to be learned within error \(\epsilon\), the same asymptotic amount as a general distribution. **Theorem** (Restatement of Theorem 1.6).: _Let \(\mathcal{C}\) be the class of distributions supported in \([0,1]\) that can be written as a mixture of two regular distributions.
To estimate the CDF of a distribution in that class within \(\epsilon\) in Levy-distance, we require at least \(\Omega(1/\epsilon^{3})\) pricing queries._ Proof of Theorem 1.6.: We will use Lemma 3.5 in [13], which says that it takes \(\Omega(1/\epsilon^{2})\) pricing queries to distinguish between any two distributions with CDFs \(F_{1}\) and \(F_{2}\) such that \[\frac{1}{1+\epsilon}\leq\frac{F_{1}(v)}{F_{2}(v)},\frac{1-F_{1}(v)}{1-F_{2}(v)}\leq 1+\epsilon,\ \forall v. \tag{14}\] We will construct a set \(S\) of \(\Omega(1/\epsilon)\) distributions, each of which (1) is a mixture of two regular distributions and (2) satisfies Equation (14) when paired with the uniform distribution on \([0,1]\). Moreover, the Levy-distance between any two of them is \(\Omega(\epsilon)\). Construct a class of regular distributions, parameterized by \(a<2\) and \(\delta>0\), with PDF \(f_{a,\delta}\) defined by \[f_{a,\delta}(v)=\left\{\begin{array}{ll}a,&\text{if }v\in[0,\frac{1-\delta}{a}],\\ \frac{a^{2}}{2\delta}(\frac{1+\delta}{a}-v),&\text{if }v\in(\frac{1-\delta}{a},\frac{1+\delta}{a}),\\ 0,&\text{if }v\in[\frac{1+\delta}{a},1].\end{array}\right.\] In other words, \(f_{a,\delta}\) is uniform with density \(a\) until the point \(v=\frac{1-\delta}{a}\), where the CDF reaches \(1-\delta\); then \(f_{a,\delta}\) decreases linearly to \(0\), and remains \(0\) afterwards. We first show that any distribution in this class is regular. We want to show that for any \(x\in[\frac{1-\delta}{a},\frac{1+\delta}{a}]\), \(f^{\prime}_{a,\delta}(x)\geq-\frac{2f^{2}_{a,\delta}(x)}{1-F_{a,\delta}(x)}\). Firstly, in this range, \(f^{\prime}_{a,\delta}(x)=-\frac{a^{2}}{2\delta}\). Secondly, as in this range \(1-F_{a,\delta}(x)=\frac{1}{2}(\frac{1+\delta}{a}-x)f_{a,\delta}(x)\), we have \[-\frac{2f^{2}_{a,\delta}(x)}{1-F_{a,\delta}(x)}=-\frac{2f_{a,\delta}(x)}{\frac{1}{2}(\frac{1+\delta}{a}-x)}=-\frac{2\cdot\frac{a^{2}}{2\delta}(\frac{1+\delta}{a}-x)}{\frac{1}{2}(\frac{1+\delta}{a}-x)}=-\frac{2a^{2}}{\delta}<-\frac{a^{2}}{2\delta}=f^{\prime}_{a,\delta}(x).\] Thus for any \(a,\delta\), \(f_{a,\delta}\) is the PDF of a regular distribution. Now consider the distribution with PDF \(g_{a,\delta}(v)=2-f_{a,\delta}(v)\) for every \(v\in[0,1]\). This distribution is also regular, since its PDF is monotone non-decreasing. Furthermore, the uniform mixture of \(f_{a,\delta}\) and \(g_{a,\delta}\) is the uniform distribution on \([0,1]\). Let \(F\) be the CDF of the uniform distribution on \([0,1]\). Define the following class of distributions with CDF \(F_{a}\) and PDF \(f_{a}\), parameterized by \(a\in[1.5,1.6]\), as follows. For any \(v\in[0,1]\), \[f_{a}(v)=\frac{f_{a,\epsilon/2}(v)+g_{a,\epsilon}(v)}{2}.\] Then \(f_{a}\) agrees with the uniform density, except for \(v\in[\frac{1-\epsilon}{a},\frac{1+\epsilon}{a}]\). For any \(v\in[\frac{1-\epsilon}{a},\frac{1+\epsilon}{a}]\), \(F_{a,\epsilon}(v)\) and \(F_{a,\epsilon/2}(v)\) differ by at most \(\epsilon\), thus \(F_{a}(v)=\frac{F_{a,\epsilon/2}(v)+G_{a,\epsilon}(v)}{2}\) and \(F(v)=\frac{F_{a,\epsilon}(v)+G_{a,\epsilon}(v)}{2}\) differ by at most \(\epsilon\). Furthermore, \(F_{a}(v)\) and \(F(v)\) agree outside this interval and are bounded away from \(0\) and \(1\) on it, so (14) is satisfied and \(\Omega(1/\epsilon^{2})\) pricing queries are needed to distinguish \(F_{a}\) and \(F\). Observe that for any \(a\), \(F_{a}\) and the uniform distribution \(F\) have Levy-distance at least \(\Omega(\epsilon)\), as \(F_{a,\epsilon/2}(\frac{1-\epsilon/2}{a})-F_{a,\epsilon}(\frac{1-\epsilon/2}{a})=\Omega(\epsilon)\).
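Although not part of the original argument, the construction of \(F_{a}\) is easy to verify numerically. The short Python sketch below (a minimal check; all parameter values are illustrative assumptions) builds the CDFs \(F_{a,\delta}\), \(G_{a,\delta}\) and the mixture \(F_{a}\), and confirms that \(F_{a}\) is a valid CDF, stays within \(\epsilon\) of the uniform CDF, and differs from it by \(\Omega(\epsilon)\) at \(v=\frac{1-\epsilon/2}{a}\).

```python
import numpy as np

def F_triangle(v, a, delta):
    """CDF of f_{a,delta}: density a on [0,(1-delta)/a], then linear decrease to 0."""
    lo, hi = (1 - delta) / a, (1 + delta) / a
    v = np.asarray(v, dtype=float)
    return np.where(v <= lo, a * v,
           np.where(v >= hi, 1.0, 1.0 - a ** 2 / (4 * delta) * (hi - v) ** 2))

def G_complement(v, a, delta):
    """CDF of g_{a,delta}(v) = 2 - f_{a,delta}(v)."""
    return 2 * np.asarray(v, dtype=float) - F_triangle(v, a, delta)

def F_mixture(v, a, eps):
    """CDF F_a of the 50/50 mixture of f_{a,eps/2} and g_{a,eps}."""
    return 0.5 * (F_triangle(v, a, eps / 2) + G_complement(v, a, eps))

eps, a = 1e-3, 1.55                        # illustrative values
v = np.linspace(0.0, 1.0, 200001)
Fa = F_mixture(v, a, eps)
assert np.all(np.diff(Fa) >= -1e-12)       # F_a is non-decreasing (a valid CDF)
assert abs(Fa[-1] - 1.0) < 1e-12           # F_a(1) = 1
assert np.max(np.abs(Fa - v)) <= eps       # |F_a(v) - F(v)| <= eps, with F(v) = v uniform
v0 = (1 - eps / 2) / a
gap = float(F_mixture(v0, a, eps) - v0)    # analytically eps/32, i.e. Omega(eps)
assert gap >= eps / 33
print(f"max |F_a - F| = {np.max(np.abs(Fa - v)):.2e},  F_a(v0) - v0 = {gap:.2e}")
```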
Take \(S=\{F_{a}|a=1.5+4k\epsilon\text{ for }k=1,2,\cdots,\frac{1}{40\epsilon}\}\). As any \(F_{a}\in S\) and the uniform distribution \(F\) differ only on \([\frac{1-\epsilon}{a},\frac{1+\epsilon}{a}]\), we know that \(\Omega(1/\epsilon^{2})\) pricing queries are needed on \([\frac{1-\epsilon}{a},\frac{1+\epsilon}{a}]\) to distinguish \(F_{a}\) and \(F\). Also, since \(\frac{1+\epsilon}{a}<\frac{1-\epsilon}{a^{\prime}}\) for every \(a^{\prime}<a\) with \(F_{a},F_{a^{\prime}}\in S\) (as \(a-a^{\prime}\geq 4\epsilon\), \(a,a^{\prime}\in[1.5,1.6]\) and \(\epsilon\) is small enough), we know that for different \(a\), the intervals where \(F_{a}\) and \(F\) differ are non-overlapping. Thus, given an unknown distribution \(F_{a}\) where \(a=1.5+4k\epsilon\) for some \(k=1,2,\cdots,\frac{1}{40\epsilon}\), an algorithm cannot know in advance which of these \(\Omega(1/\epsilon)\) disjoint intervals carries the difference, so \(\Omega(1/\epsilon^{3})\) queries are required to learn a distribution within \(o(\epsilon)\) Levy-distance from \(F_{a}\).
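As a sanity check of the packing step (again only an illustration with an assumed value of \(\epsilon\), not part of the proof), one can verify numerically that the intervals \([\frac{1-\epsilon}{a},\frac{1+\epsilon}{a}]\) attached to the members of \(S\) are indeed pairwise disjoint:

```python
import numpy as np

eps = 1e-3                                     # illustrative value
ks = np.arange(1, int(1 / (40 * eps)) + 1)     # k = 1, ..., 1/(40*eps)
a_vals = 1.5 + 4 * ks * eps                    # parameters a of the members of S
lo = (1 - eps) / a_vals                        # left endpoints of the intervals
hi = (1 + eps) / a_vals                        # right endpoints

# Sort the intervals from left to right and check that each one ends strictly
# before the next one begins, i.e. the supports of F_a - F are pairwise disjoint.
order = np.argsort(lo)
assert np.all(hi[order][:-1] < lo[order][1:]), "intervals overlap"
print(f"|S| = {len(a_vals)} distributions with pairwise disjoint difference intervals")
```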
2301.08543
On the growth rate inequality for self-maps of the sphere
Let $S^m = \{x_0^2 + x_1^2 + \cdots + x_m^2 = 1\}$ and $P = \{x_0 = x_1 = 0\} \cap S^m$. Suppose that $f$ is a self--map of $S^m$ such that $f^{-1}(P) = P$ and $|\mathrm{deg}(f_{|P})| < |\mathrm{deg}(f)|$. Then, the number of fixed points of $f^n$ grows at least exponentially with base $|d| > 1$, where $d = \mathrm{deg}(f)/\mathrm{deg}(f_{|P}) \in \mathbb Z$.
Héctor Barge, Luis Hernández-Corbato
2023-01-20T12:57:50Z
http://arxiv.org/abs/2301.08543v1
# On the growth rate inequality for self-maps of the sphere ###### Abstract Let \(S^{m}=\{x_{0}^{2}+x_{1}^{2}+\cdots+x_{m}^{2}=1\}\) and \(P=\{x_{0}=x_{1}=0\}\cap S^{m}\). Suppose that \(f\) is a self-map of \(S^{m}\) such that \(f^{-1}(P)=P\) and \(|\mathrm{deg}(f_{|P})|<|\mathrm{deg}(f)|\). Then, the number of fixed points of \(f^{n}\) grows at least exponentially with base \(|d|>1\), where \(d=\mathrm{deg}(f)/\mathrm{deg}(f_{|P})\in\mathbb{Z}\). Key words and phrases: Growth rate inequality, periodic point, topological degree. 2020 Mathematics Subject Classification: Primary 37C25, 37E99; Secondary 55M20. The authors are partially supported by the Spanish Ministerio de Ciencia e Innovacion (grants PGC2018-098321-B-I00 and PID2021-126124NB-I00). ## 1. Introduction In [13], Shub raised the question of whether algebraic intersection numbers asymptotically bound geometrical intersection numbers from below for \(C^{1}\) maps. For a self-map \(f\colon M\to M\) on a manifold, a particular case of the previous question comes down to whether the number \(\#\mathrm{Fix}(f^{n})\) of points fixed by \(f^{n}\) and the Lefschetz numbers \(L(f^{n})\) satisfy \[\limsup\frac{1}{n}\log(\#\mathrm{Fix}(f^{n}))\geq\limsup\frac{1}{n}\log\,|L(f^{n})|\] This inequality is known as the _growth rate inequality_ and bounds from below the base of the exponential growth rate of periodic points. If the sequence of Lefschetz numbers \(L(f^{n})\) is unbounded, the Lefschetz-Dold theorem together with a result of Shub and Sullivan [15] (which states that the index sequence of a fixed point of a \(C^{1}\) map is bounded) implies that there are infinitely many periodic points. The growth rate inequality appeared again as an open problem in the Proceedings of the ICM 2006 [14]. It is wide open in dimensions greater than \(1\). The Lefschetz number of a self-map on a sphere can be computed from the topological degree as \(L(f)=1\pm\mathrm{deg}(f)\), so the growth rate inequality for self-maps of the sphere can be rewritten in terms of the degree as follows: \[\limsup_{n\to+\infty}\frac{1}{n}\log(\#\mathrm{Fix}(f^{n}))\geq\log\,|\mathrm{deg}(f)| \tag{1}\] Shub's original paper [13] proved (1) for rational maps on \(S^{2}\). However, even for a \(C^{1}\) map of degree \(2\) on \(S^{2}\), it is not known whether the growth rate inequality holds. Note that the \(C^{1}\) assumption is crucial: a degree-\(2\) north-south map on \(S^{2}\) has only \(2\) fixed points and no other periodic point. Recently, the growth rate inequality has been proved in several instances for maps on the sphere. In dimension \(2\), the sharp bound of \(\log\,|\mathrm{deg}(f)|\) was obtained when \(f\) preserves some singular foliations [12, 10, 4] or under hypotheses of a dynamical nature [5, 7, 6, 8]. In higher dimensions the results are scarce. Weaker bounds for the growth rate of periodic points were obtained in [2, 3] when the map preserves some foliation and some mild hypotheses are satisfied. In this article we prove a weak form of the growth rate inequality for maps on \(S^{m}=\{x_{0}^{2}+x_{1}^{2}+\cdots+x_{m}^{2}=1\}\subset\mathbb{R}^{m+1}\). Suppose \(f\colon S^{m}\to S^{m}\) leaves the codimension-\(2\) sphere \(P=\{x_{0}=x_{1}=0\}\) completely invariant, that is, \(f^{-1}(P)=P\). Then, the degree of \(f\) is equal to the product of two factors: the degree of the restriction of \(f\) to \(P\) and a "transversal" degree denoted \(d\in\mathbb{Z}\).
The latter can be interpreted in terms of the action induced by \(f\) on the homology group or the fundamental group of \(S^{m}-P\). **Theorem 1**.: _Let \(f\colon S^{m}\to S^{m}\) be a map such that \(f^{-1}(P)=P\) and \(\deg(f)\neq 0\) and let \(d=\deg(f)/\deg(f_{|P})\). Then, \(\#\mathrm{Fix}(f^{n})+\#\mathrm{Fix}(f^{n}_{|P})\geq|d^{n}-1|\). In particular,_ \[\liminf_{n\to+\infty}\frac{1}{n}\log(\#\mathrm{Fix}(f^{n}))\geq\log|d|\] In dimension \(m=2\), \(P\) is a \(0\)-sphere and \(\deg(f_{|P})\) can only take the values \(-1,0,1\). We deduce from Theorem 1 the following corollary, which was previously stated in [6]. **Corollary 2**.: _Suppose \(f\colon S^{2}\to S^{2}\) and there are two points \(\{p,p^{\prime}\}\) such that \(f^{-1}(\{p,p^{\prime}\})=\{p,p^{\prime}\}\). Then,_ \[\liminf_{n\to+\infty}\frac{1}{n}\log(\#\mathrm{Fix}(f^{n}))\geq\log\,|\deg(f)|\] In higher dimensions we obtain a weak bound for the growth rate: **Corollary 3**.: _Under the hypotheses of Theorem 1, if \(m=3\) or, more generally, if the growth rate inequality holds in \(S^{m-2}\) then_ \[\liminf_{n\to+\infty}\frac{1}{n}\log(\#\mathrm{Fix}(f^{n}))\geq\frac{1}{2}\log\,|\deg(f)|\] The result follows from the first inequality in Theorem 1 and the fact that \(\max\{|d|,|\deg(f_{|P})|\}\geq\sqrt{|\deg(f)|}\). Theorem 1 is deduced from Theorem 5, which is stated in the ensuing section. The proof is contained in the last section of the article. It requires a detailed local analysis of the map in the normal direction to \(P\), presented in Section 4, and uses an argument from topological degree theory, which is quickly reviewed in Section 3. ## 2. Setting Let \(S=S^{m}=\{(x_{0},\ldots,x_{m}):x_{0}^{2}+\cdots+x_{m}^{2}=1\}\) be the standard \(m\)-sphere in \(\mathbb{R}^{m+1}\) and \(P=\{x_{0}=x_{1}=0\}\cap S\) be an \((m-2)\)-dimensional sphere, which we will refer to as the _polar sphere_. The complement \(S-P\) is diffeomorphic to \(S^{1}\times D^{m-1}\), where \(D^{m-1}\) denotes the \((m-1)\)-dimensional open unit disk, via the map \[(x_{0},\ldots,x_{m})\longmapsto\left(\frac{(x_{0},x_{1})}{||(x_{0},x_{1})||},(x_{2},\ldots,x_{m})\right) \tag{2}\] On the other hand, \(P^{\prime}=\{x_{2}=\cdots=x_{m}=0\}\cap S\) is a \(1\)-sphere such that \(S\) is the join of \(P\) and \(P^{\prime}\). The set \(S-P^{\prime}\) is diffeomorphic to \(D^{2}\times S^{m-2}\) by \[(x_{0},\ldots,x_{m})\longmapsto\left((x_{0},x_{1}),\frac{(x_{2},\ldots,x_{m})}{||(x_{2},\ldots,x_{m})||}\right) \tag{3}\] Equations (2) and (3) define coordinate charts for \(S-P\) and \(S-P^{\prime}\), respectively. If we replace \((x_{2},\ldots,x_{m})\) by \((0,0,x_{2},\ldots,x_{m})\) in (3), we obtain a diffeomorphism between \(S-P^{\prime}\) and \(D^{2}\times P\). This description will be used extensively throughout this note, as it provides a product structure for the system of neighborhoods of \(P\) defined by the inequalities \(x_{0}^{2}+x_{1}^{2}<r\) for \(0<r<1\): they are diffeomorphic to \(D^{2}(r)\times P\). Similarly, the inequalities \(x_{0}^{2}+x_{1}^{2}>1-r^{\prime}\), \(0<r^{\prime}<1\), define neighborhoods of \(P^{\prime}\) in \(S-P\) diffeomorphic by (2) to \(S^{1}\times D^{m-1}(r^{\prime})\). Note that the radial coordinates of \(D^{2}\) and \(D^{m-1}\) are related by \(r+r^{\prime}=1\). For any \(0<r<1\), \[S \cong D^{2}(r)\times P\ \sqcup\ S^{1}\times\overline{D}^{m-1}(1-r) \tag{4}\] Evidently, the fundamental group of \(S-P\) is isomorphic to \(\mathbb{Z}\).
Using the coordinates from (3), for any \(r\in(0,1)\) and \(p\in P\), it is easy to see that \(\pi_{1}(S-P)\) is generated by a loop \(\gamma\) that makes a positive turn around the origin in the \(2\)-disk \(D^{2}(r)\times\{p\}\). Let us consider the lift of \(S-P\) that trivializes \([\gamma]\), \(\mathrm{pr}\colon\widehat{S-P}\to S-P\). In our results, \(f\) is a self-map of \(S\) for which \(P\) is completely invariant, \(f^{-1}(P)=P\). Since \(S-P\) is invariant under \(f\), \(f\) can be lifted to \(\widetilde{S-P}\). Let \(\tau\) be a generator of the group of deck transformations of the cover. Any lift \(F\) of \(f\) satisfies \[F\,\tau=\tau^{d}\,F \tag{5}\] where \(d\in\mathbb{Z}\) is the _transversal degree_ defined by \(f_{*}[\gamma]=d\cdot[\gamma]\) in \(\pi_{1}(S-P)\) (see (12) later). As we will prove in Subsection 3.1, \(\deg(f)=d\cdot\deg(f_{|P})\). A fixed point of \(F\) projects onto a fixed point of \(f\) in \(S-P\). The following lemma is a consequence of (5) and establishes a relation between fixed points of different lifts. It is standard in Nielsen theory (cf. [9, Lemma 4.1.10]). **Lemma 4**.: _If two different lifts \(F,G=\tau^{m}F\) of \(f\) satisfy \(\operatorname{pr}(\operatorname{Fix}(F))\cap\operatorname{pr}(\operatorname{ Fix}(G))\neq\emptyset\), then \(d\neq 1\) and \(|d-1|\) divides \(m\)._ By the previous lemma, we can bound from below \(\#\{\operatorname{Fix}(f)\cap(S-P)\}\) by the number of lifts of \(f\) among \(\{F,\tau F,\dots,\tau^{|d-1|-1}F\}\) that have fixed points. **Theorem 5**.: _Suppose that \(d\neq 0,1\). There are at most \(2\,\#\operatorname{Fix}(f_{|P})\) fixed point free maps among \(\{F,\tau F,\dots,\tau^{|d-1|-1}F\}\)._ Theorem 5 and Lemma 4 immediately imply that \[\#\{\operatorname{Fix}(f)\cap(S-P)\}\geq|d-1|-2\,\#\{\operatorname{Fix}(f_{| P})\}\] Replace \(f\) by \(f^{n}\) in this inequality to deduce Theorem 1. Note that the transversal degree associated to \(f^{n}\) is \(d^{n}\). ## 3. Topological degree The topological degree of a map \(g\colon M\to L\) between closed connected and oriented manifolds of dimension \(n\geq 1\) roughly counts with multiplicity the number of preimages of a point. For a complete account on degree theory we refer to [11]. We give below a precise definition of the degree in algebraic topological terms. Recall that the orientation of a closed connected manifold \(M\) is given by a fundamental class \([M]\), i.e. a generator of the reduced homology group \(\widetilde{H}_{n}(M)\). Reduced and unreduced homology groups only differ at dimension \(0\), the reason to choose here the reduced groups shall become clear later. The image \([M]_{x}\) of the fundamental class \([M]\) under the projection \(\widetilde{H}_{n}(M)\to H_{n}(M,M-\{x\})\) is the local orientation at \(x\in M\), and it is a generator of \(H_{n}(M,M-\{x\})\). **Definition 6**.: _Let \(g\colon M\to L\) be a map between \(n\)-dimensional closed orientable manifolds such that \(\widetilde{H}_{n}(M)\cong\widetilde{H}_{n}(L)\cong\mathbb{Z}\) and \([M],[L]\) be fundamental classes of \(M,L\), respectively. 
The degree of \(g\), denoted \(\deg(g)\), is the integer that satisfies_ \[g_{*}([M])=\deg(g)\cdot[L], \tag{6}\] The reason to choose the hypothesis \(\widetilde{H}_{n}(M)\cong\mathbb{Z}\) in the definition is that it is satisfied by closed connected oriented manifolds of dimension \(n\geq 1\) (spaces that appear in the standard definition of topological degree) and also by \(0\)-spheres (such as the polar sphere in \(S^{2}\), that is relevant in this paper). Incidentally, note that the degree of a map between \(0\)-spheres can only take the values \(-1,0,1\). By duality, the degree can be alternatively defined using reduced cohomology groups. If \(\omega_{M},\omega_{L}\) are generators of the reduced \(n\)-th cohomology group of \(M,L\), we have that \(g^{*}(\omega_{L})=\deg(g)\,\omega_{M}\). ### Decomposition of the degree Suppose now that \(f\colon M\to M\) is a self-map and \(N\) is a completely invariant submanifold (i.e. \(f^{-1}(N)=N\)). Under some topological hypothesis, the degree of \(f\) is equal to the product of two factors: the degree of the restriction of \(f\) to \(N\) and an integer \(d\) that accounts for the winding around \(N\) in the transversal direction, which we call _transversal_ degree. \[\deg(f)=d\cdot\deg(f_{|N}) \tag{7}\] **Lemma 7**.: _Let \(M\) be a closed connected and orientable \(n\)-dimensional manifold and \(f:M\to M\) a continuous map with \(\deg(f)\neq 0\). Suppose that \(N\subset M\) is a closed orientable submanifold of codimension \(k\geq 2\) that is completely invariant, i.e. \(f^{-1}(N)=N\), and \(\widetilde{H}^{n-k}(N)\cong\mathbb{Z}\). Then \(\deg(f_{|N})\) divides \(\deg(f)\). Moreover, if \(H_{k}(M)\cong 0\) or if \(k=n\) there is a non trivial class \(\beta\in H_{k-1}(M-N)\) such that \(f_{*}(\beta)=d\cdot\beta\), where \(d=\deg(f)\,/\deg(f_{|N})\)._ Proof.: Observe that, since \(N\) is completely invariant, the map \(f\) can be considered as a map of pairs \(f:(M,M-N)\to(M,M-N)\). Let \([M]\in H_{n}(M)\) a fundamental class in \(M\). The homomorphism \[\frown[M]:H^{n-k}(N)\to H_{k}(M,M-N) \tag{8}\] consisting in capping each (unreduced) cohomology class in \(H^{n-k}(N)\) with the fundamental class \([M]\) is an isomorphism (see [1, Theorem 8.3, pg. 351]). Let \(\omega\) be a generator of the reduced cohomology group \(\widetilde{H}^{n-k}(N)\) (note that \(\omega\) also generates \(H^{n-k}(N)\) unless \(k=n\)). Then, by naturality of the cap product [1, Theorem 5.2, pg. 336] it follows \[f_{*}(f_{|N}^{*}(\omega)\frown[M])=\omega\frown f_{*}([M]). \tag{9}\] If we examine the left-hand side of (9) we get \[f_{*}(f_{|N}^{*}(\omega)\frown[M])=f_{*}((\deg(f_{|N})\cdot\omega)\frown[M])= \deg(f_{|N})\cdot f_{*}(\omega\frown[M]). \tag{10}\] On the other hand, \[\omega\frown f_{*}([M])=\omega\frown(\deg(f)\cdot[M])=\deg(f)\cdot(\omega \frown[M]). \tag{11}\] Hence, from (9), (10) and (11) it follows that \(\deg(f_{|N})\mid\deg(f)\) and the quotient is an integer \(d\) that satisfies \(f_{*}(\omega\frown[M])=d\cdot(\omega\frown[M])\) in \(H_{k}(M,M-N)\). The second statement follows immediately from the naturality of the long exact sequence of homology of \((M,M-N)\) in the case \(H_{k}(M)\) is trivial. We can take \(\beta\) to be the image of \(\omega\frown[M]\) by the boundary morphism \(H_{k}(M,M-N)\to H_{k-1}(M-N)\). In fact, by exactness the existence of \(\beta\) is guaranteed as long as \(\omega\frown[M]\) does not belong to the image of \(p\colon H_{k}(M)\to H_{k}(M,M-N)\). 
In the case \(k=n\), \(N=\{x,y\}\) is a \(0\)-sphere, \(H_{k}(M)\) is generated by the fundamental class \([M]\) and \(p([M])=[M]_{x}+[M]_{y}\). Then, the preimage of \(p([M])\) under the duality isomorphism (8) is the (unreduced) \(0\)-cohomology class represented by the constant map on \(N\) equal to \(1\) and, in particular, is different from \(\omega\). Then, \(\omega\frown[M]\notin\operatorname{im}(p)\) and the result follows. Heuristically, the transversal degree \(d\) counts how many times the image of the boundary of a small neighborhood of \(N\) wraps around \(N\). The interpretation is much clearer in the case \(M=S=S^{m}\) and \(N=P\), the polar sphere of codimension \(k=2\). Note that \(H_{2}(S)\) is trivial for all \(m>2\) and if \(m=2=k\), \(P\) is a \(0\)-sphere and the lemma still applies. From (2) we get that \(H_{1}(S-P)\cong\mathbb{Z}\) and we deduce that \[(f_{|S-P})_{*}\colon H_{1}(S-P)\to H_{1}(S-P)\] is conjugate to the multiplication by \(d\) in \(\mathbb{Z}\). Evidently, the same description applies to the action induced in the fundamental group of \(S-P\) as well: if \(\gamma\) is a loop that generates \(\pi_{1}(S-P)\) then \[f_{*}[\gamma]=d\cdot[\gamma]. \tag{12}\] ### Vector fields and fixed points The final step in the proof of Theorem 5 uses an argument from topological degree theory. In order to keep the article self-contained, we formulate and prove the elementary results which are needed. Let \(U\) be an open subset of \(\mathbb{R}^{m}\) and \(B\subset U\) be diffeomorphic to \(\overline{D}^{m}\). Any non-singular vector field \(v\) on \(\partial B\) defines a map \(j_{v}\colon\partial B\to S^{m-1}\) by \(j_{v}(x)=v(x)/||v(x)||\). **Lemma 8**.: 1. _If_ \(v\) _points inwards on_ \(\partial B\) _then_ \(j_{v}\) _is not nilhomotopic (i.e., not homotopic to a constant map)._ 2. _If_ \(v,w\) _never point to the same direction (that is,_ \(j_{v}(x)\neq j_{w}(x)\) _for all_ \(x\)_) and_ \(j_{v}\) _is not nilhomotopic then_ \(j_{w}\) _is not nilhomotopic._ _Further, suppose that \(\partial B\) is decomposed as the union of two hemispheres \(E_{+},E_{-}\), that is, \((E_{+},E_{-})\) is diffeomorphic to \((H_{+},H_{-})\), where \(H_{+}\) and \(H_{-}\) denote the upper and lower hemispheres of \(S^{m-1}\)._ 3. _If_ \(v\) _points inwards on_ \(E_{+}\cap E_{-}\)_,_ \(j_{v}(E_{+})\subset H_{+}\) _and_ \(j_{v}(E_{-})\subset H_{-}\) _then_ \(j_{v}\) _is not nilhomotopic._ Proof.: (i) Clearly, \(j_{v}\) is conjugate to a self-map of \(S^{m-1}\) that is homotopic to the antipodal map. The conclusion follows from the fact that the antipodal map on \(S^{m-1}\) is not nilhomotopic (otherwise, we could construct a homotopy from the identity map to a constant map by composing with the antipodal map). (ii) \(j_{w}\) is homotopic to \(j_{-v}\). We can now use an argument as in (i) to conclude. (iii) If \(\sigma\) is the reflection through the equator on \(S^{m-1}\), \(\sigma\circ j_{v}\) is conjugate to the antipodal map. Again, we conclude that it is not nilhomotopic. Given a map \(h\colon U\to\mathbb{R}^{m}\), we define a vector field \(v_{h}(x)=h(x)-x\). Singularities of \(v_{h}\) correspond to fixed points of \(h\), so when we work with \(j_{v_{h}}\) we tacitly assume \(h\) has no fixed points on the boundary of \(B\). One of the central ideas of topological degree theory is that it is possible to detect fixed points of \(h\) inside \(B\) just by studying \(v_{h}\) on \(\partial B\) or, more precisely, the homotopy class of \(j_{v_{h}}\).
Indeed, if \(\operatorname{Fix}(h)\cap B=\emptyset\) then \(v_{h}\) has no singularities in \(B\) and, using a foliation of \(B\) by spheres that converge to an interior point, it is possible to construct a homotopy from \(j_{v_{h}}\) to a constant map. In other words, **Lemma 9**.: _If \(j_{v_{h}}\) is not nilhomotopic, there exists a fixed point of \(h\) in \(B\)._ Let us point out that one of the first results in topological degree theory is that the reverse implication is true up to homotopy. If \(j_{v_{h}}\) is nilhomotopic, it is possible to construct a homotopy between \(h\) and \(h^{\prime}\) relative to \(\partial B\) such that \(h^{\prime}\) has no fixed points in \(B\). ## 4. Local analysis at fixed points in \(P\) Recall the setting from Section 2. \(P\) has a basis of neighborhoods diffeomorphic by (3) to \(D^{2}(r)\times S^{m-2}\). The map \(f\) induces, by projection onto the first factor, a dynamics in the \(2\)-dimensional normal direction around \(P\). We obtain a family of \(C^{1}\) maps \(f_{p}\colon D^{2}(s)\to D^{2}\), \(p\in P\), for some fixed small \(s>0\). The smoothness of \(f\) poses a restriction on the behavior of \(f_{p}\) for a fixed point \(p\in P\) for the following reason: a \(C^{1}\) map is injective in a neighborhood of a repelling fixed point. This fact follows from the inverse function theorem and the fact that the repelling condition implies that the eigenvalues of the Jacobian matrix at the fixed point lie outside the unit disk and, in particular, away from zero. We shall prove later that if any \(f_{p}\) is injective then the transversal degree satisfies \(|d|\leq 1\) and Theorem 1 becomes trivial. Accordingly, we focus on the case \(|d|>1\), in which \(f_{p}\) is not injective for any \(p\in P\) and, in particular, the Jacobian matrix \(A_{p}\) of \(f_{p}\) at the origin in the \(2\)-dimensional normal direction is singular. Therefore, there are only two dynamically different cases, stated in terms of the spectral radius of \(A_{p}\): either it is smaller than \(1\) and the origin is an attractor for \(f_{p}\), or it is greater than or equal to \(1\) and there is an attracting cone region. We proceed now to study the dynamics of planar maps such as \(f_{p}\). Later, we apply the local picture to describe the behavior of \(f\) in the normal direction to \(P\). ### Planar results Suppose \(g\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) is a \(C^{1}\) map that fixes the origin, \(g(0)=0\). Denote by \(A\) the Jacobian matrix of \(g\) at \(0\). By the definition of differentiability at the origin, for every \(\epsilon>0\) there exists \(\delta>0\) such that \[\frac{||g(u)-Au||}{||u||}<\epsilon,\qquad\text{for all}\;\;u\;\;\text{such that}\;\;0<||u||<\delta \tag{13}\] where we have used the identification \(T_{0}\mathbb{R}^{2}\cong\mathbb{R}^{2}\) and \(||\cdot||\) is a norm in \(\mathbb{R}^{2}\). Recall that all the norms in a finite dimensional vector space are equivalent. The spectral radius of \(A\), \(\rho(A)\), largely determines the behavior of \(g\) in a neighborhood of \(0\). **Lemma 10**.: _For every \(c>\rho(A)\) there exists a norm \(||\cdot||\) in \(\mathbb{R}^{2}\) such that_ \[||Au||<c\,||u||\text{ for every }u\in\mathbb{R}^{2}-\{0\}.\] Proof.: If \(A\) is diagonalizable over \(\mathbb{R}\), we can take the \(\ell^{1}\)-norm associated to a basis \(\mathcal{B}\) composed of eigenvectors, that is, \(||u||=||(u_{1},u_{2})_{\mathcal{B}}||:=|u_{1}|+|u_{2}|\).
If the eigenvalues of \(A\) are not real, the \(\ell^{2}\)-norm associated to an orthogonal basis satisfies the conclusion. Finally, if the eigenvalues of \(A\) are equal but \(A\) is not diagonalizable, let \(e_{0}\) be an eigenvector and \(e_{1}\neq 0\) not collinear to \(e_{0}\). Then, we can take the \(\ell^{1}\)-norm associated to the basis \(\{Ke_{0},e_{1}\}\) for large enough \(K>0\). An immediate consequence of the previous lemma and (13) is that if \(\rho(A)<1\) then the origin is a local attractor for \(g\). **Corollary 11**.: _Suppose \(\rho(A)<1\) and let \(\epsilon\in(0,1-\rho(A))\), then there exists \(\delta>0\) and a norm in \(\mathbb{R}^{2}\) such that \(||g(u)||<(1-\epsilon)||u||\) whenever \(0<||u||<\delta\)._ In the case there is an eigenvalue \(\lambda\) with \(|\lambda|\geq 1\), we can locate the region where the inequality \(||g(u)||<||u||\) does not hold. **Lemma 12**.: _Suppose the eigenvalues of \(A\) are \(\{0,\lambda\}\) with \(|\lambda|\geq 1\). Denote \(\mathcal{B}=\{e_{0},e_{\lambda}\}\) a basis composed of eigenvectors, \(||\cdot||\) the \(\ell^{1}\)-norm associated to \(\mathcal{B}\) and_ \[C(\alpha)=\{u=(u_{0},u_{\lambda})_{\mathcal{B}}\in\mathbb{R}^{2}-\{0\}:|u_{0} /u_{\lambda}|<\alpha\}.\] _For every \(\epsilon\in(0,1/2)\) there exists \(\delta>0\) such that if \(0<||u||<\delta\) and_ * _if_ \(u\notin C\left(\frac{|\lambda|+2\epsilon-1}{1-2\epsilon}\right)\) _then_ \(||g(u)||<(1-\epsilon)||u||\)_._ * _if_ \(u\in C\left(\frac{|\lambda|-3\epsilon}{3\epsilon}\right)\) _then_ \(|(g(u))_{\lambda}|>\epsilon||u||\) _and, in particular,_ \(g(u)\notin\langle e_{0}\rangle\)_. (_\(\langle v\rangle\) _denotes the subspace spanned by_ \(v\)_)_ Proof.: Apply (13) to \(||\cdot||\) and \(\epsilon\). For (i), if \(u=(u_{0},u_{\lambda})_{\mathcal{B}}\) with \(||u||<\delta\) \[(1-\epsilon)||u||=(1-\epsilon)|u_{0}|+(1-\epsilon)|u_{\lambda}|\geq\epsilon|u_ {0}|+(|\lambda|+\epsilon)|u_{\lambda}|=||Au||+\epsilon||u||>||g(u)||\] To deduce (ii) we use that \(||g(u)||+\epsilon||u||\geq||Au||\): \[|(g(u))_{\lambda}|=||g(u)||-|(g(u))_{0}|\geq|\lambda||u_{\lambda}|-2\epsilon|| u||=(|\lambda|-2\epsilon)|u_{\lambda}|-2\epsilon|u_{0}|>\epsilon||u||>0.\] ### Analysis in the normal direction Since \(f(P)=P\), \(f\) restricts to a continuous map \[f\colon D^{2}(s)\times P\longrightarrow D^{2}\times P\] for some \(s>0\), where we extensively use the coordinates introduced in (3). The Jacobian matrix of \(f\) in a point \(p\in P\) has the following form: \[Jf_{p}=\begin{pmatrix}A_{p}&0\\ *&(Jf_{[P]})_{p}\end{pmatrix}\] where the basis for the tangent space at \(p\) is ordered according to the local product structure around \(P\): first the (2-dimensional) normal space to \(P\) and then the (\((m-2)\)-dimensional) tangent space to \(P\). Alternatively, \(A_{p}\) can be defined as the Jacobian at \(0\) of the following composition \[f_{p}\colon D^{2}(s)\to D^{2}(s)\times\{p\}\hookrightarrow D^{2}(s)\times P \stackrel{{ f}}{{\longrightarrow}}D^{2}\times P\stackrel{{ \mathrm{proj}}}{{\longrightarrow}}D^{2} \tag{14}\] **Lemma 13**.: _If \(|d|>1\) then \(A_{p}\) is singular for every \(p\in P\)._ Proof.: Suppose that \(A_{p}\) is regular. Then, \(f_{p}\) restricts to a diffeomorphism between \(D^{2}(r)\) and \(V=f_{p}(D^{2}(r))\), for small \(r>0\). In particular, if \(\gamma\) is a generator of \(\pi_{1}(D^{2}(r)-\{0\})\), then \(f_{p}(\gamma)\) is a generator of \(\pi_{1}(V-\{0\})\). 
Think of \(\gamma\) as a loop in \((D^{2}(r)-\{0\})\times\{p\}\) and choose it small so that \(f(\gamma)\subset D^{2}\times U_{f(p)}\), where \(U_{f(p)}\) is contractible in \(P\). It follows that both \(\gamma\) and \(f(\gamma)\) generate \(\pi_{1}(S-P)\) and, by (12), we deduce that \(d=\pm 1\), a contradiction. Therefore, either Corollary 11 or Lemma 12 applies to \(f_{p}\) when \(|d|>1\). For a given \(p\in P\), we extend the results by continuity to \(f_{q}\) for \(q\) close to \(p\). **Proposition 14**.: _Suppose that \(|d|>1\). For every \(p\in P\), there exists a neighborhood \(D^{2}(\delta)\times U_{p}\) of \(p\) in \(S\) and a norm \(||\cdot||\) in \(\mathbb{R}^{2}\) such that:_ 1. _If_ \(\rho(A_{p})<1\)_, for all_ \(q\in U_{p}\) _and all_ \(u\in D^{2}(\delta)\)_,_ \[||f_{q}(u)||\leq||u||.\] 2. _Otherwise, there is a basis_ \(\{e_{0},e_{1}\}\) _of_ \(\mathbb{R}^{2}\) _and_ \(\alpha\in\mathbb{R}^{+}\) _such that for every_ \(q\in U_{p}\)__ * _if_ \(u\notin C(\alpha)\) _and_ \(u\in D^{2}(\delta)\) _then_ \(||f_{q}(u)||\leq||u||.\)__ * \(f_{q}(C(\alpha)\cap D^{2}(\delta))\cap\langle e_{0}\rangle=\emptyset\)_._ Proof.: Let \(||\cdot||\) be the norm from Corollary 11 or Lemma 12 depending on which alternative, (i) or (ii), applies. For (i), apply Corollary 11 to any \(\epsilon\in(0,1-\rho(A))\) to obtain \(||\cdot||\) and \(\delta\) such that \(||f_{p}(u)||\leq(1-\epsilon)||u||\) holds whenever \(||u||<\delta\). In order to extend the conclusion to \(f_{q}\), for \(q\) close to \(p\), we use the smoothness of \(f\). Since \(f_{q}(u)=\left(\int_{0}^{1}Df_{q}|_{tu}\,dt\right)u\), we have that \[||f_{q}(u)-f_{p}(u)||\leq\left(\int_{0}^{1}||Df_{q}|_{tu}-Df_{p}|_{tu}||\,dt\right)||u||\leq\gamma_{q}||u|| \tag{15}\] where \(\gamma_{q}=\max_{v\in D^{2}(\delta)}||Df_{q}|_{v}-Df_{p}|_{v}||\). Since \(f\) is \(C^{1}\), \(\gamma_{q}\to 0\) as \(q\to p\). Let \(U_{p}\) be a neighborhood of \(p\) in \(P\) such that \(|\gamma_{q}|<\epsilon\) for all \(q\in U_{p}\). Then we conclude that \(||f_{q}(u)||\leq||u||\) for all \(q\in U_{p}\) and \(||u||<\delta\). For (ii), let \(\lambda\) be the non-zero eigenvalue of \(A_{p}\). Firstly, take \(\epsilon>0\) small enough so that \(\frac{|\lambda|+2\epsilon-1}{1-2\epsilon}<\frac{|\lambda|-3\epsilon}{3\epsilon}\) and set \(\alpha=\frac{|\lambda|+2\epsilon-1}{1-2\epsilon}\). Apply Lemma 12 to obtain \(||\cdot||\), \(\delta\) and \(\{e_{0},e_{1}\}\). The conclusions for \(q=p\) follow immediately from the lemma. To extend the results to a neighborhood \(U_{p}\) of \(p\) we use (15) and proceed exactly as in (i). The argument above can be used verbatim to prove the first item. For the second item, \(|(f_{q}(u))_{\lambda}|\geq|(f_{p}(u))_{\lambda}|-\epsilon||u||>0\). Finally, note that the norm from Lemma 12 may not be the \(\ell^{2}\)-norm so we might need to shrink \(\delta\) to \(\delta^{\prime}\) so that the standard \(2\)-disk \(D^{2}(\delta^{\prime})\) fits inside the disk defined by \(||u||<\delta\). Below, in the proof of Theorem 5, only the fixed points \(p\) in \(P\) for which the second alternative in Proposition 14 applies require special attention. The local description obtained above provides a cone \(C(\alpha)\) above each \(q\) close to \(p\) that contains the repelling sector (if any) and whose image misses the attracting direction spanned by \(e_{0}\). ## 5. Proof of Theorem 5 Recall (see (2)) that \(S-P\) is diffeomorphic to \(S^{1}\times D^{m-1}\).
Consider the cover \[\widetilde{S-P}\cong\mathbb{R}\times D^{m-1}\longrightarrow S^{1}\times D^{m-1} \cong S-P \tag{16}\] defined as the standard cover \(\mathbb{R}\to S^{1}\), \(t\mapsto e^{2\pi it}\) in the first factor and as the identity in the second factor. Our aim is to prove that except for a few cases, every lift of \(f\) to \(\widetilde{S-P}\) has a fixed point in \(B_{M}=[-M,M]\times D^{m-1}(1-\delta)\) for large \(M\) and small \(\delta>0\). The argument is based on topological degree theory. The key observation is that, for most of the lifts \(F\), the vector field \(v_{F}(x)=F(x)-x\) in \(\mathbb{R}\times D^{m-1}\) never points to the same direction as the coordinate vector field \(\partial/\partial r(x)\), whose definition will be recalled next, on the boundary of \(\mathbb{R}\times D^{m-1}(1-\delta)\). Then, we can apply Lemmas 8 and 9 to conclude that \(F\) has a fixed point inside \(B_{M}\) for large \(M>0\). From (3), we deduce that \(S-P-P^{\prime}\) is diffeomorphic to \(D_{0}^{2}\times P\), where the subscript in \(D_{0}^{2}\) indicates that the disk is punctured at the origin. The lift of \(S-P-P^{\prime}\) to the cover (16) is therefore diffeomorphic to \(\mathbb{R}\times(0,1)\times P\), where the second factor, \((0,1)\), corresponds to the radial coordinate \(r\) of \(D_{0}^{2}\) and also to \(1-r^{\prime}\), where \(r^{\prime}\) is the radial coordinate of \(D^{m-1}\) in (16) (cf. the discussion before (4)). The coordinate \(r\) and, more precisely, the vector field \(\partial/\partial r\) that it defines in the cover play a central role in the discussion. On the lateral face of the cylinder \(B_{M}\), \(\partial/\partial r\) points inwards. Note that, alternatively, if we use the coordinates from (2) instead of those of (3), we see that the lift of \(S-P-P^{\prime}\) is diffeomorphic to \(\mathbb{R}\times D_{0}^{m-1}\). The lift of the partition (4) of \(S-P\) for \(r=\delta\) still displays a product structure: \[\widetilde{S-P}=\mathbb{R}\times(0,\delta)\times P\sqcup\mathbb{R}\times D^{m -1}(1-\delta)\] The factor \(\mathbb{R}\) corresponds to the angular coordinate in the normal bundle of \(P\) in the first term and to the lift of \(S^{1}\) (that parametrizes \(P^{\prime}\)) in the second instance. The projection onto the first factor \(\operatorname{pr}\colon\widetilde{S-P}\to\mathbb{R}\) conjugates a generator \(\tau\) of the group of deck transformations of the cover and the translation by \(1\) in \(\mathbb{R}\). Recall from the statement of Theorem 5 that \(d\neq 0,1\). For an arbitrary lift \(F\) of \(f\), by (5) we have that \(F\tau=\tau^{d}F\) so \[\operatorname{pr}(x)=\operatorname{pr}(y)+1\quad\Rightarrow\quad\operatorname {pr}(F(x))=\operatorname{pr}(F(y))+d.\] for all \(x,y\in\widetilde{S-P}\). Therefore, for a fixed \(\delta>0\), if \(M\) is sufficiently large and if \(d>1\) then \[\operatorname{pr}(F(x)) \leq-M-1\quad\text{for every $x\in\{-M\}\times D^{m-1}(1-\delta)$ and}\] \[\operatorname{pr}(F(x)) \geq M+1\quad\text{for every $x\in\{M\}\times D^{m-1}(1-\delta)$} \tag{17}\] whereas if \(d\leq-1\) then \[\operatorname{pr}(F(x)) \geq-M+1\quad\text{for every $x\in\{-M\}\times D^{m-1}(1-\delta)$ and}\] \[\operatorname{pr}(F(x)) \leq M-1\quad\text{for every $x\in\{M\}\times D^{m-1}(1-\delta)$} \tag{18}\] These inequalities imply that on the left and right faces (as in Figure 1) of the solid cylinder \(B_{M}\) the vector field \(v_{F}\) points outwards when \(d>1\) and inwards when \(d\leq-1\). 
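Before turning to the lateral face, it may help to see the lift structure in the simplest possible toy model (this example is ours, added only for illustration): the circle map \(\theta\mapsto d\theta\ (\mathrm{mod}\ 1)\), which mimics the angular dynamics around \(P\). Its lifts \(F_{m}(t)=dt+m\) satisfy \(F\,\tau=\tau^{d}F\) as in (5), their fixed points project to exactly \(|d-1|\) distinct points of the circle, and the divisibility condition of Lemma 4 is visible directly. A short Python check of these facts:

```python
from fractions import Fraction

d = 3                                       # toy transversal degree, d != 0, 1
tau = lambda t: t + 1                       # generator of the deck transformations
F = lambda t: d * t                         # a lift of theta -> d*theta (mod 1)

t = Fraction(5, 7)
lhs = F(tau(t))                             # F(tau(t))
rhs = F(t)
for _ in range(d):                          # apply tau d times: tau^d(F(t))
    rhs = tau(rhs)
assert lhs == rhs                           # F o tau = tau^d o F, as in (5)

# Each lift tau^m F, i.e. t -> d*t + m, has the unique fixed point m/(1-d);
# for m = 0, ..., |d-1|-1 these project to |d-1| distinct points of the circle.
fixed = lambda m: Fraction(m, 1 - d)
projections = {fixed(m) % 1 for m in range(abs(d - 1))}
assert len(projections) == abs(d - 1)

# Lemma 4 in this model: tau^m F and tau^{m'} F share a projected fixed point
# exactly when |d-1| divides m - m'.
assert fixed(0) % 1 == fixed(abs(d - 1)) % 1
print(f"d = {d}: {len(projections)} distinct fixed points on the circle")
```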
Let us focus now on the lateral face of \(B_{M}\) and suppose further that \(d\neq-1\). Let \(p\in\operatorname{Fix}(f|_{P})\) and consider the neighborhood \(D^{2}(\delta)\times U_{p}\) of \(p\) and the norm \(||\cdot||\) from Proposition 14. Denote \(V_{p}\) the lift of \(D_{0}^{2}(\delta)\times U_{p}\) to the cover (note that the disk is punctured) and suppose \(v_{F}\) and \(\partial/\partial r\) point to the same direction at \(x\in V_{p}\). This implies that the projection of \(x\) and \(F(x)\) to \(D^{2}\times P\), have the form \((u,q)\) and \((au,q)\) for some \(a>1\) and \(q\in U_{p}\). In particular, \(f_{q}(u)=au\), which automatically implies \(||f_{q}(u)||>||u||\). Thus, the second alternative of Proposition 14 applies to \(p\), so there exists a basis \(\{e_{0},e_{1}\}\) and \(\alpha\in\mathbb{R}^{+}\) such that \(u\in C(\alpha)\cap D^{2}(\delta)\) and \(f_{q}(C(\alpha)\cap D^{2}(\delta))\cap\langle e_{0}\rangle=\emptyset\) for every \(q\in U_{p}\). The last property can be stated equivalently as \[f((C(\alpha)\cap D^{2}(\delta))\times U_{p})\cap(\langle e_{0}\rangle\times U_ {p})=\emptyset \tag{19}\] We now lift these elements to the cover. The cone region \(C(\alpha)\times U_{p}\) lifts to a sequence of domains \(O_{n}=(I+\frac{n}{2})\times(0,1)\times U_{p}\), indexed by \(n\in\mathbb{Z}\), where we are now employing the coordinates of \(\mathbb{R}\times(0,1)\times P\), the lift of \(D_{0}^{2}\times P\), and \(I\) is an interval in \(\mathbb{R}\) of length \(<1/2\). See Figure 1. Similarly, the strip \(\langle e_{0}\rangle\times U_{p}\) lifts to a sequence of strips \(E_{n}=\{\widetilde{e}_{0}+\frac{n}{2}\}\times(0,1)\times U_{p}\) for some \(\widetilde{e}_{0}\in\mathbb{R}\). Restricted to \(V_{p}\), the strips \(E_{n}\) and the domains \(O_{n}\) are pairwise disjoint and are placed alternately. Evidently, \(\tau(E_{n})=E_{n+2}\) and \(\tau(O_{n})=O_{n+2}\). Denote \(O_{n}^{\delta}=O_{n}\cap(\mathbb{R}\times(0,\delta)\times U_{p})\), the subset of \(O_{n}\) defined by \(0<r<\delta\). The condition (19) implies that \(F(O_{n}^{\delta})\) does not meet \(E_{m}\) for any \(m\in\mathbb{Z}\). This imposes a serious restriction on the number of lifts for which the image of \(O_{n}^{\delta}\) intersects itself. **Lemma 15**.: _There are at most two elements \(G\) of \(\{F,\tau F,\ldots,\tau^{|d-1|-1}F\}\) such that \(G(O_{n}^{\delta})\cap O_{n}\neq\emptyset\) for some \(n\in\mathbb{Z}\)._ Proof.: Recall that for any lift \(G\), \(G(O_{n+2})=G\tau(O_{n})=\tau^{d}G(O_{n})\) so if \(G(O_{n}^{\delta})\) lies in between \(E_{k}\) and \(E_{k+1}\) then \(G(O_{n+2}^{\delta})\) lies in between \(E_{k+2d}\) and \(E_{k+2d+1}\). Suppose that \(G,G^{\prime}=\tau^{m}G\) are two lifts of \(f\) such that \(G(O_{n}^{\delta})\cap O_{n}\neq\emptyset\) and \(G^{\prime}(O_{n+2s}^{\delta})\cap O_{n+2s}\neq\emptyset\) for some \(s\neq 0\). It follows that \(n+2s=m+n+2sd\), so \(|d-1|\) divides \(m\). In sum, there is at most one lift in \(\{F,\tau F,\ldots,\tau^{|d-1|-1}F\}\) such that the intersection in the statement is non-empty for some even \(n\) and at most one lift such that the intersection is non-empty for some odd \(n\). Incidentally, note that the previous lemma trivially holds when \(d=-1\). As a consequence of the discussion above we deduce: **Lemma 16**.: _Let \(p\in\mathrm{Fix}(f_{|P})\) and \(d\neq 0,1\). 
_There exist \(\delta_{p}>0\) and a neighborhood \(U_{p}\) of \(p\) in \(P\) such that there are at most two lifts \(G\) among \(\{F,\tau F,\ldots,\tau^{|d-1|-1}F\}\) for which the vector field \(v_{G}(x)=G(x)-x\) points to the same direction as \(\partial/\partial r\) at some point of \(V_{p}\)._ In the second part of the proof, we show that the lifts for which \(v_{G}\) and \(\partial/\partial r\) do not point to the same direction at \(V_{p}\), for any \(p\in\mathrm{Fix}(f_{|P})\), always have a fixed point in \(B_{M}\). In view of the previous lemma this assertion concludes the proof. We have already described what happens in a neighborhood \(\cup U_{p}\) of the set of fixed points in \(P\). Since \(f_{|P}\) has no fixed point on \(P-\cup U_{p}\), by continuity, there exists \(\delta_{1}\) such that every point in \(D^{2}(\delta_{1})\times(P-\cup U_{p})\) is displaced tangentially to \(P\) by \(f\), that is, if \(f((v,q))=(f_{q}(v),q^{\prime})\) then \(q\neq q^{\prime}\). As a consequence, for any lift \(G\) of \(f\), \(v_{G}\) and \(\partial/\partial r\) do not point to the same direction on \(\mathbb{R}\times(0,\delta_{1})\times(P-\cup U_{p})\). Take \(\delta>0\) smaller than all \(\delta_{p}\) and \(\delta_{1}\). The results from the previous paragraph and Lemma 16 imply that there are at least \(|d-1|-2\#\mathrm{Fix}(f_{|P})\) lifts among \(\{F,\tau F,\ldots,\tau^{|d-1|-1}F\}\) such that \(v_{G}\) and \(\partial/\partial r\) do not point to the same direction on the region \(\{r=\delta\}=\mathbb{R}\times\{\delta\}\times P\). Say \(G\) is one of them. Take \(M\) large enough so that (17) holds for \(G\) and \(\delta\). Smooth out a small neighborhood of the edges of the cylinder \(B_{M}\) to obtain a convex domain \(B_{M}^{\prime}\) diffeomorphic to a closed ball. We now define a vector field \(w\) on \(\partial B_{M}^{\prime}\) to apply the results from Section 3.
Figure 1: Picture of a piece of \(B_{M}\) for \(d=2\). The darker pieces and thicker segments in the horizontal strip are the intersection of the domains \(O_{n}\) and the strips \(E_{n}\) with the lateral face of \(B_{M}\), defined by \(r=\delta\). Arrows illustrate \(v_{F}\) at \(\tau^{-1}x,x,\tau x\).
**Case \(d>1\)**. Let \(w\) be the unit normal vector on \(\partial B_{M}^{\prime}\) that points inwards. Note that \(w\) coincides with \(\partial/\partial r\) on the lateral face of \(B_{M}^{\prime}\). In the rest of \(\partial B_{M}^{\prime}\) the inequalities (17) hold provided the smoothing region is small enough. By the choice of \(G\) we conclude that \(w\) never points to the same direction as \(v_{G}\). Since \(B^{\prime}_{M}\) is diffeomorphic to a ball and \(w\) points inwards on its boundary, we can apply Lemma 8 (i) and (ii) to deduce that \(j_{v_{G}}\) is not nilhomotopic. Then, Lemma 9 concludes that \(G\) has a fixed point inside \(B^{\prime}_{M}\), as desired. **Case \(d\leq-1\)**. Define \(w\) as the unit vector that points inwards on the lateral face of \(B^{\prime}_{M}\) and as the unit vector that points outwards on the pieces of \(\partial B^{\prime}_{M}\) that are part of the left and right faces of \(B_{M}\).
Complete the definition of \(w\) on \(\partial B^{\prime}_{M}\) by an interpolation that guarantees that in the smoothing region and close to the left face, \(\{-M\}\times D^{m-1}(1-\delta)\), \(w\) never points in the positive direction (increasing first coordinate), while close to the right face, \(\{M\}\times D^{m-1}(1-\delta)\), \(w\) never points in the negative direction (decreasing first coordinate). Again, by the choice of \(G\), \(w\) and \(v_{G}\) never point to the same direction and, by Lemma 8 (iii) and (ii) and Lemma 9, we conclude that \(G\) has a fixed point in \(B^{\prime}_{M}\).
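To illustrate Theorem 1 in the simplest concrete case (an example added here for illustration, not taken from the article), consider the rational map \(f(z)=z^{d}\), \(d\geq 2\), on the Riemann sphere \(S^{2}\). The pair \(P=\{0,\infty\}\) is completely invariant, \(\deg(f)=d\) and \(\deg(f_{|P})=1\), so the transversal degree equals \(d\), and \(\mathrm{Fix}(f^{n})\) consists of \(0\), \(\infty\) and the \((d^{n}-1)\)-th roots of unity, in agreement with the bound \(\#\mathrm{Fix}(f^{n})+\#\mathrm{Fix}(f^{n}_{|P})\geq|d^{n}-1|\). A quick numerical confirmation in Python:

```python
import numpy as np

d, n = 2, 5                                   # degree and iterate, both illustrative
D = d ** n                                    # f^n is z -> z^D
roots = np.exp(2j * np.pi * np.arange(D - 1) / (D - 1))   # the (D-1)-th roots of unity
assert np.allclose(roots ** D, roots)         # they are fixed by f^n
num_fix = 2 + len(roots)                      # add the completely invariant poles 0, infinity
num_fix_P = 2                                 # f^n fixes both points of P = {0, infinity}
assert num_fix + num_fix_P >= abs(D - 1)      # the inequality of Theorem 1
print(f"#Fix(f^{n}) = {num_fix} >= |d^n - 1| - 2 = {abs(D - 1) - 2}")
```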
2302.11542
Unified tetraquark equations
We derive covariant equations describing the tetraquark in terms of an admixture of two-body states $D\bar D$ (diquark-antidiquark), $MM$ (meson-meson), and three-body-like states $q\bar q (T_{q\bar q})$, $q q (T_{\bar q\bar q})$, and $\bar q\bar q (T_{qq})$, where two of the quarks are spectators while the other two are interacting (their t matrices denoted correspondingly as $T_{q\bar q}$, $T_{\bar q\bar q}$, and $T_{qq}$). This has been achieved by describing the $qq\bar q\bar q$ system using the Faddeev-like four-body equations of Khvedelidze and Kvinikhidze [Theor. Math. Phys. 90, 62 (1992)] while retaining all two-body interactions (in contrast to previous works where terms involving isolated two-quark scattering were neglected). As such, our formulation is able to unify seemingly unrelated models of the tetraquark, like, for example, the $D\bar D$ model of the Moscow group [Faustov et al., Universe 7, 94 (2021)] and the coupled channel $D \bar D-MM$ model of the Giessen group [Heupel et al., Phys. Lett. B718, 545 (2012)].
A. N. Kvinikhidze, B. Blankleider
2023-02-18T11:00:00Z
http://arxiv.org/abs/2302.11542v1
# Unified tetraquark equations ###### Abstract We derive covariant equations describing the tetraquark in terms of an admixture of two-body states \(D\bar{D}\) (diquark-antidiquark), \(MM\) (meson-meson), and three-body-like states \(q\bar{q}(T_{q\bar{q}})\), \(qq(T_{\bar{q}\bar{q}})\), and \(\bar{q}\bar{q}(T_{qq})\), where two of the quarks are spectators while the other two are interacting (their t matrices denoted correspondingly as \(T_{q\bar{q}}\), \(T_{\bar{q}\bar{q}}\), and \(T_{qq}\)). This has been achieved by describing the \(qq\bar{q}\bar{q}\) system using the Faddeev-like four-body equations of Khvedelidze and Kvinikhidze [Theor. Math. Phys. **90**, 62 (1992)] while retaining all two-body interactions (in contrast to previous works where terms involving isolated two-quark scattering were neglected). As such, our formulation is able to unify seemingly unrelated models of the tetraquark, like, for example, the \(D\bar{D}\) model of the Moscow group [Faustov _et al._, Universe **7**, 94 (2021)] and the coupled channel \(D\bar{D}-MM\) model of the Giessen group [Heupel _et al._, Phys. Lett. **B718**, 545 (2012)]. ## I Introduction With the inception of the quark model of hadrons in 1964, all known baryons and mesons could be described as stable combinations of valence quarks \(q\) and antiquarks \(\bar{q}\), baryons consisting of three quarks (\(qqq\)) and mesons of a quark-antiquark pair (\(q\bar{q}\)) [1; 2]. Although multiquark states such as the tetraquark (\(qq\bar{q}\bar{q}\)) and pentaquark (\(qqqq\bar{q}\)) were also considered to be a possibility [1; 3], it was not until 2003 that the first experimental evidence for an exotic multiquark state (a tetraquark) became available [4]. Since then there has been a virtual explosion in the number of multiquark hadron candidates discovered, together with a correspondingly large variety of theoretical models developed in order to learn about the dynamics of their formation; see [5] for a recent review. Out of the many recent theoretical works on this subject, we would like to address the works of the Moscow group (Faustov _et al._) [6; 7; 8; 9], who modeled tetraquarks as a diquark-antidiquark (\(D\bar{D}\)) system, and the Giessen group (Fischer _et al._) [10; 11; 12; 13], who modeled tetraquarks as a coupled mix of meson-meson (\(MM\)) and diquark-antidiquark (\(D\bar{D}\)) states. It has been noted that these works differ significantly not only in their prediction of heavy tetraquark masses [8], but moreover, in the very attribution of the inner structure of a heavy tetraquark, with the Giessen group finding the \(MM\) components to be generally dominant, with the \(D\bar{D}\) components being small or even negligible [13]. In view of the strongly differing predictions made by these models, it would be interesting and important to express these seemingly unrelated models in terms of a common theoretical foundation. It is to this end that we have derived a universal set of tetraquark equations which produce both the above approaches in different approximations. In order to demonstrate how a unified theoretical approach is achieved, we first note that the Moscow group's model can be viewed as being based on the solutions of the bound-state equation for the \(D\bar{D}\)-tetraquark amplitude \(\phi_{D}\), as illustrated in Fig. 1. As seen from this figure, the kernel of the equation consists of a single term where a \(q\bar{q}\) pair scatters elastically in the presence of spectating \(q\) and \(\bar{q}\) quarks.
More specifically, the Moscow model corresponds to the case where \(T_{q\bar{q}}\), the t matrix describing the mentioned \(q\bar{q}\) scattering, is expressed as a sum of two potentials \[T_{q\bar{q}}=V_{\rm gluon}+V_{\rm conf} \tag{1}\] where \(V_{\rm gluon}\) is the \(q\bar{q}\) one-gluon-exchange potential and \(V_{\rm conf}\) is a local confining potential.1 However, in this paper we shall consider the general case of the \(T_{q\bar{q}}\) t matrix, and correspondingly refer to the intermediate state of the kernel of Fig. 1 as \(q\bar{q}(T_{q\bar{q}})\). Footnote 1: To be precise, the Moscow group uses quasipotential bound state form factors instead of the \(D\to qq\) form factor \(\Gamma_{12}(p,P)\) and the \(\bar{D}\to\bar{q}\bar{q}\) form factor \(\Gamma_{34}(p,P)\), appearing as small blue circles in Fig. 1. Formally, this is equivalent to assuming that \(\Gamma_{12}(p,P)\) and \(\Gamma_{34}(p,P)\) do not depend on the longitudinal projection of the relative 4-momentum \(p\) with respect to the total momentum \(P\) of the two quarks or two antiquarks. In a similar way, the Giessen group's model is based on the solutions of the coupled-channel equations for the \(MM\)-tetraquark and \(D\bar{D}\)-tetraquark amplitudes \(\phi_{M}\) and \(\phi_{D}\), respectively, as illustrated in Fig. 2. In this case there are no contributions of type \(q\bar{q}(T_{q\bar{q}})\), with \(D\bar{D}\) scattering taking place only via intermediate \(MM\) states. One of the features of the Giessen group's model is that it is based on a rigorous field-theoretic derivation for the \(2q2\bar{q}\) system where all approximations can be clearly specified.
Figure 1: Diquark-antidiquark bound state equation encompassing the Moscow group's approach [6; 7; 8; 9]. The form factor \(\phi_{D}\) couples the tetraquark to diquark and antidiquark states (both represented by double-lines). Shown is the general form of the kernel where one \(q\bar{q}\) pair interacts (the red circle representing the corresponding t matrix \(T_{q\bar{q}}\)) while the other \(q\bar{q}\) pair is spectating. Quarks (antiquarks) are represented by left (right) directed lines.
Figure 2: Tetraquark equations of the Giessen group [10; 11; 12; 13]. Form factor \(\phi_{M}\) couples the tetraquark to two mesons (dashed lines), and form factor \(\phi_{D}\) couples the tetraquark to diquark-antidiquark states (double-lines).
Thus, following the derivation presented in [10], the model is covariant, retains only pair-wise interactions between the quarks, and thus leads to the use of the t matrix \(T_{aa^{\prime}}\) corresponding to the scattering of the 4 quarks where all interactions are switched off except those within the pairs labelled by \(a\) and \(a^{\prime}\). It can be shown (see Eq. (10) of [10]) that \[T_{aa^{\prime}}=T_{a}+T_{a^{\prime}}+T_{a}T_{a^{\prime}} \tag{2}\] where \(T_{a}\) and \(T_{a^{\prime}}\) are the separate two-body t matrices for the scattering of the quarks within pairs \(a\) and \(a^{\prime}\), respectively. The first two terms on the right hand side (RHS) of Eq. (2) were neglected in the derivation of [10], yet are responsible for contributions like that of the \(q\bar{q}(T_{q\bar{q}})\) intermediate state in the Moscow group's model. Implementing a further approximation where \(T_{a}\) and \(T_{a^{\prime}}\) are assumed to be dominated by meson and diquark pole contributions leads to the equations of Fig. 2. In order to achieve a unified description where all the contributions illustrated in Fig. 1 and Fig.
2 are taken into account, we derive coupled equations similar to those of Fig. 2, but where the first two terms on the RHS of Eq. (2) are retained at least to first order at this stage. The resulting equations have the same form as those of Fig. 2, but with a kernel that contains additional diagrams illustrated in Fig. 5. Thus, the \(q\bar{q}(T_{q\bar{q}})\) contribution is included, as well as corresponding \(qq(T_{\bar{q}\bar{q}})\) and \(\bar{q}\bar{q}(T_{qq})\) contributions. In this way we unify the Moscow and Giessen approaches, and hope that the resulting unified tetraquark equations will lead to a more accurate description of a tetraquark, including an improved assessment of the relative roles played by its \(D\bar{D}\) and \(MM\) components. ## II Derivation For simplicity, in Sec. II A we derive general tetraquark equations for the case of distinguishable quarks. Then, in Sec. II B, corresponding equations for two identical quarks and two identical antiquarks are obtained by explicitly antisymmetrizing the distinguishable quark case. In Sec. II C, after the introduction of separable approximations for the two-body t matrices in the product term \(T_{a}T_{a^{\prime}}\) of Eq. (2), the resulting coupled channel \(MM-D\bar{D}\) equations are recast so as to expose three-body-like states of the form \(q\bar{q}(T_{q\bar{q}})\), \(qq(T_{\bar{q}\bar{q}})\), and \(\bar{q}\bar{q}(T_{qq})\). The final part of the derivation, in Sec. II D, is devoted to symmetrizing the two-meson states in the formalism, as these may not have the required symmetry for the case of identical mesons. ### Four-body equations for distinguishable quarks To describe the \(2q2\bar{q}\) system where coupling to \(q\bar{q}\) channels is neglected and only pairwise interactions are taken into account, we follow the formulation of Khvedelidze and Kvinikhidze [14] in the same way as in Ref. [10] and in our previous work [15]. Thus, assigning labels 1,2 to the quarks and 3,4 to the antiquarks, the \(q\bar{q}\)-irreducible 4-body kernel for distinguishable particles, \(K^{d}\), is written as a sum of three terms whose structure is illustrated in Fig. 3, and correspondingly expressed as \[K^{d}=\sum_{aa^{\prime}}K^{d}_{aa^{\prime}}=\sum_{\alpha}K^{d}_{\alpha} \tag{3}\] where the index \(a\in\{12,13,14,23,24,34\}\) enumerates the six possible pairs of particles, the double index \(aa^{\prime}\in\{(13,24),(14,23),(12,34)\}\) enumerates the three possible choices of two disjoint pairs of particles, and the Greek index \(\alpha\) is used as an abbreviation for \(aa^{\prime}\) such that \(\alpha=1\) denotes \(aa^{\prime}=(13,24)\), \(\alpha=2\) denotes \(aa^{\prime}=(14,23)\), and \(\alpha=3\) denotes \(aa^{\prime}=(12,34)\). Thus \(K^{d}_{\alpha}\equiv K^{d}_{aa^{\prime}}\) describes the part of the four-body kernel where all interactions are switched off except those within the pairs \(a\) and \(a^{\prime}\).
Figure 3: Structure of the terms \(K_{\alpha}^{d}\) (\(\alpha=1,2,3\)) making up the four-body kernel \(K^{d}\) where only two-body forces are included. The coloured circles represent two-body kernels \(K_{ij}^{d}\) for the scattering of quarks \(i\) and \(j\), as indicated.
Figure 3 illustrates the fact that \(K_{\alpha}^{d}\) can be expressed in terms of the two-body kernels \(K_{a}^{d}\) and \(K_{a^{\prime}}^{d}\) as [10; 14; 15], \[K_{\alpha}^{d}=K_{a}^{d}{G_{a^{\prime}}^{0}}^{-1}+K_{a^{\prime}}^{d}{G_{a}^{0}}^ {-1}-K_{a}^{d}K_{a^{\prime}}^{d}, \tag{4}\] where \(G_{a}^{0}\) ( \(G_{a^{\prime}}^{0}\) ) is the 2-body disconnected Green function for particle pair \(a\) (\(a^{\prime}\)). Of note is the presence of a minus sign in the last term of Eq. (4), which is necessary to avoid overcounting. To simplify the notation, we shall suppress writing disconnected Green functions whenever these are self-evident; thus we may write Eq. (4) as the three expressions \[K_{1}^{d}=K_{13}^{d}+K_{24}^{d}-K_{13}^{d}K_{24}^{d}, \tag{5a}\] \[K_{2}^{d}=K_{14}^{d}+K_{23}^{d}-K_{14}^{d}K_{23}^{d},\] (5b) \[K_{3}^{d}=K_{12}^{d}+K_{34}^{d}-K_{12}^{d}K_{34}^{d}, \tag{5c}\] and the \(2q2\bar{q}\) kernel for distinguishable quarks in the pairwise approximation, as \[K^{d}=K_{1}^{d}+K_{2}^{d}+K_{3}^{d}. \tag{6}\] Although the superscript "\(d\)" (to indicate the distinguishable particle assumption) is redundant for quantities like \(K_{1}^{d}\) and \(K_{2}^{d}\) involving \(q\bar{q}\) pairs, we keep it for the moment in order to avoid a mixed notation. The \(2q2\bar{q}\) bound state form factor for distinguishable quarks is then \[\Phi^{d}=K^{d}G_{0}^{(4)}\Phi^{d} \tag{7}\] where \(G_{0}^{(4)}\) is the fully disconnected part of the full \(2q2\bar{q}\) Green function \(G^{(4)}\)[16]. The four-body kernels \(K_{\alpha}\) can be used to define the Faddeev components of \(\Phi^{d}\) as \[\Phi_{\alpha}^{d}=K_{\alpha}^{d}G_{0}^{(4)}\Phi^{d}, \tag{8}\] so that \[\sum_{\alpha}\Phi_{\alpha}^{d}=\Phi^{d}. \tag{9}\] From Eq. (7) follow Faddeev-like equations for the components, \[\Phi_{\alpha}^{d}=T_{\alpha}^{d}\sum_{\beta}\bar{\delta}_{\alpha\beta}G_{0}^{ (4)}\Phi_{\beta}^{d} \tag{10}\] where \(\bar{\delta}_{\alpha\beta}=1-\delta_{\alpha\beta}\) and \(T_{\alpha}^{d}\) is the t matrix corresponding to kernel \(K_{\alpha}^{d}\); that is, \[T_{\alpha}^{d}=K_{\alpha}^{d}+K_{\alpha}^{d}G_{0}^{(4)}T_{\alpha}^{d} \tag{11}\] with \(T_{\alpha}^{d}\) being expressed in terms of two-body t matrices \(T_{a}^{d}\) and \(T_{a^{\prime}}^{d}\) as \[T_{\alpha}^{d}=T_{a}^{d}{G_{a^{\prime}}^{0}}^{-1}+T_{a^{\prime}}^{d}{G_{a}^{0} }^{-1}+T_{a}^{d}T_{a^{\prime}}^{d}, \tag{12}\] or in the simplified notation analogous to Eq. (5), \[T_{1}^{d}=T_{13}^{d}+T_{24}^{d}+T_{13}^{d}T_{24}^{d}, \tag{13a}\] \[T_{2}^{d}=T_{14}^{d}+T_{23}^{d}+T_{14}^{d}T_{23}^{d},\] (13b) \[T_{3}^{d}=T_{12}^{d}+T_{34}^{d}+T_{12}^{d}T_{34}^{d}. \tag{13c}\] Equations (10) can likewise be written with dropped \(G_{0}^{(4)}\)'s as \[\Phi_{1}^{d}=T_{1}^{d}(\Phi_{2}^{d}+\Phi_{3}^{d}), \tag{14a}\] \[\Phi_{2}^{d}=T_{2}^{d}(\Phi_{3}^{d}+\Phi_{1}^{d}),\] (14b) \[\Phi_{3}^{d}=T_{3}^{d}(\Phi_{1}^{d}+\Phi_{2}^{d}). 
\tag{14c}\] ### Four-body equations for indistinguishable quarks The \(2q2\bar{q}\) bound state form factor \(\Phi\) for two identical quarks \(1,2\) and two identical antiquarks \(3,4\), satisfies the equation \[\Phi=\frac{1}{4}KG_{0}^{(4)}\Phi \tag{15}\] where the kernel \(K\) is antisymmetric with respect to swapping quark or antiquark quantum numbers either in the initial or in the final state; that is, \[\mathcal{P}_{34}K=\mathcal{P}_{12}K=K\mathcal{P}_{34}=K\mathcal{P}_{12}=-K \tag{16}\] where the exchange operator \(\mathcal{P}_{ij}\) swaps the quantum numbers associated with particles \(i\) and \(j\) in the quantity on which it is operating; for example, \(\mathcal{P}_{12}\Phi(p_{1}p_{2}p_{3}p_{4})=\Phi(p_{2}p_{1}p_{3}p_{4})\) and \(\mathcal{P}_{34}\Phi(p_{1}p_{2}p_{3}p_{4})=\Phi(p_{1}p_{2}p_{4}p_{3})\). The factor \(\frac{1}{4}\) in Eq. (15) is a product of the combinatorial factors \(\frac{1}{2}\), one for identical quarks and another for identical antiquarks. The in this way antisymmetric kernel \(K\) can be represented as \[K=(1-\mathcal{P}_{12})(1-\mathcal{P}_{34})K^{d} \tag{17}\] where \(K^{d}\) is symmetric with respect to swapping either quark or antiquark quantum numbers in the initial and final states simultaneously, \({\cal P}_{12}K^{d}{\cal P}_{12}={\cal P}_{34}K^{d}{\cal P}_{34}=K^{d}\). This symmetry property of \(K^{d}\) can be written in the form of commutation relations \[[{\cal P}_{34},K^{d}]=[{\cal P}_{12},K^{d}]=0, \tag{18}\] and follows directly from the following relations implied by Eqs. (5): \[{\cal P}_{12}K^{d}_{3}{\cal P}_{12} = {\cal P}_{34}K^{d}_{3}{\cal P}_{34}=K^{d}_{3}, \tag{19a}\] \[{\cal P}_{12}K^{d}_{1}{\cal P}_{12} = {\cal P}_{34}K^{d}_{1}{\cal P}_{34}=K^{d}_{2}. \tag{19b}\] Due to the antisymmetry properties of \(K\) as specified in Eq. (16), the solution of the identical particle bound state equation, Eq. (15), is correspondingly antisymmetric; namely, \({\cal P}_{34}\Phi={\cal P}_{12}\Phi=-\Phi\). However, because \(K^{d}\) usually corresponds to a fewer number of diagrams than \(K\), rather than solving Eq. (15), it may be more convenient to determine \(\Phi\) by antisymmetrising the solution \(\Phi^{d}\) of the bound state equation for distinguishable quarks, as \[\Phi=(1-{\cal P}_{12})(1-{\cal P}_{34})\Phi^{d}. \tag{20}\] Then, in view of the commutation relations of Eq. (18), if the solution \(\Phi^{d}\) exists, its antisymmetrized version as given by Eq. (20), also satisfies the bound state equation for distinguishable quarks, Eq. (7), as well as the one for indistinguishable ones, Eq. (15): \[\Phi = (1-{\cal P}_{12})(1-{\cal P}_{34})K^{d}G_{0}^{(4)}\Phi^{d} \tag{21}\] \[= K^{d}G_{0}^{(4)}(1-{\cal P}_{12})(1-{\cal P}_{34})\Phi^{d}=K^{d} G_{0}^{(4)}\Phi\] \[= \frac{1}{4}K^{d}G_{0}^{(4)}(1-{\cal P}_{12})(1-{\cal P}_{34})\Phi\] \[= \frac{1}{4}KG_{0}^{(4)}\Phi.\] A further consequence of the commutation relations of Eq. (18), is that the system corresponding to the kernel \(K^{d}\) is degenerate, having multiple linearly independent solutions (eigenfunctions) corresponding to one eigenenergy (tetraquark mass), unless by chance \(K^{d}\) is symmetric or antisymmetric in the final and initial state variables independently, \({\cal P}_{34}K^{d}={\cal P}_{12}K^{d}=\pm K^{d}\). In the case of the \(2q2\bar{q}\) system, there are 4 such eigenfunctions related to each other by quark swapping operators, or symmetrized in 4 possible ways using \((1\pm{\cal P}_{12})\) and \((1\pm{\cal P}_{34})\). 
By contrast, the system corresponding to the kernel \(K\) is _not_ degenerate, because \(K\) is fully antisymmetric from both (initial and final state) sides independently and consequently it has only one, fully antisymmetric, solution. Indeed, in this system any swapping of the identical-quark quantum numbers does not change the wave function because only fully antisymmetric wave functions satisfy the bound state equation, \({\cal P}_{ij}\Phi=\frac{1}{4}{\cal P}_{ij}KG_{0}^{(4)}\Phi=-\frac{1}{4}KG_{0}^{ (4)}\Phi=-\Phi\). As \(\Phi\) satisfies the same bound state equation as \(\Phi^{d}\), \[\Phi=K^{d}G_{0}^{(4)}\Phi, \tag{22}\] the kernels \(K_{\alpha}^{d}\) can again be used to define Faddeev components, but this time for \(\Phi\): \[\Phi_{\alpha}=K_{\alpha}^{d}G_{0}^{(4)}\Phi, \tag{23}\] where \[\sum_{\alpha}\Phi_{\alpha}=\Phi. \tag{24}\] In view of Eqs. (19), the Faddeev components \(\Phi_{\alpha}\) have the following properties: \[{\cal P}_{12}\Phi_{3}=-\Phi_{3},\qquad{\cal P}_{12}\Phi_{1}=-\Phi _{2}, \tag{25a}\] \[{\cal P}_{34}\Phi_{3}=-\Phi_{3},\qquad{\cal P}_{34}\Phi_{1}=-\Phi _{2}. \tag{25b}\] Since \(\Phi\) satisfies the same bound state equation as \(\Phi^{d}\), the components \(\Phi_{\alpha}\) satisfy the same Faddeev-like equations as for distinguishable quarks [Eqs. (14)], \[\Phi_{1}=T_{1}^{d}(\Phi_{2}+\Phi_{3}), \tag{26a}\] \[\Phi_{2}=T_{2}^{d}(\Phi_{3}+\Phi_{1}),\] (26b) \[\Phi_{3}=T_{3}^{d}(\Phi_{1}+\Phi_{2}). \tag{26c}\] where, like in Eqs. (14), factors of \(G_{0}^{(4)}\) have been dropped. Although an arbitrary solution \(\{\Phi_{1},\Phi_{2},\Phi_{3}\}\) of Eqs. (34) will not necessarily obey the symmetry properties of Eq. (25), we note that if \(\{\Phi_{1},\Phi_{2},\Phi_{3}\}\) is a solution, then so is \({\cal P}_{12}\{\Phi_{2},\Phi_{1},\Phi_{3}\}\) and \({\cal P}_{34}\{\Phi_{2},\Phi_{1},\Phi_{3}\}\), and therefore so are their linear combinations \[\{\Phi_{1}^{\prime},\Phi_{2}^{\prime},\Phi_{3}^{\prime}\}=\{\Phi_{1},\Phi_{2},\Phi_{3}\}-{\cal P}_{12}\{\Phi_{2},\Phi_{1},\Phi_{3}\}, \tag{27}\] and \[\{\Phi_{1}^{\prime\prime},\Phi_{2}^{\prime\prime},\Phi_{3}^{\prime\prime}\}= \{\Phi_{1}^{\prime},\Phi_{2}^{\prime},\Phi_{3}^{\prime}\}-{\cal P}_{34}\{\Phi_ {2}^{\prime},\Phi_{1}^{\prime},\Phi_{3}^{\prime}\} \tag{28}\] which does have these symmetry properties. Thus, without loss of generality, we shall assume that we are dealing with a solution \(\{\Phi_{1},\Phi_{2},\Phi_{3}\}\) of Eqs. (26) which has the symmetry properties of Eq. (25). We also note that the input 2-body t matrices \(T_{12}^{d}\) and \(T_{34}^{d}\) can be antisymmetrized by defining \[T_{12}=\frac{1}{2}(1-\mathcal{P}_{12})T_{12}^{d},\quad T_{34}=\frac{1}{2}(1- \mathcal{P}_{34})T_{34}^{d}, \tag{29}\] so that \[T_{12}\mathcal{P}_{12}=\mathcal{P}_{12}T_{12}=-T_{12}, \tag{30a}\] \[T_{34}\mathcal{P}_{34}=\mathcal{P}_{34}T_{34}=-T_{34}, \tag{30b}\] which also allows Eq. (12) to be extended to the case of identical particles as \[T_{\alpha}=T_{a}G_{a^{\prime}}^{0\;-1}+T_{a^{\prime}}G_{a}^{0\;-1}+T_{a}T_{a^{ \prime}}, \tag{31}\] or explicitly with \(G_{a^{\prime}}^{0\;-1}\) and \(G_{a}^{0\;-1}\) suppressed, \[T_{1}=T_{13}+T_{24}+T_{13}T_{24}, \tag{32a}\] \[T_{2}=T_{14}+T_{23}+T_{14}T_{23},\] (32b) \[T_{3}=T_{12}+T_{34}+T_{12}T_{34}, \tag{32c}\] where the equations for \(T_{1}\) and \(T_{2}\) are just those of Eq. (13a) and Eq. (13b) written without the redundant "\(d\)" superscripts, and where \(T_{3}\) is defined by Eq. (32c). 
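For completeness, we sketch the short argument by which the component definition, Eq. (23), together with the Lippmann-Schwinger equation (11) for \(T_{\alpha}^{d}\), leads to the Faddeev-like equations (26); the same steps give Eqs. (10) and (14) in the distinguishable-quark case. Operator products are understood schematically, as elsewhere in this section. Using \(\Phi=\sum_{\beta}\Phi_{\beta}\),
\[\Phi_{\alpha}=K_{\alpha}^{d}G_{0}^{(4)}\Phi=K_{\alpha}^{d}G_{0}^{(4)}\Big(\Phi_{\alpha}+\sum_{\beta}\bar{\delta}_{\alpha\beta}\Phi_{\beta}\Big)\quad\Longrightarrow\quad\big(1-K_{\alpha}^{d}G_{0}^{(4)}\big)\Phi_{\alpha}=K_{\alpha}^{d}G_{0}^{(4)}\sum_{\beta}\bar{\delta}_{\alpha\beta}\Phi_{\beta}.\]
Since Eq. (11) implies \(T_{\alpha}^{d}=\big(1-K_{\alpha}^{d}G_{0}^{(4)}\big)^{-1}K_{\alpha}^{d}\), multiplying the last relation by \(\big(1-K_{\alpha}^{d}G_{0}^{(4)}\big)^{-1}\) gives
\[\Phi_{\alpha}=T_{\alpha}^{d}\sum_{\beta}\bar{\delta}_{\alpha\beta}G_{0}^{(4)}\Phi_{\beta},\]
which is Eq. (26) with the factors of \(G_{0}^{(4)}\) restored.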
Furthermore, as the physical (antisymmetric) t matrices for \(qq\) and \(\bar{q}\bar{q}\) scattering are \(T_{qq}=(1-\mathcal{P}_{12})T_{12}^{d}=2T_{12}\) and \(T_{\bar{q}\bar{q}}=(1-\mathcal{P}_{34})T_{34}^{d}=2T_{34}\), respectively, it is convenient to use the antisymmetric \(T_{12}\) and \(T_{34}\) as the input \(qq\) and \(\bar{q}\bar{q}\) t matrices. This is accomplished by multiplying Eq. (26c) by \((1-\mathcal{P}_{12})\) and using the symmetry properties of Eq. (25) to obtain \[\Phi_{3} =\frac{1}{2}(1-\mathcal{P}_{12})T_{3}^{d}\frac{1}{2}(1-\mathcal{P }_{34})(\Phi_{1}+\Phi_{2})\] \[=T_{3}(\Phi_{1}+\Phi_{2}) \tag{33}\] thereby allowing us to write Eqs. (26) as \[\Phi_{1} =T_{1}(\Phi_{2}+\Phi_{3}), \tag{34a}\] \[\Phi_{2} =T_{2}(\Phi_{3}+\Phi_{1}),\] (34b) \[\Phi_{3} =T_{3}(\Phi_{1}+\Phi_{2}). \tag{34c}\] For physical (antisymmetric) solutions of Eqs. (34), only two of these three equations are independent. For example, Eq. (34b) can be written as \[-\mathcal{P}_{12}\Phi_{1} =\mathcal{P}_{12}T_{1}\mathcal{P}_{12}(\Phi_{3}+\Phi_{1})\] \[=\mathcal{P}_{12}T_{1}(-\Phi_{3}-\Phi_{2}) \tag{35}\] where Eq. (25) and \(T_{2}=\mathcal{P}_{12}T_{1}\mathcal{P}_{12}\) have been used; then, after a further application of \(\mathcal{P}_{12}\), one obtains Eq. (34a). Choosing Eq. (34a) and Eq. (34c) as the two independent equations, we can use \(\Phi_{2}=-\mathcal{P}_{12}\Phi_{1}\) to obtain closed equations \[\Phi_{1} =T_{1}(-\mathcal{P}_{12}\Phi_{1}+\Phi_{3}), \tag{36a}\] \[\Phi_{3} =T_{3}(\Phi_{1}-\mathcal{P}_{12}\Phi_{1}), \tag{36b}\] where, necessarily, \(\mathcal{P}_{12}\Phi_{3}=-\Phi_{3}\). In this way an arbitrary solution of Eqs. (36) results in components \(\{\Phi_{1},\Phi_{2},\Phi_{3}\}\) which obey the symmetry properties of Eqs. (25a) but not necessarily of Eq. (25b); however, invoking a similar argument as previously, no generality is lost in choosing a solution of Eqs. (36) that has all the symmetry properties of Eq. (25). Equation (36b) can be further simplified using \(\mathcal{P}_{12}\Phi_{1}=\mathcal{P}_{34}\Phi_{1}\) and the assumption that \(T_{12}\) and \(T_{34}\) are antisymmetric in their labels, so that \[T_{3}\mathcal{P}_{12}\Phi_{1}=(T_{12}+T_{34}+T_{12}T_{34})\mathcal{P}_{12}\Phi _{1}=-T_{3}\Phi_{1}. \tag{37}\] In this way Eqs. (36) take the form \[\Phi_{1} =T_{1}(-\mathcal{P}_{12}\Phi_{1}+\Phi_{3}) \tag{38a}\] \[\Phi_{3} =2T_{3}\Phi_{1}. \tag{38b}\] Again, without loss of generality, we choose a solution of Eqs. (38) which has all the symmetry properties of Eq. (25). Tetraquark equations with exposed \(\mathbf{q\bar{q}(T_{q\bar{q}})}\), \(\mathbf{qq(T_{\bar{q}\bar{q}})}\), and \(\mathbf{\bar{q}\bar{q}(T_{qq})}\) channels Choosing Eqs. (38) as the four-body equations describing a tetraquark, they may be expressed in matrix form as \[\Phi=\mathcal{T}\mathcal{R}\,\Phi \tag{39}\] where \[\varPhi=\begin{pmatrix}\Phi_{1}\\ \Phi_{3}\end{pmatrix},\ \ \mathcal{T}=\begin{pmatrix}\frac{1}{2}T_{1}&0\\ 0&T_{3}\end{pmatrix},\ \ \mathcal{R}=2\begin{pmatrix}-\mathcal{P}_{12}&1\\ 1&0\end{pmatrix}. 
\tag{40}\] Writing \[T_{1}=T_{1}^{\times}+T_{1}^{+},\hskip 28.452756ptT_{3}=T_{3}^{\times}+T_{3}^{+}, \tag{41}\] where \[T_{1}^{\times}=T_{13}T_{24},\hskip 28.452756ptT_{1}^{+}=T_{13}+T_{24}, \tag{42a}\] \[T_{3}^{\times}=T_{12}T_{34}\hskip 28.452756ptT_{3}^{+}=T_{12}+T_{34}, \tag{42b}\] we have that \[\mathcal{T}=\mathcal{T}^{\times}+\mathcal{T}^{+} \tag{43}\] where \[\mathcal{T}^{\times}=\begin{pmatrix}\frac{1}{2}T_{1}^{\times}&0\\ 0&T_{3}^{\times}\end{pmatrix},\ \ \mathcal{T}^{+}=\begin{pmatrix}\frac{1}{2}T_{1}^{+}&0\\ 0&T_{3}^{+}\end{pmatrix}. \tag{44}\] Thus \[\varPhi=(\mathcal{T}^{\times}+\mathcal{T}^{+})\mathcal{R}\varPhi \tag{45}\] and consequently \[\varPhi=(1-\mathcal{T}^{+}\mathcal{R})^{-1}\mathcal{T}^{\times}\mathcal{R} \varPhi. \tag{46}\] To be close to previous publications we choose a separable approximation for the two-body t matrices in \(T_{1}^{\times}\) and \(T_{3}^{\times}\) (but not necessarily in \(T_{1}^{+}\) and \(T_{3}^{+}\)); namely, for \(a\in\{13,24,12,34\}\) we take \[T_{a}=i\Gamma_{a}D_{a}\bar{\Gamma}_{a}, \tag{47}\] where \(D_{a}=D_{a}(P_{a})\) is a propagator whose structure can be chosen to best describe the two-body t matrix \(T_{a}\), and \(\Gamma_{a}\) is a corresponding vertex function. In the simplest case, one can follow previous publications and choose the pole approximation where \(D_{a}(P_{a})=1/(P_{a}^{2}-m_{a}^{2})\) is the propagator for the bound particle (diquark, antidiquark, or meson) of mass \(m_{a}\). In view of Eq. (30), note that \[\mathcal{P}_{12}\Gamma_{12}=-\Gamma_{12},\hskip 28.452756pt \bar{\Gamma}_{12}\mathcal{P}_{12}=-\bar{\Gamma}_{12}, \tag{48a}\] \[\mathcal{P}_{34}\Gamma_{34}=-\Gamma_{34},\hskip 28.452756pt\bar{ \Gamma}_{34}\mathcal{P}_{34}=-\bar{\Gamma}_{34}. \tag{48b}\] We can thus write \[{\cal T}^{\times}=-\Gamma D\bar{\Gamma} \tag{49}\] where \[\Gamma=\begin{pmatrix}\Gamma_{13}\Gamma_{24}&0\\ 0&\Gamma_{12}\Gamma_{34}\end{pmatrix},\,\,\,D=\begin{pmatrix}\frac{1}{2}D_{13} D_{24}&0\\ 0&D_{12}D_{34}\end{pmatrix},\,\,\,\bar{\Gamma}=\begin{pmatrix}\bar{\Gamma}_{13} \bar{\Gamma}_{24}&0\\ 0&\bar{\Gamma}_{12}\bar{\Gamma}_{34}\end{pmatrix}. \tag{50}\] In this way \({\cal T}^{\times}\) exposes intermediate state meson-meson (\(D_{13}D_{24}\)) and diquark-antidiquark (\(D_{12}D_{34}\)) channels. Using Eq. (49) in Eq. (46), \[\phi=-\bar{\Gamma}{\cal R}(1-{\cal T}^{+}{\cal R})^{-1}\Gamma D\phi \tag{51}\] where \[\phi=\bar{\Gamma}{\cal R}\,\Phi. \tag{52}\] In this way we obtain the bound state equation for \(\phi\) in meson-meson (\(MM\)) and diquark-antidiquark (\(D\bar{D}\)) space, \[\phi=VD\phi \tag{53}\] where the \(2\times 2\) matrix potential (with reinserted \(G_{0}^{(4)}\)) is \[V=-\bar{\Gamma}{\cal R}G_{0}^{(4)}(1-{\cal T}^{+}{\cal R}G_{0}^{(4)})^{-1} \Gamma. \tag{54}\] Expanding the term in square brackets in powers of \({\cal T}^{+}\) (i.e., with respect to the contribution of intermediate states \(q\bar{q}(T_{q\bar{q}})\), \(qq(T_{\bar{q}\bar{q}})\), and \(\bar{q}\bar{q}(T_{qq})\)), \[V=-\bar{\Gamma}{\cal R}G_{0}^{(4)}\left[1+{\cal T}^{+}{\cal R}G_{0}^{(4)}+ \ldots\right]\Gamma, \tag{55}\] it turns out that each of the first two terms of this expansion corresponds to different existing approaches to modelling tetraquarks in terms of \(MM-D\bar{D}\) coupled channels. 
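Before identifying these terms, it may help to make the algebra behind Eqs. (46) and (51) fully explicit; this is only a restatement of steps already taken above (with the factors of \(G_{0}^{(4)}\) suppressed), not an additional approximation. From Eq. (45),
\[\varPhi=(\mathcal{T}^{\times}+\mathcal{T}^{+})\mathcal{R}\varPhi\quad\Longrightarrow\quad(1-\mathcal{T}^{+}\mathcal{R})\varPhi=\mathcal{T}^{\times}\mathcal{R}\varPhi\quad\Longrightarrow\quad\varPhi=(1-\mathcal{T}^{+}\mathcal{R})^{-1}\mathcal{T}^{\times}\mathcal{R}\varPhi,\]
which is Eq. (46). Inserting the separable form \(\mathcal{T}^{\times}=-\Gamma D\bar{\Gamma}\) of Eq. (49) and multiplying from the left by \(\bar{\Gamma}\mathcal{R}\) then gives
\[\bar{\Gamma}\mathcal{R}\varPhi=-\bar{\Gamma}\mathcal{R}(1-\mathcal{T}^{+}\mathcal{R})^{-1}\Gamma D\,\bar{\Gamma}\mathcal{R}\varPhi,\]
which is Eq. (51) for \(\phi=\bar{\Gamma}\mathcal{R}\varPhi\); the geometric expansion of \((1-\mathcal{T}^{+}\mathcal{R})^{-1}\) in powers of \(\mathcal{T}^{+}\) is what produces Eq. (55).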
In particular, the lowest order term \[V^{(0)} = -\bar{\Gamma}{\cal R}G_{0}^{(4)}\Gamma \tag{56}\] \[= -2\begin{pmatrix}\bar{\Gamma}_{1}&0\\ 0&\bar{\Gamma}_{3}\end{pmatrix}\begin{pmatrix}-{\cal P}_{12}&1\\ 1&0\end{pmatrix}G_{0}^{(4)}\begin{pmatrix}\Gamma_{1}&0\\ 0&\Gamma_{3}\end{pmatrix}\] \[= -2\begin{pmatrix}-\bar{\Gamma}_{1}{\cal P}_{12}\Gamma_{1}&\bar{ \Gamma}_{1}\Gamma_{3}\\ \bar{\Gamma}_{3}\Gamma_{1}&0\end{pmatrix},\] where \[\bar{\Gamma}_{1} = \bar{\Gamma}_{13}\bar{\Gamma}_{24},\hskip 28.452756pt\Gamma_{1}= \Gamma_{13}\Gamma_{24}, \tag{57a}\] \[\bar{\Gamma}_{3} = \bar{\Gamma}_{12}\bar{\Gamma}_{34},\hskip 28.452756pt\Gamma_{3}= \Gamma_{12}\Gamma_{34}, \tag{57b}\] consists of Feynman diagrams illustrated in Fig. 4, and corresponds to the Giessen group (GG) model of Heupel _et al._[10] where tetraquarks are modelled by solving the equation \[\phi^{(0)}=V^{(0)}D\phi^{(0)}. \tag{58}\] Similarly, the first order correction (without the lowest order term included) is \[V^{(1)} = -\bar{\Gamma}\mathcal{R}G_{0}^{(4)}\mathcal{T}^{+}\mathcal{R}G_{0 }^{(4)}\,\Gamma \tag{59}\] \[= -4\begin{pmatrix}\bar{\Gamma}_{1}&0\\ 0&\bar{\Gamma}_{3}\end{pmatrix}G_{0}^{(4)}\begin{pmatrix}-\mathcal{P}_{12}& 1\\ 1&0\end{pmatrix}\begin{pmatrix}\frac{1}{2}T_{1}^{+}&0\\ 0&T_{3}^{+}\end{pmatrix}\begin{pmatrix}-\mathcal{P}_{12}&1\\ 1&0\end{pmatrix}G_{0}^{(4)}\begin{pmatrix}\Gamma_{1}&0\\ 0&\Gamma_{3}\end{pmatrix}\] \[= -2\begin{pmatrix}\bar{\Gamma}_{1}[\mathcal{P}_{12}T_{1}^{+} \mathcal{P}_{12}+2T_{3}^{+}]\Gamma_{1}&-\bar{\Gamma}_{1}\mathcal{P}_{12}T_{1} ^{+}\Gamma_{3}\\ -\bar{\Gamma}_{3}T_{1}^{+}\mathcal{P}_{12}\Gamma_{1}&2\bar{\Gamma}_{3}T_{1}^{+ }\Gamma_{3}\end{pmatrix},\] which consists of Feynman diagrams illustrated in Fig. 5, and corresponds to the Moscow group (MG) model of Faustov _et al._[7] where they modelled tetraquarks by solving the equation \[\phi^{(1)}=V^{(1)}D\phi^{(1)}, \tag{60}\] albeit, with only diquark-antidiquark channels retained. It is an essential result of this paper, that it is the sum of the potentials \(V^{(0)}\) and \(V^{(1)}\), each associated with the separate approaches of the Giessen and Moscow groups, with tetraquarks modelled by the bound Figure 4: Feynman diagrams making up the elements of the coupled channel \(MM-D\bar{D}\) kernel matrix \(V^{(0)}\) of Eq. (56): (a) \(\bar{\Gamma}_{1}\mathcal{P}_{12}\Gamma_{1}\), (b) \(\bar{\Gamma}_{1}\Gamma_{3}\), and (c) \(\bar{\Gamma}_{3}\Gamma_{1}\). Solid lines with leftward (rightward) arrows represent quarks (antiquarks), dashed lines represent mesons, and double-lines represent diquarks and antidiquarks. state equation \[\phi=[V^{(0)}+V^{(1)}]D\phi, \tag{61}\] that results in a complete \(MM-D\bar{D}\) coupled channel description up to first order in \(\mathcal{T}^{+}\) [i.e., up to first order in intermediate states where one \(2q\) pair (\(qq\), \(q\bar{q}\), or \(\bar{q}\bar{q}\)) is mutually interacting while the other \(2q\) pair is spectating]. ### Meson-meson symmetry To discuss the symmetry of identical meson legs, we note that the potential \(V\) consists of diagrams, some of which are illustrated in Fig. 4 and Fig. 5, where a four-meson leg contribution, for example \(\bar{\Gamma}_{1}\mathcal{P}_{12}\Gamma_{1}\) as illustrated in Fig. 4(a), consists of a diagram which is not symmetric with respect to meson quantum numbers, being only symmetric with respect to swapping meson legs in both initial and final states simultaneously. 
Thus, to establish a description in terms of physical amplitudes, we will need to explicitly symmetrise identical meson states in the bound state equation, Eq. (53). To do this, we define \(\mathcal{P}\) to be the operator that swaps meson quantum numbers, and note the useful relations \[\mathcal{P}\bar{\Gamma}_{1}=\bar{\Gamma}_{1}\mathcal{P}_{12} \mathcal{P}_{34}, \tag{62a}\] \[\mathcal{P}_{12}\mathcal{P}_{34}\Gamma_{3}=\Gamma_{3}, \tag{62b}\] the first of which shows that interchanging the two mesons in the final state of the vertex function product \(\bar{\Gamma}_{1}=\bar{\Gamma}_{13}\bar{\Gamma}_{24}\) is equivalent to interchanging the identical quarks and antiquarks in the initial state, and the second of which follows from the antisymmetry of the \(qq\) and \(\bar{q}\bar{q}\) vertex functions in \(\Gamma_{3}=\bar{\Gamma}_{12}\bar{\Gamma}_{34}\). Using these relations it is straightforward to prove \[\begin{pmatrix}\mathcal{P}&0\\ 0&1\end{pmatrix}V=V\begin{pmatrix}\mathcal{P}&0\\ 0&1\end{pmatrix} \tag{63}\] which shows the equivalence of exchanging identical mesons in initial and final states. In turn this implies that if \(\phi\) is a solution of Eq. (53) then so is \(\begin{pmatrix}\mathcal{P}&0\\ 0&1\end{pmatrix}\phi\), and therefore, so is \[\phi^{S}=\begin{pmatrix}1+\mathcal{P}&0\\ 0&2\end{pmatrix}\phi, \tag{64}\] where \(\phi^{S}\) is the physical solution which is symmetric with respect to the exchange of the two identical mesons. One can then write \[\phi^{S}=V^{S}D\phi^{S} \tag{65}\] where \[V^{S}=\frac{1}{2}\begin{pmatrix}1+\mathcal{P}&0\\ 0&2\end{pmatrix}V \tag{66}\] is the properly symmetrised kernel. In particular, \[V^{S}=V^{S\,(0)}+V^{S\,(1)} \tag{67}\] where \[V^{S\,(0)} =-\begin{pmatrix}1+\mathcal{P}&0\\ 0&2\end{pmatrix}\begin{pmatrix}-\bar{\Gamma}_{1}\mathcal{P}_{12}\Gamma_{1}& \bar{\Gamma}_{1}\Gamma_{3}\\ \bar{\Gamma}_{3}\Gamma_{1}&0\end{pmatrix}, \tag{68a}\] \[V^{S\,(1)} =-\begin{pmatrix}1+\mathcal{P}&0\\ 0&2\end{pmatrix}\begin{pmatrix}\bar{\Gamma}_{1}[\mathcal{P}_{12}T_{1}^{+} \mathcal{P}_{12}+2T_{3}^{+}]\Gamma_{1}&-\bar{\Gamma}_{1}\mathcal{P}_{12}T_{1}^ {+}\Gamma_{3}\\ -\bar{\Gamma}_{3}T_{1}^{+}\mathcal{P}_{12}\Gamma_{1}&\bar{\Gamma}_{3}T_{1}^{+} \Gamma_{3}\end{pmatrix}. \tag{68b}\] According to the discussion below Eqs. (32), \(T_{12}=\frac{1}{2}T_{qq}\) and \(T_{34}=\frac{1}{2}T_{\bar{q}\bar{q}}\), so that \(T_{3}=\frac{1}{2}(T_{qq}+T_{\bar{q}\bar{q}})\), where \(T_{qq}\) and \(T_{\bar{q}\bar{q}}\) are the physical (antisymmetric) scattering amplitudes for identical quarks. In the separable approximation \(i\Gamma_{\bar{q}\bar{q}}\bar{\Gamma}_{\bar{q}\bar{q}}/(P^{2}-M_{\bar{q}\bar{q}}^{2})\) which define the corresponding symmetrised quark vertex functions \(\Gamma_{qq}\), \(\Gamma_{\bar{q}\bar{q}}\), \(\bar{\Gamma}_{qq}\), and \(\bar{\Gamma}_{\bar{q}\bar{q}}\). It follows that \(\Gamma_{3}=\frac{1}{2}\Gamma_{qq}\Gamma_{\bar{q}\bar{q}}\). It is convenient to re-express the symmetric (in mesons) kernels of Eqs. (68) in terms of these antisymmetric (in quarks) quantities. To do this in a way that does not change notation, we shall implement the following replacements: \(T_{12}\to\frac{1}{2}T_{12}\), \(T_{34}\to\frac{1}{2}T_{34}\) and \(\Gamma_{3}\to\frac{1}{2}\Gamma_{3}\). After these replacements \(T_{12}\) and \(T_{34}\) become the physical scattering amplitudes for indistinguishable quarks and antiquarks. In this way Eqs. 
(68) become \[V^{S\,(0)} = -\begin{pmatrix}1+{\cal P}&0\\ 0&2\end{pmatrix}\begin{pmatrix}-\bar{\Gamma}_{1}{\cal P}_{12}\Gamma_{1}& \frac{1}{2}\bar{\Gamma}_{1}\Gamma_{3}\\ \frac{1}{2}\bar{\Gamma}_{3}\Gamma_{1}&0\end{pmatrix}, \tag{69a}\] \[V^{S\,(1)} = -\begin{pmatrix}1+{\cal P}&0\\ 0&2\end{pmatrix}\begin{pmatrix}\bar{\Gamma}_{1}[{\cal P}_{12}T_{1}^{+}{\cal P }_{12}+T_{3}^{+}]\Gamma_{1}&-\frac{1}{2}\bar{\Gamma}_{1}{\cal P}_{12}T_{1}^{+} \Gamma_{3}\\ -\frac{1}{2}\bar{\Gamma}_{3}T_{1}^{+}{\cal P}_{12}\Gamma_{1}&\frac{1}{4}\bar{ \Gamma}_{3}T_{1}^{+}\Gamma_{3}\end{pmatrix}. \tag{69b}\] which using Eqs. (62), simplify to \[V^{S\,(0)} = \begin{pmatrix}(1+{\cal P})\bar{\Gamma}_{1}{\cal P}_{12}\Gamma_{1 }&-\bar{\Gamma}_{1}\Gamma_{3}\\ -\bar{\Gamma}_{3}\Gamma_{1}&0\end{pmatrix}, \tag{70a}\] \[V^{S\,(1)} = \begin{pmatrix}-(1+{\cal P})\bar{\Gamma}_{1}[{\cal P}_{12}T_{1}^{ +}{\cal P}_{12}+T_{3}^{+}]\Gamma_{1}&\bar{\Gamma}_{1}{\cal P}_{12}T_{1}^{+} \Gamma_{3}\\ \bar{\Gamma}_{3}T_{1}^{+}{\cal P}_{12}\Gamma_{1}&-\frac{1}{2}\bar{\Gamma}_{3} T_{1}^{+}\Gamma_{3}\end{pmatrix}. \tag{70b}\] A few observations are in order: 1. The expression for the lowest order potential, \(V^{S\,(0)}\), corresponds to the model of the Giessen group as previously derived in [10]. 2. One can see explicitly that the Giessen group potential \(V^{S\,(0)}\) does not support \(D\bar{D}\) elastic transition, \(D\bar{D}\gets D\bar{D}\), whereas the one of the Moscow group, \(V^{S,(1)}\), does (see the right lower corner matrix element \(2\Gamma_{3}T_{1}^{+}\Gamma_{3}\)). 3. Equation (70b) can be simplified by removing \(T_{24}\) in \(T_{1}^{+}=T_{13}+T_{24}\), as follows. Using Eq. (62), \[\bar{\Gamma}_{1}\mathcal{P}_{12}T_{1}^{+}\mathcal{P}_{12}\Gamma_{1} =\bar{\Gamma}_{1}\mathcal{P}_{12}(T_{13}+T_{24})\mathcal{P}_{12} \Gamma_{1}\] \[=\bar{\Gamma}_{1}\mathcal{P}_{12}(T_{13}+\mathcal{P}_{12} \mathcal{P}_{34}T_{13}\mathcal{P}_{12}\mathcal{P}_{34})\mathcal{P}_{12}\Gamma_{1}\] \[=\bar{\Gamma}_{1}\mathcal{P}_{12}T_{13}\mathcal{P}_{12}\Gamma_{1} +\mathcal{P}\bar{\Gamma}_{1}\mathcal{P}_{12}T_{13}\mathcal{P}_{12}\Gamma_{1} \mathcal{P} \tag{71a}\] \[\bar{\Gamma}_{1}\mathcal{P}_{12}T_{1}^{+}\Gamma_{3} =\bar{\Gamma}_{1}\mathcal{P}_{12}T_{13}\Gamma_{3}+\mathcal{P}\bar{ \Gamma}_{1}\mathcal{P}_{12}\mathcal{P}_{34}\mathcal{P}_{12}T_{24}\Gamma_{3}\] \[=\Gamma_{1}\mathcal{P}_{12}T_{13}\Gamma_{3}+\mathcal{P}\bar{ \Gamma}_{1}\mathcal{P}_{12}T_{13}\Gamma_{3}\] \[=(1+\mathcal{P})\bar{\Gamma}_{1}\mathcal{P}_{12}T_{13}\Gamma_{3},\] (71b) \[\bar{\Gamma}_{3}T_{1}^{+}\Gamma_{3} =\bar{\Gamma}_{3}(T_{13}+\mathcal{P}_{12}\mathcal{P}_{34}T_{24} \mathcal{P}_{12}\mathcal{P}_{34})\Gamma_{3}\] \[=2\bar{\Gamma}_{3}T_{13}\Gamma_{3}. \tag{71c}\] The simplification is in that when solving numerically Eq. (65), instead of calculating two integrals of Eq. (71a), \(\bar{\Gamma}_{1}\mathcal{P}_{12}T_{13}\mathcal{P}_{12}\Gamma_{1}+\bar{\Gamma}_ {1}\mathcal{P}_{34}T_{13}\mathcal{P}_{34}\Gamma_{1}\), we calculate only one of them, \(I=\bar{\Gamma}_{1}\mathcal{P}_{12}T_{13}\mathcal{P}_{12}\Gamma_{1}\), the second integral being obtained by only swapping meson quantum numbers in the first one, \(\mathcal{P}I\mathcal{P}\). Similarly for \(\bar{\Gamma}_{1}\mathcal{P}_{12}T_{1}^{+}\Gamma_{3}\). ## III Summary and discussion We have derived tetraquark equations that take the form of a Bethe-Salpeter equation in coupled \(MM-D\bar{D}\) space, Eq. 
(65), where the kernel \(V^{S}\) is a sum of two terms: \(V^{S\,(0)}\) consisting of terms involving non-interacting quark exchange, as illustrated in Fig. 4, and \(V^{S\,(1)}\) consisting of terms involving interacting quark exchange where one pair of quarks mutually scatter in intermediate state, as illustrated in Fig. 5. The mathematical expressions for these potentials are given by Eq. (70), which takes into account the antisymmetry of identical quarks (\(qq\) and \(\bar{q}\bar{q}\)), and the symmetry of identical mesons (\(MM\)). Assuming pairwise interactions between the quarks, our derivation stems from the covariant four-body equations of Khvedelidze and Kvinikhidze [14], which in this approximation, are exact equations for a four-body system in relativistic quantum field theory. Only two additional approximations are made to obtain our final equations: (i) separable approximations were made for each of the two-body t matrices in the product terms \(T_{a}T_{a^{\prime}}\), of Eq. (31), thereby exposing \(MM\) and \(D\bar{D}\) channels, and (ii) the two-body t matrices in the sum \(T_{a}+T_{a^{\prime}}\), of Eq. (31), are retained only to first order in the expression for the four-body kernel \(V\), Eq. (55), which is sufficient to introduce \(q\bar{q}(T_{q\bar{q}})\), \(qq(T_{\bar{q}\bar{q}})\), and \(\bar{q}\bar{q}(T_{qq})\) states, as illustrated in Fig. 5, into the resulting description. A feature of our equations, is that they provide a unified description of previous seemingly unrelated approaches. In particular, neglecting \(V^{S\,(1)}\) from our kernel of Eq. (67), results in the \(MM-D\bar{D}\) coupled channels model of the Giessen group (Fischer _et al._) [10; 11; 12; 13], while neglecting \(V^{S\,(0)}\) from our kernel of Eq. (67), encompasses the \(D\bar{D}\) model of the Moscow group (Faustov _et al._) [6; 7; 8; 9]. More specifically, the Moscow group model corresponds to keeping just the \(D\bar{D}\to D\bar{D}\) element of the matrix \(V^{S\,(1)}\) given in Eq. (70b), namely \[-\frac{1}{2}\bar{\Gamma}_{3}T_{1}^{+}\Gamma_{3}=-\bar{\Gamma}_{1 2}\bar{\Gamma}_{34}T_{13}\Gamma_{12}\Gamma_{34}\] \[=-\bar{\Gamma}_{D}\bar{\Gamma}_{D}G^{0}_{q\bar{q}}T_{q\bar{q}}G^ {0}_{q\bar{q}}\Gamma_{D}\Gamma_{\bar{D}} \tag{72}\] where \(\Gamma_{D}\equiv\Gamma_{12}\), \(\Gamma_{\bar{D}}\equiv\Gamma_{34}\), \(\bar{\Gamma}_{D}\equiv\bar{\Gamma}_{12}\), \(\bar{\Gamma}_{\bar{D}}\equiv\bar{\Gamma}_{34}\), \(T_{q\bar{q}}\equiv T_{13}\), and \(G^{0}_{q\bar{q}}\) is the product of propagators for \(q\) and \(\bar{q}\). In this respect it is interesting to note that theory specifies \(T_{q\bar{q}}\) to be the full t matrix for quark-antiquark scattering, and as such, is expressible as a sum of three types of contributions: (i) \(s\)-channel pole contributions corresponding to the formation of mesons (the typical approximation used for two-quark scattering amplitudes by the Giessen group), (ii) a long-range contribution due to one-gluon-exchange, and (iii) all other possible contribution including contributions responsible for confinement. Indeed, as shown in the Appendix, one can write the general structure of \(T_{q\bar{q}}\) as \[T_{q\bar{q}}=\frac{\Phi_{q\bar{q}}\bar{\Phi}_{q\bar{q}}}{P^{2}-M_{q\bar{q}}^{ 2}}+K_{g}+K_{C} \tag{73}\] where the pole term corresponds to a meson of mass \(M_{q\bar{q}}\), \(K_{g}\) is the one-gluon-exchange potential, and \(K_{C}\) includes all other contributions to \(T_{q\bar{q}}\) including those responsible for confinement. 
Correspondingly, the \(D\bar{D}\) kernel in our approach is given by the sum of the three terms illustrated in Fig. 6. Comparison with the Moscow group's \(D\bar{D}\) kernel shows that they did not include the \(s\)-channel meson exchange contribution (second diagram of Fig. 6), but did include one-gluon exchange taking into account the finite sizes of the diquark and antidiquark through corresponding form factors, [first term of Eq. (10) in Ref. [8]], a contribution corresponding to the third diagram of Fig. 6. The Moscow group also included a phenomenological \(D\bar{D}\) confining potential [second term of Eq. (10) in Ref. [8]], that correspond to the last diagram of Fig. 6 for the case of a local \(q\bar{q}\) potential. Note that the confining interaction between a quark and an antiquark that are constitutents of a diquark and an antidiquark, results in diquark-antidiquark confinement, i.e., the two-body diquark-antidiquark potential produced in this way also has a confining part. Given the locality of the \(q\bar{q}\) confining potential, it only needs to be multiplied by a diquark form factor to result in the diquark-antidiquark confining potential, because the form factor does not change the long-range (small momentum transfer) behaviour of the \(q\bar{q}\) potential. Finally, it is worth noting that although we have singled out the works of the Moscow and Giessen groups as a means of demonstrating how our tetraquark equations can provide a common theoretical basis for very different approaches, it seems likely that these equations are able to encompass yet other theoretical works on the tetraquark. ###### Acknowledgements. A.N.K. was supported by the Shota Rustaveli National Science Foundation (Grant No. FR17-354). ## Appendix A General structure of the \(q\bar{q}\) scattering amplitude Although it is not possible as yet to solve Quantum Chromodynamics to obtain the precise form of the force between a quark and an antiquark, there are three basic features of this force that would be desirable to take into account when constructing a phenomenological Figure 6: General structure of the \(D\bar{D}\) kernel in the unified tetraquark equations. Illustrated is the \(D\bar{D}\) kernel (left diagram where the red circle represents the full \(q\bar{q}\) t matrix \(T_{q\bar{q}}\) in intermediate state), expressed as a sum of three terms (from left to right): (i) a \(q\bar{q}\)\(s\)-channel meson exchange (dashed line) contribution, (ii) a \(q\bar{q}\) one-gluon-exchange (curly line) contribution, and (iii) all possible other contributions to intermediate state \(q\bar{q}\) scattering (shaded circle). version of the \(q\bar{q}\) scattering amplitude: (i) the force binds \(q\bar{q}\) pairs to form mesons, (ii) one-gluon-exchange is an important contribution to the short-range part of this force, and (iii) the force has the property of color confinement. To construct the \(q\bar{q}\) t matrix \(T\) with these features, one can first write the full \(q\bar{q}\) Green function at total momentum \(P\) in the form \[G=\frac{\Psi\bar{\Psi}}{P^{2}-M^{2}}+G_{C} \tag{10}\] where the pole term takes into account the bound state meson of mass \(M\) (one can of course take into account more than one bound state by having a sum over such pole terms) and \(G_{C}\) is the rest of the Green function with no pole at \(P^{2}=M^{2}\). 
If \(K\) is the \(q\bar{q}\) potential that generates \(G\), that is if \[G=G_{0}+G_{0}KG, \tag{11}\] then the corresponding t matrix \(T\), defined as the solution of \[T=K+KG_{0}T, \tag{12}\] can be written as \[T = K+KGK \tag{13}\] \[= K+K\left[\frac{\Psi\bar{\Psi}}{P^{2}-M^{2}}+G_{C}\right]K\] \[= K+\frac{\Phi\bar{\Phi}}{P^{2}-M^{2}}+KG_{C}K\] where \(\Phi=K\Psi\). It is seen that the pole term is generated by the sum of the iterated terms of Eq. (12), apart from \(K\), i.e., the iteration series for the pole term starts with \(KG_{0}K\). This means that adding the potential \(K\) to the pole term does not overcount \(K\), as one might otherwise expect. Writing \(K\) as \[K=K_{g}+K_{c} \tag{14}\] where \(K_{g}\) is the one-gluon-exchange potential and \(K_{c}\equiv K-K_{g}\), one obtains the general structure of the \(q\bar{q}\) t matrix: \[T=\frac{\Phi\bar{\Phi}}{P^{2}-M^{2}}+K_{g}+K_{C} \tag{15}\] where \[K_{C}\equiv K_{c}+KG_{C}K \tag{10}\] is responsible for confinement in view of its contributions from \(K_{c}\). As noted, neither \(K_{g}\) nor \(K_{C}\) is overcounted in Eq. (10).
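As a final cross-check (a standard relation, restated here only for completeness in the same schematic operator notation), note that the combination \(K+KGK\) indeed satisfies the defining equation for the t matrix:
\[K+KGK=K+K\big(G_{0}+G_{0}KG\big)K=K+KG_{0}\big(K+KGK\big),\]
so \(K+KGK\) obeys \(T=K+KG_{0}T\) and hence equals \(T\). It is this identity that allows the pole, one-gluon-exchange, and confining pieces of the \(q\bar{q}\) t matrix to be read off directly from the decompositions of \(G\) and \(K\) given above.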
2308.04672
Resource Constrained Model Compression via Minimax Optimization for Spiking Neural Networks
Brain-inspired Spiking Neural Networks (SNNs) have the characteristics of event-driven and high energy-efficient, which are different from traditional Artificial Neural Networks (ANNs) when deployed on edge devices such as neuromorphic chips. Most previous work focuses on SNNs training strategies to improve model performance and brings larger and deeper network architectures. It is difficult to deploy these complex networks on resource-limited edge devices directly. To meet such demand, people compress SNNs very cautiously to balance the performance and the computation efficiency. Existing compression methods either iteratively pruned SNNs using weights norm magnitude or formulated the problem as a sparse learning optimization. We propose an improved end-to-end Minimax optimization method for this sparse learning problem to better balance the model performance and the computation efficiency. We also demonstrate that jointly applying compression and finetuning on SNNs is better than sequentially, especially for extreme compression ratios. The compressed SNN models achieved state-of-the-art (SOTA) performance on various benchmark datasets and architectures. Our code is available at https://github.com/chenjallen/Resource-Constrained-Compression-on-SNN.
Jue Chen, Huan Yuan, Jianchao Tan, Bin Chen, Chengru Song, Di Zhang
2023-08-09T02:50:15Z
http://arxiv.org/abs/2308.04672v1
# Resource Constrained Model Compression via Minimax Optimization for Spiking Neural Networks ###### Abstract. Brain-inspired Spiking Neural Networks (SNNs) have the characteristics of event-driven and high energy-efficient, which are different from traditional Artificial Neural Networks (ANNs) when deployed on edge devices such as neuromorphic chips. Most previous work focuses on SNNs training strategies to improve model performance and brings larger and deeper network architectures. It's difficult to deploy these complex networks on resource-limited edge devices directly. To meet such demand, people compress SNNs very cautiously to balance the performance and the computation efficiency. Existing compression methods either iteratively pruned SNNs using weights norm magnitude or formulated the problem as a sparse learning optimization. We propose an improved end-to-end Minimax optimization method for this sparse learning problem to better balance the model performance and the computation efficiency. We also demonstrate that jointly applying compression and finetuning on SNNs is better than sequentially, especially for extreme compression ratios. The compressed SNN models achieved state-of-the-art (SOTA) performance on various benchmark datasets and architectures. Our code is available at _[https://github.com/chenjallen/Resource-Constrained-Compression-on-SNN_](https://github.com/chenjallen/Resource-Constrained-Compression-on-SNN_). Spiking Neural Networks, Model Compression, Sparse Pruning, Minimax Optimization 1 Footnote 1: The corresponding author. 2023 Spiking Neural Networks, Model Compression, Sparse Pruning, Minimax Optimization 2023 ## 1. Introduction As the third generation of neural networks (Wang et al., 2017), Spiking Neural Networks (SNNs) has been widely concerned in recent years. SNNs are the core of brain heuristic intelligence research, which have high biological interpretability and strong Spatio-temporal information processing ability (Shen et al., 2018). In addition, due to the inherent asynchrony and sparsity of spiking training, these types of networks can maintain relatively good performance as well as low power consumption, especially when combined with neuromorphic chips (Shen et al., 2018; Chen et al., 2019). With the development of efficient deep SNN training strategies, some useful network architectures are built, such as Spiking ResNet (Shen et al., 2018; Chen et al., 2019; Chen et al., 2019) and SEW ResNet (He et al., 2019) to improve the performance of SNNs. The parameters and computational energy of SNN models rapidly increase, while the computational resources of edge devices are usually limited. For example, SpiNNaker demonstrated to run networks with up to 250,000 neurons and 80 million synapses on a 48-chip board (Shen et al., 2018), which is still unable to run those more advanced SNNs. Thus, it is of great significance to compress SNNs before deploying them in real scenarios, which reduces computing costs, saves storage resources, and helps researchers exploit more benefits from high energy savings. Model compression was proposed to reduce the model size and improve the inference efficiency of the DNNs (Chen et al., 2019). Weights pruning (Chen et al., 2019) is one of the widely used techniques for compressing the model size by zeroing out the individual weight of the convolutional kernel or fully connected weights matrix. 
Filter pruning (Wang et al., 2017; Chen et al., 2019; Chen et al., 2019) is another kind of pruning technique that prunes entire filters (or nodes for fully connected layers) and their corresponding weights. In this way, the entire filters can be removed and the original DNN can be transformed to be a thinner network, thus achieving speedup on general hardware. Recently, researchers have carried out several works on SNNs pruning methods and made considerable progress. In GPSNN (Chen et al., 2019), a two-stage growing-pruning algorithm was used to compress fully connected SNN so that the network could achieve better performance. In (Kumar et al., 2017), the non-critical synapses of SNNs were regularly pruned during Spike Timing Dependent Plasticity (STDP) training process based on a preset threshold. A soft pruning method has been considered to reduce the number of SNN weight updating during network training (Kumar et al., 2017). Recently, ADMM optimization combined with Spatio-temporal backpropagation (STBP) training was used to compress SNNs (Kumar et al., 2018). An attention-guided compression technique presented in (Song et al., 2019), used two steps to generate compressed deep SNN that could reduce spiking activity. Recent work (Chen et al., 2020) performs pruning on the temporal dimension of SNNs to reduce time steps for inference. Grad Rewiring (Gord et al., 2019) is a joint learning algorithm of SNN connection and weight, which can strengthen the exploration of network structures. Most existing SNNs pruning work has either focused on shallow structures or has only attempted to prune networks at low sparsity. Besides, A very recent work proposed a dynamic pruning framework to prune SNNs based on temporal lottery ticket hyperthesis (Song et al., 2020), which handles the weights pruning of the deep SNN structures. In this paper, we present an end-to-end framework of weights pruning to compress the SNNs with a given resource budget. Unlike most resource-constrained compression methods which treat the resource consumption function as a black box (Kumar et al., 2019), we directly use the resource consumption to formulate a constrained optimization. The key idea is to use learnable parameters to control the lower bound of the sparsities. This introduces a sparsity constraint so that the resource constraint will only depend on the sparsity parameters. The constrained problem can be transformed into a Minimax optimization. Since the sparsity and resource constraints are not differentiable, the Minimax problem cannot be directly solved by gradient-based methods. In this work, we use the difference of convex function (DC) (Song et al., 2019) sparsity reformulation and straight-through estimator (STE) (Beng et al., 2019) to build a gradient-based algorithm to effectively optimize the compression problem. We summarize the contributions as below: * We propose an end-to-end Minimax optimization method to successfully compress the SNNs, as shown in Figure 1. DC sparsity reformulation (Song et al., 2019) and STE (Beng et al., 2019) are key components in this Minimax reformulation. Our compression procedure is end-to-end joint training of compression and fine-tuning on SNNs. * We formulate the resource-constrained SNNs compression problem into a constrained optimization problem where the SNNs weights and resource consumption are linked with learnable sparsity parameters. * The algorithm is gradient-based and easy to train. 
Evaluations of SNNs pruning on the public benchmark tasks show that our method is effective to compress SNNs and achieves state-of-the-art (SOTA) performance. ## 2. Related Work We review the related work from three aspects: the set of work in SNNs; the set of work in model compression and the specific model compression for SNNs. ### Spiking Neural Networks Different from ANNs, SNNs have a temporal dimension inherently, which uses sparse binary spike event sequences to represent information. Therefore, they contribute to more energy savings in specialized neuromorphic hardware (Kumar et al., 2018). The information is transmitted among neurons via synapses. When the membrane potential exceeds a certain threshold caused by accumulating received spikes, the neuron fires a spike to the next layer. In this study, we employed the Leaky Integrate-and-Fire (LIF) neuron (Leaky Integrate and-Fire, 1990), which is one of the most widely used neurons due to its effectiveness. The most common form of the LIF neuron is described as: \[\tau_{m}\frac{\mathrm{d}V_{m}(t)}{\mathrm{d}t}=-\left(V_{m}(t)-V_{\mathrm{ rest}}\right)+X_{t} \tag{1}\] where \(V_{m}(t)\) represents the membrane potential of the neuron at time \(t\), \(X_{t}\) represents the input from the presynaptic neuron. \(\tau_{m}\) is the membrane time as a constant value, that controls the decay and \(V_{\mathrm{rest}}\) is the resting potential after firing. A spike will fire if \(V_{t}\) exceeds the threshold \(V_{\mathrm{th}}\). As claimed in previous works (Kumar et al., 2018) and (Gord et al., 2019), We convert the above continuous differential equation into a discrete version: \[H_{t+1} =V_{t}+\frac{1}{\tau_{m}}\left(-\left(V_{t}-V_{\mathrm{rest}}\right) +X_{t}\right) \tag{3}\] \[S_{t+1} =\Theta\left(m_{t+1}-V_{\mathrm{th}}\right)\] (4) \[V_{t+1} =S_{t+1}V_{\mathrm{rest}}+\left(1-S_{t+1}\right)H_{t+1} \tag{2}\] where \(H_{t}\) and \(V_{t}\) denote the value of membrane potential after neural dynamics and after generating a spike at time step \(t\), respectively. \(S_{t}\) denotes the spike output at time step \(t\). \(\Theta(\cdot)\) is the Heaviside step function which is defined as \(\Theta(x)=1\) for \(x>=0\) and \(\Theta(x)=0\) for \(x<0\). As we can see, the integration and firing behavior of neurons will result in the non-differentiability of the transfer function. So it is difficult to apply standard backpropagation in the training phase (Beng et al., 2019). To obtain a high-performance SNN, researchers have proposed various training methods (Beng et al., 2019; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020). Recently, some works focus on supervised learning based on backpropagation algorithms, where they use a surrogate gradient function to approximate the gradient of non-differentiable spike activity (Kumar et al., 2018; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020). These surrogate gradient methods provide an effective solution for training SNNs with deeper architecture (Xu et al., 2019), such as VGGNet (Wang et al., 2019) and ResNet (Krizhevsky et al., 2015) families. Therefore, we adopt a backpropagation algorithm based on surrogate gradient (Song et al., 2019) as the basic method for our SNNs training. Figure 1. Our whole pipeline. The resource-constrained Minimax Optimization method can compress SNNs into a light-weight model with different sparsity levels. 
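To make the discrete dynamics of Eqs. (2)-(4) concrete, the following is a minimal PyTorch-style sketch of a single LIF update with a surrogate gradient. The sigmoid surrogate, the constants, and all names are illustrative assumptions rather than the exact choices of the cited training methods; in our experiments the LIF neurons are provided by the SpikingJelly framework, and this sketch only makes the forward/backward behaviour explicit.

```python
import torch

class SurrogateHeaviside(torch.autograd.Function):
    """Heaviside step in the forward pass; a sigmoid-derivative surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, x, alpha=4.0):
        ctx.save_for_backward(x)
        ctx.alpha = alpha
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        sg = torch.sigmoid(ctx.alpha * x)
        return grad_output * ctx.alpha * sg * (1.0 - sg), None


def lif_step(x_t, v_t, tau_m=2.0, v_rest=0.0, v_th=1.0):
    """One discrete LIF time step: leaky integration, spike generation, hard reset."""
    h_next = v_t + (1.0 / tau_m) * (-(v_t - v_rest) + x_t)   # membrane potential after integration
    s_next = SurrogateHeaviside.apply(h_next - v_th)         # binary spike output
    v_next = s_next * v_rest + (1.0 - s_next) * h_next       # reset to v_rest where a spike fired
    return s_next, v_next
```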
### Model Compression There are different techniques for model compression, such as pruning (Zhou et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019), quantization (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019), low-rank factorization (Chen et al., 2017; Chen et al., 2019), etc. Pruning utilizes the sparsity of weights tensors to achieve model compression. Weights pruning (Zhou et al., 2017; Chen et al., 2019) is effective to remove the single elements by zeroing out the individual weight. Moreover, structured pruning (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019) prunes the weights according to specific structures to achieve more speedup on general hardware. Filter pruning (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019) is the most broadly used structured pruning, which prunes all the weights associated with certain filters. Most filter pruning works prune the channels with low magnitude weights (Chen et al., 2017; Chen et al., 2019), or estimate the importance of channels for pruning (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019). Yang _et al._(Yang et al., 2019) pre-compute a lookup table to estimate the latency of a convolution layer with different input/output channels, and use a greedy search strategy to gradually reduce the number of filters until the given resource budget is achieved. He _et al._(He et al., 2020) adopt reinforcement learning to search the number of pruned filters for each layer. The classification accuracy is used as the reward, and the number of pruned filters is taken as the action. Recently, more approaches (He et al., 2020; Chen et al., 2019; Chen et al., 2019) consider the model compression as a constrained problem (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). Furthermore, resource consumption is used to restrict the action space. These methods are successfully applied to fully supervised tasks such as image classification and object detection. In this paper, we proposed an end-to-end optimization method to solve a resource-constrained compression problem on SNNs, we demonstrate the problem formulation from the unstructured weights pruning perspective. ### Model Compression for SNN To reduce the energy consumption of SNNs, some approaches focus on the compression of SNN models recently, such as connection pruning (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019) and model quantization (Chen et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). Deng _et al._(Deng et al., 2019) defined the connection pruning and weight quantization as a constrained optimization problem and used Spatio-temporal backpropagation (STBP) and alternating direction method of multipliers (ADMM) to solve it. Chen _et al._(Chen et al., 2017) formulated the gradient rewiring (Grad R) algorithm which redefines the gradient of SNNs to a new synaptic parameter and joint learning SNNs connection and weight. In addition, Kim _et al._(Kim et al., 2019) performed the connection pruning toward deeper SNNs (\(\geq 16\) layers) and combined the Iterative Magnitude Pruning (Chen et al., 2019) and Early-Bird (Evely and Ferrara, 2019) tickets to obtain smaller SNNs. Chen _et al._(Chen et al., 2019) proposed a dynamic pruning algorithm based on nonlinear reparameterization mapping from spine size to SNN weights. 
To compare with these unstructured weights pruning works for SNNs, we adopt the Minimax optimization method to jointly optimize the global sparsity of SNNs and weights parameters. We handle all layers' weights parameters globally with one sparsity parameter to solve the unstructured pruning problem. ## 3. Formulation ### Resource-Constrained Optimization The ideal scenario of SNN compression is that given a resource budget \(R_{\text{budget}}\) (based on a certain metric, e.g., Parameters, Flops, latency, or energy), the compression method can return a compressed model which satisfies the given budget and maintains the accuracy as well as possible. The whole process should be automatic, i.e., there is no need to manually set the sparsity of each layer. In this paper, we directly formulate such a compression scheme for a constrained optimization problem: (5a) \[\min_{\mathcal{W},\mathbf{s}} \mathcal{L}(\mathcal{W})\] (5c) \[\text{s. t.} R(s)\leq R_{\text{budget}},\] (5d) \[\sum_{i}\mathbf{I}(\mathcal{W}_{i}=0)\geq s\] (5e) where \(R(s)\) evaluates a general resource consumption (e.g., Flops or latency) based on the number of (nonzero) weights for each layer. It does not need to be differentiable. For example, when representing latency, it can be computed by a latency table as in Yang et al. (Yang et al., 2019). \(\mathcal{L}\) is the standard training loss. \(\mathbf{I}(\cdot)\) is the indicator function that returns \(1\) if the argument is satisfied and \(0\) otherwise. \(\mathbf{s}\) is a learnable scalar variable to control the lower bound of the sparsity of weight parameters vector \(\mathcal{W}\) of the whole network. This formulation holds because the resource function \(R\) monotonically decreases with respect to the increasing sparsities, i.e., the more weights are pruned, the smaller the resource consumption we have. Note that we mainly focus on the unstructured pruning (or called weights sparsification) for SNN in the main text, thus \(\mathcal{W}_{i}\) in the above equation stands for **each element of weight parameters vector \(\mathcal{W}\)** of the whole network. ## 4. Optimization In the previous section, we have already formulated the resource-constrained pruning as a constrained optimization (5). In this section, we first do some reformulation to make it more convenient to solve. Then we propose a gradient-based algorithm to solve the resource-constrained pruning in an end-to-end way. ### Minimax Reformulation The sparsity constraint (\(\mathbf{c}_{\text{C}}\)) is non-convex and the non-continuous indicator function makes it more difficult. Common approaches to deal with this constraint include \(\ell_{1}\)-norm relaxation (Zhou et al., 2017) and \(\ell_{0}\)-norm projection (Chen et al., 2019). The ideas of \(\ell_{1}\)-norm relaxation have been applied in DNN compression (Wang et al., 2019; Chen et al., 2019; Chen et al., 2019). However, the \(\ell_{1}\)-norm can only approximate the sparsity constraint, so it is not equivalent to the sparsity constraint and there is no guarantee to bound the real sparsity by restricting the \(\ell_{1}\)-norm. Figure 2. The dynamics of LIF neurons, as similarly described in (Chen et al., 2019). When the membrane potential exceeds a threshold value, the neuron will fire a spike to the next layer and resets. Let \(\|\mathbf{u}\|_{\mathrm{s,2}}\) be the bottom-(s, 2) "norm", which denotes the \(\ell_{2}\)-norm of the sub vector composed of bottom-\(s\) elements in magnitude. 
Then we have an _equivalent_ reformulation for the sparsity constraint (5c): \[\|\mathcal{W}\|_{\mathrm{s,2}}=0\Leftrightarrow\sum_{i}\mathbf{I}(\mathcal{W}_ {i}=0)\geq s, \tag{6}\] Equation (6) is proved by Tono _et al._Tono et al. (2017), where the left-hand side is called DC (difference of convex functions) representation of the \(\ell_{0}\)-constraint. By this transformation, the sparsity constraint becomes an equality constraint of a continuous function. Compared to the original \(\ell_{0}\)-norm constraint, it can be written as a "soft" constraint and avoid being stuck in the bad local optimum of the constraint set. By introducing dual variables \(y\) and \(z\), we derive the minimax reformulation of problem (5): \[\min_{\mathcal{W},s}\max_{y,z\geq 0}\mathcal{L}(\mathcal{W})+\underbrace{y\| \mathcal{W}\|_{\mathrm{f}=1}^{2}}_{\mathrm{sparsity\ loss}\cdot\mathcal{S}(y,s \mathcal{W})}+\underbrace{z(R(s)-R_{\mathrm{budget}})}_{\mathrm{resource\ loss}}. \tag{7}\] Where we introduce the sparsity loss \(\mathcal{S}(y,s,\mathcal{W}):=y\|\mathcal{W}\|_{\mathrm{f}=1}^{2}\), and resource loss \(z(R(s)-R_{\mathrm{budget}})\) to substitute the original constraints. Figure 3 shows an illustration of the reformulated minimax problem. It is easy to verify that (7)\(\rightarrow\)\(\ \ ### Implementation details In our works, we validate the compression method on six SNN models, including the shallow SNNs (e.g. 2 FC, 6 Conv and 2 FC) and the deep SNNs (e.g. VGG16 (Wang et al., 2017), ResNet19 (He et al., 2017), SEW ResNet18 (He et al., 2017), VGGSNN (He et al., 2017)). We compare the performance of our method with previous SNN compression methods on static MNIST, CIFAR10, CIFAR100, ImageNet11K datasets and neuromorphic CIFAR10-DVS dataset which is converted from the static image dataset by using a DVS camera. Experiments are conducted on NVIDIA V100 GPUs and we use Spikingfelly (Srivastava et al., 2015) framework to implement SNNs. Similar to the previous SNN work (Chen et al., 2017), we use a shallow SNN with 2 FC layers on the MINST dataset, and a model with 6 convolution layers and 2 FC layers for the CIFAR10 dataset. These two shallow SNNs are trained with Adam optimizer with a learning rate 1e-4. The timestep is set to 8. Other hyperparameters of baseline are the same as [6](e.g. batch size, learning rate). What's more, we use deep SNNs, VGG16, ResNet19, and VGGSNN. The training method follows the previous SNN work (He et al., 2017). We train deep SNNs by SGD optimizer with momentum 0.9 and weight decay 5e-4. The learning rate is set to 0.05 for baseline training and cosine decay to 0. The timestep is set to 5 and the batch size is set to 32. The default number of epochs is set to 300. As for training SEW ResNet18 on ImageNet, we follow all the training setting of SEW ResNet (He et al., 2017). The base learning is 0.1 with cosine annealing scheduler. The number of epochs is 320, the batch size is set to 32, and the timestep is set to 4. In all datasets, the learning rate of \(y\) is set to 0.1, while the default learning rate of \(z\) is set to \(10^{5}\). We count the number of zero and nonzero values of the whole weights in SNN and compute the percentage of zero values to be the sparsity. Training methodologyAccording to previous SNNs work settings respectively, we first train SNNs models to get pre-trained baseline models. Then our compression training stage starts with pre-trained models. 
Before pruning the SNN models, we set a budget list which values are some compression ratios from 0.5 to 0.005 (e.g. [0.25, 0.1, 0.05, 0.01, 0.005]). Furthermore, the value of the budget list can be connectivity, parameter size, latency times, FLOPs, and so on. For connection pruning, the values in the budget list are connectivity ratios. For structure pruning, the values in the budget list are FLOPs ratios in our work. During our compression training, if the model connectivity meets the current target compression ratio, we pause the pruning and then fine-tune the snapshot at this ratio until achieving the maximum fine-tuning epochs. After the \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Pruning Method** & **Dataset** & **Arch.** & **Top-1 Acc. (\%)** & **Acc. Loss (\%)** & **Sparsity (\%)** \\ \hline \multirow{3}{*}{ADMM-based (Srivastava et al., 2015)} & \multirow{3}{*}{MNIST} & \multirow{3}{*}{LeNet-5 like} & \multirow{3}{*}{99.07} & +0.03 & 50.00 \\ & & & & -0.43 & 60.00 \\ & & & & -2.23 & 75.00 \\ \hline \multirow{3}{*}{Deep R (Chen et al., 2017)} & \multirow{3}{*}{MNIST} & \multirow{3}{*}{2 FC} & \multirow{3}{*}{98.92} & -0.36 & 62.86 \\ & & & & -0.56 & 86.70 \\ & & & & -2.47 & 95.82 \\ & & & & -9.67 & 98.90 \\ \hline \multirow{3}{*}{Grad R (Chen et al., 2017)} & \multirow{3}{*}{MNIST} & \multirow{3}{*}{2 FC} & \multirow{3}{*}{98.92} & -0.33 & 74.29 \\ & & & & -0.43 & 82.06 \\ & & & & -2.02 & 94.37 \\ & & & & -3.55 & 96.94 \\ & & & & -8.08 & 98.62 \\ \hline \multirow{3}{*}{**Ours**} & \multirow{3}{*}{MNIST} & \multirow{3}{*}{2 FC} & \multirow{3}{*}{**98.91**} & **-0.06** & **75.00** \\ & & & & **-0.16** & **85.00** \\ & & & & **-1.23** & **95.00** \\ & & & & **-2.70** & **97.00** \\ & & & & & **-7.34** & **98.70** \\ \hline \hline \multirow{3}{*}{ADMM-based (Srivastava et al., 2015)} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{7 Conv, 2 FC} & \multirow{3}{*}{89.53} & -0.38 & 50.00 \\ & & & & -2.16 & 75.00 \\ & & & & -3.85 & 90.00 \\ \hline \multirow{3}{*}{Deep R (Chen et al., 2017)} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{6 Conv, 2 FC} & \multirow{3}{*}{92.84} & -1.98 & 94.76 \\ & & & & -2.56 & 98.05 \\ & & & & -3.53 & 98.96 \\ \hline \multirow{3}{*}{Grad R (Chen et al., 2017)} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{6 Conv, 2 FC} & \multirow{3}{*}{92.84} & -0.30 & 71.59 \\ & & & & -0.34 & 87.96 \\ & & & & -0.81 & 94.92 \\ & & & & -1.47 & 97.65 \\ & & & & -3.52 & 99.27 \\ \hline \multirow{3}{*}{STDS (Chen et al., 2017)} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{6 Conv, 2 FC} & \multirow{3}{*}{92.84} & -0.35 & 97.77 \\ & & & & -2.63 & 99.25 \\ \hline \multirow{3}{*}{**Ours**} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{6 Conv, 2 FC} & \multirow{3}{*}{**92.88**} & **+0.84** & **75.00** \\ & & & & **+0.52** & **88.04** \\ & & & & **+0.41** & **95.07** \\ & & & & **-0.13** & **97.71** \\ & & & & **-1.38** & **98.84** \\ & & & & **-2.56** & **99.31** \\ \hline \hline \end{tabular} \end{table} Table 1. Performance comparison between our method and previous works on MNIST and CIFAR10 datasets. fine-tuning process at the current compression rate finishes, our method removes the current ratio from the budget list and then continues the compression training automatically to achieve the next compression ratio. Compression and fine-tuning are jointly performed in the one-stage training process of SNNs. 
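The budget-list schedule described above can be summarized by the sketch below; `compress_one_epoch` and `finetune_one_epoch` stand in for the corresponding parts of our training loop, `finetune_epochs` returns the number of fine-tuning epochs allotted to a given ratio (either a fixed value or the schedule given next), and the epoch bookkeeping is simplified for readability.

```python
import torch

def connectivity(model):
    """Fraction of nonzero weights over all trainable weights (i.e., 1 - sparsity)."""
    w = torch.cat([p.detach().reshape(-1) for p in model.parameters() if p.requires_grad])
    return (w != 0).float().mean().item()

def compress_with_budget_list(model, budget_list, finetune_epochs, max_epochs,
                              compress_one_epoch, finetune_one_epoch):
    """One-stage joint compression and fine-tuning driven by a list of connectivity targets."""
    budgets = sorted(budget_list, reverse=True)          # e.g. [0.25, 0.1, 0.05, 0.01, 0.005]
    epoch = 0
    while budgets and epoch < max_epochs:
        target = budgets[0]
        # Prune-and-train until the global connectivity meets the current target ratio.
        while connectivity(model) > target and epoch < max_epochs:
            compress_one_epoch(model)                    # minimax updates of weights, s, y, z
            epoch += 1
        # Pause pruning and fine-tune the snapshot at this compression ratio.
        for _ in range(finetune_epochs(target)):
            finetune_one_epoch(model)
            epoch += 1
        budgets.pop(0)                                   # continue automatically to the next ratio
    return model
```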
The number of epochs in fine-tuning phase for \(i_{th}\) compression ratio in the budget list is set to the same value or scheduled as \(\frac{1}{S^{t}}\frac{T_{\text{epoch}}-C_{\text{epoch}}}{\sum\limits_{j=1}^{N} \frac{1}{S^{t}}}\). where, \(S^{t}\) is the \(i_{th}\) compression ratio. \(T_{\text{epoch}}\) is the total number of epochs, \(C_{\text{epoch}}\) is the number of already used epochs. ### Quantitative Experiments _Connection pruning and fine-tuning jointly._ We use the connection of the SNNs model as the budget compression ratios for our Minimax optimization method. During compression training, the connection pruning and fine-tuning are trained jointly. Our joint compression method not only reduces the training time and simplifies the tedious fine-tuning process for different compression ratio, but also help the model under smaller ratios get better performance. Therefore, for one SNN model, we can achieve state-of-the-art (SOTA) performance of different ratios in one compression training process, which is different from previous work, since they only can get one compression ratio per training process [38]. As shown in Table 1 and 2, we summed up the results of our compression \begin{table} \begin{tabular}{c c c c c} \hline \hline **Pruning Method** & **Dataset** & **Arch.** & **Top-1 Acc. (\%)** & **Acc. (\%)** & **Sparsity (\%)** \\ \hline \multirow{3}{*}{IMP [38]} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{VGG16} & \multirow{3}{*}{-} & 92.66 & 68.30 \\ & & & & 92.54 & 89.91 \\ & & & & 92.38 & 95.69 \\ & & & & 91.81 & 98.13 \\ \hline \multirow{3}{*}{**Ours**} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{VGG16} & \multirow{3}{*}{**93.25**} & **93.27** & **69.02** \\ & & & & **93.26** & **90.09** \\ & & & & **92.76** & **95.70** \\ & & & & **92.00** & **98.13** \\ \hline \hline \multirow{3}{*}{Grad R [6]} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{ResNet19} & \multirow{3}{*}{93.22} & 92.68 & 76.90 \\ & & & & 91.91 & 94.25 \\ & & & & 91.12 & 97.56 \\ \hline \multirow{3}{*}{IMP [38]} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{ResNet19} & \multirow{3}{*}{93.22} & 93.50 & 76.20 \\ & & & & 93.46 & 94.29 \\ & & & & 93.18 & 97.54 \\ \hline \multirow{3}{*}{**Ours**} & \multirow{3}{*}{CIFAR10} & \multirow{3}{*}{ResNet19} & \multirow{3}{*}{**94.85**} & **94.84** & **80.05** \\ & & & & **94.36** & **95.07** \\ & & & & **93.81** & **97.07** \\ \hline \hline \multirow{3}{*}{Grad R [6]} & \multirow{3}{*}{CIFAR100} & \multirow{3}{*}{ResNet19} & \multirow{3}{*}{71.34} & 69.36 & 77.03 \\ & & & & 67.47 & 94.92 \\ & & & & 67.31 & 97.65 \\ \hline \multirow{3}{*}{IMP [38]} & \multirow{3}{*}{CIFAR100} & \multirow{3}{*}{ResNet19} & \multirow{3}{*}{71.34} & 71.45 & 76.20 \\ & & & & 69.05 & 97.54 \\ \hline \multirow{3}{*}{**Ours**} & \multirow{3}{*}{CIFAR100} & \multirow{3}{*}{ResNet19} & \multirow{3}{*}{**74.71**} & **73.05** & **79.99** \\ & & & & **72.67** & **95.19** \\ & & & & **70.80** & **97.31** \\ \hline \hline \multirow{3}{*}{Grad R [6]} & \multirow{3}{*}{ImageNet} & \multirow{3}{*}{SEW ResNet18} & \multirow{3}{*}{63.22} & 60.65 & 50.94 \\ & & & & 24.62 & 53.65 \\ \hline \multirow{3}{*}{ADMM [10]} & \multirow{3}{*}{ImageNet} & \multirow{3}{*}{SEW ResNet18} & \multirow{3}{*}{63.22} & 59.48 & 82.58 \\ & & & & 55.85 & 88.84 \\ \hline \multirow{3}{*}{STDS [8]} & \multirow{3}{*}{ImageNet} & \multirow{3}{*}{SEW ResNet18} & \multirow{3}{*}{63.22} & 61.30 & 82.58 \\ & & & & 59.93 & 88.84 \\ & & & & 58.06 & 93.24 \\ & & & & 56.28 & 95.30 \\ \hline \multirow{3}{*}{**Ours**} & 
\multirow{3}{*}{ImageNet} & \multirow{3}{*}{SEW ResNet18} & \multirow{3}{*}{**63.25**} & **61.42** & **82.50** \\ & & & & **60.51** & **88.84** \\ & & & & **58.12** & **93.20** \\ & & & & **56.46** & **94.39** \\ \hline \hline \end{tabular} \end{table} Table 2. Performance comparison between our method and previous works on larger models. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Arch.** & **Top-1 Acc. (\%)** & **Acc. (\%)** & **Sparsity (\%)** \\ \hline \multirow{3}{*}{IMP [38]} & \multirow{3}{*}{VGGNN} & \multirow{3}{*}{82.80} & 81.10 & 76.20 \\ & & & 81.50 & 86.57 \\ & & & 80.10 & 89.91 \\ & & & 78.60 & 94.28 \\ \hline \multirow{3}{*}{**Ours**} & \multirow{3}{*}{VGGNN} & \multirow{3}{*}{82.80} & **82.40** & **85.18** \\ & & & **81.90** & **90.14** \\ & & & **81.20** & **93.14** \\ & & & **80.10** & **95.16** \\ \hline \hline \end{tabular} \end{table} Table 3. Performance of our method and previous work IMP on CIFAR10-DVS dataset. experiments which were obtained by jointly pruning connection and fine-tuning the model. Comparisons to previous work.We compare our method with previous works in Table 1, 2 and 3. The results of two shallow SNN models on MNIST and CIFAR10 are shown in Table 1. We compare our connection pruning and fine-tuning jointly training method with previous research, including ADMM-based (Deng et al., 2017), Deep R (Chen et al., 2018), Grad R (Chen et al., 2018), and STDS (Chen et al., 2018). It is observed that our approach outperforms previous pruning methods in terms of connection sparsity and accuracy on the two benchmark datasets. Furthermore, we have smaller \(\Delta\)Acc degradations under all comparable compression ratios. We present the comparisons of deep SNN models, VGG16 (Vgg et al., 2016), ResNet19 (He et al., 2016) and SEW ResNet18 (He et al., 2016) in Table 2. We reproduce the baseline based on the previous work (Wang et al., 2018), and the performance is much higher than the reported result in (Wang et al., 2018): 1.63% higher on CIFAR10 and 3.37% higher on CIFAR100 for ResNet19 (He et al., 2016) SNN model, therefore, we compare absolute values of accuracy on deep SNNs. As shown in Table 2, our method is comparable to other approaches. For the VGG16 model, our methods have the highest accuracy at all connection sparsity ratios compared with Grad R (Chen et al., 2018) and IMP (Wang et al., 2018). For ResNet19 (He et al., 2016), when the connection sparsity ratio is less than 97%, our method significantly outperforms other methods. When the connection sparsity is higher than 97%, we still achieve better accuracy compared with other works, but the connection sparsity is slightly smaller than Grad R (Chen et al., 2018) and IMP (Wang et al., 2018). It is worth mentioning that the accuracy of our method can even be further improved compared to the baseline on all datasets when the sparsity is nearly 80%. Even on the large-scale dataset like ImageNet, our pruning method has also achieved competitive performance compared with the state-of-the-art. We also validated on neuromorphic datasets that have been less involved in previous work. To the best of our knowledge, our work is the first work to compress SNNs on the temporal CIFAR10-DVS dataset. Under the same structure and settings, we implemented the IMP method (Wang et al., 2018) and conducted our experiments on the CIFAR10-DVS dataset. As shown in Table 3, our method significantly improves the accuracy of VGGSNN (Chen et al., 2018) model with different sparsity ratios. 
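As a small numerical illustration of the inverse-ratio allocation of fine-tuning epochs given before Table 2, the following sketch splits the remaining epochs among the ratios of a budget list (the numbers are illustrative only):

```
def allocate_finetune_epochs(ratios, total_epochs, used_epochs):
    # epochs_i = (1 / S_i) * (T_epoch - C_epoch) / sum_j (1 / S_j)
    remaining = total_epochs - used_epochs
    weights = [1.0 / r for r in ratios]
    return [round(w / sum(weights) * remaining) for w in weights]

# Smaller target ratios receive proportionally more fine-tuning epochs:
print(allocate_finetune_epochs([0.25, 0.1, 0.05, 0.01, 0.005], 300, 50))
# -> [3, 7, 15, 75, 150]
```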
In summary, the experiments show that our compression optimization method can in principle handle any type of SNN model. ## 6. Ablation Studies Comparison with sequential method. We compare sequential and joint training of pruning and fine-tuning with our end-to-end Minimax optimization method on the CIFAR10 dataset. For the sequential compression method, we first prune the SNN models and save the pruned model snapshots during pruning training. We then fine-tune each of these snapshots for another 256 epochs for the 6 Conv, 2 FC SNN model, which means an extra \(n\times 256\) fine-tuning epochs are needed, where \(n\) is the length of the budget list. As shown in Table 4, the compression ratios (column b) for which pruning and fine-tuning are trained jointly achieve better accuracy when the compression ratio is smaller than 5%. The sequentially trained ratio (column a) with 11.95% connectivity has a slight advantage because it is trained for more epochs than the jointly trained ratio with a similar connectivity. However, as the connectivity becomes smaller, the joint method achieves better results and even obtains an 8.72% accuracy advantage at a 0.64% connectivity ratio. Moreover, the joint method in Table 4 was trained for 700 epochs in total, which is far fewer than the number of training epochs of the sequential method. Comparison of fine-tuning options. We compare the final accuracy obtained when fine-tuning on the CIFAR10 dataset with a cosine annealing scheduler and without any learning rate scheduler. When using a cosine annealing scheduler, in each stage of achieving a new compression ratio in the resource budget list, the number of fine-tuning epochs is reset to 300 and the initial learning rate is changed to 0.001. As shown in Table 4, the accuracy increases significantly when we use the cosine annealing scheduler in the fine-tuning phase. Training trend plots. We plot the changing trend of the sparsity value \(s\) in Figure 4, which demonstrates that our Minimax optimization algorithm converges well over training. Per-layer Sparsity vs. Global Sparsity. We compare the difference between pruning using per-layer sparsity and our pruning using global sparsity. Our optimization is based on a global sparsity \(s\), which sorts the weight values of the tensors of all layers together during pruning and coordinates the total sparsity of the whole model during optimization. In contrast, the per-layer sparsity method sorts the weights of each layer separately, with one sparsity variable to control per layer. For both the per-layer sparsity method and the global sparsity method, our compression targets are the layers whose connectivity is more than 1e4. Figure 5 shows the difference in connectivity sparsity between the per-layer sparsity method and our global sparsity method on the 6 Conv, 2 FC SNN model; the 6 source convolution layers have the same connectivity in the baseline model. In Figure 5, we can see that the connectivity after pruning with the global sparsity method shows more obvious diversity among these 6 convolution layers. In contrast, the per-layer sparsity method results in almost the same pruned connectivity for these 6 convolution layers. In Table 6, we can see that our global sparsity method has better pruning performance at each connectivity level.
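To make the distinction between the two schemes concrete, the following minimal sketch contrasts global and per-layer magnitude thresholding (assuming a list of PyTorch weight tensors; this is an illustration, not the paper's implementation):

```
import torch

def global_masks(weights, sparsity):
    """One global threshold over all layers: resulting per-layer sparsities can differ."""
    flat = torch.cat([w.flatten().abs() for w in weights])
    k = int(sparsity * flat.numel())
    threshold = torch.kthvalue(flat, max(k, 1)).values
    return [(w.abs() > threshold).float() for w in weights]

def per_layer_masks(weights, sparsity):
    """One threshold per layer: every layer is pruned to the same sparsity."""
    masks = []
    for w in weights:
        k = int(sparsity * w.numel())
        threshold = torch.kthvalue(w.flatten().abs(), max(k, 1)).values
        masks.append((w.abs() > threshold).float())
    return masks
```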
2301.07670
Active learning for medical image segmentation with stochastic batches
The performance of learning-based algorithms improves with the amount of labelled data used for training. Yet, manually annotating data is particularly difficult for medical image segmentation tasks because of the limited expert availability and intensive manual effort required. To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set. On the one hand, most active learning works have focused on the classification or limited segmentation of natural images, despite active learning being highly desirable in the difficult task of medical image segmentation. On the other hand, uncertainty-based AL approaches notoriously offer sub-optimal batch-query strategies, while diversity-based methods tend to be computationally expensive. Over and above methodological hurdles, random sampling has proven an extremely difficult baseline to outperform when varying learning and sampling conditions. This work aims to take advantage of the diversity and speed offered by random sampling to improve the selection of uncertainty-based AL methods for segmenting medical images. More specifically, we propose to compute uncertainty at the level of batches instead of samples through an original use of stochastic batches (SB) during sampling in AL. Stochastic batch querying is a simple and effective add-on that can be used on top of any uncertainty-based metric. Extensive experiments on two medical image segmentation datasets show that our strategy consistently improves conventional uncertainty-based sampling methods. Our method can hence act as a strong baseline for medical image segmentation. The code is available on: https://github.com/Minimel/StochasticBatchAL.git.
Mélanie Gaillochet, Christian Desrosiers, Hervé Lombaert
2023-01-18T17:25:55Z
http://arxiv.org/abs/2301.07670v2
# Active learning for medical image segmentation with stochastic batches ###### Abstract The performance of learning-based algorithms improves with the amount of labelled data used for training. Yet, manually annotating data can be tedious and expensive, especially in medical image segmentation. To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set. On one hand, most active learning works have focused on the classification or limited segmentation of natural images, despite active learning being highly desirable in the difficult task of medical image segmentation. On the other hand, uncertainty-based AL approaches notoriously offer sub-optimal batch-query strategies, while diversity-based methods tend to be computationally expensive. Over and above methodological hurdles, random sampling has proven an extremely difficult baseline to outperform when varying learning and sampling conditions. This work aims to take advantage of the diversity and speed offered by random sampling to improve the selection of uncertainty-based AL methods for segmenting medical images. More specifically, we propose to compute uncertainty at the level of batches instead of samples through an original use of stochastic batches during sampling in AL. Exhaustive experiments on medical image segmentation, with an illustration on MRI prostate imaging, show that the benefits of stochastic batches during sample selection are robust to a variety of changes in the training and sampling procedures. ## 1 Introduction Data annotation is fundamental to medical imaging. Notably, the performance of segmentation algorithms depends on the amount of annotated training data. The manual annotation of pixel-level ground truth is therefore highly sought but remains difficult to obtain due to two challenging problems. First, the pixel-wise annotation of entire biological structures is a laborious and expensive task that requires highly trained clinicians. Second, image acquisition grows faster than the experts' ability to manually process the data, leaving large datasets mostly unlabelled. Clinicians can realistically annotate only small sets of images with a limited capacity to scale up. This constraint creates a need for strategies that reduce the crucial but arduous annotation efforts in medical imaging. To maximize the performance of a model with reduced annotated data during training, two types of approaches can unleash the potential of unlabelled data: active learning and semi-supervised learning. Active learning (AL) aims to identify the best samples to annotate and use during training. Meanwhile, semi-supervised learning seeks to improve the representation learned from data by exploiting unlabelled samples in addition to the few labelled ones. However, this approach still leaves the question of choosing which samples to use for the labelled set, underlining the importance of active learning. Images in the training set do not contribute equally to the performance of learning-based algorithms (Settles, 2009). Given a large unlabelled dataset, active learning (see Fig. 1) identifies the most valuable samples to be annotated and added to a training set (Budd et al., 2021; Ren et al., 2021). Actively selecting which data to label conceivably maximizes the performance of machine learning models with a minimum amount of labelled data. 
AL strategies also have the potential of accelerating training convergence and improving robustness by targeting specific types of data points (Nath et al., 2021). Active learning methods can be divided into three broad categories: uncertainty-based sampling strategies, representative-based sampling strategies and hybrid approaches (Settles, 2009; Budd et al., 2021).

Figure 1: An active learning cycle usually comprises four steps: a) Training the model with the current labelled set, b) Sampling data from the unlabelled pool using the chosen AL strategy, c) Annotating the candidate samples, and d) Updating the labelled pool with the newly annotated samples.

Uncertainty-based methods assume that the most valuable samples to annotate are the ones for which the current model is least confident. These methods, which differ in their ways of calculating uncertainty, are however susceptible to targeting outlier samples or redundant information, particularly when querying batches of samples. To avoid bias towards narrow regions of the data distribution, representative-based and hybrid approaches try to diversify the set of candidate samples. Ensuring such diversity generally relies on learning a latent data representation, which requires estimating pairwise distances between all samples or computing their marginal distribution. These strategies consequently hardly scale satisfactorily to high dimensions. As a result, current active learning approaches overwhelmingly focus on lower-dimensional tasks such as classification. Few works explore segmentation tasks, notably on natural images with several thousands of annotated images (Sinha et al., 2019; Huang et al., 2021; Kim et al., 2021; Xie et al., 2022). Due to its high-dimensional nature, medical image segmentation remains largely unexplored in AL, despite the high cost of manual annotation in medical imaging. A limited yet increasing number of works acknowledge that random sampling is, in practice, a notoriously difficult baseline to outperform in active learning (Kirsch et al., 2019; Mittal et al., 2019; Nath et al., 2021; Munjal et al., 2022). Indeed, the gains of AL strategies over random sampling are often inconsistent across different experimental setups. For example, varying the sampling budget can cancel the improvements originally observed for such strategies (Bengar et al., 2021; Munjal et al., 2022). Similarly, existing methods for AL tend to be sensitive to the model architecture, hyperparameters and regularization used during training (Mittal et al., 2019; Munjal et al., 2022). These hurdles hinder AL advances in medical image segmentation. This paper intends to address the limitations of current AL methods, notably their drawback of selecting batches solely based on per-sample uncertainty and the computational cost of ensuring diversity. Our work proposes to leverage the power of randomness during uncertainty-based batch sampling to improve the overall segmentation performance of AL models. ### Contributions We introduce the use of stochastic batch (SB) querying on top of existing uncertainty-based AL strategies (see Fig. 2). Our novel approach, based on stochastic batches during sampling, has several benefits: 1. a sample selection method that is well-suited to the difficult task of segmenting medical images; 2. a flexible framework which can be used on top of any uncertainty-based AL strategy; and 3. a scalable and computationally-inexpensive method to ensure diversity of sample selection. 
We also address the inconsistent gains of current AL methods under different experimental settings. We show in extensive experiments that our approach performs well in the challenging problem of medical image segmentation and remains robust to the choice of training and sampling procedures. ## 2 Literature review Active learning methods maximize the future model performance by augmenting the current labelled training set with the most informative unlabelled samples. AL approaches mainly fall into uncertainty-based, representative-based or hybrid strategies, each described next. ### Uncertainty-based AL methods Uncertainty is one of the most prevalent criteria for sampling in active learning. Uncertainty-based methods query samples for which the current model is least confident. AL strategies for deep learning-based models have initially applied traditional AL methods that identify difficult examples using simple heuristics. However, in practice, they still hardly scale to high-dimensional data (Beluch et al., 2018) or are not consistently effective for deep learning models that rely on batch selection (Sener and Savarese, 2018; Ren et al., 2021). Hence, subsequent work has combined traditional uncertainty measures, such as the entropy of the output probabilities, with measures of geometric uncertainty (Konyushkova et al., 2019) or with the pseudo-labelling of samples with confident predictions (Wang et al., 2017). Similarly, Gal et al. (2017) and Kirsch et al. (2019) adapted existing heuristics to a Bayesian framework through Monte Carlo dropout. More recently, Yoo and Kweon (2019) developed a new uncertainty measure based on the predicted loss from the intermediate representations of the model. Although widely popular, purely uncertainty-based strategies relying on batch selection are susceptible to query samples with redundant information. However, manually annotating similar samples is a waste of annotation resources. Moreover, incorporating a set of similar samples to the labelled training set could bias the model towards an area outside the true data distribution. These samples could hence hamper rather than improve model generalization. ### Representative-based AL methods As opposed to uncertainty-based approaches, representative-based AL methods aim at diversifying the batch of candidate samples to improve the future performance of the model. One of the main representative-based approaches, Coreset (Sener and Savarese, 2018), identifies the most diverse and representative samples by minimizing the distance between labelled and unlabelled data. Coreset aims for the model to perform as well with the candidate set as it would with the entire dataset. While specifically designed to be applied to complex models such as CNN's, Coreset selection does not scale well to high-dimensional data since it requires computing the Euclidean distance between all pairs of data samples. A later work, VAAL (Sinha et al., 2019), learns a smooth latent-state representation of the input data via a variational auto-encoder (VAE). VAAL then selects samples different from the ones already labelled based on the learnt representation. However, since the VAE is task-agnostic, VAAL can easily query outlier data. In addition, this method has no mechanism to avoid choosing overlapping samples and requires carefully tuning its added modules. 
### Hybrid AL strategies Against the limitations of uncertainty-based methods, hybrid strategies combine uncertainty and diversity measures to identify the most informative samples. Most work combines existing approaches, one focusing on model uncertainty and the other on sample diversity. For instance, Suggestive Annotation (Yang et al., 2017) combines ensembling with core-sets (Sener and Savarese, 2018). Similarly, Task-aware VAAL (Kim et al., 2021) incorporates the uncertainty measure proposed by Learning Loss (Yoo and Kweon, 2019) into VAAL's (Sinha et al., 2019) latent representation. BADGE (Ash et al., 2020) uses gradient embeddings to account for uncertainty and employs K-means++ initialization to ensure the diversity of selected samples. Nath et al. (2021) combines prevailing mutual information and entropy measures to ensure diversity and optimizes training by duplicating difficult samples. While these rely on a two-step approach, Sourati et al. (2019) directly solves an optimization problem for batch-mode sampling, yielding a distribution of candidate samples rather than specific examples. However, just like representative-based AL strategies, most of these works are difficult to scale due to their computational complexity (Ash et al., 2020; Nath et al., 2021; Sourati et al., 2019; Yang et al., 2017). Alternatively, they require external modules, which increase the range of parameters to tune and learn (Kim et al., 2021). ### AL for medical image segmentation High-dimensional data remains a particularly challenging problem in AL (Ren et al., 2021). Current work, therefore, primarily focuses on low-dimensional annotation tasks such as image classification (Gal et al., 2017; Wang et al., 2017; Sener and Savarese, 2018; Beluch et al., 2018; Sourati et al., 2019; Gao et al., 2020; Ash et al., 2020; Zhang et al., 2022). Tentative work that addresses pixel-wise annotations tends to narrow its application to natural image segmentation (Sinha et al., 2019; Huang et al., 2021; Kim et al., 2021; Xie et al., 2022). To the best of our knowledge, the few strategies that apply AL to medical image segmentation avoid using deep learning-based models (Top et al., 2011; Konyushkova et al., 2015, 2019), or remain computationally expensive and challenging to scale to large datasets (Yang et al., 2017; Nath et al., 2021), often requiring sub-sampling of the unlabelled pool (Sourati et al., 2019). Despite recent advances, there is, however, still a surprising gap to be filled between active learning and medical image segmentation. ## 3 Methods Given a labelled set \(\mathcal{D}_{L}=\{(\mathbf{x}^{(j)},\mathbf{y}^{(j)})\}_{j=1}^{N}\), with data \(\mathbf{x}\in\mathbb{R}^{H\times W}\) and segmentation mask \(\mathbf{y}\in\mathbb{R}^{C\times H\times W}\) (\(H\) and \(W\) are respectively the image height and width, and \(C\) is the number of classes), we train a fully supervised segmentation model \(f_{\theta}(\cdot)\) parameterized by \(\theta\) with labelled samples from \(\mathcal{D}_{L}\).

Figure 2: Stochastic batch AL for uncertainty-based sampling. Our method combines the diversity brought by random sampling with the informativeness of uncertainty-based sampling. Adding our stochastic batch paradigm enables the data uncertainty to be estimated in a broader batch-level selection rather than a sample-level selection. 
After training the model \(f_{\theta}\) with \(\mathcal{D}_{L}\) (corresponding to one training cycle), we select \(B\) samples from the unlabelled set \(\mathcal{D}_{U}=\{\mathbf{x}_{u}^{(j)}\}_{j=1}^{M}\). These samples are annotated by an oracle before being added to the labelled training set \(\mathcal{D}_{L}\). The new labelled and unlabelled sets, therefore, become \(|\mathcal{D}_{L}|=N+B\) and \(|\mathcal{D}_{U}|=M-B\). This iterative process is repeated until the total annotation budget is exhausted. Our AL method builds upon our use of stochastic batches, summarized in Fig. 2. It operates in two stages to ensure a guided sampling diversity. First, we generate a pool of \(Q\) batches, each containing \(B\) samples chosen uniformly at random from \(\mathcal{D}_{U}\). Second, for every batch, we assign an uncertainty score to each of the samples it contains and compute the mean score across the entire batch. The samples in the batch associated with the highest mean score are subsequently chosen as annotation candidates. The pool size \(Q\) is a parameter that allows our sampling strategy to be a mix of random sampling (\(Q=1\)) and fully uncertainty-based sampling (\(Q=|\mathcal{D}_{U}|!\), assuming no repeated batch and no repeated sample in the same batch). Indeed, if the pool contains a single batch of randomly chosen samples (\(Q=1\)), the uncertainty score will not affect sampling as only one batch can be selected. On the other hand, the batch containing the top most uncertain samples will appear in the pool containing all possible combinations of batches (\(Q=|\mathcal{D}_{U}|!\)). The algorithm for our stochastic batch selection strategy is presented in Alg. 1. ``` Input num_cycles, pool size \(Q\), budget \(B\) Input model \(f\), initial weights \(\theta_{0}\), labelled data (\(X_{L}\), \(Y_{L}\)), unlabelled data \(X_{U}\) 1:for\(1\) to num_cycles do 2:\(f_{\theta}\leftarrow\) Train model \(f_{\theta_{0}}\) on \((X_{L},Y_{L})\) 3:for\(x_{u}\in X_{U}\)do 4:\(u_{score}\leftarrow\) Uncertainty(\(f_{\theta},x_{u}\)) 5:endfor 6:for\(i\in\{1,...,Q\}\)do 7: Generate batch with \(B\) random samples \(x_{u}\in X_{U}\) 8: Compute mean \(u_{score}\) over all samples of the batch 9:endfor 10:\(X_{S}\leftarrow\) Select top uncertain batch 11:\(Y_{S}\leftarrow\) Annotate \(X_{S}\) 12:\((X_{L},Y_{L})\leftarrow\) Add \((X_{S},Y_{S})\) to \((X_{L},Y_{L})\) 13:\(X_{U}\gets X_{U}\setminus X_{S}\) 14:endfor ``` **Algorithm 1** Stochastic batches for uncertainty-based sampling ### Uncertainty score To measure uncertainty and for comparison purposes, uncertainty scores from various active learning strategies (Shannon, 1948; Gal and Ghahramani, 2016; Gaillochet et al., 2022; Yoo and Kweon, 2019) are explored: * Entropy-based uncertainty (Shannon, 1948), which uses the entropy computed on the predicted output probabilities; * Dropout-based uncertainty (Gal and Ghahramani, 2016), using the variance/divergence of predictions obtained by multiple inferences with dropout; * Test-time Augmentation (TTA)-based uncertainty (Gaillochet et al., 2022), which measures the variance/divergence of predictions obtained for multiple transformations to the input; * Learning Loss uncertainty (Yoo and Kweon, 2019), which trains an external module to predict the target losses and selects unlabelled samples with the highest predicted loss. ## 4 Experiment and results We assess the benefits of our proposed stochastic batches on a medical image segmentation task. 
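Before turning to the experiments, the batch-selection step of Algorithm 1 can be summarized by the following minimal sketch (names are illustrative; the per-sample uncertainty scores are assumed to be precomputed with any of the measures listed above):

```
import random

def select_stochastic_batch(unlabelled_ids, scores, batch_size, pool_size):
    """Pick the random batch with the highest mean per-sample uncertainty.

    unlabelled_ids: list of sample identifiers in the unlabelled set
    scores: dict mapping sample id -> precomputed uncertainty score
    """
    best_batch, best_score = None, float("-inf")
    for _ in range(pool_size):                               # Q candidate batches
        batch = random.sample(unlabelled_ids, batch_size)    # no duplicates within a batch
        mean_score = sum(scores[i] for i in batch) / batch_size
        if mean_score > best_score:
            best_batch, best_score = batch, mean_score
    return best_batch
```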
Our evaluation compares the performance of models trained with different uncertainty-based AL strategies, with and without stochastic batches. These strategies include Entropy-based sampling (Shannon, 1948), Dropout-based sampling (Gal and Ghahramani, 2016), test-time augmentation (TTA)-based sampling (Gaillochet et al., 2022) and sampling based on Learning Loss (Yoo and Kweon, 2019). To evaluate the robustness of our method to the training and sampling procedure, we perform a series of experiments where we individually vary the initial labelled set, training hyperparameters, sampling budget and stochastic pool size. ### Dataset We validate our method on the Prostate MR Image Segmentation (PROMISE) 2012 Challenge dataset (Litjens et al., 2014). It contains transversal T2-weighted MR images of 50 patients, both healthy (or with benign diseases) and suffering from prostate cancer. The images were acquired with different scanning protocols and vary in prostate size and appearance. Image resolution ranges from \(15\times 256\times 256\) to \(54\times 512\times 512\) voxels. Spacing ranges from \(2\times 0.27\times 0.27\) to \(4\times 0.75\times 0.75\) \(mm^{3}\). Each volume is converted to 2D images by slicing along the short axis. Images are then resized to a resolution of \(128\times 128\) pixels, and pixel intensity is normalized based on the 1st and 99th percentiles of the intensity histogram for each patient. We test our model on 10 patient volumes selected uniformly at random from the PROMISE12 dataset, yielding 248 test images. Our validation set comprises 109 images from 5 volumes. We only use this validation set for hyperparameter search purposes. Our training set (labelled and unlabelled) comprises 1020 images from 35 patients. ### Evaluation metrics We evaluate our method on test volumes (3D) and individual images from these volumes (2D). We use both pixel overlap-based metrics and distance-based metrics. The Dice similarity coefficient (DSC) is averaged over all non-background channels. \[\text{DSC}(X,Y)\,=\,\frac{2|X\cap Y|}{|X|+|Y|} \tag{1}\] The Hausdorff distance (HD) and the Average symmetric surface distance (ASSD) measure the quality of the segmentation outlines. The HD measures the maximum shortest distance between a point from the prediction outline and a point from the target outline. The ASSD computes the average of all distances from a point on the prediction outline to a point on the ground truth outline, and vice versa. Given \(\text{d}(x,Y)\), the minimum distance from the boundary pixel \(x\) to the region \(Y\), we get: \[\text{HD}(X,Y)\,=\,\max\left\{\sup\limits_{x\in X}\text{d}(x,Y), \,\sup\limits_{y\in Y}\text{d}(X,y)\right\} \tag{2}\] \[\text{ASSD}(X,Y)\,=\,\frac{\sum\limits_{x\in X}\text{d}(x,Y)\, +\,\,\sum\limits_{y\in Y}\text{d}(X,y)}{|X|\,\,+\,\,|Y|} \tag{3}\] Since the Hausdorff distance tends to be sensitive to outliers, we also use a more robust variant which considers the \(95^{th}\) percentile instead of the true maximum (HD95). ### Implementation details We start each experiment by training our model with 10 labelled data points, randomly sampled from the unlabelled set before annotation. Setting \(B=10\), we then use our AL strategy to select 10 new samples from the unlabelled set, annotate them and add them to the existing labelled set. This process corresponds to the first AL cycle. Similarly to the experimental setting of previous AL approaches (Sener and Savarese, 2018), we retrain the model from scratch after each AL cycle. 
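For reference, the overlap and surface-distance metrics defined above can be computed with a minimal NumPy/SciPy sketch such as the following (assuming boolean masks and isotropic unit spacing; this is not the evaluation code used for the reported results):

```
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """DSC of two boolean masks, Eq. (1)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def _surface_distances(a, b):
    """Distances from the boundary pixels of `a` to the boundary of `b`."""
    a_border = a & ~ndimage.binary_erosion(a)
    b_border = b & ~ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~b_border)
    return dist_to_b[a_border]

def assd(a, b):
    """Average symmetric surface distance, Eq. (3)."""
    d_ab, d_ba = _surface_distances(a, b), _surface_distances(b, a)
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)

def hd95(a, b):
    """95th-percentile Hausdorff distance, the robust variant of Eq. (2)."""
    d = np.concatenate([_surface_distances(a, b), _surface_distances(b, a)])
    return np.percentile(d, 95)
```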
We repeat each experimental setup with 5 different seeds and report the mean and standard deviation computed for the different runs. Experiments were run on an NVIDIA V100 GPU with CUDA 10.2. We implemented the methods using Python 3.8.10 with the PyTorch framework. #### 4.3.1 Training We use a standard 4-layer UNet (Ronneberger et al., 2015) as our segmentation model with dropout (\(p=0.5\)), batch normalization and a leaky ReLU activation function. The model is trained for 75 epochs in all experiments, each epoch iterating over 250 batches with batch size \(BS=4\). The number of training steps is hence fixed during all AL cycles, ensuring a fairer comparison. We optimize a supervised CE loss with the Adam optimizer (Kingma and Ba, 2015). We apply a gradual warmup with a cosine annealing scheduler (Loshchilov and Hutter, 2017; Goyal et al., 2018) to control the learning rate. During training, we use data augmentations on the input, with parameters \(d\) and \(\epsilon\), where \(d\) is the degree of rotation in 2D, and \(\epsilon\) models Gaussian noise. Because active learning aims to minimize the amount of labelled data needed to train the model, we do not use the validation set to select the final model. Our final model is instead the model obtained after the last training epoch. When not testing for their impact, we keep the training hyperparameters fixed. We fix the initial learning rate \(LR=10^{-6}\) with optimizer weight decay set to \(10^{-4}\). The scheduler increments the learning rate by a factor of 200 during the first 10 epochs. For augmentations, we set \(d\sim\mathcal{U}(-10,10)\) and \(\epsilon\sim\mathcal{N}(0,0.01)\). #### 4.3.2 Sampling Baselines. We compare our stochastic batch strategy with random sampling as well as four purely uncertainty-based methods: Entropy-based sampling (Shannon, 1948), Dropout-based sampling (Gal and Ghahramani, 2016), Learning Loss (Yoo and Kweon, 2019) and TTA-based sampling (Gaillochet et al., 2022). Similarly to Gaillochet et al. (2022), we compute uncertainty for dropout-based and TTA-based sampling with a standard Jensen-Shannon divergence (JSD) on 8 different output probability maps. These probability maps come from multiple inferences with dropout on the same input sample, or inferences for multiple augmented versions. In the latter case, augmentation includes Gaussian noise \(\epsilon\sim\mathcal{N}(0,0.01)\) and rotation. To simulate more realistic transformations in medical data, we replace the 90-degree rotations in Gaillochet et al. (2022) with rotations of angle \(d\sim\mathcal{U}(-10,10)\) degrees. The training parameters used for the approach based on Learning Loss (Yoo and Kweon, 2019) were obtained by grid search on 10 labelled samples. We kept these parameters fixed in all our experiments. Stochastic batches. To optimize manual annotation, we forbid choosing duplicate samples in the same generated stochastic batch. Across batches, we experiment with two settings: sampling with and without replacement. In our first two experiments, each sample is allowed to appear in only one proposed stochastic batch. The stochastic pool size \(Q\) is the maximum number of batches that can be generated from the current unlabelled dataset and thus decreases as more AL cycles are completed. For more flexibility when evaluating the impact of batch and pool size, the same sample is then allowed to appear in different batches. The stochastic pool size is set to \(Q=100\). 
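For the dropout- and TTA-based baselines, the image-level JSD over several stochastic predictions can be computed as in the following minimal sketch (assuming `probs` stacks K softmax probability maps; this is an illustration rather than the authors' code):

```
import torch

def jsd_score(probs, eps=1e-12):
    """probs: tensor of shape (K, C, H, W) holding K softmax probability maps
    obtained from dropout passes or test-time augmentations of one image."""
    mean_p = probs.mean(dim=0)                                       # (C, H, W)
    entropy_of_mean = -(mean_p * (mean_p + eps).log()).sum(dim=0)    # (H, W)
    mean_of_entropy = -(probs * (probs + eps).log()).sum(dim=1).mean(dim=0)
    jsd_map = entropy_of_mean - mean_of_entropy                      # per-pixel JSD
    return jsd_map.mean()                                            # image-level score
```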
Experimentally, we found that generating stochastic batches by sampling with and without replacement across batches produced similar results. ### Impact of initial labelled set In our first set of experiments, we evaluate the robustness of stochastic batch sampling to the initial labelled set. We investigate the impact of both the number and the type of samples in the initial labelled set. Our first analysis experiments with 5 different initial labelled sets chosen uniformly at random from the training set. The results displayed in Fig. 3 validate the effectiveness of our method against different initial labelled sets. Averaged over 25 experiments with varying initial labelled sets and initialization seeds, our stochastic batch querying (blue, full lines) improves the model performance of purely uncertainty-based strategies (orange, dashed lines). For all considered AL strategies, selecting the most uncertain batch of samples rather than the most uncertain individual samples improves the model's overall performance. Table 1 shows the average results over all AL cycles (omitting training with the given initial labelled set). For all methods and metrics except for TTA with the 95% Hausdorff distance metric, adding stochastic batches provides an improvement at a statistically significant level. Note that the standard deviations given in the table tend to be large because they are averaged over 25 experiments and 7 AL cycles. For the second analysis, we validate the performance of models trained on initial labelled sets of varying sizes. We conduct our experiment on two popular AL strategies: entropy and dropout-based sampling. For each given initial labelled set size, the experiment is repeated with 5 initialization seeds - controlling the initial labelled samples used, the model initialization and the training updates. Table 2 gives the average model performance over 6 AL cycles. We observe that our stochastic batch selection strategy improves upon purely uncertainty-based selection even when we vary the initial number of labelled samples. ### Impact of training hyperparameters Most AL methods perform hyperparameter tuning on the initial labelled set, keeping the obtained hyperparameters fixed during all cycles of the experiments. However, these parameters might be sub-optimal in later training cycles when more labelled data is available (Munjal et al., 2022). Hence we verify in our second set of experiments the robustness of stochastic batches to training hyperpa Figure 3: **Improvements with Stochastic Batches over varying initial labelled samples**. Active learning results on the PROMISE12 dataset in terms of 3D test dice score and corresponding 95% confidence interval. Each point is the mean over 25 experiments: 5 training hyperparameters sets and 5 initialization seeds. Depicted are the results for sampling based on Entropy (row 1), Dropout (row 2), Learning loss (row 3) and Test-time augmentation (row 4). The active learning selection is shown with (blue, full) and without (orange, dashed) stochastic batches. Stochastic batches improve the model performance of purely uncertainty-based AL strategies, regardless of the initial labelled set. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{Entropy} & \multicolumn{2}{c}{Dropout} & \multicolumn{2}{c}{LearningLoss} & \multicolumn{2}{c}{TTA} \\ & RS & \multicolumn{2}{c}{(Shannon, 1948)} & \multicolumn{2}{c}{(Gal and Ghahramani, 2016)} & \multicolumn{2}{c}{(Yoo and Kweon, 2019)} & \multicolumn{2}{c}{(Gaillochet et al., 2022)} \\ \cline{3-8} & & w/o SB & Ours & w/o SB & Ours & w/o SB & Ours \\ \hline **3D DSC** & 68.83 & 0.6701 & **71.27*** & 67.69 & **72.59*** & 53.88 & **65.29*** & 64.07 & **69.71*** \\ (\(\uparrow\) best) & (\(\pm\)15.99) & (\(\pm\)16.68) & (\(\pm\)17.39) & (\(\pm\)17.16) & (\(\pm\)14.96) & (\(\pm\)21.51) & (\(\pm\)17.72) & (\(\pm\)21.13) & (\(\pm\)17.59) \\ \hline **2D DSC** & 67.94 & 66.88 & **68.99*** & 67.07 & **69.64*** & 60.22 & **65.72*** & 65.85 & **68.00*** \\ (\(\uparrow\) best) & (\(\pm\)8.28) & (\(\pm\)8.62) & (\(\pm\)9.03) & (\(\pm\)9.51) & (\(\pm\)8.05) & (\(\pm\)10.36) & (\(\pm\)8.94) & (\(\pm\)10.25) & (\(\pm\)9.02) \\ \hline **3D HD95** & 7.03 & 7.03 & **6.69*** & 6.96 & **6.58*** & 9.14 & **7.82*** & **6.92*** & 7.19 \\ (\(\downarrow\) best) & (\(\pm\)3.73) & (\(\pm\)4.27) & (\(\pm\)3.14) & (\(\pm\)4.95) & (\(\pm\)3.18) & (\(\pm\)6.44) & (\(\pm\)4.38) & (\(\pm\)4.79) & (\(\pm\)3.17) \\ \hline \hline \end{tabular} \end{table} Table 1: **Overall improvements with Stochastic Batches over varying initial labelled samples**. Mean model performance on the PROMISE12 dataset over all AL cycles (omitting training with the initial labelled set). We show the mean (std) Dice score (DSC, higher is better) and 95% Hausdorff distance (HD95, lower is better) over 3D test volumes and 2D test images. The results are averaged over 6 AL cycles, 5 initial labelled sets chosen uniformly at random and 5 initialization seeds, totalling 150 experiments per point. A * indicates the statistical significance of the result with a p-value \(<0.05\) given a paired permutation test. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & & \multicolumn{2}{c}{Entropy} & \multicolumn{2}{c}{Dropout} \\ & & \multicolumn{2}{c}{(Shannon, 1948)} & \multicolumn{2}{c}{(Gal and Ghahramani, 2016)} \\ \cline{3-8} & RS & w/o SB & Ours & w/o SB & Ours \\ \hline **5 initial samples** & 71.22 & 65.18 & **72.36*** & 61.69 & **71.45*** \\ & (\(\pm\) 15.09) & (\(\pm\) 14.43) & (\(\pm\)16.19) & (\(\pm\) 18.15) & (\(\pm\)15.41) \\ \hline **10 initial samples** & 71.08 & 66.21 & **73.78*** & 65.09 & **73.30*** \\ & (\(\pm\) 11.70) & (\(\pm\) 13.79) & (\(\pm\)10.73) & (\(\pm\) 15.63) & (\(\pm\)14.95) \\ \hline **15 initial samples** & 75.21 & 73.19 & **74.21** & 72.90 & **76.81*** \\ & (\(\pm\) 7.27) & (\(\pm\)7.54) & (\(\pm\)7.85) & (\(\pm\)6.96) & (\(\pm\)8.34) \\ \hline **20 initial samples** & 76.00 & 76.13 & **80.24*** & 77.47 & **79.81*** \\ & (\(\pm\)7.19) & (\(\pm\)5.55) & (\(\pm\)4.19) & (\(\pm\)6.38) & (\(\pm\)5.54) \\ \hline **25 initial samples** & 77.07 & 77.73 & **79.71*** & 77.44 & **81.08*** \\ & (\(\pm\)4.39) & (\(\pm\)3.79) & (\(\pm\)4.37) & (\(\pm\)4.31) & (\(\pm\)5.20) \\ \hline \hline \end{tabular} \end{table} Table 2: **Overall improvements with Stochastic Batches for initial labelled sets of different sizes**. Mean model performance on the PROMISE12 dataset over all AL cycles (omitting training with the initial labelled set) for initial sets of different sizes. We show the mean (std) Dice score (higher is better) over 3D test volumes. 
The results are averaged over 6 AL cycles and 5 initialization seeds, totalling 30 experiments per point. A * indicates the statistical significance of the result with a p-value \(<0.05\) given a paired permutation test. rameters. To create a collection of different yet realistic hyperparameters, we experimentally select five sets of hyperparameters, each yielding the highest validations score for labelled sets of sizes 10, 50, 100, 150 or 200. Augmentation parameters such as the maximum rotation angle ranged between 10 and 140 degrees, and the standard deviation of Gaussian noise varied between 0 and 0.1. Similarly, we optimized a loss combining Cross-entropy (CE) and Dice, with the weight of CE varying from 0.6 to 1. After 5 to 55 warmup steps, we also used a scheduler which increased the learning rate by a factor ranging from 200 to 500. We hence obtained a diverse range of parameters. The results depicted in Fig. 4 show that adopting stochastic batches during sampling (full lines) yields a significant boost in terms of 3D dice score compared to the performance obtained without stochastic batches (dashed lines). This jump in performance is notable, particularly during the first AL cycles (20-40 labelled samples). In terms of 3D dice score, the model performance becomes considerably better compared to results given by random sampling. The only exception is when applying stochastic batches to Learning Loss: in that case, model performance becomes similar to that with random sampling. However, the original Learning Loss strategy performs much more poorly than the random sampling baseline. Overall, Learning Loss actually obtains the most significant improvement jump. Table 3 gives the average results over all AL cycles (omitting the first training with the initial labelled set). The benefits of stochastic batches are most apparent in terms of dice scores, both for test images and test volumes. Although TTA yields, on average, better results with stochastic batches than without, the results are not always statistically significant. The variability of training and regularization hyperparameters used in the experiment could explain these inconsistencies. In particular, the disparity between the data augmentation parameters used during training (variable) and those used for sampling (kept fixed) could have affected the performances of TTA. It is also interesting to note that stochastic batches maximize the performance of Entropy, Dropout and TTA-based strategies, and all three methods become comparable. On the contrary, their purely uncertainty-based versions yield more considerable variations in the results. ### Impact of sampling budget We also investigate the robustness of stochastic batches to the sampling budget. Keeping the initial labelled set and training hyperparameters fixed, we run experiments with 5 different sampling budgets, which we keep constant across cycles. The results shown in Fig. 5 reveal that stochastic batches have a more consistent impact on model performance as the budget size increases. With a high budget \(B=15\) (row 3), the use of stochastic batches constantly improves purely uncertainty-based methods. Improvement is also nearly always constant for Figure 4: **Improvements with Stochastic Batches over varying hyperparameters**. Active learning results on the PROMISE12 dataset in terms of 3D test dice score and corresponding 95% confidence interval. Each point is the mean over 25 experiments: 5 training hyperparameters sets and 5 initialization seeds. 
Depicted are the results for sampling based on a) Entropy, b) Dropout, c) Learning loss and d) Test-time augmentation. The AL selection is shown with (blue, full) and without (orange, dashed) stochastic batches. Random sampling results (grey, dotted) are also plotted. Our stochastic batches improve the model performance of purely uncertainty-based AL strategies and boost performance well above the random sampling baseline. lower budgets \(B=5\) (row 1). With very low budgets, batch uncertainty is highly influenced by the uncertainty of each individual sample, potentially reducing the benefits of diversity offered by stochastic batches. The selection is dominated by uncertainty, and if the measure for uncertainty is not representative of the true uncertainty of the model, then uninformative samples could be selected and consequently bias the model. ### Impact of sampling stochastic pool size In our last experiment, we evaluate the influence of the number of batches in the stochastic pool on the model performance, fixing the initial labelled set, training hyperparameters and sampling budget. The results for our experiments on Entropy-based and Dropout-based sampling are given in Fig. 6. We observe that applying the biggest pool size does not necessarily yield the best performance. On the contrary, the model performs best when the most uncertain batch is selected from a pool containing 10 or 100 different batches. Increasing the pool of choices by a factor 10 or 100 does not lead to significant improvements, and can lead to worse performances. ### Candidate samples We finally investigate visually the benefits of using stochastic batches with uncertainty-based sample selection. We show in Fig. 7 two sets of candidate samples identified by Entropy-based sampling, with and without our stochastic batches. In the first two columns, the samples were selected by identifying the most uncertain randomly generated batch. In the last two columns, the most certain samples were queried based on the entropy of their predicted output probabilities. While the samples from the first two columns seem more diverse, with more variety in the candidate set, the third column contains noticeably similar samples. Indeed, the first four images of the column are slices taken from the MRI volume of the same patient. ## 5 Discussion Overall, our results demonstrate that using stochastic batches during uncertainty-based sampling is an efficient strategy to ensure diversity among the selected batch of samples. Furthermore, we experimentally observe that the benefit of using stochastic batches is robust to changes in the initial labelled set, initialization of the model and training hyperparameters, as well as to variations in the sampling budget. As illustrated in Fig. 7, the redundancy of queried samples constitutes one of the main drawbacks of uncertainty-based AL strategies. Candidate samples may indeed convey highly similar information. Hence, the annotation effort on these samples will be suboptimal. If, on the contrary, the most uncertain batches rather than the most uncertain samples are queried, the diversity within our stochastic batches mitigates the overlap of information and redundancy between samples. Our stochastic scheme adds diversity to the uncertainty-based sampling in AL. Our quantitative results demonstrate the advantages of adding such a stochastic scheme in AL in terms of added segmentation accuracy in a low-labelled set regime and reduced number of required training samples. 
Previous AL works have observed that the initial labelled pool can significantly impact the training and final performance of AL models (Chen et al., 2022). Nevertheless, a robust AL method should still perform well regardless of this initial labelled set. The results obtained in our experiment with varying initial labelled sets (Sec. 4.4) reveal that the performance boost from our stochastic batch sampling strategy is robust to changes in both the initial labelled set and model initialization. On average, selecting the most \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{Entropy} & \multicolumn{2}{c}{Dropout} & \multicolumn{2}{c}{LearningLoss} & \multicolumn{2}{c}{TTA} \\ & & (Shannon, 1948) & (Gal and Ghahramani, 2016) & (Yoo and Kweon, 2019) & (Gaillochet et al., 2022) \\ \cline{2-10} & RS & w/o SB & Ours & w/o SB & Ours & w/o SB & Ours & w/o SB & Ours \\ \hline **3D DSC** & 75.57 & 75.13 & **78.44*** & 76.49 & **78.59*** & 69.53 & **76.25*** & 77.33 & **78.67*** \\ (\(\uparrow\)best) & (\(\pm\)6.48) & (\(\pm\)6.95) & (\(\pm\)6.02) & (\(\pm\)7.65) & (\(\pm\)6.09) & (\(\pm\)8.43) & (\(\pm\)6.68) & (\(\pm\)6.92) & (\(\pm\)5.53) \\ \hline **2D DSC** & 68.29 & 68.90 & **71.04*** & 69.62 & **71.08*** & 64.27 & **69.16*** & 70.46 & **71.31*** \\ (\(\uparrow\)best) & (\(\pm\)6.79) & (\(\pm\)7.34) & (\(\pm\)6.51) & (\(\pm\)6.70) & (\(\pm\)6.79) & (\(\pm\)7.23) & (\(\pm\)6.80) & (\(\pm\)7.05 & (\(\pm\)5.71) \\ \hline **3D HD95** & 7.58 & 7.87 & **6.83*** & **6.72** & 6.74 & 8.78 & **7.85*** & 6.32 & **6.13** \\ (\(\downarrow\)best) & (\(\pm\)3.86) & (\(\pm\)4.28) & (\(\pm\)3.31) & (\(\pm\)2.75) & (\(\pm\)3.29) & (\(\pm\)4.22) & (\(\pm\)3.68) & (\(\pm\)2.87) & (\(\pm\)2.82) \\ \hline **3D ASSD** & 2.09 & 2.19 & **1.88*** & 1.92 & **1.86** & 2.46 & **2.13*** & 1.81 & **1.75** \\ (\(\downarrow\)best) & (\(\pm\)0.93) & (\(\pm\)1.03) & (\(\pm\)0.81) & (\(\pm\)0.73) & (\(\pm\)0.81) & (\(\pm\)1.05) & (\(\pm\)0.90) & (\(\pm\)0.73 & (\(\pm\)0.70) \\ \hline \hline \end{tabular} \end{table} Table 3: **Overall improvements with Stochastic Batches over varying training hyperparameters. Mean model performance on the PROMISE12 dataset over all AL cycles (omitting training with the initial labelled set). We show the mean (std) Dice score (DSC, higher is better), 95% Hausdorff (HD95, lower is better) distance and Average symmetric surface distance (ASSD, lower is better) over 3D test volumes and individual 2D test images. The results are averaged over 7 AL cycles, 5 training hyperparameter sets and 5 seeds, totalling 175 experiments per point. * indicates the statistical significance of the result with a p-value \(<0.05\) given a paired permutation test.** uncertain batches across AL cycles yields better results than selecting the most uncertain samples. Similarly, Sec. 4.5 shows that the improvements yielded by stochastic AL batches are also robust to changes in the training and regularization parameters. Hence, our method can maintain efficiency despite changes in the learning environment. These results suggest that using stochastic batches during AL for uncertainty-based sampling can be a reliable and robust AL approach. Our stochastic batch querying strategy for uncertainty-based AL operates as a balance between a fully random and a purely uncertainty-based selection. Tuning the stochastic pool size controls the amount of randomness desired in the AL selection. 
With the smallest pool size (\(Q=1\)), our stochastic batch selection is equivalent to random sampling since the single suggested batch will automatically have the highest uncertainty score in the pool. With the biggest pool size (\(Q\rightarrow\infty\)), all possible combinations of samples are available in the pool, and selecting the most uncertain batch of samples is equivalent to selecting the top uncertain samples. In other words, the approach becomes a purely uncertainty-based AL strategy with a larger pool size. As shown in Sec. 4.7, the benefits of our stochastic batches are apparent in between those extreme \(Q\) values, when Figure 5: **Improvements with Stochastic Batches given different budget sizes**. Model performance in terms of 3D dice score on test volumes given active learning selection with (solid) and without (dashed) stochastic batches on the PROMISE12 dataset. The results are given for sampling budgets \(B=5\) (row 1), \(B=10\) (row 2) and \(B=15\) (row 3). Each point is the mean over 5 different initialization seeds. Depicted are the results for Entropy-based sampling (green) and Dropout-based sampling (red). Using stochastic batches during sampling improves the model performance at both low and higher budgets. Figure 6: **Impact of pool size of Stochastic Batches**. Model performance in terms of 3D dice score on test volumes from the PROMISE12 dataset given stochastic batch pools of different sizes. Each column value is the mean obtained over 5 experiments with different seed initialization. The error bars (black) corresponds to the 95% confidence interval. Depicted are the results 2 popular uncertainty-based AL methods: Entropy-based sampling (6a) and Dropout-based sampling (6b). A medium pool size between 10 to 100 yields some of the most advantageous performances. the sampling strategy combines the informativeness of uncertainty-based sampling with the diversity provided by random sampling. Our quantitative results also reveal that a small pool of 10 to 100 different batches is sufficient to obtain a significant boost in model performance, exposing another benefit of our method. By covering only a fraction of the initial unlabelled set with the stochastic pool, inference to compute uncertainties at sampling time could be made for only a reduced subset of the unlabelled set, drastically reducing the sampling time. Indeed, without stochastic batches, uncertainty-based AL strategies usually require computing uncertainty on all unlabelled samples. However, with a smaller pool size, our stochastic batch scheme can identify candidate samples with fewer inferences. ## 6 Conclusion Active learning is particularly relevant in medical image segmentation since manual labelling is highly time-consuming and expensive. This paper addresses three main limitations of AL strategies: the lack of AL works for medical image segmentation, the proneness of uncertainty-based batch sampling strategies to select similar samples and the computational burden of diversity-based methods. Instead of selecting candidate samples based on sample-level uncertainty, our method proposes to compute uncertainty at the batch level, where batches of samples are randomly generated. Stochastic batches for uncertainty-based sampling are a simple, computational-inexpensive means of improving the AL candidate selection and the final model performance. Our method is flexible and can adapt to any uncertainty-based AL strategy. 
Our experiments show that our method is robust to variations in training and sampling settings and effective for the complex task of medical image segmentation. ## Acknowledgments This work is supported by the Canada Research Chair on Shape Analysis in Medical Imaging, the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Quebec Bio-Imaging Network (QBIN). Computational resources were partially provided by Compute Canada. The authors also thank the PROMISE12 Challenge organizers for providing the data.
2307.03798
Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints
Models leveraging both visual and textual data such as Contrastive Language-Image Pre-training (CLIP), are the backbone of many recent advances in artificial intelligence. In this work, we show that despite their versatility, such models are vulnerable to what we refer to as fooling master images. Fooling master images are capable of maximizing the confidence score of a CLIP model for a significant number of widely varying prompts, while being either unrecognizable or unrelated to the attacked prompts for humans. The existence of such images is problematic as it could be used by bad actors to maliciously interfere with CLIP-trained image retrieval models in production with comparably small effort as a single image can attack many different prompts. We demonstrate how fooling master images for CLIP (CLIPMasterPrints) can be mined using stochastic gradient descent, projected gradient descent, or blackbox optimization. Contrary to many common adversarial attacks, the blackbox optimization approach allows us to mine CLIPMasterPrints even when the weights of the model are not accessible. We investigate the properties of the mined images, and find that images trained on a small number of image captions generalize to a much larger number of semantically related captions. We evaluate possible mitigation strategies, where we increase the robustness of the model and introduce an approach to automatically detect CLIPMasterPrints to sanitize the input of vulnerable models. Finally, we find that vulnerability to CLIPMasterPrints is related to a modality gap in contrastive pre-trained multi-modal networks. Code available at https://github.com/matfrei/CLIPMasterPrints.
Matthias Freiberger, Peter Kun, Christian Igel, Anders Sundnes Løvlie, Sebastian Risi
2023-07-07T18:54:11Z
http://arxiv.org/abs/2307.03798v3
# CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution ###### Abstract Models leveraging both visual and textual data such as Contrastive Language-Image Pre-training (CLIP), are increasingly gaining importance. In this work, we show that despite their versatility, such models are vulnerable to what we refer to as fooling master images. Fooling master images are capable of maximizing the confidence score of a CLIP model for a significant number of widely varying prompts, while being unrecognizable for humans. We demonstrate how fooling master images can be mined by searching the latent space of generative models by means of an evolution strategy or stochastic gradient descent. We investigate the properties of the mined fooling master images, and find that images trained on a small number of image captions potentially generalize to a much larger number of semantically related captions. Further, we evaluate two possible mitigation strategies and find that vulnerability to fooling master examples is closely related to a modality gap in contrastive pre-trained multi-modal networks. From the perspective of vulnerability to off-manifold attacks, we therefore argue for the mitigation of modality gaps in CLIP and related multi-modal approaches. Source code and mined CLIPMasterPrints are available at [https://github.com/matfrei/CLIPMasterPrints](https://github.com/matfrei/CLIPMasterPrints) ## 1 Introduction In recent years, contrastively trained multi-modal approaches such as Contrastive Language-Image Pre-training (CLIP) [28] have increasingly gained importance. Among numerous useful applications, they constitute a powerful approach to perform zero-shot learning and play an important role in state-of-the-art text-to-image generators [29]. Yet, recent work raises the question of robustness and safety of CLIP-trained models. For example, Qiu et al. [27] find that CLIP and related multi-modal approaches are vulnerable to distribution shifts, and several research groups have successfully mounted adversarial attacks against CLIP [24, 6, 8]. In this paper, we show for the first time, that despite their power, CLIP models are vulnerable towards fooling master images, or what we refer to as _CLIPMasterPrints_, and that this vulnerability is closely related to a modality gap between text and image embeddings. The fooling master images introduced in this paper are capable of maximizing the confidence score of a CLIP model for a significant number of widely varying prompts, while being unrecognizable to humans. This ability can ef Figure 1: Heatmap of CLIP-assigned cosine similarities of famous artworks and their titles, as well as a random noise baseline (second from right) and our found fooling master image (first from right, marked with red frame) as returned by a pre-trained CLIP model. The mined fooling example outperforms all artworks in terms of CLIP score and can therefore fool the model for all targeted titles shown. fectively result in the master example being chosen over actual objects of a class when being compared to each other by the attacked model. What makes these fooling master examples particularly interesting is that a potential attacker needs to only mine a single fooling image to target a significant range of classes and captions processed by the attacked model. The existence of such unrecognizable images raises interesting questions on the efficiency and safety of multi-modal approaches to zero-shot prediction. 
We argue that understanding why fooling master images exist, and how their existence can be prevented in multi-modal models, poses a significant step to safely applying CLIP and its derivatives to a wide range of real-world domains. Our main contributions are as follows: We mine fooling master examples targeting CLIP models by means of Latent Variable Evolution (LVE) [4, 35] as well as stochastic gradient descent (SGD), and demonstrate that CLIP models are vulnerable to being fooled by such generated images. In more detail, we show that using the technique introduced in this paper, a potential attacker could be able to target a significant number of prompts/captions/targets by mining only a single image. Moreover, we find that the mined fooling examples tend to generalize to non-targeted prompts which are textually related to targeted prompts potentially further increasing their impact. We analyze the spatial distribution of information in mined master images and find them to be sensitive to occlusion and cropping. Finally, we evaluate two mitigation approaches: First, we refine the attacked model to map off-manifold images to a special token in order to mitigate its vulnerability to the mined fooling example. This approach is effective in mitigating known fooling images, yet the updated model remains vulnerable to newly mined fooling examples. Second, we mitigate the attacked models modality gap by moving centroids of image and text embeddings closer together. The latter proves to be an effective mitigation strategy even for newly mined fooling images. Consequentially our results point towards a strong connection between a vulnerability to fooling master images and a modality gap between text and image embeddings. The remainder of this paper is structured as follows: Section 2 gives an overview on related work, after which we lay out our approach to finding fooling master images in Section 3. Thereafter we discuss our experimental setup in Section 4.1 and present our mined fooling master images in Section 4.2. Sections 4.3 and 4.4 discuss in how far the mined images generalize to originally non-targeted classes and how many prompts a single example can feasibly target. Section 4.5 analyses how information about different prompts is distributed in a single fooling example and Section 4.6 discusses mitigation. Finally, in Section 5 we summarize and discuss our results and point out interesting future work directions. ## 2 Related work The notion of fooling examples was originally introduced by Nguyen et al. [23], in which the authors generate fooling examples for individual classes for convolutional neural network (CNN) classifiers [17] using genetic algorithms and compositional pattern producing networks [33]. In later work, Alcorn et al. [2] showed that CNNs can even be easily fooled by familiar objects in different and out-of-distribution poses. The main difference to our work is that the authors generate images that are misclassified as just one concrete class, while our images fool the network with respect to many classes or prompts. Bontrager et al. [4] introduced the concept of latent variable evolution (LVE). The authors use Covariance Matrix Adaption - Evolution Strategy (CMA-ES) to perform stochastic search in the generator latent space of a Generative Adversarial Network (GAN) [10, 9] to create _deep master prints_. 
Deep master prints are introduced as synthetic fingerprint images which match large numbers of real-world fingerprints, thus undermining the security of fingerprint scanners. Our _CLIPMasterPrint_ approach builds on these introduced concepts by adapting LVE to find fooling master images for Contrastive Language-Image Pre-training transformers [34, 28]. In more detail, we adapt the original LVE approach in two ways: First, we utilize a custom fitness function rewarding CLIP embeddings of fooling images that align well to a number of targeted prompts. Second, rather than using a custom-trained GAN to generate fooling examples, in order to target CLIP models we evolve our solution in the latent space of the publicly available autoencoder [3] used by Stable Diffusion [29], which has been trained on an internet-scale collection of images. Thus our approach does not produce images on the manifold spanned by the generative model in latent space, but rather unrecognizable images, which we hypothesise may be necessary in order to generate master images for CLIP (see Section 3). Adversarial examples and adversarial learning [5, 26, 1] are also closely related to generating fooling examples, where usually adversarial examples can be disguised as regular images. The alternative gradient-based optimization approach we apply in this paper is related to a number of popular gradient-based adversarial attacks [11, 15, 16, 20]. Contrary to these attacks though, we optimize a loss function targeting many classes/prompts in parallel and do not take any measures to confine our found solution to the image manifold, as our proposed attack is intrinsically an off-manifold attack. Adversarial attacks to contrastively pre-trained multimodal networks, among others by means of text patches and adversarial pixel perturbations have been performed in a number of variations [24, 6, 18, 8]. Interesting work in terms of attacks on the text encoding has been performed by Daras and Dimakis [6], where the authors show that one is able to generate images using nonsense-phrases in DALL-E 2, which we believe to be related to the issue of the modality gap between text and image embeddings upon which our work builds as well. This modality gap in contrastively pre-trained multi-modal approaches has been documented originally by Liang et al. [19], where the authors show that the gap is caused by the inductive bias of the transformer architecture and reinforced by training a contrastive loss. While Liang et al. merely document the modality gap, in our work we find that with respect the to vulnerability to off-manifold attacks, the modality gap should be mitigated. Nukrai et al. [25] come to a similar conclusion upon finding that the modality gap causes instability when training an text decoder from CLIP embeddings. Finally, in terms of the robustness of multi-modal neural networks Qiu et al. [27] conduct an extensive evaluation of CLIP and CLIP-based text-to-image systems, where they come to the conclusion that CLIP and its derivatives are not robust with respect to distribution shift. ## 3 Approach: CLIPMasterPrints We generate fooling images for a given model \(C_{\theta}\), which has been trained using Contrastive Language-Image Pre-training (CLIP), to indicate how well a prompt or image caption \(c\) describes the contents of an image \(\mathbf{x}\). 
For each caption-image pair \((c,\mathbf{x})\), \(C_{\theta}\) extracts a pair of corresponding vector embeddings \((\mathbf{f}(c),\mathbf{g}(\mathbf{x}))\) and computes their cosine similarity \[s(\mathbf{x},c)=\mathcal{C}_{\theta}(\mathbf{x},c)=\frac{\mathbf{g}(\mathbf{x})}{\|\mathbf{g}(\mathbf{x})\|}\cdot\frac{\mathbf{f}(c)^{\intercal}}{\|\mathbf{f}(c)\|}, \tag{1}\] where a cosine similarity of \(1\) between \(\mathbf{f}(c)\) and \(\mathbf{g}(\mathbf{x})\) indicates an excellent match between prompt \(c\) and image \(\mathbf{x}\). In practice, however, it has been found that such large scores are hardly achieved, and for well-fitting text-image pairs \(s(\mathbf{x},c)\approx 0.3\) [31, 19]. On the other hand, \(s(\mathbf{x},c)\approx 0\) indicates that prompt and image are unrelated. In the latent space of \(\mathcal{C}_{\theta}\), we aim to find an embedding \(\mathbf{g}(\mathbf{x}^{\prime})\) corresponding to a fooling master image \(\mathbf{x}^{\prime}\) for a number of matching text-image pairs \((c_{1},\mathbf{x_{1}}),(c_{2},\mathbf{x_{2}}),\ldots,(c_{n},\mathbf{x_{n}})\) such that: \[\frac{\mathbf{g}(\mathbf{x}^{\prime})}{\|\mathbf{g}(\mathbf{x}^{\prime})\|}\cdot\frac{\mathbf{f}(c_{k})^{\intercal}}{\|\mathbf{f}(c_{k})\|}>\frac{\mathbf{g}(\mathbf{x_{k}})}{\|\mathbf{g}(\mathbf{x_{k}})\|}\cdot\frac{\mathbf{f}(c_{k})^{\intercal}}{\|\mathbf{f}(c_{k})\|}\quad\text{for}\quad k\in[1,n].\] The observation [31] that for most matching text-image pairs \[\frac{\mathbf{g}(\mathbf{x_{k}})}{\|\mathbf{g}(\mathbf{x_{k}})\|}\cdot\frac{\mathbf{f}(c_{k})^{\intercal}}{\|\mathbf{f}(c_{k})\|}\approx 0.3-0.35\] indicates that there is a limit on how well the CLIP-trained model \(\mathcal{C}_{\theta}\) can align \(\mathbf{g}\), extracted from a vector \(\mathbf{x_{k}}\) on the image manifold, with \(\mathbf{f}\), the model's vector embedding of the text prompt \(c_{k}\). We hypothesize that this apparent limit for vectors on the image manifold implies that if one were to search for vectors \(\mathbf{x}^{\prime}\) off manifold, one might find a vector that aligns better (and thus has a higher cosine similarity score \(s\)) with all the captions \(c_{1},c_{2},\ldots,c_{n}\) than any of the matching vectors on the image manifold \(\mathbf{x_{1}},\mathbf{x_{2}},\ldots,\mathbf{x_{n}}\). To test this hypothesis, we employ a Latent Variable Evolution (LVE) approach [4, 35], which searches the latent space of a generative model using an evolutionary algorithm to find latent vectors which, when decoded, minimize a loss function designed to favor content that fulfills particular properties. In order to find an image that maximizes \(s(\mathbf{x}^{\prime},c_{k})\) for \(n\) different image captions \(c_{1},c_{2},\ldots,c_{n}\), we minimize the loss function: \[\mathcal{L}(\mathbf{x})=-\min_{\forall c_{k}}\;s(\mathbf{x},c_{k}), \tag{2}\] which drives the evolutionary algorithm towards an image \(\mathbf{x}\) for which \(s\) is as large as possible for all presented captions \(c_{1},c_{2},\ldots,c_{n}\). The particular evolutionary algorithm employed in this paper is Covariance Matrix Adaption - Evolution Strategy (CMA-ES) [13], which is known for its robustness as it adapts its sampling strategy to sample along the contour lines of the loss surface. 
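To illustrate, the objective in Equation 2 can be written directly on top of precomputed caption embeddings. The fragment below is a hypothetical PyTorch sketch, not the authors' implementation; `image_encoder` and `text_features` stand in for the CLIP image tower and the encoded target captions \(\mathbf{f}(c_{1}),\ldots,\mathbf{f}(c_{n})\).

```python
import torch
import torch.nn.functional as F

def master_print_loss(image, image_encoder, text_features):
    """L(x) = -min_k s(x, c_k), cf. Eq. (2).

    image         : tensor (1, 3, H, W), a decoded candidate image
    image_encoder : module mapping images to CLIP image embeddings g(x)
    text_features : tensor (n_captions, dim), precomputed f(c_k)
    """
    g = F.normalize(image_encoder(image), dim=-1)   # (1, dim)
    f = F.normalize(text_features, dim=-1)          # (n, dim)
    sims = g @ f.t()                                # cosine similarities s(x, c_k)
    return -sims.min()                              # push up the worst-aligned caption
```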
CMA-ES searches the latent space of a pre-trained autoencoder for images minimizing the loss function shown in Equation 2 The candidate solutions suggested by CMA-ES are then transferred from latent space into image space using the decoder part of the aforementioned autoencoder. Subsequently, the CLIP cosine similarity is computed between each decoded candidate image for all to-be-optimized captions. Of these per-candidate scores, the lowest score obtained, corresponding to the caption that matches the candidate _worst_, is returned as a loss for the candidate. The returned candidate losses are then used by CMA-ES to suggest new candidates. The block diagram in Fig. 2 illustrates the process and Algorithm 1 describes it in greater detail as pseudocode. While the work most related to our approach [23, 4] utilizes evolution strategies and these strategies have proven themselves highly robust with respect to loss function and hyperparameters, given the loss we optimize is semi-differentiable, the proposed fooling examples can be mined using a gradient-based approach as well. We do so for comparison by presenting a number of randomly initialized latent vectors to the image decoder \(\mathcal{D}_{\theta_{2}}\), and iteratively updating these vectors by minimizing their losses as per Equation 2 using Adam [14]. The CLIP model used in the experiments in this paper is a pre-trained _ViT-L/14_ model; the decoder of StableDiffusion V1 [29] is used to translate the optimized latent rep resentation into image space. Note that we do not apply any diffusion in this process, the autoencoder is in principle exchangeable with any other strong autoencoder. ## 4 Results ### Experimental Setup **Generating fooling master images.** We test our approach to finding master images for both fooling CLIP on famous artworks and on ImageNet [30] classes. For the artworks, we train a fooling master image to obtain a high matching score on the _ViT-L/14@336px_ CLIP model [28] for 10 different text prompts, consisting of the titles of famous artworks and their authors. Famous artworks and their corresponding titles and artists were chosen for their familiarity: On the one hand, due to being widely known and therefore likely in the training data of the model, this approach ensures that CLIP scores between corresponding artwork-title pairs will be easily matched to each other, resulting in high cosine similarities obtained from the model for matching pairs. On the other hand, due to the uniqueness and distinctiveness of most images in both motive and style, it is unlikely that any two artworks will be confused by the model, resulting in low cosine similarities for image-text pairs that do not match. Following the approach described in Section 3, we search the latent space of the stable diffusion autoencoder [29] for fooling master images using CMA-ES for 18000 iterations. In order to do so, we flatten its 4 feature maps into a vector. Since images are encoded in this latent space with a downsampling factor of 8, given that we generate images of 512 \(\times\) 512 pixels in size, this results in a \(\frac{512}{8}\cdot\frac{512}{8}\cdot 4=16384\) dimensional space to be searched by CMA-ES. We initialize CMA-ES with a random vector sampled from a zero-mean unit-variance Gaussian distribution and choose \(\sigma=1\) as an initial sampling variance. To determine CMA-ES population size, we apply the heuristic suggested in [12], using \(4+3\cdot log(num\_features)=4+3\cdot log(16384)\approx 33\) candidates per iteration. 
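A compact version of this search loop might look as follows. The sketch uses the open-source `cma` package as one possible CMA-ES implementation (an assumption on our part, since the paper does not name a library); `decoder` denotes the autoencoder's decoder, `loss_fn` the objective of Equation 2, and the latent shape corresponds to the 512 by 512 artwork setup, i.e. \(4\times 64\times 64=16384\) entries.

```python
import math
import cma
import numpy as np
import torch

def mine_masterprint(decoder, loss_fn, latent_dim=16384, sigma0=1.0, iters=18000):
    # population-size heuristic 4 + 3*ln(dim), as in the setup above (about 33 here)
    popsize = int(4 + 3 * math.log(latent_dim))
    es = cma.CMAEvolutionStrategy(np.random.randn(latent_dim), sigma0,
                                  {'popsize': popsize})
    for _ in range(iters):
        candidates = es.ask()                       # flattened latent vectors
        losses = []
        with torch.no_grad():
            for z in candidates:
                z = torch.tensor(z, dtype=torch.float32).reshape(1, 4, 64, 64)
                image = decoder(z)                  # latent -> image space
                losses.append(float(loss_fn(image)))
        es.tell(candidates, losses)                 # feed losses back to CMA-ES
    best = torch.tensor(es.result.xbest, dtype=torch.float32).reshape(1, 4, 64, 64)
    return decoder(best)
```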
For generating fooling master images for ImageNet classes, we mine a fooling master image for 25, 50, 75 and 100 randomly selected ImageNet classes for \(50000\) iterations respectively. The remaining CMA-ES parameters are set the same as in the previous experiment, except that, to speed up convergence, smaller images with a resolution 224 by 224 pixels were generated, which corresponds to a \(\frac{224}{8}\cdot\frac{224}{8}\cdot 4=3136\) dimensional search space and resulting population size of \(4+3\cdot log(3136)\approx 28\) candidates per iteration. Taking into account the smaller image size, the _ViT-L/14_ model [28] with a matching input resolution of \(224\) by \(224\) pixels is chosen here. On ImageNet classes, we also evaluate a gradient-based approach to mine CLIPMasterPrints for 25 ImageNet classes: we sample \(14\) candidate noise vectors of dimension \(\frac{224}{8}\cdot\frac{224}{8}\cdot 4\) from the standard uniform distribution, and optimize all candidates in parallel through the sum of their obtained losses using the Adam optimizer [14] (\(\beta_{1}=0.9\) Figure 2: Optimization pipeline: CMA-ES is used to generate image candidates in the latent space of a pre-trained autoencoder. The generated latent vector is passed through the autoencoder’s decoder and scored w.r.t. how well it fits to the caption using CLIP. The returned cosine similarity is thereafter fed back to CMA-ES. \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\)). We apply a learning rate \(5\cdot 10^{-}5\) for 50000 iterations. In terms of initialisation, we find that sampling from the standard uniform distribution yields better results in comparison to sampling from the standard normal distribution. The number of examples mined in parallel here is limited by the video RAM of our NVIDIA GeForce RTX 6000 GPU. **Mitigation by training.** We explore two possible approaches to mitigate the _ViT-L/14_ model's vulnerability to fooling master images. First, we refine the model on the ImageNet train set, where we add for every batch presented to the network, both a random noise image as well as a fooling example. Both the noise image and the fooling example get labeled with a special _\(<\)off-manifold\(>\)_ in order to have the model bind off-manifold inputs to that token rather than any valid ImageNet label. At every forward step of the model, we generate a new random noise image by presenting zero-mean unit-variance Gaussian Noise to the decoder part of our generating autoencoder. The fooling example on the other hand is generated by running CMA-ES in the loop with the training process. We start out with the best-found previous solution for our fooling hyperparameters and run one iteration of CMA-ES for every forward step to update the fooling example to the changed training weights of the model. This setup creates a similar optimization process as found in GANs where both models attempt to outperform each other. We refine the model for 1 epoch using Adam at a learning rate of \(10^{-7}\) and a batch size of \(20\). We regularize the model with a weight decay of \(\lambda=0.2\) and set Adam momentum parameters as described in [28]: \(\beta_{1}=0.9\), \(\beta_{2}=0.98\), \(\epsilon=10^{-6}\). Furthermore, we utilize mixed-precision training [21]. Hyperparameters for CMA-ES are identical to the ones used to mine the original fooling image. Finally, after refining the model, we mine a new fooling example from scratch for the updated model. 
We do so to test the model's robustness not only to the original fooling images, but fooling images in general. **Mitigation by bridging the modality gap** As a second mitigation approach, we attempt to bridge the modality gap of the _ViT-L/14_ model by shifting the centroids of image and text embeddings as suggested in [19]. In more detail, the authors decrease the gap between image and text vectors by moving them toward each other along a so-called gap vector \[\Delta_{\text{gap}}=\bar{\mathbf{x}}-\bar{\mathbf{y}}, \tag{3}\] where \(\bar{\mathbf{x}}\) and \(\bar{\mathbf{y}}\) are the centroids of image and text embeddings respectively. We extract \(\bar{\mathbf{x}}\) and \(\bar{\mathbf{y}}\) for the ImageNet training data and labels. We attempt to bridge the model's modality gap by computing \[\mathbf{x_{i}}^{\prime}=\mathbf{x_{i}}-\lambda\Delta_{\text{gap}} \tag{4}\] and \[\mathbf{y_{i}}^{\prime}=\mathbf{y_{i}}+\lambda\Delta_{\text{gap}}, \tag{5}\] as shifted image and text embeddings respectively. \(\lambda~{}=~{}0.25\) is a hyperparameter chosen such that the model retains it's original accuracy as much as possible while bridging the gap. ### Performance of fooling master images Fig. 1 shows the cosine similarities between titles and artists of famous artwork and the actual artwork as well as a baseline image and our generated fooling master image (denoted by red frame). All artworks are assigned their correct titles by the CLIP model: artworks and their respective titles exhibit a significantly higher cosine similarity (of about \(0.3\)) than randomly paired titles and paintings. Our baseline created by passing random noise through the stable diffusion decoder (second image from the right) exhibits scores between 0.13 and 0.20 for all title-captions, but interestingly usually shows higher scores compared to artworks with mismatched captions. Our trained fooling master example yields cosine similarities \(>0.33\) and consequentially outperforms for each title-caption the original artwork. It is interesting that despite the fact that our trained master example outperforms all selected artworks, it still Figure 3: Cosine similarity of trained fooling image for 25 optimized classes as well as similarities for ImageNet validation set of the same classes respectively. With a few exceptions, a single mined CLIPMasterPrint fooling image outperforms all images in terms of CLIP score for the targeted text labels. _Note that the same fooling image is used for all class label categories._ matches moderately well (0.24) to the caption 'A random noise image'. Our single master example achieves a higher CLIP model score than all actual artworks and would be chosen over these images when prompting the model to identify any of the targeted artworks next to the fooling example. Our obtained results for ImageNet labels are of similar quality. Fig. 3 shows the CLIP-returned cosine similarities of the fooling master image trained on 25 ImageNet labels as a point plot for both evolutionary (CMA) and gradient-based (SGD) approaches. The cosine similarities of the images of the respective labels found in the ImageNet validation set have been added for reference. For the majority of the classes, (with an exception for classes _indigo hunting_, _Bernese mountain dog_, _rhinoceros beetle_, _Christmas stocking_ and _gyromitra_), the CMA fooling image outperforms the entirety of the actual images of the class. 
The SGD image on the other hand scores slightly lower on most classes, but still manages to outscore all processed images for most of the targeted classes. These results demonstrate that CLIP models _ViT-L14_ and _ViT-L14@336px_ can be successfully fooled on a wider range of classes, using only a single image. When comparing both CLIPMasterPrints mined on ImageNet captions in terms of optimization approach, one can see from Fig. 3, that the image mined by means of CMA-ES delivers slightly better and more consistent scores for the majority of classes. We further benchmark the performance of both images over all classes, by introducing the percentage of outperformed images (POI), i.e. the percentage of images in targeted classes in the validation set with a lower CLIP score than the fooling image, for both fooling images. Here again we find that, with a POI of 91.36 %, the SGD+Adam solution performs slightly worse than our original CMA-ES approach, which attains 97.9 %. SGD+Adam is approximately 8 percent faster than CMA-ES (7h 9m vs. 7h 26m), but requires significantly more video RAM (42 GiB vs 25 GiB) to obtain (somewhat) on-par results. Since the evolutionary approach works slightly better and delivers somewhat more consistent results over all classes using default hyperparameters, all subsequently discussed fooling examples in this work are mined using CMA-ES. ### Generalization to semantically related prompts and labels To investigate whether the mined fooling master image generalizes also to unoptimized classes, i.e. the remaining 975 classes of ImageNet, we also visualized the estimated distributions of CLIP similarity scores per class for both optimized and unoptimized classes (Fig. 4). While the distribution of cosine similarities over all classes in the ImageNet validation set (left) is long-tailed, presumably due to a few mislabeled images or very hard examples, the distribution for scores of the trained fooling image for targeted classes is confined to a small interval around \(0.30\), which is also the score achieved on targeted labels as seen in Fig. 3. Considering the distribution of scores for the fooling image on all 975 not targeted classes, we see that while the distribution is long-tailed as well, most values seem to be confined to the range between \(0.2\) and \(0.3\), with a mean around approx 0.27. This result indicates that there seems to exist a certain generalization effect: the fooling images may achieve moderate to high scores on untargeted class labels, which are semantically related to labels it has been targeting. A potential explanation is that the classes of the ImageNet dataset have been derived as a subset from tree-like structures in WordNet [22], where many class labels are also member of a common superclass of semantically related objects, such as _animals_, _household appliances_ etc. ### Performance as number of targeted prompts increases Having demonstrated that CLIP models are vulnerable to fooling master images and that fooling effects appear to generalize to a degree, we investigate how the average cosine similarity on targeted classes deteriorates, as the number to of targeted class labels increases. We consider our mined fooling master images targeting 50, 75 and 100 randomly sampled ImageNet classes. Fig. 5 shows the average CLIP score over all targeted ImageNet labels versus the total number targeted labels. 
We see that the average score exhibits an initial decrease around 50 classes, after which it slightly rises for 75 and then slightly decreases again for 100 classes. Figure 4: Generalization of fooling example mined to target 25 ImageNet classes. The mined CLIPMasterPrint achieves high CLIP scores even for ImageNet class labels which have not been explicitly targeted. A possible explanation for this observation could be found in the generalization effects observed in Fig. 4: assuming that subsets of the targeted labels or prompts are sufficiently semantically related, due to the generalization of the fooling example, the achieved average score remains robust, even if more related labels are added. ### Analysis of information distribution in fooling master images In order to understand how information is distributed in our found fooling master examples, we create occlusion maps [36, 32] of the fooling master example trained on the titles of famous artworks. We smoothen a \(75\times 75\) pixel-sized rectangle of the image using a large (\(\sigma=75\)) Gaussian blur kernel. As we blur \(75\times 75\) rectangles of the mined fooling master image in a sliding-window manner at a 2 pixel stride, we measure the change in cosine similarity as returned by the _ViT-L/14@336px_ model. As a reference, we perform the same procedure on a number of artworks the fooling image is intended to mimic. Fig. 6 shows our generated occlusion maps as an overlay to the source image converted to grayscale. While increases in cosine similarities when blurring out a certain part of the image are denoted in red, decreases are shown in blue. Comparing the occlusion maps of the actual artworks with that of the fooling example, one can notice that blurring any part of the fooling master image results in a significant decrease (between \(0.1\) and \(0.15\)) of the resulting cosine similarity of the model. The individual increases and decreases for actual artworks, on the other hand, are much more moderate and vary based on the location in the image. Given that this is the case for all prompts the image has been targeting, we conclude that the relevant information for all optimized prompts is spread throughout the image. Since the procedure of mining fooling examples can be viewed as searching for latent vectors with the maximum possible alignment to the latent vectors of all optimized captions, we find this result to be intuitive: while CLIP has likely learned to deal with blurring and occlusions on the image manifold due to a large variety of training data, blurring parts of the image off-manifold likely results in a significant misalignment of the resulting vector in relation to the text vectors it has been targeting. For the (not targeted) _Random noise image_ prompt, blurring parts of the image results in significantly less pronounced changes in model output score. Interestingly, while for the true noise image the regions in the center of the image seem to be the most critical, for the fooling example the opposite is the case: blurring out the center of the image results in a score increase for the _Random noise image_ prompt: while these portions of the fooling image look very similar to the noise image, the model seems to consider them salient information. When considering the random noise image for all prompts, on the other hand, we find that whether blurring parts of the image increases or decreases the score depends on the image prompt. 
A possible explanation could be that for captions of images with strong textures, such as Munchs _The Scream_ or van Goghs _Starry Night_, a more smoother image results in a lower score while for the very clean and precise _Girl with a pearl earring_ smoothing the noise image increases the model output. Finally, when considering the occlusion maps of the artworks themselves, we can make a few more interesting observations: Upon blurring the characteristic face on Munchs _The Scream_, the models score for the prompt _"The starry night" by Vincent Van Gogh_ increases. A possible explanation could again be the model being sensitive to the texture of the river in _The Scream_ being rather similar to the night sky _A Starry Night_. Another interesting observation can be made when considering the occlusion map for _Girl with a pearl earring_, given the according prompt: the most critical part of the painting seems to be the face of the girl, rather than the pearl earring or her distinctive headdress. We conclude that information from fooling examples resulting in high CLIP confidence scores is spread throughout the image for all captions, and is quite sensitive to occlusions and cropping. The observed occlusion maps are rather distinctive from the ones obtained for the random noise image, since whether the score increases or decreases upon Figure 5: Average cosine similarity between ImageNet class captions and fooling image as a function of the number of optimized classes. Average similarity score between captions and images in the ImageNet validation set labelled with optimized class labels for comparison. Beyond 75 targeted classes, the score remains stable. A possible explanation could be CLIPMasterPrints generalizing semantically related non-targeted labels. Figure 6: Occlusion maps for famous artworks, random noise baseline, and mined fooling master image for various prompts. Note that while each row shows the same fooling master images, occlusion maps vary for different prompts. Information in the fooling master image is distributed over the whole image for all targeted prompts. There are no individual regions in the image that can be mapped to a particular prompt. blurring is not dependent on the prompt or the texture of the image. Finally, while it is in general Convolutional Neural Networks (CNNs) [17] who are well-known to be susceptible to image texture [7], it seems like similar effects can be at least to a minor degree observed for visual transformers and in particular CLIP as well. ### Mitigation Strategies Fig. 7 shows the CLIP scores of our refined model, which has been trained to align off-manifold vectors to a special token, in order to mitigate the model's vulnerability to fooling master examples. Shown are the average CLIP score on the ImageNet validation set, the CLIP score for the original fooling example, the score for a fooling example trained after refinement, as well as the score of a random noise image for each targeted ImageNet label respectively. Due to the newly introduced \(<\)_off-manifold\(>\)_-token, both noise and the original fooling examples are suppressed by the model and score significantly lower as the mean label score on the ImageNet validation set. The newly mined fooling example on the other hand has not been suppressed at all by the refined model and exhibits scores similar to the ImageNet mean for all labels. 
The results suggest that our mitigation strategy is sufficient to mitigate existing fooling examples, yet fails to be effective as new fooling examples are mined from the updated model. Shifting centroids of image and text embeddings along a computed gap vector on the other hand (see Equations 3, 4 and 5), appears to be an effective countermeasure against newly mined CLIPMasterPrints while preserving CLIP performance. Table 1 shows the percentage of outperformed images (POI) for CLIPMasterPrints mined both with and without shifting embeddings in the model. One can see that not only fooling examples mined on the regular model (Rows 1 and 2) do not work anymore on the model with shifted embeddings (the POI drops dramatically), but also newly mined examples from a model with shifted embeddings (Rows 3 and 4) show a drop of more than 50 percentage points from over \(90\%\) to less than \(40\%\) POI. Shifting embeddings therefore can be considered an effective mitigation strategy. These results also support our original hypothesis that CLIP models are vulnerable to CLIPMasterPrints due to the modality gap. ## 5 Discussion and Future Work We find that CLIP models can be successfully fooled on a wide range of diverse captions by mining fooling master examples. In more detail, we applied both latent variable evolution as well as a gradient-based approach successfully to mine images that result in high confidence CLIP scores for a significant number of diverse prompts, image captions or labels. We argue that the modality gap in contrastively pre-trained multimodal networks, that is, the fact that image and text embeddings can only be aligned to a certain degree in CLIP latent space plays a central role with respect to a models vulnerability to the introduced attack. Low cosine similarity scores assigned to well-matching text-image pairs by a vulnerable model imply that off-manifold images, which align better with a larger number of text embeddings, can be found. This relation between a modality gap and a model's \begin{table} \begin{tabular}{l r r} \hline \hline Method & POI, \(\lambda=0\) & POI, \(\lambda=0.25\) \\ \hline CMA-ES & 97.90\% & 1.28\% \\ \hline SGD+Adam & 91.36\% & 3.04\% \\ \hline CMA-ES, & & \\ \(\lambda=0.25\) & 48.49\% & 38.64\% \\ \hline SGD+Adam, & & \\ \(\lambda=0.25\) & 47.85\% & 34.33\% \\ \hline \hline \end{tabular} \end{table} Table 1: Percentage of outperformed images for different optimization approaches – Validation Set Figure 7: CLIP scores for fooling examples mined before and after refinement with off-manifold token. While mapping existing fooling examples to special tokens can mitigate their impact, the model is still vulnerable to fooling images vulnerability to CLIPMasterPrints is also supported by our results. A further noteworthy discovery is that our mined fooling master images seem to generalize not only to the prompts they target, but also to semantically related prompts. A possible explanation is that the encodings of semantically related prompts are close to each other in CLIP latent space, which implies that fooling examples, whose latent vectors are aligned well with the original targeted prompt, may tend to align similarly well with semantically related prompts. Blurring parts of our mined images reveals that the information in fooling examples is distributed throughout the whole image for all targeted prompts rather than locally at different places for each prompt and fooling examples are therefore vulnerable to occlusion and cropping. 
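For reference, the second mitigation strategy, shifting the centroids of image and text embeddings along the gap vector of Equations 3-5, reduces to a few lines over stored embeddings. The sketch below is purely illustrative and assumes precomputed image and text embedding matrices (e.g. extracted from the ImageNet training data); downstream cosine similarities are then computed from the shifted embeddings as before.

```python
import torch

def shift_embeddings(img_emb, txt_emb, lam=0.25):
    """Close the modality gap by moving both modalities along the gap vector.

    img_emb : (N_img, dim) image embeddings x_i
    txt_emb : (N_txt, dim) text embeddings  y_i
    lam     : shift strength lambda (0.25 in the experiments above)
    """
    gap = img_emb.mean(dim=0) - txt_emb.mean(dim=0)    # Eq. (3): x_bar - y_bar
    return img_emb - lam * gap, txt_emb + lam * gap     # Eqs. (4) and (5)
```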
Finally, we attempt to mitigate the effect of fooling master images to the model via two ways, by refining the model to map off-manifold images to a special token first, and closing the gap between centroids of image and text embeddings respectively second. We find that while retraining the model marking existing fooling images is not an effective way to mitigate newly mined fooling examples, shifting embeddings can be considered a more effective mitigation strategy. The corresponding results also provide evidence that the modality gap leaves Contrastive Language-Image Pre-training models vulnerable towards fooling master images. Under that perspective, we argue that modality gaps are undesirable (Liang et al. [19] takes a neutral stance on the topic), and one should attempt to mitigate them. As modality gaps also occur in all contrastively pre-trained multimodal models, the question arises whether all these models are vulnerable to CLIPMasterPrints as well. In summary, we conclude that, further research on effective mitigation strategies is necessary. Further, it seems worthwhile to investigate whether other contrastively pre-trained multimodal beyond CLIP are vulnerable to CLIPMasterPrints as well. We argue that devising effective counter-strategies against the introduced model vulnerability is necessary to ensure this powerful class of models can be applied in a safe and robust way. A possible further path to improved mitigation could be posed by approaches to train CLIP models in a way such that latent embeddings of matching text and images can align more closely, i.e. avoiding a modality gap in the first place, consequentially reducing the attack surface for fooling examples. ## Acknowledgements This work is funded by the VILLUM FONDEN Synergy Grant 00040575 "Algorithmic Ways of Seeing: Improving Image Recognition by Training on Art Images".
2305.15783
Holographic quantum distances and replica trick
This paper gives concrete examples to show how the replica trick can be used to calculate quantum (quasi-)distances holographically. First, we consider the fidelity and relative entropy between thermal states that are dual to Schwarzschild-AdS black holes. Then we generalize our method to RN-AdS black holes by adding a U(1) gauge field. We also investigate the fidelity between states excited by a scalar operator in the probe limit. In this case, it is surprising that the fidelity in the standard quantization suffers from a new UV divergence even though the usual holographic renormalization has been applied; a deeper understanding of this divergence is left for future work. We also propose a holographic method to check whether the density matrices of two holographic states commute.
Zi-Qing Xiao, Run-Qiu Yang
2023-05-25T06:52:29Z
http://arxiv.org/abs/2305.15783v1
# Holographic quantum distances and replica trick ###### Abstract This paper gives concrete examples to exhibit how to use the replica trick to calculate the quantum (quasi-)distances holographically. First, we consider the fidelity and relative entropy between thermal states that are dual to the Schwarzschild-AdS black holes. Then we generalize our method into the RN-AdS black holes by adding a U(1) gauge field. We also investigate the fidelity between states excited by scalar operator in probe limit. In this case, it is surprising that the fidelity in standard quantization will suffer from new UV divergence though the usual holographic renormalization has been applied. We call for deep understanding for such divergence in the future. We also discover a holographic method to check whether the density matrices of two holographic states are commutative. ## 1 Introduction In recent years, quantum information theoretic considerations play an important role in the study of AdS/CFT or more general gauge/gravity duality [1; 2; 3]. For instance, Ryu-Takayanagi (RT) formula [4; 5] quantifies the entanglement entropy of the boundary conformal field theories (CFTs), which is given by the area of a codimension-2 minimal surface in the dual bulk spacetime. Under some reasonable assumptions, RT formula can be obtained from the more general Renyi entropy [6; 7; 8; 9] \[S_{n}\equiv\frac{1}{1-n}\ln\frac{\mathrm{tr}\rho^{n}}{(\mathrm{tr}\rho)^{n}}\] by applying the _replica trick_. Here the index \(n\) labels the number of the replicated boundary CFTs and \(\rho\) is the density matrix of the entangling region. The \(n\) is originally assumed to be a positive integer value. In [8], Lewkowycz and Maldacena argued that, for integer index \(n>1\), the CFTs on a branched cover1\(\mathcal{M}_{n}\) over the entangling region still have corresponding interior gravity duals \({\cal B}_{n}\) like the original Euclidean spacetime \({\cal B}_{1}\) does. In the gravity side, there exists a solution \(B_{n}\) that satisfies the Einstein equation along with boundary condition \(\partial{\cal B}_{n}={\cal M}_{n}\). Based on the holographic duality, one should expect that the partition functions of two sides' theories are equated each other \[Z[{\cal M}_{n}]=Z[{\cal B}_{n}]\,, \tag{1}\] where \(Z[{\cal M}_{n}]={\rm tr}\rho^{n}\). Note that we are working in the large \(N\) limit such that the bulk geometry can be taken as a solution of Einstein gravity. Under the saddle point approximation, the partition function of bulk geometry can be written as the exponential function of corresponding Einstein-Hilbert action \[Z[{\cal M}_{n}]=Z[{\cal B}_{n}]\approx\exp(-I_{\rm bulk}[{\cal B}_{n}])\,. \tag{2}\] Following Renyi entropy's definition, the direct calculation \[S_{n}=\frac{1}{1-n}\ln\frac{{\rm tr}\rho^{n}}{({\rm tr}\rho)^{n}}=\frac{1}{n-1 }\left(I_{\rm bulk}[{\cal B}_{n}]-nI_{\rm bulk}[{\cal B}_{1}]\right)\] indicates that Renyi entropy is proportional to the gravitational action of corresponding bulk geometry. If it is possible to analytically continue the Renyi entropy \(S_{n}\) to non-integer2\(n\), the von Neumann entropy that is usually referred as the entanglement entropy in quantum information theory can be recovered from \(S_{n}\) at the \(n\to 1\) limit. Footnote 2: The Carlson’s theorem shows that such analytical continuation is unique if it exists. The entanglement entropy concerns about the entanglement property of only one quantum state but not involves the relationship between two states. 
There are other quantities in quantum information [12], which measures how close two quantum states are. In other words, these quantities label some kind of "distance" between quantum states. For example, given quantum states \(\rho\) and \(\sigma\), the _trace distance_ is defined as \[D[\rho,\sigma]\equiv\frac{1}{2}{\rm tr}\left|\rho-\sigma\right|\,, \tag{3}\] where \({\rm tr}|A|\equiv\sum_{i}\lambda_{i}\) and \(\lambda_{i}\) is the eigenvalue of \(\sqrt{A^{\dagger}A}\). Introducing a positive number \(n\), \(D[\rho,\sigma]\) can be generalized into \(D_{n}[\rho,\sigma]\) \[D_{n}[\rho,\sigma]\equiv\frac{1}{2^{1/n}}\sqrt[n]{\mbox{tr}\left|\rho-\sigma \right|^{n}}\,. \tag{4}\] For \(n=1\), the above formula is just the trace distance. There is other special choice that is widely applied. By choosing \(n=2\), _Hilbert-Schmidt distance_ can be obtained, which is most convenient \(D_{n}\) for calculations. The second families of quantum distance are based on the _fidelity_ \[{\rm Fi}[\rho,\sigma]\equiv{\rm tr}\sqrt{\rho^{\frac{1}{2}}\sigma\rho^{\frac{ 1}{2}}}\,. \tag{5}\] The fidelity is not a distance but it can be used to define other measures of distance: the_Fubini-Study distance_\(D_{F}[\rho,\sigma]\) and the _Bures distance_\(D_{B}[\rho,\sigma]\) \[D_{F}[\rho,\sigma]\equiv\arccos{\rm Fi}[\rho,\sigma],\quad D_{B}[\rho,\sigma] \equiv\sqrt{1-{\rm Fi}[\rho,\sigma]^{2}}\,. \tag{6}\] Due to the well-defined calculations in finite dimensional Hilbert spaces and the preservation under unitary transformations, trace distance and fidelity are widely used within the quantum information community. However, their analytical calculations are extremely difficult in quantum field theory. Ref. [13] makes the first step towards this issue. By virtue of the certain correlation functions for the vacuum or the thermal state, author of Ref. [13] develop a replica trick to calculate the fidelity for \((1+1)\) dimensional CFTs. Using similar method, Refs. [14; 15; 16] also developed replica method to compute the trace distance for a class of special states for single short interval in 1+1 dimensional CFTs. Recently, Ref. [17] proposes how to use replica trick to compute Renyi mutual information in 1+1 CFTs. Though in higher dimensions the replica trick is still effective, the correlation functions of interacting theories are highly nontrivial to compute. Similar to holographic descriptions of entanglement, one would wish there exist a holographic duality to calculate trace distance and fidelity. Actually Ref. [18] argues that the fidelity between a state and its infinitesimal perturbation state is approximately given by a volume of maximal time slice in AdS spacetime. However, when the difference between two states is not infinitesimal, the gravity dual of fidelity given by Ref. [18] is not reliable. In addition, the holographic proof on the proposal of Ref. [18] is absent. See [19; 20; 21; 22; 23] for recent progress related to replicas in quantum information under the holographic setup. Although it seems quite difficult to solve this problem completely, this paper uses some concrete examples to exhibit how to calculate quantum distances in holography. As a basic perspective in holographic picture, Maldacena argues that the thermal states in the boundary CFT correspond to Schwarzschild-AdS black holes. Firstly, similar to the deduction of Renyi entropy by employing the replica trick, we analytically calculate the fidelity and relative entropy between thermal states in holography. 
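As a point of reference for these definitions, all of the above quantities can be evaluated directly for finite-dimensional density matrices. The following sketch is ours and purely illustrative; it uses SciPy's matrix square root, and the two-level thermal example at the end mirrors the thermal states considered below.

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    # D[rho, sigma] = (1/2) tr |rho - sigma|, Eq. (3)
    eigs = np.linalg.eigvalsh(rho - sigma)          # rho - sigma is Hermitian
    return 0.5 * np.sum(np.abs(eigs))

def fidelity(rho, sigma):
    # Fi[rho, sigma] = tr sqrt( sqrt(rho) sigma sqrt(rho) ), Eq. (5)
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))))

def bures_distance(rho, sigma):
    # D_B = sqrt(1 - Fi^2), Eq. (6)
    return np.sqrt(max(0.0, 1.0 - fidelity(rho, sigma) ** 2))

# two-level "thermal states" rho ~ exp(-beta H) with H = diag(0, 1)
def thermal(beta):
    w = np.exp(-beta * np.array([0.0, 1.0]))
    return np.diag(w / w.sum())

print(fidelity(thermal(1.0), thermal(2.0)))         # close to, but below, 1
```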
Next, we want to check our method in the presence of bulk matters. Here we consider two concrete examples. In the first one we consider that there is a conserved charge in the boundary theory so the bulk theory is the Einstein-Maxwell theory. In the second example, an Einstein-scalar theory is constructed here to gain the insight into the fidelity between states excited by scalar operator. In probe limit, we derive the analytic expression of fidelity in this case. Surprisingly, we find that the fidelity of probe limit is well defined only in the alternative quantization. The fidelity in standard quantization will suffer from new UV divergency though the usual holographic renormalization has been applied. To the best of our knowledge, such kind of divergency has not been discussed well in the study of holography, and so we call for deep understanding in the future. In a physical system described by quantum mechanics, quantum states are generally the superposition of several eigenstates under a observable Hermitian operator. It is more troublesome but feasible to check the commutativity of two quantum mechanics states. As we claimed before, quantum states in infinite dimensional Hilbert spaces are more tricky. Without more assumptions, it is difficult to directly write the density matrix for a general quantum state in quantum field theory. So it is nontrivial question to determine the commutativity of two quantum states in quantum field theory. At the end of this paper, we propose a method to answer this question partially if two quantum states are holographic. We will see whether two holographic states are commutative or not is related the contribution of corresponding bulk theory's action. The organization of this paper is as follows. In section 2, we consider the fidelity and relative entropy between thermal states with different temperatures. In section 3, under the same procedure, we generalize the above method into the grand canonical ensemble, where we consider the fidelity and relative entropy of the states which has different chemical potentials. In section 4, we construct an Einstein-scalar theory in asymptotically AdS spacetime which is dual to states excited by scalar operator, and derive the semi-analytic formula of fidelity in the probe limit. Particularly, in the standard quantization, we show that the fidelity will suffer from a kind of new UV divergence even though we have added usual counterterms into the action. We will also show that such kind of new divergence will not appear in alternative quantization. In the section 5, we will discuss how to check the commutativity of two density matrices holographically. ## 2 Holographic quantum distance between thermal states In this section, we show how to holographically get the quantum distances between thermal states. In addition, we will explain the main idea about how to use replica trick to holographically compute the expression such as \(\mathrm{tr}(\rho_{1}\rho_{2}\cdots\rho_{n})\) for a series of given holographic states \(\{\rho_{1},\rho_{2},\cdots,\rho_{n}\}\). Our starting point is that thermal states in CFT\({}_{d}\) are dual to Schwarzschild-AdS\({}_{d+1}\) black hole [24]. Let us start from the properties of the thermal state. Thermal state can be used to describe the quantum system which is in contact with a heat bath at temperature \(T=1/\beta\). In the canonical ensemble at temperature \(T\), the density matrix of thermal state reads \[\rho=\frac{e^{-\beta H}}{Z(\beta)}\,. 
\tag{1}\] Here \(H\) is the Hamiltonian of this system and \(Z(\beta)=\mathrm{tr}e^{-\beta H}\) is the partition function. To illustrate how use replica trick to compute the quantum distance here, we will consider two thermal systems, of which the density matrices are \[\rho_{1}=\frac{e^{-\beta_{1}H}}{Z(\beta_{1})}\,,\ \rho_{2}=\frac{e^{-\beta_{2}H} }{Z(\beta_{2})}\,. \tag{2}\] In following we will first introduce how to compute the trace of product \(\rho_{1}\rho_{2}\) holographically, \[\rho_{1}\rho_{2}=\frac{e^{-(\beta_{1}+\beta_{2})H}}{Z(\beta_{1})Z(\beta_{2})}\,. \tag{3}\] This is product will play the essential role in our computations of quantum distances later on. Note that for general two different density matrix \(\rho_{1,2}\), we can not guarantee their product \(\rho_{1}\rho_{2}\) is hermitian. Then, after normalized, the \(\rho_{1}\rho_{2}/\mathrm{tr}(\rho_{1}\rho_{2})\) is not a density matrix unless satisfying the condition \([\rho_{1},\rho_{2}]=0\). However, for thermal state, \(\rho_{1,2}\) commute with each other since they own the same Hamiltonian \(H\). The main argument here is that can be regarded as an _un-normalized_ density matrix of a new thermal state with the inverse temperature \(\tilde{\beta}=\beta_{1}+\beta_{2}\), as depicted in Fig. 1. Note that \(\rho_{1}\rho_{2}\) is not properly normalized. From the bulk perspective, the gravity dual of \(\rho_{1}\rho_{2}\) would be a Schwarzschild-AdS black hole with new inverse temperature \(\tilde{\beta}\). See appendix A for the convention we use here. According to the relation (10) between temperature \(T\) and horizon radius \(r_{h}\), this new black hole's horizon radius \(r_{h}\) can be in principle solved. For convenience, this paper will focus on planar black hole (black brane). For planar Schwarzschild-AdS black hole denoted by \(k=0\), \[r_{h}=\frac{4\pi L^{2}}{d\beta}\,,\beta=\frac{1}{T}\,. \tag{4}\] Then we can get the new horizon radius \(\tilde{r}_{h}\) through \(r_{1,2}\), that satisfies \(r_{1,2}=4\pi L^{2}/d\beta_{1,2}\) \[\tilde{r}_{h}=\frac{4\pi L^{2}}{d\tilde{\beta}}=\frac{4\pi L^{2}}{d(\beta_{1}+ \beta_{2})}=\frac{r_{1}r_{2}}{r_{1}+r_{2}}\,. \tag{5}\] Based on the above calculation, we conclude that the gravity dual of \(\rho_{1}\rho_{2}\) is Schwarzschild-AdS black hole with smaller horizon radius \(\tilde{r}_{h}\). The product between thermal states gives us more "smaller" black hole as depicted in Fig. 2. [FI Figure 1: The schematic diagram about the replica trick between thermal states. Here we consider \(\rho_{1}\propto e^{-2\beta H}\) and \(\rho_{2}\propto e^{-\beta H}\). Then, the inverse temperature of replicate thermal state is \(3\beta\). After a long journey, we are ready to show how to calculate \(\text{tr}(\rho_{1}\rho_{2})\) in Einstein-Hilbert gravity. From the basic holographic dictionary in Ref. [24], we have the relation \[Z_{\text{CFT}}[\beta]=Z_{\text{bulk}}[\beta]\,, \tag{6}\] that is a most simplest nontrival example of relation (1). Recalling the relation (2), here we have \[Z[\beta]=\text{tr}e^{-\beta H}=e^{-I[\beta]}\,. \tag{7}\] Ref. [25] shows that, the full Euclidean gravitational action \(I[\beta]\) in \(d+1\) spacetime dimensions should have three contributions \[I[\beta]=I_{\text{bulk}}+I_{\text{surf}}+I_{\text{c.t}}\,. \tag{8}\] Here \[I_{\text{bulk}}+I_{\text{surf}}=-\frac{1}{16\pi G}\int_{\mathcal{B}}d^{d+1}x \sqrt{g}\left[\mathcal{R}+\frac{d(d-1)}{L^{2}}\right]-\frac{1}{8\pi G}\int_{ \partial\mathcal{B}}d^{d}x\sqrt{h}K\,. 
\tag{9}\] The first term is the Einstein-Hilbert action with cosmological constant \(-d(d-1)/2L^{2}\). The second term is called Gibbons-Hawking boundary term that guarantees the well-defined variational principle. The counter term \(I_{\text{c.t}}\) is added to regulate the infinity from the classical action. See Ref. [26; 27; 28; 29] for details. Here we consider Schwarzschild-AdS planar black hole with inverse temperature \(\beta\). The regulated action \(I[\beta]\) is given by \[I[\beta]=-\frac{\beta\Omega_{k=0,d-1}}{16\pi GL^{2}}r_{h}^{d}=-\frac{4^{d-2} \pi^{d-1}L^{2d-2}\Omega_{k=0,d-1}}{d^{d}G}\frac{1}{\beta^{d-1}}=-\frac{\Omega _{k=0,d-1}r_{h}^{d-1}}{4Gd} \tag{10}\] Under such holographic description and the saddle point approximation, the trace of \(\rho_{1}\rho_{2}\) can be expressed in geometric quantities \[\begin{split}\text{tr}(\rho_{1}\rho_{2})=\frac{Z(\beta_{1}+\beta_{2 })}{Z(\beta_{1})Z(\beta_{2})}&=\exp\left(I[\beta_{1}]+I[\beta_{2}] -I[\beta_{1}+\beta_{2}]\right)\\ &=\exp\left[\frac{\Omega_{k=0,d-1}(\tilde{r}_{h}^{d-1}-r_{1}^{d- 1}-r_{2}^{d-1})}{4Gd}\right]\end{split} \tag{11}\] Above calculation can be generalized into more general case such as \(\text{tr}(\rho_{1}\rho_{2}\rho_{3}\cdots)\). \[\begin{split}\text{tr}(\rho_{1}\rho_{2}\rho_{3}\cdots)& =\frac{Z(\beta_{1}+\beta_{2}+\beta_{3}+\cdots)}{Z(\beta_{1})Z( \beta_{2})Z(\beta_{3})\cdots}=\exp\left(I[\beta_{1}]+I[\beta_{2}]+I[\beta_{3} ]\cdots-I[\beta_{1}+\beta_{2}+\beta_{3}\cdots]\right)\\ &=\exp\left[\frac{\Omega_{k=0,d-1}(\tilde{r}_{h}^{d-1}-r_{1}^{d-1} -r_{2}^{d-1}-r_{3}^{d-1}-\cdots)}{4Gd}\right]\end{split} \tag{12}\] Here \(\tilde{r}_{h}=1/(r_{1}^{-1}+r_{2}^{-1}+r_{3}^{-1}+\cdots)\). This gives us a formula to compute the trace of \(\rho_{1}\ldots\rho_{n}\) from holography. To check our result, one can calculate the Renyi entropy for thermal state \(\rho\) by taking \(n\) copies. According to the definition of Renyi entropy, we can obtain that \[\begin{split} S_{n}=\frac{1}{1-n}\ln\frac{\text{tr}\rho^{n}}{( \text{tr}\rho)^{n}}&=\frac{1}{n-1}\left(I_{\text{bulk}}[\mathcal{ B}_{n}]-nI_{\text{bulk}}[\mathcal{B}_{1}]\right)\\ &=\frac{\Omega_{k=0,d-1}r_{h}^{d-1}}{4G}\frac{n^{d}-1}{d(n^{d}-n^ {d-1})}\,.\end{split} \tag{13}\] Following the spirit of the replica trick, then we analytically continue \(n\) into unit one \[S=\lim_{n\to 1}S_{n}=\frac{\Omega_{k=0,d-1}r_{h}^{d-1}}{4G}\,. \tag{14}\] We can see that the Bekenstein-Hawking entropy formula can be precisely recovered. This is an check of self-consistence on our method. Taken the lessons we learned above, we will consider various quantum distances between thermal states in the remaining content. More specifically, we will employ the method of constructing Renyi entropy, i.e. _replica trick_, to calculate these quantum distances. ### Fidelity Given two thermal states \(\rho\) and \(\sigma\) in the same Hilbert space, we now is ready to calculate the fidelity \(\text{Fi}(\rho,\sigma)=\)tr\(\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\) in holography. Since the dimension of Hilbert spacetime is infinity in the field theory, the fidelity of two different states will almost vanish and it will be more useful to consider the logarithmic fidelity \(\ln\text{Fi}(\rho,\sigma)\). Since the square root is involved into the trace, we cannot directly compute the fidelity by using integer-order replica shown in above example. 
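Before turning to the analytic continuation, the integer-order building blocks (11)–(13) can be evaluated directly. The following minimal sketch (Python, in code units \(G=L=\Omega_{k=0,d-1}=1\), a normalization chosen here only for illustration and not fixed by the text) reproduces \(\ln\mathrm{tr}(\rho_{1}\cdots\rho_{n})\) from the on-shell actions and checks that the Rényi entropies approach the Bekenstein–Hawking value (14) as \(n\to 1\).

```python
import numpy as np

# Minimal numerical sketch of Eqs. (11)-(14) for the planar Schwarzschild-AdS
# black hole, in code units G = L = Omega_{k=0,d-1} = 1 (an arbitrary choice).

def horizon_radius(beta, d, L=1.0):
    """Planar horizon radius, Eq. (4): r_h = 4 pi L^2 / (d beta)."""
    return 4.0 * np.pi * L**2 / (d * beta)

def on_shell_action(beta, d, G=1.0, L=1.0, omega=1.0):
    """Regulated Euclidean action, Eq. (10): I = -Omega r_h^{d-1} / (4 G d)."""
    return -omega * horizon_radius(beta, d, L)**(d - 1) / (4.0 * G * d)

def log_trace_product(betas, d):
    """ln tr(rho_1 ... rho_n) from Eq. (12)."""
    return sum(on_shell_action(b, d) for b in betas) - on_shell_action(sum(betas), d)

def renyi_entropy(beta, d, n):
    """S_n of a single thermal state, Eq. (13): (n I[beta] - I[n beta]) / (1 - n)."""
    return (n * on_shell_action(beta, d) - on_shell_action(n * beta, d)) / (1.0 - n)

if __name__ == "__main__":
    d, beta = 3, 1.0
    print("ln tr(rho1 rho2) =", log_trace_product([1.0, 2.0], d))
    # Renyi entropies approach the Bekenstein-Hawking value as n -> 1, Eq. (14)
    s_bh = horizon_radius(beta, d)**(d - 1) / 4.0
    for n in (2.0, 1.1, 1.01, 1.001):
        print(f"S_{n} = {renyi_entropy(beta, d, n):.6f}   (S_BH = {s_bh:.6f})")
```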
To overcome this issue, we now define a generalized \(n\)-order log-fidelity for positive integer \(n\) \[\ln\text{Fi}_{n}(\rho,\sigma)=\ln\text{tr}\left(\sqrt{\sigma}\rho\sqrt{\sigma }\right)^{n}=\ln\text{tr}(\rho\sigma)^{n}\,. \tag{15}\] Then, we can analytically continue \(n\) into real number. If \(\ln\operatorname{Fi}_{n}(\rho,\sigma)\) is analytic for \(n>0\), the fidelity can be recovered \[\ln\operatorname{Fi}(\rho,\sigma)=\lim_{n\to 1/2}\ln\operatorname{Fi}_{n}(\rho, \sigma)\,. \tag{16}\] Generalizing the procedure for \(\operatorname{tr}\!\rho_{1}\rho_{2}\) into \(\operatorname{tr}(\rho_{1}\rho_{2})^{n}\) \[\begin{split}\operatorname{tr}(\rho_{1}\rho_{2})^{n}=\frac{Z(n \beta_{1}+n\beta_{2})}{\left[Z(\beta_{1})Z(\beta_{2})\right]^{n}}&= \exp\left(nI[\beta_{1}]+nI[\beta_{2}]-I[n\beta_{1}+n\beta_{2}]\right)\\ &=\exp\left[\frac{\Omega_{k=0,d-1}(\tilde{r}_{h}^{d-1}-nr_{1}^{d- 1}-nr_{2}^{d-1})}{4Gd}\right]\end{split} \tag{17}\] here \(\tilde{r}_{h}(n)=1/[n(r_{1}^{-1}+r_{2}^{-1})]\), that is an inverse proportional function of copies number \(n\). As just claimed before, we then analytically continue \(n\) into \(\frac{1}{2}\) to obtain the fidelity \[\ln\operatorname{Fi}(\rho,\sigma)=\lim_{n\to 1/2}\ln\operatorname{Fi}_{n}( \rho,\sigma)=\frac{\Omega_{k=0,d-1}\left[2\tilde{r}_{h}^{d-1}-r_{1}^{d-1}-r_{ 2}^{d-1}\right]}{8Gd}\,. \tag{18}\] here \(\tilde{r}_{h}\) take the value \(\tilde{r}_{h}=2r_{1}r_{2}/(r_{1}+r_{2})\). As a consistent check, we can confirm that \(F(\rho,\sigma)=1\) if \(\beta_{1}=\beta_{2}\). ### Relative entropy Relative entropy is a measure of the distinguishability of two states. It will not suffer from the ultraviolet divergence that carries no information about the state. Given two density matrices, the relative entropy is defined as \[S(\rho||\sigma)=\operatorname{tr}(\rho\ln\rho)-\operatorname{tr}(\rho\ln \sigma)\,. \tag{19}\] Here \(\rho\) and \(\sigma\) are both properly normalized. The first term on the right hand of equal sign is just the entanglement entropy of \(\rho\). In order to compute the second term in holography, we can imitate the construction from Renyi entropy to entanglement entropy \[\operatorname{tr}(\rho\ln\sigma)=\lim_{n\to 1}\frac{1}{n-1}\ln \operatorname{tr}(\rho\sigma^{n-1})\,. \tag{20}\] Still considering two thermal states \(\rho_{1}\) and \(\rho_{2}\), \(\operatorname{tr}(\rho_{1}\rho_{2}^{n-1})\) is given by \[\begin{split}\operatorname{tr}(\rho_{1}\rho_{2}^{n-1})=\frac{Z( \beta_{1}+(n-1)\beta_{2})}{Z(\beta_{1})Z(\beta_{2})^{n-1}}&=\exp \left(I[\beta_{1}]+(n-1)I[\beta_{2}]-I[\beta_{1}+(n-1)\beta_{2}]\right)\\ &=\exp\left[\frac{\Omega_{k=0,d-1}(\tilde{r}_{h}^{d-1}-r_{1}^{d-1 }-(n-1)r_{2}^{d-1})}{4Gd}\right]\end{split} \tag{21}\] here \(\tilde{r}_{h}(n)=4\pi L^{2}/d\left[\beta_{1}+(n-1)\beta_{2}\right]\). The second term in Eq. (19) can be recovered by analytically continue \(n\) into \(1\) \[\operatorname{tr}(\rho_{1}\ln\rho_{2})=\lim_{n\to 1}\frac{1}{n-1}\ln \operatorname{tr}(\rho_{1}\rho_{2}^{n-1})=-\frac{\Omega_{k=0,d-1}r_{2}^{d-1}}{ 4Gd}-\frac{\Omega_{k=0,d-1}r_{1}^{d-1}}{4Gd}\frac{\beta_{2}(d-1)}{\beta_{1}}\,. \tag{22}\] Combining the above result with entanglement entropy of \(\rho_{1}\), the relative entropy between thermal states \(\rho_{1}\) and \(\rho_{2}\) is given by \[S(\rho_{1}||\rho_{2})=\text{tr}(\rho_{1}\ln\rho_{1})-\text{tr}(\rho_{1}\ln\rho_{ 2})=-\frac{\Omega_{k=0,d-1}r_{1}^{d-1}}{4G}+\frac{\Omega_{k=0,d-1}r_{2}^{d-1}}{4 Gd}+\frac{\Omega_{k=0,d-1}r_{1}^{d-1}}{4Gd}\frac{\beta_{2}(d-1)}{\beta_{1}}\,. 
\tag{23}\] It should be noted that the relative entropy vanishes when \(\rho_{1}\) equates to \(\rho_{2}\), which is a expected result no matter what method is used here. ## 3 Add some matter: Maxwell field as the first step In this section, we want to generalize our replica trick into the charged black hole. For boundary field theory, we consider the grand canonical ensemble instead of the canonical ensemble. Given the conserved charge \(Q\), we can do the similar construction for the matrix density of the states \[\rho=\frac{e^{-\beta(H-\mu Q)}}{Z(\beta,\mu)}\,. \tag{24}\] Here \(\mu\) is the chemical potential. The gravity dual of such state (24) is charged black hole in AdS spacetime. Since charge \(Q\) is conserved, it should satisfy \([Q,H]=0\). By virtue of such commutativity, we will find that the density matrices of different temperatures and charges are also commutative to each others. Thus we can generalize our above replica trick from neutral system into the charged one. Considering states \(\rho_{1},\rho_{2}\) \[\rho_{1}=\frac{e^{-\beta_{1}(H-\mu_{1}Q)}}{Z(\beta_{1},\mu_{1})}\,,\rho_{2}= \frac{e^{-\beta_{2}(H-\mu_{2}Q)}}{Z(\beta_{2},\mu_{2})} \tag{25}\] Following the spirit of the previous section, we reconsider \(\rho_{1}\rho_{2}\) as a new state \[\begin{split}\rho_{1}\rho_{2}=\frac{e^{-\tilde{\beta}(H-\tilde{ \mu}Q)}}{Z(\beta_{1},\mu_{1})Z(\beta_{2},\mu_{2})}\\ \tilde{\beta}=\beta_{1}+\beta_{2},\ \tilde{\mu}=\frac{\mu_{1} \beta_{1}+\mu_{2}\beta_{2}}{\beta_{1}+\beta_{2}}\,.\end{split} \tag{26}\] We can see that the replicate state \(\rho_{1}\rho_{2}\) have the same construction with the original two states. In holography, this claim means the replicate state \(\rho_{1}\rho_{2}\) prepared by (25) is still dual to AdS-RN black hole with new parameters \(\tilde{\beta},\tilde{\mu}\) defined by (26). In gravity side, considering the Einstein-Maxwell theory, the total action (8) need to add the contribution from the Maxwell field \[I_{\text{Maxwell}}=-\frac{1}{16\pi G}\int_{\mathcal{B}}d^{d+1}x\sqrt{g}F_{\mu \nu}F^{\mu\nu}\,. \tag{27}\] If we just consider the RN black hole, the form of the metric coincide with the neutral one (10) but with \[f(r)=\frac{r^{2}}{L^{2}}+k-\frac{f_{0}}{r^{d-2}}+\frac{2(d-2)\mu^{2}r_{h}^{2(d -2)}}{(d-1)r^{2(d-2)}}\,. \tag{28}\] Here \(\mu\) is the chemical potential of the black hole and \(r_{h}\) is the largest root of \(f(r)=0\). For RN black hole, the Maxwell field strength tensor has a form \[F_{\mu\nu}=-\frac{Q}{r^{d-1}}(d\tau)_{\mu}\wedge(dr)_{\nu},\ Q=(d-2)\mu r_{h}^{d- 2}\,. \tag{3.6}\] Here \(Q\) is the total charge of RN black hole. Notice that \(F_{\mu\nu}F^{\mu\nu}=2Q^{2}/r^{2(d-1)}\), the direct calculation for \(I_{\rm Maxwell}\) reads \[\begin{split} I_{\rm Maxwell}&=-\frac{1}{16\pi G} \int_{\mathcal{B}}d^{d+1}x\sqrt{g}F_{\mu\nu}F^{\mu\nu}\\ &=-\frac{\beta\Omega_{k,d-1}Q^{2}r_{h}^{2-d}}{8\pi G(d-2)}\\ &=-\frac{(d-2)\beta\Omega_{k,d-1}\mu^{2}r_{h}^{d-2}}{8\pi G}\end{split} \tag{3.7}\] \(\{r_{h},\beta,\mu\}\) is not independent of each other. For \(k=0\), we have a relation among \(\{r_{h},\beta,\mu\}\): \[r_{h}=\frac{\sqrt{2(d-1)L^{2}\left[\beta^{2}d(d-2)^{2}\mu^{2}+2\pi^{2}(d-1)L^ {2}\right]}+2\pi(d-1)L^{2}}{d(d-1)\beta} \tag{3.8}\] According to the replica trick (3.3), we should consider the gravity dual of replicate state \(\rho_{1}\rho_{2}\) as a RN-AdS black hole with thermodynamic parameters \(\left\{\tilde{\beta},\tilde{\mu}\right\}\). 
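As a small illustration of this replica step, the parameters \((\tilde{\beta},\tilde{\mu})\) of \(\rho_{1}\rho_{2}\) defined above and the corresponding planar horizon radius from Eq. (3.8) can be evaluated numerically. The sketch below (code units \(G=L=1\), an arbitrary choice) also checks that the neutral result \(r_{h}=4\pi L^{2}/d\beta\) is recovered as \(\mu\to 0\).

```python
import numpy as np

# Sketch of the replica step for grand-canonical states: the product rho_1 rho_2
# maps to a single RN-AdS black hole with parameters (beta_tilde, mu_tilde) as
# defined in the text, whose planar horizon radius follows from Eq. (3.8).
# Units L = 1 are a code convention, not fixed by the text.

def rn_horizon_radius(beta, mu, d, L=1.0):
    """Largest root of f(r) = 0 for the planar RN-AdS black hole, Eq. (3.8)."""
    inner = 2.0 * (d - 1) * L**2 * (beta**2 * d * (d - 2)**2 * mu**2
                                    + 2.0 * np.pi**2 * (d - 1) * L**2)
    return (np.sqrt(inner) + 2.0 * np.pi * (d - 1) * L**2) / (d * (d - 1) * beta)

def replicated_parameters(beta1, mu1, beta2, mu2):
    """Parameters (beta_tilde, mu_tilde) of the state rho_1 rho_2, as in the text."""
    beta_t = beta1 + beta2
    mu_t = (mu1 * beta1 + mu2 * beta2) / beta_t
    return beta_t, mu_t

if __name__ == "__main__":
    d = 3
    beta1, mu1 = 1.0, 0.05
    beta2, mu2 = 2.0, 0.10
    beta_t, mu_t = replicated_parameters(beta1, mu1, beta2, mu2)
    print("replicated state: beta_tilde =", beta_t, " mu_tilde =", mu_t)
    print("r_h of rho1, rho2, rho1*rho2:",
          rn_horizon_radius(beta1, mu1, d),
          rn_horizon_radius(beta2, mu2, d),
          rn_horizon_radius(beta_t, mu_t, d))
    # consistency: for mu -> 0 the neutral result r_h = 4 pi L^2 / (d beta) is recovered
    print(rn_horizon_radius(beta1, 0.0, d), 4.0 * np.pi / (d * beta1))
```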
By employing the holographic dictionary \[Z[\beta,\mu]=\text{tr}e^{-\beta(H-\mu Q)}=e^{-I[\beta,\mu]}\,,\] we can use the on shell action to express \(\text{tr}(\rho_{1}\rho_{2})\) \[\text{tr}(\rho_{1}\rho_{2})=\frac{Z(\tilde{\beta},\tilde{\mu})}{Z(\beta_{1}, \mu_{1})Z(\beta_{2},\mu_{2})}=\exp\left(I[\beta_{1},\mu_{1}]+I[\beta_{2},\mu_ {2}]-I[\tilde{\beta},\tilde{\mu}]\right)\,. \tag{3.9}\] For instance, if we set \(\left\{\beta_{1}=\beta_{2}=\beta\,,\mu_{1}=\mu_{2}=\mu\right\}\), the parameters of replicate RN black hole are \[\tilde{\beta}=\beta_{1}+\beta_{2}=2\beta,\ \tilde{\mu}=\frac{\mu_{1}\beta_{1}+\mu_ {2}\beta_{2}}{\beta_{1}+\beta_{2}}=\mu\,. \tag{3.10}\] In general the action \(Z(\beta,\mu)\) will be a complicated nonlinear function of \(\beta\) and \(\mu\). For simplicity of calculation, we can ignore the back reaction to bulk geometry from the Maxwell field if \(\mu\ll 1\). In this sense, \[r_{h}\approx\frac{4\pi L^{2}}{d\beta}\,. \tag{3.11}\] Then we obtain that \[\begin{split} I[\beta,\mu]&=-\frac{\Omega_{k=0,d-1} r_{h}^{d-1}}{4Gd}-\frac{(d-2)\Omega_{k=0,d-1}\mu^{2}r_{h}^{d-3}L^{2}}{2Gd}+ \mathcal{O}(\mu^{4})\\ I[2\beta,\mu]&=-\frac{\Omega_{k=0,d-1}\tilde{r}_{h}^ {d-1}}{4Gd}-\frac{(d-2)\Omega_{k=0,d-1}\mu^{2}\tilde{r}_{h}^{d-3}L^{2}}{2Gd}+ \mathcal{O}(\mu^{4})\end{split} \tag{3.12}\] Using the fact that \(\tilde{r}_{h}\) is the half of \(r_{h}\) and neglect the terms smaller than \(\mathcal{O}(\mu^{2})\), it is straight to calculate \(\text{tr}(\rho_{1}\rho_{2})\) \[\begin{split}\text{tr}(\rho_{1}\rho_{2})&=\exp\left( 2I[\beta,\mu]-I[2\beta,\mu]\right)\\ &=\exp\left[\frac{\Omega_{k=0,d-1}r_{h}^{d-1}(1/2^{d-1}-2)}{4Gd}+ \frac{(d-2)\Omega_{k=0,d-1}\mu^{2}L^{2}r_{h}^{d-3}(1/2^{d-3}-2)}{2Gd}\right] \end{split} \tag{3.13}\] Finally, we generalize our result into \(n\) copies and calculate the Renyi entropy \[S_{n}=\frac{\Omega_{k=0,d-1}r_{h}^{d-1}}{4Gd}\frac{n^{d}-1}{n^{d}-n^{d-1}}+ \frac{(d-2)\Omega_{k=0,d-1}\mu^{2}L^{2}r_{h}^{d-3}}{2Gd}\frac{n^{d-2}-1}{n^{d -2}-n^{d-3}}\,. \tag{3.14}\] By taking the limit \(n\to 1\), the Von Neumann entropy can be recovered \[S=\lim_{n\to 1}S_{n}=\frac{\Omega_{k=0,d-1}r_{h}^{d-1}}{4G}+\frac{(d-2)^{2} \Omega_{k=0,d-1}\mu^{2}L^{2}r_{h}^{d-3}}{2Gd}\,. \tag{3.15}\] One can also get the above result according to Bekenstein-Hawking formula. Since we consider the chemical potential \(\mu\) as a infinitesimal, the horizon (3.8) of RN-AdS black hole denoted by \(r_{\text{RN}}\) can be separated into two parts \[r_{\text{RN}}=r_{h}+\frac{2(d-2)^{2}L^{2}\mu^{2}}{d(d-1)r_{h}}\,. \tag{3.16}\] Here \(r_{h}\) is the largest root of \(f(r)=0\) for \(\mu=0\). Following the Bekenstein-Hawking formula, the entropy is described by the area of horizon. Up to the order \(\mathcal{O}(\mu^{2})\) we obtain \[\begin{split} S&=\frac{\Omega_{k=0,d-1}r_{\text{ RN}}^{d-1}}{4G}\\ &=\frac{\Omega_{k=0,d-1}r_{h}^{d-1}}{4G}+\frac{(d-2)^{2}\Omega_{ k=0,d-1}\mu^{2}L^{2}r_{h}^{d-3}}{2Gd}\,.\end{split} \tag{3.17}\] We see that two formulas (3.15) and (3.17) are same, which is what we expect and shows the consistence of our methods. ## 4 Fidelity between states excited by scalar operator In this section, we consider how to compute the quantum distance between states that are excited by scalar operator. In holography, the scalar operator \(\mathcal{O}_{\phi}\) in boundary CFT is dual to a bulk scalar field \(\phi\). 
Considering a \((d+1)\)-dimensional Einstein-scalar theory, such theory's action is \[I=-\frac{1}{16\pi G}\int_{\mathcal{B}}d^{d+1}x\sqrt{g_{E}}\left[\mathcal{R}+ \frac{d(d-1)}{L^{2}}-\frac{1}{2}\nabla_{\mu}\phi\nabla^{\mu}\phi-\frac{1}{2}m^ {2}\phi^{2}\right]\,. \tag{4.1}\] The parameter \(m\) is the mass of the scalar field. Different from the Minkowski spacetime, \(m^{2}\) is negative in AdS spacetime. Despite this, \(m^{2}\) can not be too negative but above the Breitenlohner-Freedman bound \(m_{\text{BF}}^{2}\equiv-d^{2}/4L^{2}\). For dual boundary CFT, the corresponding conformal dimension \(\Delta_{\mathcal{O}}\) for scalar operator is \((d\pm\sqrt{d^{2}+4m^{2}L^{2}})/2\). The "+" corresponding to standard quantization and "\(-\)" stands for alternative quantization. We can get the equations of motion following the action (4.1) \[\begin{split}\mathcal{R}_{\mu\nu}-\frac{1}{2}\mathcal{R}g_{\mu\nu} -\frac{d(d-1)}{2L^{2}}g_{\mu\nu}&=-\frac{1}{4}\left(\nabla_{\rho} \phi\nabla^{\rho}\phi+m^{2}\phi^{2}\right)\\ \nabla_{\mu}\nabla^{\mu}\phi-m^{2}\phi&=0\,.\end{split} \tag{4.2}\] In most general cases, we should solve this coupled gravitational system since the backreaction of the scalar field can not be neglected. In principle, given the proper boundary conditions, these equations can be solved analytically or numerically. We consider the thermal states in the canonical ensemble. However, we can neglect the backreaction of the scalar field in the bulk, if the change of the total energy \(\Delta E\) induced from the scalar operator \(\mathcal{O}_{\phi}\) is much smaller than the mass of black hole. This is the so-called _probe limit_. In the probe limit, the gravity and scalar field will decouple. That means in contact with a heat bath at \(\beta\), the bulk geometry remains to be Schwarzschild-AdS black hole. Then, we just analyze the behavior of scalar field in this fixed black hole background. What's more, we only need to explore the additional contributions from the scalar field to holographic quantum distance. In this paper, we focus on the fidelity between states excited by scalar operator since it is easy to obtain. The main reason why it is tricky to calculate other quantum distances will be explained soon. ### Set up In this section, we adopt the Poincare coordinate \(\left\{z,\tau,x^{i}\right\}\). Under this gauge, the \(n\)-copies Euclidean Schwarzschild-AdS black hole's metric reads \[ds^{2}=\frac{1}{z^{2}}\left[f(z)d\tau^{2}+\frac{dz^{2}}{f(z)}+\sum_{i=1}^{d-1} dx_{i}^{2}\right],\quad f(z)=\frac{1}{L^{2}}-(2nz)^{d}f_{0}\,. \tag{4.3}\] Here the factor \(2n\) is induced from the definition of fidelity where \(\text{tr}(\rho\sigma)^{n}\) is involved. Based on our previous discussion, for two states \(\rho\) and \(\sigma\) with same inverse temperature \(\beta\), the dual black hole to compute \(\text{tr}(\rho\sigma)^{n}\) will have inverse temperature \(2n\beta\). Then, we write the equation of motion for scalar field in this background: \[\frac{1}{\sqrt{g_{E}}}\frac{\partial}{\partial x^{\mu}}\left(\sqrt{g_{E}}g^{ \mu\nu}\frac{\partial}{\partial x^{\nu}}\right)\phi-m^{2}\phi=0,\quad g_{E}= \frac{\Omega_{0,d-1}}{z^{2(d+1)}}\,. \tag{4.4}\] More specifically, \[\frac{z^{2}\ddot{\phi}}{f}+z^{2}f\phi^{\prime\prime}+\left[z^{2}f^{\prime}-( d-1)zf\right]\phi^{\prime}-m^{2}\phi=0\,, \tag{4.5}\] where a prime/dot denotes the derivative with respect to \(z\) or \(\tau\), respectively. In general case, the Eq. 
(4.5) does not have analytical solutions so we have to solve this equation numerically. Here we assume the boundary can keep the asymptotic AdS behavior such that \[\phi(\tau,z)\rightarrow\tilde{\phi}_{s}(\tau)z^{d-\Delta}+\tilde{\phi}_{v}( \tau)z^{\Delta}+\ldots\,,\quad\Delta=\frac{d+\sqrt{d^{2}+4m^{2}L^{2}}}{2}\,. \tag{4.6}\] Other requirement is that \(\phi\) should be continuous near the horizon \(z_{h}\). Here \(\tilde{\phi}_{s}(\tau)\) and \(\tilde{\phi}_{v}(\tau)\) are independent of \(z\). In holography, both \(\tilde{\phi}_{s}(\tau)\) and \(\tilde{\phi}_{v}(\tau)\) can be considered as a source if the mass parameter satisfies \(m_{\text{BF}}^{2}<m^{2}<m_{\text{BF}}^{2}+1/L^{2}\). It needs to note that \(\tilde{\phi}_{s}\) and \(\tilde{\phi}_{v}\) will be both nonzero if the scalar field is not zero in the bulk. To keep the asymptotically AdS structure and make the probe limit self-consistent we have to require \(\phi^{2}\to 0\) as \(z\to 0\). This requires that we have to set \(d>\Delta\) and so we have to set: \[m_{\text{BF}}^{2}<m^{2}<0\,. \tag{4.7}\] The selection of source corresponds different dual field theories, see Ref. [30]. In gravity side, this selection determines how to set the boundary condition for Eq. (4.5). For instance, if considering \(\tilde{\phi}_{s}(\tau)\) as source, we must fix \(\tilde{\phi}_{s}(\tau)\) at \(z\to 0\) correspondingly. This is called _standard quantization scheme_ for the scalar field. While in this paper, we employ the _alternative quantization scheme_ that considers \(\tilde{\phi}_{v}(\tau)\) as the source. The reason why we choose "non-standard" quantization scheme will be explicitly explained. It should be noted that the scalar field obeys second order equation of motion (4.5), but we only give one boundary condition such as fixing \(\tilde{\phi}_{v}(\tau)\) in alternative quantization scheme. However, we must discard one of the independent solutions since this solution blows up at the horizon \(z_{h}\). Hence, the regularity of Euclidean spacetime makes the bulk scalar field uniquely specified. The next step is to solve Eq. (4.5) since the final result for action (4.1) is dependent of \(\tilde{\phi}_{v}(\tau)\) and \(\tilde{\phi}_{s}(\tau)\). Then we can assume the general solution of \(\phi(\tau,z)\) by the method of separation of variables \[\phi(\tau,z)=\sum_{j=0}^{\infty}R_{j}(z)T_{j}(\tau)\,. \tag{4.8}\] Substituting this into Eq. (4.5), we can get the asymptotic behavior of general solution for scalar field \[\begin{split}\lim_{z\to 0}\phi(\tau,z)&=\lim_{z\to 0} \sum_{j=0}^{\infty}T_{j}(\tau)R_{j}(z)\\ T_{0}(\tau)&=A_{0}\,,T_{j}(\tau)=A_{j}\cos w_{j} \tau+B_{j}\sin w_{j}\tau\,,\\ \lim_{z\to 0}R_{j}(z)&=\phi_{s}^{(j)}z^{d-\Delta}(1+ \cdots)+\phi_{v}^{(j)}z^{\Delta}(1+\cdots)\,.\end{split} \tag{4.9}\] and the radial equation \[\begin{split} R_{j}^{\prime\prime}+\left(\frac{f^{\prime}}{f}- \frac{d-1}{z}\right)R_{j}^{\prime}-\left(\frac{m^{2}}{z^{2}f}+\frac{w_{j}^{2} }{f^{2}}\right)R_{j}=0\\ f(z)&=\frac{1}{L^{2}}-(2nz)^{d}f_{0}\,,\ \ j=0,1,2,\ldots\end{split} \tag{4.10}\] Here \(\phi_{s,v}^{(j)}\) is denoted by the asymptotic behavior of \(R_{j}(z)\). The frequency \(w_{j}\) take some discrete values given by \[w_{j}=\frac{2\pi j}{\tilde{\tau}}\,,\quad T(\tau)=T(\tau+\tilde{\tau}) \tag{4.11}\] where \(\tilde{\tau}\) is the period in boundary condition (4.6). 
According to Fourier transformation, coefficients \(A_{0},A_{n},B_{n}\) are determined by \[A_{0}\phi_{v}^{(0)} =\frac{1}{\tilde{\tau}}\int_{0}^{\tilde{\tau}}\tilde{\phi}_{v}( \tau)\mathrm{d}\tau\] \[A_{j}\phi_{v}^{(j)} =\frac{2}{\tilde{\tau}}\int_{0}^{\tilde{\tau}}\tilde{\phi}_{v}( \tau)\cos w_{j}\tau\mathrm{d}\tau \tag{4.12}\] \[B_{j}\phi_{v}^{(j)} =\frac{2}{\tilde{\tau}}\int_{0}^{\tilde{\tau}}\tilde{\phi}_{v}( \tau)\sin w_{j}\tau\mathrm{d}\tau\] ### Fidelity in the probe limit In following we will first focus on _alternative quantization_, i.e. treating the coefficient \(\phi_{v}\) as the source term. As previously defined, given two density matrices \(\rho\) and \(\sigma\) that are both excited by scalar operator \(\mathcal{O}_{\phi}\) with different values of source \(\tilde{\phi}_{v}\), a generalized \(n\)-order log-fidelity for positive integer \(n\) is \[\ln\mathrm{Fi}_{n}(\rho,\sigma)=\ln\mathrm{tr}\left(\sqrt{\sigma}\rho\sqrt{ \sigma}\right)^{n}=\ln\mathrm{tr}(\rho\sigma)^{n}\,. \tag{4.13}\] Then, we can analytically continue \(n\) into \(1/2\) to recover the fidelity \[\ln\mathrm{Fi}(\rho,\sigma)=\lim_{n\to 1/2}\ln\mathrm{Fi}_{n}(\rho,\sigma)\,. \tag{4.14}\] Next we construct the corresponding distribution \(\tilde{\phi}_{v}(\tau)\) for \(\mathrm{tr}(\rho\sigma)^{n}\) in the bulk, see Fig. (3) for illustration of \(\mathrm{tr}(\rho\sigma)^{2}\). In the bulk gravity side, the boundary condition for scalar field \(\phi\) will be dependent of euclidean time \[\tilde{\phi}_{v}(\tau)=\left\{\begin{array}{ccc}&\rho_{v}&0<\tau\leq\beta\\ &\sigma_{v}&\beta<\tau\leq 2\beta\\ &\rho_{v}&2\beta<\tau\leq 3\beta\\ &&&\ldots\\ \rho_{v}&2(n-2)\beta<\tau\leq 2(n-1)\beta\\ &\sigma_{v}&2(n-1)\beta<\tau\leq 2n\beta\end{array}\right\}\,. \tag{4.15}\] Here \(\rho_{v}\) and \(\sigma_{v}\) are constants. We already set \(\beta_{1}=\beta_{2}=\beta\). To simplify the discussion, in this paper we just set \(\sigma_{v}=0\). In principle, we need to know the value of bulk scalar field in every spacetime point. Since the total action (4.1) is a spacetime volume integral. However, according to Gauss theorem and using the equation of motion, the on-shell action for scalar field can be expressed by the boundary term \[I_{m} =\frac{1}{16\pi G}\int_{\mathcal{B}_{n}}\mathrm{d}^{d+1}x\sqrt{g} \left[\frac{1}{2}\nabla_{\mu}\left(\phi\nabla^{\mu}\phi\right)-\frac{1}{2}\phi \nabla_{\mu}\nabla^{\mu}\phi+\frac{1}{2}m^{2}\phi^{2}\right] \tag{4.16}\] \[=\frac{1}{16\pi G}\int_{\mathcal{B}_{n}}\mathrm{d}^{d+1}x\sqrt{g }\left[\frac{1}{2}\nabla_{\mu}\left(\phi\nabla^{\mu}\phi\right)\right]\] \[=\frac{1}{16\pi G}\int_{\partial\mathcal{B}_{n}}\mathrm{d}^{d}x \sqrt{h}\left(\frac{1}{2}n_{\mu}\phi\nabla^{\mu}\phi\right)\,.\] Here \(n^{\mu}\) is the outward point normal vector of AdS boundary \(\partial\mathcal{B}_{n}\). In Poincare coordinate, the only nonzero component is \[n^{r}=-z\sqrt{f(z)}\,, \tag{4.17}\] so this boundary integral can reduce to \[I_{m}=-\frac{\Omega_{0,d-1}}{32\pi G}\int\mathrm{d}\tau\left(\frac{f\phi \partial_{z}\phi}{z^{d-1}}\right)\,. \tag{4.18}\] Note that the above action is not regulated. In principle, according to spacetime dimension \(d\) and conformal dimension \(\Delta\), we will correspondingly select different forms of counter term [26; 27; 28; 29]. In this paper, we will focus on 4-dimensional Einstein-scalar theory while fixing \(m^{2}=-2\). 
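As an aside, the Fourier data (4.12) for the piecewise-constant source (4.15) with \(\sigma_{v}=0\) and period \(\tilde{\tau}=2\beta\) can be checked by direct numerical integration. The small sketch below (with \(\beta=1\) and \(\rho_{v}=0.3\) as arbitrary code choices) confirms that only the zero mode and the odd sine modes survive, with \(B_{j}\phi_{v}^{(j)}=\rho_{v}(1-\cos j\pi)/(j\pi)\), which is the origin of the factors appearing in the regulated action below.

```python
import numpy as np

# Direct check of the Fourier data (4.12) for the piecewise-constant source
# (4.15) with sigma_v = 0 and period tau_tilde = 2*beta.  The products
# A_j*phi_v^{(j)} and B_j*phi_v^{(j)} are what (4.12) determines.

beta, rho_v = 1.0, 0.3
tau_tilde = 2.0 * beta
tau = np.linspace(0.0, tau_tilde, 2_000_000, endpoint=False)
source = np.where(tau < beta, rho_v, 0.0)        # rho_v on (0, beta], then sigma_v = 0

print("A_0 phi_v^(0):", source.mean(), "  expected:", rho_v / 2.0)
for j in range(1, 6):
    w_j = 2.0 * np.pi * j / tau_tilde
    a_j = 2.0 * np.mean(source * np.cos(w_j * tau))
    b_j = 2.0 * np.mean(source * np.sin(w_j * tau))
    b_expected = rho_v * (1.0 - np.cos(j * np.pi)) / (j * np.pi)
    print(f"j={j}:  A_j phi_v = {a_j:+.2e} (expect 0)   "
          f"B_j phi_v = {b_j:+.4f} (expect {b_expected:+.4f})")
```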
For \(\{d=3,\Delta=2\}\), the corresponding counter term is \[I_{\mathrm{c.t}}=-\frac{1}{16\pi G}\int d^{3}x\sqrt{h}\left[\phi(n^{\mu} \partial_{\mu}\phi)+\frac{1}{2}\phi^{2}\right]\,. \tag{4.19}\] Figure 3: The schematic diagram about the corresponding bulk geometry with the boundary field configuration \(\rho\sigma\rho\sigma\). Here the red line labels the distribution of state \(\rho\) and blue dashed line labels the state \(\sigma\). Here subscript \(v\) follows from treating the coefficient \(\phi_{v}\) as the source term under alternative quantization. As setting \(\beta_{1}=\beta_{2}=\beta\), we can see that the period \(\tilde{\tau}=2\beta\). Therefore, the regulated scalar field action \(\tilde{I}_{\rho\sigma}\) can be easily calculated \[\begin{split}\tilde{I}_{\rho\sigma}&=I_{m}+I_{\rm c.t} \\ &=\frac{1}{32\pi G}\int\mathrm{d}^{3}x\left(\phi_{s}^{(0)}A_{0}+ \sum_{i=1}^{\infty}\phi_{s}^{(i)}B_{i}\sin w_{i}\tau\right)\left(\phi_{v}^{(0) }A_{0}+\sum_{j=1}^{\infty}\phi_{v}^{(j)}B_{j}\sin w_{j}\tau\right)\\ &=\frac{2n\beta\Omega_{0,2}}{32\pi G}\left(\phi_{s}^{(0)}\phi_{v} ^{(0)}A_{0}^{2}+\frac{1}{2}\sum_{j=1}^{\infty}\phi_{s}^{(j)}\phi_{v}^{(j)}B_{j }^{2}\right)\\ &=\frac{n\beta\Omega_{0,2}}{16\pi G}\left[\frac{\rho_{v}^{2}\phi_ {s}^{(0)}}{4\phi_{v}^{(0)}}+\sum_{j=1}^{\infty}\frac{(1-\cos j\pi)^{2}}{j^{2} \pi^{2}}\frac{\rho_{v}^{2}\phi_{s}^{(j)}}{2\phi_{v}^{(j)}}\right]\\ &\equiv\frac{n\beta\Omega_{0,2}}{16\pi G}\left[\frac{\rho_{v}^{2} \phi_{s}^{(0)}}{4\phi_{v}^{(0)}}+\Xi[\rho_{v},n]\right]\,.\end{split} \tag{4.20}\] Here \[\Xi[\rho_{v},n]:=\sum_{j=1}^{\infty}\frac{(1-\cos j\pi)^{2}}{j^{2}\pi^{2}} \frac{\rho_{v}^{2}\phi_{s}^{(j)}}{2\phi_{v}^{(j)}}\,. \tag{4.21}\] Before we go further, it needs to emphasize that divergency of "naked" action arises only when we approaches to UV boundary \(z\to 0\). The counterterm then is applied to cancel such divergency. We will see later that, since the boundary is inhomogeneous, there is an other different divergency of on-shell action appears in standard quantization even if we do not take the limit \(z=0\). This kind of new divergency cannot be cancelled according to the known holographic renormalization schema [26; 27; 28; 29]. We now return to Eq. (4.20). Notice that \(\Xi\) is dependent on the copies number \(n\), since the radial equation is related to \(n\), i.e. \(f(z)=1/L^{2}-(2nz)^{d}f_{0}\). This property is very different from the usual cases that \(\tilde{\phi}_{v}\) is independent of the Euclidean time \(\tau\). If \(\Xi[\rho_{v},\frac{1}{2}]\) is convergent, then we would analytically continue \(n\) into \(1/2\). In other words, we need to solve the radial equation (4.10) with \(f(z)=1/L^{2}-z^{d}f_{0}\). 
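The bookkeeping that turns the mode ratios into the regulated action (4.20)–(4.21) can be organized as in the following sketch (Python; code units \(G=\Omega_{0,2}=1\), an arbitrary normalization). The ratios \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) are left as inputs, to be supplied by the numerical solution of the radial equation discussed in the next subsection; the placeholder values used in the example are purely illustrative.

```python
import numpy as np

# Bookkeeping sketch for Eqs. (4.20)-(4.21): given the mode ratios
# chi_j = phi_s^{(j)}/phi_v^{(j)} (supplied here as an input array), assemble
# Xi[rho_v, n] and the regulated action.  G = Omega_{0,2} = 1 are code units.

def xi_series(chi, rho_v):
    """Xi[rho_v, n] of Eq. (4.21); chi[j-1] = phi_s^{(j)}/phi_v^{(j)}, j = 1, 2, ..."""
    j = np.arange(1, len(chi) + 1)
    return np.sum((1.0 - np.cos(j * np.pi))**2 / (j**2 * np.pi**2)
                  * rho_v**2 * chi / 2.0)

def regulated_action(chi0, chi, rho_v, beta, n, G=1.0, omega=1.0):
    """I_tilde_{rho sigma} of Eq. (4.20); chi0 = phi_s^{(0)}/phi_v^{(0)}."""
    return n * beta * omega / (16.0 * np.pi * G) * (rho_v**2 * chi0 / 4.0
                                                    + xi_series(chi, rho_v))

if __name__ == "__main__":
    # Placeholder ratios decaying like 1/j (the thermal-AdS-like behaviour for
    # d = 3, Delta = 2); only odd j actually contribute because of (1 - cos j pi).
    chi = -1.0 / np.arange(1.0, 2001.0)
    print(regulated_action(chi0=-1.0, chi=chi, rho_v=0.3, beta=1.0, n=0.5))
```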
Finally, the log-fidelity reads \[\begin{split}\ln\mathrm{Fi}(\rho,\sigma)&=\lim_{n \to 1/2}\ln\frac{\mathrm{tr}(\rho\sigma)^{n}}{(\mathrm{tr}\rho)^{n}( \mathrm{tr}\sigma)^{n}}\\ &=\lim_{n\to 1/2}\ln\exp[-\tilde{I}_{\rho\sigma}+n\tilde{I}_{ \rho}]\\ &=\lim_{n\to 1/2}\frac{n\beta\Omega_{0,2}}{16\pi G}\left[\frac{\rho_{v}^{2} \phi_{s}^{(0)}}{4\phi_{v}^{(0)}}-\Xi[\rho_{v},n]\right]\\ &=\frac{\beta\Omega_{0,2}}{32\pi G}\left[\frac{\rho_{v}^{2}\phi_ {s}^{(0)}}{4\phi_{v}^{(0)}}-\Xi[\rho_{v},\frac{1}{2}]\right]\,.\end{split} \tag{4.22}\] Here \(\tilde{I}_{\rho}\) is the regulated action for \(\text{tr}\rho\) \[\begin{split}\tilde{I}_{\rho}&=\frac{1}{32\pi G}\int \text{d}^{3}x\left[\phi_{s}^{(0)}\phi_{v}^{(0)}A_{0}^{2}\right]\\ &=\frac{1}{32\pi G}\int\text{d}^{3}x\left[\frac{\rho_{v}^{2}\phi_ {s}^{(0)}}{\phi_{v}^{(0)}}\right]\\ &=\frac{\beta\Omega_{0,2}}{32\pi G}\frac{\rho_{v}^{2}\phi_{s}^{( 0)}}{\phi_{v}^{(0)}}\,.\end{split} \tag{4.23}\] Note again that Eq. (4.10) can be solved for arbitrary real number \(n\). Thus, we can numerically compute the value of \(\Xi[\rho_{v},n]\) in the Eq. (4.21) and then obtain the fidelity numerically. ### Numerical result After the previous derivation for fidelity, we then show the corresponding numerical process and result. The only remaining unsolved equation is the radial equation \[\begin{split} R_{j}^{\prime\prime}+\left(\frac{f^{\prime}}{f}- \frac{d-1}{z}\right)R_{j}^{\prime}-\left(\frac{m^{2}}{z^{2}f}+\frac{w_{j}^{2} }{f^{2}}\right)R_{j}=0\\ f(z)=\frac{1}{L^{2}}-(2nz)^{d}f_{0}\,,\ \ j=0,1,2,\ldots\end{split} \tag{4.24}\] We need to get the value of \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) in \(\Xi[\rho_{v},n]\) for every frequency \(w_{j}\). For \(j=0\), the frequency is vanished, i.e. \(w_{0}=0\). The general solution for \(R_{0}\) is just the static case \[\begin{split} R_{0}=&\phi_{s}^{(0)}z^{d-\Delta}{}_ {2}F_{1}\left(1-\frac{\Delta}{d}\,,1-\frac{\Delta}{d}\,;2-\frac{2\Delta}{d}\,; f_{0}L^{2}n^{d}z^{d}\right)\\ +&\phi_{v}^{(0)}z^{\Delta}{}_{2}F_{1}\left(\frac{ \Delta}{d}\,,\frac{\Delta}{d}\,;\frac{2\Delta}{d}\,;f_{0}L^{2}n^{d}z^{d}\right) \,.\end{split} \tag{4.25}\] Continuity condition requires that \(R_{0}\) should be finite at horizon \(z_{h}\) which satisfies \[\frac{1}{L^{2}}-f_{0}n^{d}z_{h}^{d}=0\,. \tag{4.26}\] By virtue of \[{}_{2}F_{1}(a\,,b\,;c\,;1)=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c- b)}\,, \tag{4.27}\] \(R_{0}(z_{h})\) can be expressed as \[R_{0}(z_{h})=\phi_{s}^{(0)}z_{h}^{d-\Delta}\frac{\Gamma(2-\frac{2\Delta}{d}) \Gamma(0)}{\Gamma(1-\frac{\Delta}{d})\Gamma(1-\frac{\Delta}{d})}+\phi_{v}^{(0) }z_{h}^{\Delta}\frac{\Gamma(\frac{2\Delta}{d})\Gamma(0)}{\Gamma(\frac{\Delta} {d})\Gamma(\frac{\Delta}{d})}\,. \tag{4.28}\] Notice that \(R_{0}(z_{h})\) is singular since \(\Gamma(0)\) is infinite. We need to eliminate this singularity. Extract \(\Gamma(0)\) outside the square bracket \[R_{0}(z_{h})=\Gamma(0)\left[\phi_{s}^{(0)}z_{h}^{d-\Delta}\frac{\Gamma(2- \frac{2\Delta}{d})}{\Gamma(1-\frac{\Delta}{d})\Gamma(1-\frac{\Delta}{d})}+ \phi_{v}^{(0)}z_{h}^{\Delta}\frac{\Gamma(\frac{2\Delta}{d})}{\Gamma(\frac{ \Delta}{d})\Gamma(\frac{\Delta}{d})}\right]\,. 
\tag{4.29}\] Therefore, the square bracket must vanish, and the ratio \(\phi_{s}^{(0)}/\phi_{v}^{(0)}\) is \[\frac{\phi_{s}^{(0)}}{\phi_{v}^{(0)}}=-\frac{\Gamma(\frac{2\Delta}{d})\Gamma^{2} (1-\frac{\Delta}{d})}{\Gamma(2-\frac{2\Delta}{d})\Gamma^{2}(\frac{\Delta}{d})} z_{h}^{2\Delta-d}=\frac{\pi d}{2\Delta-d}\frac{\sin\frac{2\pi\Delta}{d}\Gamma^{2}( \frac{2\Delta}{d})}{\sin^{2}\frac{\pi\Delta}{d}\Gamma^{4}(\frac{\Delta}{d})}z_ {h}^{2\Delta-d}\,. \tag{4.30}\] For \(j=1,2,3,\dots\), Eq. (4.24) has no analytical solution, so we solve the radial equation numerically. Notice that \(\Xi[\rho_{v},n]\) is the sum of an infinite series. Although we cannot obtain an explicit analytical expression for \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\), the convergence of \(\Xi[\rho_{v},n]\) can be established by comparison with the scalar field in thermal AdS spacetime, obtained by setting \(f_{0}=0\) in the metric (4.3). For a scalar field with the same boundary condition (4.15) in thermal AdS spacetime, the corresponding ratio \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) can be solved analytically: \[\left.\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}\right|_{\text{thermal AdS}}=\frac{ \Gamma(\Delta-\frac{d}{2})}{(w_{j}/2)^{2\Delta-d}\Gamma(\frac{d}{2}-\Delta)}=- \frac{\sin[\pi(\Delta-\frac{d}{2})](\Delta-\frac{d}{2})\Gamma(\Delta-\frac{d} {2})^{2}}{\pi(w_{j}/2)^{2\Delta-d}}\,. \tag{4.31}\] In thermal AdS spacetime, \(\Xi[\rho_{v},\frac{1}{2}]_{\text{thermal AdS}}\) is convergent, since its general term is proportional to \(1/j^{3}\). To determine whether \(\Xi[\rho_{v},\frac{1}{2}]\) itself converges, we examine numerically how the difference \[\delta_{j}\equiv\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}-\left.\frac{\phi_{s}^{( j)}}{\phi_{v}^{(j)}}\right|_{\text{thermal AdS}} \tag{4.32}\] changes with \(j\). Here we fix the following parameters: \[m^{2}=-2,\ d=3,\ L^{2}=1,\ f_{0}=1\,.\]

Figure 4: The curve-fitting result for \(\delta_{j}\). Here we use a logarithmic function \(b+\ln j^{a}\) to fit the difference.

With these parameters we obtain \(\beta=4\pi/3\) and \(\Delta=2\). We solve the radial equation (4.24) numerically and read off the ratio \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) from the asymptotic behavior of the solution. According to the fitting result shown in Fig. 4, the difference between \(\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}\) and \(\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}\big{|}_{\text{thermal AdS}}\) is of order \(j^{a}\). Since \(a\approx-3.81678\), the difference between the two series is convergent. It should be noted that the numerical value of \(a\) is very close to \(-4\). In fact, using an analytical approximation we can confirm that \[\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}-\left.\frac{\phi_{s}^{(j)}}{\phi_{ v}^{(j)}}\right|_{\text{thermal AdS}}=\mathcal{O}(1/w_{j}^{2\Delta})\,. \tag{4.33}\] See appendix B for more details about the solution of the radial equation. Then, by the comparison test, \(\Xi[\rho_{v},\frac{1}{2}]\) must be convergent in the alternative quantization scheme, since \(\Xi[\rho_{v},\frac{1}{2}]_{\text{thermal AdS}}\) is convergent in this scheme. The final numerical result reads \[\frac{\rho_{v}^{2}\phi_{s}^{(0)}}{4\phi_{v}^{(0)}}\approx-1.29053\rho_{v}^{2} \,,\qquad\Xi[\rho_{v},\tfrac{1}{2}]\approx-0.0176151\rho_{v}^{2}\,. \tag{4.34}\]
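A minimal version of the numerical procedure of this subsection is sketched below (Python with SciPy), following the appendix-B conventions \(L=f_{0}=1\), \(z_{h}=1\) and \(w_{j}=dj/2n\): the radial equation (4.24) is integrated from the regular near-horizon behavior \(R_{j}\sim(1-z)^{w_{j}/d}\) towards the boundary, the ratio \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) is read off from the near-boundary expansion, and the result is compared with the thermal-AdS value (4.31). The near-horizon expansion is kept only to leading order, so the numbers are indicative rather than precise.

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

# Rough numerical sketch of this subsection, in the appendix-B conventions
# L = f_0 = 1, z_h = 1, w_j = d*j/(2n): integrate the radial equation (4.24)
# from the regular near-horizon behaviour R ~ (1-z)^{w/d} towards the AdS
# boundary and read off chi_j = phi_s^{(j)}/phi_v^{(j)} from
# R ~ phi_s z^{d-Delta} + phi_v z^{Delta}.

d, m2, Delta = 3, -2.0, 2.0          # the parameter choice used in the text

def chi_numeric(w, z_min=1e-4, eps=1e-3):
    f = lambda z: 1.0 - z**d
    fp = lambda z: -d * z**(d - 1)

    def rhs(z, y):
        R, dR = y
        ddR = -(fp(z) / f(z) - (d - 1) / z) * dR \
              + (m2 / (z**2 * f(z)) + w**2 / f(z)**2) * R
        return [dR, ddR]

    # leading-order regular solution near the horizon (accuracy limited by eps)
    z0 = 1.0 - eps
    y0 = [eps**(w / d), -(w / d) * eps**(w / d - 1.0)]
    sol = solve_ivp(rhs, (z0, z_min), y0, method="LSODA",
                    rtol=1e-10, atol=1e-12, dense_output=True)

    # fit R(z) = phi_s z^{d-Delta} + phi_v z^{Delta} at two small z values
    z1, z2 = z_min, 2.0 * z_min
    A = np.array([[z1**(d - Delta), z1**Delta], [z2**(d - Delta), z2**Delta]])
    phi_s, phi_v = np.linalg.solve(A, [sol.sol(z1)[0], sol.sol(z2)[0]])
    return phi_s / phi_v

def chi_thermal_ads(w):
    """Closed-form thermal-AdS ratio, Eq. (4.31)."""
    nu = Delta - d / 2.0
    return -math.sin(math.pi * nu) * nu * math.gamma(nu)**2 \
           / (math.pi * (w / 2.0)**(2 * nu))

if __name__ == "__main__":
    n = 0.5                          # the continuation point used for the fidelity
    for j in range(1, 6):
        w_j = d * j / (2.0 * n)
        print(f"j={j}: chi = {chi_numeric(w_j):+.5f},  "
              f"thermal AdS = {chi_thermal_ads(w_j):+.5f}")
```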
### Comment on standard quantization

We have shown that the on-shell action of the replica spacetime is convergent in the alternative quantization. Under the standard quantization, however, the situation is very different. In this case the expressions for the regulated action and the fidelity derived above still hold, but with \(\rho_{v}\) replaced by \(\rho_{s}\) and with \(\Xi\) replaced by \[\Xi[\rho_{s},n]:=\sum_{j=1}^{\infty}\frac{(1-\cos j\pi)^{2}}{j^{2}\pi^{2}} \frac{\rho_{s}^{2}\phi_{v}^{(j)}}{2\phi_{s}^{(j)}}\,. \tag{4.35}\] We find that \(\Xi[\rho_{s},\frac{1}{2}]_{\text{thermal AdS}}\) is divergent. Specifically, in the standard quantization \(\Xi[\rho_{s},\frac{1}{2}]_{\text{thermal AdS}}\) behaves like a _harmonic series_, since its general term becomes \[\Xi\sim\sum\frac{\phi_{v}^{(j)}}{j^{2}\phi_{s}^{(j)}}\sim\frac{1}{j}\,. \tag{4.36}\] The high-frequency scalar modes with \(w_{j}\gg 1\) are not sufficiently suppressed and play an increasingly important role. In the computation of the action, this effect renders \(\Xi[\rho_{s},n]\) infinite in the standard quantization scheme, whereas in the alternative quantization scheme these UV contributions from high-frequency modes are sufficiently suppressed. It should be noted that this behavior is quite common when an inhomogeneous boundary condition is imposed on bulk matter.

We emphasize that such a divergence appearing in the standard quantization cannot be removed by the usual "holographic renormalization": one can see that it persists even if we move the AdS boundary to finite \(z\). A similar phenomenon was noticed in Ref. [31], which finds that a generalized susceptibility, expressed as a momentum integral, diverges in the standard quantization.

One may seek to regulate this infinity in the standard quantization scheme. For instance, given a UV cutoff \(k\), we can extract the divergence from the harmonic series, \[\sum_{j=1}^{k}\frac{1}{j}=\ln k+\gamma+\mathcal{O}(1/k)\,,\ \ \ k\gg 1\,, \tag{4.37}\] where \(\gamma\) is the Euler-Mascheroni constant, so the divergence here is logarithmic. When the mode number \(j\) is high enough, the probe limit, and eventually even the classical gravity approximation, break down, and one should take the full backreaction and ultimately the full UV quantum gravity theory into account. If we believe the theory is UV complete, the "unknown" high-energy physics should cancel such a logarithmic divergence. We would then discard \(\ln k\), interpreting it as the contribution from the deformation of spacetime caused by these high-frequency modes. However, this procedure works only for the set of parameters selected here; when we tune these parameters, such as \(m^{2}\), the procedure fails and the infinity has to be regulated anew. From Eq. (B.13) in appendix B one can see that \[\frac{\phi_{v}^{(j)}}{\phi_{s}^{(j)}}\sim w_{j}^{2\Delta-d} \tag{4.38}\] for large \(j\). Thus, in the standard quantization, the general term of \(\Xi\) in Eq. (4.35) scales as \[\Xi\sim\sum j^{2\Delta-d-2}\,. \tag{4.39}\] For our choice of parameters \(2\Delta-d=\sqrt{d^{2}+4m^{2}L^{2}}=1\), this reproduces the harmonic behavior found above and \(\Xi\) diverges; larger values of \(2\Delta-d\) make the divergence even worse. It is still an open question how to regularize and renormalize such a divergence systematically. We leave a general regularization scheme in the standard quantization as an important future problem.

## 5 Commutativity of \(\rho\) and \(\sigma\)

In this section, we give a holographic method to check whether the density matrices of two holographic states commute with each other.
In quantum field theory, we can't usually write the density matrix of quantum states precisely. Assume that the quantum state \(\rho\) is excited by some operator \(\mathcal{O}_{\rho}(x)\) and other quantum state \(\sigma\) is excited by operator \(\mathcal{O}_{\sigma}(y)\). Here \(x,y\) label the points in the background spacetime. It should be noted that even employing the same operator, the commutator \([\mathcal{O}(x),\mathcal{O}(y)]\) is not necessarily vanish for different spacetime points. However, as mentioned before, there exist a gravity dual for the trace of density matrix's product. Based on the holographic description, is it possible to determine whether two operators are commutative? In this section, we propose a feasible strategy to answer this question. Considering two density matrix \(\rho\) and \(\sigma\) in the same Hilbert space, the commutativity of \(\rho\) and \(\sigma\) indicates \[\rho\sigma=\sigma\rho\,. \tag{103}\] One may think we can use their trace to judge whether they are commutative, but the property of trace \(\text{tr}(AB)=\text{tr}(BA)\) rejects this idea. Instead, considering a auxiliary matrix \(A\) \[A=\rho\sigma-\sigma\rho\,, \tag{5.2}\] if \(A=0\) then we can get \[\text{tr}(\rho\sigma\rho\sigma)=\text{tr}(\rho^{2}\sigma^{2})\,. \tag{5.3}\] This result is so straightforward since we can change the position of two density matrix arbitrarily. Conversely given the condition (5.3), \([\rho,\sigma]=0\) is expected to be proved. Firstly, it is not hard to find that \[\text{tr}(\rho\sigma\rho\sigma)=\text{tr}(\rho^{2}\sigma^{2})\implies\text{tr} (A^{2})=0\,. \tag{5.4}\] Since \(A=-A^{\dagger}\), \(A\) can be diagonalized with pure imaginary eigenvalue \[A=\sum_{n}ia_{n}|n\rangle\langle n|,\ a_{n}\in\mathbb{R} \tag{5.5}\] \(\{|n\rangle\}\) is a complete orthonormal set. \(\text{tr}(A^{2})=0\) indicates that \(a_{n}=0\) for all \(n\), by virtue of anti-hermiticity we actually obtain that \[\text{tr}(\rho\sigma\rho\sigma)=\text{tr}(\rho^{2}\sigma^{2})\iff[\rho,\sigma ]=0\,. \tag{5.6}\] From the perspective of holography, there are two bulk geometries corresponding to boundary condition \(\rho\sigma\rho\sigma\) or \(\rho^{2}\sigma^{2}\), as depicted in Fig. 5. In order to determine whether states \(\rho\) and \(\sigma\) are commutative, we should calculate these bulk geometries' partition function respectively. Recall that under the saddle point approximation, the partition function of bulk geometry can be written as the exponential function of corresponding action. So we only need to check whether their corresponding geometric action are equal, i.e. \[S_{\rho\sigma\rho\sigma}\overset{?}{=}S_{\rho^{2}\sigma^{2}}\,. \tag{5.7}\] Furthermore, in probe limit we only need to compare the sector of scalar field, \[\tilde{I}_{\rho\sigma\rho\sigma}\overset{?}{=}\tilde{I}_{\rho^{2}\sigma^{2}}\,. \tag{5.8}\] Without losing generality, we continue to use the scalar field model in previous section. In order to compute the fidelity between \(\rho\) and \(\sigma\), we already calculate the action of bulk gravity that corresponds to the boundary field \((\rho\sigma)^{n}\). For a detailed discussion, we will present such result in the previous section again \[\tilde{I}_{\rho\sigma\rho\sigma}=\frac{\beta\Omega_{0,2}}{8\pi G}\left[\frac{ \rho_{v}^{2}\phi_{s}^{(0)}}{4\phi_{v}^{(0)}}+\sum_{j=1}^{\infty}\frac{(1-\cos j \pi)^{2}}{j^{2}\pi^{2}}\frac{\rho_{v}^{2}\phi_{s}^{(j)}}{2\phi_{v}^{(j)}} \right]\,. 
\tag{5.9}\] Here, we need to set \(n=2\) to recover \(\tilde{I}_{\rho\sigma\rho\sigma}\) from the expression (4.20). Then we need to calculate the action \(\tilde{I}_{\rho^{2}\sigma^{2}}\) for boundary field configuration \(\rho^{2}\sigma^{2}\). Following the same process in previous section, we calculate the action \(\tilde{I}_{\rho^{2}\sigma^{2}}\) but with boundary condition that is different from (4.15) in \(n=2\) \[\tilde{\phi}_{v}(\tau)=\left\{\begin{aligned} \rho_{v}& 0<\tau\leq 2 \beta\\ \sigma_{v}& 2\beta<\tau\leq 4\beta\end{aligned}\right\}\,. \tag{5.10}\] It is straightforward that period \(\tilde{\tau}_{\rho^{2}\sigma^{2}}\) is two times than \(\tilde{\tau}_{\rho\sigma\rho\sigma}\). And it can be shown that \(\tilde{I}_{\rho^{2}\sigma^{2}}\) has the same form as \(\tilde{I}_{\rho\sigma\rho\sigma}\) \[\tilde{I}_{\rho^{2}\sigma^{2}}=\frac{\beta\Omega_{0,2}}{8\pi G} \left[\frac{\rho_{v}^{2}\phi_{s}^{(0)}}{4\phi_{v}^{(0)}}+\sum_{j=1}^{\infty} \frac{(1-\cos j\pi)^{2}}{j^{2}\pi^{2}}\frac{\rho_{v}^{2}\phi_{s}^{(j)}}{2\phi_ {v}^{(j)}}\right]\,. \tag{5.11}\] However, it should be noted that \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) has different values in formula (5.9) and (5.11). More specifically, we know that \(w_{\rho\sigma\rho\sigma}^{(j)}=2\pi j/\tilde{\tau}_{\rho^{2}\sigma^{2}}\) and \[\tilde{\tau}_{\rho^{2}\sigma^{2}}=2\tilde{\tau}_{\rho\sigma\rho \sigma}=4\beta\,. \tag{5.12}\] Then we can obtain that \(w_{\rho\sigma\rho\sigma}^{(j)}=2w_{\rho^{2}\sigma^{2}}^{(j)}\). In previous section, we already get the value of \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) for \(\tilde{I}_{\rho\sigma\rho\sigma}\) and \(j\) can only take the odd term, i.e. \(j=1,3,5,7,\cdots\). Since \(w_{\rho\sigma\rho\sigma}^{(j)}=2w_{\rho^{2}\sigma^{2}}^{(j)}\), based on radial equation (4.24) for \(\tilde{I}_{\rho\sigma\rho\sigma}\), we only take \(j=\frac{1}{2},\frac{3}{2},\frac{5}{2},\frac{7}{2},\cdots\) to get the exact value of \(\tilde{I}_{\rho^{2}\sigma^{2}}\). Why can we do this because \(\phi_{s}^{(j)}/\phi_{v}^{(j)}\) in radial equation (4.24) changes continuously with the continuous value of parameter \(j\). Considering the thermal AdS background, \(\tilde{I}_{\rho\sigma\rho\sigma}\) is related to the sum of series (4.21) \[\Xi_{\rho\sigma\rho\sigma}:=\sum_{j=1,3,5,\cdots}^{\infty}\frac{ \phi_{s}^{(j)}}{2\phi_{v}^{(j)}}=-\frac{\sin[\pi(\Delta-\frac{d}{2})](\Delta- \frac{d}{2})\Gamma(\Delta-\frac{d}{2})^{2}}{2\pi(w_{j}/2)^{2\Delta-d}}\,. \tag{5.13}\] According to the above discussion, \(\Xi\) for \(\tilde{I}_{\rho^{2}\sigma^{2}}\) can be easily seen that \[\Xi_{\rho^{2}\sigma^{2}}=\frac{\Xi_{\rho\sigma\rho\sigma}}{2^{2 \Delta-d}}\,. \tag{5.14}\] Figure 5: The schematic diagram about two boundary conditions relate to the commutativity of \(\rho\) and \(\sigma\). Here the red line labels the distribution of state \(\rho\) and blue dashed line labels the state \(\sigma\). For example, figure 4(a) shows schematically that the bulk geometry with the boundary field configuration \(\rho\sigma\rho\sigma\). So we conclude that \(\tilde{I}_{\rho\sigma\rho\sigma}\neq\tilde{I}_{\rho^{2}\sigma^{2}}\) and \(\rho,\sigma\) are not commutative to each other in our case3. Footnote 3: It is noted that \(\Xi_{\rho^{2}\sigma^{2}}=\Xi_{\rho\sigma\rho\sigma}\) if \(\Delta=d/2\). However, as we claimed before, the mass parameter should satisfy \(m_{\rm BF}^{2}<m^{2}<m_{\rm BF}^{2}+1/L^{2}\) if both \(\tilde{\phi}_{s}(\tau)\) and \(\tilde{\phi}_{v}(\tau)\) can be considered as the source. 
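The mismatch between the two orderings can be made quantitative in the thermal-AdS approximation, where the mode ratios are given by the closed form (4.31) (using the exact black-hole ratios would instead require the numerics of section 4.3). The sketch below evaluates the mode sums entering \(\tilde{I}_{\rho\sigma\rho\sigma}\) and \(\tilde{I}_{\rho^{2}\sigma^{2}}\) with the appropriate source periods, \(2\beta\) for \(\rho\sigma\rho\sigma\) and \(4\beta\) for \(\rho^{2}\sigma^{2}\), and shows numerically that they coincide only when \(2\Delta=d\).

```python
import math

# Rough comparison of the two replica orderings in the thermal-AdS
# approximation (4.31) for the mode ratios (a simplification; the exact
# black-hole ratios come from the numerics of section 4.3).  The boundary
# source has period 2*beta for rho sigma rho sigma and 4*beta for
# rho^2 sigma^2, so the two sums sample different frequencies.

d, Delta, beta, rho_v = 3, 2.0, 1.0, 0.3
nu = Delta - d / 2.0

def chi_thermal_ads(w):
    """phi_s/phi_v in thermal AdS, Eq. (4.31)."""
    return -math.sin(math.pi * nu) * nu * math.gamma(nu)**2 \
           / (math.pi * (w / 2.0)**(2 * nu))

def mode_sum(period, jmax=200_000):
    total = 0.0
    for j in range(1, jmax + 1, 2):          # only odd j contribute
        w = 2.0 * math.pi * j / period
        total += (2.0 / (j * math.pi))**2 * rho_v**2 * chi_thermal_ads(w) / 2.0
    return total

sum_abab = mode_sum(period=2.0 * beta)   # boundary data: rho sigma rho sigma
sum_aabb = mode_sum(period=4.0 * beta)   # boundary data: rho^2 sigma^2
print("mode sum for rho sigma rho sigma:", sum_abab)
print("mode sum for rho^2 sigma^2      :", sum_aabb)
print("difference:", sum_aabb - sum_abab, " (vanishes only when 2*Delta = d)")
```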
Another example considers a Maxwell field \(F_{\mu\nu}\) probing the thermal states dual to the AdS-Schwarzschild black hole. In the probe limit, the Maxwell field \(F_{\mu\nu}\) decouples from the gravity sector \(g_{\mu\nu}\). Therefore, the only difference between the contributions to \({\rm tr}(\rho\sigma\rho\sigma)\) and \({\rm tr}(\rho^{2}\sigma^{2})\) comes from the Maxwell field, \[I_{\rm Maxwell}=-\frac{1}{16\pi G}\int_{\cal B}d^{d+1}x\sqrt{g}F_{\mu\nu}F^{ \mu\nu}\,. \tag{5.15}\] As calculated in section 3, the field strength is determined by \[F_{\mu\nu}=-\frac{Q(\tau)}{r^{d-1}}(d\tau)_{\mu}\wedge(dr)_{\nu},\ Q(\tau)=(d -2)\mu(\tau)r_{h}^{d-2}\,. \tag{5.16}\] Here \(r_{h}=\pi L^{2}/d\beta\) is the horizon radius of the replicated background (whose total Euclidean time is \(4\beta\)), and the chemical potential is now time dependent. For \({\rm tr}(\rho\sigma\rho\sigma)\), the boundary condition on \(\mu(\tau)\) is \[\mu(\tau)=\left\{\begin{array}{ll}\mu_{\rho}&0<\tau\leq\beta\\ \mu_{\sigma}&\beta<\tau\leq 2\beta\\ \mu_{\rho}&2\beta<\tau\leq 3\beta\\ \mu_{\sigma}&3\beta<\tau\leq 4\beta\end{array}\right\}\,, \tag{5.17}\] where \(\mu_{\rho}\) and \(\mu_{\sigma}\) are constants. Although \(\mu(\tau)\) is time dependent, the field strength still has the single nonvanishing component \(F_{\tau r}\). A direct calculation of \(I_{\rm Maxwell}\) for both \({\rm tr}(\rho\sigma\rho\sigma)\) and \({\rm tr}(\rho^{2}\sigma^{2})\) gives \[\begin{split} I_{\rm Maxwell}&=-\frac{1}{16\pi G}\int_{ \cal B}d^{d+1}x\sqrt{g}F_{\mu\nu}F^{\mu\nu}\\ &=-\frac{2(d-2)\beta\Omega_{k,d-1}(\mu_{\rho}^{2}+\mu_{\sigma}^{2} )r_{h}^{d-2}}{8\pi G}\,.\end{split} \tag{5.18}\] This confirms, in the probe limit, the conclusion of section 3 that any two charged thermal states in the same Hilbert space commute.

As shown in the previous two examples, this gives us a universal\({}^{4}\) holographic approach to check whether the density matrices of two holographic states commute with each other: the commutativity of \(\rho\) and \(\sigma\) is determined holographically by the difference between the two bulk geometries dual to \(\rho\sigma\rho\sigma\) and \(\rho^{2}\sigma^{2}\).

Footnote 4: At least in the probe limit.
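The purely algebraic part of this criterion, Eq. (5.6), can also be verified directly in a finite-dimensional toy setting, independently of holography. The following sketch (dimension 6 and the random seed are arbitrary choices) compares \(\mathrm{tr}(\rho\sigma\rho\sigma)\) with \(\mathrm{tr}(\rho^{2}\sigma^{2})\) for a generic pair of density matrices and for two "thermal" states built from the same Hamiltonian.

```python
import numpy as np

# Finite-dimensional sanity check of the criterion (5.6):
# tr(rho sigma rho sigma) = tr(rho^2 sigma^2)  <=>  [rho, sigma] = 0.

rng = np.random.default_rng(0)

def random_density_matrix(dim):
    x = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = x @ x.conj().T
    return rho / np.trace(rho).real

def report(label, rho, sigma):
    lhs = np.trace(rho @ sigma @ rho @ sigma).real
    rhs = np.trace(rho @ rho @ sigma @ sigma).real
    comm = np.linalg.norm(rho @ sigma - sigma @ rho)
    print(f"{label}: lhs - rhs = {lhs - rhs:+.3e},  ||[rho,sigma]|| = {comm:.3e}")

dim = 6
report("generic pair      ", random_density_matrix(dim), random_density_matrix(dim))

# commuting pair: two "thermal" states built from the same toy Hamiltonian
h = rng.normal(size=(dim, dim)); h = (h + h.T) / 2.0
def thermal(beta_):
    evals, evecs = np.linalg.eigh(h)
    m = evecs @ np.diag(np.exp(-beta_ * evals)) @ evecs.T
    return m / np.trace(m)
report("same Hamiltonian  ", thermal(1.0), thermal(2.0))
```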
## 6 Summary

To summarize, this paper uses a holographic approach to calculate the fidelity and relative entropy in several concrete examples. By employing the replica trick, we can analytically calculate these quantum distances between thermal states. The key point is that for two thermal states, the product of their density matrices is still an (un-normalized) thermal state; on the gravity side, this means that the gravity dual of the replicated thermal state is again a Schwarzschild-AdS black hole. We also investigated our approach in the presence of matter fields. By virtue of the \(U(1)\) invariance of the Maxwell field, the conclusions for thermal states can be generalized to charged systems within the grand canonical ensemble, and similar steps can be applied to other cases as well. Without loss of generality, an Einstein-scalar theory was constructed to gain insight into the calculation of the fidelity. For bulk matter, the replicated boundary data imposes an inhomogeneous boundary condition on the equation of motion, so the system under consideration is in general time dependent. For simplicity, we ignored the backreaction of the scalar field on the metric; in other words, the problem reduces to solving for the scalar field in a fixed Schwarzschild-AdS black hole background. We then solved the scalar field's equation of motion in the probe limit. Using the standard holographic renormalization, we derived a semi-analytic expression for the fidelity, in which the contribution of the scalar operator is the sum of an infinite series, and we evaluated this contribution numerically. It was found that the on-shell action is divergent in the standard quantization scheme but convergent in the alternative quantization scheme. We observed that this divergence is caused by high-frequency modes in Fourier space, whereas these UV contributions are suppressed in the alternative quantization scheme. We concluded that this is quite common behavior for bulk matter with inhomogeneous boundary conditions.

At the end of the paper, a holographic method was presented to check whether the density matrices of two holographic states commute. As examples, we used scalar and Maxwell fields to demonstrate this method in the probe limit. The results are as expected: two holographic states excited by a scalar operator generally do not commute, while two charged thermal states do. It should be noted, however, that we did not solve the Einstein field equations with the full backreaction of a time-dependent chemical potential; to the best of our knowledge, the exact bulk solution with a time-dependent chemical potential is still unknown and worth exploring. Although this paper studies the fidelity between states excited by a scalar operator, it would also be interesting to generalize the procedure to other operators. Furthermore, it would be meaningful to generalize our method to study quantum distances for subsystems, which are more practical physical systems. As an important future problem, a more general regularization in the standard quantization scheme needs to be explored. In the presence of matter fields, the probe limit is crucial for our calculation; in principle, one could compute the quantum distance numerically without neglecting the backreaction of the matter field. Moreover, although the divergence found here is also a UV divergence, it is not caused by the AdS boundary, so the usual holographic renormalization cannot be used to remove it. We leave this problem for the future and call for a deeper understanding.

## Appendix A Euclidean Schwarzschild-AdS\({}_{d+1}\) black hole

In \((d+1)\)-dimensional AdS spacetime, the metric of the neutral black hole is given by \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+\frac{r^{2}}{L^{2}}d\Sigma_{k,d-1}^{2} \tag{104}\] with \[f(r)=\frac{r^{2}}{L^{2}}+k-\frac{f_{0}}{r^{d-2}}\,. \tag{105}\] Here \(L\) denotes the AdS radius and the \((d-1)\)-dimensional metric \(d\Sigma_{k,d-1}^{2}\) is defined as \[d\Sigma_{k,d-1}^{2}=\begin{cases}L^{2}d\Omega_{d-1}^{2}&\text{for $k=+1$}\\ d\ell_{d-1}^{2}\equiv\sum_{i=1}^{d-1}dx_{i}^{2}&\text{for $k=0$}\\ L^{2}d\Xi_{d-1}^{2}&\text{for $k=-1$}\end{cases}\,. \tag{106}\] The parameter \(k=-1,0,+1\) corresponds to hyperbolic, planar and spherical horizon geometries, respectively, where \(d\Omega_{d-1}^{2}/d\ell_{d-1}^{2}/d\Xi_{d-1}^{2}\) is the unit metric on \((d-1)\)-dimensional spherical/planar/hyperbolic space. In the following discussion, \(\Omega_{k,d-1}\) denotes the dimensionless volume of \(d\Sigma_{k,d-1}^{2}/L^{2}\).
Here \(f_{0}\) is interpreted as the mass parameter since that is related to the mass of black hole [25] \[M=\frac{(d-1)\Omega_{k,d-1}}{16\pi G_{N}}f_{0}\,. \tag{107}\] One can also get the following relation from the blackening factor \(f(r_{h})=0\) \[f_{0}=r_{h}^{d-2}\left(\frac{r_{h}^{2}}{L^{2}}+k\right)\,. \tag{108}\] The Euclidean Schwarzschild solution is obtained from the ordinary Schwarzschild metric by operating the Wick rotation \(t\rightarrow-i\tau\). In order to regulate the conical singularity located at \(r_{h}\), the imaginary time \(\tau\) should be identified as \[\tau=\tau+\beta \tag{109}\] here the period \(\beta=1/T\) under the natural unit \(k_{B}=1\). \(T\) is the temperature of the black hole, which is given by \[T=\frac{1}{4\pi}\left.\frac{\partial f}{\partial r}\right|_{r=r_{h}}=\frac{1} {4\pi r_{h}}\left(d\frac{r_{h}^{2}}{L^{2}}+k(d-2)\right)\,. \tag{110}\] ## Appendix B Details about the solution for radial equation After variable separation, we obtain the following equation, i.e. radial equation(101) \[R_{j}^{\prime\prime}+\left(\frac{f^{\prime}}{f}-\frac{d-1}{z}\right)R_{j}^{ \prime}-\left(\frac{m^{2}}{z^{2}f}+\frac{w_{j}^{2}}{f^{2}}\right)R_{j}=0\,,f(z )=1-z^{d}\,. \tag{111}\] In this paper, we assumes horizon locates at \(z=z_{h}=1\) and \(w_{j}=\frac{dj}{2n}\). At the horizon we have following asymptotic solution \[R_{j}=a_{1}(1-z)^{w_{j}/d}(1+\cdots)+a_{2}(1-z)^{-w_{j}/d}(1+\cdots)\,. \tag{110}\] Near the AdS boundary we have following asymptotically behavior: \[R_{j}=\phi_{s}^{(j)}z^{d-\Delta}(1+\cdots)+\phi_{v}^{(j)}z^{\Delta}(1+\cdots)\,. \tag{111}\] The horizon at the Euclidean black hole will be a smooth tip and scalar field should be finite here. Thus, we then have following boundary condition \[a_{2}=0\,. \tag{112}\] In subsection 4.3, we need to determine whether the series \(\Xi[\rho_{v},1/2]\) converges in different quantization schemes. According to comparison test, we obtain that \[\left.\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}-\left.\frac{\phi_{s}^{(j)}}{\phi_{ v}^{(j)}}\right|_{\text{thermal AdS}}=\mathcal{O}(j^{a})\,,a=-3.81678\approx-4\,.\] We note that the numerical result of \(a\) is very close to \(-4\). In fact, we can analytically confirm this through making a variable transformation for radial equation, \[s=\int_{0}^{z}\frac{w_{j}}{f(x)}dx\,,\ \ R_{j}=Z_{j}(s)z(s)\,. \tag{113}\] Then the radial equation is reduced into \[\frac{d^{2}}{ds^{2}}Z(s)-[V(s)+1]\,Z(s)=0\,,\ \ V(s)=\frac{f(z)\left[m^{2}+(d-1) f(z)-zf^{\prime}(z)\right]}{w_{j}^{2}z^{2}} \tag{114}\] where \(z\) is the function of \(s\) defined by Eq. (113) and the boundary condition at horizon becomes \[Z(s)\to 0\,,\ \ s\rightarrow\infty\,. \tag{115}\] From the definition (113) we see that \(s=w_{j}z+\mathcal{O}(z^{d+1})\), \(f(z)=1-(s/w_{j})^{d}+\mathcal{O}(s/w_{j})^{d+1}\) and \[Z[s]=\phi_{v}^{(j)}w_{j}^{1-\Delta}s^{\Delta-1}(1+\cdots)+\phi_{s}^{(j)}w_{j} ^{\Delta+1-d}s^{d-\Delta-1}(1+\cdots) \tag{116}\] near the AdS boundary. In principle, Eq. (114) has no analytical solution. However, we can find an analytically approximated solution, which approaches to the exact solution when \(w_{j}\) becomes large enough. We note that \[V(s)\to V^{(0)}(s)=\frac{m^{2}+d-1}{s^{2}} \tag{117}\] when \(w_{j}\gg 1\). We then use \(V^{(0)}(s)\) to replace the potential \(V(s)\) and obtain the analytical solution \[Z_{j}^{(0)}(s)=b_{1}\sqrt{s}I_{\Delta-d/2}(s)+b_{2}\sqrt{s}K_{\Delta-d/2}(s)\,. \tag{118}\] Here \(I_{v}(s)\) and \(K_{v}(s)\) are the first and second kinds of modified Bessel functions. 
To match the boundary condition (B.7), we have to set \(b_{1}=0\). For small \(s\), we have \[2K_{v}(s)=\Gamma(v)\left(\frac{s}{2}\right)^{-v}(1+\cdots)+\Gamma(-v)\left( \frac{s}{2}\right)^{v}(1+\cdots)\] (B.11) if \(v\) is not an integer. We then have \[Z_{j}^{(0)}(s)=\frac{b_{2}}{2}\left[\Gamma(\Delta-d/2)2^{\Delta-d/2}s^{2- \Delta}(1+\cdots)+\Gamma(d/2-\Delta)2^{d/2-\Delta}s^{\Delta-1}(1+\cdots)\right]\] (B.12) for small \(s\). Using the relationship \(s=w_{j}z+\mathcal{O}(z^{d+1})\) for small \(z\) and relationship (B.8), we then obtain \[\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}=-\frac{\sin[\pi(\Delta-\frac{d}{2})]( \Delta-\frac{d}{2})\Gamma(\Delta-\frac{d}{2})^{2}}{\pi(w_{j}/2)^{2\Delta-d}}\,.\] (B.13) For finite but large \(c_{n}\), we can find that \(z=s/w_{j}+\mathcal{O}(1/w_{j}^{d+1})\) and so \[V(s)=V^{(0)}(s)+\frac{(m^{2}+d)s^{d-2}}{w_{j}^{d}}+\cdots\,.\] (B.14) This shows that \[\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}=-\frac{\sin[\pi(\Delta-\frac{d}{2})]( \Delta-\frac{d}{2})\Gamma(\Delta-\frac{d}{2})^{2}}{\pi(w_{j}/2)^{2\Delta-d}} \left[1+\mathcal{O}(w_{j}^{-d})\right]\,.\] (B.15) We note that the solution in thermal AdS, i.e. \(f(z)=1\) is equal to the analytical solution (B.10) with \(V^{(0)}\). In other words, the scalar modes with \(w_{j}\gg 1\) can only probe the UV region which is far from the horizon. According to this fact and the above formula (B.15), we can get the difference \[\left.\frac{\phi_{s}^{(j)}}{\phi_{v}^{(j)}}-\left.\frac{\phi_{s}^{(j)}}{\phi_{ v}^{(j)}}\right|_{\text{thermal AdS}}=\mathcal{O}(1/w_{j}^{2\Delta})\,.\] (B.16) So we conclude that \(a=-3.81678\approx-4\) is not a coincidence, which is consistent with the previous analysis. ## Acknowledgements This work is supported by the Natural Science Foundation of China under Grant No. 12005155.
2303.09024
DeeBBAA: A benchmark Deep Black Box Adversarial Attack against Cyber-Physical Power Systems
An increased energy demand, and environmental pressure to accommodate higher levels of renewable energy and flexible loads like electric vehicles have led to numerous smart transformations in the modern power systems. These transformations make the cyber-physical power system highly susceptible to cyber-adversaries targeting its numerous operations. In this work, a novel black box adversarial attack strategy is proposed targeting the AC state estimation operation of an unknown power system using historical data. Specifically, false data is injected into the measurements obtained from a small subset of the power system components which leads to significant deviations in the state estimates. Experiments carried out on the IEEE 39 bus and 118 bus test systems make it evident that the proposed strategy, called DeeBBAA, can evade numerous conventional and state-of-the-art attack detection mechanisms with very high probability.
Arnab Bhattacharjee, Tapan K. Saha, Ashu Verma, Sukumar Mishra
2023-03-16T01:36:18Z
http://arxiv.org/abs/2303.09024v1
# DeeBBAA: A benchmark Deep Black Box Adversarial Attack against Cyber-Physical Power Systems ###### Abstract An increased energy demand, and environmental pressure to accommodate higher levels of renewable energy and flexible loads like electric vehicles have led to numerous smart transformations in the modern power systems. These transformations make the cyber-physical power system highly susceptible to cyber-adversaries targeting its numerous operations. In this work, a novel black box adversarial attack strategy is proposed targeting the AC state estimation operation of an unknown power system using historical data. Specifically, false data is injected into the measurements obtained from a small subset of the power system components which leads to significant deviations in the state estimates. Experiments carried out on the IEEE 39 bus and 118 bus test systems make it evident that the proposed strategy, called DeeBBAA, can evade numerous conventional and state-of-the-art attack detection mechanisms with very high probability. Cyber-security, Stealthy false data injection attack, Adversarial Attack against Regression, Deep Learning, AC state estimation. ## 1 Introduction Growing energy needs of the 21st century, supplemented with increasing environmental pressure to replace conventional methods of energy generation with highly intermittent and unreliable renewable energy sources, have led to significant transformations in conventional power systems. Over the last few years, the confluence of information and automation technologies has improved the reliability, resilience, and operational efficiency of these modern conduits of energy, rendering them a dynamic cyber-physical nature [1]. In order to monitor and operate the power grid efficiently and prepare it against potential contingencies, power system operators regularly perform a critical task known as the Power System State Estimation (PSSE), which involves filtering and processing measurements from grid sensors in a centralized control center. The measurements are collected by a Supervisory Control and Data Acquisition (SCADA) system and communicated to the control center via Remote Terminal Units (RTUs). The real-time operating status of the grid obtained from the outcome of the PSSE is then used to improve situational awareness, make economic decisions about energy dispatch, and take contingency actions against threats that endanger the reliability of the grid [2]. In a smart cyber-physical power system where remote transmission of crucial information is carried out, it is imperative to ensure the security of the communication channels to protect the system against intrusion and subsequent degradation of operational integrity. In the face of growing threats from cyberattacks, like the one that led to the disruption of power distribution in Ukraine in 2015 [3], conventional defenses like firewalls and air gaps are deemed inadequate. Under these circumstances, envisioning and anticipating the myriad ways remote attackers can target these cyber-physical systems has become a need of the hour [4]. The objective of this study is to analyze the vulnerability of a smart cyber-physical power system, particularly the state estimation operation, against a specific type of cyber-attack called the Stealthy False Data Injection Attack (SFDIA) that compromises data integrity while being able to evade Bad Data Detection (BDD) by the control center and can cause load shedding, economic loss, and even blackouts [2, 5]. 
A novel methodology leveraging transfer-based black box adversarial attack generation is presented for stealthy false data injection where the adversary is assumed to have no knowledge of the targeted power system other than access to incomplete historical data pertaining to a subset of power system components. Further analysis shows that the attacks generated by the proposed approach can bypass a variety of state-of-the-art Bad Data Detection algorithms, including data-driven methods and physics-inspired statistical consistency checking algorithms, with very high probability. ### _Related Works_ To carry out SFDIA, an attacker needs to compromise real-time power measurements by intruding into the commu nication network between the remote terminal units and the control center. While numerous works addressing the design of SFDIAs exists, most of these primarily study a simplified model of the power system based on a linear DC power flow ([6, 7, 8]). In a more practical AC power flow setting, where the power system measurements are a non-linear function of the power system states, the design of SFDIAs becomes difficult due to the spurious nature of the non-linear AC power flow model [9, 10, 2]. Indeed, SFDIAs built using the linearized DC power flow model (dc-SFDIA) can be easily detected by an AC power flow-based BDD (ac-BDD) [11]. Power systems are thus more vulnerable to ac-SFDIAs that can bypass ac-BDD and hence are more relevant to this study. The naive way to design ac-SFDIA requires the attacker to have complete knowledge of the target power network, including its topology and line parameters, and access to real-time state estimates, which are both highly impractical assumptions. Slightly relaxing the omnipotence assumption on the attacker, early works like [9, 12, 13, 14, 15, 16, 17] showed that strong ac-SFDIA can be carried out even if the attacker has access to only the model information pertaining to the entire or a localized region of the targeted power network. However, given the highly classified nature of such power network model information, it still requires the attacker to be an extremely powerful and well-connected entity. Completely doing away with the omnipotence assumptions, recent research works have proposed model-free methods that use historical power system data to generate ac-SFDIA against an unknown power system model. Using historical data to estimate network parameters, targeted blind FDIAs were generated in [18, 19] formulated the ac-SFDIA generation task as a constrained optimization task on the measurement vector and used a geometric approach to derive sufficient conditions for evading BDD. A similar approach using principal component analysis was proposed in [20]. Generative modeling is another widely used tool for data driven SFDIA generation. Wasserstein GAN [21] and a self-attention-based GAN [22] frameworks were proposed in the literature for modeling ac-SFDIA against an unknown power system. A major limitation of the aforementioned works is that their evasive properties are tested against a very limited set of Bad Data Detection algorithms, primarily the conventional Largest Normalized Residue Test and the Chi-Squared Test algorithms. Numerous powerful statistical methods and data-driven algorithms for SFDIA detection have since been proposed in the literature, and whether these ac-SFDIA design strategies are capable of bypassing them is not known. 
Over the years, learning-based algorithms have found applications in various operations of the cyber-physical power system, primarily catering to state estimation, forecasting, and control operations [2]. Adversarial attacks threaten the operational integrity of such algorithms [23]. Depending on whether the attacker has complete knowledge of the target model, adversarial attacks can be either white-box or black-box in nature [24, 25]. Black box attacks are more practical, albeit more difficult to design, as the target model is unknown. Designing black-box attacks against cyber-physical systems like a power system comes with added difficulties because even though adversarial attacks are demonstrated to be extremely effective against data-driven models, they can easily violate the physics of the power network, thus risking easy detection by BDD safeguards that carry out statistical and physical consistency checks of the system measurements data [23, 26, 27]. The objective of this work is to design a black box adversarial attack strategy targeting the state estimation operation of an unknown power system, using incomplete historical data pertaining to a small subset of the power system components in such a way that they can evade both data-driven and physics-inspired conventional and statistical Bad Data Detectors. Two existing works in literature ([28, 29]) are closely related to the proposed work. In [28], a sophisticated GAN framework was proposed to generate black box adversarial perturbations against data-driven control strategies in a power system, particularly targeting transient stability control. The adversarial perturbations were designed specifically to misdirect data-driven control algorithms while bypassing conventional BDD. A major limitation of the algorithm is that it needs to query the target model and use the corresponding outputs during the offline training stage of the GAN, which is highly impractical primarily because querying an online control or estimation algorithm in a power system without raising suspicions is almost impossible and also because no restrictions are imposed on the number of queries required to be made. Unlike this, the proposed method needs limited historical data from the targeted unknown power system only once during training in order to develop a substitute model partially mimicking the unknown state estimation operation, instead of continuous querying. In [29] a joint adversarial attack and stealthy false data injection attack was proposed in an attempt to bypass both conventional and data-driven BDD. A DC power flow model was used for SFDIA generation, and the attacker was assumed to have complete knowledge of the power system model and the data-driven BDD, which was formulated using a Multi-layer Perceptron classifier. The authors also demonstrated the lack of transferability of their proposed attack strategy against unknown SFDIA detectors in the black box setting, unlike in the proposed method which is a completely black box SFDIA generator that is demonstrated to be able to evade multiple BDD safeguards. ### _Key Contributions_ The key contributions of this work are enlisted below: 1. DeeBBAA is a benchmark black box adversarial attack strategy that directly targets an unknown non-linear AC-PSSE module of a cyber -physical power system. 
Using a substitute neural regression network, it mimics the physics behind the AC-PSSE operation using incomplete historical data following which adversarial examples generated against the substitute model are used as stealthy FDI attack vectors. No querying interaction is necessary between the attacker and the target model during training or inference stages. 2. A generic strategy is presented for developing adversarial examples targeting a regression network following which a novel convex reformulation is proposed for fast attack generation. The spread and magnitude of the attack can be adjusted using user-tunable parameters. 3. Unlike existing works where adversarial attacks are generated against specific data-driven target models, the attacks carried out using DeeBBAA can evade a wide range of SFDIA detection algorithms including physics-inspired detectors like the conventional residue-based BDD algorithms like LNRT and \(\chi^{2}\) test, and statistical consistency-evaluating methods as well as supervised and unsupervised learning-based attack detectors. 4. The attacker neither requires any privileged power system information like network topology or line parameters, nor do they have the requirement to know the SFDIA detection algorithms put in place by the operator for safeguarding the power network against data-manipulation. Only one time incomplete historical power system data pertaining to a small subset of the power system components is required to design the attacks. 5. The adversary can modulate the dimension of the attack at will, i.e., it can target any random subset of power system components. Two broad types of target/attack regions are considered in this work, providing greater flexibility to the adversary in a way that they can choose to attack components that are relatively less secure than others. 6. The attack model is versatile, i.e, can be used for targeting any system from which one-time partial and limited historical data can be accessed by an adversary during reconnaissance and is transferable, scalable and computationally efficient. ### _Organization_ The rest of the paper is organised as follows: Section 2 provides a brief introduction to power system fundamentals, AC state estimation, conventional bad data detection algorithms and adversarial attacks. Section 3 details the proposed attack model. In section 4, simulation and experimental details and results corresponding to the implementation of DeeBBAA are presented. The paper is concluded in Section 5. ### _Notations_ \(\mathbb{R}\) and \(\mathbb{C}\) denote the sets of real and complex numbers. \(\mathbb{S}^{n}\) and \(\mathbb{H}^{n}\) represent the sets of \(n\times n\) real symmetric and complex Hermitian matrices respectively. \([n]\) denotes the index set \(1,2,...,n\). \(\#(.)\) gives the cardinality of a set. Vectors are written in lowercase bold letters, matrices are represented using bold uppercase letters. \(\textbf{0}_{n}\), \(\textbf{1}_{n}\), \(\textbf{0}_{m\times n}\), \(\textbf{I}_{n\times n}\) denote the \(1\times n\) zero vector, the \(1\times n\) one vector, the \(m\times n\) zero matrix and the \(n\times n\) identity matrix respectively. \(W_{ij}\) and \(v_{j}\) represents the \((i,j)\)-the entry of matrix \(\boldsymbol{W}\) and the \(j\)-th element of vector \(\boldsymbol{v}\) respectively. \(\boldsymbol{W}\succeq 0\) implies that \(\boldsymbol{W}\) is Hermitian and positive semidefinite(PSD). \((.)^{T}\) and \((.)^{*}\) represent the transpose and conjugate transpose operators. 
\(\Re(.)\), \(\Im(.)\) and \(Tr(.)\) determine the real part, imaginary part and trace of a scalar/matrix respectively. \(\tilde{j}=\sqrt{-1}\). The Hadamard product is represented using \(\odot\). \(\angle x\) and \(|x|\) represent the angle and magnitude of a complex scalar. \(||.||_{p}\) represents the p-norm. ## 2 Background ### _Power System Modelling_ The electric power grid can be modelled as a graph \(\mathcal{G}:=\{\mathcal{N},\mathcal{L}\}\), where \(\mathcal{N}:=[N]\) and \(\mathcal{L}:=[M]\) represents its set of buses and branches respectively. Each branch \(l\in\mathcal{L}\), where \(l\) connects nodes \(s\) to \(t\), is characterized by its admittance, \(y_{st}\) that represents the ease with which electrical current can flow through the branch. The branch admittance matrix \(\boldsymbol{Y}\in\mathbb{C}^{N\times N}\) can be understood as a weighted laplacian matrix of \(\mathcal{G}\) whose weights are given by the admittances in each branch. The from and to branch admittance matrices \(\boldsymbol{Y_{s}}\in\mathbb{C}^{M\times N}\) and \(\boldsymbol{Y_{r}}\in\mathbb{C}^{M\times N}\) represent weighted branch to node incidence matrices. The power grid topology is implicit in \(\boldsymbol{Y},\boldsymbol{Y_{s}}\) and \(\boldsymbol{Y_{r}}\). Referring to the 2-bus system in Fig. 1, \(\boldsymbol{Y},\boldsymbol{Y_{s}}\) and \(\boldsymbol{Y_{r}}\) can be written as shown below: \[\boldsymbol{Y}=\begin{bmatrix}y_{k}+y_{kk^{\prime}}&-y_{kk^{\prime}}\\ -y_{kk^{\prime}}&y_{k^{\prime}}+y_{kk^{\prime}}\end{bmatrix} \tag{1}\] \[\boldsymbol{Y_{s}}=\begin{bmatrix}y_{k}+y_{kk^{\prime}}&-y_{kk^{\prime}} \end{bmatrix},\boldsymbol{Y_{r}}=\begin{bmatrix}-y_{kk^{\prime}}&y_{k^{\prime} }+y_{kk^{\prime}}\end{bmatrix} \tag{2}\] where \(y_{k}\) represents the admittance-to-ground at bus \(k\). For more details on the construction of the \(\boldsymbol{Y},\boldsymbol{Y_{s}}\) and \(\boldsymbol{Y_{r}}\), refer [30]. The state of a power system, denoted by \(\boldsymbol{x}\), consists of the bus voltage vector \(\boldsymbol{x}=\boldsymbol{v}=[v_{1},\cdots,v_{N}]^{T}\in\mathbb{C}^{N}\), where \(v_{j}\in\mathbb{C}\) is the complex voltage at bus \(j\in\mathcal{N}\) with magnitude \(|v_{j}|\) and phase \(\theta_{j}=\angle v_{j}\). The nodal current injection vector \(\boldsymbol{i}=\boldsymbol{Y_{v}}\). The sending and receiving end branch currents are given by \(\boldsymbol{i_{s}}=\boldsymbol{Y_{s}}\boldsymbol{v}\) and \(\boldsymbol{i_{r}}=\boldsymbol{Y_{r}}\boldsymbol{v}\), respectively. Let \(\{\boldsymbol{a_{1}},\cdots,\boldsymbol{a_{N}}\}\) and \(\{\boldsymbol{b_{1}},\cdots,\boldsymbol{b_{M}}\}\) be the sets of canonical vectors in \(\mathbb{R}^{N}\) and \(\mathbb{R}^{M}\), respectively. Power system measurements include voltage magnitude and real and reactive power injection measurements at each bus and the real and reactive sending and receiving end branch power flow measurements at each bus. The measurements can be represented as non-linear functions of the system state using AC power flow equations as shown below: 1. _Voltage Magnitude:_ The voltage magnitude at bus \(k\) is given by \(|v_{k}|=\sqrt{\Re(v_{k})^{2}+\Im(v_{k})^{2}}\) Fig. 1: Depiction of a simple 2 bus system with \(N=2\) and \(M=1\). A load with real and reactive power, \(p_{k^{\prime}}\) and \(q_{k^{\prime}}\) is connected to bus \(k^{\prime}\) and a generator, with measurements of real nodal power injection, \(p_{k}\) and voltage magnitude \(|v_{k}|\) to bus \(k\). The branch power flows over line \(l\) are also shown. 2. 
_Bus Power Injection_: The complex power injection at bus \(k\), consisting of the real and reactive powers \(p_{k}+\hat{j}q_{k}\), is given, with \(\mathbf{A}_{k}=\mathbf{a}_{k}\mathbf{a}_{k}^{T}\), by: \[p_{k}=\Re(i_{k}^{*}v_{k})=Tr(\tfrac{1}{2}(\mathbf{Y}^{*}\mathbf{A}_{k}+\mathbf{A}_{k}\mathbf{Y})\mathbf{v}\mathbf{v}^{*})\,,\qquad q_{k}=\Im(i_{k}^{*}v_{k})=Tr(\tfrac{1}{2\hat{j}}(\mathbf{Y}^{*}\mathbf{A}_{k}-\mathbf{A}_{k}\mathbf{Y})\mathbf{v}\mathbf{v}^{*})\,. \tag{3}\] 3. _Branch Power Flows_: The sending and receiving end power flows in a branch \(l\in\mathcal{L}\) connecting node \(k\) to node \(k^{\prime}\) are given by: \[p_{l}^{s}=\Re([\mathbf{i_{s}}]_{l}^{*}v_{k})=Tr(\tfrac{1}{2}(\mathbf{Y_{s}}^{*}\mathbf{b}_{l}\mathbf{a}_{k}^{T}+\mathbf{a}_{k}\mathbf{b}_{l}^{T}\mathbf{Y_{s}})\mathbf{v}\mathbf{v}^{*})\,,\qquad q_{l}^{s}=\Im([\mathbf{i_{s}}]_{l}^{*}v_{k})=Tr(\tfrac{1}{2\hat{j}}(\mathbf{Y_{s}}^{*}\mathbf{b}_{l}\mathbf{a}_{k}^{T}-\mathbf{a}_{k}\mathbf{b}_{l}^{T}\mathbf{Y_{s}})\mathbf{v}\mathbf{v}^{*})\,, \tag{4}\] \[p_{l}^{r}=\Re([\mathbf{i_{r}}]_{l}^{*}v_{k^{\prime}})=Tr(\tfrac{1}{2}(\mathbf{Y_{r}}^{*}\mathbf{b}_{l}\mathbf{a}_{k^{\prime}}^{T}+\mathbf{a}_{k^{\prime}}\mathbf{b}_{l}^{T}\mathbf{Y_{r}})\mathbf{v}\mathbf{v}^{*})\,,\qquad q_{l}^{r}=\Im([\mathbf{i_{r}}]_{l}^{*}v_{k^{\prime}})=Tr(\tfrac{1}{2\hat{j}}(\mathbf{Y_{r}}^{*}\mathbf{b}_{l}\mathbf{a}_{k^{\prime}}^{T}-\mathbf{a}_{k^{\prime}}\mathbf{b}_{l}^{T}\mathbf{Y_{r}})\mathbf{v}\mathbf{v}^{*})\,. \tag{5}\] Let \(\mathbf{z}=[\{|v_{k}|,p_{k},q_{k}\}_{\forall k\in\mathcal{N}},\{p_{l}^{s},q_{l}^{s},p_{l}^{r},q_{l}^{r}\}_{\forall l\in\mathcal{L}}]\in\mathbb{R}^{N_{m}}\), where \(N_{m}=3\times N+4\times M\), represent the vector consisting of all power system measurements, and let the complete set of non-linear AC power flow equations mapping the states to the measurements shown above be compactly denoted by \(\mathbf{h}:\mathbb{C}^{N}\rightarrow\mathbb{R}^{N_{m}}\). Then the forward relation between power system measurements and states can be written as \[\mathbf{z}=\mathbf{h}(\mathbf{x})+\zeta\,, \tag{6}\] where \(\zeta\) is the measurement noise. In the subsequent sections, the following alternative representation of the state vector will be used instead of the complex representation: \[\mathbf{x}=[|v_{1}|,\cdots,|v_{N}|,\theta_{1},\cdots,\theta_{N}]\in\mathbb{R}^{2N}\,.\] ### _AC State Estimation_ Voltage and power measurements from buses and lines are collected using sensors and sent to the SCADA system through Remote Terminal Units (RTUs). These are then sent to the control center over communication channels where state estimation is executed. Given the noisy measurements \(\mathbf{z}\), an iterative weighted least squares problem is solved by the non-linear AC-PSSE module to obtain accurate estimates of the states: \[\hat{\mathbf{x}}=\arg\min_{\mathbf{x}}[(\mathbf{z}-\mathbf{h}(\mathbf{x}))^{T}\mathbf{R}^{-1}(\mathbf{z}-\mathbf{h}(\mathbf{x}))]\,, \tag{7}\] where \(\mathbf{R}\) is the error covariance matrix of the measurement vector and \(\hat{\mathbf{x}}\) represents the estimated state vector. ### _Conventional Bad Data Detection Algorithms_ In order to detect the presence of deliberately induced false data in the measurement vectors, power system operators conventionally employ Bad Data Detection (BDD) algorithms to isolate bad data and prevent false state estimates from being used for downstream tasks, as shown in Fig. 2.
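To make Eq. (7) concrete, the iterative weighted least squares estimator can be sketched as a plain Gauss–Newton loop. The snippet below is only a generic illustration, not the operator's actual AC-PSSE implementation; `h` and `jac_h` stand in for the AC power flow map \(\mathbf{h}(\cdot)\) and its Jacobian.

```python
import numpy as np

def wls_state_estimation(z, h, jac_h, x0, R, max_iter=20, tol=1e-8):
    """Iterative WLS solver for Eq. (7): minimize (z - h(x))^T R^{-1} (z - h(x)).

    z     : measurement vector
    h     : callable mapping a state vector x to model measurements h(x)
    jac_h : callable returning the Jacobian dh/dx at x
    x0    : initial state guess (e.g. a flat voltage profile)
    R     : measurement error covariance matrix
    """
    x = np.asarray(x0, dtype=float).copy()
    W = np.linalg.inv(R)                          # weight matrix R^{-1}
    for _ in range(max_iter):
        r = z - h(x)                              # measurement residual
        H = jac_h(x)
        G = H.T @ W @ H                           # gain matrix
        dx = np.linalg.solve(G, H.T @ W @ r)      # Gauss-Newton step
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

The estimate \(\hat{\mathbf{x}}\) returned by such a solver is exactly what the residual-based detectors described next operate on.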
Conventional BDD strategies include the Largest Normalized Residue test (LNRT) and the \(\chi^{2}\) test. The procedure involves computing the difference between the measurement values collected from the power network, represented by \(\mathbf{z}\), and those computed from the estimated states \(\hat{\mathbf{x}}\), given by \(\mathbf{h}(\hat{\mathbf{x}})\). After obtaining the measurement residue vector \(\mathbf{r}\), it is normalized and a hypothesis test is carried out on its norm \(||\mathbf{r}^{norm}||\). Depending on whether the \(L_{2}\) norm or the \(L_{\infty}\) norm is used for the hypothesis test, the procedure is called the \(\chi^{2}\) test or the LNRT. A conventional BDD algorithm can be represented using the following equations: \[\mathbf{r}=|\mathbf{z}-\mathbf{h}(\hat{\mathbf{x}})|\,,\quad\mathbf{K}=\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H}\,,\quad\mathbf{B}=\mathbf{I}-\mathbf{H}\mathbf{K}^{-1}\mathbf{H}^{T}\mathbf{R}^{-1}\,,\quad r_{i}^{norm}=\frac{|z_{i}-h(\hat{x})_{i}|}{\sqrt{R_{ii}B_{ii}}}\,, \tag{8}\] where \(\mathbf{H}\) is the measurement Jacobian. The null and alternative hypotheses are \(\mathcal{H}_{0}\): no attack has taken place, and \(\mathcal{H}_{1}\): the system has been attacked. The \(\chi^{2}\) test is given by \[||\mathbf{r}^{norm}||_{2}\ \underset{\mathcal{H}_{0}}{\overset{\mathcal{H}_{1}}{\gtrless}}\ \tau_{2}\,, \tag{9}\] and the LNRT by \[||\mathbf{r}^{norm}||_{\infty}\ \underset{\mathcal{H}_{0}}{\overset{\mathcal{H}_{1}}{\gtrless}}\ \tau_{\infty}\,,\] where \(\tau_{2}\) and \(\tau_{\infty}\) are pre-specified thresholds obtained by fixing a specific false alarm rate. Fig. 2: Flow diagram of Conventional Bad Data Detection in Power systems. ### _Stealthy False Data Injection Attacks_ In an FDIA, the attacker adds a malicious vector to the acquired measurements to force the AC-PSSE to converge to a different state value than expected: \[\mathbf{z}_{a}=\mathbf{z}+\mathbf{a}=\mathbf{h}(\hat{\mathbf{x}}+\mathbf{c})+\zeta\,.\] Here \(\mathbf{a}\) is the attack vector added to the original measurements \(\mathbf{z}\), leading to the introduction of an unsolicited deviation \(\mathbf{c}\) in the estimated states. A stealthy FDIA vector can bypass the conventional BDD with very high probability. With an AC power system model, a sufficient condition on the attack vector \(\mathbf{a}\) for bypassing the conventional residue-based BDD is given by [31]: \[\mathbf{a}=\mathbf{h}(\hat{\mathbf{x}}+\mathbf{c})-\mathbf{h}(\hat{\mathbf{x}})\,, \tag{10}\] since if \(\mathbf{r_{i}}\) and \(\mathbf{r_{f}}\) denote the measurement residue vectors before and after the SFDIA, then \[\mathbf{r_{f}}=\mathbf{z}_{a}-\mathbf{h}(\hat{\mathbf{x}}+\mathbf{c})=\mathbf{z}+\mathbf{a}-\mathbf{h}(\hat{\mathbf{x}}+\mathbf{c})=\mathbf{z}-\mathbf{h}(\hat{\mathbf{x}})=\mathbf{r_{i}}\,, \tag{11}\] implying that the conventional residue-based BDD will not be able to distinguish an attacked sample from a benign one. However, in order to compute \(\mathbf{a}\) using Eq. (10), the attacker needs complete knowledge of the power system, without which \(\mathbf{h}(.)\) cannot be computed; the real-time state estimates \(\hat{\mathbf{x}}\) are also required. We call this the 'Perfect SFDIA', which requires the attacker to be omnipotent and is hence impractical. The attacker thus needs to come up with innovative AC-SFDIA design methods with a low information requirement. ### _Adversarial attacks_ Adversarial attacks have been historically studied in the context of classification tasks like image classification, segmentation, and speech recognition [25, 26, 32, 33, 34]. 
In this proposed work, however, the adversary imitates the unknown AC-PSSE module using a substitute regression model trained using incomplete historical data from the power system and aims to generate adversarial examples with the substitute model as the target. Hence existing methods for adversarial attack design cannot be directly used in this case. The task of finding a suitable adversarial attack vector against a regression network, \(g(\mathbf{x}):\mathbb{R}^{p}\rightarrow\mathbb{R}^{q}\) is formulated as a constrained optimization problem as shown below: \[min_{\mathbf{\eta}} \|\mathbf{\eta}\|_{2} \tag{12}\] \[s.t. \|g(\mathbf{x}+\mathbf{\eta})\text{-}g(\mathbf{x})\|_{2}\geq\rho \tag{13}\] or equivalently, \[max_{\mathbf{\eta}} \|g(\mathbf{x}+\mathbf{\eta})\text{-}g(\mathbf{x})\|_{2} \tag{14}\] \[s.t. \|\mathbf{\eta}\|_{2}\leq\epsilon \tag{15}\] The optimization problem (12, 13) aims to find a perturbation, \(\mathbf{\eta}\), with minimal L2 norm such that the norm of the difference between the output of the regression network, \(g\), before and after the perturbation is added to its input, \(\mathbf{x}\), is greater than a threshold, \(\rho\). Equivalently, (14, 15) represents the dual formulation. In the subsequent sections, the dual formulation will be used. \(\mathbf{\eta}\) can then be called an adversarial perturbation against \(g\). An immediate problem however is that the objective in Eq. 14 is a highly non-convex function of \(\mathbf{\eta}\) due to the neural network \(g\). In Section 3.3, a novel convex reformulation of (14, 15) is presented that requires significantly less computational effort to develop arbitrarily "good" adversarial perturbations. Existing works, that propose adversarial attacks against learning based anomalous data detection models in the power system, design white box attacks explicitly targeting a particular defensive algorithm modelled as a classification network and hence are not effective against other defensive systems. DeeBBAA, the adversarial attack strategy proposed in this work, instead designs transfer based black box adversarial attacks against the AC-PSSE itself and the adversarial examples hence generated are highly transferable and can bypass a wide range of defenses, including conventional, statistical and learning based approaches. ## 3 Proposed Attack Strategy This section consists of a detailed description of the proposed adversarial cum stealthy false data injection strategy using the DeeBBAA framework from the attacker's perspective. There are three main steps involved in the process: 1. Identification of Attack Region and Reconnaissance 2. Partial Approximation of the Unknown AC-PSSE 3. Adversarial Optimization Against Proxy State Estimator These three steps implicitly conform to the two stage process involved in the design of transfer based black box attacks. The first two steps correspond to the collection of historical power system data and development of a substitute model mimicking the target AC-PSSE module which is carried out offline. Whereas the third step corresponds to the online design of an adversarial perturbation against real-time measurements using white box strategies on the substitute model. The following parts describe the three steps in further detail. 
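As a point of reference, the generic formulation (14)–(15) of Section 2.5 could in principle be attacked directly by projected gradient ascent on \(\mathbf{\eta}\). The short PyTorch sketch below does this for an arbitrary differentiable regression network `g`; it is shown only for comparison, since DeeBBAA instead uses the convex reformulation developed in Section 3.3.

```python
import torch

def pgd_regression_attack(g, x, eps, steps=100, lr=0.05):
    """Projected gradient ascent for max ||g(x+eta) - g(x)||_2 s.t. ||eta||_2 <= eps."""
    base = g(x).detach()
    eta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.SGD([eta], lr=lr)
    for _ in range(steps):
        deviation = torch.linalg.vector_norm(g(x + eta) - base)
        optimizer.zero_grad()
        (-deviation).backward()                 # ascend on the output deviation
        optimizer.step()
        with torch.no_grad():                   # project back onto the L2 ball of radius eps
            norm = eta.norm()
            if norm > eps:
                eta.mul_(eps / norm)
    return eta.detach()
```

Such an iterative attack needs many gradient evaluations at attack time, which is part of the motivation for the one-shot convex formulation used by DeeBBAA.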
Before moving into further details, some assumptions about the resourcefulness of the attacker need to be made: _Assumption 1:_ Once the attacker identifies a suitable attack region, it can collect historical power system data, including only power measurements and estimated states corresponding to the buses and lines included in the attack region. This constitutes the reconnaissance phase of the attacker. This assumption on the capabilities of an attacker is a standard premise amongst the literature on data-driven SFDIA design [18, 21, 22]. Note that unlike these works, historical data is required corresponding to only the sensors present in the attack region and not the entire power system for training the substitute regression network and no querying interaction is required between the attacker and the target state estimation module at any stage of the process. Historical data collection is required only once for training. ### _Identification of Attack Region and Reconnaissance_ Prior to carrying out attacks and reconnaissance, the attacker identifies a suitable attack region that consists of a subset of the measurement units present in the power network. The attack region is characterized by a subset of buses, \(\mathcal{B}_{A}\subset\mathcal{N}\) and a subset of branches \(\mathcal{E}_{A}\) incident on the buses in the set \(\mathcal{B}_{A}\) such that \(\mathcal{E}_{A}\subset\mathcal{L}\). Each bus \(b\in\mathcal{B}_{A}\), is equipped with SCADA measurement units providing the nodal real and reactive power injections \(\{p_{b},q_{b}\}\) and the voltage magnitude \(|v_{b}|\). For designing the attack, however, the adversary needs to collect only the nodal power injections at a bus and the voltage magnitude is not required. Each line \(l\in\mathcal{E}_{A}\) consists of SCADA measurement units measuring the sending and receiving end active and reactive power flows \(\{p_{b}^{s},q_{l}^{s},p_{l}^{r},q_{l}^{r}\}\). Thus, the attack region of the adversary is defined by the tuple of sets \(\{\mathcal{B}_{A},\mathcal{E}_{A}\}\). Depending on whether the subset of buses and lines chosen in the attack region form a connected induced subgraph of the power network or are random in nature, two types of attack regions are defined as described below. For illustration purposes, the graph structure of a standard IEEE 14 bus network is shown in Figures 3 and 4 where the nodes represent buses and the edges represent connecting power lines. 1. _Localized Attack Region_: In this case, a target load bus is initially selected and all its k-hop neighboring non-generator buses including itself are included in \(\mathcal{B}_{A}\). The lines connecting the buses in \(\mathcal{B}_{A}\) are included in \(\mathcal{E}_{A}\). Such attack regions have been widely considered in literature [12, 13, 15, 17]. Figure 3.a depicts the formulation of a localized attack region on an IEEE 14 bus system. Bus 12 is selected as the initial target bus following which the 2-hop neighbors, i.e. the immediate neighbors and the immediate neighbors of immediate neighbors of bus 12 are included into the set of targeted buses, \(\mathcal{B}_{A}\), i.e. \(\mathcal{B}_{A}=\{5,6,11,12,13,14\}\). Finally all the lines connecting the buses in \(\mathcal{B}_{A}\) are included into the set of target lines, \(\mathcal{E}_{A}\). The attack region, \(\{\mathcal{B}_{A},\mathcal{E}_{A}\}\), thus formed represents a localized attack region. 2. 
_Delocalized Attack Region_: \(|\mathcal{B}_{A}|\) number of non-generator buses are randomly selected from amongst the set of buses \(\mathcal{N}\). Following this, a random subset of all the lines incident on the buses in \(\mathcal{B}_{A}\) and whose other end is incident on a non-generator bus are included in \(\mathcal{E}_{A}\). Figure 3.b depicts the formulation of a delocalized attack region on the IEEE 14 bus system. Buses \(5,9,11,12\) and \(14\) are randomly selected as the target buses, i.e., \(\mathcal{B}_{A}=\{5,9,11,12,14\}\). For each of the buses in \(\mathcal{B}_{A}\), a random subset of power lines incident on that bus and another non-generator bus are selected and included in \(\mathcal{E}_{A}\). As an example, out of the four lines \(\{(9,4),(9,7),(9,10),(9,14\})\) incident on the targeted bus \(9\), a random subset of \(\{(9,7),(9,14\})\) is selected to be included in \(\mathcal{E}_{A}\). The same goes for all the other buses in \(\mathcal{B}_{A}\). In the example shown in Fig. 3.b, \(\mathcal{E}_{A}=\{(5,6),(9,7),(9,14),(11,10),(12,13)\}\). A delocalized attack region represents a much broader family of attack regions than a localized attack region which is a subset of the former. The intuition behind formulating a delocalized attack region is to include the following cases: * The adversary doesn't have access to all measurement units from an electrically localized region of the power network. * Some buses serving crucial loads may be more tightly protected than others by the Power System Operator. The adversary then chooses not to attack those buses. These are some of the many possible use cases of formulating a delocalized attack region that provides a lot of flexibility for the adversary. Once a suitable attack region is identified, the attacker performs reconnaissance. During that procedure it collects historical data corresponding to the attack region consisting of power injections at each bus, power flows in each line and state estimates at each bus present in the attack region. This data collection can be carried out either by eavesdropping in the communication network connecting the Remote Terminal Units to the SCADA system over a long period of time during which the attacker simply collects data and doesn't carry out any attacks [18, 21, 22]. ### _Partial Approximation of the Unknown AC-PSSE_ Having collected historical measurements and estimated states corresponding to the targeted attack region, the adversary trains a neural network, called the Neural State Estimator(NSE) to mimic the unknown AC-PSSE module. The input to the NSE are the historical power measurements corresponding to the identified attack region, i.e., the power injections at each bus and power flows at each line in the attack region. The target output are the historical estimated states corresponding to the attack region. Thus, with an attack region specified by the tuple \((\mathcal{B}_{A},\mathcal{E}_{A})\), the input to the NSE has dimensions \((2\times\#(\mathcal{B}_{A})+4\times\#(\mathcal{E}_{A}))\) (corresponding to 2 measurements per bus, i.e., active and reactive power injections and 4 measurements per line, i.e., sending and receiving end active and reactive power flows) and the output has dimensions \((2\times\#(\mathcal{B}_{A}))\)(corresponding to 2 states per bus, i.e., estimated voltage magnitude and angle). 
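To make the attack-region notation concrete, the sketch below constructs a localized attack region in the sense of Section 3.1 (all non-generator buses within \(k\) hops of a target bus, together with the lines joining them) and computes the resulting NSE input and output dimensions. The toy topology, generator set, and target bus are placeholders for illustration, not the regions used in the experiments.

```python
import networkx as nx

# Placeholder grid topology and generator buses (illustrative only)
edges = [(1, 2), (2, 3), (2, 5), (3, 4), (4, 5), (5, 6), (6, 11), (6, 12),
         (6, 13), (9, 10), (9, 14), (10, 11), (12, 13), (13, 14), (4, 9)]
gen_buses = {1, 2, 3}
G = nx.Graph(edges)

def localized_attack_region(G, target_bus, k, gen_buses):
    """B_A: non-generator buses within k hops of the target; E_A: lines inside B_A."""
    hops = nx.single_source_shortest_path_length(G, target_bus, cutoff=k)
    B_A = {b for b in hops if b not in gen_buses}
    E_A = {(u, v) for u, v in G.edges() if u in B_A and v in B_A}
    return B_A, E_A

B_A, E_A = localized_attack_region(G, target_bus=12, k=2, gen_buses=gen_buses)
nse_in_dim = 2 * len(B_A) + 4 * len(E_A)   # (p_b, q_b) per bus, (p^s, q^s, p^r, q^r) per line
nse_out_dim = 2 * len(B_A)                 # (|v_b|, theta_b) per bus
print(B_A, E_A, nse_in_dim, nse_out_dim)
```

With this toy topology and target bus 12, the construction returns \(\mathcal{B}_{A}=\{5,6,11,12,13,14\}\), matching the bus set of the localized example described above for the IEEE 14 bus system.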
Fig. 3: Graph representations of the IEEE 14 bus system demonstrating different types of attack regions. The red nodes represent buses in the attack region, the dotted edges indicate power lines included in the attack region, and the magenta nodes and solid edges represent buses and power lines not included in the attack region. The bus with the blue asterisk beside it represents the initial target bus for the localized case. Formally, the adversary learns a mapping \(f_{\phi}:\mathcal{Z}_{\delta}\rightarrow\mathcal{X}_{\delta}\) such that \[\check{\mathbf{x}}_{\delta}^{hist}=f_{\phi}(\mathbf{z}_{\delta}^{hist})\,, \tag{16}\] where \(\mathbf{z}_{\delta}^{hist}=[\{p_{b},q_{b}\}_{b\in\mathcal{B}_{A}},\{p_{l}^{s},q_{l}^{s},p_{l}^{r},q_{l}^{r}\}_{l\in\mathcal{E}_{A}}]\in\mathbb{R}^{2\times\#(\mathcal{B}_{A})+4\times\#(\mathcal{E}_{A})}\) and \(\hat{\mathbf{x}}_{\delta}^{hist}=[\{|\hat{v}_{b}|,\hat{\theta}_{b}\}_{b\in\mathcal{B}_{A}}]\in\mathbb{R}^{2\times\#(\mathcal{B}_{A})}\) are the partial historical power measurement and estimated state vectors corresponding to the attack region, respectively. \(\check{\mathbf{x}}_{\delta}^{hist}\) is the estimate of \(\hat{\mathbf{x}}_{\delta}^{hist}\) obtained at the output of the NSE, while \(\mathcal{X}_{\delta}\) and \(\mathcal{Z}_{\delta}\) are the spaces spanned by the partial historical estimated state and measurement vectors, respectively. \(f_{\phi}\) is the Neural State Estimator parameterized by \(\phi\). Let \(\mathcal{D}=\{\{\mathbf{z}_{\delta}^{hist,i},\hat{\mathbf{x}}_{\delta}^{hist,i}\}_{i=1,\ldots,N_{s}}\}\) be the dataset of \(N_{s}\) samples collected by the adversary during reconnaissance; then the trained neural state estimator \(f_{\phi^{*}}\), parameterized by \(\phi^{*}\), is obtained as follows: \[\phi^{*}=\arg\min_{\phi}\sum_{i=1}^{N_{s}}\mathcal{L}(\hat{\mathbf{x}}_{\delta}^{hist,i},f_{\phi}(\mathbf{z}_{\delta}^{hist,i})) \tag{17}\] \[=\arg\min_{\phi}\sum_{i=1}^{N_{s}}\mathcal{L}(\hat{\mathbf{x}}_{\delta}^{hist,i},\check{\mathbf{x}}_{\delta}^{hist,i})\,, \tag{18}\] where \(\mathcal{L}(\mathbf{a},\mathbf{b})\) is an appropriate distance measure that quantifies the mismatch between two vectors \(\mathbf{a}\) and \(\mathbf{b}\). Intuitively, the trained Neural State Estimator \(f_{\phi^{*}}\) acts as a substitute partial state estimator learnt from the limited historical data available to the adversary from a small subset of measurement units in the power network. Figure 4 provides a pictorial representation of the training procedure of the Neural State Estimator. The Neural State Estimator used in this work comprises a vanilla Multi-Layer Perceptron (MLP) with two hidden layers, each consisting of 512 neurons. A Leaky ReLU activation function is used after each hidden layer to induce non-linearity. Dropout regularization is used after every hidden layer, where each hidden neuron is dropped with a probability of 20% during training. A Huber loss function with threshold \(\gamma\) is used as the distance function \(\mathcal{L}\) to calculate the mismatch between the NSE output and the historical estimated state values corresponding to each training data sample. 
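A PyTorch sketch of this substitute model and its training loop is given below. The two 512-unit hidden layers, LeakyReLU activations, 20% dropout, Huber loss, and Adam optimizer follow the description above; the learning rate and the data loader are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class NeuralStateEstimator(nn.Module):
    """Substitute model f_phi: attack-region measurements -> attack-region state estimates."""
    def __init__(self, in_dim, out_dim, hidden=512, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, z_delta):
        return self.net(z_delta)

def train_nse(model, loader, steps=50_000, lr=1e-3):
    """Mini-batch training of the NSE on historical (z_delta, x_delta) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    huber = nn.HuberLoss(delta=1.0)             # Huber threshold gamma (the paper sets gamma = 1)
    data_iter = iter(loader)
    model.train()
    for _ in range(steps):
        try:
            z_hist, x_hist = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            z_hist, x_hist = next(data_iter)
        optimizer.zero_grad()
        loss = huber(model(z_hist), x_hist)
        loss.backward()
        optimizer.step()
    return model
```

Feeding it a `DataLoader` over the normalized historical pairs \((\mathbf{z}_{\delta}^{hist,i},\hat{\mathbf{x}}_{\delta}^{hist,i})\) with a batch size of 256 reproduces the mini-batch setup reported later in Section 4.2.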
The Huber loss for the \(i\)-th data sample, \(\{\mathbf{z}_{\delta}^{hist,i},\hat{\mathbf{x}}_{\delta}^{hist,i}\}\), is given by \[\mathcal{L}_{\gamma}(\hat{\mathbf{x}}_{\delta}^{hist,i},\check{\mathbf{x}}_{\delta}^{hist,i})=\begin{cases}\frac{1}{2}||\Delta\mathbf{x}_{hist,i}||_{2}^{2}\,,&\text{if }||\Delta\mathbf{x}_{hist,i}||_{1}\leq\gamma\,,\\ \gamma\left(||\Delta\mathbf{x}_{hist,i}||_{1}-\frac{1}{2}\gamma\right)\,,&\text{otherwise}\,,\end{cases} \tag{19}\] where \(\Delta\mathbf{x}_{hist,i}=\hat{\mathbf{x}}_{\delta}^{hist,i}-\check{\mathbf{x}}_{\delta}^{hist,i}=\hat{\mathbf{x}}_{\delta}^{hist,i}-f_{\phi}(\mathbf{z}_{\delta}^{hist,i})\). The NSE is trained using backpropagation and the Adam optimizer in a minibatch fashion. For our work, we assume \(\gamma=1\). All power measurements and voltage phasors are normalized before training. In the next stage of the process, the adversary develops a black box adversarial attack on the learnt Neural State Estimator to generate joint stealthy false data and adversarial examples. ### _Adversarial Optimization Against Proxy State Estimator_ Having learnt the substitute model \(f_{\phi^{*}}\) for the unknown AC-PSSE using incomplete historical power system data, the adversary can now inject adversarial perturbations into real-time measurements in the attack region. For injecting stealthy adversarial examples, the attacker collects the current real-time power measurements from the attack region, solves a convex adversarial optimization problem to compute a suitable perturbation vector, and adds it to the measurements before injecting them into the power network. Note that, at this online attack generation stage, the attacker only needs access to power measurements from the attack region, i.e., power injections and line flows. In the following part, a convex reformulation of the complex non-convex generic adversarial attack design framework for regression networks, given by equations (14, 15) as presented in Section 2.5, is carried out for the NSE, \(f_{\phi^{*}}\). Rewriting equations (14, 15) to include the Neural State Estimator \(f_{\phi^{*}}\) and its corresponding inputs gives: \[\max_{\mathbf{\eta}}\ ||f_{\phi^{*}}(\mathbf{z}_{\delta}+\mathbf{\eta})-f_{\phi^{*}}(\mathbf{z}_{\delta})||_{2} \tag{20}\] \[s.t.\ \ ||\mathbf{\eta}||_{2}\leq\epsilon \tag{21}\] \(\mathbf{z}_{\delta}\) is the partial real-time power measurement vector collected from the attack region and \(\mathbf{\eta}\) is the desired adversarial perturbation. For convenience, the formulation in equations (20, 21) is renamed the perturbation constrained deviation maximization (PCDM) problem. A two-step relaxation strategy is hereby proposed to reduce the complexity of the PCDM problem and reformulate it as a convex optimization problem. First, a Taylor series approximation is applied to the objective of the PCDM problem to convert it to a quadratic function, following which a Semi-Definite Programming (SDP) reformulation and subsequent convex relaxations are carried out in the second stage. The two-stage relaxation strategy is explained below in greater detail: 1. _Taylor's first order approximation of the PCDM objective:_ \[f_{\phi^{*}}(\mathbf{z}_{\delta}+\mathbf{\eta})=f_{\phi^{*}}(\mathbf{z}_{\delta})+\mathbf{J}\mathbf{\eta}\,, \tag{22}\] where \(\mathbf{J}\) is the Jacobian matrix of \(f_{\phi^{*}}\) with respect to its input, evaluated at \(\mathbf{z}_{\delta}\), i.e., \(J_{ij}=\frac{\partial[f_{\phi^{*}}(\mathbf{z}_{\delta})]_{i}}{\partial z_{j}}\). 
where \([f_{\phi^{*}}(\mathbf{z}_{\delta})]_{i}\) is the \(i\)-th element of the vector \(f_{\phi^{*}}(\mathbf{z}_{\delta})\) and \(z_{j}\) is the \(j\)-th element of \(\mathbf{z}_{\delta}\). Replacing \(f_{\phi^{*}}(\mathbf{z}_{\delta}+\mathbf{\eta})\) in Equation (20) with its first-order Taylor approximation (22), the PCDM problem (20, 21) becomes: \[\max_{\mathbf{\eta}}\ ||\mathbf{J}\mathbf{\eta}||_{2} \tag{23}\] \[s.t.\ \ \|\mathbf{\eta}\|_{2}\leq\epsilon \tag{24}\] or equivalently \[\max_{\mathbf{\eta}}\ \mathbf{\eta}^{T}\mathbf{J}^{T}\mathbf{J}\mathbf{\eta} \tag{25}\] \[s.t.\ \ \mathbf{\eta}^{T}\mathbf{\eta}\leq\epsilon^{2} \tag{26}\] The superscript \(T\) represents the transpose operation. The objective function is now simplified from a complex non-linear function consisting of a neural network to a much simpler quadratic function of \(\mathbf{\eta}\). Fig. 4: Training the Neural State Estimator \(f_{\phi}\) using historical data samples. The PCDM is thus relaxed into a Quadratically Constrained Quadratic Programming (QCQP) problem given by equations (25, 26). The solution of the QCQP will be a local optimum of the original PCDM problem. However, QCQPs are also in general NP-hard and non-convex in nature. Thus, further relaxations are necessary to convert it into a convex problem. 2. _Semi-Definite Programming Relaxation of QCQP:_ The QCQP (25, 26) can be reformulated into an SDP by making the following substitutions: \[\mathbf{J}^{T}\mathbf{J}=\mathbf{\tilde{J}} \tag{27}\] \[\mathbf{\eta}\mathbf{\eta}^{T}=\mathbf{W} \tag{28}\] Note that \[\mathbf{\eta}^{T}\mathbf{\tilde{J}}\mathbf{\eta}=Tr(\mathbf{\tilde{J}}\mathbf{\eta}\mathbf{\eta}^{T})=Tr(\mathbf{\tilde{J}}\mathbf{W})\,. \tag{29}\] Here \(Tr(\mathbf{A})\) represents the trace, or the sum of diagonal elements, of matrix \(\mathbf{A}\). Using result (29), the QCQP (25, 26) can be converted to the SDP shown below: \[\max_{\mathbf{W}}\ Tr(\mathbf{\tilde{J}}\mathbf{W}) \tag{30}\] \[s.t.\ \ \mathbf{W}\succeq 0 \tag{31}\] \[Tr(\mathbf{W})\leq\epsilon^{2} \tag{32}\] \[Rank(\mathbf{W})=1 \tag{33}\] Eq. (30) is derived from result (29). Constraint (31) enforces the condition that \(\mathbf{W}\) is a positive semi-definite matrix. Constraint (32) is obtained by applying \(\mathbf{\eta}^{T}\mathbf{\eta}=Tr(\mathbf{\eta}\mathbf{\eta}^{T})=Tr(\mathbf{W})\) and Eq. (28) to constraint (26). Constraint (33) is a direct consequence of Eq. (28), where \(\mathbf{\eta}\) is a vector. Note that the presence of the rank constraint (33) renders this SDP non-convex and NP-hard. However, if a rank-one matrix \(\mathbf{W}\) can be obtained as the solution to the SDP (30)-(33), then \(\mathbf{W}\) uniquely and exactly solves the QCQP given by equations (25)-(26). For convenience, the SDP formulation given by equations (30)-(33) is named the exact SDP formulation of the QCQP (25, 26). A final convex relaxation is carried out to convert the non-convex rank equality into a convex inequality by replacing the rank function by its closest convex approximation, the nuclear norm. The nuclear norm of a matrix \(\mathbf{W}\) is defined as the sum of its singular values, i.e., \(||\mathbf{W}||_{*}=Tr(\sqrt{\mathbf{W}^{T}\mathbf{W}})\). The final convex SDP reformulation of the PCDM problem (14, 15) is thus given by: \[\max_{\mathbf{W}}\ Tr(\mathbf{\tilde{J}}\mathbf{W}) \tag{34}\] \[s.t.\ \ \mathbf{W}\succeq 0 \tag{35}\] \[Tr(\mathbf{W})\leq\epsilon^{2} \tag{36}\] \[||\mathbf{W}||_{*}\leq 1 \tag{37}\] _Result 1_: If the principal eigenvalue of the matrix \(\mathbf{W}\) obtained after solving the convex SDP represented by equations (34)-(37) is infinitesimally close to one, then it can be concluded that \(\mathbf{W}\) is an approximately rank-one matrix. _Proof:_ Since \(\mathbf{W}\) is a positive semi-definite matrix, all its eigenvalues are non-negative. Let \(\lambda^{*}=\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{n}\geq 0\) be the \(n\) eigenvalues (in descending order) of \(\mathbf{W}\in\mathbb{R}^{n\times n}\), with \(\lambda^{*}\) the principal eigenvalue. Let \(\lambda^{*}=1-\delta\), where \(\delta\) is an arbitrarily small non-negative number close to zero. According to Equation (37), \[||\mathbf{W}||_{*}\leq 1 \tag{38}\] \[\implies Tr(\sqrt{\mathbf{W}^{T}\mathbf{W}})\leq 1 \tag{39}\] \[\implies \sum_{i=1}^{n}\lambda_{i}\leq 1 \tag{40}\] \[\implies (1-\delta)+\sum_{i=2}^{n}\lambda_{i}\leq 1 \tag{41}\] \[\implies \sum_{i=2}^{n}\lambda_{i}\leq\delta \tag{42}\] \[\implies \lim_{\delta\to 0}\sum_{i=2}^{n}\lambda_{i}\to 0 \tag{43}\] \[\therefore\ \lambda_{i}\geq 0\ \ \forall i\implies\lambda_{i}\to 0\ \ \forall i=2,\ldots,n \tag{44}\] Result 1 states that if the principal eigenvalue of \(\mathbf{W}\) is infinitesimally close to 1, then the remaining eigenvalues are very close to zero, implying that \(\mathbf{W}\) is an approximately rank-one matrix. Under these conditions the \(\mathbf{\eta}\) derived from \(\mathbf{W}\) is a solution arbitrarily close to a local optimum of the original PCDM problem. Empirical evidence is provided in the supplementary document establishing that the dominant eigenvalue of \(\mathbf{W}\) obtained after solving the convex SDP formulation is infinitesimally close to 1 and the second largest eigenvalue is infinitesimally close to 0. After obtaining \(\mathbf{W}\) by solving the convex SDP, the adversary computes its principal eigenvalue, \(\lambda^{*}\), and principal eigenvector, \(\mathbf{\nu}^{*}\), using singular value decomposition. The principal eigenvector is the orthonormal eigenvector corresponding to the principal eigenvalue of \(\mathbf{W}\). The optimal adversarial perturbation, \(\mathbf{\eta}\), and the compromised measurement vector \(\mathbf{z}_{\delta}^{a}\) are then derived as follows: \[\mathbf{\eta}=\epsilon\sqrt{\lambda^{*}}\mathbf{\nu}^{*} \tag{45}\] \[\mathbf{z}_{\delta}^{a}=\mathbf{z}_{\delta}+\mathbf{\eta}\odot\mathbf{e} \tag{46}\] where \(\odot\) represents the element-wise (Hadamard) product and \(\mathbf{e}\) is the compromised measurement selection vector, having the same dimensions as \(\mathbf{z}_{\delta}\). The \(i\)-th element of \(\mathbf{e}\) is either \(1\) or \(0\), depending on whether the adversary wishes to inject an adversarial perturbation into the \(i\)-th measurement contained in \(\mathbf{z}_{\delta}\) or not. This provides an additional level of flexibility to the adversary in terms of attack sparsity. Fig. 5: Stealthy Black Box Adversarial Attack Formulation against Power System State Estimation using the proposed DeeBBAA framework. Figure 5 shows a block diagram depicting the formulation of a stealthy black box adversarial attack against power system state estimation using the DeeBBAA framework.
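Putting Eqs. (34)–(37), (45), and (46) together, one online attack step can be sketched as follows. The sketch computes the NSE Jacobian with `torch.autograd`, solves the convex SDP with CVXPY, and recovers the perturbation from the principal eigenpair; the choice of the SCS solver and the data-handling details are assumptions made for illustration rather than specifics given in the paper.

```python
import cvxpy as cp
import numpy as np
import torch

def deebbaa_attack_step(nse, z_delta, eps, e_mask):
    """One online DeeBBAA step: NSE Jacobian, convex SDP (34)-(37), recovery (45)-(46)."""
    nse.eval()                                               # disable dropout for the Jacobian
    z = torch.as_tensor(z_delta, dtype=torch.float32)
    J = torch.autograd.functional.jacobian(nse, z).numpy()   # J_ij = d[f(z)]_i / d z_j
    J_tilde = J.T @ J

    m = J_tilde.shape[0]
    W = cp.Variable((m, m), PSD=True)                        # W >= 0, Eq. (35)
    constraints = [cp.trace(W) <= eps**2,                    # Eq. (36)
                   cp.normNuc(W) <= 1]                       # Eq. (37), convex surrogate of rank(W)=1
    cp.Problem(cp.Maximize(cp.trace(J_tilde @ W)), constraints).solve(solver=cp.SCS)

    lam, vecs = np.linalg.eigh(W.value)                      # eigenvalues in ascending order
    eta = eps * np.sqrt(max(lam[-1], 0.0)) * vecs[:, -1]     # Eq. (45): principal eigenpair
    return np.asarray(z_delta) + np.asarray(e_mask) * eta    # Eq. (46): masked injection
```

Since \(\mathbf{W}\) is constrained to be positive semi-definite, its nuclear norm equals its trace, so constraints (36) and (37) jointly cap \(Tr(\mathbf{W})\) at \(\min(\epsilon^{2},1)\); the sign of the recovered eigenvector is arbitrary, and either choice satisfies the perturbation budget.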
For attacking the power system at a particular time instant \(t\), the adversary collects power measurements from the attack region, following which the trained Neural State Estimator,\(f_{\delta}\)- is used to compute the Jacobian matrix \(\mathbf{J}\) using backpropagation. The convex SDP problem defined by Equations (30-33) is then solved to obtain an approximately rank-one matrix \(\mathbf{W}\) from which the optimal adversarial perturbation and input is computed according to Equations 45 and 46. ## 4 Simulation and Results ### _Test System and Data Generation_ Due to unavailability of a dataset pertaining to the application of Stealthy False Data Injection in cyber-physical power systems, data is generated via simulations on the standard IEEE 39 bus and 118 bus test systems. These test systems are explained in detail in the Supplementary Material. The simulations were done in two stages: * In the first stage, data representing normal operating conditions of the power system is generated. Two different real life load profiles corresponding to publicly available databases of the Australian Energy Market Operator (AEMO) [35] for the year of 2019 and the New York Independent System Operator (NYISO) [36]for the year of 2021 were selected. The AEMO load profiles have an interval length of 30 min while the NYISO load profile has a 5-minute interval span. The original AEMO load profiles are interpolated to a 5-minute interval span similar to the NYISO profiles. Following this the data generation algorithm proposed in [37] is employed to obtain two datasets, each consisting of power system measurements and estimated states under normal operating conditions. The dataset generated from the AEMO load profiles, called Dataset A is used to train the Neural State Estimator, whereas the dataset generated from the NYISO load profiles, called Dataset B is used to train the defensive BDD algorithms and for testing the evasive properties of the DeeBBAA attacks. Both Dataset A and B have 105120 samples, each sample consisting of a tuple \((\mathbf{z},\hat{\mathbf{x}})\), where \(\mathbf{z}\in\mathbb{R}^{2\times\#(\mathcal{N})+4\times\#(\mathcal{M})}\) and \(\hat{\mathbf{x}}\in\mathbb{R}^{2\times\#(\mathcal{N})}\) represent the power measurements and corresponding estimated states. Note that voltage measurements are not included in \(\mathbf{z}\) as it is not required for attack formulation using DeeBBAA. However, voltage measurements are included for state estimation and training the defense algorithms. * In the second stage, Dataset B is used to generate another dataset consisting of standard SFDIA attacks, which will henceforth be called Dataset \(B_{att}\) for convenience. Half of the data samples in Dataset B are randomly selected for standard SFD attack generation. Three conventional SFDIA generation strategies, used in [38], are used to generate the attack data samples. The SFDIA attack vector \(\mathbf{a}\) is generated using the generic SFDIA formulation given by equation 10. Recall that the attacker needs both power system parameters like admittances and topology information (reflected in the realization of \(h(.)\)) and real-time state estimates, \(\hat{\mathbf{z}}\) to formulate the attack using equation 10 and the three strategies vary depending on whether the information residing with the attacker is perfect or noisy. 
The first conventional type of attack assumes that the attacker has perfect knowledge of the power system parameters and real-time state estimates and then uses equation 10 to formulate the attack vector. The second strategy assumes that the attacks are generated using imperfect power system parameter values, i.e., the power line parameters like admittance values are perturbed by adding 10% random gaussian noise to them. In the third attack strategy, the attacker is assumed to have noisy estimates of the real-time state values with error levels varying between -8% to 8%. The vector \(\mathbf{c}\) is formulated randomly to consist of 10% to 80% non-zero elements whose values range between 10% to 190% of the original state values. A data sample from Dataset B is randomly selected with 50% probability for attack generation and one of the three standard attack strategies is randomly selected with uniform probability for injecting false data into the selected sample. Figure 6 summarizes the data generation and usage procedure for all the experiments carried out in this work. ### _Training the Neural State Estimator_ The Neural State Estimator described in Section 3.2 is trained using Dataset A. Appropriate localized and delocalized attack regions, denoted by \((\mathcal{B_{A}},\mathcal{E_{A}})^{L}\) and \((\mathcal{B_{A}},\mathcal{E_{A}})^{D}\) are selected for both the IEEE test systems. The localized attack regions created for the IEEE 39 bus system consists of 8 buses and 8 power lines and that for the IEEE 118 bus system consists of 28 buses and 17 power lines. The delocalized attack regions for the IEEE 39 bus system consists of 7 buses and 10 power lines and that for the IEEE 118 bus system consists of 30 buses and 34 lines. The details of the buses and the power lines selected for each attack region can be found in the Supplementary material. Dataset A is randomly split into a training and testing subsets with a 80-20 split to train the NSE. For each data sample in Dataset A, power measurements corresponding to the buses and lines in the attack region serve as input to the NSE and the estimated states corresponding to the buses in the attack region serve as the target output. Since for each of the two IEEE test systems, two attack regions have been defined, 4 different NSE models each corresponding to a test system and a particular attack region were needed to be trained. Training was carried out in a mini-batch fashion with batch a size of 256 for 50000 steps. ### _Training the Defense Baselines_ The false data generated using the DeeBBAA framework is tested against three broad types of state of the art defensive BDD algorithms existing in literature. The probability of the SFDIA bypassing these defensive baselines is computed as the number of attacked samples misclassified as benign to the total number of attacked samples in the test data. 14 BDD algorithms each belonging to one of three classes - conventional, statistical and learning based are considered. The conventional baselines include the residue-based BDD algorithms described in Section II.C, namely, the Largest Normalized Residue test and the Chi Squared test. The second group consists of detection strategies that check for consistency in the measurement data reflected by statistical measures like KL Divergence between historical and current residues [38, 39, 40], normalized cumulative sum of residues [41], and structural properties of residual outlier graphs [42] to detect the presence of anomalous data. 
The third group consists of data-driven learning based algorithms including supervised and unsupervised deep learning and extreme learning methods for classification of compromised data using power system measurements as inputs. The data-driven algorithms formulate the task of SFDIA detection either as a binary classification task in which the status of the overall power system is returned as either compromised or not compromised or as a localization task where for each of the possible nodes in the power system, a binary classification task is solved to identify the compromised locations. The details of the defensive algorithms are given in Table I. Other than the two conventional methods, 5 statistical and seven data-driven algorithms are considered, out of which 6 are supervised and one is unsupervised. Their training is carried out according to the procedure given in the original works that proposed them. Training the conventional and statistical methods entail the computation of thresholds of statistical measures to carry out hypothesis testing. Dataset B is used for this purpose, and a false alarm rate of 2% is considered. Data-driven methods are trained to correctly distinguish benign measurement samples from those compromised using the standard SFDI attacks contained in Dataset \(B_{att}\) as described in Section 4.1. Further details about the training procedure and model architectures have been shifted to the Supplementary document due to a lack of space. ### _Designing attack vectors using DeeBBAA_ A randomly selected subset of one third data samples from Dataset B is used to inject false data using DeeBBAA. The dataset hence generated will be called Dataset \(B_{DeeBBAA}\) for convenience. For each of the IEEE test systems considered in this work, adversarial attacks using four values of \(\epsilon\)- 1,25 and 10 - are carried out. For each of these cases, 3 different measurement selection vectors, \(\boldsymbol{e}\) are tested. The measurement selection vector \(\boldsymbol{e}\) has been introduced in Equation 46. The three values of \(\boldsymbol{e}\) vary in terms of the number of ones contained in \(\boldsymbol{e}\). The three cases of \(\boldsymbol{e}\) correspond to the following scenarios: 1. all measurements from the attack region are injected with false data, i.e., \(\boldsymbol{e}=\boldsymbol{1}^{T}\). If \(N_{A}\) is the total number of measurements from the attack region, then \(\boldsymbol{e}^{T}\boldsymbol{1}=N_{A}\) where \(\boldsymbol{1}^{T}\) is the vector of all ones. For the IEEE 39 bus system, \(N_{A}\) is equal to 48 for the localized attack region case and equal to 54 for the delocalized attack region case. For the IEEE 118 bus system, \(N_{A}\) is equal to 124 for the localized attack region case and equal to 200 for the delocalized attack region case. 2. one half of the measurements from the attack region are injected with false data, i.e., the number of ones in \(\boldsymbol{e}\) is exactly equal to half its length. The adversary chooses to attack \(N_{A}/2\) measurements which correspond to the greatest \(N_{A}/2\) elements of \(\boldsymbol{\eta}\) in absolute terms. More formally, let \(\boldsymbol{r}\) be an integer vector consisting of \(N_{A}\) unique indices from 1 to \(N_{A}\) such that \(|\eta_{r_{i}}|\geq|\eta_{r_{j}}|\) if and only if \(i<j\). \(r_{i}\) is the \(i\)-th element of \(\boldsymbol{r}\) and \(\eta_{r_{i}}\) is the element of \(\boldsymbol{\eta}\) corresponding to the index represented by the \(i\)-th element of \(\boldsymbol{r}\). 
Then, the \(i\)-th element of \(\boldsymbol{e}\), \(e_{i}\), is equal to 1 if and only if \(i\in\mathcal{S}\), where \(\mathcal{S}=\{r_{j}:j\leq N_{A}/2\}\), and is zero otherwise.
3. a tenth of the measurements are injected with false data, i.e., the number of ones in \(\boldsymbol{e}\) is equal to one tenth of the total number of measurements obtained from the attack region. Again, the adversary finds the greatest \(N_{A}/10\) (rounded off to the nearest integer) elements of \(\boldsymbol{\eta}\) and sets the corresponding elements of \(\boldsymbol{e}\) to one, while all the remaining elements of \(\boldsymbol{e}\) are made zero (a short sketch of this top-\(k\) selection is given right after this list).
Fig. 6: Summary of data generation and subsequent purpose of usage of the different datasets. Dataset A is only used to train the NSE in the offline stage of DeeBBAA. Dataset B is used for training the defense baselines and for online generation of attack vectors using DeeBBAA. This is done to ensure the worst-case scenario from the attacker's perspective, where the defense baselines are trained on the same data that is used to generate the online DeeBBAA attacks. At the same time, the NSE is trained on a completely different dataset to prevent data spillage.
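The following numpy sketch illustrates the three selection scenarios above. The helper name and the way the selected entries are combined with the candidate injection are illustrative assumptions; the precise use of \(\boldsymbol{e}\) is given by Equation 46 of the main text.

```python
import numpy as np

def selection_vector(eta: np.ndarray, fraction: float) -> np.ndarray:
    """Build the 0/1 measurement-selection vector e for the attack region.

    eta      -- candidate perturbations for the N_A attack-region measurements
    fraction -- 1.0, 0.5 or 0.1 for the three scenarios described above
    """
    n_a = eta.shape[0]
    k = int(round(fraction * n_a))                 # number of measurements to compromise
    e = np.zeros(n_a, dtype=int)
    if k >= n_a:
        e[:] = 1                                   # scenario 1: attack every measurement
    else:
        top_k = np.argsort(-np.abs(eta))[:k]       # indices of the k largest |eta| entries
        e[top_k] = 1                               # scenarios 2 and 3: attack only those
    return e

# Example: keep only the strongest half of the candidate injections
# (N_A = 48 for the localized attack region of the IEEE 39 bus system).
eta = np.random.randn(48)
e_half = selection_vector(eta, 0.5)
masked_injection = e_half * eta                    # only the selected components are injected
```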
For each of the IEEE test systems considered in this work, attacks are generated for each of the two choices of attack region - localized and delocalized - and for each attack region the three choices of the measurement selection vector \(\mathbf{e}\) are explored as explained above, thus leading to 6 scenarios per IEEE test system. Corresponding to each scenario, a different version of Dataset \(B_{DeeBBAA}\) is generated. For each of these scenarios, the probability of the data samples representing DeeBBAA attacks successfully bypassing the aforementioned defense baselines, and the deviations in estimated states and power measurements induced by these attacks, are analysed. The probability of successfully bypassing the defense baselines is calculated as the ratio of the number of compromised data samples that are wrongly classified as good data to the total number of compromised data samples. The following section consists of these results and the corresponding inferences.
### _Results and Inferences_
In this section, the evasive properties of the DeeBBAA attacks are analysed against the defense baselines outlined in Section 4.3 for different test systems, attack magnitudes and spreads. Following this, the deviations introduced by the DeeBBAA attacks in the measurements and the corresponding state estimates are analysed.
#### 4.5.1 Bypassing Conventional BDD safeguards
In Figure 7, the probability of the DeeBBAA attacks bypassing the conventional BDD safeguards, i.e., the LNRT and the \(\chi^{2}\) test, is plotted for the IEEE 39 bus and 118 bus test systems under different attack region types. The title of each subplot consists of three parts corresponding, in order, to the attack region type, the IEEE test system and the conventional BDD algorithm considered in that specific plot. In each subplot, the probability of bypassing the BDD safeguards is compared against different values of epsilon and the total number of measurements compromised by the attacker, which is decided using the methodology outlined in Section 4.4. Following are the observations and inferences related to this result: 1. In general, DeeBBAA attacks generated on the IEEE 39 bus test system have a lower probability of bypassing the BDD when \(\epsilon=10\) as compared to those generated on the IEEE 118 bus case. 2. The LNRT is able to detect DeeBBAA attacks with maximum probabilities of 40% and 30% for the IEEE 39 bus case in the delocalized and localized attack region settings, respectively, and only when \(\epsilon=10\). Similarly, the \(\chi^{2}\) test is able to detect localized attacks on the 39 bus system with a maximum probability of around 40% when \(\epsilon=10\). In all the other cases, the probability of bypassing the BDD is at least 90%. 3. As the value of \(\epsilon\) increases, the magnitude of the attack vectors increases proportionally, and so does the deviation caused by them in the estimated state and measurement vectors. As a result, attacks carried out with \(\epsilon=10\) are the most susceptible to detection. 4. In almost all the cases, the LNRT performs better in attack detection than the \(\chi^{2}\) test, i.e., DeeBBAA attacks can hide more comfortably when the \(L_{2}\) norm of the residues is considered for hypothesis testing. This loosely implies that, on average, the DeeBBAA attacks intelligently inject deviations in both positive and negative directions, leading to an overall balanced residue vector, the squared sum of whose individual elements remains close to the pre-attack conditions.
Fig. 7: Probability of DeeBBAA attacks bypassing the conventional LNRT and \(\chi^{2}\) tests for the IEEE 39 bus and 118 bus systems under different attack regions, epsilons and numbers of compromised measurements.
TABLE I: Details of the defense baseline algorithms considered in this work.
#### 4.5.2 Bypassing Advanced Defensive Systems
The probability of DeeBBAA attacks bypassing advanced defense mechanisms, including the physics-inspired statistical methods and learning based algorithms outlined in Section 4.3, is plotted in Figures 8 and 9 for the IEEE 39 bus test system under the localized and delocalized attack region settings, respectively. Similar results for the IEEE 118 bus test system under the localized and delocalized settings are plotted in Figures 10 and 11, respectively. In each of these figures there are 12 subplots, each corresponding to a particular defense baseline specified in its title. For each of these subplots, the probability of bypassing the concerned defense baseline is presented for different values of \(\epsilon\) and numbers of compromised measurements. The first 5 subplots correspond to the statistical methods of defense and the remaining 7 correspond to learning based detectors.
Fig. 8: Probability of DeeBBAA attacks bypassing statistical consistency and learning based defenses for the IEEE 39 bus system under the localized attack region with different values of epsilon and numbers of compromised measurements.
Fig. 9: Probability of DeeBBAA attacks bypassing statistical consistency and learning based defenses for the IEEE 39 bus system under the delocalized attack region with different values of epsilon and numbers of compromised measurements.
Fig. 10: Probability of DeeBBAA attacks bypassing statistical consistency and learning based defenses for the IEEE 118 bus system under the localized attack region with different values of epsilon and numbers of compromised measurements.
Fig. 11: Probability of DeeBBAA attacks bypassing statistical consistency and learning based defenses for the IEEE 118 bus system under the delocalized attack region with different values of epsilon and numbers of compromised measurements.
Following are the observations and inferences drawn from these results: 1. DeeBBAA attacks can bypass the KLD and Transformed KLD approaches with almost 100% probability in all the considered scenarios. This shows that DeeBBAA attacks do not cause any significant deviations in the empirical distribution of the measurement vector. Additionally, real-time quickest detection approaches like ANCUSUM-C also fail to detect DeeBBAA attacks in more than 95% of instances in all the considered scenarios.
2. Amongst the statistical methods considered, only the outlier graph based anomaly detection algorithm is able to detect DeeBBAA attacks of high magnitude and high spread, which are generated when all the measurement units in an attack region are injected with the DeeBBAA attack vector formed with \(\epsilon=10\). Under all the remaining circumstances, i.e., when a subset of the measurements in an attack region is targeted or when the value of \(\epsilon\) is less than 10, DeeBBAA successfully bypasses the outlier graph based detector with high probabilities of greater than 80%. This is because, as the spread and magnitude of DeeBBAA attacks simultaneously increase, the number of nodes and edges in the outlier graph also increases, which leads to increased chances of detection. 3. Amongst the learning based methods, the MLP completely fails to detect DeeBBAA under all circumstances. This is reasonable, as the substitute NSE model is also an MLP, but with a different architecture. However, DeeBBAA bypasses powerful time-series networks like the LSTM, Bi-LSTM and TCN with average probabilities of 78%, 70% and 80%, respectively, for the IEEE 39 bus system, and with average probabilities of 76%, 60% and 70% for the IEEE 118 bus system. The probability of bypassing the Bidirectional LSTM is the lowest in all cases but is still appreciable, as Bi-LSTMs are extremely powerful networks that can detect anomalous patterns in a time series, while no special effort was made to make the DeeBBAA attacks consistent in time. 4. The unsupervised energy based DAGMM algorithm is evaded by the DeeBBAA attacks with probability of 70% or more in all the cases. 5. On the localization front, E3LM performs the best in terms of defense. For the IEEE 39 bus test system, DeeBBAA bypasses E3LM around 80% of the time in all cases, as compared to bypassing ARMA-GNN around 90% of the time. Similarly, for the 118 bus test system, DeeBBAA bypasses E3LM with probability of 70% or more in the localized attack setting and of around 80% in the delocalized attack setting, whereas in both cases the probability of DeeBBAA evading ARMA-GNN is around 90% on average. This is an appreciable result, noting the fact that E3LM consists of an ensemble of 200 extreme learning machines, which makes it an extremely powerful pattern detector.
6. The most interesting fact is that the NSE mimicking the AC-PSSE is a simple Multi-Layer Perceptron and is trained using a completely different dataset, Dataset A, from the one used to train the defense baselines. Moreover, the online DeeBBAA attacks are formulated using the same dataset, Dataset B, that is used to train the defense baselines, while using no information about the defense baselines. Note that this represents the worst-case scenario from the attacker's perspective, where the defenses that it has to bypass are trained to identify generic SFDIA using a dataset (Dataset B) on which it has to implement online attacks without any sort of re-training, while DeeBBAA is trained offline on a completely different dataset (Dataset A) with no overlap. Under these circumstances, DeeBBAA is able to bypass the defense baselines with probabilities ranging between 70% and 100% in most of the cases.
#### 4.5.3 Deviations in state estimates and measurements
Recall that, since the inputs to the NSE are just power measurements, i.e., bus injections and line flows, the DeeBBAA attacks hence generated are injected only into power measurements. In this part, the deviations caused by the DeeBBAA injections in the power measurements, and the corresponding deviations caused in the states estimated from the manipulated measurements, are analysed. Figures 12 and 13 consist of boxplots depicting the distribution of the maximum deviation caused in the active and reactive power measurements and the median deviations caused in the state estimates after DeeBBAA attacks, for the IEEE 39 bus and 118 bus cases respectively. The first row of each of these figures corresponds to the localized attack region setting and the second row to the delocalized attack region setting. The x-axis of each subplot shows the number of measurements compromised, and the group of four boxplots corresponding to each value on the x-axis represents the four values of \(\epsilon\). Deviations in estimated voltage magnitudes and angles for the IEEE 39 bus case in Figures 12.c, 12.d, 12.g and 12.h are plotted on a log scale for better visualization.
Fig. 12: Deviation statistics for the IEEE 39 bus test system. The number of measurements compromised by the attack vector is plotted on the x-axis and the y-axis shows the deviations. For each value on the x-axis, there are 4 boxplots, each corresponding to a value of \(\epsilon\) as shown in the legends. Subplots a, b, c, d correspond to the localized attack region setting and subplots e, f, g, h correspond to the delocalized attack region case. (a,e) Maximum deviation in real power measurements, (b,f) maximum deviation in reactive power measurements, (c,g) median deviation in voltage magnitude estimates (in log scale), (d,h) median deviation in voltage angle estimates (in log scale).
Fig. 13: Deviation statistics for the IEEE 118 bus test system, with the same layout as Fig. 12: (a,e) maximum deviation in real power measurements, (b,f) maximum deviation in reactive power measurements, (c,g) median deviation in voltage magnitude estimates (in log scale), (d,h) median deviation in voltage angle estimates (in log scale).
Figures 12.a and 12.b represent the maximum deviation in real and reactive power measurements caused by DeeBBAA. Figures 12.c and 12.d represent the median deviations in the estimated voltage magnitudes and angles in the post-attack scenario. The corresponding images in the second row of Fig. 12 present identical results for the delocalized scenario. The exact same structure is followed in Fig. 13. As the value of epsilon increases, the range of deviations and their median value also increase. In the localized attack region setting, DeeBBAA can introduce median deviations as high as 0.1 per unit in the estimated voltage magnitudes and more than 2 degrees in the estimated angles for both the IEEE 39 and 118 bus cases. In the delocalized attack region setting, the deviations induced in the states are higher, with more than 1 per unit median deviation in the estimated voltage magnitudes for the IEEE 39 bus system and more than 0.2 per unit for the IEEE 118 bus system. Figure 14 shows the actual changes in active and reactive power measurements and estimated states for one particular instance of the IEEE 39 and 118 bus test systems, before and after the DeeBBAA attacks. The four subplots in the top row represent the changes in active and reactive power measurements before and after the attack for the IEEE 39 bus case in the localized setting, the IEEE 39 bus case in the delocalized setting, the IEEE 118 bus case in the localized setting and the IEEE 118 bus case in the delocalized setting. The x-axis in each of these subplots represents the index set of all the available measurements in the respective networks. As can be observed, the number of measurement units having a positive deviation is a small subset of the total number of measurements available in the network. In the subplots of the second row, a before-and-after-attack representation of the estimated states is presented for the same four cases. The x-axis in each of these plots represents the index set of bus numbers, \(\mathcal{N}\). In both the IEEE 39 bus and 118 bus cases, the deviations incurred by the attacks in the state estimates are greater in the delocalized attack region setting as compared to the localized one.
Fig. 14: Point deviation for the IEEE 39 and 118 bus test systems.
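A minimal numpy sketch of the per-sample statistics behind Figs. 12 and 13 is given below, assuming the pre- and post-attack measurement and state vectors are available as arrays; the function and argument names are illustrative.

```python
import numpy as np

def deviation_stats(z_pre, z_post, v_pre, v_post, ang_pre, ang_post):
    """Per-sample deviation statistics used in the boxplots of Figs. 12 and 13.

    z_*   -- active or reactive power measurements before/after the injection
    v_*   -- estimated voltage magnitudes (p.u.) before/after the attack
    ang_* -- estimated voltage angles (degrees) before/after the attack
    """
    return {
        "max_measurement_dev": np.max(np.abs(z_post - z_pre)),      # worst-case measurement shift
        "median_vm_dev": np.median(np.abs(v_post - v_pre)),         # typical |V| deviation
        "median_angle_dev": np.median(np.abs(ang_post - ang_pre)),  # typical angle deviation
    }
```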
## 5 Conclusion
Stealthy False Data Injection Attacks pose a major threat to the operational integrity of vital cyber-physical networks, the smart power system being one of the most crucial examples owing to its contribution to global economic development. While newer SFDIA design approaches focus only on bypassing the conventional residue-based BDD safeguards, even in the wake of newer and more sophisticated algorithms for SFDIA detection, it becomes necessary to rigorously ascertain the effectiveness, and highlight the vulnerabilities and limitations, of these state-of-the-art defensive systems in order to prevent false complacency. Under these circumstances, this work proposes a benchmark SFDI attack framework, called DeeBBAA, that takes a fundamentally different approach from the existing attack design mechanisms in order to facilitate the development of stronger defense paradigms and ensure the security of power systems against intelligent adversaries. We show how, by leveraging only historical data pertaining to a small subset of components from an unknown power system, an adversary can use DeeBBAA to launch strong, versatile and highly evasive FDI attacks that lead to significant deviations in the state estimates of the network, all while requiring no information on the underlying power system, the target state estimation operation or the defensive algorithms put in place by the power system operator. Using a fast convex relaxation of the adversarial optimization problem against a substitute regression model that partially mimics the operation of an unknown power system state estimator, DeeBBAA adversarially perturbs the real-time power measurements in a stealthy manner that allows the attacks to evade detection not only by the conventional BDD but also by a variety of state-of-the-art statistical and data-driven attack detection mechanisms. The adversary can tune the strength of the attack either directly, by using a scaling factor, or by changing the size of the attack region that it is targeting. Flexible attack regions are considered to further simplify reconnaissance for the adversary. The analysis presented in this work uncovers a grave situation that highlights the extreme vulnerability of attack detection mechanisms in the face of a breach in data or communication channel security. It immediately follows that, in order to ensure the sustainability of security measures in cyber-physical systems, a combination of means for data security and communication channel security needs to be studied along with stronger attack detection mechanisms, and even the slightest slack in any one of these can lead to harmful consequences for the underlying network.
## 6 Supplementary Material
### _Details of Attack Regions for the IEEE 39 and 118 bus test systems_
The IEEE 39 bus test system [51], depicted in Fig. 15a, is characterized by 39 nodes, 10 generators and 46 branches. The IEEE 118 bus test system, depicted in Fig. 15b, represents a simple approximation of the American Electric Power system (in the U.S. Midwest) as of December 1962 and consists of 19 generators, 35 synchronous condensers, 177 lines, 9 transformers, and 91 loads [52]. The localized and delocalized attack regions considered for the IEEE 39 and 118 bus cases in the proposed work are elaborated in Table II. The nodes are represented by their indices and the branches are represented by the tuple of nodes on which they are incident. The naming convention of the buses in the test systems follows the PandaPower conventions [53].
Fig. 15: IEEE standard test systems used for experiments.
### _Eigenvalues of W_
As stated in Section 3.3 of the main manuscript, here we show that the principal eigenvalue of \(\mathbf{W}\) remains close to one and the second largest value is very close to 0. We plot a bar graph of the first and second largest eigenvalues of \(\mathbf{W}\) computed over 3000 datapoints, and the result is shown in Fig. 16. As can be clearly seen, the principal eigenvalues are all very close to 1 and the second largest eigenvalues are infinitesimally close to zero, implying that \(\mathbf{W}\) is an approximately rank-one matrix.
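A quick numerical check of this rank-one structure can be done as sketched below; \(\mathbf{W}\) is the matrix defined in Section 3.3 of the main text and is simply taken here as a given square array.

```python
import numpy as np

def top_two_eigenvalues(W: np.ndarray):
    """Return the two largest-magnitude eigenvalues of W (cf. Fig. 16)."""
    vals = np.linalg.eigvals(W)
    order = np.argsort(-np.abs(vals))
    return vals[order[0]], vals[order[1]]

# Repeating this over many data points, the first eigenvalue stays close to 1
# while the second stays close to 0, i.e. W is approximately rank one.
```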
Training of these algorithms involve identifying the BDD thresholds or \(\tau_{\infty}\) and \(\tau_{2}\) respectively as given \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **IEEE Test System** & \multicolumn{2}{|c|}{**Localized Attack Region**} & \multicolumn{2}{|c|}{**Delocalized Attack Region**} \\ \hline & \(\mathcal{B}_{A}\) & \(\mathcal{E}_{A}\) & \(\mathcal{B}_{A}\) & \(\mathcal{E}_{A}\) \\ \hline **39 bus** & 0, 1, 2, 24, 25, 26, 27, 28 & (25, 26), (25, 28), (27, 28) & 7, 8, 11, 17, 26, 27, 30 & (4, 7), (6, 7), (7, 8), (25, 27), (27, 28), (16, 26), (25, 26), (2, 17), (16, 17), (7, 8) \\ \hline **118 bus** & & & (20, 21), (46, 68), (81, 82), (34, 36), (43, 44), (31, 113), (15, 16), (78, 79), (79, 96), (79, 97), (50, 51), (51, 52), (75, 117), (69, 70), (69, 74), (42, 43), (69, 74), (42, 43), (18, 19), (68, 74), (74, 117) & (20, 21), (46, 68), (81, 82), (34, 36), (43, 44), (31, 113), (15, 16), (78, 79), (79, 96), (79, 97), (50, 51), (51, 52), (50, 51), (62 63), (68 74), (74 117) & (20, 21), (46, 68), (81, 82), (34, 36), (43, 44), (31, 113), (15, 16), (78, 79), (79, 96), (79, 97), (50, 51), (51, 52), (75, 117), (69, 70), (69, 74), (42, 43), (18, 19), (68, 74), (74, 117), (104, 107), (49, 56), (33, 42), (4, 5), (5, 6), (32, 36), (16, 17), (107, 108), (36, 38), (51, 52), (100, 101), (105, 106), (82, 83), (12, 14), (13, 14) \\ \hline \end{tabular} \end{table} TABLE II: Details of Attack Regions for the IEEE 39 and 118 bus test systems Fig. 16: Distribution of eigenvalues of \(\mathbf{W}\). Fig. 17: Distribution of Largest Normalized residues corresponding to benign data samples in Equations 8 and 9 in the main manuscript. Dataset B consisting of good data representing normal operating conditions is used for this purpose. For each of the data samples in Dataset B, the L\(\infty\) norm and L2 norm of the normalized residue vector is computed using Eq. 7 of the main manuscript. Keeping false alarm rates at \(2\%\), the L\(\infty\) norm and L2 norm obtained as the 98 percentile values are used as \(\tau_{\infty}\) and \(\tau_{2}\) respectively. Figures (a)a and (b)b depict the distribution of the \(L_{\infty}\) norm of the normalized residues corresponding to benign data points for the IEEE 39 bus and 118 bus test systems respectively. Figures (a)a and (b)b depict the distribution of the L2 norm of the normalized residues corresponding to benign data points for the IEEE 39 bus and 118 bus test systems respectively. The threshold values are marked as vertical lines in the plots. * _Statistical Methods_: Five state of the art statistical consistency tests, namely the **KLD** test [39], jointly**Transformed KLD** test [40], **kSRS** test [38], **ANCUSUM-C**[41] and **Outlier Graph** based test [42] are considered as defense baselines. Training, with respect to the KLD test, transformed KLD test, kSRS test and ANCUSUM-C test, imply computation of thresholds to be used for hypothesis testing using data under normal operating conditions. Following the procedures elaborated in the respective papers, Dataset B is used for threshold computations of the aforementioned algorithms. For the Outlier Graph based SFDIA detection algorithm, thresholds are computed using Dataset \(B_{att}\) following the procedure outlined in [42]. * _Learning Based Methods_: Six supervised and one unsupervised state of the art learning based SFDIA detection algorithms are considered. 
A **MLP** classifier is used to solve a binary classification problem whose output indicates whether the power system is compromised or not. The MLP has three hidden layers of sizes 256, 128 and 64, respectively, followed by a single-unit output layer with sigmoid activation. The hidden layer outputs are followed by dropout regularization with a drop probability of 10% and a Leaky-ReLU activation function. The input to the network is the measurement vector, consisting of all power system measurements including the voltage magnitudes at each bus. For the sequence-to-sequence time series networks, i.e., **LSTM** [43], [44], **Bi-LSTM** and **TCN** [46], [47], [48], the inputs are a sequence of measurement vectors over a time window of length 50 units. The outputs are binary labels of the same sequence length, each representing whether the input measurements at a particular timestep are compromised or not. The LSTM and Bi-LSTM networks have 3 hidden layers of dimensionality 64 each with default activations, while the output layer has a sigmoid activation. The architecture of the TCN is inspired by [54]. The number of filters used is 256 with a kernel size of 5, and the number of stacks is fixed at 3. Dilations of 1, 2 and 4 were used. For the localization task, **E3LM** [49] and **ARMA-GNN** [37] were considered as baselines. The hyperparameters of these algorithms were kept the same as in the respective works that proposed them. For the localization task, the target variable is an array of 0s and 1s depending on which state variables are compromised. For every attack data sample in Dataset \(B_{att}\), the target variable in these cases is a vector with dimensions equal to that of the state perturbation \(\mathbf{c}\). If the \(k\)-th element of \(\mathbf{c}\) is non-zero, then the \(k\)-th element of the target variable is 1, and 0 otherwise. A train-test split of 80-20 is carried out on Dataset \(B_{att}\) to train these algorithms. They are trained until the classification accuracy of all these algorithms on the test sets is at least 98%. Finally, **DAGMM** [50] is implemented following the [https://github.com/tmake/DAGMM](https://github.com/tmake/DAGMM) repository, keeping the same hyperparameters given in the paper.
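As mentioned in the _Conventional BDD_ item above, the thresholds \(\tau_{\infty}\) and \(\tau_{2}\) are simply the 98th percentiles of the benign residue-norm distributions, corresponding to a 2% false alarm rate. The sketch below shows this selection under the assumption that the normalized residue vectors for the benign samples of Dataset B have already been computed; the function name is illustrative.

```python
import numpy as np

def bdd_thresholds(residues: np.ndarray, false_alarm_rate: float = 0.02):
    """residues: (N, m) normalized residue vectors for the benign samples of Dataset B."""
    linf_norms = np.max(np.abs(residues), axis=1)     # statistic used by the LNRT
    l2_norms = np.linalg.norm(residues, axis=1)       # statistic used by the chi-squared test
    q = 100.0 * (1.0 - false_alarm_rate)              # 98th percentile for a 2% false alarm rate
    tau_inf = np.percentile(linf_norms, q)
    tau_2 = np.percentile(l2_norms, q)
    return tau_inf, tau_2

# At test time a sample is flagged as attacked when its L-infinity norm exceeds
# tau_inf (LNRT) or its L2 norm exceeds tau_2 (chi-squared test).
```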
2303.12362
Quantum non-Markovianity: Overview and recent developments
In the current era of noisy intermediate-scale quantum (NISQ) devices, research in the theory of open system dynamics has a crucial role to play. In particular, understanding and quantifying memory effects in quantum systems is critical to gain a better handle on the effects of noise in quantum devices. The main focus of this review is to address the fundamental question of defining and characterizing such memory effects -- broadly referred to as quantum non-Markovianity -- from various approaches. We first discuss the two-time-parameter maps approach to open system dynamics and review the various notions of quantum non-Markovianity that arise in this paradigm. We then discuss an alternate approach to quantum stochastic processes based on the quantum combs framework, which accounts for multi-time correlations. We discuss the interconnections and differences between these two paradigms, and conclude with a discussion on necessary and sufficient conditions for quantum non-Markovianity.
U. Shrikant, Prabha Mandayam
2023-03-22T07:54:58Z
http://arxiv.org/abs/2303.12362v1
# Quantum non-Markovianity: Overview and recent developments ###### Abstract In the current era of noisy intermediate-scale quantum (NISQ) devices, research in the theory of open system dynamics has a crucial role to play. In particular, understanding and quantifying memory effects in quantum systems is critical to gain a better handle on the effects of noise in quantum devices. The main focus of this review is to address the fundamental question of defining and characterizing such memory effects - broadly referred to as quantum non-Markovianity - from various approaches. We first discuss the two-time-parameter maps approach to open system dynamics and review the various notions of quantum non-Markovianity that arise in this paradigm. We then discuss an alternate approach to quantum stochastic processes based on the quantum combs framework, which accounts for multi-time correlations. We discuss the interconnections and differences between these two paradigms, and conclude with a discussion on necessary and sufficient conditions for quantum non-Markovianity. ## I Introduction A quantum system is said to be _open_ when it interacts with its environment [Breuer and Petruccione 2002]. As such a system evolves, it builds up correlations, such as entanglement, with the environment [de Vega and Alonso 2017]. This in turn results in decoherence and dissipation [Breuer et al. 2016a], which are known to be generally detrimental to quantum information tasks. The study of open system dynamics has thus become more important than ever, in today's era of noisy intermediate-scale quantum (NISQ) devices [Preskill 2018]. Of particular interest is the characterization of memory effects, or, quantum non-Markovianity, that arises due to strong system-environment coupling. A precise and universal definition of non-Markovianity has remained elusive, and understanding its origins and characteristics is pertinent for emerging quantum technologies. The study of open quantum system dynamics has been formalized in a number of approaches, starting with the traditional approaches in [Breuer and Petruccione 2002, Banerjee 2018], to operational characterizations [Pollock et al. 2018a], and more recently, approaches based on quantum collision models [Ciccarello et al. 2022, Campbell and Vacchini 2021]. From the quantum information theory point of view, system dynamics are represented by _quantum dynamical maps_, which are linear, completely positive (CP) and trace preserving (TP) maps. Such maps, referred to as _quantum channels_, can be described using an operator-sum representation (the so-called Kraus representation) [Nielsen and Chuang 2010], which can be derived by tracing out the environment degrees of freedom from the full system-environment unitary dynamics. Open system dynamics may be broadly classified as Markovian and non-Markovian. In the natural sciences, a process is said to be Markov if the future outcomes of the measurement on the system are independent of the past ones, conditioned on the present. When such past-future independence fails, or, when the environment retains the history of the system then the process is said to be non-Markovian. Over the past decade, there has been much effort focused towards characterizing, witnessing and quantifying non-Markovianity in the quantum domain. 
A number of witnesses and measures have been proposed, based on divisibility [Rivas et al.2010], distinguishability (or trace distance) [Breuer et al.2009], fidelity [Rajagopal et al.2010], quantum channel capacity [Bylicka et al.2014, Pineda et al.2016], accessible information [Fanchini et al.2014], mutual information [Luo et al.2012], quantum discord [Alipour et al.2012], interferometric power [Dhar et al.2015] and deviation from the semigroup structure [Wolf et al.2008, Utagi et al.2020b], to name a few. However, a precise and universal definition of quantum (non-)Markovianity continues to remain one of the unsolved problems in open systems theory [Li et al.2018]. The traditional approach to quantum non-Markovianity does not have a well-defined classical limit and lacks a clear operational interpretation [Pollock et al.2018b]. In fact, the traditional approach characterizes dynamical processes either via one-parameter semigroups of dynamical maps or via two-parameter families of maps that are divisible or indivisible, thus incorporating only two-time correlation functions of the environment. The results emerging from such approaches do not necessarily generalize to situations where multi-time correlations become prominent. A new approach known as the process tensor formalism promises a solution to this problem through complete tomographic characterization of a quantum stochastic process, taking into account multi-time correlations as well as the (possibly unknown) initial system-environment (S-E) correlations [Modi2012], and offering an operationally motivated characterization of open systems which the previous approaches need not provide. The present review attempts to survey this active area of defining and characterizing quantum non-Markovian dynamics. While there have been a few good reviews on this topic in the literature in the past - see, for example, [Rivas et al.2014, Breuer et al.2016a, Breuer et al.2016b, de Vega and Alonso 2017, Li et al.2018] - our article focuses on the more notable recent developments aimed at detecting and quantifying non-Markovianity via temporal quantum correlations. After briefly reviewing the well-known definitions based on CP-divisibility [Rivas et al.2010] and distinguishability [Breuer et al.2009], which are only necessary but not sufficient indicators of non-Markovianity, we discuss a measure proposed by [Chen et al.2016] based on temporal steerable correlations, and subsequently the one proposed by [Utagi2021] based on a causality measure arising from the pseudo-density matrix. However, as we note in this review, the definitions based on quantum temporal correlations are only sufficient, but not necessary, indicators of non-Markovianity. Later, we discuss the recent approaches in which multi-time correlations are taken into account, specifically the process tensor framework proposed by [Pollock et al.2018a, Pollock et al.2018b] and a definition of non-Markovianity based on conditional past-future correlations proposed by [Budini2018b, Budini2019, Budini2022]. In particular, we address the issue of necessary and sufficient criteria for quantum non-Markovianity in this review.
_A note on terminology_: (i) We use _system_ to refer to an open quantum system (ii) _environment_ refers to a quantum environment having quantum degrees of freedom, unless otherwise stated; (iii) the _master equation_ is an equation that describes the reduced dynamics of the system alone, after tracing out the environment degrees of freedom; (iv) the word _correlations_ implies quantum correlations unless otherwise stated. The rest of this review is structured as follows. In Sections II.1 and II.2, we briefly review the well-known master equation and dynamical maps approaches to open system dynamics. In Section II.3 we discuss some of the famous measures of non-Markovianity including the ones based on distinguishability of states and completely positive (CP) divisibility. We also briefly note some of the measures that are based on quantum correlations. In Section II.4, we review some recent measures that are based on correlations in time, namely temporal steering and temporal non-separability, and note an important relation between the two. Interestingly, these measures are known not to be strictly equivalent, as we discuss in Section II.5, leaving open the question of equivalence between the measures based on temporal steerable weight and that based on causality measure. Sections III.1 and III.2 form an interlude where we discuss some curious features of open systems and system-environment (S-E) correlations, and mention some recent developments. We then move on to Part II in Section IV, where we mainly focus on the frameworks that overcome the limitations of the existing two-time maps. Given that a notion of Markovianity exists, namely the independence of future outcomes on past measurement results, non-Markovianity is commensurate with a notion of causality and causal influence of past history on the future of evolution. In Sec.IV we present the notion of non-Markovianity based on an operational framework called process tensor and discuss various features, along with mentioning recent progress. In Sec.IV.3, we review the notion of non-Markovianity based on conditional past-future independence which is operationally motivated and yet does overcome the limitation of previous approaches. In Section V, we briefly review some of the aspects of non-Markovian dynamics in experimental settings. We conclude in Section VI, with a brief discussion on necessary and sufficient criterion for a witness and measure of non-Markovianity for any arbitrary quantum stochastic dynamics and provide a note on future prospects. ## II Part I: Two-time quantum dynamical maps and non-Markovianity ### The master equation Traditionally, the reduced dynamics of a system coupled to an environment is described by a Nakajima-Zwanzig master equation, also called the time-nonlocal equation, of the form \[\dot{\rho}(t)=-\frac{i}{\hbar}[H_{S},\rho(t)]+\int_{t_{0}}^{t}\mathcal{K}_{t, \tau}[\rho(\tau)]d\tau, \tag{1}\] \(\forall t\geq\tau\geq 0\), where \(\dot{\rho}=\frac{d\rho}{dt}\) and \(H_{S}\) is the system Hamiltonian. The linear map \(\mathcal{K}_{t,\tau}\) incorporates the non-Markovian memory effects that may be present in the system's evolution. 
One may go from the time-nonlocal equation to a time-local one by assuming that there exists a linear map \(\Phi\) which is invertible at all times, that is \(\Phi\Phi^{-1}=I\), such that [1], \[\dot{\rho}(t) =\int_{t_{0}}^{t}d\tau\left(\mathcal{K}_{\tau,t}\circ\Phi_{\tau} \right)[\rho(0)]\] \[=\int_{t_{0}}^{t}d\tau\left(\mathcal{K}_{\tau,t}\circ\Phi_{\tau} \circ\Phi_{t}^{-1}\right)\Phi_{t}[\rho(0)]=\mathcal{L}_{t}[\rho(t)]. \tag{2}\] \(\mathcal{L}_{t}\) is the time-local _generator_ or the _Lindbladian_, which is a linear super-operator on the space of density operators. When the corresponding dynamical map is non-invertible, it is not necessary that a time-local master equation should exist [1], although one may make use of Moore-Penrose pseudo-inverse [14] or generalized inverse and still be able to construct a generator at least for divisible dynamics [1]. That is, existence of a master equation is sufficient to imply the existence of a corresponding dynamical map, however the converse is not true [10]. In order to arrive at an exact Lindbladian \(\mathcal{L}\), one makes the famous Born-Markov approximation [see Sec. II.2], under which the time-local evolution of the open system is described by the famous Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation [15, 16], which in its canonical form reads, \[\dot{\rho}=\mathcal{L}[\rho]=\sum_{j}\gamma_{j}\bigg{(}L_{j}\rho L_{j}^{ \dagger}-\frac{1}{2}\{L_{j}^{\dagger}L_{j},\rho\}\bigg{)}. \tag{3}\] Here \(\dot{A}=\frac{dA}{dt}\) for any time-continuous operator \(A\) and the linear operators \(L_{j}\) are called the _Lindblad operators_ or simply the _jump operators_. The dynamics described by Eq. (3) is called time-homogeneous Markovian. Generally, the decay rates \(\gamma_{j}\) may be time-dependent. In this case, the Born-Markov approximation does not hold, but the rotating approximation is retained, so that the master equation modifies to the time-dependent GKSL-like equation (in the canonical form): \[\dot{\rho}=\mathcal{L}(t)[\rho]=\sum_{j}\gamma_{j}(t)\bigg{(}L_{j}(t)\rho L_{j} ^{\dagger}(t)-\frac{1}{2}\{L_{j}^{\dagger}(t)L_{j}(t),\rho\}\bigg{)}. \tag{4}\] Now the jump operators themselves are time-dependent along with the decay rates \(\gamma_{j}(t)\). The process in Eq. (4) is termed time-inhomogeneous Markovian when all the decay rates \(\gamma_{j}(t)\) are positive for all times. When at least one of the decay rates is negative for a certain interval of time-evolution, then the process is termed non-Markovian [Garraway, Breuer and Petruccione, 2002]. The price one pays for going from a nonlocal to a local description is that the generator may become highly singular [Chruscinski and Kossakowski, 2010], which makes the solution to dynamics analytically hard. Indeed, the nonlocal equation might be more natural and easy to handle in certain physical situations, see Ref. [Megier et al., 2020, Megier et al., 2020]. The distinction between the notions of Markovianity in Eq. (3) and Eq. (4) becomes clearer when one looks at the properties of the corresponding dynamical maps, which we discuss next. ### Quantum dynamical maps From a quantum information theoretic perspective, the general time evolution of a quantum system is described by a quantum dynamical map, which takes density operators to density operators. Since the environment is generally a many-body system with many degrees of freedom, it becomes difficult for an experimenter to fully control it. 
Therefore, studying the reduced dynamics of the system in a consistent manner becomes useful. Figure (1) depicts a simple example of an open quantum system, namely, qubit interacting with an environment. Let us denote the system Hamiltonian (also called the free Hamiltonian) as \(H_{S}\) and the environment Hamiltonian as \(H_{E}\). The interaction Hamiltonian \(H_{int}\) determines the nature of system-environment interaction and the coupling with the environment. The total S-E evolution may be represented by a global unitary \(U=\exp\{-\frac{i}{\hbar}(H_{S}+H_{E}+H_{int})t\}\). The effect of the environment on the system is often called _quantum noise_ in the context of quantum information processing and the open-system evolution is termed noisy evolution in contrast to closed-system evolution which is unitary. Tracing out the environment degrees of freedom gives rise to the operator-sum representation of the effect noise on the system, which falls under the broad formalism of quantum operations. Mathematically, the effect of the environment of the system is represented by a set of linear operators on the system, called the Kraus operators, and the reduced dynamics of the system is obtained as follows. \[\rho(t)=\mathrm{Tr}_{E}[U(\rho_{S}(0)\otimes\rho_{E})U^{\dagger}]=\sum_{j} \bra{e_{j}}U(\rho_{S}(0)\otimes\ket{e_{0}}\bra{e_{0}})U^{\dagger}\ket{e_{j}}, \tag{5}\] where the states \(\{\ket{e_{j}}\}\) represent environment degrees of freedom. Equivalently, Eq. 5 can be written for any input system state \(\rho\), in the so-called _Kraus form_, as, [Sudarshan et al., 1961, Kraus, 1971, Choi, 1975, Kraus et al., 1983] \[\rho(t)=\Phi(t)[\rho]=\sum_{j}K_{j}(t)\rho K_{j}^{\dagger}(t). \tag{6}\] Here, \(K_{j}\equiv\bra{e_{j}}U\ket{e_{0}}\) are called the _Kraus operators_ which obey \(\sum_{j}K_{j}^{\dagger}K_{j}\leq I\). This operator-sum representation is an important and powerful tool today in the context of quantum information and computation [Nielsen and Chuang 2010]. The map \(\Phi\) in Eq. 6 obeys the time-homogeneous (or, time-independent) master equation (3), that is, \(\dot{\Phi}=\mathcal{L}[\Phi]\), whose solution is given by \(\Phi=\exp\{t\mathcal{L}\}\), which is a one-parameter quantum dynamical semigroup. Similarly, a two-parameter map \(\Phi(t,t_{0})\) (or \(\Phi(t)\) for simplicity setting \(t_{0}=0\)) obeys a master equation (4) of the form, \(\dot{\Phi}(t)=\mathcal{L}(t)[\Phi(t)]\), whose solution is given by \[\Phi(t,t_{0})=\mathcal{T}\exp\bigg{\{}\int_{t_{0}}^{t}\mathcal{L}(\tau)d\tau \bigg{\}}\, \tag{7}\] where \(\mathcal{T}\) is the time-ordering operator. In a sense, a map that is derivable from a given generator depends on the nature of \(\mathcal{L}\). One must note that Eqs. (3), (4), and (6) are derived after assuming that the system-environment (S-E) state factors out at \(t=0\), which need not be the case generally. Furthermore, the environment state \(\rho_{e}\) is assumed to be _fixed_ for all times, in which case the Born-Markov approximation holds. Under the time-coarse-grained weak coupling limit, the evolution quickly "forgets" initial S-E correlations and tends to the Lindblad form [Royer 1996]. The existence of the time-independent Lindblad form 3 implies the following _equivalent_ statements: (i) The environment auto-correlation function is a delta function and corresponds to the white noise approximation. 
There is no back-action on the system due to the static environment state, which also means that \(\tau_{E}\ll\tau_{S}\), where \(\tau_{E}\) is the environmental correlation time and \(\tau_{S}\) is the system relaxation time; in other words, the environment cannot store any information about the system's past evolution. This is the famous Born-Markov approximation; (ii) The system couples uniformly to all the degrees of freedom of the environment; (iii) The Lindbladian \(\mathcal{L}\) is time-independent and the corresponding solution to the equation \(\dot{\Phi}(t)=\mathcal{L}[\Phi(t)]\) is \(\Phi(t)=\exp\{t\mathcal{L}\}\), which is a semigroup satisfying the property \(\Phi(t+\tau)=\Phi(t)\Phi(\tau)\) for all \(0\leq\tau\leq t\). Historically, any process that deviates from the semigroup structure was termed non-Markovian [Breuer et al. 2016a]. Later developments in the quantum information community indicate that this is not the complete story, as we will elaborate in the following sections.
Figure 1: Left: A simple representation of a qubit interacting with environment degrees of freedom. Right: A simple model of a quantum operation after tracing out the environment from the global unitary \(U\). Here, \(\rho_{S}\) and \(\rho_{E}\) are the system and environment states, respectively, and \(H_{S}\), \(H_{E}\), and \(H_{int}\) are the system, environment and S-E interaction Hamiltonians, respectively.
### Measures of non-Markovianity: Spatial domain
#### ii.3.1 CP-indivisibility
Divisibility is a property of dynamical maps which allows us to write a map as a concatenation of intermediate maps [Wolf and Cirac 2008, Wolf et al. 2008, Rivas et al. 2010, Chruscinski et al. 2011, Chruscinski and Maniscalco 2014, Chruscinski et al. 2018, Davalos et al. 2019]. In the classical case, a divisible (hence Markovian) stochastic process is given by a concatenation of transition matrices. As far as the traditional approach is concerned, there is no known way of carrying the classical definition of non-Markovianity over to the quantum case. However, an approach based on divisibility states that a map \(\Phi(t,0)=\Phi(t,\tau)\Phi(\tau,0)\) is CP-indivisible if the intermediate map \(\Phi(t,\tau)\) is not completely positive (NCP), in the sense that at least one of the eigenvalues of the matrix \[\chi=(\Phi(t,\tau)\otimes I)[|\psi\rangle\!\langle\psi|] \tag{8}\] is negative, where \(\chi\) is called the Choi state [Choi 1975] or Sudarshan B matrix [Sudarshan et al. 1961a], which is dual to the intermediate map \(\Phi(t,\tau)\), and \(|\psi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\) is a maximally entangled state. Based on the above considerations, the following measure of non-Markovianity was introduced by Rivas-Huelga-Plenio (RHP) in [Rivas et al. 2010]: \[\mathcal{N}_{\rm RHP}=\int_{0;\,\chi(t,\tau)<0}^{t_{\rm max}}g(t)dt\,;\quad g(t)=\lim_{\tau\to 0^{+}}\frac{\|\chi(t,\tau)\|_{1}-1}{\tau}, \tag{9}\] where \(\chi\) is the Choi matrix; whenever \(g(t)>0\), the divisibility condition is broken, and the time integral over the positive regions of \(g(t)\) quantifies the quantum memory in the dynamics. Note that \(\mathcal{N}_{\rm RHP}\) can be unbounded, and hence requires a suitable normalization so that it falls in the interval \([0,1]\), with \(\mathcal{N}_{\rm RHP}=0\) for Markov processes. In fact, for any finite \(d\)-dimensional open system, the RHP measure in Eq. 9 is equivalent to the one due to Hall-Cressor-Li-Anderson (HCLA) given by [Hall et al. 2014, Shrikant et al.
2018], \[\mathcal{N}_{\rm RHP}=\frac{d}{2}\mathcal{N}_{\rm HCLA};\quad\mathcal{N}_{\rm HCLA }=-\int_{0;\,\gamma(t)<0}^{t_{\rm max}}\gamma(t)dt, \tag{10}\] where, the integration is carried over only the negative regions of the time-dependent decay rate \(\gamma(t)\). Historically, it was understood that when maps generating the dynamics deviate from having a semigroup structure, one speaks of non-Markovianity [Breuer et al. 2016a]. The condition \(\gamma_{j}(t)\geq 0\) pertains to Markovian approximation for the time-dependent noisy dynamics, and the corresponding dynamical maps do not belong to a semigroup; such processes are termed _time-dependent Markovian_. Note that the dynamics represented by the Lindblad form in Eq.(3) is strictly Markovian [Hall et al. 2014]. When the decay rates in Eq.(4) are temporarily negative, the corresponding dynamical maps are no more CP-divisible. #### ii.3.2 Information back-flow As an open system evolves, it generally sets up correlations with the environment and loses its information content irreversibly. However, that is true only when the system couples weakly to the environment. Under the strong coupling limit, the information might periodically return to the state from the environment, leading to _information back-flow_. This also means that the environment remembers the history of evolution of the system. We briefly review here an approach by Breuer, Laine and Piilo (BLP) [Breuer et al. 2009] to quantify this information back-flow based on the trace distance. One may find a distance measure on the space of density operators that is contractive under the given CPTP map. Since matrix trace norm is known to be CP-contractive under a CPTP map [Nielsen and Chuang 2010], trace distance is one such natural candidate1. Mathematically, the trace distance is defined as, Footnote 1: In fact, Bures distance and quantum relative entropy are other measures that are contractive under CPTP maps and can witness non-Markovianity [Liu et al. 2013b, Megier et al. 2021]. \[\mathcal{D}(\rho_{1},\rho_{2})=\frac{1}{2}\|\rho_{1}-\rho_{2}\|_{1}, \tag{11}\] where \(\|A\|_{1}=\mathrm{Tr}[\sqrt{AA^{\dagger}}]\) is the trace norm of an operator \(A\). A map \(\Phi(t_{2},t_{0})\), that takes a density operator from \(t_{0}\) to \(t_{2}\), is said to be Markovian if it satisfies the following data processing inequality. \[\mathcal{D}\big{(}\Phi[\rho_{1}],\Phi[\rho_{2}]\big{)}\leq\mathcal{D}\big{(} \rho_{1},\rho_{2}\big{)}, \tag{12}\] for all times, where \(\Phi[\rho]\) is given by Eq. (6). Breakdown of the monotonicity of trace distance shown in Eq.(12) between any two orthogonal initial states under a CPTP map was used as a witness of non-Markovianity. The decrease in non-orthogonality of the states is interpreted as back-flow of information from the environment to the system. Note that when there is information back-flow, the intermediate map \(\Phi(t_{2},t_{1})\) is not even positive, which in turn implies that the dynamical map \(\Phi(t_{2},t_{0})=\Phi(t_{2},t_{1})\Phi(t_{1},t_{0})\) is positive (P-) indivisible [Chruscinski et al. 2011, Chruscinski and Maniscalco 2014]. This is equivalent to saying that \(\frac{d}{dt}\|\Phi[(\rho_{1}-\rho_{2})]\|_{1}\geq 0\). Note that complete positivity of the map \(\Phi(t_{2},t_{0})\) requires that the trace distance only decreases at \(t=0\), but it can increase and decrease for \(t>0\) due to P-indivisibility. Quantum non-Markovianity in the sense of P-indivisibility can be quantified as follows [Breuer et al. 2009]. 
\[\mathcal{N}_{\mathrm{BLP}}:=\max_{\rho_{1},\rho_{2}}\int_{\frac{d\mathcal{D}}{dt}>0}dt\ \frac{d\mathcal{D}}{dt}\, \tag{13}\] where the integration is carried out over the regions of positive slope of \(\mathcal{D}\). Note that this criterion is only sufficient but not necessary, since it might fail as a witness of memory for some non-unital channels [Chruscinski et al. 2017, Liu et al. 2013b] as well as for certain unital channels [Hall et al. 2014]. In other words, P-indivisibility implies CP-indivisibility, but the converse may not be true. Finally, we may note that, as far as the revival of quantum information and correlations is concerned, this is also possible when the environment is classical and therefore cannot store information about the system. In other words, for the revivals of information to take place, the environment need not be quantum. This observation calls for attention to re-evaluating the notion of system-environment back-action [Xu et al. 2013].
#### ii.3.3 Correlation-based measures
We know that quantum mechanics allows for various forms of correlations, namely nonlocal correlations [Brunner et al. 2014] that violate Bell inequalities, steering [Uola et al. 2020], entanglement [Horodecki et al. 2009], entropic accord [Szasz 2019] and quantum discord [Modi et al. 2012]. While the RHP measure discussed in Eq. 9 is based on entanglement, there exist various proposals based on different measures of correlation, such as quantum discord [Alipour et al. 2012], mutual information [Luo et al. 2012] and accessible information [Fanchini et al. 2014, Haseli et al. 2014, De Santis et al. 2019]. Interestingly, some works show a peculiar relationship between non-Markovianity and certain forms of correlations, for example quantum discord and non-Markovianity [Mazzola et al. 2011, Alipour et al. 2012]. It was shown that a measure based on the mutual information between the reduced system and an ancilla detects the same range of non-Markovianity as BLP does [Luo et al. 2012]. Similar assertions may be made for any measure based on correlations between the system and an ancilla, for example the one proposed by [Rivas et al. 2010], in which entanglement is used to quantify non-Markovianity. However, it must be noted that some of them may be easier to calculate than others. For instance, correlations between the system and an ancilla might be simpler to compute than the quantum discord between the system and environment states, which requires full knowledge of the S-E dynamics [Alipour et al. 2012]. As we have seen, a number of measures and witnesses have been proposed based on correlations in space. However, recently, a few works have made use of _correlations in time_ to witness and measure non-Markovianity, which we take up in the next subsection.
### Measures of non-Markovianity: Temporal domain
Just as spatial correlations form a hierarchy, quantum temporal correlations do so as well, as was shown recently by [Ku et al. 2018]: temporal nonlocality [Leggett and Garg 1985], temporal steering [Chen et al. 2014], and the temporal non-separability [Fitzsimons et al. 2015] of the pseudo-density matrix (PDM) framework form a hierarchy. In the same paper, they also showed that temporal steering is a weaker form of direct cause, while temporal non-separability is a stronger form of direct cause in quantum mechanics. Interestingly, temporal steering is quantified by the temporal steerable weight (TSW), which was proven to be contractive under a divisible CPTP map, and was used to quantify quantum non-Markovianity by [Chen et al.
2016]. Here, we briefly review the measure based on the TSW and the causality measure based on the PDM.
#### ii.4.1 Temporal steering
Quantum steering is a way to prepare a part of an entangled bipartite state by making a measurement on the other part. In spatial steering, Alice performs a positive operator valued measure (POVM) on her system. Bob does not trust Alice or her apparatus, and wishes to distinguish between the correlations established due to a true manipulation of his local state and those due to underlying classical local hidden variables. Similar to steering in space with a given spatially entangled state, one may steer a state in time by making a measurement on the input state and sending it through a quantum channel, followed by a complete quantum state tomography of the output state at the end of the channel. Now, we shall introduce the notion of the temporal steerable weight. Alice performs a positive operator valued measure (POVM) measurement on an input state \(\rho\) at \(t=0\), transforming it into \[\rho_{a|x}=\frac{\Pi_{a|x}\rho\Pi_{a|x}^{\dagger}}{p(a|x)}, \tag{14}\] where \(p(a|x)=\mathrm{Tr}[\Pi_{a|x}\rho\Pi_{a|x}^{\dagger}]\) is the probability that an outcome \(a\) occurs given that Alice performs a measurement in the basis \(x\). Now the state \(\rho_{a|x}\) is sent to Bob down a noisy quantum channel \(\Phi\) for a time \(t\). When Bob receives the state at \(t\), he performs quantum state tomography to obtain the state \(\sigma_{a|x}(t)=\Phi[\sigma_{a|x}(0)]\), where \(\sigma_{a|x}(0)=\rho_{a|x}\). We may call the set of states \(\sigma_{a|x}(t)\) the temporal assemblage, and let the unnormalized assemblage be \(\sigma_{a|x}(t)\equiv p(a|x)\sigma_{a|x}\). Now, by assumption, Bob does not trust Alice or her devices, and he would want to distinguish the correlations due to Alice's measurements from correlations that might have originated from a hidden variable \(\lambda\), making the correlations satisfy locality in time and realism. Therefore, we may represent the correlations that might be produced by such classical origins as \[\sigma_{a|x}^{US}(t)=\sum_{\lambda}P(\lambda)P(a|x,\lambda)\sigma_{\lambda}, \tag{15}\] where \(\sigma_{a|x}^{US}(t)\) is the unsteerable assemblage and \(P(a|x,\lambda)\) is the probability that an outcome \(a\) occurs given that Alice makes a measurement \(x\), with \(\lambda\) the hidden variable that might have influenced the outcome. If Bob can write down his assemblage in the form (15), the correlations admit such a classical explanation; when he cannot, he is sure that the state was prepared by Alice's measurement. Now, a measure of temporal steering, called the temporal steerable weight (TSW), was introduced by [Chen et al., 2016]. In order to define the TSW, consider a convex mixture \[\sigma_{a|x}(t)=w\sigma_{a|x}^{US}(t)+(1-w)\sigma_{a|x}^{S}(t)\quad\forall a,x. \tag{16}\] Clearly, \(\sigma_{a|x}(t)\) is an assemblage which might contain both unsteerable and steerable correlations, with the constraint \(0\leq w\leq 1\). The TSW for a given assemblage \(\sigma_{a|x}(t)\) is defined by \[W^{\rm TS}=1-w^{\prime}, \tag{17}\] where \(w^{\prime}\) is the maximum value of \(w\). The TSW may be interpreted as the minimal steerable resource required to reproduce the temporal steerable assemblage. That is, \(W^{\rm TS}=0\,\text{and}\,1\) for minimal and maximal steerability, respectively.
\(w^{\prime}\) may be obtained by semi-definite programming: \[\text{Find}\quad w^{\prime}=\max\text{Tr}\sum_{\lambda}w\sigma_{\lambda},\] \[\text{subject to}\quad\left(\sigma_{a|x}(t)-\sum_{\lambda}q_{\lambda}(a|x)w\sigma_{\lambda}\right)\geq 0\quad\forall a,x\] \[w\sigma_{\lambda}\geq 0\qquad\forall\lambda, \tag{18}\] where \(q_{\lambda}(a|x)\) are the extremal values of \(P_{\lambda}(a|x)\). Now, under the noisy quantum channel these correlations deteriorate, and [Chen et al., 2016] have shown that \(W^{\rm TS}\) is non-increasing under local operations. Therefore, we have the monotonicity condition \[W^{\rm TS}_{\rho}\geq W^{\rm TS}_{\Phi[\rho]}. \tag{19}\] A Markov process satisfies the above condition, and a non-Markovian process violates it. Given this fact, a measure of non-Markovianity is nothing but the area under the positive slope of \(W^{\rm TS}_{\Phi[\rho]}\): \[\mathcal{N}_{\rm TSW}=\int_{t=0;\,\frac{dW^{\rm TS}}{dt}>0}^{t}\frac{dW^{\rm TS}}{dt}dt, \tag{20}\] which, up to a factor of \(\frac{1}{2}\), is equivalent to \[N:=\int_{t_{0}}^{t_{\rm max}}\left|\frac{dW^{\rm TS}}{dt}\right|\!dt+(W^{\rm TS}_{t_{\rm max}}-W^{\rm TS}_{t_{0}}). \tag{21}\] It is important to mention that \(\mathcal{N}_{\rm TSW}\) is only a sufficient and not a necessary condition for non-Markovianity of \(\Phi\). There may be channels that are detected as Markovian by this measure while other measures may detect them as non-Markovian. Breakdown of monotonicity of the TSW may be interpreted as information back-flow from the environment to the system; hence this measure captures the range of memory effects that BLP does. #### ii.4.2 Pseudo-density matrix Recently, attempts have been made to define _states across time_, similar to the states defined in space [Zhang 2021, Zhang et al. 2020, Cotler et al. 2018]. It was shown that states across time and states across space have different structure. One such construction, by [Fitzsimons et al. 2015, Pisarczyk et al. 2019], is the pseudo-density matrix (PDM), a state correlated in spacetime that allows for a treatment of correlations in space and time on an equal footing. However, one should note that the framework is ambiguous for systems of dimension other than a prime power [Horsman et al. 2017]. Recently, [Utagi 2021] has defined a measure for non-Markovianity based on temporal correlations in the PDM. Utilizing the fact that the most general PDM is constructed by making measurements before and after a system passes through a quantum channel, one can quantify quantum non-Markovianity in the quantum channel in a straightforward way. Here, for simplicity, we consider a qubit across time evolved through a quantum channel. Let \(\rho\) be the input state and \(\Phi(t_{B},t_{A})\) a quantum channel that takes a density operator \(\rho_{A}\) on \(\mathcal{L}(\mathcal{H}_{A})\) at time \(t_{A}\) to an operator \(\rho_{B}\) on \(\mathcal{L}(\mathcal{H}_{B})\) at time \(t_{B}\). Then a two-point PDM is given by \[R_{AB}=(I\otimes\Phi)\left[\left\{\rho\otimes\frac{I}{2},\mathsf{swap}\right\}\right] \tag{22}\] where \(\mathsf{swap}:=\frac{1}{2}\sum_{i=0}^{3}\sigma_{i}\otimes\sigma_{i}\) and \(\{\hat{a},\hat{b}\}=\hat{a}\hat{b}+\hat{b}\hat{a}\) is the anti-commutator of operators \(\hat{a}\) and \(\hat{b}\), and \(\sigma_{i}\) (\(i=1,2,3\)) are the Pauli X, Y, and Z operators, with \(\sigma_{0}=I\). In fact, it can be shown [Horsman et al. 
2017] that the two-point PDM can be written as \[R_{AB}=\frac{1}{2}\bigg{(}\rho_{A}\otimes\frac{I_{B}}{2}\chi_{AB}+\chi_{AB}\rho_{A}\otimes\frac{I_{B}}{2}\bigg{)} \tag{23}\] where \(\chi_{AB}\) is the Choi state of the channel \(\Phi\), and this form derives from the so-called star product. One must note that the PDM is Hermitian and has unit trace, but is not positive semi-definite when it is constructed out of measurements made in time. The reason is that this framework considers the tensor product over the _same_ Hilbert space of the input and output density operators, in order to define a state across time. However, under partial trace it describes a positive semi-definite operator at each instant of time, which is consistent with the current formulation of quantum mechanics. A measure of temporal correlations in the PDM has been defined by [Pisarczyk et al. 2019]: \[F=\log_{2}\|R_{\mathrm{AB}}\|_{1}, \tag{24}\] which implies that when \(F>0\) the state \(R_{\mathrm{AB}}\) is temporally correlated. Since \(F\) is non-increasing under local quantum operations, for a Markovian channel \(\Phi\) the following condition holds: \[F_{\Phi[\rho]}(t)\geq F_{\Phi[\rho]}(t+\tau), \tag{25}\] with \(\tau\geq 0\). A non-Markovian (or CP-indivisible) channel breaks the monotonicity condition (25). Following [Rivas et al. 2010, Chen et al. 2016], a measure was proposed by [Utagi 2021] as the area under the positive slope of \(F_{\Phi[\rho]}(t)\): \[\mathcal{N}_{\mathrm{causality}}:=\max_{\rho}\int_{\sigma_{(\rho,\mathcal{E},t)}>0}dt\;\sigma_{(\rho,\mathcal{E},t)} \tag{26}\] where \[\sigma_{(\rho,\mathcal{E},t)}=\frac{dF}{dt}.\] The above definition, up to a factor of \(\frac{1}{2}\), is equivalent to \[\mathcal{N}_{\rm causality}:=\max_{\rho}\int_{t_{0}}^{t_{\rm max}}\left|\frac{dF}{dt}\right|dt+(F_{t_{\rm max}}-F_{t_{0}}). \tag{27}\] The integral (27) is such that for a non-Markovian process the derivative of \(F\) is positive at some times and \(\mathcal{N}_{\rm causality}>0\), while for a time-dependent Markovian (or CP-divisible) process the derivative of \(F\) is negative and hence \(\mathcal{N}_{\rm causality}=0\). It has been shown, using a phenomenological process, that \(\mathcal{N}_{\rm causality}>0\) corresponds to negativity of the decay rate in the master equation, which is equivalent to the RHP definition. It was also shown that \(\mathcal{N}_{\rm causality}>0\) corresponds to information back-flow, and is hence understood to be equivalent to the BLP definition for a pair of states optimal for the process under consideration. This equivalence is due to the fact that the process considered in [Utagi 2021] has a single jump operator in the Lindbladian. Interestingly, however, [Utagi 2021] showed that this equivalence of the PDM-based measure and the BLP measure breaks down when non-Markovianity originates solely from the non-unital part of the channel. Here again, we mention that the PDM-based non-Markovianity measure is a sufficient but not a necessary criterion for non-Markovianity. However, [Ku et al. 2018] have shown that PDM correlations contain a stronger form of quantum direct cause, while temporal steerable correlations contain a weaker form. There are certain advantages of using the PDM-based measure over the one based on the TSW. The PDM-based measure (27) does not require an elaborate optimization procedure and is easy to compute. Moreover, neither measure requires optimization over pairs of input states in the way BLP does, making them relatively easy to compute. 
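As a concrete illustration of Eqs. (22) and (24), the sketch below (our own toy example, not code from [Utagi 2021] or [Pisarczyk et al. 2019]) constructs the two-point PDM of a qubit sent through a phase-damping channel and evaluates \(F=\log_{2}\|R_{AB}\|_{1}\); the input state and the channel are hypothetical choices made only for illustration.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
paulis = [I2,
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

# swap := (1/2) sum_i sigma_i (x) sigma_i, as in Eq. (22)
swap = 0.5 * sum(np.kron(s, s) for s in paulis)

def apply_on_second(kraus, M):
    """Apply (I (x) Phi) to a two-qubit operator M, with Phi given by its Kraus operators."""
    return sum(np.kron(I2, K) @ M @ np.kron(I2, K).conj().T for K in kraus)

def dephasing_kraus(p):
    """Phase-damping channel Phi[rho] = (1-p) rho + p Z rho Z (illustrative model)."""
    return [np.sqrt(1 - p) * I2, np.sqrt(p) * paulis[3]]

def pdm(rho, kraus):
    """Two-point pseudo-density matrix R_AB of Eq. (22)."""
    A = np.kron(rho, I2 / 2)
    anticom = A @ swap + swap @ A              # {rho (x) I/2, swap}
    return apply_on_second(kraus, anticom)

def causality_F(R):
    """F = log2 ||R||_1 of Eq. (24); R is Hermitian, so ||R||_1 = sum of |eigenvalues|."""
    return np.log2(np.sum(np.abs(np.linalg.eigvalsh(R))))

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+| input (illustrative)
for p in (0.0, 0.25, 0.5):
    R = pdm(rho0, dephasing_kraus(p))
    print(f"p = {p:4.2f}   F = {causality_F(R):.4f}")
```

Tracking how \(F\) of the evolved PDM changes in time (mimicked here by increasing the damping parameter) is what enters the monotonicity condition (25).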
A limitation of the PDM is that, in its current form, it is ambiguously defined for systems with dimension other than a prime power [Horsman et al. 2017]. The full validity of these two measures requires further study. ### Equivalence of the measures and regimes of failure In fact, there exists a hierarchy of divisibility of maps [Chruscinski et al. 2011, Chruscinski and Maniscalco 2014], in which, if a process is non-Markovian according to the BLP measure, the corresponding map \(\Phi(t,0)\) is termed positive (P-)indivisible: the intermediate map \(\Phi(t,s)\) is not even positive, in the sense that it maps some positive state to an operator with negative eigenvalues. A CP-indivisible map, on the other hand, can still be P-divisible. This suggests that these definitions need not be equivalent; an example is the 'eternally' non-Markovian Pauli channel [Hall et al. 2014], which is CP-indivisible but P-divisible. However, for certain non-unital channels the BLP measure fails and may require some modification [Liu et al. 2013b]. The BLP indicator essentially fails when the environment evolves _independently_ of the system [Budini 2018a]. It is interesting to note that when there is only a single decoherence channel, which corresponds to a single jump operator in the Lindbladian, both CP- and P-indivisibility definitions coincide [Breuer et al. 2016a] for any non-Markovian process. P- and CP-divisibility based witnesses, in general, coincide for bijective maps [Bylicka et al. 2017]. Interestingly, [Chakraborty and Chruscinski 2019] show that information back-flow and CP-indivisibility are equivalent notions for any open _qubit_ evolution. Recently, it has been noted that the negativity of the decay rate is not sufficient to capture CP-indivisibility for non-invertible maps [Chruscinski et al. 2018], particularly when there are multiple time-dependent decay rates in the master equation. Interestingly, P- and CP-divisibility as notions of Markovianity coincide for multi-level amplitude damping processes [Chruscinski et al. 2022]. It must be noted that the PDM contains correlations that characterize a strong form of direct quantum cause, while temporal steerable correlations characterize a weaker form. It was noted in [Chen et al. 2016] that the measure based on the TSW is sufficient but not necessary to detect non-Markovianity. Therefore, it remains an open question whether these measures for non-Markovianity vary in their ability to detect weaker and stronger forms of non-Markovianity, as the RHP and BLP measures respectively do. So far, it is clear that the causality based measure of [Utagi 2021] is sufficient, but it is not yet known if it is also necessary as a non-Markovianity indicator. However, these definitions and measures respectively detect and quantify non-Markovianity only of CP- and P-indivisible processes. At the same time, it is known that there exist processes with a colored environmental memory spectrum, hence non-Markovian [21, 20], that are nevertheless CP-divisible. These processes, even though CP-divisible, can delay entanglement sudden death because of the quantum memory effect. It was also noted that when the generator of the dynamics depends on the initial time, it leads to a kind of memory effect in the dynamics at the level of the master equation, even when the dynamics is CP-divisible [20, 14, 15]. An interesting notion of memorylessness (or Markovianity) was proposed in [14], in which a dynamical map is said to be Markovian (more precisely, a semigroup) if the map is independent of the initial time. 
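As a quick numerical illustration of this criterion (a toy example of ours, using an ad hoc pure-dephasing model that is not taken from the works cited above), the sketch below checks whether the two-time map \(\Phi(t_{0}+\Delta,t_{0})\) generated by \(\mathcal{L}(t)[\rho]=\gamma(t)(Z\rho Z-\rho)\) depends on the initial time \(t_{0}\). A constant rate gives an initial-time-independent (semigroup) map, whereas a time-dependent but non-negative rate keeps the dynamics CP-divisible yet makes the map depend on \(t_{0}\).

```python
import numpy as np
from scipy.integrate import quad

def dephasing_map_weight(gamma_fn, t0, t1):
    """Two-time pure-dephasing map Phi(t1, t0) for L(t)[rho] = gamma(t)(Z rho Z - rho).
    The map is Phi[rho] = lam * rho + (1 - lam) * Z rho Z; we return lam."""
    Gamma, _ = quad(gamma_fn, t0, t1)          # integrated dephasing rate
    return 0.5 * (1.0 + np.exp(-2.0 * Gamma))

# Semigroup (time-homogeneous) rate vs. a time-dependent but non-negative (CP-divisible) rate
gamma_const = lambda t: 0.5
gamma_tdep  = lambda t: 0.5 * (1.0 + np.sin(2.0 * t))

dt = 1.0
for name, g in (("constant rate  ", gamma_const), ("time-dep. rate ", gamma_tdep)):
    lam_a = dephasing_map_weight(g, 0.0, dt)          # map of duration dt starting at t0 = 0
    lam_b = dephasing_map_weight(g, 2.0, 2.0 + dt)    # same duration, later initial time
    print(f"{name}: map depends on the initial time? {not np.isclose(lam_a, lam_b)}")
```

A dependence on \(t_{0}\) here signals memory in the sense of this initial-time-independence criterion, even though the underlying dynamics is CP-divisible.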
This notion was termed 'temporal self-similarity', in the sense that the form of the map remains the same throughout the dynamics. This notion is, in fact, commensurate with the time-homogeneity of semigroup evolution. The motivation behind this notion was to find a witness and measure of non-Markovianity for certain kinds of noise, such as Ornstein-Uhlenbeck and power-law noise, which have a colored memory spectrum [20] but are CP-divisible and hence invisible to the RHP measure. The measure proposed in [14], based on the deviation from a semigroup, is as follows. From Eq. (7) one may get the infinitesimal map \[(\delta\Phi)\rho(t)=\mathcal{T}\exp\left(\int_{t}^{t+dt}\mathcal{L}(\tau)d\tau\right)\rho(t)=(1+\mathcal{L}(t)dt)\rho(t). \tag{28}\] From the CJ isomorphism, the Choi state \(\chi_{\Phi(t)}\) of the infinitesimal map (28) is found to be \((d\ket{\psi^{+}}\bra{\psi^{+}}+\chi_{\mathcal{L}(t)}dt)\), where \(\chi_{A}\) denotes the Choi state of the operator \(A\) in question and \(\ket{\psi^{+}}\equiv d^{-1/2}\sum_{i}|i,i\rangle\) is the maximally entangled \(d\)-dimensional state. Some simple algebra gives one the Choi state of the generator. The authors use the time-averaged distance between the Choi states of the time-independent (\(\mathcal{L}\)) and time-dependent (\(\mathcal{L}(t)\)) generators to quantify non-Markovianity, given as \[\mathcal{N}_{\text{SSS}}=\min_{\gamma}\frac{1}{T}\int_{0}^{T}\|\Delta L\|_{1}dt, \tag{29}\] where \(T\) represents some time interval. Here, \(\Delta L\equiv\delta\chi_{\Phi(t)}-\delta\chi_{\Phi}=(\chi_{\mathcal{L}(t)}-\chi_{\mathcal{L}})dt\). The minimization over time-independent \(\mathcal{L}\) leads to the minimization over all possible time-independent decay rates \(\gamma\). A positive \(\mathcal{N}_{\text{SSS}}\) means that a process is non-Markovian even when it is CP-divisible. This feature makes the measure a necessary and sufficient criterion for non-Markovianity. Recently, [14, 15] proposed a definition of non-Markovianity that can detect memory present even in CP-divisible processes [14]. Here, the memory effect is associated with the breakdown of conditional past-future (CPF) independence (or, equivalently, with the existence of CPF correlations), which is calculated using only three consecutive measurements on the system (which suffice) together with post-selection on the outcomes. Since past-future independence was shown in [13] to be equivalent to 'composability', and thence to the semigroup structure of the dynamical maps, one may expect that semigroup dynamics generates statistics that obey the CPF independence of [14, 15]. From this section one understands that any witness that detects non-Markovianity of a CP-divisible process is necessary and sufficient. We return to this point in the Afterword, after Part II. ## III Interlude: The problem of (initial) system-environment correlations As noted before, a master equation, under the Born-Markov approximation, need not exist if the initial S-E correlations are taken into account. Before going into those details, it is pertinent to ask when decoherence actually begins, given that the underlying system evolution is described by a CP map (i.e., assuming no initial S-E correlations). We briefly point to the relevant literature in the next subsection and then move on to the issues surrounding a physically viable description of quantum stochastic processes that does not require the initial S-E state to be separable. 
### S-E correlations, decoherence, and non-Markovianity It is generally understood that decoherence takes place when the system degrees of freedom "entangle" with those of the environment; hence entanglement must be necessary for decoherence [Schlosshauer 2007]. However, this common wisdom might be mistaken, as [Pernice and Strunz 2011] show that this holds only when the system state starts out as a pure state. If the system's initial state is a mixed state, then decoherence may begin well before the system and environment get entangled. This also suggests that classical correlations might suffice for decoherence to take place. Given that correlations, classical or quantum, are responsible for the onset of decoherence, it is interesting to explore the relationship between S-E correlations and non-Markovianity. The earliest notion of quantum non-Markovianity actually goes back to the deviation from the semigroup structure which arises out of the so-called Born-Markov (BM) approximation [Breuer et al. 2016a], which basically means that system and environment are _weakly_ coupled for all times. The connection of S-E correlations with non-Markovianity has attracted the attention of the quantum information community [Devi et al. 2011, Breuer et al. 2016a, de Vega and Alonso 2017, Li et al. 2018]. The S-E joint state may start out as a product state; later, due to strong coupling between the system and environment, there may be certain points in time when the S-E correlations either weaken or even break momentarily, causing the open system dynamics to transition from being Markovian to non-Markovian. It was noted that the S-E correlations decrease when there is information back-flow from the environment to the system [Mazzola et al. 2012]. However, it was also shown by [Pernice et al. 2012] that there need not be any relationship between the decrease in S-E quantum correlations (specifically S-E entanglement) and non-Markovianity. Interestingly, if the environment is classical, there may be maximally non-Markovian evolution without S-E back-action or information flow [Budini 2018a]. When two qubits are independently interacting with a classical random field, there may be revivals of classical correlations, quantum discord, and entanglement between them even when there is no back-action from the environment onto the qubits [Franco et al. 2012]. ### Initial S-E quantum correlations, CP evolution, and non-Markovianity As we have noted, system-environment (S-E) correlations play a central role in open systems. To describe the reduced dynamics of the system via the Lindblad equation (3), the joint S-E state is assumed to be _factorized_, i.e., \(\rho_{S}(0)\otimes\rho_{E}\), at the initial time, and the environment state \(\rho_{E}\) is assumed to be fixed for all times. Now, if the initial S-E state is not a product, the reduced dynamics of the system is argued to be not-CP; see Refs. [Pechukas 1994, Alicki 1995, Pechukas 1995, Shaji and Sudarshan 2005, Rodriguez-Rosario et al. 2008, Schmid et al. 2019] for some early results. However, it is possible to have physically meaningful not-CP maps, provided one knows the domain of validity on which such a map still outputs a positive state [Jordan et al. 2004]. Moreover, Pechukas's assignment map can be made linear by sacrificing either positivity or reasonable consistency [Rodriguez-Rosario et al. 2010]. 
There also have been arguments for and against vanishing quantum discord as being a necessary and sufficient condition for complete positivity [Shabani and Lidar 2009, Brodutch et al. 2013, Sabapathy et al. 2013]. However, it is clearly established that when the initial S-E correlations are classical, the reduced dynamics can always be described by a CP map [Rodriguez-Rosario et al. 2008]. [Buscemi 2014] argued that, if there are no anomalous information back-flows from the environment to the system, it is necessary and sufficient to describe the reduced dynamics of the system by a CPTP map. Interestingly, it was shown earlier that a witness for initial S-E correlations upper bounds the witness of non-Markovianity based on the BLP (or information back-flow) criterion [Rodriguez-Rosario et al. 2012]. This prompts further investigations into the problem of initial S-E correlations and non-Markovianity. Recently, [Strasberg and Esposito 2018] introduced a measure which quantifies non-Markovianity even when the initial S-E state is entangled. A work by [Schmid et al. 2019] argues, from the point of view of quantum causality, why initial system-environment correlations do not imply the failure of complete positivity. [Ringbauer et al. 2015] proposed a method to characterize a superchannel by making measurements on the system alone, even when it is correlated with the environment. A more general result by [Paz-Silva et al. 2019] says that it is still possible to have a family of \(d^{2}\) (or fewer) CP maps describing a \(d\)-dimensional system evolution with initial S-E (quantum) correlations; i.e., by performing local operations on the system, one can derive a set of CP maps that describe the S-E evolution with an initially correlated state. This is still an active area of research where there is no clear consensus about the role of initial S-E (quantum) correlations in open system dynamics where complete positivity is paramount. [Alipour et al. 2020] have put forth a technique that makes use of a correlation parent operator and allows one to write down a master equation with initial correlations within the weak-coupling regime. A technique based on adapted projection operators was introduced by [Trevisan et al. 2021], in which a perturbative method is applied to model a global S-E evolution that incorporates fully general initial correlations. For other recent attempts to accommodate initial S-E correlations into a valid theory of open systems without sacrificing linearity and complete positivity, see Refs. [Paz-Silva et al. 2019] and [Pollock et al. 2018b, Pollock et al. 2018a], which specifically aim at characterizing open system evolution operationally and at allowing a quantum stochastic process to have an appropriate classical limit. We take this up in Section IV. ## IV Part II: Multi-time correlations and non-Markovian processes So far, we have been considering only the two-time-parameter dynamical maps that are related to two-time correlation functions of the environment. In fact, a quantum regression formula can be obtained through two-time maps, which helps us relate its satisfaction to the semigroup evolution of the open system under the initially factorized state assumption [Li et al. 2018]. However, recent interest has grown in revisiting quantum multi-time processes [Lindblad 1979], generalizing the so-called quantum stochastic process [Sudarshan et al. 1961a], now in the light of operational quantum theory. 
### Quantum regression The quantum regression hypothesis (QRH), or quantum regression formula (QRF), must be invoked for the calculation of _multi-time correlation functions_ without knowledge of the environmental degrees of freedom, that is, using only the mean values of operators on the system Hilbert space. However, while non-Markovian evolution _must_ violate the QRH, Markovian evolution (in the sense of CP divisibility) _can_ as well [Guarnieri et al. 2014]. In the weak-coupling and singular-coupling limits, semigroup dynamics obeys the QRH [Swain 1981, Dumcke 1983, Davies 1974, Davies 1976]. It was shown that the Born-Markov (BM) approximation implies no back-action [Swain 1981], which means that the environment does not evolve _due to_ the interaction with the system, and the S-E state remains factorized for all times. It has been shown that CP-divisible dynamics can violate the QRH [Guarnieri et al. 2014]. This suggests that even the RHP or CP-divisibility based criterion of non-Markovianity, like the BLP criterion, is sufficient but not necessary. Therefore, one is tempted to conjecture that violation of the QRH is a necessary and sufficient condition for a quantum stochastic process to be non-Markovian. ### Process tensor A classical stochastic process \((X,t)\) is a collection of joint probability distributions of a system's state: \[P(X_{j},t_{j};X_{j-1},t_{j-1};\cdots;X_{1},t_{1};X_{0},t_{0})\ \ \forall j\in N, \tag{30}\] which must satisfy the Kolmogorov consistency conditions, where \(X\) is the random variable defining the process and \(t_{j}\) are the time instances at which the outcomes \(X_{j}\) occur with probabilities \(P(X_{j},t_{j})\). Then, a Markov process or chain satisfies the following condition: \[P(X_{j},t_{j}|X_{j-1},t_{j-1};\cdots;X_{1},t_{1};X_{0},t_{0})=P(X_{j},t_{j}|X_{ j-1},t_{j-1})\quad\forall j\in N, \tag{31}\] where \(P(A|B)\) denotes the probability of obtaining \(A\) given \(B\). It is not straightforward to define a similar joint probability distribution in the quantum domain. What is the most general way one may represent a physical process which also has an operational meaning? The quantum combs formalism is one such method [Chiribella et al. 2009, Pollock et al. 2018b]. As opposed to the traditional approach discussed in Part I of this review, where only two-time correlations are considered to define quantum non-Markovianity, the process tensor formalism defines non-Markovianity based on the presence of temporal correlations in a multi-time quantum stochastic process [Pollock et al. 2018b, Pollock et al. 2018a]. Such descriptions of open systems might have specific implications for designing information processing tasks in the laboratory, where two-time correlations might not capture the full characteristics of an underlying process. A quantum process is characterized over \(j\) steps, with \(0\leq j\leq N\), such that the system's state can be predicted at any instant \(j\). The system is subject to intermediary operations \(A\), which may be interrogations, manipulations, unitaries, or CP maps in general; let \(\{A_{j}\}\) and \(\{M_{j}\}\) be the sets of local operations and measurements, respectively, on the system. The 'process tensor' \(T_{j:0}\) is a mapping from the sequence of operations [see Figure (2)] \[\mathbf{A}_{j-1:0}:=\{A_{j-1};A_{j-2};\cdots A_{1};A_{0}\} \tag{32}\] to the state \(\rho_{j}\) \[\rho_{j}=T_{j:0}[\mathbf{A}_{j-1:0}]. 
\tag{33}\] In general, \(T_{j:0}\) satisfies (i) linearity: \(T[a\mathbf{A}+b\mathbf{B}]=aT[\mathbf{A}]+bT[\mathbf{B}]\); (ii) complete positivity, i.e., it is a positive map on an extended space: \((T\otimes I)[\mathbf{A}_{SA}]=\rho_{SA}\geq 0\); (iii) containment: if \(j\geq j^{\prime}\geq k^{\prime}\geq k\), then \(T_{j^{\prime}:k^{\prime}}\) is contained in \(T_{j:k}\), in the sense that a process tensor on fewer times is obtained not by summing over the excess times but by appending identity maps at those times. The process tensor can be used to describe open quantum system dynamics. Let \(\mathcal{U}_{j:j-1}\) be the S-E unitary channel that acts on the S-E space as \(\mathcal{U}_{k:l}[\rho_{l}^{SE}]=U_{k:l}\rho_{l}^{SE}U_{k:l}^{\dagger}=\rho_{k }^{SE}\), with \(U_{k:l}U_{k:l}^{\dagger}=I\), and let the system state be given by tracing over the environment, \(\rho_{j}=\mathrm{Tr}_{E}[\rho_{j}^{SE}]\); then the total dynamics follows: \[\rho_{j}^{SE}=\mathcal{U}_{j:j-1}A_{j-1}\mathcal{U}_{j-1:j-2}\cdots A_{1} \mathcal{U}_{1:0}A_{0}[\rho_{0}^{SE}], \tag{34}\] where \(\rho_{0}^{SE}\) is the initial S-E state. Note that \(T_{j:0}\) itself can be given a Kraus decomposition [Pollock et al. 2018a]. In an appropriate limit, the process tensor reduces to the conventional two-time maps picture of open system evolution: \[\rho_{j}=\mathrm{Tr}_{E}\big{(}U_{j:0}\rho_{0}^{S}\otimes\rho_{0}^{E}U_{j:0}^{ \dagger}\big{)}=\Phi_{j:0}[\rho_{0}], \tag{35}\] where \(\Phi_{j:0}\) is a CPTP map. Therefore, \(\rho_{j}\) can be obtained from the process tensor by applying identities as the intermediate control operations on the system: \[\rho_{j}=T_{j:0}[I;I;\cdots I;A_{0}]. \tag{36}\] The advantage of using the process tensor framework is that one may map temporal correlations in the process to a many-body entangled state. A \(j\)-step process can be mapped to a many-body state via a generalized CJ isomorphism: \[T_{j:0}[\mathbf{A}_{j-1:0}]=\mathrm{Tr}_{S}[\xi_{j:0}(I_{S}\otimes A_{j-1} \otimes I\otimes\cdots\otimes A_{0}\otimes I[(\Psi^{+})^{\otimes j-1}])], \tag{37}\] where the partial trace is over all subsystems except the one corresponding to the output of \(T_{j:0}\), and \(\Psi^{+}\) is a maximally entangled bipartite density operator. In other words, the action of the process tensor \(T_{j:0}\) on the sequence of operations \(\mathbf{A}_{j-1:0}\) is equivalent to projecting the Choi state \(\xi_{j:0}\) onto the Choi state of \(\mathbf{A}_{j-1:0}\). Here, \(\xi_{j:0}\) is called the generalized Choi state of the \(j\)-step process; it is a \((2j+1)\)-body state which has a matrix-product-operator representation [Perez-Garcia et al. 2007] with a bond dimension bounded by the effective dimension of the environment [Pollock et al. 2018a]. Given the above framework, we are in a position to discuss a definition of quantum (non-)Markovianity from an operational point of view. Let us denote the system state at time step \(i\) as a function of the control operations: \(\rho_{i}=\rho_{i}(\mathbf{A}_{i-1:0})\). Suppose that at step \(j\) a measurement \(M_{j}^{(r)}\) with outcome \(r\) is performed on the system; after the measurement, the system is re-prepared in a state \(P_{j}^{(s)}\), selected randomly out of a set \(\{P_{j}^{(s)}\}\). The procedure of measuring and re-preparing the system introduces a 'causal break' between the past \(k\leq j\) and future \(i>j\) [See Fig.(2)]. 
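The reduction expressed in Eqs. (34)-(36) can be mimicked numerically by brute force. The sketch below is our own illustration (the S-E unitaries are random placeholders, not anything taken from [Pollock et al. 2018a]): it propagates a qubit system coupled to a single-qubit environment according to Eq. (34), with unitary control operations acting on the system alone, and shows that the reduced output state is a well-defined function of the control sequence, i.e., the dilated simulation realizes the action of a two-step process tensor.

```python
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition (placeholder S-E coupling)."""
    M = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    Q, R = np.linalg.qr(M)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix phases to sample the Haar measure

rng = np.random.default_rng(7)
I2 = np.eye(2, dtype=complex)

# Fixed S-E unitaries U_{1:0} and U_{2:1} for a qubit system + qubit environment, cf. Eq. (34)
U10, U21 = haar_unitary(4, rng), haar_unitary(4, rng)
rho0_SE = np.kron(np.diag([1.0, 0.0]).astype(complex), I2 / 2)   # product initial S-E state

def process_tensor(controls):
    """rho_2 = T_{2:0}[A_1; A_0] for unitary control operations on the system only,
    obtained by brute-force dilated simulation, cf. Eqs. (33)-(34)."""
    rho = rho0_SE
    for A, U in zip(controls, (U10, U21)):
        A_SE = np.kron(A, I2)                       # control acts on the system alone
        rho = U @ (A_SE @ rho @ A_SE.conj().T) @ U.conj().T
    # Tr_E: the system is the first tensor factor
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# With identity controls the process tensor reduces to a two-time CPTP map, cf. Eqs. (35)-(36)
rho2_free = process_tensor([I2, I2])

# A different intermediate control changes the reduced output, so T_{2:0} is a nontrivial
# (multilinear) function of the control sequence
X = np.array([[0, 1], [1, 0]], dtype=complex)
rho2_kicked = process_tensor([I2, X])

print("Tr rho_2 =", np.trace(rho2_free).real)
print("outputs differ for different controls:", not np.allclose(rho2_free, rho2_kicked))
```

Replacing the unitary controls with general CP maps, and adding a measure-and-reprepare step at a chosen time step, would implement the causal break discussed above.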
Similar to the classical definition, the Markov condition in the quantum regime reads \[\rho_{i}\big{(}P_{j}^{(s)}|M_{j}^{(r)};\mathbf{A}_{j-1:0}\big{)}=\rho_{i} \big{(}P_{j}^{(s)}\big{)}\quad\forall\{P_{j}^{(s)},M_{j}^{(r)},\mathbf{A}_{j- 1:0}\}\quad\text{and}\quad\forall\;i,j\in\{0,N\}. \tag{38}\] Figure 2: Schematic representation of process tensor framework with memory. \(A_{j}\) are the control operations and \(M_{j}^{(r)}\) and \(P_{j}^{(s)}\) are the measurements and re-prepared states, respectively. On the contrary, a quantum process is non-Markovian iff there exist at least two distinct, independent operation sets \(\{M_{j}^{(r)};\mathbf{A}_{j-1:0}\}\) and \(\{M_{j}^{\prime(r^{\prime})};\mathbf{A}_{j-1:0}^{\prime}\}\), such that the resulting two conditional states are different: \[\rho_{i}\big{(}P_{j}^{(s)}|M_{j}^{(r)};\mathbf{A}_{j-1:0}\big{)}\neq\rho_{i} \big{(}P_{j}^{(s)}|M_{j}^{\prime(r^{\prime})};\mathbf{A}_{j-1:0}^{\prime}\big{)}. \tag{39}\] The system itself cannot carry the information into the future across the causal breaks. An environment and the S-E correlations carry the information about the initial state of the system across causal breaks [See Figure (2)], and this is what is called quantum non-Markovian memory in the process tensor. In order to quantify the memory in the process, one makes use of the mapping from temporal correlated process tensor to a many-body state via generalized CJ-isomorphism. Given the intermediate maps \(\Phi_{j:j-1}\) that take a state from time step \(j-1\) to \(j\), a Markov process is fully characterized by its Choi state on the tensor product of initial system state and Choi states of independent CPTP two-time maps: \[\xi_{j:0}^{\text{Markov}} =\chi_{j:j-1}\otimes\chi_{j-1:j-2}\otimes\cdots\otimes\chi_{1:0} \otimes\rho_{0}\] \[=\bigotimes_{j=1}^{N}\chi_{j:j-1}\otimes\rho_{0}, \tag{40}\] where \(\rho_{0}\) is the average initial state of the process. In other words, the process is said to be a Markov chain if and only if the process tensor is a product state across time. This allows one to make use of a quasi-distance based measure of non-Markovianity: \[\mathcal{N}=\min_{\xi_{j:0}^{\text{Markov}}}D\big{(}\xi_{j:0}\|\xi_{j:0}^{ \text{Markov}}\big{)}, \tag{41}\] which can be interpreted as the minimum distance from the closest Markov process, where \(D\) could be any \(CP\)-contractive2 quasi-distance such as quantum relative entropy \(D(\rho\|\sigma)=Tr[\rho\log\rho-\rho\log\sigma]\)[White et al. 2021]. Footnote 2: Here, contractivity means that a \(CP\)-contractive distance must satisfy the data processing inequality under a Markov CPTP map \(\Phi\): \(D(\Phi[\rho]\|\Phi[\sigma])\leq D(\rho\|\sigma)\) Here, some important remarks are in order. The definition in Eq. (41) is a necessary and sufficient condition for a process to be called non-Markovian, however the converse may not be true. For example, [Milz et al.2019] recently proposed a notion of 'operational divisibility' which captures memory effect present in a CP-divisible process. Process tensor is most general enough to capture all the notions of non-Markovianity under certain limits, for example it incorporates BLP and RHP based witnesses assuming that all intermediary control operations are identity. It circumvents problems of conventional two-time maps approach when initial S-E correlations are present; it allows for completely positive, linear dynamics reconstructed from measurement data via quantum process tomography [see Ref. 
[Pollock et al. 2018a] for proofs and further details]. The process tensor also leads to a definition of classical memory when the choice of instruments as well as the causal breaks are fixed. It provides a clear operational meaning to addressing the questions of open system evolution by separating the experimenter from the underlying, inaccessible process, making it a suitable framework for handling information processing tasks in the laboratory. One such situation, where one wants to remove certain unwanted memory effects arising from cross-talk, was recently studied in detail by [White et al. 2022] using the process tensor framework. Quantum combs, in fact, provide a unified framework to describe quantum channels with classical and quantum memory. Therefore, it is pertinent to ask how to distinguish such memory effects. Interestingly, [Giarmatzi and Costa 2021] used the process matrix framework, proposed by [Oreshkov et al. 2012], to 'witness' _genuinely_ quantum memory. It is interesting to note that the process tensor can be used to identify genuinely quantum memory effects in an arbitrary process. Considering the fact that a non-Markovian process deviates from the product of marginals given in Eq. (40), a positive value of the entanglement negativity of the Choi state \(\xi_{j:0}\), given by \(\max\limits_{\tau_{B}}\!\frac{1}{2}[\|\xi_{j:0}^{\tau_{B}}\|_{1}-1]\), gives a measure of the 'quantumness' of non-Markovian memory, where \(\tau_{B}\) denotes the partial transpose over some bipartition, which could be an interval between any two time steps [White et al. 2021]. ### Conditional past-future correlations The notion of past-future independence as a definition of Markovianity was used in [Li et al. 2018] to analyse a hierarchy in the definition of quantum Markovianity. Recently, [Budini 2018b] proposed a definition of non-Markovianity based on the violation of _conditional_ past-future (CPF) independence. Similar to the process tensor formalism, CPF independence is probed via a "causal break" in the process, introduced by intermediate measurements. Hence, it is claimed that the definition via CPF independence has an operational meaning in the traditional approach [Budini 2022]. In a classical stochastic process, measuring a system at three successive instances \(t_{a}<t_{b}<t_{c}\) yields outcomes \(a\to b\to c\). A Markov process gives rise to a factorized joint probability conditioned on the immediate past outcomes: \[P(a,b,c)=P(c|b)P(b|a)P(a), \tag{42}\] where \(P(a)\) is the probability that the outcome \(a\) occurs, and \(P(b|a)\) is the probability of \(b\) occurring given that \(a\) has been learned. Bayes' rule allows us to formulate the criterion for Markovianity: given a fixed intermediate state, the future outcomes become statistically independent from the past ones. So the conditional probability of a future event \(c\) and a past event \(a\), given the present \(b\), is \[P(c,a|b)=P(c|b)P(a|b). \tag{43}\] This in fact can be quantified via the correlation function [Budini 2018b] \[C_{pf}\equiv\langle O_{c}O_{a}\rangle_{b}-\langle O_{c}\rangle_{b} \langle O_{a}\rangle_{b}\, \tag{44}\] where the operator \(O\) represents a specific system property one would want to measure for each system state. Given that, we can write \(C_{pf}\) as \[C_{pf}=\sum_{ca}[P(c,a|b)-P(c|b)P(a|b)]O_{c}O_{a}. 
\tag{45}\] Here, the sum is over all possible outcomes \(c\in\{c_{1},c_{2},...\}\) and \(a\in\{a_{1},a_{2},...\}\) that occur at \(t_{c}\in\{t_{c_{1}},t_{c_{2}},...\}\) and \(t_{a}\in\{t_{a_{1}},t_{a_{2}},...\}\), respectively, with a given, fixed value of \(b\in\{b_{1},b_{2},...\}\). Generally, a quantum Markov process satisfies Eq. (43) in the quantum setting as well, which yields \(C_{pf}=0\) and provides a straightforward generalization of the classical definition to the quantum domain. Calculating the CPF correlation measure for a quantum non-Markovian process boils down to finding the predictive and retrodictive probabilities for given system operators and substituting them in Eq. (45), which we discuss below. Let \(M_{a}\), \(M_{b}\), and \(M_{c}\) be the measurement operators successively performed on the system at \(t_{a}\), \(t_{b}\), and \(t_{c}\), respectively, with the condition that \(\sum_{j}M_{j}^{\dagger}M_{j}=I\), where \(j\in\{a,b,c\}\). Given that \(a\) is in the past of \(b\), the conditional probability \(P(a|b)\) is a retrodicted quantum probability. In terms of the measurement operators \(M_{a}\) and the past quantum state \(\rho\equiv(\rho_{0},E_{b})\), it can be written (up to normalization) as \(P(a|b)\propto\mathrm{Tr}[E_{b}M_{a}\rho_{0}M_{a}^{\dagger}]\), where \(\rho_{0}\) is the initial density matrix and \(E_{b}=M_{b}^{\dagger}M_{b}\) is the 'effect' operator. On the other hand, \(P(c|b,a)\) is the standard predictive probability. Therefore, substituting these conditional probabilities in the LHS of Eq. (43), we have \[P(c,a|b)=\mathrm{Tr}(E_{c}\rho_{b})\cdot\frac{\mathrm{Tr}(E_{b}M_{a}\rho_{0}M_{ a}^{\dagger})}{\sum_{a^{\prime}}\mathrm{Tr}(E_{b}M_{a^{\prime}}\rho_{0}M_{a^{ \prime}}^{\dagger})}. \tag{46}\] Assuming that the system evolves under the action of the environment, one may adopt the two-time dynamical (CPTP) map between two successive instances. Then the conditional probabilities will vary according to the intermediate evolution between measurements as: \[P(c,a|b)=\mathrm{Tr}(E_{c}\Phi^{\prime}[\rho_{b}])\cdot\frac{\mathrm{Tr}(E_{b} \Phi[M_{a}\rho_{0}M_{a}^{\dagger}])}{\sum_{a^{\prime}}\mathrm{Tr}(E_{b}\Phi[M_ {a^{\prime}}\rho_{0}M_{a^{\prime}}^{\dagger}])}, \tag{47}\] where \(\Phi=\Phi(t_{b},t_{a})\) and \(\Phi^{\prime}=\Phi^{\prime}(t_{c},t_{b})\), with \(\Phi_{t}=\exp\{t\mathcal{L}\}\), where \(\mathcal{L}\) is the generator of the semigroup dynamics. In general, the interaction with the environment may generate dynamics that is not a semigroup and may even be non-Markovian. In that case, the dynamical map \(\Phi(t,t_{0})\) between two successive instances will follow Eq. (7). The CPF correlation measure has some interesting properties. For a non-Markovian process, either \(C_{pf}<0\) or \(C_{pf}>0\), and the process is Markovian iff \(C_{pf}=0\). Since this criterion witnesses memory even in CP-divisible processes, it may be termed a necessary and sufficient condition for non-Markovianity. The reader is referred to [Budini 2018b] for other properties. ## V Quantum non-Markovianity in experiments It is now well-acknowledged that simulating open quantum systems is important for many technological applications. Memory effects could prove advantageous or disadvantageous for quantum information processing, depending on the task at hand. Therefore, it is imperative to discuss and understand aspects of simulating quantum non-Markovianity in experimental setups. 
In this section, we present a brief survey of experimental realizations of open system dynamics, with particular focus on experiments that also study signatures of non-Markovianity. One of the robust methods of simulating open system dynamics is via optical setups [Salles et al. 2008, Rossi et al. 2017, Cialdi et al. 2017, Cuevas et al. 2019]. These setups mimic the effect of an environment on a quantum system and are seen to be effective in experimentally realizing a quantum channel. In [Chiuri et al. 2012], non-Markovian dynamics of a qubit attached to an ancilla and a simulated environment (an Ising chain, for instance) is experimentally implemented and the importance of strong S-E correlations in the emergence of non-Markovianity is highlighted. A similar setup is also used in [Liu et al. 2018], where a simulated Ising chain in a transverse field is used as the environment to study arbitrary dephasing dynamics of a photonic qubit. Similar setups to simulate quantum channels have been used to understand the transition from Markovian to non-Markovian dynamics by controlling the S-E coupling [Liu et al. 2011, Chiuri et al. 2012, Tang et al. 2012, Fisher et al. 2012, Lyyra et al. 2022]. We have earlier noted that an open system (S) coupled to an ancilla (A), while undergoing non-Markovian evolution establishes quantum correlations with A that vary non-monotonously in time. In [Wu et al. 2020], by coupling polarization degree (system) with the frequency degree (environment), it was experimentally demonstrated that quantum-incoherent relative entropy of coherence (QI-REC) is commensurate with the S-A entanglement, thus establishing a relationship between QI-REC and information back-flow from environment to the system. Interestingly, it is shown via experiments [Farias et al. 2012] that when a part of a Bell pair interacts with an environment, the dynamics might lead to genuine multipartite entanglement between all the environment degrees of freedom and the initial Bell pair. When there are initial correlations in a composite environment, an open _bipartite_ system interacting with it might have locally Markovian evolution while globally it may show strong nonlocal memory effects [Laine et al. 2012] and this counter-intuitive effect has been realized experimentally in [Liu et al. 2013a]. Quantum non-Markovianity may also arise when two Markovian channels are convex combined and [Uriri et al. 2020] have experimentally confirmed how a convex mixture of two Pauli semigroups may result in a CP-indivisible quantum channel. In [Fanchini et al. 2014], the polarization degree of freedom is taken to be the two-level open system and a Sagnac interferometer is used to realize non-Markovian amplitude damping of photon polarization. All of the above works in the optical domain use interferometric setup to simulate decoherence of quantum systems. In Fig.(3) we give an schematic example of such a setup. Photonic simulation of quantum channels may find certain unique applications in quantum information tasks. For example, [Utagi et al. 2020a] showed that by deliberately adding amplitude damping quantum noise on the polarization degree of freedom (via optical simulation) in ping-pong protocol proposed by [Bostrom and Felbinger 2002] one could improve security against an attack due to [Wojcik 2003, Bostrom and Felbinger 2008]. It is known that squeezing is a resource for continuous variable quantum information processing. In [Xiong et al. 
2018] it is shown that a cavity-optomechanical system interacting with a non-Markovian environment can lead to enhanced squeezing of the mechanical mode. Therefore, exploring optical implementations of non-Markovian quantum channels may reveal similar counter-intuitive benefits in quantum information tasks. Although we have focused mainly on optical setups in this section, other platforms such as NMR [Bernardes et al. 2015], trapped ions [Wittener et al. 2018] and multi-qubit superconducting devices [White et al. 2020] have also been used to implement non-Markovian open system dynamics. Figure 3: Optical simulation of an amplitude damping channel using a Sagnac interferometer, which we have adapted from Ref. [Salles et al. 2008]. Here, PBS is the polarizing beam splitter, \(\text{HWP}(\theta_{V})\) (the black rectangular slabs) is a half-wave plate which rotates the vertical polarization by an angle \(\theta\), and \(\text{HWP}(\theta_{H})\) does the same to the horizontal polarization. \(\text{PP}(\phi)\) is a phase plate. The white rectangular slabs are perfectly reflecting mirrors. In the above setup, setting \(\phi=0\) one realizes an amplitude damping channel for arbitrary \(\theta_{V}\). Note that the symbols \(0\) and \(1\) are labels for optical modes. For different values of \(\theta_{H}\), \(\theta_{V}\), \(\theta_{1}\) and \(\phi\), other prototypical quantum channels can be realized [see Table II in [Salles et al. 2008]]. The importance of such experimental characterizations of non-Markovianity is further highlighted by recent works that show how unwanted memory effects that creep in due to cross-talk between superconducting qubits in quantum computers can be removed [Gambetta et al. 2012, White et al. 2022]; however, mitigating errors due to non-Markovian memory effects arising from an uncontrollable environment can prove to be significantly harder. ## VI Afterword: Summary and Future Outlook Traditionally, the dynamics of an open system is described by either a master equation or a two-time dynamical map [Breuer and Petruccione 2002, Banerjee 2018]. Open system evolution may be categorized mainly as Markovian or non-Markovian [Rivas et al. 2014]. A precise and universal definition of non-Markovianity has remained elusive [Li et al. 2018], and there had been no known way of translating the classical definitions to the quantum domain until the recent works of [Pollock et al. 2018a, Pollock et al. 2018b, Budini 2018b]. We have reviewed some recent developments in the field, followed by brief accounts of traditional approaches to characterizing and quantifying quantum non-Markovianity. Quantum causality has been a long-standing puzzle within quantum theory [Brukner 2014, Costa 2022, Vilasini and Renner 2022]. It is interesting to note that quantum causality and non-Markovianity have been shown to be intimately connected via quantum temporal correlations. Notably, [Milz et al. 2018] showed that a causally non-separable process with a tripartite initial entangled state can simulate a multi-time non-Markovian process. In this review, we have studied how one can quantify non-Markovianity using temporal quantum correlations such as temporal steering [Chen et al. 2016] and causal correlations in the PDM [Utagi 2021]. Note, however, that these correlations are between the states _across_ time. Figure 4: Containment of non-Markovianity criteria. Here, box 1 sufficiently implies 2 and 3, and 2 sufficiently implies 3, but 3 doesn't necessarily imply 2 and 1, and 2 doesn't necessarily imply 1. 
Now, it will be an interesting study to understand if PDM can offer a multi-time characterization of correlations in the process, at least for the case of qubits. In fact, [Zhang et al. 2020] have shown that there are three different mappings from PDM to a process matrix, and since process matrix [Oreshkov et al. 2012, Costa and Shrapnel 2016] and process tensor [Pollock et al. 2018a] essentially arise from quantum combs [Chiribella et al. 2009], it would be an interesting future problem to find mappings from multi-time PDM to the process tensor, if any. If one studies open systems within the paradigm of two-time dynamical maps, one runs into the problem of defining a physically valid dynamical map that is both completely positive and linear when initial S-E quantum correlations are present. Pechukas's theorem states that in order to write down such a map one has to give up either complete positivity or linearity [Pechukas 1994, Alicki 1995, Pechukas 1995]. Later, the debate continued with regards to the nature of initial S-E correlations, for example that of quantum discord, and whether vanishing discord provided necessary and sufficient condition. However, recently some approaches have been proposed to describe dynamics of open system with initial S-E correlations in a consistent manner, some of which we have mentioned in Section III. Quantum combs [Chiribella et al. 2009] framework talks about the temporal correlations between observables corresponding to the dynamical process, by mapping a process to a state via CJ isomorphism. Particularly in process tensor framework [Pollock et al. 2018a], temporal correlations in the multi-time description of a process corresponds to memory (or, non-Markovianity) and the framework offers incorporation of (unknown) initial S-E correlation without sacrificing linearity and complete positivity of the map. Moreover, it offers an operational definition of (non-)Markovianity via quantum process tomography, which has an appropriate classical limit. It also offers a solution to the problem of necessary and sufficient condition for non-Markovianity. Figure (4) depicts the containment of non-Markovian processes according to various criteria, particularly with regards to necessary and/or sufficient condition for these criteria to witness non-Markovianity. The processes that are non-Markovian according to BLP are non-Markovian according all other criteria, hence such processes are _strongly_ non-Markovian. However, if BLP identifies a process as Markovian, it may still be non-Markovian according to RHP criterion hence also according to CPF correlation measure and process tensor measure. However, there may be processes that are non-Markovian according CPF correlations and process tensor, but are Markovian according to RHP. Therefore, one may conclude that Box 1 criteria are sufficient and not necessary relative to Box 2 and 3. Recent developments [Pollock et al. 2018b, Budini 2018b, Milz et al. 2019, Utagi et al. 2020b] have shown that RHP criterion is also only sufficient but not necessary for detecting non-Markovianity relative to Box 3. Thus, one may conclude that Box 3 are necessary and sufficient criteria for quantum non-Markovianity. Interestingly, it is known that the measure based on TSW detects the range of non-Markovianity that BLP does, however it is yet to be found where the TSW measure (21) and the causality based measure (27) fit in the above containment boxes. 
From the perspective of quantum information theory, it is possible that quantum non-Markovianity may be useful in certain specific situations. In particular, since quantum non-Markovianity brings information that is "lost" to the environment back to the system for certain time-intervals of the evolution, it might prove advantageous in certain quantum information processing tasks [Bylicka et al. 2014, Laine et al. 2014, Utagi et al. 2020a]. On the other hand, witnessing and characterizing the extent of non-Markovianity is essential in obtaining a complete understanding of noise in quantum systems. Indeed, one of the biggest challenges in scaling up quantum technologies today is to protect quantum information from environment-induced decoherence. The theory of quantum error correction (QEC) [Nielsen and Chuang 2010] provides the means to protect information from noise, by appending a large number of physical qubits to create a single, protected logical qubit. Standard works on QEC have heavily focused on Markovian noise, and barring a couple of works [22, 23], the role of QEC in mitigating noise in the non-Markovian regime remains largely unexplored. Going beyond error correction, the question of achieving quantum fault tolerance in the presence of non-Markovian quantum noise has also been explored in the past [1, 10]. Recently, there have been attempts to extend the theory of noise-adapted QEC [24, 25] to non-Markovian noise models [12, 26, 27]. Going forward, characterizing non-Markovianity in near-term quantum devices and developing QEC protocols adapted to non-Markovian noise promises to be an important and fruitful research avenue. In this contribution, we have attempted to put in perspective some of the recent developments in defining and measuring quantum non-Markovianity. It will be interesting to see how various frameworks of open system dynamics and definitions of quantum non-Markovianity allow for their uses in very specific cases of quantum information processing. ### Conflict of Interest Statement The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. ### Author Contributions US was the primary resource person for this article and the primary contributor. PM contributed to the writing and structural organization of this review. ### Funding US thanks IIT Madras for the support through the Institute Postdoctoral Fellowship. This work was partially funded under Grant No. DST/ICPS/QuST/Theme-3/2019/Q59 from the Department of Science and Technology, Gov. of India and a grant from the Mphasis Foundation to the Centre for Quantum Information, Communication, and Computing (MCQuICC), IIT Madras. ###### Acknowledgements. US thanks Simon Milz for insightful discussions.
2308.08048
Tunable topological magnon-polaron states and anomalous Hall phenomena in two-dimensional ferromagnetic insulators
We study magnon-polaron hybrid states, mediated by Dzyaloshinskii-Moriya and magnetoelastic interactions, in a two-dimensional ferromagnetic insulator. The magnetic system consists of both in-plane and flexural acoustic and optical phonon bands, as well as acoustic and optical magnon bands. Through manipulation of the ground-state magnetization direction using a magnetic field, we demonstrate the tunability of Chern numbers and (spin) Berry curvatures of magnon-polaron hybrid bands. This adjustment subsequently modifies two anomalous Hall responses of the system, namely, thermal Hall and spin Nernst signals. Notably, we find that by changing the magnetic field direction in particular directions, it is possible to completely suppress the thermal Hall signal while maintaining a finite spin Nernst signal. Our finding reveals the intricate interplay between topological and quantum geometrical phenomena and magnetic ordering, offering compelling avenues for on-demand control over emergent quantum states in condensed matter systems.
Jostein N. Kløgetvedt, Alireza Qaiumzadeh
2023-08-15T21:21:13Z
http://arxiv.org/abs/2308.08048v2
Tunable topological magnon-polaron states and anomalous Hall phenomena in two-dimensional ferromagnetic insulators ###### Abstract We study magnon-polaron hybrid states, mediated by Dzyaloshinskii-Moriya and magnetoelastic interactions, in a two-dimensional ferromagnetic insulator. The magnetic system consists of both in-plane and flexural acoustic and optical phonon bands, as well as acoustic and optical magnon bands. Through manipulation of the ground-state magnetization direction using a magnetic field, we demonstrate the tunability of Chern numbers and (spin) Berry curvatures of magnon-polaron hybrid bands. This adjustment subsequently modifies two anomalous Hall responses of the system, namely, thermal Hall and spin Nernset signals. Notably, we find that by changing the magnetic field direction in particular directions, it is possible to completely suppress the thermal Hall signal while maintaining a finite spin Nernst signal. Our finding reveals the intricate interplay between topological and quantum geometrical phenomena and magnetic ordering, offering compelling avenues for on-demand control over emergent quantum states in condensed matter systems. ## I Introduction Magnon-polarons are emergent quasiparticles arising from hybrid states between magnons, quanta of collective spin excitations, and phonons, quanta of lattice vibrations [1]. Coherent coupling between magnons and phonons modifies the thermodynamic responses of the material. By studying magnon-polaron quasiparticles, we may gain insight into the interaction strength and nature of the spin-phonon coupling, which in turn can reveal information about the ground state of the system. On the other hand, these emerging hybrid modes, with typical sub-nanometer wavelengths, may exhibit coherent spin angular momentum transport over long distances [2; 3], and can also manifest non-trivial topological properties [4]. These features make magnon-polaron hybrid modes promising for the realization of functional hybrid quantum systems [5] with potential applications in low-power and high-speed spintronic nanodevices, compact topological devices, hybrid quantum systems and quantum information technology. Therefore, tunability of coherent coupling between magnons and phonons is an essential prerequisite for their application in modern quantum technology. Recently discovered two-dimensional (2D) ferromagnetic (FM) and antiferromagnetic (AFM) materials [6; 7; 8; 9; 10] are ideal platforms and testbeds for investigation of these emerging quasiparticles. Theoretical and experimental studies in 2D FM [4; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] and AFM [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33] systems have shown that the interaction between magnons and phonons can produce nontrivial topological bands from trivial magnon and phonon bands, or strengthen the already existing topological bands. This phenomenon arises from the emergence of Berry curvatures, crucial for revealing nontrivial topological bands, which are generated in level repulsion hotspots where magnon and phonon branches intersect in the presence of interactions [4]. There are various mechanisms for coherent magnon-phonon (m-ph) coupling in collinear and noncollinear magnetic insulators, such as dipolar interactions [4], spin-orbit interactions, including Dzyaloshinskii-Moriya (DM) interactions and magnetoelastic couplings [1; 15; 34; 35; 36], Heisenberg exchange interactions [25], and a magnetic field gradient [37]. 
The fingerprint of nontrivial topology and finite Berry curvatures in the ground-state and/or excitation properties of a system can be intercepted in Hall-type quantum transport phenomena, such as thermal Hall and spin Nernst effects. Tuning Chern numbers, Berry curvatures, and quantum transports are highly demanded for application in emerging quantum technology. Recently, it was shown theoretically that an effective magnetic field or magnetic anisotropy can tune Chern numbers and the thermal Hall effect of magnon-polarons, generated via magnetoelastic interactions, in FM systems by changing the number of band-crossing lines [16; 21]. In Ref. [38], the authors studied how varying the direction of a magnetic field may change the topological structure and thermal Hall response of magnon-acoustic phonon hybrid modes in a simple square lattice with the Kittel-type magnetoelastic hybridization mechanism [34; 39]. In this paper, we demonstrate the influence of tuning the direction of an external magnetic field on the topological properties of magnon-polaron hybrid bands in a FM system with and without broken inversion symmetry. We examine the intricate hybridization of magnons with both in-plane (IP) and out-of-plane (OOP) or flexural modes of acoustic and optical phonons in the presence of two types of spin-orbit mediated hybridization mechanisms: generalized DM interactions and the Kittel-type magnetoelastic interactions. We explore the fingerprint of these topological properties through their influence on thermal Hall and spin Nernst effects. Focusing only on the magnon-polaron contribution, we show that by tuning the ground-state magnetization direction, we can turn off the thermal Hall signal while the spin Nernst signal remains finite. This behavior of magnon-polaron hybrid modes in our FM system, resembles pure magnonic counterpart effects in collinear AFM hexagonal lattices. In such systems, the thermal Hall effect is forbidden by symmetry, whereas the spin Nernst Hall effect remains finite [40; 41]. The rest of the paper is organized as follows. We first introduce our effective model Hamiltonian with different m-ph coupling mechanisms in Sec. II. In Sec. III, we present the band structure of magnon-polaron modes in the presence of various m-ph coupling mechanisms. Chern numbers of different magnon-polaron hybrid bands are computed in Sec. IV. Thermal Hall and spin Nernst conductivities are computed in Sec. V. We then summarize our findings in Sec. VI and discuss their implications. ## II Model We consider an FM spin Hamiltonian \(\mathcal{H}=\mathcal{H}_{\rm m}+\mathcal{H}_{\rm ph}+\mathcal{H}_{\rm m-ph}\), consisting of the Hamiltonian of localized spins \(\mathcal{H}_{\rm m}\), phonons \(\mathcal{H}_{\rm ph}\), and the m-ph interaction \(\mathcal{H}_{\rm m-ph}\). Without loss of generality, we consider a hexagonal lattice structure. 
### Spin Hamiltonian The spin Hamiltonian reads [42], \[\mathcal{H}_{\rm m}= -J\sum_{\langle i,j\rangle}\mathbf{S}_{i}\cdot\mathbf{S}_{j}-\Lambda\sum_{\langle i,j\rangle}(\mathbf{S}_{i}\cdot\mathbf{S}_{j})^{2}-K_{z}\sum_{i}\mathcal{S}_{iz}^{2}\] \[-\sum_{i,j}\mathbf{D}_{ij}\cdot[\mathbf{S}_{i}\times\mathbf{S}_{j}]-\sum_{i}\mathbf{S}_{i}\cdot\mathbf{h}, \tag{1}\] where \(\mathbf{S}_{i}\) is the vector spin operator at site \(i\) with amplitude \(|\mathbf{S}_{i}|=S\), \(J>0\) and \(\Lambda>0\) parameterize the isotropic bilinear FM Heisenberg and biquadratic exchange interactions, respectively, between nearest-neighbor (nn) sites, \(K_{z}>0\) parameterizes the single-ion easy-axis magnetic anisotropy, \(\mathbf{D}_{ij}\) denotes the DM vector between both nn and next-nearest-neighbor (nnn) sites, and \(\mathbf{h}=g\mu_{B}\mathbf{B}\) is the Zeeman field, where \(g\) is the Landé g-factor, \(\mu_{B}\) is the Bohr magneton, and \(\mathbf{B}\) is the external magnetic field. In general, we define the following generalized nn and nnn DM vectors [15; 43], \[\mathbf{D}_{ij}^{\rm nnn}=\nu_{ij}D_{z}^{\rm nnn}\hat{\mathbf{z}}-\eta_{ij}D_{xy}^{\rm nnn}\hat{\mathbf{R}}_{ij}, \tag{2a}\] \[\mathbf{D}_{ij}^{\rm nn}=D_{z}^{\rm nn}\hat{\mathbf{z}}+D_{xy}^{\rm nn}(\hat{\mathbf{z}}\times\hat{\mathbf{R}}_{ij}), \tag{2b}\] where \(\nu_{ij}=-\nu_{ji}=\pm 1\) depending on the hopping orientation from site \(i\) to \(j\), \(\eta_{ij}=+1(-1)\) for bonds within the \(A(B)\) sublattice, and \(\hat{\mathbf{R}}_{ij}\) represents the direction towards the neighboring lattice sites. In a freestanding 2D hexagonal lattice, only the OOP nnn DM vector can be finite [44], while the other DM vectors might be finite when the inversion and/or mirror symmetries are broken [45]. Realistic materials can exhibit the breaking of these symmetries through growth on different substrates or by applying a gate voltage. Using the Holstein-Primakoff transformation [46], the noninteracting magnon Hamiltonian in the second quantized representation, for an arbitrary direction of the ground-state magnetization (see Appendix A), reads, \[\mathcal{H}_{\rm m}=\sum_{\mathbf{q},\sigma}E_{\mathbf{q}\sigma}a_{\mathbf{q},\sigma}^{\dagger}a_{\mathbf{q},\sigma}, \tag{3}\] where \(a_{\mathbf{q},\sigma}(a_{\mathbf{q},\sigma}^{\dagger})\) is the bosonic annihilation (creation) operator for acoustic, \(\sigma=-\), and optical, \(\sigma=+\), magnon modes with eigenenergy \(E_{\mathbf{q}\sigma}\)[47]. The dispersion of a freestanding FM hexagonal lattice, in the presence of an OOP magnetic field, reduces to \(E_{\mathbf{q}\sigma}=S\left(\mathcal{Z}\tilde{J}+\sigma\sqrt{[D_{z}^{\rm nnn}(\mathbf{q})]^{2}+\tilde{J}^{2}|f(\mathbf{q})|^{2}}\right)+\left(h_{z}+(2S-1)K_{z}\right)\), where \(f(\mathbf{q})=\sum_{i=1}^{\mathcal{Z}}e^{i\mathbf{q}\cdot\mathbf{\delta}_{i}}\) is the lattice structure factor, \(\mathbf{\delta}_{i}\) is the \(i^{\rm th}\) nn vector, \(\mathcal{Z}=3\) is the coordination number of the hexagonal lattice, \(\tilde{J}=J+2\Lambda S^{2}\) is the effective exchange interaction, and \(D_{z}^{\rm nnn}(\mathbf{q})=-2D_{z}^{\rm nnn}\sum_{i=1}^{\mathcal{Z}}\sin{(\mathbf{q}\cdot\mathbf{\tau}_{i})}\), with \(\mathbf{\tau}_{1}=a\hat{\mathbf{x}}\), \(\mathbf{\tau}_{2}=a(-\hat{\mathbf{x}}+\sqrt{3}\hat{\mathbf{y}})/2\) and \(\mathbf{\tau}_{3}=-a(\hat{\mathbf{x}}+\sqrt{3}\hat{\mathbf{y}})/2\), where \(a\) is the lattice constant. Figure 1 shows this dispersion relation. 
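As an illustration, this dispersion can be evaluated numerically as in the minimal sketch below. It uses the CrI\({}_{3}\) parameters quoted in the caption of Fig. 1; the nearest-neighbor vectors \(\mathbf{\delta}_{i}\) are a standard choice consistent with the \(\mathbf{\tau}_{i}\) given above (they are not listed explicitly in the text), and the lattice constant only sets the momentum scale.

```python
import numpy as np

# CrI3 parameters from the text (meV), S = 3/2, OOP Zeeman field h_z = 4.5 meV
J, Lam, Kz, Dz_nnn, S = 2.01, 0.21, 0.109, 0.31, 1.5
hz = 4.5
a = 1.0                         # lattice constant (sets the momentum scale only)
Z = 3                           # coordination number of the hexagonal lattice
Jt = J + 2 * Lam * S**2         # effective exchange J~

# nnn vectors tau_i as given in the text; delta_i is a standard nn choice consistent with them
tau = a * np.array([[1.0, 0.0], [-0.5, np.sqrt(3) / 2], [-0.5, -np.sqrt(3) / 2]])
delta = a * np.array([[0.5, 1 / (2 * np.sqrt(3))],
                      [-0.5, 1 / (2 * np.sqrt(3))],
                      [0.0, -1 / np.sqrt(3)]])

def magnon_bands(q):
    """Acoustic (sigma = -1) and optical (sigma = +1) magnon energies E_{q,sigma} in meV."""
    f = np.sum(np.exp(1j * delta @ q))              # structure factor f(q)
    Dq = -2 * Dz_nnn * np.sum(np.sin(tau @ q))      # D_z^nnn(q)
    gap = np.sqrt(Dq**2 + Jt**2 * abs(f)**2)
    zee = hz + (2 * S - 1) * Kz
    return np.array([S * (Z * Jt - gap) + zee, S * (Z * Jt + gap) + zee])   # [E_-, E_+]

# Example: energies at the K point, q_K = (4*pi/(3a), 0)
print(magnon_bands(np.array([4 * np.pi / (3 * a), 0.0])))
```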
There is a topological gap between the acoustic and optical magnon bands at the \(K\) and \(K^{\prime}\) points due to a finite \(D_{z}^{\rm nnn}\). A similar topological gap has recently been reported in the magnon dispersion of CrI\({}_{3}\)[48; 49]. The band gap in the acoustic branch of the magnon dispersion at the \(\Gamma\) point is due to the OOP magnetic anisotropy and the applied magnetic fields. Figure 1: Noninteracting acoustic \(E_{-}\) and optical \(E_{+}\) magnon bands in a freestanding FM honeycomb lattice, in the presence of an OOP magnetic field with amplitude \(|\mathbf{h}|=4.5\) meV. We use the following parameters for CrI\({}_{3}\)[42; 47]: \(J=2.01\) meV, \(\Lambda=0.21\) meV, \(K_{z}=0.109\) meV, \(D_{z}^{\rm nnn}=0.31\) meV, and \(S=3/2\). ### Phonon Hamiltonian The phonon Hamiltonian reads [50; 51], \[\mathcal{H}_{\rm ph}=\frac{1}{2}\sum_{i,\alpha,\mu}M_{\alpha}\dot{u}_{i\alpha\mu}^{2}+\frac{1}{2}\sum_{\begin{subarray}{c}i,\alpha,\mu\\ j,\beta,\nu\end{subarray}}u_{i\alpha\mu}\Phi_{\mu\nu}^{\alpha\beta}(\mathbf{R}_{ji})u_{j\beta\nu}, \tag{4}\] where \(i(j)\) labels the unit cells, \(\alpha(\beta)\) labels the ions inside a unit cell, \(\mu(\nu)\) denotes the spatial coordinates, \(M_{\alpha}\) is the mass of the \(\alpha\)th ion in the unit cell, \(\mathbf{u}_{i\alpha}(t)\) represents the displacement of the ion at time \(t\), and \(\Phi_{\mu\nu}^{\alpha\beta}\) is the force constant acting on the \(i\alpha\)th ion in the \(\mu\)th direction due to a displacement of the \(j\beta\)th ion in the \(\nu\)th direction. The second quantized form of the phonon Hamiltonian in the diagonalized basis reads, \[\mathcal{H}_{\rm ph}=\sum_{\mathbf{q},\lambda}E_{\mathbf{q}\lambda}\big{(}c_{\mathbf{q},\lambda}^{\dagger}c_{\mathbf{q},\lambda}+\frac{1}{2}\big{)}, \tag{5}\] where \(c_{\mathbf{q},\lambda}(c_{\mathbf{q},\lambda}^{\dagger})\) is the bosonic annihilation (creation) operator for the \(\lambda\) phonon mode with eigenenergy \(E_{\mathbf{q}\lambda}\). Figure 2 shows the phonon dispersion relations of a hexagonal lattice. There are three acoustic modes: IP longitudinal (LA), IP transverse (TA), and OOP (ZA). Although the two IP acoustic modes have linear dispersions around the \(\Gamma\) point, the OOP or flexural acoustic mode in 2D systems has a quadratic low-energy dispersion in the presence of the rotation symmetry [52; 53; 7]. In addition, there are three optical modes: IP longitudinal (LO), IP transverse (TO), and OOP (ZO). ### Magnon-phonon interactions The m-ph interactions in our model can in general arise from bilinear and biquadratic exchange interactions, DM interactions, and crystalline magnetic anisotropy. We seek coherent hybridization between magnons and phonons that leads to level repulsion between two bosonic modes at degenerate points, rather than scattering phenomena between them. Therefore, we focus solely on the identification of linear-order m-ph interactions. The magnetic ground state is collinear in our model; therefore, up to the lowest order in the m-ph coupling amplitude, only the IP components of the DM interactions and the crystalline magnetic anisotropy may lead to m-ph hybridization. Note that the OOP components of the DM interactions may also lead to m-ph hybridization, but their contribution is of higher order than that of the in-plane components (see Appendix B), and thus we ignore them in this study. 
Therefore, the total m-ph Hamiltonian for an arbitrary direction of the magnetic ground state consists of three terms, \(\mathcal{H}_{\rm m-ph}=\mathcal{H}_{\rm D_{xy}^{\rm nnn}}+\mathcal{H}_{\rm D_{xy}^{\rm nn}}+\mathcal{H}_{\rm me}\). See Appendix B for more details. i) The IP nnn-DM interaction, Eq. (2a), leads to an effective m-ph coupling as, \[\mathcal{H}_{\rm D_{xy}^{\rm nnn}}=\sum_{\langle\langle i,j\rangle\rangle}\sum_{\mu,\nu}(u_{i\mu}-u_{j\mu})F_{ij}^{\mu\nu}(\mathbf{S}_{i}^{\prime}\times\mathbf{S}_{j}^{\prime})_{\nu}, \tag{6}\] with the following coupling matrix for an arbitrary spin direction, see Appendix B.1 [47]: \[F_{ij}^{\mu\nu}=\frac{\eta_{ij}D_{xy}^{\rm nnn}}{|\mathbf{R}_{ij}|}\sum_{\xi}\left(\delta_{\mu\xi}-\hat{R}_{ij}^{\mu}\hat{R}_{ij}^{\xi}\right)\mathcal{R}_{\xi}^{\nu}. \tag{7}\] The rotation matrix \(\mathcal{R}\) is defined such that the spins can be expressed in terms of a new coordinate frame \(\{\hat{\mathbf{e}}_{1},\hat{\mathbf{e}}_{2},\hat{\mathbf{e}}_{3}\}\), where \(\hat{\mathbf{e}}_{3}\) aligns with the magnetic ground state, dictated by the magnetic field direction; i.e., \((\mathbf{S}_{i}\times\mathbf{S}_{j})_{\nu}=[\mathcal{R}(\mathbf{S}_{i}^{\prime}\times\mathbf{S}_{j}^{\prime})]_{\nu}\), see Appendix A. This interaction term can be finite for both in-plane (\(\mu=x,y\)) and out-of-plane (\(\mu=z\)) phonon modes. ii) The IP nn-DM interaction, Eq. (2b), results in the following m-ph coupling, \[\mathcal{H}_{\rm D_{xy}^{\rm nn}}=\sum_{\langle i,j\rangle}\sum_{\mu,\nu}(u_{i\mu}-u_{j\mu})T_{ij}^{\mu\nu}(\mathbf{S}_{i}^{\prime}\times\mathbf{S}_{j}^{\prime})_{\nu}, \tag{8}\] with the nn coupling matrix for an arbitrary spin direction given by [47], \[T_{ij}^{\mu\nu}=-\frac{D_{xy}^{\rm nn}}{|\mathbf{R}_{ij}|}\sum_{\xi,\gamma}\epsilon_{z\gamma\xi}\left(\delta_{\mu\gamma}-\hat{R}_{ij}^{\mu}\hat{R}_{ij}^{\gamma}\right)\mathcal{R}_{\xi}^{\nu}, \tag{9}\] where \(\epsilon_{z\gamma\xi}\) is the Levi-Civita tensor. See Appendix B.2 for details. This interaction term can only be finite for in-plane (\(\mu=x,y\)) phonon modes. iii) The magnetoelastic interaction, i.e., the interaction between the spin and the elastic displacement arising from the crystalline anisotropy, is described by a Kittel-type magnetoelastic energy density at site \(i\) as \(f_{i}^{\rm me}=(b_{1}/S^{2})\sum_{\mu}e_{\mu\mu}\mathcal{S}_{i\mu}^{2}+(b_{2}/S^{2})\sum_{\mu\neq\nu}e_{\mu\nu}S_{i\mu}S_{i\nu}\), where \(e_{\mu\nu}=(\partial_{r_{\nu}}u_{\mu}+\partial_{r_{\mu}}u_{\nu})/2\) is the strain tensor component, and \(b_{1}\) and \(b_{2}\) are magnetoelastic constants related to the normal strains and shear deformations, respectively [34; 39]. Figure 2: Noninteracting phonon bands in a freestanding hexagonal lattice. We use graphene parameters [50; 51; 47]. The effective m-ph coupling Hamiltonian in the linear order of the magnon amplitude reads [34; 16; 38], \[\mathcal{H}_{\text{me}}=\sum_{(i,j)}\sum_{\mu\nu}(u_{i\mu}-u_{j\mu})K^{\mu\nu}_{ij}S^{\prime}_{i\nu}, \tag{10}\] where \(K_{ij}\) is the coupling matrix between sites \(i\) and \(j\). For an arbitrary magnetization direction, this matrix is lengthy and is presented in Appendix B.3. 
Here, we only consider a magnetization along the \(x\), \(y\), and \(z\) directions, and thus only the shear deformation contributes to the coupling matrix [47], \[K_{ij}=\frac{\kappa_{2}}{|\mathbf{R}_{ij}|^{2}}\begin{bmatrix}R^{y}_{ij}\Gamma^{1}_{xy}&R^{y}_{ij}\Gamma^{2}_{xy}\\ R^{x}_{ij}\Gamma^{1}_{xy}&R^{x}_{ij}\Gamma^{2}_{xy}\\ R^{x}_{ij}\Gamma^{1}_{xz}+R^{y}_{ij}\Gamma^{1}_{yz}&R^{x}_{ij}\Gamma^{2}_{xz}+R^{y}_{ij}\Gamma^{2}_{yz}\end{bmatrix}, \tag{11}\] where \(\Gamma^{\nu}_{\mu\mu^{\prime}}=\left(\mathcal{R}^{\nu}_{\mu}\mathcal{R}^{3}_{\mu^{\prime}}+\mathcal{R}^{3}_{\mu}\mathcal{R}^{\nu}_{\mu^{\prime}}\right)/2\) and \(\kappa_{2}=2b_{2}a^{3}/S\). ## III Magnon-polaron bands We proceed with the diagonalization of the total Hamiltonian, with the various m-ph interactions, to determine the eigenstates and eigenenergies associated with the emerging magnon-polaron hybrid modes. In our model, there are \(N_{d}=8\) magnon-polaron bands with eigenenergies \(\mathcal{E}_{k,n}\), where \(n=1,...,N_{d}\). In order to explore the impact of different m-ph coupling mechanisms and the orientation of the magnetic ground state on the magnon-polaron hybrid bands, we present the band dispersions in Fig. 3. These plots show the cases where \(D^{\text{nn}}_{xy}\), \(D^{\text{nnn}}_{xy}\), or \(\kappa_{2}\) is nonzero, as well as the scenario where all three m-ph mechanisms are present. Although a finite \(D^{\text{nn}}_{xy}\) does not result in coupling between magnons and OOP phonons for any magnetization direction, hybridization between magnons and OOP phonons may occur when there is a finite IP component of the magnetization through \(D^{\text{nnn}}_{xy}\). On the other hand, both \(D^{\text{nn}}_{xy}\) and \(D^{\text{nnn}}_{xy}\) induce hybridization between magnons and IP phonons regardless of the magnetization direction. For a finite magnetoelastic coupling \(\kappa_{2}\), the most profound hybridization occurs with the TA phonon branch when the magnetization lies in the plane, while the coupling to OOP phonon modes is weak for all examined magnetization directions. Furthermore, it is evident that this coupling exhibits stronger hybridization with IP phonon modes near the \(\Gamma\)-point compared to the DM interactions. Apart from the avoided level crossings between the magnon and phonon branches, we also observe a tiny gap opening at the intersection between the LO-LA and ZO-ZA phonon branches at the \(K\) and \(K^{\prime}\) points in the presence of m-ph interactions. This gap opens as a consequence of an emerging inversion symmetry breaking induced by the effective m-ph interactions. However, the size of these gaps is too small to be visually distinguishable in the presented figures. The presence of such gaps in the phonon spectrum may indicate the existence of chiral phonons, i.e., phonons with circular polarization that have opposite chirality in different valleys (\(K\) and \(K^{\prime}\)) [54]. Chiral phonons can possess a finite angular momentum and may exhibit a valley phonon Hall effect. Exploring chiral phonons is beyond the scope of the present study, and we defer this aspect to future investigations. Similar gap openings have recently been reported in a twisted FM kagome lattice [13] and an AFM system [27]. ## IV Berry curvature and Chern numbers The Berry curvature is closely related to the topological properties of the energy bands and plays a crucial role in determining the anomalous transport properties of bosonic systems. 
Controlling the Berry curvature is important for exploring novel functionalities and potential applications in bosonic topological materials. The Berry curvature of the \(n^{\text{th}}\) band is given by \(\Omega_{n}(\mathbf{k})=i\left(\langle\partial_{k_{x}}n(\mathbf{k})|\partial_{k_{y}}n(\mathbf{k})\rangle-\langle\partial_{k_{y}}n(\mathbf{k})|\partial_{k_{x}}n(\mathbf{k})\rangle\right)\). It is more convenient to compute Berry curvatures by transforming the total interacting Hamiltonian into a bosonic Bogoliubov-de Gennes (BdG) form [33; 47]. The bosonic BdG Hamiltonian has two copies of the same eigenstates. In analogy to fermionic systems, we can denote the two sets as particle-like and hole-like states, where the states \(n=1,\dots,N_{d}\) represent particle-like bands and the states \(n=N_{d}+1,\dots,2N_{d}\) are hole-like bands. The BdG system is diagonalized with a paraunitary transformation \(T^{\dagger}_{k}H_{k}T_{k}=\mathcal{E}_{k}=\text{diag}(\mathcal{E}_{k,1},\dots,\mathcal{E}_{k,N_{d}},\mathcal{E}_{-k,1},\dots,\mathcal{E}_{-k,N_{d}})\), where the matrix \(T_{k}\) satisfies \(T^{\dagger}_{k}\sigma_{3}T_{k}=\sigma_{3}\) and \(\sigma_{3}=\sigma_{z}\otimes I_{N_{d}\times N_{d}}\)[55]. Here \(\sigma_{z}\) is the \(z\) component of the Pauli matrices, and \(I_{N_{d}\times N_{d}}\) is the unit matrix of dimension \(N_{d}\). The eigenenergies and eigenstates are found numerically using Colpa's method [56]. A gauge-independent representation of the Berry curvature reads [57], \[\Omega_{n}(\mathbf{k})=2i\hbar^{2}\sum_{\begin{subarray}{c}m=1\\ m\neq n\end{subarray}}^{2N_{d}}(\sigma_{3})_{nn}(\sigma_{3})_{mm}\frac{\langle n_{\mathbf{k}}|v_{x}|m_{\mathbf{k}}\rangle\langle m_{\mathbf{k}}|v_{y}|n_{\mathbf{k}}\rangle}{(\mathcal{E}_{k,n}-\mathcal{E}_{k,m})^{2}}, \tag{12}\] where \(\mathbf{v}=\hbar^{-1}\partial_{\mathbf{k}}\mathcal{H}\) is the vector of the velocity operator, \(\mathcal{E}_{k,n}=(\sigma_{3}\mathcal{E}_{k})_{nn}\), and \(|n_{\mathbf{k}}\rangle\) denotes the eigenstates corresponding to the columns \(T_{k,n}\) of the paraunitary matrix. The first Chern number characterizes the topology of the \(n^{\text{th}}\) band and is given by, \[C_{n}=\frac{1}{2\pi}\int_{\text{BZ}}d^{2}\mathbf{k}\,\Omega_{n}(\mathbf{k}). \tag{13}\] In the absence of m-ph interactions, the phonon bands are topologically trivial in our model. However, it is well known that a finite \(D_{z}^{\text{nnn}}\) opens a gap at the \(K\) points of FM hexagonal lattices, and hence topological magnons emerge, with the two magnon bands acquiring Chern numbers \(C=\pm 1\), provided that the magnetization has a nonzero out-of-plane component [57; 58; 59; 60]. In contrast, a finite \(D_{xy}^{\text{nn}}\) and/or \(D_{xy}^{\text{nnn}}\) does not open any topological gap in the magnon dispersion of our system. However, upon considering the lowest-order m-ph interactions, magnon-polaron hybrid states emerge with the possibility of exhibiting nontrivial topological properties. The effective Hamiltonian of the magnon-polaron states may break inversion symmetry for certain magnetization directions. Topological gaps emerge at anticrossing points of the hybrid magnon-polaron bands, or in magnon-(phonon-)like regions of the hybrid bands. Consequently, the magnon-polaron bands may exhibit a finite Berry curvature, localized around these topological hotspot gaps. These Berry curvature sources can influence the Chern numbers associated with the bands, contributing to the rich topological aspects of the system.
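The Chern numbers discussed below can be obtained numerically from Eqs. (12) and (13). The following is a minimal sketch of such a calculation: it assumes a user-supplied function `Hk(k)` returning the positive-definite \(2N_{d}\times 2N_{d}\) BdG matrix at momentum \(\mathbf{k}\), approximates the velocity operators by finite differences, and sets \(\hbar=1\); the grid size and the reciprocal vectors `b1`, `b2` are placeholders.

```python
import numpy as np

def colpa_diagonalize(H, Nd):
    """Paraunitary (Colpa) diagonalization of a positive-definite 2Nd x 2Nd bosonic BdG
    matrix H: returns T with T^dag H T = E and T^dag sigma_3 T = sigma_3, together with
    the signed energies (sigma_3 E)_nn that enter Eq. (12)."""
    s3 = np.kron(np.diag([1.0, -1.0]), np.eye(Nd))
    K = np.linalg.cholesky(H).conj().T            # H = K^dag K
    L, U = np.linalg.eigh(K @ s3 @ K.conj().T)
    order = np.argsort(L)[::-1]                   # particle-like (positive) eigenvalues first
    L, U = L[order], U[:, order]
    E = np.diag(np.abs(L))                        # E = sigma_3 diag(L) > 0
    T = np.linalg.inv(K) @ U @ np.sqrt(E)
    return T, L

def berry_curvature(Hk, k, Nd, dk=1e-4):
    """Berry curvature Omega_n(k) of the Nd particle-like bands following Eq. (12);
    the real curvature is obtained as -2 Im(...) from the 2i prefactor (hbar = 1)."""
    s3d = np.concatenate([np.ones(Nd), -np.ones(Nd)])
    T, En = colpa_diagonalize(Hk(k), Nd)
    vx = (Hk(k + np.array([dk, 0.0])) - Hk(k - np.array([dk, 0.0]))) / (2 * dk)
    vy = (Hk(k + np.array([0.0, dk])) - Hk(k - np.array([0.0, dk]))) / (2 * dk)
    Vx, Vy = T.conj().T @ vx @ T, T.conj().T @ vy @ T
    omega = np.zeros(Nd)
    for n in range(Nd):
        for m in range(2 * Nd):
            if m != n:
                omega[n] += -2 * s3d[n] * s3d[m] * np.imag(Vx[n, m] * Vy[m, n]) \
                            / (En[n] - En[m]) ** 2
    return omega

def chern_numbers(Hk, Nd, b1, b2, N=60):
    """Chern numbers C_n of the particle-like bands, Eq. (13), on an N x N momentum grid
    spanning the Brillouin zone defined by the reciprocal lattice vectors b1 and b2."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    C = np.zeros(Nd)
    for i in range(N):
        for j in range(N):
            C += berry_curvature(Hk, (i / N) * b1 + (j / N) * b2, Nd)
    dA = abs(b1[0] * b2[1] - b1[1] * b2[0]) / N**2    # k-space area element
    return C * dA / (2 * np.pi)
```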
Figure 3: Magnon-polaron bands in an FM insulator with a hexagonal lattice structure. Plots in each row illustrate the band dispersion with the magnetic ground state \(\hat{\mathbf{S}}_{0}\) along the \(\hat{\mathbf{x}}\), \(\hat{\mathbf{y}}\), and \(\hat{\mathbf{z}}\) directions, controlled by the external magnetic field. In each column, we explore different scenarios with a specific nonzero m-ph coupling parameter: a) \(D_{xy}^{\text{nn}}=0.2\) meV, b) \(D_{xy}^{\text{nnn}}=0.3\) meV, and c) \(\kappa_{2}=1\) meV. Additionally, we consider a combined scenario, as depicted in d), where all the mentioned parameters are present simultaneously.

Table 1: Chern numbers \(C_{n}\) of the magnon and phonon bands in the presence of various m-ph coupling mechanisms, for ground-state magnetizations \(\hat{\mathbf{S}}_{0}=\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\). The bands are labeled \(n\) = [ZA, TA, LA, \(E_{-}\), ZO, LO, TO, \(E_{+}\)]. The symbol \(\times\) denotes the absence of m-ph coupling. In certain cases, the Berry curvature of a band is identically zero, \(\Omega_{n}(\mathbf{k})=0\).

Table 1 presents a comprehensive summary of the Chern numbers associated with the distinct bands arising from the various m-ph interaction mechanisms and several magnetic ground-state orientations. This table demonstrates that magnon-polaron bands can be either topologically nontrivial (\(C_{n}\neq 0\)) or trivial (\(C_{n}=0\)), depending on the m-ph coupling mechanism and the magnetization direction. ## V Anomalous Hall responses Measuring various thermal conductivities provides essential information on the underlying state of the system as well as on the nature of the emerging quasiparticles [58; 61]. Direct detection of the (spin) Berry curvature and the Chern number is experimentally challenging in bosonic systems. However, the fingerprint of these quantities may be reflected in the anomalous Hall responses to a temperature gradient applied across the system. In general, the total anomalous Hall response of a magnetic insulator may have three contributions, from free magnon quasiparticles, free phonon quasiparticles, and magnon-polaron quasiparticles. Each of these quasiparticles may carry heat and/or spin angular momentum, manifested in the thermal Hall and spin Nernst effects, respectively. Here, we only focus on the emerging contribution from the magnon-polaron quasiparticles. 
### Thermal Hall effect The thermal Hall conductivity \(\kappa_{xy}\) is the quantity that relates the transverse heat current to the applied temperature gradient, \(\mathbf{J}^{Q}=\kappa_{xy}(\mathbf{\nabla}T\times\hat{z})\). Within linear response theory, the thermal Hall conductivity is related to the Berry curvature as [58; 62; 63], \[\kappa_{xy}=-\frac{k_{B}^{2}T}{\hbar\mathcal{A}}\sum_{\mathbf{k}}\sum_{n=1}^{N_{d}}c_{2}(g_{k,n})\Omega_{n}(\mathbf{k}), \tag{14}\] where \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature, \(\mathcal{A}\) is the area of the system, \(g_{k,n}=\left(e^{\mathcal{E}_{k,n}/k_{B}T}-1\right)^{-1}\) is the equilibrium Bose-Einstein distribution, and \(c_{2}(x)=(1+x)\left(\ln\frac{1+x}{x}\right)^{2}-\left(\ln x\right)^{2}-2\text{Li}_{2}(-x)\), where \(\text{Li}_{2}(x)\) is the polylogarithm function of second order. In the absence of m-ph couplings, the only source of thermal Hall conductivity in our model is a finite \(D_{z}^{\text{nnn}}\). This noninteracting contribution has a purely magnonic origin and is finite only if the magnetization has a finite projection along the \(z\) direction. As we showed earlier, m-ph interactions may generate a finite Berry curvature for the phonon-like and magnon-like bands of the magnon-polaron hybrid states, and thus we expect a finite thermal Hall response. Figure 4 presents the thermal Hall conductivity arising from the m-ph coupling via \(D_{xy}^{\text{nnn}}\), as a function of temperature for several magnetization directions. This figure shows that the thermal Hall conductivity is tunable by applying a magnetic field along different directions. The thermal Hall conductivity exhibits a varying sign for both the \(x\) and \(z\) directions. In the low-temperature regime, the interaction between the LA phonon band and the lower magnon band \(E_{-}\) takes precedence. Nevertheless, with increasing temperature, additional bands are thermally activated, resulting in a significant alteration of their signatures. However, the thermal Hall conductivity vanishes for magnetization along the \(y\) direction. This arises from the complete balance in the distribution of the Berry curvature, resulting in topologically trivial bands. Specifically, for each band gap that contributes to the conductivity, there exists a corresponding band gap elsewhere in the Brillouin zone, where the bands exhibit opposite Berry curvature, leading to a negative contribution. Consequently, the net conductivity should be zero. This finding strengthens the idea that a nonzero thermal Hall conductivity can serve as an indicator of a topological system.

Figure 4: The thermal Hall conductivity of magnon-polaron quasiparticles, generated by a finite IP nnn DM \(D_{xy}^{\text{nnn}}=0.3\) meV, as a function of temperature for different ground-state magnetization directions. The parameters are the same as in Fig. 3, column b).

Figure 5: The thermal Hall conductivity of magnon-polaron quasiparticles as a function of temperature for different m-ph coupling mechanisms. The scenario considers an OOP ground-state magnetization. We set \(\kappa_{2}=1.04\) meV, \(D_{xy}^{\text{nn}}=0.173\) meV, and \(D_{xy}^{\text{nnn}}=0.3\) meV.

In Fig. 5, we compare the thermal Hall conductivities of magnon-polaron states arising from different m-ph coupling mechanisms. Unlike the DM-induced m-ph coupling, the anisotropy contribution exhibits nonzero values even at very low temperatures. 
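As a minimal sketch, Eq. (14) can be evaluated as below once the band energies and Berry curvatures have been tabulated on a momentum grid (for instance with the Chern-number sketch above); the unit choices (energies in meV, \(k_{B}\) in meV/K, \(\hbar=1\)) are ours.

```python
import numpy as np
from scipy.special import spence     # spence(z) = Li_2(1 - z), so Li_2(-x) = spence(1 + x)

def c2(x):
    """Weight function c_2(x) entering Eq. (14)."""
    return (1 + x) * np.log((1 + x) / x) ** 2 - np.log(x) ** 2 - 2 * spence(1 + x)

def thermal_hall(energies, curvatures, T, area, kB=0.08617):
    """kappa_xy from Eq. (14).  `energies` and `curvatures` are arrays of shape
    (N_k, N_d) holding E_{k,n} (meV) and Omega_n(k) on a BZ grid; `area` is the
    system area A; kB is in meV/K and hbar is set to one."""
    g = 1.0 / (np.exp(energies / (kB * T)) - 1.0)    # Bose-Einstein occupations g_{k,n}
    return -(kB ** 2) * T / area * np.sum(c2(g) * curvatures)
```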
One possible explanation for this low-temperature disparity is the emergence of a slightly negative Berry curvature in the ZA phonon branch around the \(\Gamma\)-point in the presence of a finite \(\kappa_{2}\), leading to a positive conductivity. Interestingly, this effect diminishes when the quadratic low-energy dispersion of the ZA phonon mode is replaced by a linear dispersion, as commonly found in 3D materials or 2D hexagonal lattices with broken sublattice (rotation) symmetry. In such cases, a vanishing conductivity is seen at low temperatures in the presence of \(\kappa_{2}\). ### Spin Nernst effect The spin Nernst coefficient \(\alpha_{xy}\) relates the spin current density to the applied temperature gradient, \(\mathbf{J}^{s}=\alpha_{xy}(\mathbf{\nabla}T\times\hat{z})\). Within linear response theory, the spin Nernst coefficient is related to the _spin_ Berry curvature and is given by [21, 33, 55], \[\alpha_{xy}=-\frac{2k_{B}}{\mathcal{A}}\sum_{\mathbf{k}}\sum_{n=1}^{N_{d}}c_{1}(g_{k,n})\Omega_{n}^{s}(\mathbf{k}), \tag{15}\] where the spin Berry curvature is, \[\Omega_{n}^{s}(\mathbf{k})=2i\hbar^{2}\sum_{\begin{subarray}{c}m=1\\ m\neq n\end{subarray}}^{2N_{d}}(\sigma_{3})_{nn}(\sigma_{3})_{mm}\frac{\langle n_{\mathbf{k}}|j_{x}^{s}|m_{\mathbf{k}}\rangle\langle m_{\mathbf{k}}|v_{y}|n_{\mathbf{k}}\rangle}{(\mathcal{E}_{k,n}-\mathcal{E}_{k,m})^{2}}, \tag{16}\] with \(c_{1}(x)=\left(1+x\right)\ln\left(1+x\right)-x\ln\left(x\right)\), and \(\mathbf{j}^{s}=\frac{1}{4}\{\mathbf{v},\sigma_{3}\mathcal{S}\}\) is the spin current operator. Moreover, \(\mathcal{S}\) is the spin excitation operator, which can explicitly be written in the form \(\mathcal{S}=\text{diag}(S_{1},\dots,S_{N_{d}})\otimes I_{2\times 2}\), where \(S_{n}\) is the expectation value of the spin angular momentum in the noninteracting band \(n\)[21, 55]. In Fig. 6, we compare the spin Nernst coefficient for various magnetization directions as a function of temperature when the m-ph coupling mechanism is via \(D_{xy}^{\text{nnn}}\). The magnitude of the coefficient is of the same order in all cases, but the sign varies. A positive coefficient is observed when the magnetization is oriented along the \(y\) direction, while negative contributions are observed for the \(x\) and \(z\) directions. In contrast to the thermal Hall conductivity, shown in Fig. 4, the spin Nernst coefficient does not change sign with increasing temperature. This difference can be attributed to the distinct distributions of the Berry curvature and the _spin_ Berry curvature. Notably, even in cases where the bands exhibit topologically trivial behavior and the thermal Hall effect vanishes, i.e., magnetization along the \(y\) direction, we still find a nonzero spin Nernst coefficient. The nonvanishing spin current arises from the spin Berry curvature, which, unlike the Berry curvature, is not directly linked to the Chern number. Hence, the FM system exhibits a spin Hall current while the thermal Hall current vanishes.

Figure 6: The spin Nernst coefficient of magnon-polaron quasiparticles as a function of temperature for different ground-state magnetization directions, with \(D_{xy}^{\text{nnn}}=0.3\) meV.

Figure 7: Spin Nernst coefficient of magnon-polaron quasiparticles as a function of temperature for different m-ph coupling mechanisms. The ground-state magnetization is OOP. We set \(\kappa_{2}=1.04\) meV, \(D_{xy}^{\text{nn}}=0.173\) meV, and \(D_{xy}^{\text{nnn}}=0.3\) meV.
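Equations (15)-(16) can be evaluated with the same machinery as the sketches above (the snippet below reuses `colpa_diagonalize` from the Chern-number sketch); the ordering of the spin expectation values in the BdG basis and the finite-difference velocities are our conventions, not necessarily those of Refs. [21, 55].

```python
import numpy as np

def spin_berry_curvature(Hk, k, Nd, Sn, dk=1e-4):
    """Spin Berry curvature Omega^s_n(k), Eq. (16), with the spin current operator
    j^s_x = (1/4){v_x, sigma_3 S}.  Sn = [S_1, ..., S_Nd] are the spin expectation
    values of the noninteracting bands, repeated here over the particle and hole
    blocks of the BdG basis (hbar = 1)."""
    s3d = np.concatenate([np.ones(Nd), -np.ones(Nd)])
    s3S = np.diag(s3d * np.concatenate([Sn, Sn]))                  # sigma_3 * S
    T, En = colpa_diagonalize(Hk(k), Nd)                           # from the earlier sketch
    vx = (Hk(k + np.array([dk, 0.0])) - Hk(k - np.array([dk, 0.0]))) / (2 * dk)
    vy = (Hk(k + np.array([0.0, dk])) - Hk(k - np.array([0.0, dk]))) / (2 * dk)
    jsx = 0.25 * (vx @ s3S + s3S @ vx)                             # (1/4){v_x, sigma_3 S}
    Jx, Vy = T.conj().T @ jsx @ T, T.conj().T @ vy @ T
    omega_s = np.zeros(Nd)
    for n in range(Nd):
        for m in range(2 * Nd):
            if m != n:
                omega_s[n] += -2 * s3d[n] * s3d[m] * np.imag(Jx[n, m] * Vy[m, n]) \
                              / (En[n] - En[m]) ** 2
    return omega_s

def c1(x):
    """Weight function c_1(x) entering Eq. (15)."""
    return (1 + x) * np.log(1 + x) - x * np.log(x)

def spin_nernst(energies, spin_curvatures, T, area, kB=0.08617):
    """alpha_xy from Eq. (15); array shapes as in the thermal Hall sketch."""
    g = 1.0 / (np.exp(energies / (kB * T)) - 1.0)
    return -2 * kB / area * np.sum(c1(g) * spin_curvatures)
```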
In other words, for magnetization along the \(y\) direction, there is a counter-propagating thermal Hall effect of phonons that compensates the thermal Hall effect of magnons. In Fig. 7, we compare the spin Nernst coefficient of magnon-polaron states arising from different types of m-ph couplings. This figure illustrates that the distinct sign and temperature-dependent behavior of the spin Nernst signal can be used to distinguish between different m-ph coupling mechanisms in experiments. ## VI Summary and concluding remarks We have examined the effects of an external magnetic field and various m-ph coupling mechanisms on the emerging magnon-polaron hybrid states. We have shown that the (spin) Berry curvature and the topology of the hybrid bands can be tuned by the direction of the applied magnetic field. We have also explored the impact of the (spin) Berry curvature on the thermal Hall and spin Nernst effects. We showed that the thermal Hall response of the magnon-polaron hybrid states can be eliminated, while the spin Nernst effect remains finite in the system. Our study suggests that measuring the magnetic field dependence of the anomalous Hall effects can be used as a probe of the underlying m-ph coupling and the topology of the system. ## Acknowledgment The authors thank Se Kwon Kim and Verena Brehm for helpful discussions. This project has been supported by the Norwegian Financial Mechanism Project No. 2019/34/H/ST3/00515, "2Dtronics"; and partially by the Research Council of Norway through its Centers of Excellence funding scheme, Project No. 262633, "QuSpin".
2306.09822
Lightweight Attribute Localizing Models for Pedestrian Attribute Recognition
Pedestrian Attribute Recognition (PAR) deals with the problem of identifying features in a pedestrian image. It has found interesting applications in person retrieval, suspect re-identification and soft biometrics. In the past few years, several Deep Neural Networks (DNNs) have been designed to solve the task; however, the developed DNNs predominantly suffer from over-parameterization and high computational complexity. These problems hinder them from being exploited in resource-constrained embedded devices with limited memory and computational capacity. By reducing a network's layers using effective compression techniques, such as tensor decomposition, neural network compression is an effective method to tackle these problems. We propose novel Lightweight Attribute Localizing Models (LWALM) for Pedestrian Attribute Recognition (PAR). LWALM is a compressed neural network obtained after effective layer-wise compression of the Attribute Localization Model (ALM) using the Canonical Polyadic Decomposition with Error Preserving Correction (CPD-EPC) algorithm.
Ashish Jha, Dimitrii Ermilov, Konstantin Sobolev, Anh Huy Phan, Salman Ahmadi-Asl, Naveed Ahmed, Imran Junejo, Zaher AL Aghbari, Thar Baker, Ahmed Mohamed Khedr, Andrzej Cichocki
2023-06-16T13:07:13Z
http://arxiv.org/abs/2306.09822v1
# Lightweight Attribute Localizing Models for Pedestrian Attribute Recognition ###### Abstract Pedestrian Attribute Recognition (PAR) deals with the problem of identifying features in a pedestrian image. It has found interesting applications in person retrieval, suspect re-identification and soft biometrics. In the past few years, several Deep Neural Networks (DNNs) have been designed to solve the task; however, the developed DNNs predominantly suffer from over-parameterization and high computational complexity. These problems hinder them from being exploited in resource-constrained embedded devices with limited memory and computational capacity. By reducing a network's layers using effective compression techniques, such as tensor decomposition, neural network compression is an effective method to tackle these problems. We propose novel Lightweight Attribute Localizing Models (LWALM) for Pedestrian Attribute Recognition (PAR). LWALM is a compressed neural network obtained after effective layer-wise compression of the Attribute Localization Model (ALM) using the Canonical Polyadic Decomposition with Error Preserving Correction (CPD-EPC) algorithm. Convolutional Neural Networks (CNNs) have been used to solve numerous computer vision tasks, such as _image recognition_, _object detection and pose estimation_. Although some of the recent CNNs show promising results in image recognition, they suffer from high computational complexity and overparameterization. In many cases these problems act as roadblocks for their utilization in power/memory-constrained hardware devices such as smartphones and surveillance cameras. Reducing NN parameters and computational complexity is an active area of research, and achieving such reductions without hindering inference accuracy remains both theoretically and experimentally challenging. There are four known categories of model reduction: _low-rank tensor approximation_, _pruning_, _quantization_ and _knowledge distillation_. In a typical pruning algorithm, redundant weights are pruned and important weights are kept. It generally consists of a three-stage pipeline, i.e., training, pruning and fine-tuning, which makes it computationally very expensive. Unstructured pruning fails to show runtime speed-up on conventional GPUs, while structured pruning is problematic due to the change in the network structure. Quantization deals with converting and storing weights at bit widths lower than floating-point precision. Therefore, the associated back-propagation becomes infeasible, and the global structure of the weights becomes inconvenient to maintain; hence, quantized models are hard to converge, and the accuracy is significantly reduced. Knowledge Distillation is the process of transferring knowledge from a large model to a smaller one; since large models have higher knowledge capacity than smaller models, there often remains a surprisingly large discrepancy between the predictive distributions of the large and small models, even in cases when the small model has the capacity to perfectly match the larger one. In this paper we focus on the first category, which reduces the layers of a NN by a tensor decomposition, e.g., Canonical Polyadic decomposition (CPD) [9], to obtain _Light-Weight (LW) layers_. Weights of convolutional layers can be reduced, for they are known to lie in a low-rank subspace. This reduction often leads to some accuracy drop, which can be recovered with fine-tuning. 
The technique replaces the CNN layers by a sequence of layers with smaller weights, so it results in a reduction of the computational cost and the number of parameters. In their recent work, Lebedev et al. [10] used CP decomposition to compress the weights of convolutional kernels in CNNs and reported the instability problem in CPD. The issue was later explained in the work by Phan et al. [16]. The instability arises in some difficult scenarios, e.g., when the rank exceeds the tensor dimension. Compared to traditional algorithms, they propose to control the norm of the rank-one tensors during tensor decomposition, which appears to be a useful constraint. Although existing methods demonstrate effective compression at a single layer or some layers, to our knowledge, they do not use a combination of tensor decomposition algorithms to reduce all the layers of a CNN and obtain a fully compressed LWALM. Moreover, existing algorithms are limited in terms of application. Motivated by this, we present a novel LWALM obtained by reducing the components of the Attribute Localization Model (ALM) [17] using the stable CPD-EPC algorithm [16] and SVD [8]. We compress the ALM at two stages (EPC-0.001 and EPC-0.002) to obtain two different LWALMs with fewer parameters and less computational complexity. We apply LWALMs to the PAR task and provide substantial experimental evaluations using PAR metrics. The main contributions of our work can be summarized as follows:
* We propose a novel Lightweight Attribute Localizing Model (LWALM) obtained by reducing the kernels of size (\(k>1\)) of the Attribute Localization Model (ALM) using CPD-EPC at two EPC values (i.e., at \(\delta=0.001\) & \(\delta=0.002\)) and the kernels of size (\(k=1\)) using truncated SVD.
* We propose a loss function with a norm constraint on the factorized layers of LWALM.
* We demonstrate a significant reduction in parameters and computational complexity with less than 2% accuracy drop on Pedestrian Attribute Recognition datasets (PETA and PA-100K) and show the reliability of LWALMs using reliability diagrams.
The rest of the paper is organized as follows: Section 1 briefly overviews the most recent approaches to NN compression. Section 2 presents the preliminary notations and concepts used throughout the paper. Section 3 introduces the PAR problem and algorithms, including the ALM. Section 4 describes the proposed compression algorithm for the PAR task. Our approach to compressing the layers of ALM to obtain LWALM is described in Section 5. In Section 6 we discuss the obtained results. Section 7 evaluates LWALMs using confidence calibration. Finally, we conclude and give a prospect of research directions in Section 8. 
Some other methods, i.e., based on vector quantization [5] or on tensor train decomposition [15], have also shown good compression capabilities. A rank selection technique based on Principal Component Analysis (PCA) and an optimization technique to minimize the reconstruction error of non-linear responses have also been presented. A pruning technique presented in [7] aims at reducing the total amount of parameters and operations in the entire network. Some implementation-level approaches using the Fast Fourier Transform (FFT) to speed up convolutions [14] and CPU code optimizations [18] to improve the execution time have also shown promising results. ## 2 Preliminary notations and concepts This section presents basic definitions and concepts used throughout the paper. Tensors are denoted by underlined bold capital letters, e.g. **X**, matrices by bold capital letters, e.g. **X**, and vectors by lower case letters, e.g. **x**. ### Canonical Polyadic Decomposition with Error Preserving Correction (CPD-EPC) CPD represents an \(N^{\rm th}\)-order tensor as a sum of rank-one tensors. The rank-\(R\) CP decomposition of a \(3^{\rm rd}\)-order kernel tensor has the form: \[\mathbf{\underline{K}}(t,s,\{j,l\})\simeq\sum_{r=1}^{R}\mathbf{\underline{T}}^{tw}(\{j,l\},r)\,\mathbf{\underline{T}}(s,r)\,\mathbf{\underline{T}}^{\prime}(t,r), \tag{1}\] where \(\mathbf{\underline{T}}^{tw}(\{j,l\},r)\), \(\mathbf{\underline{T}}(s,r)\), and \(\mathbf{\underline{T}}^{\prime}(t,r)\) are of sizes \((D^{2}\times R)\), \(S\times R\), and \(T\times R\), respectively. Therefore, a \(1\times 1\) convolution projects the input from \(C_{in}\) input channels into \(R\) channels. Then, a group convolution layer applies \(R\) separate convolutions with kernel size \(k=3\times 3\) or \(7\times 7\), one for each channel. Finally, one more \(1\times 1\) convolution expands the \(R\) channels into \(C_{out}\) channels to get the output. In the second step we check the norm \(\|\eta_{k}\|_{2}^{2}\) of the rank-\(1\) tensor components obtained in step 1, and if it exceeds a bound, i.e., \(\|\eta_{k}\|_{2}^{2}\geq c^{2}\), the correction method is applied to \(\mathbf{X}_{k}\) to find a new tensor \(\mathbf{X}_{k+1}\) with a minimum norm \(\|\eta_{k+1}\|_{2}^{2}\) s.t. \(\|\mathbf{Y}-\mathbf{X}_{k+1}\|_{F}\leq\|\mathbf{Y}-\mathbf{X}_{k}\|_{F}\). Otherwise, the standard CP decomposition of \(\mathbf{Y}\) from step 1 is applied to find \(\mathbf{X}_{k+1}\) from \(\mathbf{X}_{k}\). Following this, the estimated tensor is a feasible point with \(\delta=\|\mathbf{Y}-\mathbf{X}_{k+1}\|_{F}\), which is the current approximation error [16]. ### Compression using SVD SVD is the matrix decomposition that represents a matrix in the following way: \(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^{\top}\), where \(\mathbf{S}\) is a diagonal matrix while \(\mathbf{U}\) and \(\mathbf{V}\) are unitary matrices [19]. In the case of a \(1\times 1\) convolution, the weight is a matrix of size \(C_{in}\times C_{out}\). Therefore, it can be replaced with two \(1\times 1\) convolutions, where the \(1^{st}\) convolution projects the \(C_{in}\) input channels into \(R\) channels and the \(2^{nd}\) one expands the \(R\) channels into \(C_{out}\) output channels, with weights \(\mathbf{V}^{\top}\) and \(\mathbf{U}\), respectively. ### Compression Approach Convolution operations contribute to the bulk of the computations in ALM [17], and the model is embedded with Attention Modules (AM) at different levels of the Inception-V2 backbone (incep-3b,4d,5d). 
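A minimal numerical sketch of the SVD-based factorization of a 1x1 convolution is given below; it uses the \((C_{out}\times C_{in})\) weight-matrix convention (the transpose of the convention in the description above), and absorbing the singular values into the first factor is an arbitrary choice.

```python
import numpy as np

def svd_factorize_1x1(W, R):
    """Truncated-SVD factorization of a 1x1 convolution weight W (C_out x C_in) into two
    1x1 convolution weights: W1 (R x C_in, projection) and W2 (C_out x R, expansion)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W1 = np.diag(s[:R]) @ Vt[:R]        # first 1x1 conv: C_in -> R channels
    W2 = U[:, :R]                       # second 1x1 conv: R -> C_out channels
    return W1, W2

# sanity check on a random weight: W ~ W2 @ W1, exact when R equals the full rank
W = np.random.randn(256, 16)
W1, W2 = svd_factorize_1x1(W, R=16)
print(np.allclose(W, W2 @ W1))          # True here, since R = 16 is the full rank
```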
The number of embedded AMs depends on the number of attributes in the training dataset; e.g., it is \(35\times 3=105\) for PETA, as there are 35 attributes and 3 different levels of the Inception-V2 backbone. Each AM consists of two Conv2D layers with shapes: (768, 48) and (48, 768) for incep_3b, (512, 32) and (32, 512) for incep_4d, and (256, 16) and (16, 256) for incep_5b; thus, there are a total of \(105\times 2=210\) convolutional layers with kernel size \(k=1\times 1\) over all AMs. Additionally, 31 layers have kernels of size \(k=3\times 3\) and a single layer has a kernel of size \(k=7\times 7\). There are 17.1 million parameters in ALM. We apply a combination of the two aforementioned approaches to the ALM. Firstly, each standard convolutional layer is compressed to obtain a sequence of LW layers. Then, we replace the former layer in ALM with the latter LW layers and fine-tune the model on PAR datasets. The process can be described in the following main steps. Each convolutional kernel is decomposed using a tensor decomposition algorithm (CPD in the case of convolutions with kernel size \(k>1\) and SVD in the case of convolutions with kernel size \(k=1\)) with a given rank R. Kernel weights obtained from the CP decomposition in the \(1^{st}\) step can be corrected using the error-preserving method if they have diverging components. The final result is a set of CP factors with minimum sensitivity. The initial convolutional kernel is replaced with the sequence of obtained kernels in CPD or SVD format, which results in a smaller total number of parameters and lower complexity. Lastly, the entire network is fine-tuned using backpropagation. As was mentioned in Section 4, applying CPD turns a convolutional layer with shape \((C_{in}\times C_{out}\times D\times D)\) into a sequence of three convolutional layers with shapes \((C_{in}\times R\times 1\times 1)\), depth-wise \((R\times R\times D\times D)\) and \((R\times C_{out}\times 1\times 1)\). In the case of SVD, a convolutional layer with shape \((C_{in}\times C_{out}\times 1\times 1)\) is replaced with a sequence of two convolutional layers with shapes \((C_{in}\times R\times 1\times 1)\) and \((R\times C_{out}\times 1\times 1)\). \(1\times 1\) convolutions allow the transfer of the input data to a more compact channel space. ### Rank Search Procedure The selection of an appropriate rank for compression is crucial for model performance. An iterative heuristic binary search algorithm [1] is used to find the smallest acceptable rank for each layer. This procedure is applied for both SVD and CPD rank searches. The first step is to find the maximum rank for the decomposition of the weight tensor at each layer; a binary search is then used to iteratively factorize each layer and observe how the drop in accuracy at a given rank and layer affects the accuracy after fine-tuning the entire network. Fine-tuning after each decomposition ensures that the drop in accuracy does not exceed a predefined threshold sensitivity (EPC). ### Layerwise Speedup Analysis Table 1 compares the speedup between ALM and LWALM at different layers with different kernel sizes, \(k=(1\times 1,\ 3\times 3,\ 7\times 7)\), after compression. Speedup is computed as the ratio between the GFLOPs of the original ALM layer and the sum of GFLOPs over the corresponding decomposed LWALM layers, \[\text{Speedup}=\frac{\text{GFLOPs}(\text{Layer}_{\text{ALM}})}{\sum_{i}\text{GFLOPs}(\text{Layer}^{(i)}_{\text{LWALM}})}.\] It can be observed that the layers compressed by CPD-EPC with larger kernel sizes show significant speedup. 
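The two replacement patterns described above can be sketched in PyTorch as follows; the factor weights produced by CPD-EPC or truncated SVD would be copied into these layers, and the channel sizes and rank in the usage example are illustrative, not the ranks selected by the search procedure.

```python
import torch.nn as nn

def cpd_replacement(c_in, c_out, k, R):
    """Three-layer sequence replacing a (c_out, c_in, k, k) convolution after rank-R CPD:
    1x1 projection, R-group depthwise k x k convolution, 1x1 expansion."""
    return nn.Sequential(
        nn.Conv2d(c_in, R, kernel_size=1, bias=False),                  # C_in -> R
        nn.Conv2d(R, R, kernel_size=k, padding=k // 2,
                  groups=R, bias=False),                                # depthwise k x k
        nn.Conv2d(R, c_out, kernel_size=1, bias=False),                 # R -> C_out
    )

def svd_replacement(c_in, c_out, R):
    """Two-layer sequence replacing a 1x1 convolution after rank-R truncated SVD."""
    return nn.Sequential(
        nn.Conv2d(c_in, R, kernel_size=1, bias=False),
        nn.Conv2d(R, c_out, kernel_size=1, bias=False),
    )

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# parameter count of an original 3x3 layer vs. its CPD replacement (rank chosen arbitrarily)
orig = nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False)
lw = cpd_replacement(256, 256, k=3, R=64)
print(n_params(orig), n_params(lw))     # 589824 vs. 256*64 + 64*9 + 64*256 = 33344
```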
At layer (3a_3x3), we see a speedup of up to 15\(\times\). ## 5 Experiments In this section, we discuss the results obtained after compression of the ALM in terms of speedup and computational complexity. Moreover, we show the results obtained from applying LWALMs to the PAR task by evaluating the LWALMs on two of the most popular datasets, PA-100K and PETA, against other compression algorithms, such as Tucker-2 and CPD. ### Losses The norms of the rank-1 tensors are minimized during the compression stage. However, while training, their norms must be monitored since they can get large, hindering convergence. Therefore, we introduce an additional constraint that penalizes the loss function if the norms of the rank-1 tensors get large during training. Formally, the following objective function is minimized: \[\text{minimize}\ \mathcal{L}_{LWALM}+\lambda\sum_{L=1}^{N}\sum_{l=1}^{n}\left\|D_{l}^{(L)}\right\|_{F}^{2} \tag{2}\] where \(D_{l}^{(L)}\) is the \(l^{th}\) factorized layer for the corresponding ALM layer \(L\), and \(n\) is the number of factorized layers, i.e., \(n=3\) for CPD and \(n=2\) for SVD. \(\mathcal{L}_{LWALM}\) stands for the weighted binary cross-entropy loss, \(L\) indexes the \(N\) ALM layers on which compression was performed, and \(\lambda\) is the shrinkage factor. ### Training LWALMs were trained on a Tesla-T4 GPU with 26 GB memory with batch sizes of 32 and 64. The initial learning rate was set to 0.001 with an adjustment of 0.1\(\times\) after every ten epochs. The Adam optimizer with a weight decay of 0.0005 and the proposed loss function with \(\lambda\) = 0.001 were used during training. ### Datasets Two widely known PAR datasets, PETA [2] and PA-100K [13], were used for evaluation. To make a fair comparison with the ALM, we used the same data partitions for both datasets as mentioned in their work [17]. PETA is evaluated using each attribute's mean recognition accuracy, which is given by the average of the recognition accuracies of positive and negative examples. The widely used evaluation method is a random division of the dataset into multiple parts: 9,500 images for training, 1,900 for validation and 7,600 for testing [12]. Similarly, for PA-100K the entire dataset was randomly split into 80,000 training images, 10,000 validation images and 10,000 test images. ### Performance Comparison We compare LWALMs with PAR models in 4 different categories: (1) holistic methods, including ACN and DeepMar, (2) relation-based methods, (3) attention-based methods, and (4) part-based methods. Table 2 (rows: 4-10 [17]) shows the performance comparison between different PAR models on the PETA dataset (rows: 4-12, columns: 1-8). LWALMs have (66.1538%, 53.589%) lower GFLOPs and (62.5731%, 59.0643%) fewer parameters compared to the ALM. Similarly, LWALMs have the fewest parameters compared to all other PAR models. However, the \((\delta=0.001)\) model falls behind only DeepMar in terms of GFLOPs, with relatively better Top-5 accuracy. Overall, LWALMs achieve higher Top-5 classification accuracy than the DeepMar, VeSPA, PGDM and BN-Inception models (Table 2). Compared to models compressed using Tucker-2 and traditional CPD, the LWALM compressed at (\(\delta\) = 0.001) performs better in almost all PAR metrics with comparatively higher speedup. At (\(\delta\) = 0.002), LWALM achieves the highest speedup (Table 2). For the PA-100K dataset, LWALMs are faster in terms of GFLOPs and have fewer parameters compared to other PAR models (Table 2), with an accuracy of 79.77%, better than the PAR models in all 4 categories but falling less than 1% short of ALM. 
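Returning to the training objective in Eq. (2), a minimal PyTorch-style sketch of the Frobenius-norm penalty over the factorized layers is given below; the helper name `weighted_bce` in the usage comment is a stand-in for the weighted binary cross-entropy term \(\mathcal{L}_{LWALM}\), and the list-of-lists container is our own structure assumption.

```python
import torch

def factorized_norm_penalty(factorized_layers):
    """Frobenius-norm penalty of Eq. (2): sum of ||D_l^(L)||_F^2 over the factorized
    sub-layers D_l^(L) of every compressed ALM layer L.  `factorized_layers` is a list
    (one entry per compressed layer L) of lists of convolution modules (2 for SVD-,
    3 for CPD-compressed layers)."""
    penalty = torch.tensor(0.0)
    for layer_group in factorized_layers:
        for sub in layer_group:
            penalty = penalty + (sub.weight ** 2).sum()
    return penalty

# total objective: weighted BCE + lambda * penalty, with lambda = 0.001 as in the text
# loss = weighted_bce(outputs, targets) + 1e-3 * factorized_norm_penalty(factorized_layers)
```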
Overall, LWALM has (92.43%, 90.01%) fewer parameters and (89.23%, 60%) speedup compared to ALM, since the model has a smaller ratio between the number of \(1\times 1\) convolutional layers compressed using SVD and the number of layers compressed using CPD-EPC.

Table 1: 3 Layers Speedup Analysis (GFLOPs of the reduced LWALM layers compared with the equivalent ALM layers, and the resulting speedup).
LWALMs are further evaluated in terms of confidence calibration. The Expected Calibration Error (ECE) is defined as \[\text{ECE}=\sum_{m=1}^{M}\frac{|B_{m}|}{n}\left|\text{Acc}(B_{m})-\text{Conf}(B_{m})\right|,\] where \(B_{m}\) is the set of samples whose prediction confidence falls into the \(m^{th}\) bin, \(n\) is the number of samples, and the difference between Acc(uracy) and Conf(idence) on the right-hand side gives the _calibration gap_ [6]. ### Temperature Scaling For temperature scaling (TS), a calibrated probability \(q_{i}\) is generated from the raw output (logits) \(\widehat{q}_{i}\): for a learned temperature \(T\), the scaled logits are calculated as \(\widehat{q}_{i}/T\). The optimal temperature, or scaling factor \(T\), for a trained LWALM is obtained by minimizing the weighted Binary Cross-Entropy (BCE) loss as follows: \[\ell(p_{i},\widehat{q}_{i})=\epsilon^{(p_{i}+(1-2\alpha)|W)}p_{i}\log(\widehat{q}_{i})+(1-p_{i})\log(1-\widehat{q}_{i}),\] where \(W\) is the initialized weight. We represent the calibrated model using reliability diagrams, the gap metric and the Expected Calibration Error (ECE) with 10 bins on both datasets (PETA and PA-100K) in Figure 1. It can be observed that both ALM and LWALM experience some level of miscalibration, with ECE roughly between 2% and 12%. However, ALM shows a slightly higher calibration error (2.76%, 9.20% & 4.79%, 9.29%) compared to its LW counterpart (2.51%, 8.28% & 3.79%, 8.38%) on both datasets. ## 8 Conclusion and Future Work In this paper, we proposed LWALMs for the PAR task, obtained by compressing the layers of the Attribute Localization Model (ALM) using the stable CPD-EPC algorithm at two stages (\(\delta\) = 0.001 and \(\delta\) = 0.002). LWALMs, trained with the proposed loss function, achieve high speedup with less than 2% accuracy drop in tests conducted on multiple pedestrian datasets using Pedestrian Attribute Recognition (PAR) metrics. Independent evaluations using reliability diagrams and metrics such as ECE show that LWALMs preserve the correspondence between confidence and true correctness despite the changes in layer architecture and weights after compression. However, accuracy can be further improved by exploring different optimization techniques and by scaling learned parameters during training, which will be a part of our future work. Moreover, we plan to explore the possibility of obtaining LWALMs by exploiting algorithms based on tensor networks, such as Tensor Chain or Tensor Train decompositions.
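A minimal sketch of the ECE computation with 10 equal-width bins and of temperature scaling is given below; applying a sigmoid to the scaled logits is our assumption for the multi-label PAR setting.

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Expected Calibration Error with equal-width confidence bins:
    sum_m (|B_m|/n) * |acc(B_m) - conf(B_m)|; the per-bin |acc - conf| is the
    calibration gap shown in a reliability diagram."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += mask.mean() * gap          # mask.mean() = |B_m| / n
    return total

def temperature_scale(logits, T):
    """Calibrated per-attribute probabilities from raw logits scaled by a learned T
    (sigmoid assumed for the multi-label setting)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(logits) / T))
```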
2305.10823
FastFit: Towards Real-Time Iterative Neural Vocoder by Replacing U-Net Encoder With Multiple STFTs
This paper presents FastFit, a novel neural vocoder architecture that replaces the U-Net encoder with multiple short-time Fourier transforms (STFTs) to achieve faster generation rates without sacrificing sample quality. We replaced each encoder block with an STFT, with parameters equal to the temporal resolution of each decoder block, leading to the skip connection. FastFit reduces the number of parameters and the generation time of the model by almost half while maintaining high fidelity. Through objective and subjective evaluations, we demonstrated that the proposed model achieves nearly twice the generation speed of baseline iteration-based vocoders while maintaining high sound quality. We further showed that FastFit produces sound qualities similar to those of other baselines in text-to-speech evaluation scenarios, including multi-speaker and zero-shot text-to-speech.
Won Jang, Dan Lim, Heayoung Park
2023-05-18T09:05:17Z
http://arxiv.org/abs/2305.10823v1
# FastFit: Towards Real-Time Iterative Neural Vocoder by Replacing U-Net Encoder With Multiple STFTs

###### Abstract

This paper presents FastFit, a novel neural vocoder architecture that replaces the U-Net encoder with multiple short-time Fourier transforms (STFTs) to achieve faster generation rates without sacrificing sample quality. We replaced each encoder block with an STFT, with parameters equal to the temporal resolution of each decoder block, leading to the skip connection. FastFit reduces the number of parameters and the generation time of the model by almost half while maintaining high fidelity. Through objective and subjective evaluations, we demonstrated that the proposed model achieves nearly twice the generation speed of baseline iteration-based vocoders while maintaining high sound quality. We further showed that FastFit produces sound qualities similar to those of other baselines in text-to-speech evaluation scenarios, including multi-speaker and zero-shot text-to-speech.

Won Jang, Dan Lim, Heayoung Park Kakao Enterprise Corporation, Republic of Korea {taylor.martin, satoshi.2020, abigail.p}@kakaoenterprise.com

**Index Terms**: Neural vocoder, text-to-speech, U-Net, short-time Fourier transform

## 1 Introduction

Neural vocoders generate speech that conforms to the given input conditions by modeling short- and long-term dependencies. Owing to these features, these architectures have been applied[1, 2], wholly or partially, to various applications[3, 4, 5] that output speech and audio as well as text-to-speech applications[6, 7, 8, 9]. Moreover, the use of generative adversarial networks (GANs)[10] for neural waveform generation has further improved neural vocoders[2, 11, 12, 13, 14, 15]. However, according to recent text-to-speech studies, some vocoders require additional training (i.e., fine-tuning) using pairs of ground-truth waveforms and model-predicted features to adapt to low-quality audio features generated by an acoustic model[4, 12, 13, 14].

Recent research has shown that image generation models utilizing denoising diffusion probabilistic models (DDPMs)[16] outperform traditional GAN-based models[17]. Several studies have successfully applied DDPM to neural vocoders, with some reporting superior performance over conventional models[18, 19, 20, 21]. However, the trade-off between generation speed and quality owing to the need for repeated denoising is considered a barrier to the commercialization of these models. Subsequent studies have attempted to overcome this by maintaining robust performance with fewer iterations[20, 22, 23].

The symmetric architecture of U-Net[24] has made it an attractive choice for iteration-style models. To use an existing GAN-based vocoder as a decoder, some studies have added an encoder connected with skip connections[18, 20]. However, this doubles the size of the model and roughly halves the generation speed. To improve efficiency, we propose FastFit, a new architecture that replaces the encoders in U-Net with multiple short-time Fourier transforms (STFTs), trading a small loss in fidelity for a large gain in generation speed. Our work is inspired by the work of Kaneko _et al._[14], who replaced some of the blocks with an inverse STFT. We extended the GAN-based vocoder proposed by Jang _et al._[13] to U-Net and replaced each encoder block with an STFT with parameters corresponding to the shape of its skip connection.
This modification of the model preserves the advantages of skip connections in U-Net, while allowing more efficient encoding of intermediate features from raw waveforms, because the computational cost of an STFT is lower than that of a neural encoder. We applied the iteration-style principle proposed by Koizumi _et al._[23] to the proposed architecture. To compare the performance of FastFit, we used iteration-based vocoders with reported fast generation speeds as baselines and conducted objective and subjective evaluations. The results showed that FastFit achieved twice the generation speed with statistically similar speech quality despite having half the parameters of the baselines. Further, we conducted experiments by applying each vocoder to multi-speaker and zero-shot text-to-speech without fine-tuning, and FastFit was found to be one of the best-performing models1.

Footnote 1: Audio demo samples can be found at the following URL: [https://kallavinka8045.github.io/is2023/](https://kallavinka8045.github.io/is2023/)

## 2 Related work

The proposed model is influenced by several improvements from previous studies on GAN-based vocoders. Parallel WaveGAN[11] applied multi-resolution STFT (MR-STFT) loss as an auxiliary loss to a vocoder to facilitate stable adversarial training. HiFi-GAN[12] and UnivNet[13] include a multi-period discriminator (MPD) and a multi-resolution spectrogram discriminator (MRSD), respectively, which are discriminators that can observe real or generated waveforms with various patterns and resolutions. iSTFTNet[14] replaced the back part of the residual blocks of HiFi-GAN with an inverse short-time Fourier transform (iSTFT), trading off a small reduction in quality for higher synthesis speeds. Recently, BigVGAN[15] succeeded in the adversarial training of large-scale generators with more than 100M parameters, achieving overall state-of-the-art fidelity, including out-of-distribution robustness.

Our work is based on the iteration-style vocoding principle proposed by WaveFit[23]. According to the fixed-point iteration theorem, if a mapping \(\mathcal{T}\) has a fixed point \(\mathbf{x}=\mathcal{T}(\mathbf{x})\) and is firmly quasi-nonexpansive (as described in Section 17.2.2 of Yamada _et al._[26]), then the mapped point \(\mathcal{T}(\mathbf{y})\) of an arbitrary point \(\mathbf{y}\) always has a smaller Euclidean distance to the fixed point \(\mathbf{x}\) than \(\mathbf{y}\) does. \(\mathcal{T}\) can be extended to the form of iterative denoising \(\mathbf{y}_{t-1}=\mathcal{T}(\mathbf{y}_{t})\). If an arbitrary initial point \(\mathbf{y}_{T}\) is iteratively refined at each \(t\) from \(T\) to \(1\), then \(\mathbf{y}_{t-1}\) always moves closer than \(\mathbf{y}_{t}\) to the clean signal \(\mathbf{x}\), which is the fixed point of \(\mathcal{T}\). WaveFit proposed a denoising mapping and loss function for a vocoder that satisfies this property.

## 3 Description of the proposed model

The proposed model, FastFit, begins with an initial point \(\mathbf{y}_{T}\). At each iteration step \(t=T,T-1,\ldots,1\), a denoising mapping is applied to \(\mathbf{y}_{t}\) to obtain the denoised signal \(\mathbf{y}_{t-1}\). A model \(\mathcal{F}\) parameterized by \(\theta\) is trained to predict the noise components of \(\mathbf{y}_{t}\). \(\mathcal{F}_{\theta}\) is conditioned on the log-mel-spectrogram \(\mathbf{c}\), latent noise \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})\), and current step \(t\) as \(\mathcal{F}_{\theta}(\mathbf{y}_{t},\mathbf{c},\mathbf{z},t)\).
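As a quick illustration of this iterative refinement, the sketch below shows the outer loop only. It is our own sketch, not the authors' code; the callable `mapping` is a placeholder for one application of the denoising mapping \(\mathcal{T}\), whose concrete form (Eqs. (1)-(2)) is given in the "Denoising mapping and training losses" subsection below.

```python
def iterative_refinement(mapping, y_T, c, T=3):
    """Fixed-point-style refinement: y_{t-1} = mapping(y_t, c, t) for t = T, ..., 1.

    mapping : one application of the denoising mapping (passed in as a callable
              because its details are specified later in this section)
    y_T     : initial point (e.g., SpecGrad-style shaped noise, Eq. (5) below)
    c       : log-mel-spectrogram conditioning
    """
    y = y_T
    for t in range(T, 0, -1):
        y = mapping(y, c, t)   # each step moves y closer to the clean signal x
    return y
```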
The objective of the vocoder is to make \(\mathbf{y}_{t}\) at each iteration, including the final output \(\mathbf{y}_{0}\), close to the target waveform \(\mathbf{x}\).

### Improving the architecture of the residual block

Our U-Net model has \(N\) encoder and decoder blocks with mapping and step embedding networks for the intermediate latent \(\mathbf{w}\) and step embedding \(\mathbf{t}_{emb}\), respectively, as shown in Figure 1(b). Each \(n\)-th decoder block computes \(\mathbf{h}_{t}[N+n]\) with \(\mathbf{h}_{t}[N+n-1]\), \(\mathbf{c}\), \(\mathbf{w}\), \(\mathbf{t}_{emb}\), and \(\mathbf{h}_{t}[N-n]\) as inputs, as shown in Figure 1(a). \(\mathbf{t}_{emb}\) is broadcast and added to the features after \(\mathbf{c}\) is injected and after each dilated convolution. Each decoder block is based on the UnivNet[13] generator with three main changes. First, we added an adaptive layer normalization (AdaLN) after each residual connection to inject noise \(\mathbf{z}\) into the vocoder, improving performance and training stability[27]. Second, we applied the snake activation function[15] to the model. This trainable activation function controls the output of each layer in the form of a periodic frequency and contributes to out-of-distribution robustness[15]. Finally, the gated activation units were removed to improve the generation speed. Although this layer contributes to a slight improvement in quality according to UnivNet, it doubles the number of channels in the previous layer. The encoder block is like the decoder block, with a few differences: there is no skip connection, and the upsampling layer is replaced with downsampling using a strided convolution layer.

### Replacing U-Net encoder with multiple STFTs

To approach a real-time iterative vocoder application, we propose an intuitive methodology: replacing the U-Net encoder with multiple STFTs. As shown in Figure 1(c), the frame shift of each STFT is set equal to the temporal resolution of the decoder block to which its skip connection leads. Inspired by iSTFTNet[14], the number of Fourier transform points and the Hann window length were set to four times the respective frame shift. To match the channel size, a convolution layer was placed between each STFT and the corresponding decoder block. Because an STFT has no learnable parameters and is much faster to compute than a neural encoder block, the proposed model reduces the number of parameters by almost half, approximately doubling the generation speed. We expect limited degradation of speech quality with this methodology because the skip connections, which are the basis of the high performance of the U-Net architecture, are still used. We conducted an ablation study to determine the optimal STFT representation. Consequently, we chose the Cartesian form (concatenation of real and imaginary channels) as the STFT representation.

### Denoising mapping and training losses

As mentioned in the previous section, our model is based on the denoising mapping and loss function proposed in WaveFit. The denoised signal \(\mathbf{y}_{t-1}\) is computed as follows:

\[\tilde{\mathbf{y}}_{t}=\mathbf{y}_{t}-\mathcal{F}_{\theta}(\mathbf{y}_{t},\mathbf{c},\mathbf{z},t) \tag{1}\]

\[\mathbf{y}_{t-1}=\bigl(P_{\mathbf{c}}/(P_{\tilde{\mathbf{y}}_{t}}+s)\bigr)\,\tilde{\mathbf{y}}_{t} \tag{2}\]

where \(s=10^{-8}\) is a constant used to avoid numerical errors.
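A minimal PyTorch sketch of this refinement step is given below. It is our own illustration, not the authors' implementation: `model` stands for \(\mathcal{F}_{\theta}\), \(P_{\mathbf{c}}\) is assumed to be precomputed from \(\mathbf{c}\) (see the next paragraph), and computing \(P_{\tilde{\mathbf{y}}_{t}}\) as the mean of a Hann-windowed 1024-point power spectrogram is an assumption.

```python
import torch

def mean_power(x, n_fft=1024, hop=256):
    """Element-wise mean of the power spectrogram of x (shape: batch x samples)."""
    win = torch.hann_window(n_fft, device=x.device)
    spec = torch.stft(x, n_fft, hop_length=hop, window=win, return_complex=True)
    return spec.abs().pow(2).mean(dim=(-2, -1))

def denoising_step(model, y_t, c, z, t, P_c, s=1e-8):
    """One application of the denoising mapping (Eqs. (1)-(2)).

    P_c is the mean power derived from the conditioning c (e.g., via the
    pseudoinverse of the mel-compression matrix, as described in the text).
    """
    y_tilde = y_t - model(y_t, c, z, t)      # Eq. (1): remove the predicted noise
    P_y = mean_power(y_tilde)                # mean power of the intermediate signal
    gain = P_c / (P_y + s)                   # Eq. (2): power-matching gain
    return gain.unsqueeze(-1) * y_tilde
```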
Figure 1: FastFit architecture. (a) The \(n\)-th decoder block. For example, the \(1\)-st decoder block computes \(\mathbf{h}_{t}[N+1]\) from the output \(\mathbf{h}_{t}[N]\) of the last (\(N\)-th) encoder block and the skip connection \(\mathbf{h}_{t}[N-1]\), the output of the \((N-1)\)-th encoder block. (b) FastFit based on U-Net. \(\mathbf{c}\), \(\mathbf{w}\), and \(\mathbf{t}_{emb}\) are used as inputs to each block, but we omitted them in this figure for brevity. \(\mathrm{PE}\) denotes a positional encoding operation[25]. (c) FastFit with the multiple-STFT encoder, based on the proposed U-Net version. The channel size of each STFT output is converted to fit that of the corresponding decoder block through a convolution layer.

The denoising mapping is defined by subtracting the noise component predicted by \(\mathcal{F}_{\theta}\) from \(\mathbf{y}_{t}\) to obtain \(\tilde{\mathbf{y}}_{t}\) and adjusting the power of \(\tilde{\mathbf{y}}_{t}\) to \(P_{\mathbf{c}}\). \(P_{\tilde{\mathbf{y}}_{t}}\) and \(P_{\mathbf{c}}\) can be obtained by computing the power spectrograms of \(\tilde{\mathbf{y}}_{t}\) and \(\mathbf{c}\), respectively, and then taking the element-wise mean. Specifically, the power spectrogram of \(\mathbf{c}\) can be obtained by multiplying \(\mathbf{c}\) with the pseudoinverse of the mel-compression matrix and then squaring it. By scaling the signal to the constant power of \(\mathbf{c}\) at each step, the power of \(\mathbf{y}_{t-1}\) is kept constant while denoising is repeated for all \(t\) and the final output \(\mathbf{y}_{0}\) is obtained.

FastFit is adversarially trained with the least squares GAN (LSGAN)[28] as the GAN loss and discriminators \(D\), which are a combination of the MPD (as described in Appendix B.2 of Kim _et al._[9]) and the MRSD[13]. The overall losses \(\mathcal{L}_{\mathrm{disc}}\) and \(\mathcal{L}_{\mathrm{gen}}\) are defined as follows:

\[\mathcal{L}_{\mathrm{disc}}=\frac{1}{TK}\sum_{t=0}^{T-1}\sum_{k=0}^{K-1}\Big[\mathbb{E}_{\mathbf{x}}[(D_{k}(\mathbf{x})-1)^{2}]+\mathbb{E}_{\mathbf{y}_{t}}[D_{k}(\mathbf{y}_{t})^{2}]\Big] \tag{3}\]

\[\mathcal{L}_{\mathrm{gen}}=\frac{1}{T}\sum_{t=0}^{T-1}\Big[\lambda_{\mathrm{aux}}\mathcal{L}_{\mathrm{aux}}(\mathbf{y}_{t},\mathbf{x})+\frac{1}{K}\sum_{k=0}^{K-1}\big[\mathbb{E}_{\mathbf{y}_{t}}[(D_{k}(\mathbf{y}_{t})-1)^{2}]+\lambda_{\mathrm{fm}}\mathcal{L}_{\mathrm{fm}}(D_{k};\mathbf{y}_{t},\mathbf{x})\big]\Big] \tag{4}\]

where \(K\) denotes the number of sub-discriminators. We used the MR-STFT loss[11] as the auxiliary loss \(\mathcal{L}_{\mathrm{aux}}\) and set \(\lambda_{\mathrm{aux}}\) to 2.5. Additionally, we applied the scaled feature matching loss \(\mathcal{L}_{\mathrm{fm}}\) proposed by Yang _et al._[29]; \(\lambda_{\mathrm{fm}}=\lambda_{\mathrm{aux}}\mathcal{L}_{\mathrm{aux}}/\mathcal{L}_{\mathrm{fm}}\).

Following WaveFit, the initial point \(\mathbf{y}_{T}\) is sampled using the noise generation algorithm of SpecGrad[21], which is defined as follows:

\[\mathbf{y}_{T}=\mathbf{G}^{+}\mathbf{M}\mathbf{G}\boldsymbol{\epsilon} \tag{5}\]

where \(\boldsymbol{\epsilon}\sim\mathcal{N}(0,\mathbf{I})\), \(\mathbf{G}\) and \(\mathbf{G}^{+}\) denote the STFT and iSTFT, respectively, and \(\mathbf{M}\) denotes a filter computed from \(\mathbf{c}\) for prior adaptation. SpecGrad estimated a cepstrum-based spectral envelope from the spectrogram obtained by multiplying \(\mathbf{c}\) with the pseudoinverse of the mel-compression matrix (as in the previous paragraph) and used it as \(\mathbf{M}\). However, SpecGrad reported that using the spectrogram from \(\mathbf{c}\) directly did not provide satisfactory results. In contrast, we experimentally verified that, unlike the case of the DDPM algorithm on which SpecGrad is based, using the spectrogram as \(\mathbf{M}\) yields higher sound quality under our denoising mapping and architecture. The experimental design and results are described in detail in the following sections.
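The sketch below illustrates Eq. (5) in PyTorch: white Gaussian noise is shaped in the STFT domain by a filter derived from \(\mathbf{c}\) and transformed back to the time domain. It is a rough illustration of the idea rather than the authors' algorithm; using the (clamped) magnitude spectrogram recovered from the log-mel features as the filter \(\mathbf{M}\), the natural-log mel compression, and the STFT parameters are all assumptions.

```python
import torch

def specgrad_initial_point(log_mel, mel_basis, n_fft=1024, hop=256, floor=1e-5):
    """Sample y_T = G+ M G eps (Eq. 5), with M taken from the spectrogram of c.

    log_mel   : log-mel-spectrogram c, shape (n_mels, frames)
    mel_basis : mel filterbank used to compute c, shape (n_mels, n_fft // 2 + 1)
    """
    # Recover an approximate magnitude spectrogram from c via the pseudoinverse
    # of the mel-compression matrix; natural-log compression is assumed here.
    mag = torch.linalg.pinv(mel_basis) @ torch.exp(log_mel)     # (freq, frames)
    mag = mag.clamp(min=floor)

    # G: STFT of white Gaussian noise epsilon ~ N(0, I).
    n_samples = hop * log_mel.shape[-1]
    eps = torch.randn(n_samples)
    win = torch.hann_window(n_fft)
    noise_spec = torch.stft(eps, n_fft, hop_length=hop, window=win,
                            return_complex=True)

    # M: shape the noise with the recovered spectrogram (frames cropped to match).
    frames = min(noise_spec.shape[-1], mag.shape[-1])
    shaped = noise_spec[..., :frames] * mag[..., :frames]

    # G+: inverse STFT back to the time domain.
    return torch.istft(shaped, n_fft, hop_length=hop, window=win)
```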
## 4 Experiments

### Data configurations and evaluation metrics

We adopted LibriTTS[30], a multi-speaker English dataset with 24 kHz sampling rate waveforms, for the training and evaluation of the vocoder models. We used the "train-clean-360" subset to train the models, with 5% and 2% of the subset for validation and testing, respectively, with all speakers included in each of the three splits. For ground-truth mel-spectrogram evaluation (GT mel evaluation), including the ablation study, the "test-clean" subset was prepared. The STFT parameters used to extract the 100-band, 0-12 kHz log-mel-spectrograms are a 1024-point Fourier transform, a 256-sample frame shift, and a 1024-sample Hann window.

Two objective evaluation metrics, PESQ and MR-STFT, were used to evaluate the performance of each model. An open-source library2 was used to calculate the wideband PESQ, and the parameters required to calculate the MR-STFT metric were set to the same values as those in Yamamoto _et al._[11].

Footnote 2: [https://github.com/](https://github.com/)BaCai/python-pesq

To clarify the comparison between the proposed model and the baselines, a 5-point mean opinion score (MOS) was used for the TTS evaluations and a 7-point comparative MOS (CMOS) evaluation was used for the GT mel evaluation. To collect 400 ratings for each evaluation item, we randomly sampled 20 speech samples and collected 20 ratings for each sample from 20 listeners located in the United States using Amazon Mechanical Turk. The loudness of all speech samples was normalized to -23 LUFS. Other details of the subjective evaluations were based on Loizou's work[31].
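As an illustration of this feature-extraction setup, the sketch below computes a 100-band, 0-12 kHz log-mel-spectrogram with the stated STFT parameters using librosa. It is our own sketch, not the authors' pipeline; the magnitude (rather than power) mel, the natural-log compression, and the 1e-5 floor are assumptions.

```python
import numpy as np
import librosa

def log_mel_spectrogram(path):
    """100-band, 0-12 kHz log-mel-spectrogram with the parameters from Sec. 4."""
    y, sr = librosa.load(path, sr=24000)       # LibriTTS waveforms at 24 kHz
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr,
        n_fft=1024,          # 1024-point Fourier transform
        hop_length=256,      # 256-sample frame shift
        win_length=1024,     # 1024-sample Hann window
        window="hann",
        n_mels=100,          # 100 mel bands
        fmin=0.0, fmax=12000.0,
        power=1.0,           # magnitude mel spectrogram (assumption)
    )
    return np.log(np.clip(mel, 1e-5, None))    # log compression with a small floor
```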
### Model settings

The proposed model uses block hyperparameters that follow UnivNet-c32[13], with the number of dilated convolutions reduced to three, each with a dilation of {1,3,9}, to improve speed. The channel size of each convolution in the MRSD was set to 16. The dimension of the latent noise \(\mathbf{z}\) and the number of iterations \(T\) were set to 100 and 3, respectively. The step embedding network used the structure proposed by Kong _et al._[19], and the mapping network used the same structure, but the channel size of each layer was set to 256. The minimum phase response based on the homomorphic filter method was used to calculate the filter \(\mathbf{M}\). The same optimizer and learning rate as UnivNet were used to train FastFit and all the models for the ablation study up to 1M steps, with a batch size of 64. All other architectural details followed the settings of the studies on which they were based.

The performance of the proposed model was compared with three baselines: UnivNet, FastDiff, and WaveFit. These models are based on three main methodologies of vocoder research: GAN, DDPM, and fixed-point iteration. For UnivNet, we used the "c32" version of our implementation. FastDiff was implemented using the official repository3, with \(T=4\) as suggested by Huang _et al._[20]. We implemented WaveFit with \(T=3\) following Koizumi _et al._[23], using an unofficial implementation4 of WaveGrad with 15.8M parameters as the base model. The upsampling ratios were set to {4,4,4,2,2} to fit our experimental setting. To improve the training stability, we replaced the GAN loss with an LSGAN, which resulted in more stable training and a lower auxiliary loss. All the models were trained up to 1M steps using four NVIDIA V100 GPUs, and no additional fine-tuning was applied.

Footnote 3: [https://github.com/Rongjiehuang/FastDiff](https://github.com/Rongjiehuang/FastDiff)

Footnote 4: [https://github.com/ivanovok/WaveGrad](https://github.com/ivanovok/WaveGrad)

For the multi-speaker TTS evaluation, we trained the JDI-T[7] acoustic model with the adapter-based multi-speaker methodology of Hsieh _et al._[32] using the LibriTTS train-clean-360 subset with 100 speakers. For the zero-shot TTS evaluation, we used an open-source zero-shot TTS program named TorToiSe5. Recordings from the LibriTTS "test-clean" subset with 10 speakers were input into the program with the "ultrafast" preset to synthesize mel-spectrograms for evaluation. Sentences for all TTS evaluations were extracted from the utterances of speakers who were not included in the training. All speaker selection and data processing details for the TTS evaluations followed Hsieh _et al._'s approach.

## 5 Results

### Ablation studies

We conducted ablation studies to evaluate the proposed improvements under the GT mel conditions. According to the results presented in Table 1, FastFit achieved objective and subjective metrics that were not significantly different from those of the U-Net variant with a neural encoder. We observed significant destabilization of training when the model was trained without AdaLN, resulting in worse metrics. To demonstrate that the proposed methodology maintains quality by retaining skip connections, we removed all skip connections and connected only one STFT to the first decoder block, which resulted in relatively poor metrics.

Moreover, we made several attempts to define an effective initial point \(\mathbf{y}_{T}\). Our experiments computing \(\mathbf{y}_{T}\) from the spectral envelope performed worse than the proposed method of computing \(\mathbf{y}_{T}\) directly from the spectrogram recovered from \(\mathbf{c}\). We also attempted to convert the spectrogram to a waveform with 32 Griffin-Lim iterations and use it as \(\mathbf{y}_{T}\). This did not produce significant differences in metrics but resulted in a slight reduction in generation speed, so we did not adopt it.

We were intrigued by the work of Webber _et al._[33] on different representations of the STFT output and tested several representations to find the optimal one. Using the magnitude spectrogram as the representation showed no significant metric difference from the Cartesian form used in the proposed model. However, internal tests showed a decrease in quality in some samples; therefore, we did not adopt it. For the polar form, training collapsed early on, so using the phase as a representation was not appropriate for the proposed model.

### Comparison with baselines

To measure the speed of the evaluated models, we generated 6-second segments 20 times using an NVIDIA V100 GPU and measured the average time. As shown in Table 2, FastFit achieved approximately twice the synthesis speed of the iteration-based vocoders despite having approximately half the number of parameters, and none of its metrics scored significantly worse in terms of speech quality.
Although UnivNet's synthesis speed was superior to that of the other baselines, it performed poorly in the CMOS evaluation owing to occasional blurring of the harmonic components of the GT mel-spectrogram. FastDiff performed best on PESQ but worst on MR-STFT, which measures numerical spectral distance. This is because the model mistook segments with noise-like spectral shapes, such as consonants, breaths, and high-frequency spectral bands, for noise and denoised them. WaveFit performed well overall but had the slowest generation speed.

### Application to text-to-speech synthesis

We characterized the mel-spectrograms generated by the models for the two TTS evaluation tasks and found that the mel-spectrograms for multi-speaker TTS had relatively blurry shapes because they were not generated by high-quality models, such as GAN or DDPM. In contrast, the mel-spectrograms for zero-shot TTS had relatively more realistic shapes because they used a DDPM-based acoustic model that produced high-quality output regardless of the synthesis rate. In both cases, the vocoders were not fine-tuned on the predicted mel-spectrograms.

For multi-speaker TTS, FastFit produced better MOS scores than UnivNet and FastDiff, with a slight difference from WaveFit. For zero-shot TTS, all models except FastDiff produced similar MOS. UnivNet was observed to produce noise artifacts in some segments of the multi-speaker TTS experiment, which may be responsible for its worse MOS. FastDiff recorded the worst MOS owing to the incorrect denoising of noise-like components and produced blurred harmonic components. The remaining models produced statistically similar MOS, with FastFit (U-Net) producing the best results, but the confidence intervals of the MOS overlapped.

## 6 Conclusion

By improving the architecture of an iteration-based neural vocoder, we could double the generation rate while maintaining high fidelity. As the U-Net architecture is widely used in speech processing applications, we expect our simple yet effective idea of replacing the encoder with STFTs to be applied in a variety of speech-based research and applications in the future.

## 7 Acknowledgements

The authors would like to thank James Betker for developing TorToiSe, an outstanding open-sourced TTS model. We also would like to thank Jaesam Yoon, Sunghee Jung, Gyeonghwan O and Bongwan Kim for providing insightful feedback.

\begin{table} \begin{tabular}{l|c c c} \hline \hline Model & PESQ\(\uparrow\) & MR-STFT\(\downarrow\) & CMOS\(\uparrow\) \\ \hline Recordings & - & - & 0.251 \\ \hline FastFit & 3.712 & **0.866** & - \\ FastFit (U-Net) & **3.754** & 0.868 & 0.062 \\ \hline Without AdaLN & 3.411 & 0.974 & -0.168 \\ Without skip-connections & 3.449 & 0.936 & -0.199 \\ \hline \(\mathbf{y}_{T}\sim\) Spectral envelope & 3.422 & 0.969 & -0.175 \\ \(\mathbf{y}_{T}\sim\) Griffin-Lim & 3.685 & 0.872 & **0.069** \\ \hline Magnitude STFTs encoder & 3.677 & 0.875 & -0.031 \\ Polar STFTs encoder & \multicolumn{3}{c}{Failed to train} \\ Polar+Cartesian STFTs encoder & \multicolumn{3}{c}{Failed to train} \\ \hline \hline \end{tabular} \end{table} Table 1: The ablation study results.
\begin{table} \begin{tabular}{l|c c|c c c|c c} \hline \hline & \multicolumn{2}{c|}{Model complexity} & \multicolumn{3}{c|}{GT mel evaluation} & Multi-speaker TTS & Zero-shot TTS \\ \hline Model & Params\(\downarrow\) & Speed\(\uparrow\) & PESQ\(\uparrow\) & MR-STFT\(\downarrow\) & CMOS\(\uparrow\) & MOS\(\uparrow\) & MOS\(\uparrow\) \\ \hline UnivNet & 14.86M & \(\times\)**314.49** & 3.705 & **0.853** & -0.295 & 3.46\(\pm\)0.06 & 3.84\(\pm\)0.09 \\ FastDiff & 15.36M & \(\times\)52.35 & **3.786** & 1.385 & **0.078** & 3.32\(\pm\)0.07 & 3.68\(\pm\)0.10 \\ WaveFit & 15.85M & \(\times\)43.10 & 3.639 & 0.921 & 0.072 & 3.70\(\pm\)0.08 & 3.83\(\pm\)0.09 \\ FastFit & **6.81M** & \(\times\)101.40 & 3.712 & 0.866 & - & 3.67\(\pm\)0.08 & 3.86\(\pm\)0.08 \\ FastFit (U-Net) & 12.94M & \(\times\)59.88 & 3.754 & 0.868 & 0.062 & **3.75\(\pm\)0.07** & **3.90\(\pm\)0.09** \\ \hline Recordings & - & - & - & - & 0.251 & - & - \\ \hline \hline \end{tabular} \end{table} Table 2: Results of comparison with baseline models. “Speed” indicates each model’s generation speed relative to real time.